http://www.dummies.com/how-to/content/find-the-zerostate-response-of-a-parallel-rl-circu.navId-403865.html
A first-order RL parallel circuit has one resistor (or network of resistors) and a single inductor. First-order circuits can be analyzed using first-order differential equations, and the analysis tells you about the circuit's timing and delays. To find the total response of an RL parallel circuit such as the one shown here, you find the zero-input response and the zero-state response and then add them together.

After working through the math, you find that the zero-input response of the sample circuit is a decaying exponential of the form i(t) = i(0)e^(−(R/L)t), where i(0) is the initial inductor current and L/R is the circuit's time constant.

Now you are ready to calculate the zero-state response for the circuit. Zero-state response means zero initial conditions: for the circuit shown earlier, that means analyzing it with zero inductor current at t < 0. You need to find the homogeneous and particular solutions to get the zero-state response. Here you have zero initial conditions and an input current of iN(t) = u(t), where u(t) is a unit step input.

When the step input u(t) = 0, the solution to the differential equation is the homogeneous solution ih(t). The inductor current ih(t) satisfies the homogeneous first-order differential equation (L/R)·dih/dt + ih = 0, whose general solution is ih(t) = c1·e^(−(R/L)t). This is the general solution for zero input. You find the constant c1 after finding the particular solution and applying the initial condition of no inductor current.

After time t = 0, a unit step input describes the transient inductor current; the inductor current for this step input is called the step response. You find the particular solution ip(t) by setting the step input u(t) equal to 1. For a unit step input iN(t) = u(t), substitute u(t) = 1 into the differential equation: (L/R)·dip/dt + ip = 1. Because u(t) = 1 (a constant) after time t = 0, assume a particular solution ip(t) that is a constant. Because the derivative of a constant is 0, substituting a constant ip(t) into the first-order differential equation gives ip(t) = 1 for the unit step. The particular solution eventually follows the form of the input because the zero-input (or free) response diminishes to 0 over time. You can generalize the result to an input step of strength IA, that is, iN(t) = IA·u(t), for which ip(t) = IA.

You then add the homogeneous solution ih(t) and the particular solution ip(t) to get the zero-state response: iZS(t) = c1·e^(−(R/L)t) + IA. At t = 0, the initial condition is 0 because this is a zero-state calculation. Applying iZS(0) = 0 and solving for c1 gives you c1 = −IA. Substituting c1 into the zero-state response, you wind up with iZS(t) = IA·(1 − e^(−(R/L)t)) for t ≥ 0.
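The step response derived above is easy to check numerically. Below is a minimal Python sketch (not part of the original article); the component values R and L and the step strength IA are arbitrary example numbers, not the values of the sample circuit:

```python
# Zero-state step response of a parallel RL circuit driven by a current source:
# i_ZS(t) = I_A * (1 - exp(-(R/L) * t)) for t >= 0.
import numpy as np

R = 30.0      # ohms (assumed example value)
L = 10e-3     # henries (assumed example value)
I_A = 1.0     # amperes, strength of the current step

tau = L / R                            # time constant of the parallel RL circuit
t = np.linspace(0.0, 5 * tau, 200)     # simulate a few time constants
i_zs = I_A * (1.0 - np.exp(-t / tau))  # zero-state inductor current

print(f"time constant L/R = {tau:.2e} s")
print(f"i_ZS at t = tau: {i_zs[np.searchsorted(t, tau)]:.3f} A (about 63% of I_A)")
```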
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9225404262542725, "perplexity": 821.3837749611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146196.88/warc/CC-MAIN-20160205193906-00336-ip-10-236-182-209.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-equations/127199-existance-specific-diff-eq.html
# Math Help - existence of a specific diff. eq. 1. ## existence of a specific diff. eq. Hi, I have the following question: is there an ordinary 2nd-order, linear differential equation whose solution is the family of circles $(x-c_1)^2+y^2 = c_2^2$? And if there is, find such an example. I am clueless... 2. Anyone? 3. Originally Posted by vonflex1 Hi, I have the following question: is there an ordinary 2nd-order, linear differential equation whose solution is the family of circles $(x-c_1)^2+y^2 = c_2^2$? And if there is, find such an example. I am clueless... Differentiate the given equation with respect to x: $2(x- c_1)+ 2y y'= 0$. Write that as $2y y'= 2c_1 - 2x$ and differentiate again: $2y y''+ 2 (y')^2= -2$.
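To sanity-check the corrected derivation, one can substitute the circle family back into the constant-free equation $y y'' + (y')^2 + 1 = 0$ obtained above; a minimal SymPy sketch (mine, not from the thread):

```python
# Verify that y(x) from (x - c1)^2 + y^2 = c2^2 satisfies y*y'' + (y')^2 + 1 = 0.
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2', real=True)
y = sp.sqrt(c2**2 - (x - c1)**2)      # upper semicircle of the family

residual = y * sp.diff(y, x, 2) + sp.diff(y, x)**2 + 1
print(sp.simplify(residual))          # should simplify to 0 for any c1, c2
```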
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649637341499329, "perplexity": 1133.8799822838757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769990.68/warc/CC-MAIN-20141217075249-00108-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/gravity-time-dilation.917460/
# Gravity Time Dilation

1. Jun 13, 2017 If we assume I live on Jupiter and one year passes on my clock, how many years pass on Earth? And how can I use the equation in the attached image? Attached: Capture.PNG

2. Jun 13, 2017 Staff: Mentor How are you going to compare the readings on the clocks? You can't do it directly since Jupiter is very far away.

3. Jun 13, 2017 Janus Staff Emeritus I should also point out that any such comparison would need to take into account that Earth and Jupiter are at different locations in the Sun's gravity well and moving at different orbital speeds.

4. Jun 13, 2017 I mean: if I spend a long time there and then go back to Earth, and I had a twin who stayed on Earth, what would the difference between my age and my brother's age be? @Janus @PeterDonis

5. Jun 13, 2017 Staff: Mentor That is a different question. By having you and your twin separate and rejoin, you take care of the problem that PeterDonis points out, but you still have all the complications that Janus mentions, as well as some additional ones, because it makes a difference which path you follow to Jupiter and back to Earth. An easier question to answer would be how the rate of a clock on the surface of Jupiter compares with the rate of a clock in orbit safely above the Jovian atmosphere.

6. Jun 13, 2017 Staff: Mentor Ah, ok, this is a well-defined problem. The short answer is, not much difference, because all of the gravitational fields involved (to a first approximation, the Earth's, the Sun's, and Jupiter's) are really weak, in the sense that their time dilation effects are really small. There are also effects due to your motion relative to your twin, both because you have to get in a spacecraft and travel to Jupiter and back, and because as @Janus said, Jupiter is moving relative to the Earth; those are also small, because all of the relative speeds involved are small compared to the speed of light (at least if you use current space travel technology), but that doesn't mean they're small relative to the gravitational effects. The equation you give is relevant, but incomplete, because it only takes into account gravitational effects, not the effects of relative motion, and only from one source, whereas, as I said, there are three. There are a number of ways to construct a formula that takes all that into account; I'll only describe one here, because it seems to me to be the easiest to grasp conceptually. Imagine a hypothetical observer who is far outside the solar system, looking in with a very powerful telescope and recording things. This observer can adopt coordinates in which, to a good approximation, the Sun is at rest, and we can treat all other objects (the Earth, Jupiter, and your spaceship) as moving by using their speeds relative to the Sun, and assuming that all of these speeds are small compared to the speed of light. This observer also carries a clock with him whose time can be treated as the coordinate time in these coordinates, i.e., as a common reference. The only other assumption we need is that gravity is very weak everywhere that you and your twin will be, so that the gravitational fields of the three massive objects in question (the Sun, Earth, and Jupiter) superpose linearly--i.e., their effects can just be added together. This also allows us to ignore terms of higher than first order in $GM / c^2 r$ and $v^2 / c^2$, and to expand out the square root that would normally be in the formulas using the binomial theorem (dropping quadratic and higher order terms).
With all these approximations as given, the general formula for the instantaneous time dilation of an observer who is at coordinates $\vec{r}$ and moving with speed $v$ is: $$\frac{d\tau}{dt} = 1 - \frac{G M_S}{c^2 \vert \vec{r} \vert} - \frac{G M_E}{c^2 \vert \vec{r} - \vec{r}_E \vert} - \frac{G M_J}{c^2 \vert \vec{r} - \vec{r}_J \vert} - \frac{1}{2} \frac{v^2}{c^2}$$ where $\tau$ is the proper time of the observer, $t$ is coordinate time, $M_S$, $M_E$, and $M_J$ are the masses of the Sun, Earth, and Jupiter, and $\vec{r}_E$ and $\vec{r}_J$ are the position vectors of the Earth and Jupiter (so the differences are just the distances from the observer to Earth and Jupiter). Note that the speed $v$ of the observer is relative to the Sun, so, for example, if the observer is your twin, at rest on Earth, his speed is Earth's speed in its orbit. Now, this is an instantaneous time dilation, which means that, in order to get the total elapsed time for the observer from one event to another (for example, from the event of you and your twin parting to the event of you coming back together again), you have to know, first, the coordinate times $t$ for the two events, and second, the observer's position $\vec{r}$ and speed $v$ as functions of $t$. Then you plug those functions into the above formula and integrate with respect to $t$. All that is complicated, and it would be nice if we could make some more approximations to simplify it. Fortunately, we can. First, by running numbers, you can see that the term in $M_E$ is negligible unless the observer is close to Earth, and the term in $M_J$ is negligible unless the observer is close to Jupiter. (Note carefully that the term in $M_S$ is not negligible in either case.) Second, at any speed achievable by current space technology, it will take years for you to get from Earth to Jupiter (and to return, when you return), so during the transit we can ignore both the $M_E$ and the $M_J$ terms, because you won't be close to Earth or Jupiter for practically all of the transit time. That gives us two much simpler integrals to do. First, for your twin, who stays on Earth the whole time, if we idealize the Earth's orbit as circular, and ignore the Earth's rotation (for extra credit, you should check for yourself that that is a good approximation here), and observe that the square of the Earth's orbital velocity about the Sun is just $G M_S / \vert \vec{r}_E \vert$, we simply have $$\tau_\text{twin} = \int_0^T \left[ 1 - \frac{G M_E}{c^2 R_E} - \frac{3 G M_S}{2 c^2 r_E} \right] dt$$ where $R_E$ is the radius of the Earth (since the twin is on the Earth's surface) and $r_E$ is the Earth's mean distance from the Sun (i.e., it's $\vert \vec{r}_E \vert$). The factor of $3/2$ in the last term in the integrand comes from the equation for the Earth's orbital velocity. All of the factors in the integrand are constant, so we just have $$\tau_\text{twin} = \left[ 1 - \frac{G M_E}{c^2 R_E} - \frac{3 G M_S}{2 c^2 r_E} \right] T$$ where $T$ is the total coordinate time elapsed from when you leave your twin to when you return. For you, we divide your trip into three legs: outbound to Jupiter, stay on Jupiter, return to Earth. For the second of these, the formula will look similar to what we just did for your twin, but with the figures for Jupiter instead of Earth. For outbound and returning, the only mass term will be for the Sun, and there will be a term for your speed, and the formula will be the same for both since the trips are mirror images of each other.
We can idealize your motion as being purely radial and at constant speed $v$, which means your distance from the Sun will be $r_E + vt$, where $t$ is the coordinate time from the trip start. The total coordinate time for the trip will be $T_\text{trip} = (r_J - r_E) / v$. So we have $\tau_\text{you} = 2 * \tau_\text{trip} + \tau_\text{Jupiter}$, where $$\tau_\text{Jupiter} = \left[ 1 - \frac{G M_J}{c^2 R_J} - \frac{3 G M_S}{2 c^2 r_J} \right] T_\text{Jupiter}$$ $$\tau_\text{trip} = \int_0^{T_\text{trip}} \left[ 1 - \frac{G M_S}{c^2 \left( r_E + v t \right)} - \frac{1}{2} \frac{v^2}{c^2} \right] dt$$ Only one term actually is a function of $t$, and looking in a table of integrals and substituting for $T_\text{trip}$ gives $$\tau_\text{trip} = \frac{1}{v} \left[ \left( 1 - \frac{1}{2} \frac{v^2}{c^2} \right) \left( r_J - r_E \right) - \frac{G M_S}{c^2} \ln \frac{r_J}{r_E} \right]$$ Putting all of the above together, and using $T = 2 T_\text{trip} + T_\text{Jupiter}$ to relate the coordinate times for you and your twin, we end up with $$\tau_\text{twin} = \left( 1 - \frac{G M_E}{c^2 R_E} - \frac{3 G M_S}{2 c^2 r_E} \right) \left( T_\text{Jupiter} + \frac{2 \left( r_J - r_E \right)}{v} \right)$$ $$\tau_\text{you} = \left( 1 - \frac{G M_J}{c^2 R_J} - \frac{3 G M_S}{2 c^2 r_J} \right) T_\text{Jupiter} + \frac{2}{v} \left[ \left( 1 - \frac{1}{2} \frac{v^2}{c^2} \right) \left( r_J - r_E \right) - \frac{G M_S}{c^2} \ln \frac{r_J}{r_E} \right]$$ I'll leave it to you to plug in numbers to these formulas and see what comes out for various possible values of $T_\text{Jupiter}$ and $v$. Last edited: Jun 13, 2017 7. Jun 13, 2017 PAllen A simplification could be to imagine very long lived twins. Traveling twin goes to Jupiter, stays put for a million years his time, then returns to earth. Then the impact of the rocket trips can be taken to be negligible.
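To get a feel for the magnitudes involved, here is a minimal numerical sketch (mine, not from the thread) that plugs rounded physical constants and assumed trip parameters (cruise speed v, stay time on Jupiter) into the two final formulas above; for numbers like these the difference comes out to a few seconds at most:

```python
# Evaluate tau_twin and tau_you from the formulas above and print the difference.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_S, M_E, M_J = 1.989e30, 5.972e24, 1.898e27   # Sun, Earth, Jupiter masses, kg
R_E, R_J = 6.371e6, 6.9911e7                   # Earth and Jupiter radii, m
r_E, r_J = 1.496e11, 7.785e11                  # mean orbital radii, m

v = 2.0e4                          # assumed cruise speed: 20 km/s
T_jupiter = 10 * 3.156e7           # assumed stay on Jupiter: 10 years, in seconds
T_trip = (r_J - r_E) / v           # one-way transit coordinate time
T = T_jupiter + 2 * T_trip         # total coordinate time

rate_earth = 1 - G*M_E/(c**2*R_E) - 1.5*G*M_S/(c**2*r_E)
rate_jupiter = 1 - G*M_J/(c**2*R_J) - 1.5*G*M_S/(c**2*r_J)

tau_twin = rate_earth * T
tau_trip = ((1 - 0.5*v**2/c**2) * (r_J - r_E) - G*M_S/c**2 * math.log(r_J/r_E)) / v
tau_you = rate_jupiter * T_jupiter + 2 * tau_trip

print(f"total coordinate time : {T/3.156e7:.2f} yr")
print(f"tau_twin - tau_you    : {tau_twin - tau_you:.3f} s")
```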
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657993078231812, "perplexity": 294.31436030446054}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591140.45/warc/CC-MAIN-20180719144851-20180719164851-00369.warc.gz"}
http://jakewestfall.org/blog/index.php/2016/03/25/five-different-cohens-d-statistics-for-within-subject-designs/?replytocom=79
# Five different “Cohen’s d” statistics for within-subject designs Jeff Rouder poses an “effect size puzzler” where the puzzle is simply to compute a standardized effect size for a simulated dataset where subjects make 50 responses in each of 2 conditions. He offers a sweatshirt prize (???) to anyone who can do so “correctly.” (Update: Jeff posted his answer to the effect size puzzler here.) As it happens, I’ve been meaning for a while to write a blog post on issues with computing d-like effect sizes (or other standardized effect sizes) for within-subject designs, so now seems like a good time to finally hammer out the post. Jeff didn’t actually say anything to restrict us to standardized mean difference type measures (as opposed to, say, variance explained type measures), and we can only guess whether the “correct” effect size he has in mind is a d-like measure or an $R^2$-like measure or what. But here I’ll focus on d-like measures, which are perhaps the most popular for studies with categorical predictors, and offer plenty of complications that I think are often under-appreciated (and it seems Jeff would agree). What I’ll show here is that there are at least 5 different and non-equivalent ways that people might compute a d-like effect size (which they would invariably simply call “Cohen’s d”) for Jeff’s dataset, and the resulting effect sizes range from about 0.25 to 1.91. I’ll compare and contrast these procedures and ultimately choose one that I think is the least crazy, if you must compute a standardized effect size (more on that later). When I read a paper that reports “Cohen’s d” for a within-subject design, I usually have no idea which of these 5 procedures the authors actually applied unless I try to reproduce the effect size myself from some of the descriptives in the paper, which is often not possible. Jeff’s dataset can be downloaded here.  Here’s some code to load it into R, examine it, print the two conditions means, and save the mean difference as md: ### d: Classical Cohen’s d This is essentially a straightforward application of the formula provided by Cohen himself: $d=\frac{M_1-M_2}{\sigma}$ The numerator is the mean difference. Cohen never actually gives a clear, unambiguous definition of $\sigma$ in the denominator, but it is usually taken to be the pooled standard deviation, that is, the square root of the average variance in each condition (assuming equal sample sizes in both conditions). In R code: (d <- md / with(dat, sqrt(mean(tapply(rt, cond, var))))) # 0.2497971 The crucial thing to recognize about applying the classical Cohen’s d is that it deliberately ignores information about the design of the study. That is, you compute d the same way whether you are dealing with a between-subjects, within-subjects, or mixed design. Basically, in computing d, you always treat the data as if it came from a simple two-independent-groups design. I don’t want to get bogged down by a discussion of why that is a good thing at this point in the post—I’ll get into that later on. For now I just note that, with this effect size, within-subject designs tend to be powerful not because they lead to larger effect sizes—if anything, the reverse is probably true, in that people elect to use within-subject designs when Cohen’s d is particularly small, for example in many reaction time studies—but rather because they allow us to efficiently detect smaller effect sizes due to removing irrelevant sources of variation from the denominator of the test statistic. 
### da: Cohen’s d after averaging over replicates For this next way of computing a d-like effect size, the section heading pretty much says it all. In Jeff’s dataset, each participant makes a total of 100 responses, 50 in each condition. The computation of $d_a$ proceeds by first averaging over the 50 responses in each subject-by-condition cell of the experiment, so that each subject’s data is reduced to 2 means, and then applying the classical d formula to this aggregated data. In R code: sub_means <- with(dat, tapply(rt, list(id, cond), mean)) (d_a <- md / sqrt(mean(diag(var(sub_means))))) # 0.8357347 Richard Morey blogged about some weird things that can happen when you compute a standardized effect size this way. The basic issue is that, unlike classical Cohen’s d, $d_a$ does not ignore all the design information: $d_a$ will tend to be larger when there are more replicates, that is, when each subject responds more frequently in each condition. ### dz: Standardized difference scores A third way to compute a d-like effect size is to reduce each subject’s data to a single difference score—the mean difference between their responses in each condition—and then use the standard deviation of these difference scores as the denominator of d. Cohen actually discusses this statistic in his power analysis textbook (Cohen, 1988, p. 48), where he carefully distinguishes it from the classical Cohen’s d by calling it $d_z$. In R, we can compute this as: (d_z <- md / sd(sub_means[,2] - sub_means[,1])) # 1.353713 There is a straightforward relationship between $d_z$ and the test statistic: $t_w = d_z\sqrt{n}$, where $t_w$ is the paired-samples t-statistic from a within-subjects design and $n$ is the number of subjects. One might regard this as a virtue of $d_z$. I will argue below that I don’t think it’s a good idea to use $d_z$. ### dt: Naive conversion from t-statistic In a simple two-independent groups design, one can compute the classical Cohen’s d from the t-statistic using $d=t_b\sqrt{\frac{2}{n}}$, where $t_b$ is the independent-samples t-statistic for a between-subjects design and $n$ is the number of subjects per group. Many, many authors over the years have incorrectly assumed that this same conversion formula will yield sensible results for other designs as well, such as in the present within-subjects case. Dunlap et al. (1996) wrote a whole paper about this issue. One might suppose that applying this conversion formula to $t_w$ will yield $d_z$, but we can see that this is not the case by solving the equation given in the previous section for $d_z$, which yields $d_z=\frac{t_w}{\sqrt{n}}$. In other words, naive application of the between-subjects conversion formula yields an effect size that is off by a factor of $\sqrt{2}$. To compute $d_t$ in R: t_stat <- t.test(sub_means[,2] - sub_means[,1])$statistic (d_t <- t_stat*sqrt(2/nrow(sub_means))) # 1.914439 ### dr: Residual standard deviation in the denominator Finally, one can compute a d-like effect size for this within-subject design by assuming that the $\sigma$ in the classical Cohen’s d formula refers to the standard deviation of the residuals. This is the approach taken in Rouder et al. (2012) on Bayes factors for ANOVA designs. In between-subjects designs where each subject contributes a single response, this is equivalent to classical Cohen’s d. But it differs from classical Cohen’s d in designs where subjects contribute multiple responses. To compute $d_r$ in R, we need an estimate of the residual standard deviation. 
Two ways to obtain this are from a full ANOVA decomposition of the data or by fitting a linear mixed model to the data. Here I do it the mixed model way:

options(contrasts=c("contr.helmert","contr.poly"))
library("lme4")
mod <- lmer(rt ~ cond + (cond||id), data=dat)
summary(mod)
# Random effects:
#  Groups   Name        Variance  Std.Dev.
#  id       (Intercept) 0.0026469 0.05145
#  id.1     cond        0.0001887 0.01374
#  Residual             0.0407839 0.20195
# Number of obs: 2500, groups: id, 25
#
# Fixed effects:
#             Estimate Std. Error t value
# (Intercept) 0.779958   0.016402   47.55
# cond        0.052298   0.008532    6.13

The residual standard deviation is estimated as 0.20195, which gives us $d_r \approx 0.259$. It turns out that, for this dataset, this is quite close to the classical Cohen's d, which was 0.25. Basically, classical Cohen's d is equivalent to using the square root of the sum of all the variance components in the denominator (see footnotes 1 and 2), rather than just the square root of the residual variance as $d_r$ uses. For this simulated dataset, the two additional variance components (intercepts and slopes varying randomly across subjects) are quite small compared to the residual variance, so adding them to the denominator of the effect size does not change it much. But the important thing to note is that for other datasets, it is possible that $d$ and $d_r$ could differ dramatically.

### So which one should I compute?

Contrary to Jeff, I don't really think there's a "correct" answer here. (Well, maybe we can say that it's hard to see any justification for computing $d_t$.) As I put it in the comments to an earlier blog post: Basically all standardized effect sizes are just made-up quantities that we use because we think they have more sensible and desirable properties for certain purposes than the unstandardized effects. For a given unstandardized effect, there are any number of ways we could "standardize" that effect, and the only real basis we have for choosing among these different effect size definitions is in choosing the one that has the most sensible derivation and the most desirable properties relative to other candidates. I believe that classical Cohen's d is the option that makes the most sense among these candidates. Indeed, in my dissertation I proposed a general definition of d that I claim is the most natural generalization of Cohen's d to the class of general ANOVA designs, and I considered it very important that it reduce to the classical Cohen's d for datasets like Jeff's. My reasoning is this. One of the primary motivations for using standardized effect sizes at all is so that we can try to meaningfully compare effects from different studies, including studies that might use different designs. But all of the effect size candidates other than classical Cohen's d are affected by the experimental design; that is, the "same" effect will have a larger or smaller effect size based on whether we used a between- or within-subjects design, how many responses we required each subject to make, and so on. Precisely because of this, we cannot meaningfully compare these effect sizes across different experimental designs. Because classical Cohen's d deliberately ignores design information, it is at least in-principle possible to compare effect sizes across different designs. Morris and DeShon (2002) is a nice paper that talks about these issues. Bakeman (2005) also has some great discussion of essentially this same issue, focused instead on "variance explained"-type effect sizes.
Although I don’t really want to say that there’s a “correct” answer about which effect size to use, I will say that if you choose to compute $d_z$$d_r$, or anything other than classical Cohen’s d, just please do not call it Cohen’s d. If you think these other effect sizes are useful, fine, but they are not the d statistic defined by Cohen! This kind of mislabeling is how we’ve ended up with 5 different ways of computing “Cohen’s d” for within-subjects designs. Finally, there is a serious discussion to be had about whether it is a good idea to routinely summarize results in terms of Cohen’s d or other standardized effect sizes at all, even in less complicated cases such as simple between-subjects designs. Thom Baguley has a nice paper with some thoughtful criticism of standardized effect sizes, and Jan Vanhove has written a couple of nice blog posts about it. Even Tukey seemed dissatisfied with the enterprise. In my opinion, standardized effect sizes are generally a bad idea for data summary and meta-analytic purposes. It’s hard to imagine a cumulative science built on standardized effect sizes, rather than on effects expressed in terms of psychologically meaningful units. With that said, I do think standardized effect sizes can be useful for doing power analysis or for defining reasonably informative priors when you don’t have previous experimental data. Footnotes 1 In general, this is really a weighted sum where the variance components due to random slopes must be multiplied by a term that depends on the contrast codes that were used. Because I used contrast codes of -1 and +1, it works out to simply be the sum of the variance components here, which is precisely why I changed the default contrasts before fitting the mixed model. But be aware that for other contrast codes, it won’t simply be the sum of the variance components. For more info, see pp. 20-21 of my dissertation. 2 If you actually compute classical Cohen’s d using the square root of the sum of the estimated variance components, you will find that there is a very slight numerical difference between this and the way we computed d in the earlier section (0.2498 vs. 0.2504). These two computations are technically slightly different, although they estimate the same quantity and should be asymptotically equivalent. In practice the numerical differences are negligible, and it is usually easier to compute d the previous way, that is, without having to fit a mixed model. ## 11 thoughts on “Five different “Cohen’s d” statistics for within-subject designs” 1. grand moff tarkin says: For within-subjects designs (e.g., paired t-tests), I calculate a Cohen’s d using bootstrapping on contrast scores (level1 – level2). The package and function named bootES does this easily. 2. Wolfgang says: Just for context, I am coming here from Cross Validated (CV): http://stats.stackexchange.com/questions/256053/variance-of-cohens-d-for-within-subjects-designs Jake asked me to comment, so here we go. First of all, the question on CV was about a simpler situation where each subject provides a single observation within the two conditions (pre/post). The ‘puzzler’ by Jeff is about a more complex case where each subject provides 50 observations within the two conditions. Also, the CV question was in part about the computation of the sampling variances of whatever ‘effect size’ measure we compute. 
For many of the proposed measures, one would have to jump through extra hoops to get any kind of sensible measure of variability (especially if one does not have the raw data, as would be typical in the meta-analytic context). Leaving this aside, I would also fit a mixed-effects model to these data, like you did for d_r, but allowing intercepts and slopes to correlate, so with (cond|id). Also, ‘dat$y <- log(dat$rt – .3)' to use the same transformation as was done by Jeff and using 'dat$cond <- dat\$cond – 1' (since 'cond' is coded 1/2 in the original dataset). The condition random effect has a close to 0 variance, but not quite 0 (the SD is 0.0123), while the condition fixed effect is estimated to be 0.0950. So, per Jeff, we would compute 0.0950 / 0.0123 ~= 7.7. Not quite infinity, but as far as 'effect sizes' are concerned, pretty close :) Is this the only way to compute a standardized effect size here? Not at all. For more complex designs like this, I don't think there is *one* way of computing a standardized effect. Even in the simpler case that was addressed on CV, there are (at least) two sensible approaches for computing a 'standardized mean change' (using raw score or change score standardization), and both of them have merit (and for both of them, we can derive the sampling variance). Most importantly, I think whatever measure is computed, we need to make explicit which sources of variability are taken into consideration. So, b_cond / SD_cond is one possibility (Jeff's suggestion), b_cond / SD_error, b_cond / sqrt(Var_cond + Var_error), and b_cond / sqrt(Var_id + Var_cond + Var_error) are other possibilities. One can debate which one is best or is most sensible, but the most important thing is that we should clearly describe which one was computed (and not just call it 'd'). 1. Donald Williams says: Sure, one can compute a standardized effect in many ways, and transparency is key. However, we should also not be afraid to use prior knowledge about what is reasonable. In other words, does a standardized effect of 7 + or infinity make sense. I would state, strongly, this is very implausible and inflated effect sizes are likely when only using some of the variability. As I do use your metaFor package, your effect sizes (e.g., SMCR) make use of all the variability in the data, not just one component. 1. Donald Williams says: Furthermore, for some standardized effect sizes in metaFor, the actual effect size does not change due to pre post correlation. What changes is the uncertainty of the estimate, if we assume a confidence interval is a measure of uncertainty (as opposed to a credible interval). 3. Marco Alonso says: Thanks for sharing this. I have a GLMM with a binary response and 3 predictors. Do you think some of these effect standardizations can be applied to this model? if so, which one would you recommend? thanks 4. John says: To calculate classical Cohen’s d from the t-statistic, should the formula perhaps read d = t*2/sqrt(n) instead of d = t*sqrt(2/n)? [I’ve also come across d = t*2/sqrt(df), but I’ve never seen the formula given above before.] 1. If n is the sample size in each group and N is the total sample size, then d = t*sqrt(2/n) = t*2/sqrt(N). In my post I used n, the number of subjects per group (and defined it as such), not N. Hope that helps clear it up. 5. Thank you Jake! This is the best blog on Cohen’s d for paired t-tests. 6. Kelly Vess says: Awesome article: great perspective in presenting the details and the big picture! 
It empowers me to explain why I’m using the classic Cohen’s D in my N=1 research! 7. Hanna says: A puzzle for experts. Can someone disentangle this formula? d <- sqrt(((tstat^2)*2*(1-correlation))/(df+1)) [or: d = t * sqrt(2*(1-r)/n)] Where tstat is the t-statistic, correlation is the correlation between the paired observations, and df is the t-statistics df. A collaborator wrote this a long time ago in a handy little function to calculate paired t-tests. He says that he used a scientific paper to select the formula for Cohen's d (possibly Dunlap et al., 1996), but I can't find the formula in there. Does this fit any of the Cohen's d recommended for paired-sample t-tests!?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 31, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8300209641456604, "perplexity": 1085.404418257692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531702.36/warc/CC-MAIN-20210123001629-20210123031629-00421.warc.gz"}
http://nrich.maths.org/313/index?nomenu=1
Triangle $ABC$ has altitudes $h_1$, $h_2$ and $h_3$. The radius of the inscribed circle is $r$, while the radii of the escribed circles are $r_1$, $r_2$ and $r_3$ respectively. Prove that $$\frac{1}{r} = \frac{1}{h_1} + \frac{1}{h_2} + \frac{1}{h_3} = \frac{1}{r_1} + \frac{1}{r_2} + \frac{1}{r_3}.$$
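One route to the identity (a sketch, not part of the original problem page): write $A$ for the area of the triangle, $s$ for its semiperimeter, and $a$, $b$, $c$ for the sides on which the altitudes $h_1$, $h_2$, $h_3$ stand. Since $2A = a h_1 = b h_2 = c h_3$, $A = rs$, and $A = r_1(s-a)$, $A = r_2(s-b)$, $A = r_3(s-c)$, we get $$\frac{1}{h_1} + \frac{1}{h_2} + \frac{1}{h_3} = \frac{a+b+c}{2A} = \frac{s}{A} = \frac{1}{r}, \qquad \frac{1}{r_1} + \frac{1}{r_2} + \frac{1}{r_3} = \frac{(s-a)+(s-b)+(s-c)}{A} = \frac{s}{A} = \frac{1}{r}.$$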
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9906238317489624, "perplexity": 218.1353463212658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246655962.81/warc/CC-MAIN-20150417045735-00303-ip-10-235-10-82.ec2.internal.warc.gz"}
http://blog.computationalcomplexity.org/2014/04/factorization-in-conp-in-other-domains.html?showComment=1397446999695
## Sunday, April 13, 2014

### Factorization in coNP - in other domains?

On an exam in my grad complexity course I asked the students to show that the following set is in coNP:

FACT = { (n,m) : there is a factor y of n with 2 \le y \le m }

The answer I was looking for was to write FACTbar (the complement) as

FACTbar = { (n,m) | (\exists p_1,...,p_L), with L \le log n, such that for all i \le L we have m < p_i \le n and p_i is prime (the p_i are not necc distinct), and n = p_1 p_2 ... p_L }

INTUITION: Find the unique factorization and note that none of the primes are \le m.

To prove this works you seem to need to use the Unique Factorization theorem, and you need that PRIMES is in NP (the fact that it's in P does not help).

A student who I will call Jesse (since that is his name) didn't think to complement the set, so instead he wrote the following CORRECT answer:

FACT = { (n,m) | n is NOT PRIME, and for all p_1,p_2,...,p_L with 2 \le L \le log n and m < p_i \le n-1 for all i \le L (p_i prime but not necc distinct), n \ne p_1 p_2 ... p_L }

(I doubt this proof that FACT is in coNP is new.)

INTUITION: show that all possible ways to multiply together numbers larger than m do not yield n; hence n must have a factor \le m.

Here is what strikes me- Jesse's proof does not seem to use Unique Factorization. Hence it can be used in other domains(?). Even those that do not have Unique Factorization (e.g., Z[\sqrt{-5}]).

Let D = Z[\alpha_1,...,\alpha_k] where the alpha_i are algebraic. If n \in D then let N(n) be the absolute value of the sum of the coefficients (we might want to use the product of n with all of its conjugates instead, but lets not for now).

FACT = { (n,m) : n \in D, m \in NATURALS, there is a factor y in D of n with 2 \le N(y) \le m }

Is this in NP? Not obvious (to me) --- how many such y's are there. Is this the set we care about? That is, if we knew this set is in P would factoring be in P? Not obv (to me). I suspect FACT is in NP, though perhaps with a diff definition of N( ). What about FACTbar? I think Jesse's approach works there, though it might need a diff bound on L.

I am (CLEARLY) not an expert here and I suspect a lot of this is known, so my real point is that a student's different answer than the one you had in mind can be inspiring. And in fact I am inspired to read Factorization: Unique and Otherwise by Weintraub, which is one of many books I've been meaning to read for a while.

1. The various definitions of FACT and FACT-bar seem to have the form { (n,m) | }. I'm pretty good at figuring out what you mean, but I can't tell what Jesse means (Are there more typos in there? Did you really mean it when you said "n \ne p_1 p_2 ... p_L"?) It would also be helpful to get an inline LaTeX parser for the blog!

2. Has Jesse's proof been garbled? If I am understanding it right, it says FACT is the set of (n,m) such that every factorization of n contains a factor between 2 and m, which is not correct.

3. YES, I garbled things and also my editor garbled things. But it's fixed now. I hope.

4. Maybe I'm confused, but is Jesse's proof really correct? A pair (n=64, m=6) belongs to FACT (2 is a factor). But in this definition, for L=2, p1=p2=8 we have that it does not belong.

1. It looks like you're right. Couldn't we add in the condition that each p_i is prime? PRIMES is clearly in co-NP: for all q, r, q*r=p -> q=1 or r=1.

5. Maybe someone should answer malcin's concern?

1. Malcin is correct, and Andy is correct, so I used Andy's correction.

6. Here http://npcomplete-001-site1.myasp.net/ resolved Exact cover problem, online solver available

7.
What about adding MathJax to the site? (one line of code :-)
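As a concrete illustration of the sets discussed in the post (my own sketch, not from the post or comments), here is a brute-force check that the direct definition of FACT agrees with the view underlying both coNP certificates, namely that (n,m) is in FACT exactly when some prime factor of n is at most m:

```python
def in_fact(n, m):
    """(n, m) in FACT  <=>  n has a factor y with 2 <= y <= m."""
    return any(n % y == 0 for y in range(2, m + 1))

def in_fact_via_primes(n, m):
    """Equivalent check suggested by the coNP characterizations:
    (n, m) is in FACT iff some prime factor of n is <= m."""
    factors, p, k = [], 2, n
    while p * p <= k:
        while k % p == 0:
            factors.append(p)
            k //= p
        p += 1
    if k > 1:
        factors.append(k)
    return any(q <= m for q in factors)

# The two characterizations agree on small inputs.
assert all(in_fact(n, m) == in_fact_via_primes(n, m)
           for n in range(2, 200) for m in range(2, n + 1))
print("agreement verified for n < 200")
```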
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9461609125137329, "perplexity": 1945.2964800649358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982964275.89/warc/CC-MAIN-20160823200924-00164-ip-10-153-172-175.ec2.internal.warc.gz"}
https://arxiv.org/abs/nlin/0412008v2
nlin

# Title: The Nonlinear Schrödinger Equation on the Interval

Abstract: Let $q(x,t)$ satisfy the Dirichlet initial-boundary value problem for the nonlinear Schrödinger equation on the finite interval, $0 < x < L$, with $q_{0}(x) = q(x,0)$, $g_{0}(t) = q(0,t)$, $f_{0}(t) = q(L,t)$. Let $g_{1}(t)$ and $f_{1}(t)$ denote the *unknown* boundary values $q_{x}(0,t)$ and $q_{x}(L,t)$, respectively. We first show that these unknown functions can be expressed in terms of the given initial and boundary conditions through the solution of a system of nonlinear ODEs. Although the question of the global existence of a solution to this system remains open, it appears that this is the first time in the literature that such a characterization is explicitly described for a nonlinear evolution PDE defined on the interval; this result is the extension of the analogous result of [4] and [6] from the half-line to the interval. We then show that $q(x,t)$ can be expressed in terms of the solution of a $2\times 2$ matrix Riemann-Hilbert problem formulated in the complex $k$-plane. This problem has explicit $(x,t)$ dependence in the form $\exp[2ikx + 4ik^2t]$, and it has jumps across the real and imaginary axes. The relevant jump matrices are explicitly given in terms of the spectral data $\{a(k), b(k)\}$, $\{A(k), B(k)\}$, and $\{\mathcal{A}(k), \mathcal{B}(k)\}$, which in turn are defined in terms of $q_{0}(x)$, $\{g_{0}(t), g_{1}(t)\}$, and $\{f_{0}(t), f_{1}(t)\}$, respectively.

Subjects: Exactly Solvable and Integrable Systems (nlin.SI); Mathematical Physics (math-ph); Analysis of PDEs (math.AP)
DOI: 10.1088/0951-7715/18/4/019
Cite as: arXiv:nlin/0412008 [nlin.SI] (or arXiv:nlin/0412008v2 [nlin.SI] for this version)

## Submission history
From: Thanasis Fokas S.
[v1] Wed, 1 Dec 2004 16:18:35 GMT (22kb)
[v2] Mon, 21 Feb 2005 18:16:31 GMT (22kb)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9611404538154602, "perplexity": 296.52141981753164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807056.67/warc/CC-MAIN-20171124012912-20171124032912-00757.warc.gz"}
https://www.physicsforums.com/threads/double-angle-formulas.3660/
# Double-Angle Formulas

1. Jul 7, 2003 ### Cod For some reason, I cannot comprehend the concepts behind this. I read the example problems over and over; however, I still cannot understand the process when I go to study or do work on it. Just to refresh your minds, the double-angle formulas:
sin2x = 2(sinx)(cosx)
cos2x = cos^2x - sin^2x = 1 - 2sin^2x = 2cos^2x - 1
tan2x = 2tanx/(1 - tan^2x)
The book example: If cosx = -2/3 and x is in quadrant II, find sin2x and cos2x. If someone could explain the processes when using these formulas to solve problems, I'd greatly appreciate it. The book just isn't helping me any.

2. Jul 7, 2003 ### Guybrush Threepwood sin x = ±sqrt(1 - (cos x)^2). Since cos x = -2/3 we have: sin x = ±sqrt(5/9). So sin x = sqrt(5)/3 or sin x = -sqrt(5)/3. We know that x is in the second quadrant and that makes sin x > 0. So sin x = sqrt(5)/3. Now you know sin x and cos x. Just replace them and find sin 2x and cos 2x. cos 2x = (cos x)^2 - (sin x)^2 so it's less confusing.....

3. Jul 7, 2003 ### dextercioby It's quite simple really: you have cos x, then you compute sin x and substitute in the formulas for the double angle. Got it??

4. Jul 7, 2003 ### Cod So how would you go about finding 'tan2x'? I understand that tanx = sinx/cosx. I just don't see how you can plug that into the formula: tan2x = 2tanx/(1 - tan^2x). Unless... 2(sinx/cosx)/(1 - (sin^2x/cos^2x)) <----- would that be correct? If that's correct, would I just plug in the known values of sin and cos? Then do the arithmetic? Last edited: Jul 7, 2003

5. Jul 7, 2003 ### Guybrush Threepwood Yes, you could do that, or you could do (sin 2x/cos 2x) after you have found the previous two results.

6. Jul 7, 2003 ### AndersHermansson I think this might be what you are looking for:
sin(2x) = sin(x+x)
cos(2x) = cos(x+x)
tan(2x) = sin(2x)/cos(2x)
Now we use the rule of addition:
sin(x+x) = sin(x)cos(x) + cos(x)sin(x) = 2sin(x)cos(x)
cos(x+x) = cos(x)cos(x) - sin(x)sin(x) = cos^2(x) - sin^2(x) = cos^2(x) - (1 - cos^2(x)) = 2cos^2(x) - 1
tan(2x) = 2sin(x)cos(x) / (2cos^2(x) - 1) ... etc
Last edited: Jul 7, 2003
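A quick numerical check of the book example discussed above (my sketch, not part of the original posts): with cos x = -2/3 and x in quadrant II, the double-angle formulas give sin 2x = -4√5/9 ≈ -0.994 and cos 2x = -1/9 ≈ -0.111.

```python
# Verify the worked example: cos x = -2/3 with x in quadrant II.
import math

cos_x = -2.0 / 3.0
sin_x = math.sqrt(1.0 - cos_x**2)   # positive root, since x is in quadrant II
x = math.acos(cos_x)                # acos returns an angle in quadrant II here

sin_2x = 2.0 * sin_x * cos_x                                 # 2 sin x cos x
cos_2x = cos_x**2 - sin_x**2                                 # cos^2 x - sin^2 x
tan_2x = 2.0 * (sin_x / cos_x) / (1.0 - (sin_x / cos_x)**2)  # 2 tan x / (1 - tan^2 x)

# Compare against evaluating the functions at 2x directly.
print(f"sin 2x: formula {sin_2x:+.6f}  direct {math.sin(2*x):+.6f}")  # -4*sqrt(5)/9
print(f"cos 2x: formula {cos_2x:+.6f}  direct {math.cos(2*x):+.6f}")  # -1/9
print(f"tan 2x: formula {tan_2x:+.6f}  direct {math.tan(2*x):+.6f}")  # 4*sqrt(5)
```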
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8839988708496094, "perplexity": 3141.6015656748423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212639.36/warc/CC-MAIN-20180817163057-20180817183057-00551.warc.gz"}
https://advances.sciencemag.org/content/2/9/e1600807.full
Research Article | MATERIALS SCIENCE

# Energy gap evolution across the superconductivity dome in single crystals of (Ba1−xKx)Fe2As2

Vol. 2, no. 9, e1600807

## Abstract

The mechanism of unconventional superconductivity in iron-based superconductors (IBSs) is one of the most intriguing questions in current materials research. Among non-oxide IBSs, (Ba1−xKx)Fe2As2 has been intensively studied because of its high superconducting transition temperature and fascinating evolution of the superconducting gap structure from being fully isotropic at optimal doping (x ≈ 0.4) to becoming nodal at x > 0.8. Although this marked evolution was identified in several independent experiments, there are no details of the gap evolution to date because of the lack of high-quality single crystals covering the entire K-doping range of the superconducting dome. We conducted a systematic study of the London penetration depth, λ(T), across the full phase diagram for different concentrations of point-like defects introduced by 2.5-MeV electron irradiation. Fitting the low-temperature variation with the power law, Δλ ~ T^n, we find that the exponent n is the highest and the Tc suppression rate with disorder is the smallest at optimal doping, and both evolve as the doping moves away from optimal, which is consistent with increasing gap anisotropy, including an abrupt change around x ≃ 0.8, indicating the onset of nodal behavior. Our analysis using a self-consistent t-matrix approach suggests the ubiquitous and robust nature of s± pairing in IBSs and argues against a previously suggested transition to a d-wave state near x = 1 in this system.

Keywords • Superconductor • pairing symmetry • pnictides

## INTRODUCTION

Understanding the mechanisms of superconductivity in iron-based superconductors (IBSs) is a challenging task, partially due to the multiband character of interactions and scattering (1–4). On the other hand, the rich chemistry of IBSs offers a unique opportunity to study the physics within one family of materials and test material-specific theories of superconductivity. It is widely believed that spin fluctuations due to repulsive Coulomb interactions are responsible for superconductivity and lead to sign-changing pair states around the Fermi surface. Theories of superconductivity based on exchange of these electronic excitations predict that large-momentum pair scattering processes dominate the pairing interactions, but details distinguish between competing pair states, usually s wave and d wave. For the simplest band structures characteristic of these systems, it was found that optimally doped systems should have a fully gapped, s-wave ground state, but as the system was overdoped, d wave would become more competitive and even the s-wave state would become extremely anisotropic (5). Systematically testing these predictions would be an important step toward understanding the origins of superconductivity in these systems. Among various IBSs, (Ba1−xKx)Fe2As2 (BaK122) is, perhaps, one of the most interesting and intensively studied compounds, exhibiting an unusual variation of the superconducting gap structure across the superconducting dome that exists between x ≈ 0.18 and 1. In the optimally doped region, x ≈ 0.35 to 0.4, two effective isotropic superconducting gap scales (roughly with a 2:1 magnitude ratio) were identified in many experiments, for example, thermal conductivity (6), London penetration depth (7, 8), and angle-resolved photoemission spectroscopy (ARPES) (8–12).
In the heavily overdoped region, x ≥ 0.8, a gap with line nodes was identified by thermal conductivity (1315), London penetration depth (16), and ARPES (9, 11). Although some thermodynamic (17, 18) and small-angle neutron measurements (19) have reported tiny full gaps instead, there is a consensus that the gap anisotropy is very strong. An important feature of the overdoped Ba122 system is the Lifshitz transition reported for both electron-doped (20) and hole-doped (21) compounds. In the material of interest here, BaK122, there is a series of Lifshitz transitions in the x ~ 0.7 to 0.9 region that result in the replacement of electron-like pockets at the M point by hole-like pockets (21, 22), although a more precise determination of the critical compositions and the exact evolution of the three-dimensional band structure is still lacking. Indeed, this marked change in the electronic band structure must be taken into account when trying to explain the observed evolution of the superconducting properties with doping. One of the problems is the absence of systematic studies for a sufficient number of different compositions with reliably established values of x, spanning the whole doping range. Here, we measured 16 different compositions with x values determined by the wavelength-dispersive spectroscopy (WDS) in each measured sample. Although there is an overall experimental consensus on the evolution with doping from large, isotropic to smaller, highly anisotropic gaps in BaK122, several possible theoretical interpretations exist. Most authors propose a crossover between two generalized s-wave states, where the usual configuration of isotropic gaps with opposite signs on the electron and hole pockets crosses over to a configuration with opposite signs on the hole pockets resulting in accidental nodes (15). This crossover may happen through an intermediate time-reversal symmetry broken s + is state (23). Some consider a transition from s± to d wave either directly (24) or via an intermediate s + id state (1, 25, 26). Still, others propose the existence of too-small-to-measure but finite “Lilliputian” gaps (17, 18). Because of the multitude of Fermi surface sheets and the absence of direct phase-sensitive experiments, it is difficult to pinpoint the most plausible explanation, and further studies are needed. This is where the introduction of controlled artificial disorder becomes a very useful tool. In fact, impurity scattering is phase-sensitive and therefore can be used to at least narrow down possible scenarios. In most cases, only suppression of Tc is studied, and even then, it can provide important information. For example, strong support for s± pairing was found in electron-irradiated Ba(Fe,Ru)2As2 (27). Of course, in IBSs, it is rather difficult to draw definitive conclusions from Tc suppression alone because of the many parameters involved in multiband pair-breaking (28). Measurements of another disorder-sensitive parameter, for example, low-temperature behavior of London penetration depth, can significantly constrain theoretical interpretations. This was suggested as a way to distinguish between s± and s++ pairing (29). This idea has been used to interpret the data in BaFe2(As,P)2 (30) and SrFe2(As,P)2 (31), where potential scattering lifted the nodes, thus proving them accidental and, therefore, lending a strong support to s± pairing. 
Here, we measured low-temperature variation of London penetration depth, Δλ(T), down to 50 mK in 16 different compositions of BaK122, for most of which the effect of artificial point-like disorder induced by 2.5-MeV electron irradiation at several doses was examined. By analyzing both the rate of Tc suppression and changes in Δλ(T), we conclude that increasing gap anisotropy on one of the hole bands at the Γ point leads to the development of accidental nodes, and when the electron band no longer crosses the Fermi level at the M point, s± pairing is realized between two hole bands. This is illustrated schematically in Fig. 1. In principle, the incipient electron band can still play a role in interband interactions and pair-breaking scattering, but these effects are not qualitatively relevant here because superconductivity is supported by robust bands at the Fermi level (32, 33). We also discuss the possibility of a crossover from s to d symmetry with increasing x and conclude that this is very unlikely, in line with ARPES studies that find accidental nodes on hole bands all the way up to x = 1 (11, 34).

## RESULTS

Figure 2A shows the composition phase diagram of BaK122 compounds. The superconducting transition temperature, Tc(x), was determined as the midpoint of the transition in penetration depth measurements (see fig. S1). For pristine samples, Tc0(x) shows its maximum value of 39 K at around x ≈ 0.40 and gradually decreases toward lower and higher x, forming a ubiquitous superconducting "dome." Although the evolution of Tc(x) is smooth in general, there is an apparent jump near x = 0.80. This anomaly correlates with the appearance of accidental nodes induced in this material as a consequence of the Lifshitz transition (35). For most compositions shown in Fig. 2, the same samples were electron-irradiated and the London penetration depth was measured before and after each irradiation run. The relative change, (Tc − Tc0)/Tc0, is shown in Fig. 2B for the same samples as in Fig. 2A. The largest suppression of ~47% per 1 C/cm2 (~56.4% for 1.2 C/cm2) was found in pure KFe2As2, and the smallest suppression was found in the optimally doped compounds. As shown in Fig. 3, the "physically meaningful" normalized Tc suppression plotted versus resistivity at Tc shows a significant increase when transitioning from optimal to overdoped regimes. Note that because of magnetic ordering, these rates should not be compared directly to those of the underdoped regime, which require a separate analysis as a result of the competition between superconductivity and magnetism (36). In terms of the rate per irradiation fluence, the normalized suppression rate of optimally doped samples (fig. S4A) is about 0.025 per 1 C/cm2 and increases to 0.07 per 1 C/cm2 in the underdoped samples (x = 0.22), consistent with our previous report (7). In sharp contrast, the suppression rate increases markedly in the far overdoped region, reaching 0.47 per 1 C/cm2, which is 20 times larger than that of the optimally doped regime. All these numbers for the rate of Tc suppression (i) are much greater than those expected from conventional s++ pairing and (ii) can be explained within a generalized s± pairing model if one is allowed to tune gap anisotropy and the ratio of interband/intraband scattering [see Prozorov et al. (27) and references therein].
To understand the evolution of the superconducting gap with doping and disorder, we analyze the low-temperature behavior of the London penetration depth in terms of the power law, Δλ(T) = A(T/Tc)^n, as shown in Fig. 4 and summarized in Fig. 5. To present the observed systematic trends, the upper panels in Fig. 4 show Δλ(T) on a fixed scale of 0 to 140 nm and over the reduced temperature range 0 to 0.3 T/Tc (fig. S1 shows full-range curves). Figure 5A shows the composition variation of Δλ(0.3Tc), reflecting the density of thermally excited quasi-particles. There is a clear trend of a marked increase in Δλ as we move away from optimal doping. At small x, this trend is naturally explained in terms of the competition between superconducting and SDW order (7, 36, 37). The increase toward the underdoped region is quite monotonic, whereas the increase toward x = 1 is distinctly nonmonotonic. There is even some decrease of Δλ(0.3Tc) around x = 0.80, coincident with the anomaly in Tc(x) (Fig. 2) and where the Lifshitz transition is believed to occur (21). Similar nonmonotonic behavior in the same region was reported before (15); thus, it seems that this is not an experimental aberration. In fact, this feature may signal the onset of accidental nodes near the Lifshitz transition (35). The lower panels in Fig. 4 show the exponent, n, obtained in the power-law fitting. To examine the robustness of the power-law representation, fitting of Δλ(T/Tc) was performed from the base temperature up to three different upper limits, Tup/Tc = 0.2, 0.25, and 0.30. The results are shown by three points in each frame of the lower panel in Fig. 4. Figure 5B summarizes the composition and irradiation evolution of the exponent n obtained at Tup/Tc = 0.3. Horizontal lines show three principal limits of the exponent n expected for different scenarios. A clean gap with line nodes corresponds to n = 1, whereas exponential behavior is experimentally indistinguishable from a large exponent (n ≥ 3 to 4). In all cases, n = 2 is the terminal dirty limit for any scenario with pair-breaking (s± or d wave), whereas the behavior should remain exponential for s++ pairing, where nonmagnetic scattering is not pair-breaking. At small x, in the coexistence regime, the gap anisotropy increases, but we find no evidence of nodes, consistent with previous studies (7, 36). This result argues against an s++ gap structure, in which the reconstruction of Fermi surfaces due to SDW order must lead to robust nodes (37). Upon electron irradiation, Tc slowly decreases, suggesting moderate gap anisotropy and the presence of small but significant interband impurity scattering (38). Close to the optimal composition of x = 0.4, the penetration depth exponent n decreases significantly with irradiation, providing strong evidence for s± pairing. On the other hand, even upon a high-dose irradiation of 3.4 C/cm2, the exponent remains greater than n = 3, whereas Tc decreases by 8%, which is suggestive of robust full gaps. In a fully gapped s++ state, the only effect of disorder is to average the gap over the Fermi surface, leading inevitably to an increase of the minimum gap and therefore an increase in the exponent n with disorder, contrary to our observations. Moving to higher x away from the optimal composition, the gap anisotropy increases and the exponent n for the pristine samples decreases.
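To make this kind of power-law analysis concrete, here is a minimal, illustrative sketch (not the authors' analysis code) that fits synthetic Δλ(T) data to A(T/Tc)^n over the three upper fitting limits quoted above; the data, parameter values, and noise level are hypothetical.

```python
# Illustrative sketch only: power-law analysis of low-temperature penetration depth,
# dlambda(T) = A * (T/Tc)^n, fitted up to several upper limits Tup/Tc.
# The data below are synthetic; no experimental data are reproduced here.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, A, n):
    # t = T/Tc (reduced temperature); returns dlambda in nm
    return A * t**n

rng = np.random.default_rng(0)
t = np.linspace(0.02, 0.35, 80)                     # reduced temperature T/Tc
true_A, true_n = 120.0, 1.6                         # hypothetical parameters
dlam = power_law(t, true_A, true_n) + rng.normal(0.0, 1.0, t.size)  # synthetic noisy data

for t_up in (0.20, 0.25, 0.30):                     # three upper fitting limits, as in the text
    mask = t <= t_up
    (A_fit, n_fit), _ = curve_fit(power_law, t[mask], dlam[mask], p0=(100.0, 2.0))
    print(f"Tup/Tc = {t_up:.2f}:  A = {A_fit:6.1f} nm,  n = {n_fit:4.2f}")
```

Scanning the result over several upper limits, as done here, is one way to check that the extracted exponent is not an artifact of the chosen fitting window.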
Upon irradiation, the gap anisotropy is smeared out and the exponent increases even in the s± case, provided that all bands are still fully gapped and the intraband impurity scattering is dominant. This is apparently the case for x = 0.54. For yet higher doping levels, the anisotropy becomes so strong that the system develops accidental nodes (n → 1), which, in this case, are apparently not lifted by the disorder (39). This is possibly due to (i) the substantial change in the electronic band structure approaching the Lifshitz transition and/or (ii) substantial interband impurity scattering. Note that this evolution is very different from the isovalently substituted BaFe2(As,P)2 (30), in which line nodes are found at all x values and the band structure is unchanged. In our case, at large x, the exponential temperature dependence in pristine samples changes to ~ T^2 at around x = 0.60 and tends toward ~ T at x ≥ 0.80, indicative of gaps with line nodes. In contrast to the optimally doped region, electron irradiation is much more effective in decreasing Tc, that is, 41% upon 3.4 C/cm2 (x = 0.81) and 56% upon 1.2 C/cm2 (x = 1.00). Nevertheless, the exponent n never exceeds 2.

### s± superconductivity in the overdoped region

In previous studies of London penetration depth and thermal conductivity in overdoped samples, nodal behavior was identified above x = 0.8 and attributed to a crossover from a fully isotropic s-wave state with sign change between electron and hole pockets to a new type of s± pairing with sign change between the hole pockets (15), which also acquired accidental nodes. Thermal conductivity measurements of the end member, at x = 1, indicate line nodes and were interpreted in terms of d-wave pairing (13, 40), which was also claimed theoretically (24). Here, we add an additional restriction on possible interpretations by looking simultaneously at the variation of Tc and of the temperature-dependent London penetration depth with controlled point-like disorder. As we mentioned above, the suppression of Tc with irradiation at optimal doping rules out a global s++ state. Now, the challenge is to begin with a "conventional" (nodeless) s± state and determine whether a reasonable model of the superconducting gap and its evolution with composition can be constructed to describe all experimental results. We find that a generalized sign-changing s± state with accidental nodes can be used to describe the entire phase diagram, including a crossover from nodeless to nodal gap. The novel assertion of our approach is that with the electron pockets absent above the Lifshitz transition, x > 0.8, the s± pairing shifts to the hole pockets, naturally resulting in a nodal state. We use the self-consistent t-matrix formalism and a sign-changing s± state to describe both the London penetration depth and the Tc suppression rate for different levels of disorder. To keep the analysis tractable and to fit the experimental data, we minimize our parameter set by working in the 2Fe-BZ and modeling the gap structure as shown schematically in Fig. 1. Specifically, before the Lifshitz transitions, the two electron pockets at the M point and the two hole pockets at the Γ point are each modeled as a single C4-symmetric pocket with its own gap function. Here, the angle φ is measured from the zone diagonal. After the Lifshitz transition, the electron pockets disappear, and the two model bands now correspond to two hole pockets. Each hole pocket gap is now modeled independently, with its own isotropic and anisotropic components.
We realize that the actual band structure is more complex, and its evolution across the Lifshitz transition involves several bands changing across the Brillouin zone. However, we find that a model with two effective gaps, each having isotropic and anisotropic parts, is sufficient to explain the observed results. First, we fit the data for the pristine samples and then include impurity scattering within the self-consistent t-matrix formalism (41-44). We model the defects induced by electron irradiation as point-like scatterers, which scatter between the bands with a certain (interband) amplitude and within the same band with another (intraband) amplitude. The presence of interband impurity scattering and the relative sign change between these two bands are necessary to explain the Tc suppression and penetration depth in the irradiated samples (see the Supplementary Materials for details of the fitting procedure). The obtained gap amplitudes are shown in Fig. 6 as a function of composition, x. It is important that the average gap on one of the hole bands, h1, changes its sign with increasing x. This is essential to fit the Tc suppression and penetration depth on equal footing in a self-consistent manner (see fig. S4B). Without a relative sign change between the hole pockets, even strong interband impurity scattering will average the gap anisotropy, leading to a weak suppression of Tc and activated behavior in the temperature dependence of the low-temperature penetration depth, which is not in agreement with the data. The obtained evolution of gaps suggests a new paradigm in which an s± superconducting state with relative sign change between the hole and electron pockets at moderate doping levels evolves into an s± state with the sign change between the hole bands with accidental line nodes. This evolution of the gap structure is shown schematically in Fig. 1 and is the central result of this paper. We note that if one concentrates exclusively on Fermi surface-integrated quantities, such as thermal conductivity or penetration depth, distinguishing d-wave states from anisotropic, deeply nodal s states can be very difficult. As shown in Fig. 7, both d-wave and anisotropic s± states give reasonable fits to the pristine penetration depth data for x = 1.0. Furthermore, distinguishing on the basis of disorder is difficult because here we do not have a well-established link between the pair-breaking rate and the irradiation dosage; thus, it is possible to find parameters for either "dirty d-wave" or "dirty nodal s-wave" cases that fit both the Δλ and ΔTc data for the single x = 1 sample. However, on the basis of the fits to the heavily K-doped alloys near x = 0.9 in Fig. 7, we see that there is substantial additional curvature at low temperatures that is incompatible with the cos 2φ d wave. It is conceivable that a strong antiphase cos 6φ component in a d-wave state could fit the penetration depth data. However, there is no theory in support of such a state, and we therefore conclude that the superconducting condensate in this system has s-wave symmetry throughout the phase diagram and simply evolves in an anisotropic manner as roughly depicted in Fig. 1. In Fig. 7, we show a comparison between the state with accidental nodes and a d-wave state for x = 0.91 and x = 0.92; for both compositions, the d-wave gap is incompatible with the experimental data. However, for x = 1.0, both d-wave and s± states with accidental nodes can fit the data.
Thus, we cannot rule out a crossover between s-wave and d-wave symmetries between x = 0.92 and x = 1.0. However, ARPES measurements provide a strong argument against this scenario (11). An additional argument favoring s± pairing with accidental nodes over the d wave is the nonmonotonicity of Δλ(0.3Tc, x) near x ~ 0.8 (see Fig. 5). Because of an overall decrease in Tc in the overdoped region, the normal fluid density in an isotropic s-wave or d-wave state is expected to monotonically increase. On the other hand, this nonmonotonicity occurs naturally during a smooth onset of accidental nodes—where a fully gapped Δ(φ) near the expected nodal region transits to a linear-in-φ dependence through an intermediate quadratic, Δ(φ) ~ φ2, dependence. Accidental nodes not only are more probable for s-wave pairing as opposed to d-wave pairing but also are expected to appear around the Lifshitz transition (35). Of course, nonmonotonicity of Δλ(0.3Tc, x) does not uniquely imply accidental nodes, but accidental nodes naturally lead to the observed nonmonotonic behavior. This scenario can also explain variations observed at the lowest temperatures for close compositions, such as x = 0.91 and 0.92 (see Fig. 7). We emphasize that our analysis of the rate of Tc suppression by nonmagnetic scattering supports accidental nodes in an s± state rather than in an s++ state. Although a small gap could be present in the x = 0.92 sample, at x = 1, our data and fitting appear to rule this out. However, within our experimental temperature range, down to 50 mK, it is impossible from the penetration depth alone to definitively rule out gaps on the order of 0.1 meV or smaller, such as those suggested by the analysis of the specific heat experiments (17, 18). Nevertheless, our systematic measurements and analysis of many different compositions add to the growing experimental support for the s-wave origin of the pairing interaction near x ~ 1 (and therefore over the whole phase diagram). This, in turn, indicates that any competing d-wave channel, as predicted by many theoretical approaches, is competitive but subleading all the way up to x = 1. The extent of this competition can be addressed by probing collective modes in the nonpairing channel using other experimental techniques (for example, Raman scattering). We note that some studies of the x = 1 composition under pressure also propose a change of pairing symmetry from d to s (45). Our work poses severe difficulties for such an interpretation. ## CONCLUSIONS In conclusion, we used deliberately introduced point-like disorder as a phase-sensitive tool to study the compositional evolution of the superconducting gap structure in BaK122. Measurements of both the low-temperature variation of London penetration depth and Tc suppression provided stringent constraints on the possible gap structures. By using a generalized s± model and t-matrix calculations, we were able to describe the compositional evolution of the superconducting gap, including a crossover from nodeless to nodal concomitant with the Lifshitz transition. Our model provides a natural interpretation of the rich physics of this system and shows that s± pairing is a very robust state of iron pnictides. ## MATERIALS AND METHODS ### Crystal growth We developed an inverted temperature gradient method to grow large and high-quality single crystals of BaK122. The starting materials—Ba and K lumps, and Fe and As powders—were weighed and loaded into an alumina crucible in a glove box. 
The alumina crucible was sealed in a tantalum tube by arc welding. The tantalum tube was then sealed in a quartz ampoule to prevent the tantalum tube from oxidizing in the furnace. Crystallization from the top of the liquid melt helps to expel impurity phases during growth, compared to growth inside the flux. Details of the growth and detailed characterization for the entire dome can be found elsewhere (46, 47).

### Sample characterization and selection

Sixteen different compositions ranging from x = 0.20 to 1.00 were identified using WDS. More than one sample of each composition was studied. The crystals had typical dimensions of 0.5 mm × 0.5 mm × 0.03 mm. All samples were prescreened using a dipper version of the tunnel diode resonator (TDR) technique (48), using the sharpness of the superconducting transition as a measure of quality of each particular piece. After this prescreening, the chemical composition of each individual sample was determined using WDS in a JEOL JXA-8200 electron microprobe. In each sample, the composition was measured at 12 points on the surface and averaged (46). The variation of the in-plane London penetration depth Δλ(T) was measured down to 50 mK using a self-oscillating TDR (49-51). To study the effect of disorder, Δλ(T) for each crystal was measured before and after the irradiation. Irradiation by 2.5-MeV electrons was performed at the SIRIUS Pelletron in the Laboratoire des Solides Irradiés at École Polytechnique (Palaiseau, France). The electrons created Frenkel pairs that acted as point-like atomic defects. Throughout the paper, the total acquired irradiation dose was conveniently measured in coulombs per square centimeter, where 1 C/cm2 = 6.24 × 10^18 electrons/cm2. With a calculated head-on collision displacement energy for Fe ions of 22 eV and a cross section to create Frenkel pairs at 2.5 MeV of 115 barn (b), a dose of 1 C/cm2 resulted in about 0.07% defects per iron site. Similar numbers were obtained for the other sites; the cross sections for Ba and As are 105 and 35 b, respectively. It is known that the interstitials migrate to various sinks (surface, dislocations, etc.) and vacancies remain in the lattice. The electron irradiation was conducted in liquid hydrogen at 22 K, and the recombination of vacancy-interstitial pairs upon warming to room temperature was 20 to 30%, as measured directly from the decrease of residual resistivity (27). After initial annealing, the defects remained stable, which was established from remeasurements performed several months (up to more than a year) apart. In addition, by measuring the Hall coefficient, it was determined that electron irradiation did not change the effective doping level; neither did it induce a measurable magnetic signal, which would have been detected in our sensitive TDR measurements.

## SUPPLEMENTARY MATERIALS

London penetration depth
T-matrix fitting procedure
fig. S1. (Color online) Full transition curves of Δλ(T) for the studied samples.
fig. S2. (Color online) Resistivity estimated from the skin depth (TDR).
fig. S3. (Color online) t-Matrix fitting of the London penetration depth for compositions spanning the superconductivity dome.
fig. S4. (Color online) Variation of superconducting critical temperature upon irradiation for different compositions.
fig. S5. (Color online) Comparison of Tc suppression as a function of increasing disorder for various possible scenarios for heavily overdoped systems.
This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited. ## REFERENCES AND NOTES Acknowledgments: We thank A. Chubukov, R. Fernandes, Y. Matsuda, I. Mazin, T. Shibauchi, and L. Taillefer for useful discussions. Funding: This work was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Science and Engineering Division. Ames Laboratory is operated for the U.S. DOE by Iowa State University under contract DE-AC02-07CH11358. We thank the SIRIUS team, O. Cavani, B. Boizot, V. Metayer, and J. Losco, for running electron irradiation at École Polytechnique [supported by the EMIR (Réseau national d’accélérateurs pour les Etudes des Matériaux sous Irradiation) network, proposal 11-11-0121]. V.M. acknowledges the support from the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. DOE. P.J.H. and S.M. were partially supported by NSF-DMR-1407502. Author contributions: K.C. and S.T. conducted London penetration depth measurements. K.C. processed and analyzed raw data. M.K. led electron irradiation work. M.K., R.P. and K.C. performed electron irradiation. M.A.T. handled all sample preparation and transport measurements. Y.L. and T.A.L. grew single crystals. W.E.S. performed WDS measurements. V.M., S.M., and P.J.H. led theoretical work. V.M. performed t-matrix fitting and data analysis. R.P. devised and coordinated the project. All authors extensively discussed the results, the models, and the interpretations. All authors contributed to the writing of the manuscript. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors. View Abstract
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8349353075027466, "perplexity": 1651.4792811642187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146176.73/warc/CC-MAIN-20200225233214-20200226023214-00130.warc.gz"}
https://www.physicsforums.com/threads/number-of-vibrations-in-a-photon.296749/
# Number of vibrations in a photon 1. Mar 3, 2009 ### bunburryist Is it possible to determine the actual number of vibrations associated with a given photon? Do all photons from similar events have the exact same number of vibrations? For example, do all photons resulting from electrons dropping from the next to the lowest level to the lowest level in a Hydrogen atom have the same number of vibrations? 2. Mar 3, 2009 ### malawi_glenn vibrations?? The electromagnetic field associated with the photon has FREQUENCY, number of oscillations PER time. We measure the frequency by measuring the energy. No there is a small spread in energy and hence the frequency due to the energy-time uncertainty relation, each atomic level (except the ground state) has finite 'life-time' and hence there is an uncertainty in the energy of that level. 3. Mar 3, 2009 ### Bob_for_short I heard and read that it was about 10000-100000 vibrations (a wave packet of some length). Bob. 4. Mar 3, 2009 ### Rajini i think instead of vibrations you have to use 'frequency'. ps:Molecule's vibrational properties are often studies using photons (Raman, FTIR, NIS, etc) 5. Mar 3, 2009 ### lightarrow The number of waves in an electromagnetic wavepacket can vary very much, depending on the way it was generated, even if we considered only atomic transitions. 6. Mar 3, 2009 ### Bob_for_short Yes, I agree. But it cannot be too small. Otherwise it is not one photon of a given frequency. Bob. 7. Mar 3, 2009 ### lightarrow We have to point out at least a couple of things here: 1. A photon and an electromagnetic wave packet have nothing to do each other (I know the temptation of saying they have is strong!) A photon is much more complicated than what people usually think (even after they have been told it's more complicated than what they thought before ) 2. A photon has never a precise frequency (or energy), in the sense of one only frequency: there is indetermination $$\Delta E \cdot\Delta t \geq\hbar/2$$ between the energy and the time of emission. 8. Mar 3, 2009 ### Bob_for_short You contradict yourself. A photon is characterized in the first place by its frequency. It is possible only if the number of vibrations N is big. On the other hand, it is not something that is emitted permanently. There is beginning and ending (N is limited). So it is a typical wave packet. As simple as that. Bob. 9. Mar 3, 2009 ### f95toli No, it is not. Remember that we are talking about quantum mechanics here: normal "rules" do not apply. There is no "intuitive" way of understanding what a single photon is, and any attempts to understand in in terms of "vibrations" are doomed to fail. Remember also that when we are talking about single- or a few photon states we are by necessity referring to number states (Fock states) which are very difficult to prepare; most states of light (including thermal and coherent states) don't have a definite number of photons in them; but coherent light from e.g. a laser has still a quite well-defined frequency. 10. Mar 4, 2009 ### bunburryist Let me tell you where my question came from, and maybe it will be easier to answer it. A photons "wave packet," or whatever we want to call it (after all these years I still have no idea what such a thing is), must have some duration. It either happens 1) all at once, in which case it can not have a wavelength, or it 2) happens forever, in which case it, well, happens forever, or it 3) happens for some length of time. 
There must be some length of time that it is created in, allowing for uncertainty. The very nature of wavelength is that of a distance-time relation. Unless the wave-packet is infinitely long, it must have some approximate length, whether or not it can be defined exactly. And if it has a length and a wavelength, then there must be some number of vibrations. For example, when an electron goes from one orbit to another it must happen in some time interval, even if that interval, a la quantum uncertainty, cannot be defined exactly. If that time interval is x, then the number of vibrations will be the frequency multiplied by that time. Of course, the less determinable the time is, the less determinable the number of vibrations will be - but there will be, to some degree or other of exactness, some number of vibrations. Now, imagine we have two photons. The first is that emitted by a hydrogen atom as an electron drops to the lowest level. It will have some approximate number of vibrations since it takes some length of time to happen.*** Since the energy is related to the wavelength, the energy will have an arithmetic (though not necessarily physically relevant) relation to the number of vibrations. Now let's imagine another photon emitted in some other circumstance, one which happens to have a higher frequency and a significantly larger number of vibrations than our hydrogen-emitted photon. If that photon is red shifted so that the wavelength is the same as our hydrogen-emitted photon, we will have two photons of the same wavelength (and hence, same energy), yet one (hydrogen emitted) will have fewer vibrations than the other. (***Or does it not take time to happen? If this is the case, where does the number of vibrations come from, conceptually?) Is there any physically real relationship between the number of vibrations and the energy? If my understanding of Planck's constant is correct, there would not be, since the Planck relation is between wavelength and energy, saying nothing about the number of vibrations. If the wave packet is not physically real, but is a mathematical representation of our knowledge of the location of the photon, would the difference in the number of vibrations simply not have any relation whatsoever to the energy at all, but only to our knowledge of the location of the photon? If this is the case (and if I'm even remotely close to understanding this!), does anyone have any interesting observations on this subject of same-energy photons having significantly different numbers of vibrations? 11. Mar 4, 2009 ### lightarrow What we are trying to express is that the part of your post I have coloured in blue is meaningless, because we don't know how to associate a photon to a wavepacket (assuming it's possible at all). 12. Mar 4, 2009 ### Bob_for_short The higher the number of vibrations N in a wave packet, the smaller the energy uncertainty. The photon energy E=h_bar*omega implies that it is a well defined physical observable with a small uncertainty. As any observable, it has uncertainty depending on your source of photons and on the way you observe them. Please read about the Mössbauer effect (http://en.wikipedia.org/wiki/Mössbauer_effect). Bob. 13. Mar 4, 2009 ### lightarrow The frequency omega is well defined *only* if the electromagnetic wave packet has an infinite length. For a common electronic transition in an atom, for example, that's far from being true, in fact the spectral lines have a finite, nonzero width.
It means that the omega you write is actually an average of the frequencies you have in the line. On the other hand the more you define the frequency, the less is defined the number of photons (indetermination phase/number of photons) so a single photon, if it were an EM wavepacket, would have a completely undetermined frequency (and so energy), at least as far as I know. 14. Mar 4, 2009 ### Bob_for_short Indeed, the frequency omega is the average value. Normally the spectral line width is small, so this value is well defined (sufficient to calculate the photon energy). That means N - the number of vibrations or wavelengths in one photon - is big. 15. Mar 4, 2009 ### bunburryist What do you mean by "we don't know how to associate a photon and a wavepacket"? 16. Mar 4, 2009 ### f95toli A wavepacket -by which I assume you mean something along the lines of a soliton- is a classical concept; there is no obvious connection AT ALL between the concept of a photon and a "packet" of any shape or form. 17. Mar 8, 2009 ### bunburryist Is this right? The wave associated with a particle varies in amplitude and wavelength. Where the wave has less amplitude there is less of a chance of the particle having the energy corresponding (via h) to the wavelength at that part of the wave, and where the wave has more amplitude there is a higher chance of the particle having the energy corresponding (via h) to the wavelength at that part of the wave. So Planck's constant doesn't take us from "the" wavelength of a particle to "the" energy of the particle, rather it takes us from wavelengths at different locations in the wave to energies corresponding with those wavelengths (via h). 18. Mar 8, 2009 ### lightarrow All you have written is correct, for a massive particle. But we are talking about photons (massless particles) and about electromagnetic wavepackets (which is another story). 19. Mar 8, 2009 ### cragar E=hf 20. Mar 8, 2009 ### cragar or E=(hc)/(lambda)
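As a rough sense of scale for the numbers discussed above (values are approximate and chosen only for illustration, not taken from the thread), one can estimate how many field oscillations fit within the natural lifetime of an atomic transition, using the "frequency multiplied by emission time" idea; the result depends strongly on the transition and on how the light is produced, as noted in the thread.

```python
# Rough estimate: number of EM field oscillations within the natural lifetime
# of an atomic transition (illustrative values for the hydrogen Lyman-alpha line).
wavelength = 121.6e-9      # m, 2p -> 1s transition (approx.)
lifetime = 1.6e-9          # s, natural lifetime of the 2p state (approx.)
c = 3.0e8                  # m/s, speed of light

frequency = c / wavelength             # ~2.5e15 Hz
n_oscillations = frequency * lifetime  # oscillations ~ frequency * emission time

print(f"frequency    ~ {frequency:.2e} Hz")
print(f"oscillations ~ {n_oscillations:.1e}")   # on the order of a few million
```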
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9477857947349548, "perplexity": 616.1788764943781}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105451.99/warc/CC-MAIN-20170819124333-20170819144333-00474.warc.gz"}
https://www.physicsforums.com/threads/angular-velocity-and-angular-acceleration.297294/
# Angular Velocity and Angular Acceleration 1. Mar 4, 2009 ### Hotsuma I feel really dumb, as this should be incredibly easy to figure out, but I keep getting the wrong answers. I am including the book's data so I can figure out how this needs to be done and then plug in my own values. 1. The problem statement, all variables and given/known data Pilots can be tested for the stresses of flying high-speed jets in a whirling "human centrifuge," which takes 1.0 min to turn through 20 complete revolutions before reaching its final speed. What is its angular acceleration (assumed constant) (in rev/min^2) and its angular velocity in rpm? 2. Relevant equations
\theta(t) = \theta_0 + \omega_0 t + \frac{1}{2}\alpha t^2
\omega(t) = \omega_0 + \alpha t
3. The attempt at a solution I have tried finding the frequency and then multiplying it by 2\pi, but I don't get the right answer. The book's values are: Ang. Acc = 40 rev/min^2 Ang. Vel = 40 rpm 2. Mar 4, 2009 ### lanedance i think it might have something to do with your units theta = (1/2).alpha.t^2 you want to find alpha in rev/min^2 so input theta in revolutions and t in mins to get the correct alpha 3. Mar 4, 2009 ### Hotsuma I don't think I have a value for theta unless it is 2pi. Even then, that is in radians not revolutions per minute. 4. Mar 4, 2009 ### Hotsuma Thanks LowlyPion, my answer for angular acceleration is correct and the concept is much more clear now. I'll let you know if I get angular velocity figured out, which should be easy from here. 5. Mar 4, 2009 ### Hotsuma To find Angular velocity I multiply angular acceleration by time. Thanks for the help. 6. Mar 4, 2009 ### Hotsuma I forget how to mark this as solved, also, does anyone know how to use direct LaTeX typesetting here?
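For completeness, a short numeric check of the book's answers using the kinematics equations quoted in the thread, keeping the units in revolutions and minutes throughout:

```python
# Constant angular acceleration from rest: theta = 0.5 * alpha * t**2, omega = alpha * t.
# Work in revolutions and minutes to match the book's requested units.
theta = 20.0   # revolutions turned while accelerating
t = 1.0        # minutes

alpha = 2 * theta / t**2   # rev/min^2  -> 40
omega = alpha * t          # rev/min    -> 40

print(f"angular acceleration = {alpha:.0f} rev/min^2")
print(f"final angular velocity = {omega:.0f} rpm")
```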
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9628391861915588, "perplexity": 1578.71692232714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202476.48/warc/CC-MAIN-20190321010720-20190321032720-00138.warc.gz"}
https://aakashsrv1.meritnation.com/cbse-class-11-science/math/math/relations-and-functions/revision-notes/41_1_11_162_10886
Select Board & Class Relations and Functions • Cartesian product  of two sets: Two non-empty sets P and Q are given. The Cartesian product  is the set of all ordered pairs of elements from P and Q, i.e., P × Q = {(p, q) : p ∈ P and q ∈ Q} Example: If P = {x, y} and Q = {-1, 1, 0}, then  = {(x, -1), (x, 1), (x, 0), (y, -1), (y, 1), (y, 0)} If either P or Q is a null set, then P × Q will also be a null set, i.e., = . In general, if A is any set, then A × . • Property of Cartesian product  of two sets: • If n(A) = p, n(B) = q, then n(A × B) = pq. • If A and B are non-empty sets and either A or B is an infinite set, then so is the case with A × B • .A × A × A = {(a, b, c) : a, b,  c ∈ A}. Here, (a, b, c) is called an ordered triplet. • A × (B ∩ C) = (A × B) ∩ (A × C) • A × (B ∪ C) = (A × B) ∪ (A × C) • Two ordered pairs are equal if and only if the corresponding first elements are equal and … To view the complete topic, please What are you looking for? Syllabus
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9152796268463135, "perplexity": 872.6670163400945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00454.warc.gz"}
http://math.stackexchange.com/questions/294934/are-the-brackets-in-formal-box-notation-of-recursive-functions-omittable
# Are the brackets in formal box notation of recursive functions omittable? So we know all recursive functions can be expressed as a finite sequence of symbols for the basic functions and the processes of composition, primitive recursion, and minimization. What I'm wondering is whether it's important to include the brackets in this sequence of symbols. I haven't been able to produce an example where the recursive function is underdetermined without the brackets, but that certainly doesn't mean there isn't one. - This is really an issue with parsing. If you have three functions, $f$, $g$, and "$fg$", the latter having a name with two letters, there is no clear way to parse the expression "fg1" - should it be $fg(1)$ or $f(g(1))$?
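To make the parsing point concrete, here is a small, hypothetical illustration (the names and function definitions are invented for this example): when one symbol name can be a prefix of another, a bracket-free string admits more than one reading.

```python
# Hypothetical example: two one-letter "function symbols" f, g and a two-letter symbol "fg".
# Without brackets, the string "fg1" can be segmented in two inequivalent ways.
defs = {
    'f':  lambda x: x + 1,
    'g':  lambda x: 2 * x,
    'fg': lambda x: x ** 2,
}

reading1 = defs['fg'](1)            # read as the single symbol "fg" applied to 1 -> 1
reading2 = defs['f'](defs['g'](1))  # read as the composition f(g(1))           -> 3

print(reading1, reading2)  # different values from the same bracket-free string "fg1"
```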
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.954041600227356, "perplexity": 216.1050376588058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398460982.68/warc/CC-MAIN-20151124205420-00050-ip-10-71-132-137.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/345068/why-does-the-mass-of-a-proton-not-equal-the-mass-of-its-corresponding-quarks
# Why does the mass of a proton not equal the mass of its corresponding quarks? [duplicate] The mass of a proton is around 900 mega electron volts/c2. The mass of two up quarks and a down quark is about 10 mega electron volts/c2. • Taken from Wikipedia: "... The extra energy of the quarks and gluons in a region within a proton, as compared to the rest energy of the quarks alone in the QCD vacuum, accounts for almost 99% of the mass ....." – jim Jul 10 '17 at 23:06 • Possible duplicates: physics.stackexchange.com/q/207644/2451 and links therein. – Qmechanic Jul 11 '17 at 6:46 Note that the quarks are bound by the strong interaction. The binding energy of the three-quark configuration in the proton is gigantic because the strong force is incredibly strong at short distances. This binding energy makes up the majority of the rest energy (mass) of the proton. • It is not as simple as that. The three valence quarks can only be distinguished in a probabilistic manner, see profmattstrassler.com/articles-and-posts/largehadroncolliderfaq/… – anna v Jul 11 '17 at 4:42 • @annav Right, but aren't the sea quarks and gluons included in any reasonable definition of the binding energy of the strong force? – probably_someone Jul 11 '17 at 4:44 • well, it is not like the hydrogen atom, as he shows in the beginning of the article. The quarks are interchangeable. – anna v Jul 11 '17 at 4:46 • @annav Yeah, but I don't think that conceptually alters my answer. It's still the strong interaction, which is made possible by the presence of the valence quarks, that's contributing most of the binding energy. – probably_someone Jul 11 '17 at 4:47 The proton has three valence quarks, it is true, called current quarks, whose masses are very small. But the proton is in the quantum mechanical regime: it is a bound bag of these three valence quarks, and its constituents are not just those, but also include an enormous number of quark-antiquark pairs and gluons playing ball with each other in the proton's volume.
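As a rough back-of-the-envelope check of the numbers in the question (the quark and proton masses below are approximate, illustrative values), the valence-quark rest masses account for only about one percent of the proton mass:

```python
# Rough illustration: fraction of the proton's mass accounted for by the
# rest masses of its three valence quarks (approximate values in MeV/c^2).
m_up = 2.2        # up quark, approx.
m_down = 4.7      # down quark, approx.
m_proton = 938.3  # proton

m_valence = 2 * m_up + m_down          # u + u + d
fraction = m_valence / m_proton

print(f"valence quark rest mass: {m_valence:.1f} MeV/c^2")
print(f"fraction of proton mass: {fraction:.1%}")   # roughly 1%; the rest is QCD field/interaction energy
```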
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9029868841171265, "perplexity": 387.19078574076445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738964.20/warc/CC-MAIN-20200813073451-20200813103451-00258.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/int2/chapter/6/lesson/6.1.3/problem/6-33
### Home > INT2 > Chapter 6 > Lesson 6.1.3 > Problem6-33 6-33. Use the relationships found in each of the diagrams below to solve for $x$ and $y$. Assume the diagrams are not drawn to scale. State which geometric relationships you used. 1. What is the measure of the missing angle in the left triangle? Once you know that angle, calculate the measure of angle $y$. $y = 111^\circ$; $x = 53^\circ$ Triangle Angle Sum Theorem and ____ 1. Parallel lines create alternate interior angles. Which angles are alternate interior angles in this diagram? 1. Redraw the diagram with only the parallel lines and the line at the top. Then determine the value of $y$. 1. This diagram is not drawn to scale. Determine the measure of the missing angle then draw the diagram to scale. What special triangle is this?
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8932272791862488, "perplexity": 538.7780140876181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00259.warc.gz"}
https://unapologetic.wordpress.com/2012/08/09/orthogonal-and-symplectic-lie-algebras/?like=1&_wpnonce=67418a1937
The Unapologetic Mathematician

Orthogonal and Symplectic Lie Algebras

For the next three families of linear Lie algebras we equip our vector space $V$ with a bilinear form $B$. We're going to consider the endomorphisms $f\in\mathfrak{gl}(V)$ such that

$\displaystyle B(f(x),y)=-B(x,f(y))$

If we pick a basis $\{e_i\}$ of $V$, then we have a matrix for the bilinear form

$\displaystyle B_{ij}=B(e_i,e_j)$

and one for the endomorphism

$\displaystyle f(e_i)=\sum\limits_jf_i^je_j$

So the condition in terms of matrices in $\mathfrak{gl}(n,\mathbb{F})$ comes down to

$\displaystyle\sum\limits_kB_{kj}f_i^k=-\sum_kf_j^kB_{ik}$

or, more abstractly, $Bf=-f^TB$. So do these form a subalgebra of $\mathfrak{gl}(V)$? Linearity is easy; we must check that this condition is closed under the bracket. That is, if $f$ and $g$ both satisfy this condition, what about their commutator $[f,g]$?

\displaystyle\begin{aligned}B([f,g](x),y)&=B(f(g(x))-g(f(x)),y)\\&=B(f(g(x)),y)-B(g(f(x)),y)\\&=-B(g(x),f(y))+B(f(x),g(y))\\&=B(x,g(f(y)))-B(x,f(g(y)))\\&=-B(x,f(g(y))-g(f(y)))\\&=-B(x,[f,g](y))\end{aligned}

So this condition will always give us a linear Lie algebra. We have three different families of these algebras. First, we consider the case where $\mathrm{dim}(V)=2l+1$ is odd, and we let $B$ be the symmetric, nondegenerate bilinear form with matrix

$\displaystyle\begin{pmatrix}1&0&0\\ 0&0&I_l\\ 0&I_l&0\end{pmatrix}$

where $I_l$ is the $l\times l$ identity matrix. If we write the matrix of our endomorphism in a similar form

$\displaystyle\begin{pmatrix}a&b_1&b_2\\c_1&m&n\\c_2&p&q\end{pmatrix}$

our matrix conditions turn into

\displaystyle\begin{aligned}a&=0\\c_1&=-b_2^T\\c_2&=-b_1^T\\q&=-m^T\\n&=-n^T\\p&=-p^T\end{aligned}

From here it's straightforward to count out $2l$ basis elements that satisfy the conditions on the first row and column, $\frac{1}{2}(l^2-l)$ that satisfy the antisymmetry for $p$, another $\frac{1}{2}(l^2-l)$ that satisfy the antisymmetry for $n$, and $l^2$ that satisfy the condition between $m$ and $q$, for a total of $2l^2+l$ basis elements. We call this Lie algebra the orthogonal algebra of $V$, and write $\mathfrak{o}(V)$ or $\mathfrak{o}(2l+1,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $B_l$.

Next up, in the case where $\mathrm{dim}(V)=2l$ is even we let the matrix of $B$ look like

$\displaystyle\begin{pmatrix}0&I_l\\I_l&0\end{pmatrix}$

A similar approach to that above gives a basis with $2l^2-l$ elements. We also call this the orthogonal algebra of $V$, and write $\mathfrak{o}(V)$ or $\mathfrak{o}(2l,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $D_l$.

Finally, we again take an even-dimensional $V$, but this time we use the skew-symmetric form

$\displaystyle\begin{pmatrix}0&I_l\\-I_l&0\end{pmatrix}$

This time we get a basis with $2l^2+l$ elements. We call this the symplectic algebra of $V$, and write $\mathfrak{sp}(V)$ or $\mathfrak{sp}(2l,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $C_l$.

Along with the special linear Lie algebras, these form the "classical" Lie algebras. It's a tedious but straightforward exercise to check that for any classical Lie algebra $L$, each basis element $e$ of $L$ can be written as a bracket of two other elements of $L$. That is, we have $[L,L]=L$. Since $L\subseteq\mathfrak{gl}(V)$ for some $V$, and since we know that $[\mathfrak{gl}(V),\mathfrak{gl}(V)]=\mathfrak{sl}(V)$, this establishes that $L\subseteq\mathfrak{sl}(V)$ for all classical $L$.
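As a quick sanity check of the closure computation above (an illustrative numerical sketch, not part of the original post), one can verify with random matrices that the condition $Bf=-f^TB$ is preserved by the commutator, and that the parameter count for the symplectic form matches $2l^2+l$:

```python
# Numerical illustration: matrices f with B f = -f^T B are closed under the
# commutator [f, g] = fg - gf, checked here for the skew-symmetric (symplectic)
# form B = [[0, I], [-I, 0]].
import numpy as np

rng = np.random.default_rng(1)
l = 3
I = np.eye(l)
Z = np.zeros((l, l))
B = np.block([[Z, I], [-I, Z]])   # skew-symmetric, nondegenerate

def random_element(B):
    # For skew-symmetric B and symmetric S, f = B^{-1} S satisfies
    # B f = S and -f^T B = -S^T (B^T)^{-1} B = S, so B f = -f^T B by construction.
    S = rng.normal(size=B.shape)
    S = (S + S.T) / 2
    return np.linalg.solve(B, S)

def in_algebra(f, B, tol=1e-9):
    return np.allclose(B @ f, -f.T @ B, atol=tol)

f, g = random_element(B), random_element(B)
commutator = f @ g - g @ f
print(in_algebra(f, B), in_algebra(g, B), in_algebra(commutator, B))  # True True True

# Free parameters of a symmetric l(2l+1) parametrization match the basis count 2l^2 + l.
print(l * (2 * l + 1))
```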
August 9, 2012 - Posted by | Algebra, Lie Algebras 1 Comment » 1. Thank you for sharing. I calculated 2 l square + l basis elements for symplectic algebra. Comment by thomas leow | January 16, 2015 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 61, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9854466319084167, "perplexity": 145.90723823639442}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717954.1/warc/CC-MAIN-20161020183837-00219-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.adrian.idv.hk/2019-03-06-hypocycloid/
Hypercycloid, hypocycloid, and more general version, the hypertrochoid and hypotrochoid, are curves of the locus of a point on a circle rolling on a bigger circle. Like many other locus problems, it is convenient to tackle it from parametric equations. We go with the hypercycloid (aka epicycloid) first. Consider the image below from Wikipedia, we have a bigger circle of radius $$R$$ with centre fixed at the origin. The smaller, rolling circle of radius $$r$$ is rolling on the outside of the circle such that there is always a single point of intersection between the two circles. The locus of interest is drawn by point $$P$$ on the smaller circle while it rolls. Observe that when the smaller circle is rolling, its centre always follows a circle of radius $$R+r$$ centred at the origin. Assume at the moment that the smaller circle rolled to the position such that its centre is at angle $$\theta$$ as illustrated, the length of arc it has rolled is $$R\theta$$. This is the same length measured on the big or small circle. Assume point $$P$$ is the point of contact of the two circle when $$\theta=0$$. At the moment of an unspecified $$\theta$$, the point $$P$$ is at the angle of $$\alpha = R\theta/r$$ relative to the point of contact of the two circle at the moment, or at the angle of $$\alpha+\theta$$ relative to the $$x$$ axis (such angle is measured at the third quadrant). Given that we have the coordinate of the centre of the smaller circle to be \begin{align} x &= (R+r)\cos\theta \\ y &= (R+r)\sin\theta \end{align} and the coordinate of the point $$P$$ to be \begin{align} x &= (R+r)\cos\theta - r\cos(\frac{R+r}{r}\theta) \\ y &= (R+r)\sin\theta - r\sin(\frac{R+r}{r}\theta) \end{align} and more generally, if point $$P$$ is on a circle of radius $$\rho$$ eccentric to the smaller circle, then the parametric formula of the locus of the hypertrochoid (aka epitrochoid) is: \begin{align} x &= (R+r)\cos\theta - \rho\cos(\frac{R+r}{r}\theta) \\ y &= (R+r)\sin\theta - \rho\sin(\frac{R+r}{r}\theta) \end{align} The derivation is similar if the smaller circle is rolled on the inside of the bigger circle. Except that the angle of point $$P$$ relative to the $$x$$ axis when the centre of the smaller circle is at angle $$\theta$$ is $$\alpha-\theta$$ (measured at the first quadrant), as now the point is on the clockwise side rather than counterclockwise side when the smaller circle rolled. So similarly, the parametric equation of hypocycloid is: \begin{align} x &= (R-r)\cos\theta + r\cos(\frac{R-r}{r}\theta) \\ y &= (R-r)\sin\theta - r\sin(\frac{R-r}{r}\theta) \end{align} and the more general version, hypotrochoid, is: \begin{align} x &= (R-r)\cos\theta + \rho\cos(\frac{R-r}{r}\theta + \phi) \\ y &= (R-r)\sin\theta - \rho\sin(\frac{R-r}{r}\theta + \phi) \end{align} In above, we added an angle $$\phi$$ to $$\alpha$$ such that we allow a version rotated about the origin. The shape, however, should be just the same. Now some code. I like the animated GIF on wikipedia page that shows how the locus is created as the parameter $$\theta$$ goes from 0 up to some big angle. Generating such animation is indeed not hard, as we already derived the coordinates and metrics of everything need to show. I will use Python, for its Pillow library is handy to create such pictures. And in addition to GIF, I can also generate animated image in Google’s WebP format. 
Here is the code (python 3.6+ required due to type hint syntax). This is the command to generate a hypercycloid:

python3 hypchoid.py -q 180 hyper.webp

and this is for a hypochoid:

python3 hypchoid.py -p 50 -o hypo.webp
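The embedded script itself is not reproduced in this extract. As a stand-in, here is a minimal sketch (the function name, parameter values, and the use of matplotlib are my own illustrative choices, not the author's) that plots a hypotrochoid directly from the parametric equations derived above:

```python
# Minimal sketch: plot a hypotrochoid from the parametric equations derived above.
# R: fixed circle radius, r: rolling circle radius, rho: distance of the traced
# point P from the rolling circle's centre, phi: phase offset. Values are illustrative.
import numpy as np
import matplotlib.pyplot as plt

def hypotrochoid(R, r, rho, phi=0.0, turns=10, points=5000):
    theta = np.linspace(0.0, 2 * np.pi * turns, points)
    x = (R - r) * np.cos(theta) + rho * np.cos((R - r) / r * theta + phi)
    y = (R - r) * np.sin(theta) - rho * np.sin((R - r) / r * theta + phi)
    return x, y

x, y = hypotrochoid(R=5.0, r=3.0, rho=5.0)   # a classic spirograph-like curve
plt.plot(x, y)
plt.gca().set_aspect('equal')
plt.show()
```

The animated GIF/WebP output described in the post can be built on top of the same parametric equations by drawing the locus frame by frame as the parameter grows.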
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985077381134033, "perplexity": 453.85911572332634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154500.32/warc/CC-MAIN-20210804013942-20210804043942-00063.warc.gz"}
http://zarco-macross.wikidot.com/mach
Mach

In fluid mechanics, the Mach number (\mathrm{Ma} or M) /ˈmɑːx/ is a dimensionless quantity representing the ratio of the speed of an object moving through a fluid to the local speed of sound.[1][2]

M = \frac{v}{v_{sound}}

where M is the Mach number, v is the velocity of the source relative to the medium, and v_{sound} is the speed of sound in the medium. The Mach number varies with the composition of the surrounding medium and also with local conditions, especially temperature and pressure. The Mach number can be used to determine if a flow can be treated as an incompressible flow. If M < 0.2–0.3 and the flow is (quasi) steady and isothermal, compressibility effects will be small and a simplified incompressible flow model can be used.[1][2] The Mach number is named after Austrian physicist and philosopher Ernst Mach, a designation proposed by aeronautical engineer Jakob Ackeret. Because the Mach number is often viewed as a dimensionless quantity rather than a unit of measure, with Mach, the number comes after the unit; the second Mach number is "Mach 2" instead of "2 Mach" (or Machs). This is somewhat reminiscent of the early modern ocean sounding unit "mark" (a synonym for fathom), which was also unit-first, and may have influenced the use of the term Mach. In the decade preceding faster-than-sound human flight, aeronautical engineers referred to the speed of sound as Mach's number, never "Mach 1."[3] In French, the Mach number is sometimes called the "nombre de Sarrau" ("Sarrau number") after Émile Sarrau, who researched explosions in the 1870s and 1880s.[4]
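As a small worked example of the definition M = v / v_sound (the numbers are illustrative, and the speed of sound is estimated with the ideal-gas relation for dry air, which is an assumption of this sketch):

```python
# Mach number M = v / v_sound, with the local speed of sound estimated for dry air
# as an ideal gas: v_sound = sqrt(gamma * R_specific * T). Values are illustrative.
import math

def speed_of_sound(T_kelvin, gamma=1.4, R_specific=287.05):
    return math.sqrt(gamma * R_specific * T_kelvin)

v = 680.0                       # object speed, m/s
T = 216.65                      # ambient temperature at high altitude, K
v_sound = speed_of_sound(T)     # ~295 m/s

print(f"speed of sound ~ {v_sound:.0f} m/s")
print(f"Mach number    ~ {v / v_sound:.2f}")   # ~2.3, i.e. "Mach 2.3"
```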
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8816057443618774, "perplexity": 1681.2522162352234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945584.75/warc/CC-MAIN-20180422100104-20180422120104-00613.warc.gz"}
http://math.stackexchange.com/questions/148455/limit-of-the-sequence-f-nt-frac1t-n-i-n-of-smooth-functions?answertab=votes
# Limit of the sequence $f_n(t) = \frac{1}{t + n + i/n}$ of smooth functions Let $\mathcal{E}$ be the space $C ^{\infty}(\mathbb R)$ with the system of seminorms: $$p_{N,n}(f) := \max{\lbrace |f^{(k)}(t)| : k = 0, 1, \dots , n; t \in [-N, N] \rbrace},\quad n = 0, 1, 2, \dots; N = 1, 2, \dots.$$ So, I have to find the limit of $f_n(t) = \dfrac{1}{t + n + i/n}$ in the space $\mathcal{E}$. I understand, that it is 0, but I don't know, how to prove that it exists. Thank you! - What is an $\varepsilon$-linear space? And your functions $f_n$ are not elements of $C^\infty(\mathbb{R})$? – Vobo May 22 '12 at 23:11 @Vobo it seems a safe bet that $\varepsilon$ is a replacement for $\mathcal{E}$ and that $\mathbb{R}$ denotes the domain, not the range. .@Toby: if my edit does not reflect your question, please add some clarifications. – t.b. May 23 '12 at 9:08 @t.b.: Oh sure, now I would not hold against you. – Vobo May 23 '12 at 10:14 In a locally convex space $X$ with seminorms $\{p_k | k \in I\}$ where $I$ denotes an index set, a sequence (or even more general a net) $(x_n)_n$ of elements of $X$ converges to some $x \in X$, iff for each $k \in I$ the real sequence (net) $(p_k(x_n - x))_n$ converges to 0. Applying this to your situation, fix an $N$, take some $k$, and check $|f^{(j)}_n|$ for $n\to\infty$: For $n>2N+1$ and $t\in [-N,N]$, $|f^{(j)}_n(t)| <= (j!+1)/n$, hence $p_{N,k}(f_n) \to 0$ as $n\to\infty$.
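A quick numerical illustration of this convergence (not part of the original thread): using the closed form of the derivatives, f_n^{(j)}(t) = (-1)^j j!/(t + n + i/n)^{j+1}, one can evaluate the seminorms p_{N,k}(f_n) on [-N, N] and watch them decrease as n grows.

```python
# Numerical check that p_{N,k}(f_n) -> 0 for f_n(t) = 1/(t + n + i/n),
# using f_n^{(j)}(t) = (-1)^j * j! / (t + n + i/n)^(j+1).
import numpy as np
from math import factorial

def seminorm(n, N=2, k=3, samples=2001):
    t = np.linspace(-N, N, samples)
    z = t + n + 1j / n
    derivative_maxima = [np.abs(factorial(j) / z**(j + 1)).max() for j in range(k + 1)]
    return max(derivative_maxima)

for n in (1, 10, 100, 1000):
    print(n, seminorm(n))   # decreases roughly like 1/n, consistent with the bound above
```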
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9683601260185242, "perplexity": 206.86257877744762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738095178.99/warc/CC-MAIN-20151001222135-00168-ip-10-137-6-227.ec2.internal.warc.gz"}
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;8e02b3f6.0102&FT=P&P=H&H=N&S=b
## [email protected]

Re: Reading from the system (pipe input)
Joseph Wright <[log in to unmask]>
Mon, 26 May 2014 11:02:59 +0100

On 23/05/2014 23:22, David Carlisle wrote:
> On 22/05/2014 21:40, Joseph Wright wrote:
>> On 22/05/2014 20:42, David Carlisle wrote:
>>> On 22/05/2014 17:09, Joseph Wright wrote:
>>>> Hello all,
>>>>
>>>> Currently, we have \ior_open:Nn for reading from a file, but no defined
>>>> interface for using the 'pipe' shell escape provided by pdfTeX. As we
>>>> forbid spaces in file names,
>>> why do we do that? Spaces in filenames always seem like an abomination
>>> to me. But that seems to be a relic of the 1970s. I've noticed that people
>>> who started using computers this century don't seem to think anything of
>>> using a descriptive phrase as a filename...
>> The logic here was that, at least as I understand it, there are some
>> places where you can't just 'wrap up spaces' and have them work. In
>> particular, there are some comments about dvips and included graphics
>> which suggest that space behaviour is hard-coded in that case and can't
>> be changed. (MiKTeX also does some 'interesting' things with BibTeX
>> input names containing spaces.) I may have this wrong: can other more
>> knowledgeable people comment?
>>
>> This can of course be changed if required.
>
> issues with existing implementations aside....
>>> Shouldn't we just allow spaces (and a leading | or any other system
>>> dependent special syntax), just surrounding any user-supplied name by " "
>>> to keep it together?
>> Space behaviour notwithstanding, I think the point here is that the pipe
>> input approach is sufficiently different from a 'real' file to deserve a
>> separate interface.
>
> My instinct here is still that the pipe interface being exposed as a
> file is a feature that should be preserved rather than hidden.
>
> At the bottom programming layer it probably seems natural to have
> separate functions for accessing files or for invoking commands, but if
> you separate them you need to duplicate the entire stack all the way
> up to the document level: all file reading commands would need to be
> duplicated or have some way of accessing the pipe functionality.
>
> In 2e you can do
>
> \documentclass{article}
>
> \begin{document}
>
> \input{"|zcat foo.tex.gz"}
>
> \end{document}
>
> to input a compressed file (and zcat could be replaced by a database
> query, or wget to do a web request, or whatever).
> You can do the same with \include and \InputIfFileExists and...
>
> The easiest way to ensure that all top-level file reading can access
> pipes (if enabled) as well as files is to not distinguish
> them and just treat them as files. The syntax for a filename is
> explicitly system dependent in any case and this is not really
> so different (which is how it came to be hooked into the \input
> syntax in the first place).
>
> David

OK, this seems like a reasonable argument. I'll update the behaviour w.r.t. spaces.
--
Joseph Wright
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8719942569732666, "perplexity": 4215.372401565363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711218.21/warc/CC-MAIN-20221207185519-20221207215519-00399.warc.gz"}
http://physics.stackexchange.com/questions/17734/interaction-of-matter-with-em-fields?answertab=votes
# Interaction of matter with EM fields

For the interaction between electromagnetic fields and matter,

1. when do we have to include quantization of the EM field, and when can we ignore it?
2. when do we have to include quantization of atomic energy levels, and when can we ignore it?

Update: I am aware that part of the answer might depend on the accuracy we are looking for. Part of the problem here is that I do not know how to estimate such things, or what quantity would quantify the accuracy we are looking for and tell us whether we can ignore quantization of the energy levels or of the fields.

- Who would like to write another textbook on spectroscopy? – Georg Dec 2 '11 at 11:10
- @Georg Could you please point out a reference? – Revo Dec 2 '11 at 17:22
- @Georg Since you are obviously a knowledgeable person, yes, I was asking you to recommend a textbook. Googling a keyword will not give me the best book that addresses my question. But you know what, on second thought I do not want anything from someone who communicates with others in such an arrogant tone! – Revo Dec 2 '11 at 19:51
- I am 65 years old now, my textbooks are totally outdated. – Georg Dec 2 '11 at 20:15
- A good (but expensive) reference is Demtroeder. – Antillar Maximus Dec 2 '11 at 21:32

I'm not sure there is a generic answer to your questions other than the trivial "don't bother including the quantization when the accuracy of your result isn't compromised by making this approximation". I know that doesn't really help much, because you may not be able to verify this until you've done the calculation including the quantization anyway. You may have no choice in the matter: modelling everything at a microscopic level may just be intractable.

Sometimes the answers as to when you have to do the full quantum calculation are surprising. For example, it is a common belief that explaining the photoelectric effect requires you to treat the electromagnetic field quantum mechanically, i.e. you need photons. However, computations in Mandel and Wolf reveal that the observed experimental outcome can be obtained by treating the radiation purely classically (but the atomic electrons quantum mechanically). (Of course light does have a quantum nature, as revealed by photon antibunching.)

- the same is true for the Compton effect, btw - a semi-classical explanation was given by Schrödinger in 1927 – Christoph Dec 2 '11 at 12:23
- @Christoph: Thanks, I didn't know that about the Compton effect... – twistor59 Dec 2 '11 at 12:36
- The photoelectric effect can also be described without photons, according to the seminal paper by Lamb and Scully. ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/… – Revo Dec 2 '11 at 19:56
- @twistor59: This answer is only technically correct. Mandel and Wolf are saying that if you send a coherent wave with enormous occupation numbers, so that it can be treated as a classical wave, you get excitation of the atoms by (semiclassical) single-photon absorption only when the frequency is right to excite the atoms. But if you explain the photoelectric effect semiclassically, you can't account for the fact that each absorbed photon is removed from the EM wave. The reradiated wave from the atom will cancel part of the semiclassical wave and give statistical absorption of photons... – Ron Maimon Dec 2 '11 at 20:15
- But the reradiated wave goes outward at the speed of light, and cannot reduce the classical wave amplitude instantaneously; it doesn't collapse the photon wavefunction.
This means that if you have only a few photons, they can be simultaneously absorbed at far-away places, and more absorption events can happen semiclassically than there are photons to produce the events. This leads to violations of energy conservation. This was Bohr's motivation for BKS, and Kramers and Slater joined him in abandoning energy conservation. This is not correct, however; energy is conserved. – Ron Maimon Dec 2 '11 at 20:16

Generally speaking, the answer to both questions is linked to some number becoming increasingly large, so that for atoms you have a large density of higher excited states (think of Rydberg atoms as an example), or for the electromagnetic field one has such a large number of photons that a coherent state is a good description of it and an average field can be safely taken. Then quantum fluctuations are negligibly small as these numbers increase. In order to make the argument quantitative, let me consider a standard Hamiltonian for radiation-matter interaction for hydrogen-like atoms, in the non-relativistic limit,
$$H=H_a+H_f+H_i=H_f-\frac{\hbar^2}{2m}\Delta_2-\frac{Ze^2}{r}-{\bf d}\cdot{\bf E}$$
where I have used an equivalent form for the interaction. Now, we can always rewrite this through a complete set of atomic states, and this will give (the continuous part of the spectrum is implicit in the summation)
$$H=H_f+\sum_nE_n|n\rangle\langle n|+\sum_{m,n}|m\rangle\langle n|{\bf d}_{mn}\cdot{\bf E}$$
but we can do the same also for the field. Assuming it is monochromatic and using coherent states $|\alpha\rangle$, which we know are overcomplete, with $\langle\alpha|\beta\rangle=e^{\alpha^*\beta-\frac{1}{2}|\alpha|^2-\frac{1}{2}|\beta|^2}$, we can use the resolution of identity (e.g. see here)
$$I=\int\frac{d^2\alpha}{\pi}|\alpha\rangle\langle\alpha|$$
that will produce
$$H=H_f+\sum_nE_n|n\rangle\langle n|+\sum_{m,n}|m\rangle\langle n|{\bf d}_{mn}\cdot\int \frac{d^2\alpha}{\pi}\frac{d^2\beta}{\pi}\langle\alpha|{\bf E}|\beta\rangle|\alpha\rangle\langle\beta|.$$
Now, we are a step away from our conclusion. Indeed, we should note that $|\alpha|^2=N$, with $N$ the number of photons. So, the interaction part of the Hamiltonian can be promptly evaluated as
$$\langle\alpha|{\bf E}|\beta\rangle=\tilde{\bf E}(\alpha,\beta)e^{-\frac{1}{2}|\alpha-\beta|^2}$$
and, for very large $N$, the integral will have a dominant contribution and the coherent states can be assumed practically orthogonal. This justifies the use of a classical approximation through the averaged field.

This argument can be repeated for the atomic states, if we introduce the operators (see here) $\sigma_{nm}=|n\rangle\langle m|$, $\sigma_{nm}^\dagger=|m\rangle\langle n|$ and $\sigma_{nm}^3=(1/2)(|n\rangle\langle n|-|m\rangle\langle m|)$, forming an su(2) algebra. We can now use atomic coherent states and arrive at the same conclusion as above, provided the atomic excitation is large enough. This is the rationale behind this kind of approximation normally used in quantum optics.

In general, a semi-classical approach is easier to handle mathematically and is taught before attacking the full quantum description. I don't know if there are any hard and fast rules on which approach to use, but having experimental data would be the best way to check. I don't know about other areas, but in nonlinear optics, when the light fields are intense, a semi-classical approach works just as well as a fully quantum approach. (Semi-classical: the atomic part is quantized while the optical part is a classical wave.)
In many cases, it is relatively straightforward to go from the semi-classical picture to the quantum picture (e.g. replace a classical EM field with a coherent state). There are many cases where a semi-classical approach fails, and then you have no choice but to recast the problem using a fully quantum picture. To summarize, it is all about convenience and what you are after. A fully quantum picture is definitely richer, but may not be necessary.

This question has been a personal crusade of mine for the last 20 years. Around that time I figured out that the usual explanation for the photo-electric effect was based on basic misconceptions about the way waves transfer energy. To put things in a nutshell, if the usual power density argument were taken literally, then not only the photo-electric effect but such commonplace devices as the crystal radio should be "classically impossible". It was almost ten years later that I figured out the explanation for the Compton effect based on standing waves, and I was hugely disappointed when I learned that Schroedinger had already published my explanation in 1927. Since then I have put together two more pieces of the puzzle. The black body spectrum was a tough one until I figured out how to calculate the equilibrium between a mechanical oscillator and the radiation field. Then I was able to show that the classical radiation field had to follow the natural equilibrium of a system of mechanical oscillators, and not vice versa. If the equipartition theorem fails at the mechanical level (which it does), then you don't need to quantize the electromagnetic field to get the Planck distribution: it follows automatically from the mechanical equilibrium. I explain this in a series of blogposts beginning with this one in my blog "Why I Hate Physics". The other big problem I solved was how to explain the very baffling question of the flecks of silver appearing on a photographic plate when exposed to the light of a distant star. According to classical theory the energy of light can be made arbitrarily diffuse. How then can it accumulate with sufficient intensity to provide the significant amount of energy needed to drive the chemical reduction of Ag+ to metallic silver in the silver bromide crystal? I was able to explain this by showing that the energy for the transition is already available in the detector system, namely the silver bromide crystal. It is not enough to look at the enthalpy of the transition; you must really consider the Gibbs free energy, which includes a term for the entropy. Treating the crystal as a solid solution of metallic silver and silver bromide, it is easily shown that at the very low concentrations present in a photographic film, the chemical transition is very nearly spontaneous. So no bunching of energy in the form of photons is required. I also demonstrate a mechanism whereby the energy of the crystal is concentrated at the point of detection. I call this phenomenon Quantum Siphoning, and it is explained in the linked article.

- So you think you can explain silver film, how about all of the other photon-counting detectors we have today? I didn't bother to read your article btw. – user2963 Dec 2 '11 at 14:21
- The silver film is the toughest because, unlike all your other photon-counting detectors, you don't plug it into the wall. So there's no obvious source for the energy other than the incoming light. – Marty Green Dec 2 '11 at 14:27
- The silver film is not more difficult than any other photoreceptor!
You always have the (by now age-old) problem that you have to apply the photon nature of light in this case, or the wave nature for, e.g., diffraction of light, for easy understanding. The problem is that "tertium non datur". This is the core problem of QM and of human thought. The problems Marty invents in the silver halide process originate from his not knowing the nature of the latent image. He thinks in overly simple chemical terms. The latent image is hard-core crystal physics. – Georg Dec 2 '11 at 14:52
- How do you explain the Lamb shift, then? – Jerry Schirmer Dec 2 '11 at 16:51
- Instead of focusing on this answer, consider the case of two entangled photons with opposite polarization. If you see one with one polarization, the other has the opposite polarization. How do you describe this classically? The EM wave has one polarization only; it cannot have a polarization that is correlated with another far away. Worse, the correlations in the polarizations are observed by Aspect et al. and they violate the Bell inequality. This conclusively demonstrates that you need quantum states of superposition for the electromagnetic field, ignoring the photoelectric effect or atomic stuff. – Ron Maimon Dec 3 '11 at 6:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8415045142173767, "perplexity": 471.9601701765881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900080.19/warc/CC-MAIN-20141030025820-00094-ip-10-16-133-185.ec2.internal.warc.gz"}
https://earthscience.stackexchange.com/questions/14893/why-is-the-upper-atmosphere-brighter-than-the-lower-atmosphere-in-some-photos-fr
# Why is the upper atmosphere brighter than the lower atmosphere in some photos from space?

In most photos from space, the atmosphere gradually fades from fairly bright blue to essentially black as you move away from the Earth's surface, as can be seen in this photo of the Earth from the ISS (also showing some noctilucent clouds): NASA photo of Earth atmosphere. Source: NASA. Or by night (again with noctilucent clouds): Earth atmosphere by night. Source: NASA. Yet in photos with an exposure long enough for stars and airglow to be clearly visible, the opposite appears to be the case: NASA photo of Earth atmosphere and the Milky Way. Source: NASA.

In the third photo, the Earth is rather dark. Moving away from the Earth, the sky closest to the Earth is colourful but faint enough for stars to be visible through it. The atmosphere then appears to get brighter as you move further up, until it becomes abruptly black after a sharp edge. Higher up still we can see a faint but clearly visible band of red airglow. It's not aurora; aurora is higher up than airglow, and not so constant. Another example, but with different colours (which could be a side effect of exposure or editing): ESA starry night. Source: ESA via Wikimedia Commons.

The glow observed in those pictures in the upper layers of the atmosphere is the airglow from OH molecules. OH emission peaks between 75 and 105 km of elevation (Blamont and Reed, 1967), with intensities peaking at different wavelengths; the most prominent peaks in the visible spectrum are at 557 nm (green) and at multiple other wavelengths between 620 and 750 nm (red). In 1965 the OGO-II satellite was launched with the purpose of measuring airglow; the following image shows a typical altitude vs. intensity profile: From Blamont and Reed (1967). The peak around 100 km is mostly due to OH airglow and the broader peak around 250 km is mostly due to atomic oxygen.

Regarding the colour observed in the pictures, while it is difficult to trust them due to photo editing, they are consistent with OH airglow in the greens, yellows and reds. This is the airglow spectrum from Broadfoot and Kendall (1968), where I've highlighted in yellow the most important emission bands of OH in the visible range. To get a sense of the colour associated with each band, here is an approximate scale (remember 1 nm = 10 Å): From Wikipedia.

The fading yellow glow that extends below the brightest band can be explained as OH glow from the same altitude band (75–105 km) but originating from the part of the atmosphere that is closer to (or further from) the spacecraft from which the picture was taken, since that altitude band is a shell around the Earth and not a ring. For the same reason, the peak intensity in the picture is likely at a slightly lower apparent altitude than the altitude of the peak in emissions.

• You know every earth science subject amazingly, sir :-) Aug 18, 2018 at 2:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8021897673606873, "perplexity": 1672.5177480474279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00135.warc.gz"}
http://www.wikihow.com/Use-the-Laws-of-Sines-and-Cosines
# wikiHow to Use the Laws of Sines and Cosines

When you are missing side lengths or angle measurements of any triangle, you can use the law of sines or the law of cosines to help you find what you are looking for. The law of sines is $\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}$. The law of cosines is $c^2=a^2+b^2-2ab\cos C$. In each formula $a$, $b$, and $c$ are the side lengths of the triangle, and the angle opposite each side has a corresponding uppercase variable. Depending on what information you know about your triangle, you can use these two laws to solve for missing information.

### Method 1: Using the Law of Sines to Find a Missing Side Length

1. Assess what you know. To use the law of sines to find a missing side, you need to know at least two angles of the triangle and one side length.[1]
   • For example, you might have a triangle with two angles measuring 39 and 52 degrees, and you know that the side opposite the 39 degree angle is 4 cm long. You can use the law of sines to find both missing side lengths.
2. Identify and label sides and opposite angles. The convention is that side lengths are labeled $a$, $b$, and $c$, and the angle opposite each side is denoted by the capital letter of that side's variable: the angle opposite side $a$ is $A$, the angle opposite side $b$ is $B$, and the angle opposite side $c$ is $C$.[2]
   • For example, in your triangle: $a=4$ cm, $A=39$ degrees; $b=?$, $B=52$ degrees; $c=?$, $C=?$
3. Find the missing angle. The sum of all angles in a triangle is 180 degrees.[3] Thus, if you know two angles of a triangle, you can find the third angle by subtracting both angles from 180.
   • For example, since $A=39$ degrees and $B=52$ degrees, $C=180-39-52=89$ degrees.
4. Set up the formula for the law of sines, $\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}$. The formula shows that the ratio of one side of the triangle to the sine of the opposite angle is equal to the ratio of every other side to its opposite angle.[4]
5. Plug all the known values into the formula. Make sure you substitute side lengths for the lowercase variables and angles for the capital variables, and remember that opposite sides and angles should have the same letter.
   • For example, $\frac{4}{\sin 39}=\frac{b}{\sin 52}=\frac{c}{\sin 89}$.
6. Use a calculator to find the sines of the angles. You can also use a trigonometry table.[5] Substitute the sines into the denominators of the ratios.
   • For example, $\sin 39=0.6293$, $\sin 52=0.788$, and $\sin 89=0.9998$, so the ratios become $\frac{4}{0.6293}=\frac{b}{0.788}=\frac{c}{0.9998}$.
7. Simplify the complete ratio. You have one complete ratio, with both an angle and a side; to simplify it, divide the numerator by the denominator.
   • For example, $\frac{4}{0.6293}=6.3562$.
8. Set the incomplete ratios equal to the complete ratio. To solve for a missing variable, multiply the complete ratio by the denominator of either incomplete ratio.
   • For example, $6.3562=\frac{b}{0.788}$ gives $b=(6.3562)(0.788)=5.0087$, and $6.3562=\frac{c}{0.9998}$ gives $c=(6.3562)(0.9998)=6.3549$. Thus, side $b$ is about 5 cm long and side $c$ is about 6.35 cm long.

### Method 2: Using the Law of Sines to Find a Missing Angle

1. Assess what you know. To use the law of sines to find a missing angle, you need to know at least two side lengths and one angle.[6]
   • For example, you might have a triangle with one side that is 10 cm long, another side that is 8 cm long, and the angle opposite the 8 cm side is 50 degrees. You need to find the angle opposite the 10 cm side.
2. Identify and label sides and opposite angles, using the same convention as above.[7]
   • For example, in your triangle: $a=8$ cm, $A=50$ degrees; $b=10$ cm, $B=?$; $c=?$, $C=?$
   • Since you want to find the angle opposite the 10 cm side, you are looking for angle $B$.
3. Set up the formula for the law of sines, $\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}$.[8]
4. Plug all the known values into the formula. Take care to substitute the values correctly, so that the side lengths are in the numerators and their opposite angles are in the corresponding denominators.
   • For example, $\frac{8}{\sin 50}=\frac{10}{\sin B}=\frac{c}{\sin C}$.
5. Set up an equation to find the missing angle. To do this, set the complete ratio equal to the ratio with the angle you are solving for, then take the reciprocal of each ratio so that the side length is in the denominator and the sine of the angle is in the numerator.[9]
   • For example, since you know side $a$ and angle $A$ and are solving for angle $B$, set up $\frac{8}{\sin 50}=\frac{10}{\sin B}$; taking reciprocals gives $\frac{\sin 50}{8}=\frac{\sin B}{10}$.
6. Find the sine of the known angle. Use a calculator or trigonometry table, then plug the decimal into the equation.
   • For example, $\sin 50=0.766$, so the equation becomes $\frac{0.766}{8}=\frac{\sin B}{10}$.
7. Isolate the missing sine and simplify the equation. To do this, multiply each side of the equation by the unknown angle's denominator, then simplify the remaining ratio.
   • For example, $\left(\frac{0.766}{8}\right)(10)=\left(\frac{\sin B}{10}\right)(10)$, so $\sin B=\frac{7.66}{8}=0.9575$.
8. Find the inverse sine. The inverse sine is the $\sin^{-1}$ button on a calculator, and it gives you the measurement of the missing angle.[10]
   • For example, the inverse sine of 0.9575 is 73.2358, so angle $B$ is about 73.24 degrees.

### Method 3: Using the Law of Cosines to Find a Missing Side Length

1. Assess what you know. To find a missing side length using the law of cosines, you need to know the lengths of the other two sides of the triangle and the measurement of the angle between them.[11]
   • For example, you might have a triangle with sides that are 5 and 9 cm long, and the angle between them is 85 degrees. You need to find the length of the missing side.
2. Identify and label sides and opposite angles, using the same convention as above.[12]
   • For example, in your triangle: $a=5$ cm, $A=?$; $b=9$ cm, $B=?$; $c=?$, $C=85$ degrees.
   • Since you want to find the side opposite the 85 degree angle, you are looking for side $c$.
3. Set up the formula for the law of cosines, $c^2=a^2+b^2-2ab\cos C$. In this formula, $c$ is the missing side length.[13]
4. Plug all the known values into the formula. Make sure you substitute the correct values for the correct variables: the side you are trying to find should be $c$, and the angle you know should be $C$.
   • For example, $c^2=5^2+9^2-2(5)(9)\cos 85$.
5. Use a calculator to find the cosine of the angle. Plug this value into the equation and multiply.
   • For example, $\cos 85=0.0872$, so the equation becomes $c^2=5^2+9^2-2(5)(9)(0.0872)$; multiplying gives $c^2=5^2+9^2-7.844$.
6. Square the known side lengths. Remember that to square a number means to multiply the number by itself. Square the numbers, then add them together.
   • For example, $c^2=25+81-7.844=106-7.844$.
7. Find the difference. This gives you the value of $c^2$; then take the square root of both sides of the equation to find $c$.[14]
   • For example, $c^2=98.156$, so $c=\sqrt{98.156}=9.9074$. Thus, side $c$ is about 9.91 cm long.

### Method 4: Using the Law of Cosines to Find a Missing Angle

1. Assess what you know. To find a missing angle using the law of cosines, you need to know the lengths of all three sides of the triangle.[15]
   • For example, you might have a triangle with sides measuring 14, 17, and 20 cm. You need to find the angle opposite the 20 cm side.
2. Identify and label sides and opposite angles, using the same convention as above.[16]
   • For example, in your triangle: $a=14$ cm, $A=?$; $b=17$ cm, $B=?$; $c=20$ cm, $C=?$
   • Since you want to find the angle opposite the 20 cm side, you are looking for angle $C$.
3. Set up the formula for the law of cosines, $c^2=a^2+b^2-2ab\cos C$. In this formula, $C$ is the angle you are trying to find.[17]
4. Plug all the known values into the formula. Make sure you substitute the correct values for the correct variables: the angle you are trying to find should be $C$, which means $c$ should be the side opposite that angle.
   • For example, $20^2=14^2+17^2-2(14)(17)\cos C$.
5. Simplify the expression using the order of operations. First find the squares of the side lengths, then make the appropriate multiplications, then add.
   • For example, $400=196+289-(476)\cos C=485-(476)\cos C$.
6. Isolate the cosine. To do this, subtract the sum of the squares of sides $a$ and $b$ from each side of the equation, then divide each side by the cosine's coefficient.
   • For example, $400-485=-(476)\cos C$, so $-85=(-476)\cos C$ and $\cos C=\frac{-85}{-476}=0.1786$.
7. Find the inverse cosine. Use the $\cos^{-1}$ key on a calculator to do this; the inverse cosine gives you the measurement of the missing angle.[18]
   • For example, the inverse cosine of 0.1786 is 79.7134, so angle $C$ is about 79.71 degrees.

## Tips

• Remember the sine and cosine of 0, 30, 45, 60, and 90 degrees; it will help you solve problems faster. (Sine: 0 = 0, 30 = 0.5, 45 = √2/2, 60 = √3/2, 90 = 1. Cosine: 0 = 1, 30 = √3/2, 45 = √2/2, 60 = 0.5, 90 = 0.)
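The four methods above are straightforward to script. The following sketch (Python, standard `math` module only; the function names are mine) reproduces the worked examples:

```python
import math

def law_of_sines_side(a, A_deg, X_deg):
    """Side opposite angle X, given side a and its opposite angle A (law of sines)."""
    return a * math.sin(math.radians(X_deg)) / math.sin(math.radians(A_deg))

def law_of_cosines_side(a, b, C_deg):
    """Side c opposite angle C, given sides a, b and the included angle C."""
    return math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(math.radians(C_deg)))

def law_of_cosines_angle(a, b, c):
    """Angle C (in degrees) opposite side c, given all three sides."""
    return math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))

# Method 1 example: A = 39 deg, B = 52 deg, a = 4 cm  ->  b ~ 5.01, c ~ 6.35
print(law_of_sines_side(4, 39, 52), law_of_sines_side(4, 39, 180 - 39 - 52))
# Method 3 example: a = 5, b = 9, C = 85 deg  ->  c ~ 9.91
print(law_of_cosines_side(5, 9, 85))
# Method 4 example: sides 14, 17, 20  ->  C ~ 79.71 deg
print(law_of_cosines_angle(14, 17, 20))
```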
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 136, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9336297512054443, "perplexity": 262.4572661670631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119642.3/warc/CC-MAIN-20170423031159-00421-ip-10-145-167-34.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1751106/continuity-of-banach-limit-and-existence-of-lambda-0-in-ell-infty-c-0-su
# Continuity of Banach limit and existence of $\Lambda_0\in(\ell^\infty/c_0)^*$ such that $\Lambda=\Lambda_0\circ q_0$, with $q_0$ the quotient map

Let $\Lambda$ be any Banach limit on $\ell^\infty$, where $\ell^\infty$ denotes the space of bounded real sequences. A Banach limit is defined as a linear functional $\Lambda$ such that $$\Lambda(\tau x)=\Lambda(x),\quad \forall x\in\ell^\infty,$$ $$\liminf_{n\rightarrow\infty}x_n\leq\Lambda(x)\leq\limsup_{n\rightarrow\infty}x_n,$$ where we write $x=(x_n)_{n\in\mathbb{N}}$ for a sequence $x\in\ell^\infty$ and we define left translation on $\ell^\infty$ by $(\tau x)_n=x_{n+1}$, $n=1,2,\dots$.

I would like to show that $\Lambda\in(\ell^\infty)^*$, which means that $\Lambda$ is a continuous linear functional on $\ell^\infty$. Thus I need to show that $\Lambda$ is continuous. How do I do this?

Furthermore, I wish to show that there exists a continuous linear functional $\Lambda_0\in(\ell^\infty/c_0)^*$ such that $\Lambda=\Lambda_0\circ q_0$, where $$q_0:\ell^\infty\rightarrow\ell^\infty/c_0$$ is the quotient map and $$c_0=\{(x_n)\in\ell^\infty\mid \lim_{n\rightarrow\infty}x_n=0\}.$$

I can't seem to get anywhere with these questions. Any help is greatly appreciated.

• Oops, I forgot that, it has been added. Thank you. – Kevin Apr 20 '16 at 12:17

That $\Lambda$ is continuous follows directly from the second estimate: $$-\|x\|_\infty\leq \liminf_{n\to\infty} x_n\leq \Lambda(x)\leq \limsup_{n\to\infty} x_n\leq\|x\|_\infty.$$ Thus, $|\Lambda(x)|\leq \|x\|_\infty$ for all $x\in \ell^\infty$.

For your second claim, define $\Lambda_0(x+c_0):=\Lambda(x)$. This is well-defined since $x-y\in c_0$ implies $\Lambda(x-y)=0$ (both the $\liminf$ and $\limsup$ of a null sequence are $0$), i.e. $\Lambda(x)=\Lambda(y)$. Moreover, $\Lambda_0$ is continuous: for every $y\in c_0$ we have $|\Lambda_0(x+c_0)|=|\Lambda(x+y)|\leq\|x+y\|_\infty$, and taking the infimum over $y\in c_0$ gives $|\Lambda_0(x+c_0)|\leq\|x+c_0\|_{\ell^\infty/c_0}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9962465167045593, "perplexity": 44.87321583684754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573053.13/warc/CC-MAIN-20190917061226-20190917083226-00198.warc.gz"}
https://physics.stackexchange.com/questions/421659/examples-of-complex-valued-wave-functions/421681
# Examples of complex-valued wave functions

After learning some rudimentary quantum mechanics, I have found that the wavefunctions of harmonic oscillators and particles in a potential well are all real-valued. The ground state of the harmonic oscillator, for example, has a wavefunction similar to a normal distribution. I wonder whether there are interesting examples of complex-valued wavefunctions. Since QM's formulation needs a lot of complex numbers, I think some systems must have complex-valued wavefunctions. Or can we say that all systems can be described with real-valued wavefunctions?

EDIT: Really sorry for giving an unclear question. What I am looking for is a wavefunction that never becomes real-valued after evolving according to the time-dependent Schrödinger equation $i\hbar \frac{d|\phi\rangle}{dt}=H |\phi\rangle$.

• Duplicate? physics.stackexchange.com/q/77894 Note that they talk about eigenstates. You can trivially obtain a complex wavefunction by a linear combination of eigenstates. – jinawee Aug 8 '18 at 6:24

First and foremost, if you take a harmonic oscillator and find the time-dependent wavefunction, you necessarily get a complex phase factor, $\lvert n\rangle \to e^{-i\hbar^{-1}E_nt}\lvert n\rangle$. Secondly, having the wavefunctions that are solutions be real-valued is just a convenient choice; since the (time-independent) wave equation is real, this choice is always possible. You can multiply the wavefunction by a constant phase factor $e^{i\phi}$ and it changes nothing.

• For the case of the hydrogen atom, can we still always choose eigenstates to be real-valued functions? I remember that the eigenstates consist of spherical harmonics $Y_{l}^{m_{l}}(\theta, \phi) \sim e^{i m_{l} \phi}$, which is complex for $l > 0$. – K_inverse Aug 8 '18 at 10:05
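A quick way to convince yourself of the point made in the answer is to evolve a superposition numerically and check that no single global phase makes it real. The sketch below assumes a particle in a unit box with $\hbar=2m=1$ (so $E_n=(n\pi)^2$); the variable names are mine:

```python
import numpy as np

# Particle in a unit box: E_n = (n*pi)^2, phi_n(x) = sqrt(2) sin(n pi x).
x = np.linspace(0.0, 1.0, 201)
phi = lambda n: np.sqrt(2.0) * np.sin(n * np.pi * x)
E = lambda n: (n * np.pi) ** 2

t = 0.01
psi = (phi(1) * np.exp(-1j * E(1) * t) + phi(2) * np.exp(-1j * E(2) * t)) / np.sqrt(2.0)

# Strip an overall (physically irrelevant) phase and see whether the rest is real.
psi_rephased = psi * np.exp(-1j * np.angle(psi[100]))
print(np.max(np.abs(psi_rephased.imag)))   # clearly nonzero: no global phase makes psi real
```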
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8755819797515869, "perplexity": 225.62816406813374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668544.32/warc/CC-MAIN-20191114232502-20191115020502-00135.warc.gz"}
https://www.physicsforums.com/threads/specific-heat-at-constant-volume.637029/
# Specific heat at constant volume

1. Sep 18, 2012 ### Bipolarity $$C_{V} = \left(\frac{\partial U}{\partial T}\right)_{V}$$ This is the specific heat at constant volume, so I assume it can only be used at constant volume. However, my textbook uses this to derive the following equation for reversible adiabatic expansion: $$P_{1}V_{1}^{\gamma} = P_{2}V_{2}^{\gamma}$$ Why are we allowed to use $C_{V}$ when it only works in isovolumetric processes? BiP

2. Sep 18, 2012 ### nasu How is Cv used to derive the equation for the adiabatic transformation? Can you show it here?

3. Sep 18, 2012 ### Bipolarity

4. Sep 19, 2012 ### nasu The change in internal energy has the same expression for any process between two states. For an ideal gas it is $$\Delta U = nC_v\Delta T.$$ The amount of heat, on the other hand, depends on the type of process: it is $$Q = nC_v\Delta T$$ only for a constant-volume process.

5. Sep 19, 2012 ### Bipolarity Superb! Thanks! BiP

6. Sep 19, 2012 ### Staff: Mentor For an ideal gas, the internal energy is a function only of temperature, such that dU = CvdT always. For an adiabatic expansion, dQ = 0, so that dU = CvdT = -pdV
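A numerical sketch of the argument in posts 4 and 6: integrating $dU = nC_V\,dT = -P\,dV$ step by step for an ideal gas reproduces $P_1V_1^{\gamma} = P_2V_2^{\gamma}$. The example below assumes 1 mol of a monatomic ideal gas and a simple Euler integration; the numbers and variable names are mine:

```python
# Integrate Cv*dT = -P*dV (n = 1 mol, dQ = 0) and check that P*V^gamma is conserved.
R = 8.314
Cv = 1.5 * R                 # monatomic ideal gas
gamma = (Cv + R) / Cv        # = 5/3

T, V = 300.0, 1.0e-3         # start: 300 K, 1 litre
P1 = R * T / V
V2 = 2.0e-3
steps = 100000
dV = (V2 - V) / steps
for _ in range(steps):
    P = R * T / V
    T -= P * dV / Cv         # dT from Cv*dT = -P*dV
    V += dV

P2 = R * T / V
print(P1 * 1.0e-3 ** gamma, P2 * 2.0e-3 ** gamma)   # nearly equal
```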
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8478803038597107, "perplexity": 1178.2856141777831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948544124.40/warc/CC-MAIN-20171214124830-20171214144830-00383.warc.gz"}
http://mathoverflow.net/feeds/question/64837
# Spectral theorem for infinite-dimensional matrices

Asked by yanzhang, 2011-05-12; answered 2011-05-13.

Keller and Ochsenius (1995) have a spectral theorem for finite-dimensional symmetric matrices over the field of formal power series with real coefficients $\mathbf{R}((t))$ (they actually have a more general result, but I'm just interested in this one for now), in the sense that every finite symmetric square matrix can be diagonalized by some orthogonal matrix with entries in the field.

Question: Is there a spectral theorem for symmetric infinite-dimensional matrices over this field?

What about the obvious generalization to Hermitian matrices and $\mathbf{C}$? (I believe the approach in the previous paper works for this case as well when the matrices are finite-dimensional, though they don't say it explicitly, so I'm not completely confident.)

Answer by Anatoly Kochubei: In fact, the field considered by Keller and Ochsenius is more complicated; it must have a Krull valuation with a specific kind of value group. They had several papers devoted to the infinite-dimensional case too:

H. Keller and H. Ochsenius, Spectral decompositions of operators on non-Archimedean orthomodular spaces. Int. J. Theor. Phys. 34, No. 8, 1507–1517 (1995).

H. Keller and H. Ochsenius, Bounded operators on non-Archimedean orthomodular spaces. Math. Slovaca 45, No. 4, 413–434 (1995).

H. Keller and H. Ochsenius, Residual spaces and operators on orthomodular spaces. In: Schikhof, W. H. (ed.) et al., p-Adic functional analysis. Proceedings of the fourth international conference, Nijmegen, Netherlands, June 3–7, 1996. New York, NY: Marcel Dekker. Lect. Notes Pure Appl. Math. 192, 265–274 (1997).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8564151525497437, "perplexity": 1352.6650270755767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700984410/warc/CC-MAIN-20130516104304-00032-ip-10-60-113-184.ec2.internal.warc.gz"}
http://mathoverflow.net/revisions/10396/list
It seems to me that we can, without loss of generality, assume the $x_i$ to be commensurable. Otherwise, split $S\in\mathbb{Z}[x_1,\dots,x_n]$ into a representation with respect to a basis of $\mathbb{Z}[x_1,\dots,x_n]$. Thus, by multiplying through with a suitable constant, we can assume that the $x_i$ are positive integers. We may also assume $\gcd(x_1,\dots,x_n)=1$, since otherwise any $S$ for which the equation has a solution is also divisible by this gcd, which allows dividing the whole equation.

*Edit: Both of these simplifying assumptions shift the set of solutions (to solutions for some other $S$ and $x_1,\dots,x_n$), but in a bijective way.*

The number of solutions for any particular $S$ and $x_1,\dots,x_n$ can be counted using generating functions (similar to Polya's method for counting possibilities of giving change); with your example $S=98\,a_1+99\,a_2$ and $0 \leq a_1,a_2 \leq 100$, the number of solutions for $S$ is the coefficient of $x^S$ in the polynomial $(x^{98}+x^{2\cdot98}+\cdots+x^{100\cdot98})\,(x^{99}+x^{2\cdot99}+\cdots+x^{100\cdot99})$, whose lowest exponent with coefficient larger than $1$ is $9899$.

I'm not sure I've got a good way of explaining this. Essentially, the first of these polynomials is the generating function for the number of solutions for $S=98\,a_1$ and the second is the generating function for the number of solutions for $S=99\,a_2$. Since in these generating functions the $S$ values are in the exponents, summation of the $S$ values corresponds to multiplication.

If you wanted to write a computer program to find the smallest $S$ such that the corresponding coefficient in the generating function as given above fulfills some condition (e.g., is larger than $1$), it would probably be a good idea to use standard written multiplication and a heap structure for carrying out the steps. Such an implementation would provide a stream of coefficient/exponent pairs and can also use such a stream as one of its two inputs, which means that the multiplication of very many polynomials can be performed with little memory overhead, especially without needing to store all the coefficients already checked and found not interesting, and the calculation can stop almost without computing anything beyond the first "interesting" term.
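A brute-force check of the claim for the 98/99 example (plain Python; this tabulates the coefficients directly instead of using the heap-based streaming multiplication sketched above, and the variable names are mine):

```python
from collections import Counter

# Coefficients of (x^98 + ... + x^(100*98)) * (x^99 + ... + x^(100*99)):
# count[S] = number of pairs (a1, a2), 1 <= a1, a2 <= 100, with 98*a1 + 99*a2 = S.
count = Counter()
for a1 in range(1, 101):
    for a2 in range(1, 101):
        count[98 * a1 + 99 * a2] += 1

print(min(S for S, c in count.items() if c > 1))   # 9899, as claimed above
```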
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9983477592468262, "perplexity": 590.334375712175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711240143/warc/CC-MAIN-20130516133400-00040-ip-10-60-113-184.ec2.internal.warc.gz"}
https://brilliant.org/discussions/thread/insertion-sort-questions-from-quiz-3/
# Insertion Sort Questions from Quiz 3

The question is: "In the worst case for a list with 10 distinct elements, how many comparisons are made?"

Why isn't the comparison of the sorted number with the blank left space counted as an individual comparison? I thought that was considered a possible comparison scenario in insertion sorts. For example, sorting 3, 2, 1:

• 1st round: 2 < 3, (blank) 2 = 2 comparisons
• 2nd round: 1 < 3, 1 < 2, (blank) 1 = 3 comparisons

For a total of 5 comparisons. Right?

Note by Anna Morales, 3 years, 2 months ago

## Comments

The only way I can justify this is that blank comparisons are not considered individual comparisons because they do not cause the cycle to repeat. In my mind, the computer would perform the blank comparison and just conclude "oh, nothing to compare, I'm all good", so this minor check was only there to look for another number. Since there were no numbers to compare, it is not a full comparison. Does this make sense? I'm trying to justify the $\frac{(n-1)(n)}{2}$ formula. (Also, if anyone can link the theory behind that formula, that would be great!) - 3 years, 2 months ago

The formula is the sum of the first $n-1$ natural numbers. It is also the number of ways to choose 2 things from a set of $n$ things; see combinations. - 3 years, 2 months ago

Thanks! - 3 years, 2 months ago

Can you explain what this blank comparison is that you are referring to? - 3 years, 2 months ago

Sure! When insertion sort was introduced, this is what I was told: the insertion sort method places a single element x into a sorted array A. First, x is placed at the end of A. Second, x is compared to the element on its left; there are 3 possible scenarios:

1. There is no element to the left, so the process is finished because x is the smallest element and is already at the start of the array.
2. x is greater than or equal to the left element, so the process is finished.
3. x is less than the left element, so the positions are switched.

Third, the cycle repeats until x meets scenario 1 or 2. The blank comparison I'm referring to is scenario 1, when there is no left element. - 3 years, 2 months ago

You are indeed correct. Insertion sort involves repeating the procedure above for every element of the input array. Now, it is possible that you might get a slightly different number of comparisons based on what you define to be a comparison. However, we are interested in getting a rough approximation of how this number depends on the size of the input, so usually just noticing that the number of comparisons is $O(n^2)$ is good enough. If that answer confuses you further, let me know. - 3 years, 2 months ago
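A small sketch (Python; the function name is mine) that follows the three scenarios above and reproduces the quiz's counting convention, in which the scenario-1 "blank" check is not charged as a comparison:

```python
def insertion_sort_count(a):
    """Insertion-sort a copy of `a`, counting only element-to-element comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:                     # scenario 1: nothing on the left, stop for free
            comparisons += 1             # one real comparison (scenario 2 or 3)
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]   # scenario 3: swap and keep going
                j -= 1
            else:
                break                    # scenario 2: already in place
    return a, comparisons

print(insertion_sort_count([3, 2, 1]))                # ([1, 2, 3], 3)
print(insertion_sort_count(list(range(10, 0, -1))))   # worst case for 10 elements: 45
```

For 10 distinct elements in reverse order this prints 45, i.e. $\frac{(n-1)n}{2}$ with $n=10$, and for 3, 2, 1 it prints 3 rather than 5.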
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 12, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9121835827827454, "perplexity": 1568.8949722431908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154042.23/warc/CC-MAIN-20210731011529-20210731041529-00067.warc.gz"}
http://mathhelpforum.com/advanced-statistics/43276-probability-generating-functions.html
1. ## Probability Generating Functions Can I please have some help on this subject? I have a continuous random variable given by f(x)=k(3/8)^x and have been asked to find the probability generating function Gx(N) and so far have: Gx(N)=SUM(0 -> infinity) k(n^x)(3/8)^x 2. Can I please have some help on this subject? I have a continuous random variable given by Mr F says: Surely you mean discrete random variable ...... f(x)=k(3/8)^x and have been asked to find the probability generating function Gx(N) and so far have: Gx(N)=SUM(0 -> infinity) k(n^x)(3/8)^x $G_x(n) = k \sum_{x=0}^{\infty} n^x \left( \frac{3}{8} \right)^x = k \sum_{x=0}^{\infty} \left( \frac{3n}{8} \right)^x = \frac{k}{1 - \frac{3n}{8}} = \frac{8k}{8 - 3n}$ since the sum is that for an infinite geometric series (and is not finite unless $0 < \frac{3n}{8} < 1$). To get the value of k, you use the infinite geometric series again: $k \sum_{x=0}^{\infty}\left( \frac{3}{8} \right)^x = 1 \Rightarrow \frac{k}{1 - \frac{3}{8}} = 1 \Rightarrow k = \frac{5}{8}$. 3. Thanks, with a follow-up question; the wiki page on this is confusing me: to find the expectation, do I differentiate and put n=1? And if so, is it the same approach for variance? 4. Hello, Thanks, with a follow-up question; the wiki page on this is confusing me: to find the expectation, do I differentiate and put n=1? And if so, is it the same approach for Variance? If f is the generating function of a random variable X, then: $f'(1)=\mathbb{E}(X)$ (like you said) and $f''(1)=\mathbb{E}(X(X-1))=\mathbb{E}(X^2)-\mathbb{E}(X)$, by linearity of the expectation. This implies: $\mathbb{E}(X^2)=f''(1)+\mathbb{E}(X)=f''(1)+f'(1)$ Knowing that $\text{var}(X)=\mathbb{E}(X^2)-(\mathbb{E}(X))^2$, you can say: $\text{var}(X)=f''(1)+f'(1)-\left(f'(1)\right)^2$ 5. One further thing, it's just a clarification on notation: my question literally says f(x)= xexp(x); does that mean f(x) = x*2.718^x, as in the exponential?
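For what it's worth, the thread's answers can be checked mechanically. The sketch below is my own (using sympy); it assumes the distribution is P(X = x) = (5/8)(3/8)^x for x = 0, 1, 2, ..., and reproduces k = 5/8, the expectation and the variance from the generating function.

```python
import sympy as sp

n = sp.symbols('n')

# Closed form of the PGF derived above: G(n) = k / (1 - 3n/8) with k = 5/8
k = sp.Rational(5, 8)
G = k / (1 - sp.Rational(3, 8) * n)      # = 5/(8 - 3n), valid for |3n/8| < 1

# Sanity check: G(1) = 1, i.e. the probabilities sum to 1
assert sp.simplify(G.subs(n, 1)) == 1

# E(X) = G'(1) and var(X) = G''(1) + G'(1) - G'(1)^2
EX = sp.diff(G, n).subs(n, 1)
G2 = sp.diff(G, n, 2).subs(n, 1)
print(EX, sp.simplify(G2 + EX - EX**2))  # 3/5 and 24/25
```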
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8876369595527649, "perplexity": 1151.4311334683111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823229.49/warc/CC-MAIN-20171019050401-20171019070401-00732.warc.gz"}
https://en.wikipedia.org/wiki/Shanks_transformation
# Shanks transformation In numerical analysis, the Shanks transformation is a non-linear series acceleration method to increase the rate of convergence of a sequence. This method is named after Daniel Shanks, who rediscovered this sequence transformation in 1955. It was first derived and published by R. Schmidt in 1941.[1] One can calculate only a few terms of a perturbation expansion, usually no more than two or three, and almost never more than seven. The resulting series is often slowly convergent, or even divergent. Yet those few terms contain a remarkable amount of information, which the investigator should do his best to extract. This viewpoint has been persuasively set forth in a delightful paper by Shanks (1955), who displays a number of amazing examples, including several from fluid mechanics. Milton D. Van Dyke (1975) Perturbation methods in fluid mechanics, p. 202. ## Formulation For a sequence ${\displaystyle \left\{a_{m}\right\}_{m\in \mathbb {N} }}$ the series ${\displaystyle A=\sum _{m=0}^{\infty }a_{m}\,}$ is to be determined. First, the partial sum ${\displaystyle A_{n}}$ is defined as: ${\displaystyle A_{n}=\sum _{m=0}^{n}a_{m}\,}$ and forms a new sequence ${\displaystyle \left\{A_{n}\right\}_{n\in \mathbb {N} }}$. Provided the series converges, ${\displaystyle A_{n}}$ will also approach the limit ${\displaystyle A}$ as ${\displaystyle n\to \infty .}$ The Shanks transformation ${\displaystyle S(A_{n})}$ of the sequence ${\displaystyle A_{n}}$ is the new sequence defined by[2][3] ${\displaystyle S(A_{n})={\frac {A_{n+1}\,A_{n-1}\,-\,A_{n}^{2}}{A_{n+1}-2A_{n}+A_{n-1}}}=A_{n+1}-{\frac {(A_{n+1}-A_{n})^{2}}{(A_{n+1}-A_{n})-(A_{n}-A_{n-1})}}}$ where this sequence ${\displaystyle S(A_{n})}$ often converges more rapidly than the sequence ${\displaystyle A_{n}.}$ Further speed-up may be obtained by repeated use of the Shanks transformation, by computing ${\displaystyle S^{2}(A_{n})=S(S(A_{n})),}$ ${\displaystyle S^{3}(A_{n})=S(S(S(A_{n}))),}$ etc. Note that the non-linear transformation as used in the Shanks transformation is essentially the same as used in Aitken's delta-squared process so that as with Aitken's method, the right-most expression in ${\displaystyle S(A_{n})}$'s definition (i.e. ${\displaystyle S(A_{n})=A_{n+1}-{\frac {(A_{n+1}-A_{n})^{2}}{(A_{n+1}-A_{n})-(A_{n}-A_{n-1})}}}$) is more numerically stable than the expression to its left (i.e. ${\displaystyle S(A_{n})={\frac {A_{n+1}\,A_{n-1}\,-\,A_{n}^{2}}{A_{n+1}-2A_{n}+A_{n-1}}}}$). Both Aitken's method and Shanks transformation operate on a sequence, but the sequence the Shanks transformation operates on is usually thought of as being a sequence of partial sums, although any sequence may be viewed as a sequence of partial sums. ## Example Absolute error as a function of ${\displaystyle n}$ in the partial sums ${\displaystyle A_{n}}$ and after applying the Shanks transformation once or several times: ${\displaystyle S(A_{n}),}$ ${\displaystyle S^{2}(A_{n})}$ and ${\displaystyle S^{3}(A_{n}).}$ The series used is ${\displaystyle \scriptstyle 4\left(1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+{\frac {1}{9}}-\cdots \right),}$ which has the exact sum ${\displaystyle \pi .}$ As an example, consider the slowly convergent series[3] ${\displaystyle 4\sum _{k=0}^{\infty }(-1)^{k}{\frac {1}{2k+1}}=4\left(1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots \right)}$ which has the exact sum π ≈ 3.14159265. 
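The example below can be reproduced with a few lines of Python. This is only an illustrative sketch of the right-most (numerically stable) form of $S(A_n)$ applied to the partial sums of the series above; the function name is arbitrary.

```python
def shanks(seq):
    """One Shanks transformation of a list of partial sums A_0, A_1, ..."""
    return [a2 - (a2 - a1) ** 2 / ((a2 - a1) - (a1 - a0))
            for a0, a1, a2 in zip(seq, seq[1:], seq[2:])]

# Partial sums of 4*(1 - 1/3 + 1/5 - 1/7 + ...), which converge slowly to pi
partial, total = [], 0.0
for k in range(13):
    total += 4 * (-1) ** k / (2 * k + 1)
    partial.append(total)

s1 = shanks(partial)   # S(A_n)
s2 = shanks(s1)        # S^2(A_n)
s3 = shanks(s2)        # S^3(A_n)
print(partial[-1])     # about 3.218, only one correct digit
print(s1[-1], s2[-1], s3[-1])  # roughly 3.1417, 3.141594, 3.1415927
```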
The partial sum ${\displaystyle A_{6}}$ has only one digit accuracy, while six-figure accuracy requires summing about 400,000 terms. In the table below, the partial sums ${\displaystyle A_{n}}$, the Shanks transformation ${\displaystyle S(A_{n})}$ on them, as well as the repeated Shanks transformations ${\displaystyle S^{2}(A_{n})}$ and ${\displaystyle S^{3}(A_{n})}$ are given for ${\displaystyle n}$ up to 12. The figure to the right shows the absolute error for the partial sums and Shanks transformation results, clearly showing the improved accuracy and convergence rate. ${\displaystyle n}$ ${\displaystyle A_{n}}$ ${\displaystyle S(A_{n})}$ ${\displaystyle S^{2}(A_{n})}$ ${\displaystyle S^{3}(A_{n})}$ 0 4.00000000 1 2.66666667 3.16666667 2 3.46666667 3.13333333 3.14210526 3 2.89523810 3.14523810 3.14145022 3.14159936 4 3.33968254 3.13968254 3.14164332 3.14159086 5 2.97604618 3.14271284 3.14157129 3.14159323 6 3.28373848 3.14088134 3.14160284 3.14159244 7 3.01707182 3.14207182 3.14158732 3.14159274 8 3.25236593 3.14125482 3.14159566 3.14159261 9 3.04183962 3.14183962 3.14159086 3.14159267 10 3.23231581 3.14140672 3.14159377 3.14159264 11 3.05840277 3.14173610 3.14159192 3.14159266 12 3.21840277 3.14147969 3.14159314 3.14159265 The Shanks transformation ${\displaystyle S(A_{1})}$ already has two-digit accuracy, while the original partial sums only establish the same accuracy at ${\displaystyle A_{24}.}$ Remarkably, ${\displaystyle S^{3}(A_{3})}$ has six digits accuracy, obtained from repeated Shank transformations applied to the first seven terms ${\displaystyle A_{0},\ldots ,A_{6}.}$ As said before, ${\displaystyle A_{n}}$ only obtains 6-digit accuracy after about summing 400,000 terms. ## Motivation The Shanks transformation is motivated by the observation that — for larger ${\displaystyle n}$ — the partial sum ${\displaystyle A_{n}}$ quite often behaves approximately as[2] ${\displaystyle A_{n}=A+\alpha q^{n},\,}$ with ${\displaystyle |q|<1}$ so that the sequence converges transiently to the series result ${\displaystyle A}$ for ${\displaystyle n\to \infty .}$ So for ${\displaystyle n-1,}$ ${\displaystyle n}$ and ${\displaystyle n+1}$ the respective partial sums are: ${\displaystyle A_{n-1}=A+\alpha q^{n-1}\quad ,\qquad A_{n}=A+\alpha q^{n}\qquad {\text{and}}\qquad A_{n+1}=A+\alpha q^{n+1}.}$ These three equations contain three unknowns: ${\displaystyle A,}$ ${\displaystyle \alpha }$ and ${\displaystyle q.}$ Solving for ${\displaystyle A}$ gives[2] ${\displaystyle A={\frac {A_{n+1}\,A_{n-1}\,-\,A_{n}^{2}}{A_{n+1}-2A_{n}+A_{n-1}}}.}$ In the (exceptional) case that the denominator is equal to zero: then ${\displaystyle A_{n}=A}$ for all ${\displaystyle n.}$ ## Generalized Shanks transformation The generalized kth-order Shanks transformation is given as the ratio of the determinants:[4] ${\displaystyle S_{k}(A_{n})={\frac {\begin{vmatrix}A_{n-k}&\cdots &A_{n-1}&A_{n}\\\Delta A_{n-k}&\cdots &\Delta A_{n-1}&\Delta A_{n}\\\Delta A_{n-k+1}&\cdots &\Delta A_{n}&\Delta A_{n+1}\\\vdots &&\vdots &\vdots \\\Delta A_{n-1}&\cdots &\Delta A_{n+k-2}&\Delta A_{n+k-1}\\\end{vmatrix}}{\begin{vmatrix}1&\cdots &1&1\\\Delta A_{n-k}&\cdots &\Delta A_{n-1}&\Delta A_{n}\\\Delta A_{n-k+1}&\cdots &\Delta A_{n}&\Delta A_{n+1}\\\vdots &&\vdots &\vdots \\\Delta A_{n-1}&\cdots &\Delta A_{n+k-2}&\Delta A_{n+k-1}\\\end{vmatrix}}},}$ with ${\displaystyle \Delta A_{p}=A_{p+1}-A_{p}.}$ It is the solution of a model for the convergence behaviour of the partial sums ${\displaystyle A_{n}}$ with ${\displaystyle k}$ distinct 
transients: ${\displaystyle A_{n}=A+\sum _{p=1}^{k}\alpha _{p}q_{p}^{n}.}$ This model for the convergence behaviour contains ${\displaystyle 2k+1}$ unknowns. By evaluating the above equation at the elements ${\displaystyle A_{n-k},A_{n-k+1},\ldots ,A_{n+k}}$ and solving for ${\displaystyle A,}$ the above expression for the kth-order Shanks transformation is obtained. The first-order generalized Shanks transformation is equal to the ordinary Shanks transformation: ${\displaystyle S_{1}(A_{n})=S(A_{n}).}$ The generalized Shanks transformation is closely related to Padé approximants and Padé tables.[4] ## Notes 1. ^ Weniger (2003). 2. ^ a b c Bender & Orszag (1999), pp. 368–375. 3. ^ a b Van Dyke (1975), pp. 202–205. 4. ^ a b Bender & Orszag (1999), pp. 389–392. ## References • Shanks, D. (1955), "Non-linear transformation of divergent and slowly convergent sequences", Journal of Mathematics and Physics, 34: 1–42 • Schmidt, R. (1941), "On the numerical solution of linear simultaneous equations by an iterative method", Philosophical Magazine, 32: 369–383 • Van Dyke, M.D. (1975), Perturbation methods in fluid mechanics (annotated ed.), Parabolic Press, ISBN 0-915760-01-0 • Bender, C.M.; Orszag, S.A. (1999), Advanced mathematical methods for scientists and engineers, Springer, ISBN 0-387-98931-5 • Weniger, E.J. (2003). "Nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series". arXiv:.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 68, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9628958106040955, "perplexity": 696.8670362714595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319265.41/warc/CC-MAIN-20170622114718-20170622134718-00118.warc.gz"}
https://www.physicsforums.com/threads/wave-polarization.305140/
Wave Polarization 1. Apr 5, 2009 sauravrt Can an electromagnetic wave be both circular polarized and horizontal or E polarized at the same time? 2. Apr 5, 2009 clem It can be a mix, such as 60% plane, 40% circular. 3. Apr 5, 2009 sauravrt A circular polarized wave has both horizontal and vertical polarization, is this correct? 4. Apr 5, 2009 Andy Resnick Completely circularly polarized light can be decomposed into two orthogonal linear components in quadrature. In general, any pure polarization state can be decomposed into two orthogonal basis states (linear, circular, elliptical, etc).
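A compact way to see Andy Resnick's point is with Jones vectors. The snippet below is just a sketch I'm adding (not from the thread): it writes a right-circular state as an equal-magnitude superposition of horizontal and vertical components that are 90 degrees out of phase.

```python
import numpy as np

H = np.array([1, 0], dtype=complex)      # horizontal linear polarization
V = np.array([0, 1], dtype=complex)      # vertical linear polarization

# Circular polarization: equal H and V amplitudes, in quadrature (factor 1j)
R = (H + 1j * V) / np.sqrt(2)

cH = np.vdot(H, R)                       # projection onto H
cV = np.vdot(V, R)                       # projection onto V
print(abs(cH) ** 2, abs(cV) ** 2)        # both close to 0.5: equal power in each component
print(np.allclose(cH * H + cV * V, R))   # True: the decomposition reconstructs R
```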
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9032564759254456, "perplexity": 2544.0325542834917}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860112228.39/warc/CC-MAIN-20160428161512-00042-ip-10-239-7-51.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/76762/when-is-a-variable-bound-or-free-in-a-lambda-application
# When is a variable bound or free in a lambda application? I am currently reading the book "An Introduction to Functional Programming through Lambda Calculus" (the 2011 edition) and am a bit puzzled by the definitions of free and bound variables with regards to function application. The book states, in section 2.10, that a variable is bound in an expression if 1. the expression is an application $$(\langle function \rangle \langle argument \rangle)$$ and the variable is bound in $$\langle function \rangle$$ OR $$\langle argument \rangle$$, or 2. the expression is a function $$\lambda \langle name \rangle. \langle body \rangle$$ and the variable is named $$\langle name \rangle$$ or bound in $$\langle body \rangle$$. Next, it states that a variable is free in an expression if 1. The expression is a single name and the variable has that name, or 2. the expression is an application $$(\langle function \rangle \langle argument \rangle)$$ and the variable is free in $$\langle function \rangle$$ OR $$\langle argument \rangle$$, or 3. the expression is a function $$\lambda \langle name \rangle.\langle body \rangle$$ and the variable is not named $$\langle name \rangle$$ and is free in $$\langle body \rangle$$. What confuses me is the use of "or" in the definitions of both bound and free variables in applications. I figured that if a variable is not bound, it must be free, and if a variable is not free, it must be bound. So I would expect that if a variable is free, the point 1 in the definition of bound above is inverted, so with boolean logic $$\neg(A \lor B)$$ should become $$\neg A \land \neg B$$. However, the book doesn't state that a variable is free if it is free in $$\langle function \rangle$$ AND $$\langle argument \rangle$$. I checked Wikipedia for another reference and it states: The set of free variables of a lambda expression, M, is denoted as FV(M) and is defined by recursion on the structure of the terms, as follows: 1. $$FV(x) = \{x\}$$, where $$x$$ is a variable 2. $$FV(\lambda x.M) = FV(M) \setminus \{x\}$$ 3. $$FV(M N) = FV(M) \cup FV(N)$$ So the set of bound variables in (M N) should be the complement of $$FV(M) \cup FV(N)$$ which I believe would be $$FV(M)' \cap FV(N)'$$ where ' denotes the complement. The book repeats this definition later in a summary of the chapter, so I'm hesitant to think it would be a typo. I'm thinking that either I'm • wrong in assuming that bound and free are full complements or • wrong in thinking that a variable with the same name in the function and argument expressions of an application is the same variable. What is the correct interpretation here? As an example, in the application expression $$(\lambda x.x\ \ \lambda y.x)$$, is $$x$$ bound or free, or is it senseless to speak of "x" outside of the scope of either the function or argument expressions? • Aha! So in the example I gave at the end, x is bound AND free, it just depends on which part you consider? If you alpha-converted it to replace the x in the function expression part with a, then x would be free in the application but no longer bound? • @G_H Yes, exactly. Though I prefer to think of it in terms of bound x and free x really being two different variables with the same name (and if you have multiple $\lambda x$, each introduces a new variable). Jun 14 '17 at 8:37
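A small recursive implementation of the Wikipedia definition makes the accepted point easy to check. This is my own sketch (the class names are arbitrary), computing FV for the example term $(\lambda x.x\ \ \lambda y.x)$ from the question.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: "Term"

@dataclass
class App:
    func: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

def free_vars(t: Term) -> set:
    """FV(x) = {x};  FV(lambda x. M) = FV(M) minus {x};  FV(M N) = FV(M) union FV(N)."""
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.param}
    return free_vars(t.func) | free_vars(t.arg)

# (lambda x. x   lambda y. x), the example at the end of the question
term = App(Lam("x", Var("x")), Lam("y", Var("x")))
print(free_vars(term))   # {'x'}: x occurs free (in the argument) and bound (in the function)
```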
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9035062193870544, "perplexity": 217.67448145001538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055775.1/warc/CC-MAIN-20210917181500-20210917211500-00239.warc.gz"}
https://www.physicsforums.com/threads/converting-watt-hr-to-watts.335828/
# Homework Help: Converting Watt-hr to Watts 1. Sep 8, 2009 ### tacojohn 1. The problem statement, all variables and given/known data An automobile gets 20 miles per gallon when traveling at 60 miles per hour. With the energy content of gasoline at 36,000 watt-hr per gallon, convert this amount of power into watts. 2. Relevant equations W h (watt hours) | ([length]^2 [mass])/[time]^2 | energy W (watts) | ([length]^2 [mass])/[time]^3 | power Energy = Power (60) * time (3) = 60 x 3 = 180 Wh 3. The attempt at a solution If watt-hrs and watts are not compatible, how am I supposed to convert these? 2. Sep 8, 2009 ### kuruman You are not expected to convert the 36,000 watt-hr to watts. As you noted, it cannot be done. The power that you need to find is the energy per unit time that this car consumes as it travels at 60 miles per hour. 3. Sep 8, 2009 ### lanedance Energy is usually measured in Joules (J). Power is the rate of change of energy - i.e. how much energy you are using in a given time. It is usually measured in Watts = Joules per second (W = J/s). So work out how much fuel the car consumes per time unit, then convert that to how much energy per time unit & you have power - pow Last edited: Sep 8, 2009 4. Sep 8, 2009 Correct-o.
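If I'm reading the problem the way kuruman and lanedance suggest, the arithmetic is just fuel burned per hour times energy per gallon. A quick Python check (my own, with the numbers straight from the problem statement):

```python
miles_per_gallon = 20.0        # mi/gal
speed = 60.0                   # mi/h
energy_per_gallon = 36_000.0   # watt-hours per gallon

gallons_per_hour = speed / miles_per_gallon                 # 3 gal/h
watt_hours_per_hour = gallons_per_hour * energy_per_gallon  # 108,000 W*h each hour

# Power is energy per unit time, and watt-hours per hour are just watts
power_watts = watt_hours_per_hour
print(power_watts)             # 108000.0 W, i.e. 108 kW
```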
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.829105794429779, "perplexity": 2176.5636272737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589251.7/warc/CC-MAIN-20180716095945-20180716115945-00061.warc.gz"}
http://mathhelpforum.com/pre-calculus/142791-solving-logarithmic-equation.html
# Math Help - Solving a logarithmic equation 1. ## Solving a logarithmic equation Hi, could anyone help me with this? I'd be very grateful. Thanks. 20log_{4x}(x^{1/2}) + 7log_{16x}(x^3) - 3log_{x/2}(x^2) = 0 2. Originally Posted by alternative Hi, could anyone help me with this? I'd be very grateful. Thanks. 20log_{4x}(x^{1/2}) + 7log_{16x}(x^3) - 3log_{x/2}(x^2) = 0 hi This is confusing. You might want to try LaTeX (a system for formatting equations). Try this: logarithms of x to base 3 can be typed as \log_3(x), then wrap it with this $\sum$ 3. Originally Posted by mathaddict hi This is confusing. You might want to try LaTeX (a system for formatting equations). Try this: logarithms of x to base 3 can be typed as \log_3(x), then wrap it with this $\sum$ Sorry about that, is it better now? Thanks a lot 4. Originally Posted by alternative Sorry about that, is it better now? Thanks a lot HI all I think x will equal 4 5. Using change of base (let the new base be 2), the given problem can be written as 10log(x)/log(4x) + 21log(x)/log(16x) = 6log(x)/log(x/2) Cancel log(x) from both sides. You get 10/log(4x) + 21/log(16x) = 6/log(x/2) 10/[2 + log(x)] + 21/[4 + log(x)] = 6/[log(x) - 1] Let log x to the base 2 be a; then 10/(2+a) + 21/(4+a) = 6/(a-1) Simplify this equation and solve for a. From that, find x. 6. Hello, alternative! $20\log_{4x}\!\left(x^{\frac{1}{2}}\right) + 7\log_{16x}\!\left(x^3\right) - 3\log_{\frac{x}{2}}\!\left(x^2\right) \;=\;0$ Use the Base-Change formula and change everything to base-2 . . . . . . . . . $\frac{20\log_2\left(x^{\frac{1}{2}}\right)}{\log_2 (4x)} + \frac{7\log_2(x^3)}{\log_2(16x)} - \frac{3\log_2(x^2)}{\log_2(\frac{x}{2})} \;=\;0$ . . $\frac{20\cdot\frac{1}{2}\log_2(x)}{\log_2(x) + \log_2(4)} + \frac{7\cdot3\log_2(x)}{\log_2(x) + \log_2(16)} - \frac{3\cdot2\log_2(x)}{\log_2(x) - \log_2(2)} \;=\;0$ . . . . . . $\frac{10\log_2(x)}{\log_2(x)+2} + \frac{21\log_2(x)}{\log_2(x) + 4} - \frac{6\log_2(x)}{\log_2(x)-1} \;=\;0$ Factor: . $\log_2(x)\,\left[\frac{10}{\log_2(x)+2} + \frac{21}{\log_2(x)+4} - \frac{6}{\log_2(x) - 1}\right] \;=\;0$ Multiply through by the LCD. . I'll drop the "base-2" for now. . . $\log(x)\,\bigg[10(\log x + 4)(\log x - 1) + 21(\log x + 2)(\log x - 1) - 6(\log x + 2)(\log x + 4)\bigg] \;=\;0$ . . . . . . . $\log(x)\,\bigg[25\log^2x + 15\log x - 130\bigg] \;=\;0$ . . . . . . $5\log(x)\bigg[\log(x) - 2\bigg]\,\bigg[5\log(x) + 13\bigg] \;=\;0$ And we have three equations to solve: . . $\log_2(x) \:=\:0\quad\Rightarrow\quad x \:=\:2^0 \quad\Rightarrow\quad \boxed{x \:=\:1}$ . . $\log_2(x) \:=\:2 \quad\Rightarrow\quad x \:=\:2^2 \quad\Rightarrow\quad\boxed{x \:=\:4}$ . . $5\log_2(x) + 13 \:=\:0 \quad\Rightarrow\quad \log_2(x) \:=\:-\frac{13}{5} \quad\Rightarrow\quad \boxed{x \:=\:2^{-\frac{13}{5}}}$
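The three boxed roots can be verified numerically. Here is a short sympy check of my own (not from the thread); `lg` is just a helper implementing the change of base used above.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def lg(base, arg):
    return sp.log(arg) / sp.log(base)   # log to an arbitrary base

expr = 20*lg(4*x, sp.sqrt(x)) + 7*lg(16*x, x**3) - 3*lg(x/2, x**2)

for root in [sp.Integer(1), sp.Integer(4), 2**sp.Rational(-13, 5)]:
    print(root, sp.N(expr.subs(x, root)))   # each value is 0 up to rounding
```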
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9318312406539917, "perplexity": 2441.8325502537823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257836399.81/warc/CC-MAIN-20160723071036-00314-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/equation-for-modeling-atomic-spectra-of-all-atoms.906966/
# I Equation for modeling atomic spectra of all atoms 1. Mar 8, 2017 ### Xilus I've seen the equation I think is just for hydrogen. Is this just for hydrogen? Of course this doesn't return the atomic spectra, it returns the energy. So using E=h*v and Planck's constant, a simple factor of 1/h would return the frequency, right? Energy is directly proportional to frequency. and E0=13.6eV n1<n2 where both n1 and n2 are integers Is there an equation that models atomic spectra of all atoms? 2. Mar 8, 2017 ### blue_leaf77 Yes, it's only for hydrogen, and approximately for the so-called Rydberg states. Yes. As far as I know, no. We haven't derived the general expression for energy levels for all atoms. 3. Mar 8, 2017 ### Xilus Is the spectrum the same for all isotopes? 4. Mar 8, 2017 ### blue_leaf77 There is the so-called isotope shift, which arises due to the fact that the nucleus is not completely at rest. It moves around by a very little amount, which in turn disturbs the motion and hence the wavefunction and energy levels of the electrons. A different nuclear mass will have a different effect on the wavefunction. 5. Mar 9, 2017 ### Khashishi The formula for the isotope shift is quite simple. It's just a scaling by the reduced mass. The energy is $$E_M = \frac{M}{m_e+M} E_\infty \left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right)$$ where $m_e$ is the electron mass, $M$ is the nuclear mass, and $E_\infty \approx 13.605693$ eV 6. Mar 9, 2017 ### gianeshwar I think quantum mechanics brings probabilities into physics, so due to the lack of determinism we cannot describe a general result for multielectron atoms. 7. Mar 9, 2017 ### Khashishi That's not the reason we don't have a general result. The reason is that it's just too complicated for a simple analytical formula.
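Khashishi's formula is easy to put numbers into. The sketch below is my own (constants are approximate CODATA-style values, masses in atomic mass units); it compares the n = 2 to n = 1 line for hydrogen and deuterium and converts the energy to a frequency with E = h*v.

```python
E_inf = 13.605693          # eV
m_e   = 5.48579909e-4      # electron mass in atomic mass units
h     = 4.135667696e-15    # Planck constant in eV*s

def transition(n1, n2, M):
    """Photon energy (eV) and frequency (Hz) for n2 -> n1, nuclear mass M in u."""
    E = (M / (m_e + M)) * E_inf * (1 / n1**2 - 1 / n2**2)
    return E, E / h

for name, M in [("H", 1.00728), ("D", 2.01355)]:
    E, nu = transition(1, 2, M)
    print(name, round(E, 5), "eV", f"{nu:.6e}", "Hz")
```

The two lines differ by roughly 0.003 eV, which is the isotope shift discussed above.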
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.930261492729187, "perplexity": 1627.3294993232157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107720.63/warc/CC-MAIN-20170821060924-20170821080924-00127.warc.gz"}
http://theoryapp.com/tag/derandomization/
Pairwise Independence A collection of random variables is pairwise independent if every pair of variables are independent. Given $$k$$ independent bits, we define $$n = 2^k-1$$ random variables each of which is a parity of a non-empty subset of the $$k$$ bits. Posted in Theory Pseudorandom Generators and Derandomization Definition of Pseudorandom Generators Two distributions $$X$$ and $$Y$$ over $$\{0,1\}^n$$ are $$(s, \epsilon)$$-indistinguishable if, for any circuit $$C$$ of size at most $$s$$, $\left| \Pr_X[C(X) = 1] – \Pr_Y[C(Y) = 1] \right| \leq \epsilon.$ A pseudorandom generator Posted in Theory
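A tiny Python sketch of the construction in the first paragraph (my own illustration; the function name is arbitrary): each of the $$2^k - 1$$ output bits is the XOR of a distinct non-empty subset of the $$k$$ seed bits.

```python
import itertools, random

def pairwise_independent_bits(k, rng=random):
    seed = [rng.randrange(2) for _ in range(k)]      # k independent fair bits
    out = []
    for r in range(1, k + 1):
        for subset in itertools.combinations(range(k), r):
            out.append(sum(seed[i] for i in subset) % 2)  # parity of the subset
    return out

# Empirical check of pairwise independence for one fixed pair of output bits
trials, hits = 20000, 0
for _ in range(trials):
    b = pairwise_independent_bits(4)                 # 2**4 - 1 = 15 bits from 4 seeds
    hits += b[0] & b[7]
print(hits / trials)   # close to 1/4, as for two independent fair bits
```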
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9690326452255249, "perplexity": 553.7827734516969}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991812.46/warc/CC-MAIN-20210515004936-20210515034936-00309.warc.gz"}
http://crypto.stackexchange.com/questions/5813/blind-quantum-computing-and-fully-homomorphic-encryption
# Blind quantum computing and fully homomorphic encryption I am somewhat familiar with current research on fully homomorphic enryption schemes and their possible application to Cloud computing. I've just noticed (somewhat late) that a marketing-savvy group of Physicists working on quantum information is approaching the same possible application: they call their approach blind quantum computing. Is this a serious contender with respect to the claimed application (the group's reputation in Physics is not in doubt) and are there any significant similarities between the two approaches on a fundamental level? - It appears to require a quantum computer and thus is not a serious contender for cloud computing applications. –  David Cash Dec 23 '12 at 19:31 Well, since I'm one of the authors on the paper, let me try to answer your question. First I should explain that the paper you link to is not the original paper proposing that approach, but rather the first implementation of it (in this case using quantum optics). The original paper which introduced the Universal Blind Quantum Computing (UBQC) protocol which the experiments demonstrate was written by myself along with Anne Broadbent and Elham Kashefi, and appeared at FOCS in 2009, so at least some people in the CS community took us seriously. I should also point out that of the three of us, only I am a physicist. Elham and Anne are both computer scientists. Indeed Anne's PhD supervisor was Gilles Brassard, one of the Bs in BB84, and one of the discovers of quantum key distribution. One might ask why our original paper doesn't reference fully homomorphic encryption, but that is easy to answer. It simply didn't exist when we wrote the paper in 2008. As David Cash mentions in the comments above, UBQC is a quantum protocol, and hence requires quantum information processing, meaning at least some quantum computational abilities. It is certainly not something you can expect to deploy tomorrow, and any large scale version of this kind of thing is likely decades off. We worked with one of the top experimental groups to implement it, and still only managed 4 qubits (essentially hiding a 12 bit description of the circuit). Now, we're certainly not the only ones to have written about the concept of blind quantum computation. The term is taken from a 2003 paper by Arrighi and Salvail which introduced a non-universal blind computation protocol, although there was earlier work by Childs. A few months after us, and apparently independently, Aharonov, Ben-Or and Eban came up with a similar idea in the context of interactive proofs. There are some important differences between blind quantum computing and homomorphic encryption. First, blind computation and homomorphic encryption are fundamentally different in what they seek to achieve. In blind computation the aim is to have a remote computer perform a computation for you in such a way that it remains "blind" to the computation (i.e. the input, output, and the actual computation performed), and should only learn upper bounds on the resource requirements. In homomorphic encryption the situation is somewhat different, since while the intention is indeed to hide the input and output, the computation itself is known to the remote computer and not necessarily to the user. This is a fundamental difference in the goals of the various protocols. 
Secondly, protocols such as ours and the Aharonov-Ben-Or-Eban approach, take measures to ensure that the remote computer cannot interfere with the protocol without being detected, which is in completely the opposite direction to homomorphic encryption. Thirdly, there is a fundamental difference in security. Many of the blind quantum computation protocols are information theoretically secure, meaning that they are secure independent of the computational power of their adversary. This is not true of current schemes for fully homomorphic encryption. And, last, but certainly not least, blind quantum computing allows you to essentially boost the computational power of the user, in computational complexity terms, expanding the class of decision problems they can solve from P to BQP, which contains certain problems believed to be NP-intermediate problems such as factoring, which is not something fully homomorphic encryption currently allows for. Let me close by saying why I think people should be interested in blind quantum computation. As I have said, as a tool for cloud computing, practical technologies would be decades off, and I certainly don't want to be guessing the future that far in advance. Rather, I would suggest, the main reason people currently find such protocols interesting is because of their possible consequences for the study of complexity classes, and as a practical means for verifying that purported quantum computing technologies do indeed function correctly (which is a non-trivial task since BQP is believed not to be contained in NP, and hence there exist no efficient means to verify the outcome of certain quantum computations after the fact).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8084602952003479, "perplexity": 637.1590589332017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007301.29/warc/CC-MAIN-20141125155647-00234-ip-10-235-23-156.ec2.internal.warc.gz"}
http://mymathforum.com/applied-math/329340-using-newton-s-laws-etc-pulley-type.html
Using Newton's laws etc. pulley type. (Applied Math Forum) March 22nd, 2016, 07:04 AM #1 Newbie Joined: Mar 2016 From: Physics land Posts: 7 Thanks: 0 Using Newton's laws etc. pulley type. Consider a situation where there is a particle 1 which has a mass of 6 kg. The particle is momentarily at rest on a slope inclined at 42 degrees above the horizontal. The particle is then connected by an inextensible, light string to another particle 2 with a mass of 3.5 kg. This string then goes over a pulley where friction is negligible. Refer to the diagram below. The coefficients of friction (static and kinetic) between particle m1 and the slope's surface have the values static friction = 9/20 and kinetic friction = 7/20. Additional information provided: m1 = 6 kg, m2 = 3.5 kg, theta = 42 degrees. With all the information provided, deduce whether particle 1 will move (and if so, state the direction) or remain at its current position. The slope angle theta is again 42 degrees; what would be the mass of particle 3 (a different particle with a different mass), connected to the string, which will cause particle 1 of mass 6 kg to accelerate in an upwards direction with an acceleration of 3.81 m/s^2? In your answer you must clearly state the equations (SUVAT) that you have used for each particle. Thanks if you read all this! March 22nd, 2016, 08:08 AM #2 Senior Member Joined: Jun 2015 From: England Posts: 915 Thanks: 271 So what do you think? Have you attempted the first part yet? Hint: To solve the first part you need to obtain an inequality. This will tell you whether the mass on the slope is in equilibrium or not. Thanks from devour19 Last edited by studiot; March 22nd, 2016 at 08:10 AM. March 22nd, 2016, 08:42 AM #3 Newbie Joined: Mar 2016 From: Physics land Posts: 7 Thanks: 0 Quote: Originally Posted by studiot So what do you think? Have you attempted the first part yet? Hint: To solve the first part you need to obtain an inequality. This will tell you whether the mass on the slope is in equilibrium or not. Hi, thanks for the reply. I am still unsure what inequality to devise. Please can you advise me further on what I should use to construct it. Usually I resolve in the x and y directions. March 22nd, 2016, 09:37 AM #4 Senior Member Joined: Jun 2015 From: England Posts: 915 Thanks: 271 Quote: Usually I resolve in the x and y directions. Then what do you do with the resolutes? What is the condition (involving friction) for slipping/not slipping between any contact surfaces? Is this not the inequality you need? March 22nd, 2016, 09:47 AM #5 Math Team Joined: Jul 2011 From: Texas Posts: 3,002 Thanks: 1588 disregarding friction for a moment ... if $m_2g > m_1g\sin{\theta}$, then $m_1$ will slide up the incline now, throw friction in ... if $f_s$ is large enough, then equilibrium will be achieved and $m_2g = m_1g\sin{\theta} + f_s$ $m_2g - m_1g\sin{\theta} = f_s$ recall $f_s \le \mu_s \cdot m_1g\cos{\theta}$ ... $m_2g - m_1g\sin{\theta} = f_s \le \mu_s \cdot m_1g\cos{\theta}$ $m_2g - m_1g\sin{\theta} \le \mu_s \cdot m_1g\cos{\theta}$ $m_2 - m_1\sin{\theta} \le \mu_s \cdot m_1\cos{\theta}$ now consider the other possibility ... if $m_2g < m_1g\sin{\theta}$, then $m_1$ will slide down the incline and the force of static friction will be directed up the incline.
Of course, if $f_{s \, max}$ is not great enough, then you will have a dynamic situation where $m_1$ slides up or down the incline depending on the relative magnitudes of $m_2g$ and $m_1g\sin{\theta}$ Thanks from devour19 March 22nd, 2016, 01:21 PM   #6 Newbie Joined: Mar 2016 From: Physics land Posts: 7 Thanks: 0 Quote: Originally Posted by studiot Then what do you do with the resolutes? What is the condition (involving friction) for slipping/not slipping between any contact surfaces? Is this not the inequality you need? I am an A Level Student. What is resolute? F=uR, newton's coefficient of friction right? But I have no idea what kinematic friction is I only know of regular basic? statics work. Thanks. March 22nd, 2016, 01:24 PM   #7 Newbie Joined: Mar 2016 From: Physics land Posts: 7 Thanks: 0 Quote: Originally Posted by skeeter disregarding friction for a moment ... if $m_2g > m_1g\sin{\theta}$, then $m_1$ will slide up the incline now, throw friction in ... if $f_s$ is large enough,then equilibrium will be achieved and $m_2g = m_1g\sin{\theta} + f_s$ $m_2g - m_1g\sin{\theta} = f_s$ recall $f_s \le \mu_s \cdot m_1g\cos{\theta}$ ... $m_2g - m_1g\sin{\theta} = f_s \le \mu_s \cdot m_1g\cos{\theta}$ $m_2g - m_1g\sin{\theta} \le \mu_s \cdot m_1g\cos{\theta}$ $m_2 - m_1\sin{\theta} \le \mu_s \cdot m_1\cos{\theta}$ now consider the other possibility ... if $m_2g < m_1g\sin{\theta}$, then $m_1$ will slide down the incline and the force of static friction will be directed up the incline. Of course, if $f_{s \, max}$ is not great enough, then you will have a dynamic situation where $m_1$ slides up or down the incline depending on the relative magnitudes of $m_2g$ and $m_1g\sin{\theta}$ Thank you. This is helpful I will see where this leads me. March 22nd, 2016, 02:16 PM   #8 Senior Member Joined: Jun 2015 From: England Posts: 915 Thanks: 271 Quote: What is resolute? resolute on its own is an adjective which means determined or steadfast. the resolute is a noun which is the result of resolving vectors (forces etc) Since skeeter has decided to do you homework for you I will not interfere. However I will explain about friction. The force of friction is not constant. Before an object moves the frictional force is only exactly enough to oppose the disturbing or driving force (say D) So it rises from zero when there is no driving force up to a maximum value when object is just about to move. At this moment the value equals the coefficient of friction times the normal reaction between the surfaces F = uR. As the driving force increases from zero to D, the friction increases from zero to F = uR which is always equal to the driving force so the body is alway in equilibrium and the rules of statics apply. u is called the coefficient of static friction. Once the body starts to move the coefficient of friction drops slightly and remains constant. This lower coefficient of friction is called the coefficient of dynamic friction. Because this coefficient is constant, the frictional force no longer varies as the driving force increases but also remains constant. Thus the net force on the body (equal to D - F) increases as D increases as D increases. In these circumstances Newtons laws of motion apply and the acceleration = mass times (D-F) March 22nd, 2016, 03:22 PM   #9 Newbie Joined: Mar 2016 From: Physics land Posts: 7 Thanks: 0 Quote: Originally Posted by skeeter disregarding friction for a moment ... if $m_2g > m_1g\sin{\theta}$, then $m_1$ will slide up the incline now, throw friction in ... 
if $f_s$ is large enough, then equilibrium will be achieved and $m_2g = m_1g\sin{\theta} + f_s$ $m_2g - m_1g\sin{\theta} = f_s$ recall $f_s \le \mu_s \cdot m_1g\cos{\theta}$ ... $m_2g - m_1g\sin{\theta} = f_s \le \mu_s \cdot m_1g\cos{\theta}$ $m_2g - m_1g\sin{\theta} \le \mu_s \cdot m_1g\cos{\theta}$ $m_2 - m_1\sin{\theta} \le \mu_s \cdot m_1\cos{\theta}$ now consider the other possibility ... if $m_2g < m_1g\sin{\theta}$, then $m_1$ will slide down the incline and the force of static friction will be directed up the incline. Of course, if $f_{s \, max}$ is not great enough, then you will have a dynamic situation where $m_1$ slides up or down the incline depending on the relative magnitudes of $m_2g$ and $m_1g\sin{\theta}$ Thank you, I have calculated from your help that m1 will slide down the slope; thank you for the clarity in your calculation. Finally, I am unsure about the second part - would I rearrange the equation to make the mass of the second particle the subject of the equation? And from the conditions you have said. Thank you. March 22nd, 2016, 03:34 PM #10 Newbie Joined: Mar 2016 From: Physics land Posts: 7 Thanks: 0 Quote: Originally Posted by skeeter disregarding friction for a moment ... if $m_2g > m_1g\sin{\theta}$, then $m_1$ will slide up the incline now, throw friction in ... if $f_s$ is large enough, then equilibrium will be achieved and $m_2g = m_1g\sin{\theta} + f_s$ $m_2g - m_1g\sin{\theta} = f_s$ recall $f_s \le \mu_s \cdot m_1g\cos{\theta}$ ... $m_2g - m_1g\sin{\theta} = f_s \le \mu_s \cdot m_1g\cos{\theta}$ $m_2g - m_1g\sin{\theta} \le \mu_s \cdot m_1g\cos{\theta}$ $m_2 - m_1\sin{\theta} \le \mu_s \cdot m_1\cos{\theta}$ now consider the other possibility ... if $m_2g < m_1g\sin{\theta}$, then $m_1$ will slide down the incline and the force of static friction will be directed up the incline. Of course, if $f_{s \, max}$ is not great enough, then you will have a dynamic situation where $m_1$ slides up or down the incline depending on the relative magnitudes of $m_2g$ and $m_1g\sin{\theta}$ Finally, I am unsure how to link the equations to SUVAT; I gather I would have a and v obviously, and since I won't be able to have s or t it means I must need u, which I don't know how to figure out. Thank you sir, this question has been bothering me all day.
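For anyone who wants to check the numbers, here is a short Python version of skeeter's setup. This is my own arithmetic, taking g = 9.8 m/s^2 and reading "upwards" in the second part as up the incline; both are assumptions on my part.

```python
import math

g = 9.8                      # m/s^2
m1, m2 = 6.0, 3.5            # kg
theta = math.radians(42)
mu_s, mu_k = 9 / 20, 7 / 20

# Part 1: compare the imbalance along the string with the maximum static friction
imbalance = abs(m2 * g - m1 * g * math.sin(theta))   # net pull if there were no friction
f_s_max = mu_s * m1 * g * math.cos(theta)
print(imbalance, f_s_max, "moves" if imbalance > f_s_max else "stays put")

# Part 2: mass m3 that gives m1 an acceleration a = 3.81 m/s^2 up the incline
#   m1 (up the incline +):  T - m1*g*sin(theta) - mu_k*m1*g*cos(theta) = m1*a
#   m3 (downward +):        m3*g - T = m3*a
a = 3.81
T = m1 * (a + g * math.sin(theta) + mu_k * g * math.cos(theta))
m3 = T / (g - a)
print(T, m3)                 # tension in N and the required mass in kg
```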
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9262101650238037, "perplexity": 938.3198423929307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318986.84/warc/CC-MAIN-20190823192831-20190823214831-00374.warc.gz"}
http://math.stackexchange.com/questions/222766/integrate-int-1-alpha2-3-2-sin-theta-d-theta-where-alpha-cos
# Integrate $\int (1+\alpha^{2})^{-3/2} \sin \theta d \theta$ where $\alpha = \cos \theta + a \sin \theta$ with a constant $a$ Integrate $$\int (1+\alpha^{2})^{-3/2} \sin \theta d \theta$$ where $\alpha = \cos \theta + a \sin \theta$ with a constant $a$. How could I possibly do that? Trigonometrical manipulations? Or integration by parts? - Hint Making the change of variables $\theta=\arctan(t)$ casts the integral to the form $$\int \!{\frac {t}{ \left( 2+2\,at+({a}^{2}+1){t}^{2} \right) ^{3/2 }}}{dt}=\frac{1}{\alpha}\int \!{\frac {t}{ \left( (t+\frac{a}{\alpha})^2+\frac{a^2+2}{\alpha} \right) ^{3/2 }}}{dt}\,.$$
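As a quick sanity check of the substitution (my own, using scipy for the numerical quadrature), the original integrand in theta and the transformed integrand in t give the same value over corresponding limits:

```python
import numpy as np
from scipy.integrate import quad

a = 0.7                       # any test value of the constant
lo, hi = 0.2, 1.1             # theta-limits kept inside (0, pi/2)

def original(theta):
    alpha = np.cos(theta) + a * np.sin(theta)
    return np.sin(theta) / (1 + alpha**2) ** 1.5

def substituted(t):           # theta = arctan(t), so d(theta) = dt / (1 + t^2)
    return t / (2 + 2*a*t + (a**2 + 1) * t**2) ** 1.5

I1, _ = quad(original, lo, hi)
I2, _ = quad(substituted, np.tan(lo), np.tan(hi))
print(I1, I2)                 # the two numbers agree
```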
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992226362228394, "perplexity": 371.7504174216043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051244/warc/CC-MAIN-20131204131731-00016-ip-10-33-133-15.ec2.internal.warc.gz"}
http://www.jiskha.com/display.cgi?id=1312631104
# Physics A particular Ferris wheel (a rigid wheel rotating in a vertical plane about a horizontal axis) at a local carnival has a radius of 20.0 m and it completes 1 revolution in 9.84 seconds. (a) What is the speed (m/s) of a point on the edge of the wheel? Using the coordinate system shown, find: (b) the x component of the acceleration of point A at the top of the wheel; (c) the y component of the acceleration of point A at the top of the wheel; (d) the x component of the acceleration of point B at the bottom of the wheel; (e) the y component of the acceleration of point B at the bottom of the wheel; (f) the x component of the acceleration of point C on the edge of the wheel; (g) the y component of the acceleration of point C on the edge of the wheel. • Physics - (a) Divide 2*pi*R by 9.84 s for the speed. The speed is suspiciously high, and probably unsafe. (b) and (c) At the top of the wheel, the centripetal acceleration is down (-y). Its magnitude is V^2/R everywhere along the outer edge. (d) and (e) At the bottom of the wheel, the centripetal acceleration is up (+y). (f) and (g) It depends upon which edge. Location C is not shown.
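Following the answer's outline, a minimal Python check of the numbers (my own; it only covers points A and B, since the location of C is not given):

```python
import math

R = 20.0      # m
T = 9.84      # s per revolution

v = 2 * math.pi * R / T      # (a) speed of a point on the rim
a_c = v ** 2 / R             # magnitude of the centripetal acceleration

print(round(v, 2), "m/s")    # about 12.77 m/s
print("A (top):    ax =", 0.0, " ay =", round(-a_c, 2))   # acceleration points down
print("B (bottom): ax =", 0.0, " ay =", round(a_c, 2))    # acceleration points up
```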
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9374854564666748, "perplexity": 676.3505178160752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609610.87/warc/CC-MAIN-20170528082102-20170528102102-00028.warc.gz"}
https://www.physicsforums.com/threads/relative-motion-problem-help.67895/
# Homework Help: Relative Motion problem help! 1. Mar 19, 2005 ### Byrne Relative Motion problem... help! (now with scans of what I have done so far!) I guess I will write the entire question out first, I am current stuck at part (e) and will provide what I believe are correct answers to the previous portions of the question: The pilot of a small plane leaves "Here" and set course for "There." It is known that the distance from Here to There is 547 km in the direction [East 29.4 North]. The pilot set course without considering the wind. The maximum crusing speed of the plane was known to be 316 km/h. The pilot sets out at the maximum crusing speed. The wind blows anyway! The wind in the entire area was 78.5 km/h in the direction [North 7 West] and constant throughout the entire flight. Calculate the following: (a) The time the pilot calculated it would take for the flight. Distance of flight (574 km) divided by speed of plane (316 km/h) gives us what the pilot calculated for the time of the flight (1.73 h). (b) The position the pilot expected to be at after 1.23 hours in the air. (Express this using both component notation and magnitude/direction notation.) I determined the position to be 388.7 km [West 29.4 East]. (c) The actual velocity of the plane relative to the ground including the wind. (Express this using both component notation and magnitude/direction notation.) I determined the velocity relative to the ground to be 353.4 km/h [East 41.3 North] by solving the triangle using the information previously given. (d) The actual position of the plane 1.23 hours after starting. I determined the actual position of the plane to be 434.7 km [East 41.3 North]. (e) This is where I'm currently at! The displacement from the cirrent location of the pilot 1.23 hours after starting to the actual destination. (Resultatnt Displacement = d2 - d1... a vector subtraction!) Basically what I did was create a triangle by connecting the actual displacement vector (434.7 km [E 41.3 N]) to the actual destination vector (547 km [E 29.4 N]) at their tails because it is vector subtraction. I determined the interior angle to be 11.9 degrees and used the cosine law to solve for the resultant displacement to find it to be 151.1 km, but after checking my results my answers did not seem to make sense. Please help! (f) The required heading for the pilot to get to the destination from the current location 1.23 hours after starting out. (Inlcude the wind in this calculation!) (Express this using both component notation and magnitude/direction notation.) Not here yet... (e) The length of time required for the entire flight. Not here yet... Thanks to anyone for their help! It is truly appreciated... Last edited: Mar 19, 2005 2. Mar 19, 2005 ### arildno Welcome to PF! To verify numerical answers is extremely tedious and time-consuming. Do not expect people at PF to do that part of the job. So, what do we do? We will check your procedures, and your set-up , and leave the number-crunching to you. For example: define your quantities symbolically, and we will will help you manipulate the "symbolic" equations you gain correctly. (That is, let for example $$\vec{v}_{p}$$ denote the plane's velocity) From what I can discern, you have the right set-up, so I assume you've made a numerical mistake somewhere. You might try to re-write your attempts symbolically; it will be easier for people to notice your (possible) mistakes then. Last edited: Mar 19, 2005 3. 
Mar 19, 2005 ### Byrne It would probably be easier just for me to scan everything I need to show... I'll be right back. 4. Mar 19, 2005 ### Byrne Sorry... I guess I should have resized them, but they are in order and the question is #3... http://www.ocgn.com/features/games/misc_images/physics1.jpg [Broken] http://www.ocgn.com/features/games/misc_images/physics2.jpg [Broken] http://www.ocgn.com/features/games/misc_images/physics3.jpg [Broken] http://www.ocgn.com/features/games/misc_images/physics4.jpg [Broken] Last edited by a moderator: May 1, 2017 5. Mar 19, 2005 ### Byrne Okay, I guess I'll go even a bit further here... in part (e) (see page 4), I solved the unknown side to be 151.1 km... which is what I am trying to determine. However, this side creates a triangle that cannot be possible because the angles do not add up 180 degrees. The three side lengths are 547 km, 434.7 km, and 151.1 km... I must have made a mistake earlier on the question, but can someone at least read part (e) and tell me what figures I should be using (ie... you should be using the number you found in part (c) with the number given in the question)... that would be appreciated. It's really starting to bug me!
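Working in x-y components reproduces the numbers in the thread and answers part (e) directly. This is my own sketch (angles measured counter-clockwise from east), so treat the final direction as a check rather than the official answer:

```python
import math

def vec(mag, angle_deg_from_east):
    a = math.radians(angle_deg_from_east)
    return (mag * math.cos(a), mag * math.sin(a))

v_plane = vec(316.0, 29.4)          # air velocity, [E 29.4 N]
v_wind  = vec(78.5, 90.0 + 7.0)     # [N 7 W] is 97 degrees from east
v_ground = (v_plane[0] + v_wind[0], v_plane[1] + v_wind[1])

t = 1.23
pos = (v_ground[0] * t, v_ground[1] * t)     # actual position, part (d)
dest = vec(547.0, 29.4)                      # Here -> There

# Part (e): displacement still required = destination - current position
d = (dest[0] - pos[0], dest[1] - pos[1])

print(math.hypot(*v_ground), math.degrees(math.atan2(v_ground[1], v_ground[0])))  # ~353 km/h, ~41.3 deg
print(math.hypot(*pos))                                                           # ~434.7 km
print(math.hypot(*d), math.degrees(math.atan2(d[1], d[0])))                       # ~151 km, a few degrees south of east
```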
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8364102244377136, "perplexity": 1557.3070788061823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741578.24/warc/CC-MAIN-20181114020650-20181114042650-00309.warc.gz"}
https://forum.allaboutcircuits.com/tags/induction-motor/
# induction motor 1. ### DC to 3-phase VFD for providing induction motor with battery power (~8kW) I'm building a mobile robot which is quite heavy and needs some serious HP. For this and other reasons I want to mount 4 induction motors to each wheel (for a total output of about 8kW) and power them via batteries. However, those motors need 3-phase current and I obviously can't plug them... 2. ### Speed Control of single phase shaded pole motor I have a group project where I have to control the speed of a single-phase shaded pole induction motor. My plan was to use a Triac-based circuit however, all Triac circuits I have seen use a potentiometer to control the speed of the motor. In this project, the motor's speed is dependent on an... 3. ### Single phase induction motor trips breaker Hi, So I've got this rotary floor sander I've got to fix. When I opened it I noticed someone had fixed it before and 1 of the capacitors had swollen. I've been told it had been working for a long time before the capacitor broke. I assumed the silver one was the run capacitor and the blue broken...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8110339045524597, "perplexity": 2015.7182643979104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358591.95/warc/CC-MAIN-20211128194436-20211128224436-00019.warc.gz"}
http://www.ma.utexas.edu/mp_arc-bin/mpa?yn=98-593
98-593 S. Bastea, J. L. Lebowitz Spinodal Decomposition in Binary Gases (112K, ReVTeX) Sep 9, 98 Abstract. We carried out three-dimensional simulations, with about $1.4\times10^6$ particles, of phase segregation in a low density binary fluid mixture, described mesoscopically by energy and momentum conserving Boltzmann-Vlasov equations. Using a combination of Direct Simulation Monte Carlo (DSMC) for the short range collisions and a version of Particle-In-Cell (PIC) evolution for the smooth long range interaction, we found dynamical scaling after the ratio of the interface thickness (whose shape is described approximately by a hyperbolic tangent profile) to the domain size is less than $\sim0.1$. The scaling length $R(t)$ grows at late times like $t^\alpha$, with $\alpha=1$ for critical quenches and $\alpha=\frac{1}{3}$ for off-critical ones. We also measured the variation of temperature, total particle density and hydrodynamic velocity during the segregation process. Files: 98-593.tex
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9755013585090637, "perplexity": 3412.043700821062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805708.41/warc/CC-MAIN-20171119172232-20171119192232-00570.warc.gz"}
https://www.physicsforums.com/threads/inner-product.389918/
# Inner product 1. Mar 26, 2010 ### Dustinsfl In C[0,1], with inner product defined by (3), consider the vectors 1 and x. Find the angle theta between 1 and x. (3)$$\int_{0}^{1}f(x)g(x)dx$$ Find the angle theta between 1 and x I don't know what to do with polynomial inner product vector space 2. Mar 27, 2010 ### Dick Can't you compute the inner product of 1 and x given the definition? 3. Mar 27, 2010 ### Dustinsfl Yes but how is that going to find the angle? 4. Mar 27, 2010 ### Staff: Mentor How would you find the angle between two ordinary vectors? Wouldn't you use something like this? $u \cdot v = |u| |v| cos(\theta)$ This idea can be generalized to any inner product space. 5. Mar 27, 2010 ### Dustinsfl After taking the integral, I obtain 1/2. How does that help me obtain the angle of pi/6? 6. Mar 27, 2010 ### Dick Didn't you see Mark44's suggestion? Set u=1 and v=x and then figure out u.v, |u|, |v| and then what cos(theta) is. Remember |v|=sqrt(v.v). It's a number, not a function. 7. Mar 27, 2010 ### Dustinsfl By definition, <1,1> is the integral from 0 to 1 of 1^2 which is 1/2. And <x,x> is the integral from 0 to 1 of x^2 which is 1/3. Now the equation is 1/2=1/6 cos theta so theta is arccos 3 which isn't pi/6. 8. Mar 27, 2010 ### Dick The integral of 1*1 from 0 to 1 is 1/2??? And I already warned you that |x|=sqrt(<x,x>). 9. Mar 27, 2010 ### Dustinsfl 1 sorry. The definition of inner product on this space is (3) <u, v>=$$\int_{0}^{1}f(x)g(x)$$ which implies <x, x>=$$\int_{0}^{1}x^{2}=1/3$$. So now we have 1/2=1/3 cos theta and now 3/2=cos theta which is greater than 1 so it doesn't exist. 10. Mar 27, 2010 ### Dick <x,x> is 1/3. That doesn't mean |x|=1/3. 11. Mar 27, 2010 ### Dustinsfl Ok I understand now.
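For reference, the calculation the thread converges on can be written out in one line with the inner product defined above (a worked summary added here, not part of the original exchange):

$$\langle 1, x\rangle = \int_0^1 x\,dx = \frac{1}{2}, \qquad \|1\| = \sqrt{\int_0^1 1^2\,dx} = 1, \qquad \|x\| = \sqrt{\int_0^1 x^2\,dx} = \frac{1}{\sqrt{3}},$$

$$\cos\theta = \frac{\langle 1, x\rangle}{\|1\|\,\|x\|} = \frac{1/2}{1\cdot(1/\sqrt{3})} = \frac{\sqrt{3}}{2} \quad\Longrightarrow\quad \theta = \frac{\pi}{6}.$$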
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9893510937690735, "perplexity": 3368.651548134558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648177.88/warc/CC-MAIN-20180323024544-20180323044544-00701.warc.gz"}
http://cms.math.ca/cjm/kw/manifold?page=2
Search results: All articles in the CJM digital archive with keyword "manifold". Results 26 - 30 of 30.

26. CJM 2001 (vol 53 pp. 278). Helminck, G. F.; van de Leur, J. W., "Darboux Transformations for the KP Hierarchy in the Segal-Wilson Setting."
In this paper it is shown that inclusions inside the Segal-Wilson Grassmannian give rise to Darboux transformations between the solutions of the KP hierarchy corresponding to these planes. We present a closed form of the operators that procure the transformation and express them in the related geometric data. Further, the associated transformation on the level of $\tau$-functions is given.
Keywords: KP hierarchy, Darboux transformation, Grassmann manifold. Categories: 22E65, 22E70, 35Q53, 35Q58, 58B25.

27. CJM 2001 (vol 53 pp. 212). Puppe, V., "Group Actions and Codes."
A $\mathbb{Z}_2$-action with "maximal number of isolated fixed points" (i.e., with only isolated fixed points such that $\dim_k (\oplus_i H^i(M;k)) = |M^{\mathbb{Z}_2}|$, $k = \mathbb{F}_2$) on a $3$-dimensional, closed manifold determines a binary self-dual code of length $|M^{\mathbb{Z}_2}|$. In turn this code determines the cohomology algebra $H^*(M;k)$ and the equivariant cohomology $H^*_{\mathbb{Z}_2}(M;k)$. Hence, from results on binary self-dual codes one gets information about the cohomology type of $3$-manifolds which admit involutions with a maximal number of isolated fixed points. In particular, "most" cohomology types of closed $3$-manifolds do not admit such involutions. Generalizations of the above result are possible in several directions, e.g., one gets that "most" cohomology types (over $\mathbb{F}_2$) of closed $3$-manifolds do not admit a non-trivial involution.
Keywords: Involutions, $3$-manifolds, codes. Categories: 55M35, 57M60, 94B05, 05E20.

28. CJM 2000 (vol 52 pp. 695). Carey, A.; Farber, M.; Mathai, V., "Correspondences, von Neumann Algebras and Holomorphic $L^2$ Torsion."
Given a holomorphic Hilbertian bundle on a compact complex manifold, we introduce the notion of holomorphic $L^2$ torsion, which lies in the determinant line of the twisted $L^2$ Dolbeault cohomology and represents a volume element there. Here we utilise the theory of determinant lines of Hilbertian modules over finite von Neumann algebras as developed in [CFM]. This specialises to the Ray-Singer-Quillen holomorphic torsion in the finite dimensional case. We compute a metric variation formula for the holomorphic $L^2$ torsion, which shows that it is *not* in general independent of the choice of Hermitian metrics on the complex manifold and on the holomorphic Hilbertian bundle, which are needed to define it. We therefore initiate the theory of correspondences of determinant lines, that enables us to define a relative holomorphic $L^2$ torsion for a pair of flat Hilbertian bundles, which we prove is independent of the choice of Hermitian metrics on the complex manifold and on the flat Hilbertian bundles.
Keywords: holomorphic $L^2$ torsion, correspondences, local index theorem, almost Kähler manifolds, von Neumann algebras, determinant lines. Categories: 58J52, 58J35, 58J20.

29. CJM 1999 (vol 51 pp. 1123). Arnold, V. I., "First Steps of Local Contact Algebra."
We consider germs of mappings of a line to contact space and classify the first simple singularities up to the action of contactomorphisms in the target space and diffeomorphisms of the line. Even in these first cases there arises a new interesting interaction of local commutative algebra with contact structure.
Keywords: contact manifolds, local contact algebra, Diracian, contactian. Categories: 53D10, 14B05.

30. CJM 1999 (vol 51 pp. 585). Mansfield, R.; Movahedi-Lankarani, H.; Wells, R., "Smooth Finite Dimensional Embeddings."
We give necessary and sufficient conditions for a norm-compact subset of a Hilbert space to admit a $C^1$ embedding into a finite dimensional Euclidean space. Using quasibundles, we prove a structure theorem saying that the stratum of $n$-dimensional points is contained in an $n$-dimensional $C^1$ submanifold of the ambient Hilbert space. This work sharpens and extends earlier results of G. Glaeser on paratingents. As byproducts we obtain smoothing theorems for compact subsets of Hilbert space and disjunction theorems for locally compact subsets of Euclidean space.
Keywords: tangent space, diffeomorphism, manifold, spherically compact, paratingent, quasibundle, embedding. Categories: 57R99, 58A20.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9107353091239929, "perplexity": 1140.2200127526103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662022.71/warc/CC-MAIN-20160924173742-00086-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/mean-absolute-deviation-standard-deviation-ratio.199764/
# Mean Absolute Deviation/Standard Deviation Ratio 1. Nov 21, 2007 ### kimberley I ran across an interesting statistic today while doing some research, but it was stated as a matter of fact without explanation and there appears to be a dearth of material on it. It was stated that the Mean Absolute Deviation ("MAD") of a Normal (Gaussian) Distribution is .7979 of a Normal Distribution's Standard Deviation ("SD"). The simple equation offered was MAD:SD=SQRT (2/pi). Question 1: Assuming this statement is true, why is it true? That is, what is it about the Normal Distribution that would cause a MAD to be .7979 of the SD? Question 2: Again, assuming this statment is true, how would you reconcile two samples, one of which has a more favorable Jarque-Bera Test Statistic than another, but a less favorable MAD/SD Ratio? Kimberley 2. Nov 21, 2007 ### EnumaElish For any arbitrary probability distribution F, MAD(F) < SD(F) is always the case. As for Q.1., the normal dist. has the characteristic that as its "spread" increases one unit as measured by squared deviations (i.e., the variance), its spread increases 0.7979 of a unit as measured by absolute deviations. Remember that SD is "the sqrt of Var" = "sqrt of average squared error," and squaring inflates outliers. (This link shows a MAD of 0.681 for the Normal -- my guess is it is simulated data; which contain some error.) Contrast this with the Double Exponential, whose MAD is about half of its SD. Compared to SD, MAD gives less weight to outliers, so distributions with light tails tend to have a MAD/SD ratio closer to 1. This is confirmed by the observation that the Normal does indeed have lighter tails than the Double Exp'l. Q.2 is challenging. MAD and Skew measure different characteristics: MAD is dispersion, which is a second-order moment, so is SD. But the ratio MAD/SD is akin to a fourth-order moment (Kurtosis). Skew is third-order. It should be possible to devise a joint test of skewness and Kurtosis, which would be a Golden Key, but I don't have a ready formula for that. A practical approach may be to say "a third-order statistic is obviously more important than a fourth-order one," and devise an ad-hoc two-step test: If distribution F has excess skew closer to 0 than distribution G, then we conclude F is more normal than G. If distributions F and G have "practically" the same excess skew, then we compare their MAD/SD ratios; the one closer to 0.7979 is more normal. Again, the Golden Key would be to devise a joint test. Last edited: Nov 22, 2007 3. Nov 22, 2007 ### D H Staff Emeritus It's an easy calculation. The mean absolute deviation is the expected value of the absolute value of the random variable: $$E(|x|) = \int_{-\infty}^{\infty}|x|\, p(x) dx$$ As both absolute value and the standard Gaussian distribution are even functions, $$E(|x|) = 2\int_0^{\infty}x \frac 1 {\sigma\sqrt{2\pi}} e^{-\,\frac {x^2} {2\sigma^2} } dx$$ A simple u-substitution does the trick here, $u=\exp(-x^2/(2\sigma^2))$: $$E(|x|) = \left.-\,\sigma \sqrt{\frac 2 {\pi}} e^{-\,\frac {x^2} {2\sigma^2} }\right|_0^{\infty} = \sigma \sqrt{\frac 2 {\pi}}$$ 4. Nov 22, 2007 ### EnumaElish Last edited: Nov 23, 2007 5. Nov 29, 2007 ### judoudo i have a vaguely related question... consider random variables X and Y with E[X]=E[Y]=0 E[X^2]=E[Y^2] (ie the same standard deviation) and for n>=3 E[X^n]>=E[Y^n]>=0 is there a way to conclude that E[|X|]>=E[|Y|] ? 6. Mar 29, 2011 ### kentrbailey I assume you mean that the higher moments (3, 4, etc.) of |X| are always higher for X than Y. 
If that is what you mean, then I think I can provide a counterexample. Consider X to be the absolute value of a rescaled version (standard deviation 1) of a central T distribution on 9 degrees of freedom. Suppose Y is the absolute value of a standard mean-zero normal variable. Then I believe (from simulation) that the higher moments (2, 3, ...) of the absolute value of the rescaled t variable are all greater than the corresponding higher moments of the absolute normal. Yet the expected value of the absolute T-variable is LESS than that of the absolute normal. Indeed you would not intuitively expect the relationship to be "higher, equal, higher, higher..." as the exponent goes 1, 2, 3, 4, ...; rather it goes "less, equal, higher, higher, ..." 7. Mar 29, 2011 ### kentrbailey After I sent my post to the question you did NOT ask (with absolute value signs), I realized you might have meant it just the way you posed it! Here is a counterexample to the question you ACTUALLY posed. Let X and Y both have 2-point distributions. X is 9 with probability 0.1 and -1 with probability 0.9, so E(X) = 0, Var(X) = 9. Y is 4.5 with probability 4/13 and -2 with probability 9/13, so E(Y) = 0, Var(Y) = 9. E( |X| ) = 1.8 and E( |Y| ) is clearly greater than 1.8, since it has to be between 2 and 4.5. Yet the higher moments of X are clearly all greater than those of Y. I apologize for assuming you did not mean to leave off the absolute value signs.
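Both the closed-form result $E|X| = \sigma\sqrt{2/\pi}$ derived above and the two-point counterexample in the last post are easy to check numerically. The following is a small illustrative sketch (mine, not from the thread); the helper name and the sample size are arbitrary choices:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# 1) MAD of a normal variable: E|X| should be sigma * sqrt(2/pi) ~ 0.7979 * sigma
sigma = 2.5
sample = rng.normal(0.0, sigma, size=1_000_000)
print(np.mean(np.abs(sample)), sigma * math.sqrt(2.0 / math.pi))   # both close to 1.995

# 2) kentrbailey's two-point counterexample: same mean and variance, but X has
#    larger higher moments than Y while E|X| < E|Y|
def moment(values, probs, k, absolute=False):
    v = np.abs(values) if absolute else values
    return float(np.sum(probs * v ** k))

x_vals, x_probs = np.array([9.0, -1.0]), np.array([0.1, 0.9])
y_vals, y_probs = np.array([4.5, -2.0]), np.array([4.0 / 13.0, 9.0 / 13.0])

for k in (1, 2, 3, 4):
    print(k, moment(x_vals, x_probs, k), moment(y_vals, y_probs, k))
print("E|X| =", moment(x_vals, x_probs, 1, absolute=True),
      " E|Y| =", moment(y_vals, y_probs, 1, absolute=True))
```

The raw moments printed in the loop confirm that X dominates Y for k = 3 and 4 while both share mean 0 and variance 9, and the last line shows E|X| = 1.8 against E|Y| ≈ 2.77.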
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9352138042449951, "perplexity": 974.6853929513317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172902.42/warc/CC-MAIN-20170219104612-00542-ip-10-171-10-108.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/27279/a-nice-overview-and-maybe-derivation-of-the-poincar%c3%a9-transformations-of-the-ve
A nice overview (and maybe derivation) of the Poincaré transformations of the Vector Spherical Harmonics With $Y_{lm}(\vartheta,\varphi)$ being the Spherical Harmonics and $z_l^{(j)}(r)$ being the Spherical Bessel functions ($j=1$), Neumann functions ($j=2$) or Hankel functions ($j=3,4$) defining $$\psi_{lm}^{(j)}(r,\vartheta,\varphi)=z_l^{(j)}(r)Y_{lm}(\vartheta,\varphi),$$ what are representations of the Poincaré transformations applied to the Vector Spherical Harmonics $$\vec L_{lm}^{(j)} = \vec\nabla \psi_{lm}^{(j)},\\ \vec M_{lm}^{(j)} = \vec\nabla\times\vec r \psi_{lm}^{(j)},\\ \vec N_{lm}^{(j)} = \vec\nabla\times\vec M_{lm}^{(j)}$$ ? Does any publication cover all Poincaré-transformations, i.e. not only translations and rotations but also Lorentz boosts? I'd prefer one publication covering all transformations at once due to the different normalizations sometimes used. - disclaimer: I also asked this at MathOverflow –  Tobias Kienzler Mar 1 '12 at 12:19
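As a purely numerical aside (not part of the question), the scalar functions $\psi_{lm}^{(j)}$ and a crude version of $\vec L_{lm}$ can be evaluated with SciPy, which is convenient when sanity-checking transformation formulas against explicit values. Assumptions to note: SciPy's `sph_harm` takes the azimuthal angle before the polar one, and the finite-difference gradient below is only a rough stand-in for $\vec\nabla\psi$.

```python
import numpy as np
from scipy.special import sph_harm, spherical_jn

def psi(l, m, r, theta, phi):
    """psi_lm^(1) = j_l(r) Y_lm(theta, phi); theta is the polar angle, phi the azimuth."""
    # scipy's sph_harm takes the azimuthal angle first, then the polar angle
    return spherical_jn(l, r) * sph_harm(m, l, phi, theta)

def psi_cartesian(l, m, x, y, z):
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arccos(z / r)
    phi = np.arctan2(y, x)
    return psi(l, m, r, theta, phi)

def L_vec(l, m, point, h=1.0e-6):
    """Crude central-difference gradient of psi, approximating L_lm = grad(psi)."""
    grad = []
    for axis in range(3):
        step = np.zeros(3)
        step[axis] = h
        grad.append((psi_cartesian(l, m, *(point + step))
                     - psi_cartesian(l, m, *(point - step))) / (2.0 * h))
    return np.array(grad)

print(psi_cartesian(2, 1, 0.3, 0.4, 0.5))
print(L_vec(2, 1, np.array([0.3, 0.4, 0.5])))
```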
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9743557572364807, "perplexity": 419.0479780113048}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066586.13/warc/CC-MAIN-20150827025426-00060-ip-10-171-96-226.ec2.internal.warc.gz"}
https://computergraphics.stackexchange.com/questions/12326/how-is-the-beam-transmittance-calculated-in-pbrt-v3
# How is the beam transmittance calculated in PBRT V3? In pbrt v3, the book gives this description of beam transmittance, but I don't know how to solve the differential equation like it says to get Tr , can someone please tell me how to solve the differential equation? Thanks a lot. To review that standard derivation: suppose we want a function $$y(x)$$ obeying the differential equation $$\mathrm{d}y/\mathrm{d}x = -ky$$, for some constant $$k$$. Then we can solve the equation as follows: \begin{aligned} \mathrm{d}y &= -ky \, \mathrm{d}x \\ \frac{\mathrm{d}y}{y} &= -k \, \mathrm{d}x \\ \int \frac{\mathrm{d}y}{y} &= -\int k \, \mathrm{d}x \\ \ln y &= -kx \\ y &= e^{-kx} \end{aligned} (there should actually be some constants of integration in there, but I left them out since they're not important for this answer.) Now, suppose we generalize and make the constant $$k$$ into a function $$k(x)$$. Then we can repeat this derivation, but we will not be able to do the integral on the right side, since $$k(x)$$ is unspecified. The result will then be: $$y = e^{-\int k(x) \, \mathrm{d}x}$$ The derivation for the transmittance along a ray is just the same, but with some variables renamed: $$y$$ becomes $$L$$, $$x$$ is now the parameter $$t$$ along the ray, and $$k(x)$$ is called $$\sigma_\mathrm{t}(\mathrm{p})$$.
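When $\sigma_\mathrm{t}$ varies along the ray and the integral has no closed form, the optical depth can be estimated by simple quadrature. Here is a rough sketch of that idea (an illustration only, not pbrt's actual implementation, which has its own sampling machinery):

```python
import math

def transmittance(sigma_t, t_min, t_max, steps=1000):
    """Estimate Tr = exp(-integral of sigma_t along the ray) with the midpoint rule."""
    dt = (t_max - t_min) / steps
    optical_depth = sum(sigma_t(t_min + (i + 0.5) * dt) * dt for i in range(steps))
    return math.exp(-optical_depth)

# Homogeneous medium: sigma_t is constant, so Tr should reduce to Beer's law exp(-sigma * d)
sigma = 0.7
print(transmittance(lambda t: sigma, 0.0, 2.0))
print(math.exp(-sigma * 2.0))
```

For a homogeneous medium the estimate collapses to Beer's law $e^{-\sigma_\mathrm{t} d}$, mirroring how the general solution above reduces to $e^{-kx}$ when $k$ is constant.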
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.000008225440979, "perplexity": 270.3302156114568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00771.warc.gz"}
https://ccrma.stanford.edu/~jos/pasp/Incorporating_Control_Motion.html
#### Incorporating Control Motion Let denote the vertical position of the mass in Fig.9.22. (We still assume .) We can think of as the position of the control point on the plectrum, e.g., the position of the "pinch-point" holding the plectrum while plucking the string. In a harpsichord, can be considered the jack position [350]. Also denote by the rest length of the spring in Fig.9.22, and let denote the position of the "end" of the spring while not in contact with the string. Then the plectrum makes contact with the string when where denotes string vertical position at the plucking point . This may be called the collision detection equation. Let the subscripts and each denote one side of the scattering system, as indicated in Fig.9.23. Then, for example, is the displacement of the string on the left (side ) of plucking point, and is on the right side of (but still located at point ). By continuity of the string, we have When the spring engages the string ( ) and begins to compress, the upward force on the string at the contact point is given by where again . The force is applied given (spring is in contact with string) and given (the force at which the pluck releases in a simple max-force model).10.15 For or the applied force is zero and the entire plucking system disappears to leave and , or equivalently, the force reflectance becomes and the transmittance becomes . During contact, force equilibrium at the plucking point requires (cf. §9.3.1) (10.25) where as usual (§6.1), with denoting the string tension. Using Ohm's laws for traveling-wave components (p. ), we have where denotes the string wave impedance (p. ). Solving Eq. (9.25) for the velocity at the plucking point yields or, for displacement waves, (10.26) Substituting and taking the Laplace transform yields Solving for and recognizing the force reflectance gives where, as first noted at Eq. (9.24) above, We can thus formulate a one-filter scattering junction as follows: This system is diagrammed in Fig.9.24. The manipulation of the minus signs relative to Fig.9.23 makes it convenient for restricting to positive values only (as shown in the figure), corresponding to the plectrum engaging the string going up. This uses the approximation , which is exact when , i.e., when the plectrum does not affect the string displacement at the current time. It is therefore exact at the time of collision and also applicable just after release. Similarly, can be used to trigger a release of the string from the plectrum.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9575914740562439, "perplexity": 2284.3535700791394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00551-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/equivalent-dose-problem.811432/
# Homework Help: Equivalent Dose Problem 1. Apr 30, 2015 ### CheesyPeeps A worker in a nuclear power station is exposed to 3∙0 mGy of gamma radiation and 0∙50 mGy of fast neutrons. The radiation weighting factor for gamma radiation is 1 and for fast neutrons is 10. The total equivalent dose, in mSv, received by the worker is A 3·50 B 8·00 C 30·5 D 35·0 E 38·5. The answer I got (using H=DWR) was C, 30.5 mSv, but the answer sheet said that the correct answer was B, 8.00 mSv. I would be most grateful if you could please suggest where I went wrong? (This question is from the SQA National 5 Physics Specimen Paper) Update: I asked a friend about it and realised my mistake. I converted mSv into Sv when I didn't need to, which then affected my answer. Last edited: Apr 30, 2015 2. Apr 30, 2015 ### haruspex That does not explain your wrong answer. It is explained if you crossed over the weighting factors.
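For completeness, the equivalent-dose formula the thread is using, $H = \sum_R w_R D_R$, gives answer B directly:

$$H = \sum_R w_R D_R = (1)(3.0~\text{mGy}) + (10)(0.50~\text{mGy}) = 8.0~\text{mSv}$$

Crossing the weighting factors instead gives $3.0 \times 10 + 0.50 \times 1 = 30.5$, which is the arithmetic haruspex is pointing to.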
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9419920444488525, "perplexity": 3351.5531387026526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861981.50/warc/CC-MAIN-20180619080121-20180619100121-00136.warc.gz"}
http://www.gradesaver.com/textbooks/math/other-math/basic-college-mathematics-9th-edition/chapter-8-geometry-8-8-pythagorean-theorem-8-8-exercises-page-593/15
## Basic College Mathematics (9th Edition) Published by Pearson # Chapter 8 - Geometry - 8.8 Pythagorean Theorem - 8.8 Exercises: 15 31.623 #### Work Step by Step We can use a calculator to estimate the given square root. The calculator shows that $\sqrt{1000}=31.62278$, which rounds to $31.623$. This makes sense, given that $\sqrt{1024}=32$.
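The same estimate is easy to reproduce without a handheld calculator; for example, a two-line check in Python (added here purely as an illustration):

```python
import math

print(round(math.sqrt(1000), 3))  # 31.623
print(math.sqrt(1024))            # 32.0, the nearby perfect square used as a sanity check
```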
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.816231906414032, "perplexity": 1387.6700028174032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426639.7/warc/CC-MAIN-20170726222036-20170727002036-00684.warc.gz"}
http://mathoverflow.net/feeds/question/19240
Algebraic Proof of 4-Colour Theorem? - MathOverflow (question feed for http://mathoverflow.net/questions/19240/algebraic-proof-of-4-colour-theorem)

Question, asked by Tony Huynh (2010-03-24):

**4-Colour Theorem.** Every planar graph is 4-colourable.

This theorem of course has a well-known history. It was first proven by Appel and Haken in 1976, but their proof was met with skepticism because it heavily relied on the use of computers. The situation was partially remedied 20 years later, when Robertson, Sanders, Seymour, and Thomas published a new proof of the theorem. This new proof still relied on computer analysis, but to such a lower extent that their proof was actually verifiable. Finally, in 2005, Gonthier and Werner used the Coq proof assistant (http://en.wikipedia.org/wiki/Coq) to formalize a proof, so I suppose only the most die hard skeptics remain.

My question stems from reading this paper by Robin Thomas (http://people.math.gatech.edu/~thomas/PAP/update.pdf). In it, he describes several interesting reformulations of the 4-colour theorem. Here is one:

Note that the cross-product on vectors in $\mathbb{R}^3$ is not an associative operation. We therefore define a *bracketing* of a cross-product $v_1 \times \dots \times v_n$ to be a set of brackets which makes the product well-defined.

**Theorem.** Let $i, j, k$ be the standard unit vectors in $\mathbb{R}^3$. For any two different bracketings of the product $v_1 \times \dots \times v_n$, there is an assignment of $i,j,k$ to $v_1, \dots, v_n$ such that the two products are equal and non-zero.

The surprising fact is that this innocent looking theorem implies the 4-colour theorem.

**Question.** Is anyone working on an algebraic proof of the 4-colour theorem (say by trying to prove the above theorem)? If so, what techniques are involved? What partial progress has been made? Or do most people consider the effort/reward ratio of such an endeavor to be too high?

I think it would be interesting to have an algebraic proof, even a very long one, particularly if the algebraic proof does not use computers. Given its connection to many other areas (Temperley-Lieb Algebras), the problem seems to be amenable to other forms of attack.

Answer by David Lehavi (2010-03-24, http://mathoverflow.net/questions/19240/algebraic-proof-of-4-colour-theorem/19242#19242): Does "Lie Algebras and the Four Color Theorem" by Dror Bar-Natan (http://front.math.ucdavis.edu/q-alg/9606016) qualify?

Answer by Igor Pak (2010-03-25, http://mathoverflow.net/questions/19240/algebraic-proof-of-4-colour-theorem/19274#19274): There is a classical approach by Birkhoff and Lewis, which remained dormant for decades. It was recently revived by Cautis and Jackson (start at http://www.math.columbia.edu/~scautis/papers/TL.pdf and proceed to http://www.math.columbia.edu/~scautis/papers/tuttefinal.pdf), using the Temperley-Lieb algebra.

Answer by Noah Snyder (2010-03-25, http://mathoverflow.net/questions/19240/algebraic-proof-of-4-colour-theorem/19336#19336): I must admit I'm a bit baffled about what the *question* is here, and about why so many people have voted it up. What are you looking for in an answer? I don't think it's appropriate to post speculation on the internet about which mathematicians are privately working on which big problems. As to public work, you seem to have a weirdly restrictive view of what "working on" and "partial progress" mean that don't fit with my understanding of how mathematics works. Several papers have been written on the subject of possible algebraic proofs of the 4-color theorem (look at Google Scholar or MathSciNet for papers which cite the Saleur-Kauffman paper mentioned in the paper you're reading: http://scholar.google.com/scholar?cites=17068357875107999385&hl=en&as_sdt=20000000000), but if the Bar-Natan paper doesn't count for you then you're likely to be disappointed by all of them.

The long and short of it is that everyone in quantum topology would love to prove the 4-color theorem and occasionally thinks about it. There's lots of tantalizing clues that an algebraic argument has promise, but if anyone knew how to prove it they'd have done so. As far as I know there isn't anyone who is holed up in their attic thinking about only the 4-color theorem; instead there's a lot of people who, every time they find a new tool, think "hrm, I wonder if this tool would work on the 4-color theorem?"

Answer by guest (2010-04-02, http://mathoverflow.net/questions/19240/algebraic-proof-of-4-colour-theorem/20194#20194): An article in Scientific American, Jan 2003 offered a supposed counterexample, that sparked my interest in the problem. That and the complexity of the Appel and Haken proof motivated me to do my own study. It has minimal math, but is consistent, and approaches the problem from an entirely different direction. If interested it's at insight.awardspace.info.

Answer by Anatoly (2012-10-20, http://mathoverflow.net/questions/19240/algebraic-proof-of-4-colour-theorem/110165#110165): Algebraic Proof of 4-Colour Theorem? see please http://www.math.accent.kiev.ua/article/00/4ct-2-.htm
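The cross-product reformulation is concrete enough to test by brute force for small $n$, which gives a feel for what the theorem is claiming. Below is a rough illustrative sketch (not from the thread); the function names are made up, and the exhaustive search is only practical for small $n$:

```python
import itertools
import numpy as np

I, J, K = np.eye(3)   # the standard unit vectors i, j, k

def bracketings(indices):
    """Yield every full parenthesization of v_1 x ... x v_n as a nested tuple tree."""
    if len(indices) == 1:
        yield indices[0]
        return
    for split in range(1, len(indices)):
        for left in bracketings(indices[:split]):
            for right in bracketings(indices[split:]):
                yield ("x", left, right)

def evaluate(tree, assignment):
    if isinstance(tree, int):
        return assignment[tree]
    _, left, right = tree
    return np.cross(evaluate(left, assignment), evaluate(right, assignment))

def witness(n, b1, b2):
    """Search for an assignment of i, j, k making the two bracketings equal and non-zero."""
    for combo in itertools.product([I, J, K], repeat=n):
        p1, p2 = evaluate(b1, combo), evaluate(b2, combo)
        if np.allclose(p1, p2) and not np.allclose(p1, 0.0):
            return combo
    return None

n = 4
trees = list(bracketings(list(range(n))))
for b1, b2 in itertools.combinations(trees, 2):
    assert witness(n, b1, b2) is not None
print("checked all", len(trees), "bracketings for n =", n)
```

For $n = 4$ this checks all 5 bracketings (10 pairs) against 81 assignments and the assertion holds; the search space grows quickly with $n$, which is one way to see that the general statement is far from obvious.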
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8154957294464111, "perplexity": 1494.3779556446525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699675907/warc/CC-MAIN-20130516102115-00040-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.wikizero.com/simple/Heat
# Heat

Heat from the sun.

Heat, or thermal energy, is the sum of the kinetic energy of atoms or molecules. In thermodynamics, heat means energy which is moved between two things, when one of them has a higher temperature than the other thing. Adding heat to something increases its temperature, but heat is not the same as temperature. The temperature of an object is the measure of the average speed of the moving particles in it. The energy of the particles is called the internal energy. When an object is heated, its internal energy can increase to make the object hotter. The first law of thermodynamics says that the increase in internal energy is equal to the heat added minus the work done on the surroundings. Heat can also be defined as the amount of thermal energy in a system.[1] Thermal energy is the type of energy that a thing has because of its temperature. In thermodynamics, thermal energy is the internal energy present in a system in a state of thermodynamic equilibrium because of its temperature.[2] That is, heat is defined as a spontaneous flow of energy (energy in transit) from one object to another, caused by a difference in temperature between two objects; therefore, objects do not possess heat.[3]

## Properties of Heat

Heat is a form of energy and not a physical substance. Heat has no mass. Heat can move from one place to another in different ways, such as conduction, convection, and radiation. The measure of how much heat is needed to cause a change in temperature for a material is the specific heat capacity of the material. If the particles in the material are hard to move, then more energy is needed to make them move quickly, so a lot of heat will cause a small change in temperature. A different particle that is easier to move will need less heat for the same change in temperature. Specific heat capacities can be looked up in a reference table. Unless some work is done, heat moves only from hot things to cold things.

## Measuring heat

Heat can be measured. That is, the amount of heat given out or taken in can be given a value. The calorie is one of the units of measurement for heat, but the joule is also used for all kinds of energy, including heat. Heat is usually measured with a calorimeter, where the energy in a material is allowed to flow into nearby water, which has a known specific heat capacity. The temperature of the water is then measured before and after, and the heat can be found using the formula Q = m × c × ΔT, where m is the mass of the water, c is its specific heat capacity, and ΔT is the change in temperature.

## References

1. "How Physicists Define Heat". ThoughtCo. Retrieved 2018-04-24.
2. Thermal energy - Britannica.
3. Schroeder, Daniel R. (2000). Thermal Physics. New York: Addison Wesley Longman. ISBN 0201380277.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.84635990858078, "perplexity": 278.79751008346545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00432.warc.gz"}
https://overunity.com/2794/the-lee-tseung-lead-out-theory/375/
#### jeffc • Jr. Member • Posts: 96 ##### Re: The Lee-Tseung Lead Out Theory « Reply #375 on: September 30, 2007, 09:02:35 AM » ..... I have been taking a wait and see attitude until this point, considering that Lawrence has repeatedly stated direct involvement with the university, which implies they believe Lead Out has some sort of scientific basis.  If they really do deny even knowing about this, then we are all wasting time, as it is likely the other credible parties which have been claimed to be involved are fabrications as well. I hope this is not that case, as the optimist part of me would like to believe that there is actually something incredible going on in China with free energy.  I think if Lawrence has contacts at the university, he should help us clarify this apparent flaw in his claims. Regards, jeffc Dear jeffc, You should have asked for information earlier.  I was eager to share it - especially my picture at the Lecture Hall of Tsing Hua University.  It was an honor for me.  Almost all my friends and relatives have a hard copy. Please see the attached file.  Hans can now ask his Chinese Lecturer friends at Tsing Hua University to confirm our visit in September-October 2006. Lawrence Tseung Asking intelligent questions Lead Out valuable answers.  You can put good questions on multiple pulses in case I overlooked them. Hi Lawrence, Thank you for the photo and information.  The post which I made before was in direct response to Hans statement about his attempts to verify your relationship with Tsing Hua.  Prior to his statement, I had taken for granted that your credentials with respect to the university were as you stated, and was instead rather happy to try and follow the scientific elements of this discussion.  But when Hans brought what appeared to be a valid challenge to your information, it seemed to be something which needed your help to verify. If you read the last part of my post, it says that "I hope this is not that case", because I am an optimistic person and I have no reason to disbelieve anything you have said.  In any case, if we are able to confirm the truth in this issue, perhaps we can get back to science! 
I will continue to read this topic with interest, and ask questions when I think it will be helpful for reaching conclusions.  Please understand that I do not want to make a personal attack on anyone in this forum.  I'm here to be a small part of progress, and thats all. Regards, jeffc #### Free Energy | searching for free energy and discussing free energy ##### Re: The Lee-Tseung Lead Out Theory « Reply #375 on: September 30, 2007, 09:02:35 AM » #### Mr.Entropy • Full Member • Posts: 195 ##### Re: The Lee-Tseung Lead Out Theory « Reply #376 on: September 30, 2007, 10:43:29 PM » (1)   A pendulum with no pulse force  can be analyzed with the simple law of conservation of energy.  At any point in time, the sum of potential energy and kinetic energy are equal. mgh + 1/2 * m *v *v = constant (2)   When a pulse force is applied, how should the analysis be done?  Can we apply the Law of parallelogram of forces?  Will this pulse force supply energy to the system?  Will the tension of the string contribute to the resulting forces and the resulting energy? If you apply a pulse force F (vector) to the pendulum bob while it moves through a displacement D (vector), you do work equal to F dot D, and you should find then find that the total energy in the pendulum has changed by exactly that amount.  If you find that the new total energy exceeds to old total energy by more than that, then congratulations -- you have an overunity device. Note that this does not have anything to do with the tension on the string. For a pulse force, F is typically very large, and D is typically very small, and this makes it very difficult to measure F dot D directly.  Instead, what you want to do is charge some small resevoir of potential energy, like a spring or a capacitor (but not a battery) or a lifted weight, and expend that energy into the pendulum with a pulse.  It's easy to measure the energy in the original charge, and if you're careful about your engineering, you can ensure that that energy is transferred to the pendulum efficiently. Measuring the total energy in the pendulum is also problematic if you're adding magnets and stuff, but one way that works is to pick some point in the pendulum's swing (like the bottom) and measure its velocity there before and after the pulse.  The difference is the amount of energy you have added to the pendulum.  Without magnets and stuff, it's easier -- just measure the difference in the height of the pendulum's swing and use mgh. I'm going to actually propose an experiment in another post... Cheers, Mr. Entropy Footnote:  In cartesian coordinates, where vectors F and D are (Fx,Fy) and (Dx,Dy), F dot D = Fx*Dx + Fy*Dy.  This is equal to length_of_F * length_of_D * cos(angle_between_F_and_D). #### ltseung888 • Hero Member • Posts: 4363 ##### Re: The Lee-Tseung Lead Out Theory « Reply #377 on: October 01, 2007, 09:22:20 AM » Itseung888: What the heck does that have to do with anything?  The sum of potential energy and kinetic energy are always equal in any system...at least the ones we know about.  I fail to see how this applies in any way to your theory. I will re-read all of the previous posts (almost done) and I mean no disrespect to you at all. If I have read this right thus far, the "energy" you are describing could easily be described by a bouncing ball, which will also come to rest eventually.  It does not emit or give off or generate any additional energy even though it is "defying" gravity during half of it's cycles.  Maybe I am just ignorant, which is always possible. 
I always try to maintain an open mind on such matters.  I will research this phenomenon a little more.  Thanks for your reply. Bill Dear Bill and Mr. Entrophy, Thank you for your replies.  Bill is right in comparing the pendulum with the bouncing ball in the case of NO External Pulsing Force.  There is no obvious external energy entering the system.  If there were no loss of energy, we can apply the CoE and limit the energy of the system to be just the two terms - Potential Energy (mgh) and Kinetic Energy (1/2 * M* v* v).  The sum of these two terms will be the same while the ball bounces up and down or while the pendulum is swinging. We sometimes use the formular mgH = mgh + 1/2m*v*v) = 1/2m*V*V where H is the maximum Height reached and V is the highest velocity at the lowest point. h is the height at any instant. v is the velocity at the same instant. Scientists already know how to use gravitational energy in the following case.  Water from a dam drives a turbine to generate electricity.  The potenital energy of water is used.  However, to get the water back to its original height, we need to wait for the sun to evaporate the water and the rainfall will complete the cycle. If we want to continuously use gravitational energy, we should look for repeatable systems.  These systems, fortuanately, are available to us easily.  The first example is the simple pendulum with no external pulse force.  We know that we can safely apply the CoE and use the forumula mgh + 1/2 m*v*v = constant. Now consider exactly how we supply energy to the stationary pendulum.  The pendulum is hanging in the vertical position.  We apply a horizontal Force F.  The pendulum will have both a vertical and a horizontal displacement. (The D vector mentioned by Mr. Entrophy).  Just before we stop the force F, there will be THREE forces involved in the pendulum system. (1) The Weight of the Pendulum (or more exactly m *g where m is the mass and g is the gravitational acceleration at the surface of the Earth which is 9.8 m/s/s approxiately.  Please yell if you don't understand the statement) (2) The Horizontal Pulse Force F (Note that it is an externally applied Force controlled by the Engineer. ) (3) The Tension  of the String (If there were no string, the pendulum will not swing back) Bill rightly stated that these three forces are also vectors.  In order to determine the energy supplied by these three forces, we need to apply the vector mathematics of Force * Displacement.  (Note that it is Vector Mathematics and not the scalar multiplication.) The relationship of the forces MUST obey the Law of Parallelogram of Forces.  That set of Laws is taught at Mechanics 101 in secondary school physics. I shall pause here to get your response first. Regards, Lawrence « Last Edit: October 01, 2007, 11:23:59 PM by ltseung888 » #### Free Energy | searching for free energy and discussing free energy ##### Re: The Lee-Tseung Lead Out Theory « Reply #377 on: October 01, 2007, 09:22:20 AM » #### shruggedatlas • Hero Member • Posts: 549 ##### Re: The Lee-Tseung Lead Out Theory « Reply #378 on: October 01, 2007, 03:07:31 PM » (1) The Weight of the Pendulum (or more exactly m *g where m is the mass and g is the gravitational acceleration at the surface of the Earth which is 9.8 m/s/s approxiately.  Please yell if you don't understand the statement) (2) The Horizontal Pulse Force F (Note that it is an externally applied Force controlled by the Engineer. 
) (3) The Tension  of the String (If there were no string, the pendulum will not swing back) The weight of the pendulum plus the horizontal pulse force already sum up the force the object exerts on the string.  I do not know why you add (3) above.  If someone was pulling on the string, then yes, you would need to calculate that force, but seeing as the string is fixed at a point, why even include this? #### ltseung888 • Hero Member • Posts: 4363 ##### Re: The Lee-Tseung Lead Out Theory « Reply #379 on: October 01, 2007, 05:10:21 PM » (1) The Weight of the Pendulum (or more exactly m *g where m is the mass and g is the gravitational acceleration at the surface of the Earth which is 9.8 m/s/s approximately.  Please yell if you don't understand the statement) (2) The Horizontal Pulse Force F (Note that it is an externally applied Force controlled by the Engineer.) (3) The Tension of the String (If there were no string, the pendulum will not swing back) The weight of the pendulum plus the horizontal pulse force already sum up the force the object exerts on the string.  I do not know why you add (3) above.  If someone was pulling on the string, then yes, you would need to calculate that force, but seeing as the string is fixed at a point, why even include this? Dear shruggedatlas, When the pendulum is at rest, there were two forces.  They were the tension of the string and the weight of the pendulum bob.  They were equal and opposite to each other. When we applied a horizontal force on the pendulum bob, there would be three forces.  They were the tension of the string, the weight of the pendulum bob and the horizontal force.  When the Pendulum Bob moved to its new position  and was momentarily at rest under the influence of these three forces, we called this system of three forces as ?at equilibrium?.  When these 3 forces were at equilibrium, we could apply the Law of Parallelogram of Forces. This would be the situation when the first Pulse Force  was applied.  In this particular situation, the vigorous application of the Physics Law of Parallelogram of Forces and energy analysis conclusively indicated that the total energy entering the system was not  just the energy from the horizontal pulse force. The Pendulum Bob moved up.  There was displacement up.  The force up  was from the vertical component of the tension of the string.  This displacement up times the force up  represented work done or energy exchanged in the up direction.  This is the Lead Out  Energy! For more details, see a result from google search: http://www.antonine-education.co.uk/Physics_AS/Module_2/Topic_2/Forces%20and%20Equilibrium_files/frame.htm. I shall pause here for responses. Regards, Lawrence Tseung Three forces at equilibrium Leads Out the use of the Parallelogram of Forces. If one of the forces is Pulsed (repeated) at the right moment, resonance can result.  Useful Energy is not just the energy from the Pulse.  Useful Energy will include the Lead Out  Energy. #### Free Energy | searching for free energy and discussing free energy ##### Re: The Lee-Tseung Lead Out Theory « Reply #379 on: October 01, 2007, 05:10:21 PM » #### Pirate88179 • elite_member • Hero Member • Posts: 8366 ##### Re: The Lee-Tseung Lead Out Theory « Reply #380 on: October 01, 2007, 10:24:58 PM » Lawrence: I think I follow you and agree to a point.  In your system of the suspended pendulum, I understand and agree with the equalibrium of the three balanced forces at a given point.  
You mentioned the initial "push" as being controlled by the experimenter, and that the pulses could be timed in such a way as to produce resonance, which I also agree with. The example posted earlier about the child in a swing showed that a small force (push), timed and repeated correctly, can send the child in the swing to great heights and velocity. But what I was taught, and possibly incorrectly, was that if you added up all of the energy used in the pushes or pulses, they would equal out to exactly the kinetic and potential energy conveyed by the swinging child. Of course, this example does not involve magnets or magnetic fields. So, my question is, given that the correctly timed pulses are an efficient way of propelling the child to great swinging arcs, but they nonetheless represent no more than the total energy in the system, where or how does any additional energy come into the picture? I appreciate your patience in your explanations. Bill

#### gaby de wilde • Sr. Member • Posts: 470 ##### Re: The Lee-Tseung Lead Out Theory « Reply #381 on: October 02, 2007, 12:15:00 AM » But, what I was taught, and possibly incorrectly, was that if you added up all of the energy used in the pushes or pulses, they would equal out to exactly the kinetic and potential energy conveyed by the swinging child. I don't know physics. I think physics should get to know me. Personally I try to imagine how far I can throw a person up into the sky with a gentle push. A gentle push seems to be enough to throw a person up by, say, 0 cm? Correct me if I'm wrong, but it appears not enough to even lift the passenger off the ground, let alone launch them by means of a gentle push. The fact that the swing was already moving isn't so much of an issue. The part where a small push is enough to lift 80 kg of meat by 30 cm really makes no sense with the established theory. One of the two has to be wrong. If I give you two 1000 kg blocks, and block 1 is attached to 50 meters of wire, you will be able to swing block 1 up against gravity. Block 2 remains in its spot, you can't even lift it - what are we talking about here? LOL! Wait, this is pretty convincing... http://www.youtube.com/watch?v=BDla-x-l4Hc no? Quote: "where or how does any additional energy come into the picture? I appreciate your patience in your explanations." I think a standing wave is still a wave? If 10 joules is the energy required to lift a kg by 1 meter (is it?), then we need about 5 joules' worth of pulses. It appears you can get about twice the height out of it. I think it's interesting how the bob is already decelerating. If 9.8 m/s/s is the maximum acceleration, then that must also be the maximum deceleration? (I'm guessing here.) As the bob is already decelerating while moving upwards, could it be that gravity has some modified influence? You may test what I mean by pushing an object towards the ground faster than it would drop. You feel you are not assisting gravity but replacing it. If you toss an object in the air it kind of floats there for a moment, then reverses direction. It doesn't just reverse direction, it waits for a bit; this is the moment the pulse disturbs the system most efficiently. Push the swing the moment right after it reverses direction. It's like nature shifts the gears from decelerate to accelerate and you have a small window of free motion while the sprockets are detached.
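The energy bookkeeping being debated in the last two posts is easy to check numerically. Below is a minimal sketch added here as an illustration (it is not part of the original exchange): it assumes a frictionless 80 kg pendulum on a 2.5 m string and a gentle 5 N tangential push applied in whichever direction the bob is already moving; the push strength, string length, time step and duration are all made-up illustrative values.

```python
# A rough energy audit of "pumping" a swing with small, well-timed pushes.
# Assumed, illustrative values: frictionless pendulum, 80 kg bob, 2.5 m string,
# 5 N tangential push in the direction of motion, semi-implicit Euler integration.

import math

m, L, g = 80.0, 2.5, 9.81      # bob mass [kg], string length [m], gravity [m/s^2]
F_push = 5.0                   # gentle tangential push [N]
dt, t_end = 1e-4, 60.0         # time step [s], total simulated time [s]

theta, omega = 0.02, 0.0       # angle from vertical [rad], angular velocity [rad/s]
work_in, h_max = 0.0, 0.0      # cumulative push work [J], highest point reached [m]

t = 0.0
while t < t_end:
    # Push along the bob's current direction of travel (resonant pumping).
    F_t = F_push if omega >= 0.0 else -F_push
    alpha = -(g / L) * math.sin(theta) + F_t / (m * L)
    omega += alpha * dt
    theta += omega * dt
    work_in += F_t * (L * omega) * dt          # force along the path times path increment
    h_max = max(h_max, L * (1.0 - math.cos(theta)))
    t += dt

energy = 0.5 * m * (L * omega) ** 2 + m * g * L * (1.0 - math.cos(theta))
print(f"work supplied by the pushes : {work_in:7.1f} J")
print(f"mechanical energy right now : {energy:7.1f} J")
print(f"highest point reached       : {h_max:7.3f} m  (mgh = {m * g * h_max:6.1f} J)")
```

Run as written, the well-timed pushes do eventually raise the bob by a few tens of centimetres, but the total work supplied by the pushes and the pendulum's mechanical energy agree to within the integration error, so in this conventional model nothing appears beyond what the pushes put in.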
#### ltseung888 • Hero Member • Posts: 4363 ##### Re: The Lee-Tseung Lead Out Theory « Reply #382 on: October 02, 2007, 12:15:04 AM » .... can send the child in the swing to great heights and velocity. But what I was taught, and possibly incorrectly, was that if you added up all of the energy used in the pushes or pulses, they would equal out to exactly the kinetic and potential energy conveyed by the swinging child. ...... Bill Dear Bill, I was taught incorrectly as well. However, I had my lesson in a painful way. When I was still a naughty boy (almost half a century ago), I pushed the punch bag a few times and then stood there for it to knock me down. The punch bag was a few hundred kilograms. The force knocked me a few meters away and down. I was convinced that my couple of pushes could not have provided the force or the energy to give me so much pain. However, my physics teacher told me the same thing as your teacher (the sum of the energy of the few pushes added together was the culprit). It took me 50 years to realize that I was taught the wrong thing. (Thanks to Lee waking me up at 7:30 am from the hotel.) I actually Lead Out some gravitational energy in each of the pushes. The culprit was the sum of my energy and the Lead Out gravitational energy! If I had not believed my physics teacher, the Lee-Tseung theory would have been out over half a century ago! Lawrence Tseung Believing the teacher blindly Leads Out wrong results even though the Pulses provided the painful lesson.

#### shruggedatlas • Hero Member • Posts: 549 ##### Re: The Lee-Tseung Lead Out Theory « Reply #383 on: October 02, 2007, 01:38:09 AM » The part where a small push is enough to lift 80 kg of meat by 30 cm really makes no sense with the established theory. One of the two has to be wrong. If I give you two 1000 kg blocks, and block 1 is attached to 50 meters of wire, you will be able to swing block 1 up against gravity. Block 2 remains in its spot, you can't even lift it - what are we talking about here? LOL! The established theory is called mechanical advantage. You are not lifting the 80 kg straight up, but along an incline, and it is therefore easier to do, but the total amount of energy needed is the same, assuming no friction or air resistance. If what you are saying is true, creating an overunity device is trivial. Just have the pendulum hit something capable of converting the kinetic energy to electrical, and use that stored charge to "pulse" the pendulum when it is on the downswing.

#### Pirate88179 • elite_member • Hero Member • Posts: 8366 ##### Re: The Lee-Tseung Lead Out Theory « Reply #384 on: October 02, 2007, 01:40:16 AM » Lawrence: So, then might it be possible to construct a decent-size pendulum where the "weight" would be a cylinder-shaped magnet, suspend it from two lines joined into one such that it would keep it from twisting, and have it pass twice during its period through a coil, or a series of coils, to generate enough power to run a small oscillator that would add enough of a pulse at the correct time to maintain the pendulum motion? This would be fairly easy to construct on a smaller scale for testing. Do you think this would be possible? Thanks.
Bill

#### ltseung888 • Hero Member • Posts: 4363 ##### The Ms. Forever Yuen Magnetic Pendulum Experiment « Reply #385 on: October 02, 2007, 02:15:24 AM » The Ms. Forever Yuen Magnetic Pendulum Experiment Ms. Forever Yuen gave me her PowerPoint presentation with photos of her Magnetic Pendulum Experiment last night. I have edited and attached it here. It is easier to set up than the toy from the overunity.com store. It is much cheaper too. The important elements to look for are the three sets of readings: (1) No other magnetic material around (32 oscillations per 30 sec) (2) Repulsion (25 oscillations per 30 sec) (3) Attraction (41 oscillations per 30 sec) I shall continue to discuss the significance of this experiment in the coming posts. Thanks to Ms. Forever Yuen once again. Lawrence Tseung

#### Mr.Entropy • Full Member • Posts: 195 ##### Re: The Lee-Tseung Lead Out Theory « Reply #386 on: October 02, 2007, 03:35:12 AM » Now consider exactly how we supply energy to the stationary pendulum. The pendulum is hanging in the vertical position. We apply a horizontal Force F. The pendulum will have both a vertical and a horizontal displacement (the D vector mentioned by Mr. Entropy). Just before we stop the force F, there will be THREE forces involved in the pendulum system. (1) The Weight of the Pendulum (or more exactly m*g, where m is the mass and g is the gravitational acceleration at the surface of the Earth, which is 9.8 m/s/s approximately. Please yell if you don't understand the statement) (2) The Horizontal Pulse Force F (Note that it is an externally applied Force controlled by the Engineer.) (3) The Tension of the String (If there were no string, the pendulum will not swing back) Bill rightly stated that these three forces are also vectors. In order to determine the energy supplied by these three forces, we need to apply the vector mathematics of Force * Displacement. (Note that it is vector mathematics and not scalar multiplication.) The relationship of the forces MUST obey the Law of Parallelogram of Forces. That set of laws is taught in Mechanics 101 in secondary school physics. I shall pause here to get your response first. Right, let the displacement D represent the path of the pendulum bob during the application of the pulse force. Assuming that the displacement D is small enough, since a pulse lasts only a moment, we can consider D to be a straight line and the forces below to be constant over the time when the pulse is applied. Otherwise we'll have to integrate: During the pulse time, we have a force due to gravity (Fg), a force due to the tension on the string (Fs) and the applied pulse force (Fp). By the law of the parallelogram of forces, as you say, these add vectorially, so that the work done by the combination of those forces is W = (Fg + Fs + Fp) · D. The dot product is distributive over addition, so W = (Fg · D) + (Fs · D) + (Fp · D), and we can consider each of these independently: (Fg · D) is the work done by gravity. If the bob is moving up, Fg and D are in opposing directions and this is negative. Work is done against gravity and stored as potential energy by the increase in the bob's height. (Fp · D) is the work done by the pulse force. You will probably apply the force in the direction that the bob moves, speeding it up, so this will be positive.
This work will be stored as an increase in the speed and kinetic energy of the pendulum. (Fs · D) is the work done by the tension on the string. Since the force is applied at a right angle to the direction of motion, Fs · D is zero, and the string does no work. That's the conventional analysis. What in here is incorrect? Cheers, Mr. Entropy

#### ltseung888 • Hero Member • Posts: 4363 ##### Re: The Lee-Tseung Lead Out Theory « Reply #387 on: October 02, 2007, 03:47:39 AM » Lawrence: So, then might it be possible to construct a decent-size pendulum where the "weight" would be a cylinder-shaped magnet, suspend it from two lines joined into one such that it would keep it from twisting, and have it pass twice during its period through a coil, or a series of coils, to generate enough power to run a small oscillator that would add enough of a pulse at the correct time to maintain the pendulum motion? This would be fairly easy to construct on a smaller scale for testing. Do you think this would be possible? Thanks. Bill Dear Bill, I believe you should read the Bill Mehess Motor first. http://www.overunity.com/index.php/topic,919.msg6407.html#msg6407 I advise a more thorough understanding of the Lee-Tseung theory first before more experiments. There are already 300 or so Over Unity Inventions worldwide. Most of them are from "experimenters" who jumped to try some ideas without the painstaking research first. Many almost got it but then spent years spinning around. We can discuss how to improve some of them here and then do the improvements. I shall wait for a few more comments or responses on the Ms. Forever Yuen experiment before continuing the discussion of the Lee-Tseung Theory. Regards, Lawrence

#### ltseung888 • Hero Member • Posts: 4363 ##### Re: The Lee-Tseung Lead Out Theory « Reply #388 on: October 02, 2007, 06:09:26 AM » Right, let the displacement D represent the path of the pendulum bob during the application of the pulse force. Assuming that the displacement D is small enough, since a pulse lasts only a moment [1], we can consider D to be a straight line and the forces below to be constant over the time [2] when the pulse is applied. Otherwise we'll have to integrate: During the pulse time, we have a force due to gravity (Fg), a force due to the tension on the string (Fs) and the applied pulse force (Fp). By the law of the parallelogram of forces, as you say, these add vectorially, so that the work done by the combination of those forces is W = (Fg + Fs + Fp) · D. The dot product is distributive over addition, so W = (Fg · D) + (Fs · D) + (Fp · D), and we can consider each of these independently [3]: (Fg · D) is the work done by gravity. If the bob is moving up, Fg and D are in opposing directions and this is negative. Work is done against gravity and stored as potential energy by the increase in the bob's height. (Fp · D) is the work done by the pulse force. You will probably apply the force in the direction that the bob moves, speeding it up, so this will be positive. This work will be stored as an increase in the speed and kinetic energy [4] of the pendulum. (Fs · D) is the work done by the tension on the string. Since the force is applied at a right angle to the direction of motion, Fs · D is zero [5], and the string does no work. That's the conventional analysis [6]. What in here is incorrect? Cheers, Mr. Entropy Let us discuss your assumptions point by point. [1] We assume that the Pulse Force is controllable by the Engineer.
It can be as short or as long as desired. In this case of the first application on the stationary pendulum, we assume that the Pulse Force is long enough to let the pendulum swing fully to its new position. At the new position, the pendulum stops momentarily to change direction and swing back. [2] The forces are constant over time. The Gravitational Force (Fg) can be regarded as constant, as the mass and the gravitational acceleration (g = 9.8 m/s/s) can be regarded as constant during this first pulse and swing. The Horizontal Pulse Force (Fp) can be regarded as constant because this is under the control of the Engineer. We assume the ideal situation of being able to turn the Force on and off. The Tension in the String (Fs), unfortunately, cannot be regarded as a constant. Before the application of the Pulse Force (Fp), Fs = Fg. When Fp is being applied, Fs MUST change. At the new momentary stationary position, Fs must be equal and opposite to the resultant force of Fp and Fg. Thus this particular assumption is incorrect. [3] We cannot consider them independently if Fs is a function of Fp and Fg. Thus this particular assumption is incorrect. [4] As we stated, the final position for this analysis is the momentary stationary position. There is NO velocity and hence no kinetic energy. This particular assumption is incorrect. [5] Fs · D is zero. You are making the assumption that the force vector Fs is always at right angles to the displacement vector D. In constant circular motion, such as the Earth going around the Sun, this is correct. However, in accelerating and decelerating circular motion, this is incorrect. [6] Thus the so-called conventional analysis as outlined by you has many incorrect assumptions. As you have rightly pointed out, without the use of integrals, we have to make certain assumptions and simplifications. These assumptions and simplifications may be inexact or even incorrect. Arguing over inexact or incorrect assumptions will waste much energy and time. I shall try to explain the Integral Assumption in the next post. It will be edited from the notes made after the discussions at Tsing Hua University. Lawrence Tseung Simplified Analysis Leads Out incorrect assumptions. It increases the Pulse rates of all involved. « Last Edit: October 02, 2007, 01:03:59 PM by ltseung888 »

#### Pirate88179 • elite_member • Hero Member • Posts: 8366 ##### Re: The Lee-Tseung Lead Out Theory « Reply #389 on: October 02, 2007, 07:57:04 AM » Lawrence: It may not matter in your calculations, but I believe that the figure for acceleration due to gravity of 9.8 m/s/s does not apply in this case (a pendulum). That was calculated for a free-falling body in a vacuum. I am not going to get into wind resistance or anything, but it's the "free falling" aspect that interests me. Since the string is forcing the pendulum into a circular path, rather than a straight one, and if the pendulum is launched (started) at the 3:00 position or the 9:00 position, it would only free fall for an instant before the string causes it to begin its circular path. The force imposed by the string in doing this would not allow the pendulum to reach the acceleration figure for a free-falling body. The force of gravity at 1g would also not remain constant during the circular path forced by the string. I believe it would climb and be at maximum at the 6:00 position of the pendulum arc and taper off back toward 1g, and then 0g at the extreme ends of the swing movement.
I think the most interesting thing is that at each end of the pendulum arc, there is that one moment where, for all intents and purposes, the pendulum is weightless and would be at 0g. Velocity would also be 0. The tension on the string would be 0. It is this exact point in time that interests me. I saw the slide show done by your friend. The results were not what I would have anticipated if asked beforehand. Interesting. Bill
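The varying tension and apparent g-load that the last two posts describe follow from the standard circular-motion relation T = m·g·cos(θ) + m·v²/L, with the speed v taken from energy conservation. The short sketch below is added as an illustration (not part of the original thread) and assumes a frictionless bob released from rest at the 3:00 position; the mass and string length are arbitrary illustrative values.

```python
# Tension along a frictionless pendulum arc, released from rest at 90 degrees
# (the "3:00 position" mentioned above).  Speed at angle theta (measured from
# vertical) comes from energy conservation: v^2 = 2*g*L*(cos(theta) - cos(theta0)).
# The string supplies the centripetal force plus the radial component of gravity:
# T = m*g*cos(theta) + m*v^2/L.  Mass and length are illustrative values.

import math

m, L, g = 1.0, 2.5, 9.81          # bob mass [kg], string length [m], gravity [m/s^2]
theta0 = math.radians(90)         # release angle measured from vertical

for deg in (90, 60, 30, 0):
    theta = math.radians(deg)
    v_sq = 2 * g * L * (math.cos(theta) - math.cos(theta0))
    T = m * g * math.cos(theta) + m * v_sq / L
    print(f"theta = {deg:2d} deg   speed = {math.sqrt(v_sq):5.2f} m/s   "
          f"tension = {T:6.2f} N   ({T / (m * g):.2f} g)")
```

For a release from the horizontal, the tension does drop to zero at the extreme ends of the swing and peaks at about three times the bob's weight at the bottom; for a smaller release angle θ0 the tension at the ends is m·g·cos(θ0) rather than zero.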
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8241037130355835, "perplexity": 1245.9431135350744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585405.74/warc/CC-MAIN-20211021102435-20211021132435-00716.warc.gz"}
https://dsp.stackexchange.com/questions/9904/measure-of-harmonicity-in-a-time-series
# Measure of harmonicity in a time-series I'm analyzing a speech signal to identify voiced and unvoiced regions. Voiced regions are supposed to have a "pitch", which can be estimated using the auto-correlation function (ACF). Basically, one estimates the ACF for each frame of speech (say 20 ms) and then finds the time lags between peaks in the ACF output. If all the significant peaks in the ACF output are equidistant, I can say that the signal is very much periodic. If the peaks have random spacing between them, that would indicate aperiodicity. Based on this, what measure can I use to find out HOW periodic a signal is? If I denote the time lags between successive peaks as lag1, lag2, ..., lagN, the deviation of these values from their mean can tell how periodic the signal is. Any better ideas?
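One way to turn the lag-spacing idea into a number is sketched below. This is only an illustration of one possible measure, not an established standard: it computes the short-time autocorrelation, keeps the peaks above an assumed threshold, and reports both the height of the first non-zero-lag peak (large for strongly periodic frames) and the relative spread of the inter-peak lags (near zero when the peaks are equidistant). The frame length, sampling rate, threshold and test signals are all assumptions; find_peaks comes from SciPy.

```python
import numpy as np
from scipy.signal import find_peaks

def periodicity_measures(frame):
    """Return (first-peak height, normalized lag spread) for one frame."""
    frame = frame - frame.mean()
    acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    acf /= acf[0]                                   # normalize so lag 0 == 1
    peaks, _ = find_peaks(acf, height=0.1)          # "significant" peaks (threshold assumed)
    if len(peaks) < 2:
        return 0.0, np.inf                          # too few peaks: treat as aperiodic
    harmonic_ratio = acf[peaks[0]]                  # height of first non-zero-lag peak
    lags = np.diff(np.concatenate(([0], peaks)))    # spacing between successive peaks
    lag_spread = lags.std() / lags.mean()           # ~0 for perfectly equidistant peaks
    return harmonic_ratio, lag_spread

fs = 8000
t = np.arange(int(0.02 * fs)) / fs                  # one 20 ms frame
voiced = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
unvoiced = np.random.default_rng(0).standard_normal(len(t))

for name, sig in (("voiced-like", voiced), ("noise-like", unvoiced)):
    hr, spread = periodicity_measures(sig)
    print(f"{name:12s}  peak height = {hr:5.2f}   lag spread = {spread:5.2f}")
```

On these synthetic frames, the tone-like signal gives a high first peak and a tiny lag spread, while the noise frame gives a low peak and a large (or undefined) spread, which is roughly the separation a voiced/unvoiced decision needs.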
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8786799907684326, "perplexity": 1465.351837709162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738960.69/warc/CC-MAIN-20200813043927-20200813073927-00307.warc.gz"}
http://mathhelpforum.com/advanced-algebra/112929-question-about-inequalities.html
Hi. I'm at the last step of a question and I'm wondering how to solve the inequality: (t-3)(t-2)>0. I know that there is some method to figure out whether t is within or beyond the interval [2,3], but I was never taught it. What is the method for solving this, and what is it called? Thank you in advance for taking a look at this question! 2. Originally Posted by shawli Hi. I'm at the last step of a question and I'm wondering how to solve the inequality: (t-3)(t-2)>0. I know that there is some method to figure out whether t is within or beyond the interval [2,3], but I was never taught it. What is the method for solving this, and what is it called? Thank you in advance for taking a look at this question! Hi. You can use the graphical method. In this case, the graph is a parabola opening upwards (since its leading coefficient is >0) and it intersects the t-axis at 2 and 3. Since the inequality is >0, the solution is the part of the graph above the axis, i.e. t<2 or t>3. 3. Oh okay, thank you. I was wondering, is there any algebraic method of solving this?
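For completeness, the usual algebraic route is a sign analysis (often called the sign-table or test-point method): a product of two factors is positive exactly when both factors have the same sign, so (t-3)>0 and (t-2)>0 gives t>3, while (t-3)<0 and (t-2)<0 gives t<2; together, t<2 or t>3, matching the graphical answer. A throwaway numeric check of the same idea (added as an illustration):

```python
# Check the sign of (t-3)(t-2) on each interval determined by the roots t = 2 and t = 3.
for t in (0.0, 2.5, 4.0):            # one test point per interval
    value = (t - 3) * (t - 2)
    print(f"t = {t:3.1f}: (t-3)(t-2) = {value:5.2f}  ->  {'> 0' if value > 0 else '<= 0'}")
```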
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9231462478637695, "perplexity": 198.16543169672258}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718840.18/warc/CC-MAIN-20161020183838-00193-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/rms-speed-of-a-gas-molecule.213211/
# Archived RMS Speed of a Gas Molecule 1. Feb 5, 2008 ### ChopChop 1. The problem statement, all variables and given/known data The atmosphere is composed primarily of nitrogen N2 (78%) and oxygen O2 (21%). Find the rms speed of N2 and O2 at 293 K. 2. Relevant equations $$v_{rms}=\sqrt{\frac{3RT}{M}}$$ 3. The attempt at a solution $$v_{rms,\,O_2}=\sqrt{\frac{3 \times 8.31\ \text{J/(mol K)} \times 293\ \text{K}}{32\ \text{g/mol}}} = 15.11\ \text{m/s}$$ When I looked at the answer in the book, it was 478 m/s because instead of putting 32 g/mol in the denominator, they converted it to 0.032 kg/mol. Can somebody explain to me why the authors of my book decided to do that? Is my first answer still correct? Or do I need to convert to kg every time I do an rms problem? 2. Mar 7, 2016 ### LukeEWilliams It has to be in kg because the joule is a unit that involves kg (kg·m²·s⁻²). For the units to work out, you have to divide by a molar mass expressed in the same (SI) units.
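A quick numeric check of the point made in the reply (added as an illustration; the only inputs are the values already given in the problem, with R taken as 8.314 J/(mol·K)):

```python
import math

R = 8.314          # gas constant [J/(mol*K)]
T = 293.0          # temperature [K]
molar_masses = {"O2": 0.032, "N2": 0.028}   # kg/mol, i.e. grams per mole divided by 1000

for gas, M in molar_masses.items():
    v_rms = math.sqrt(3 * R * T / M)        # v_rms = sqrt(3RT/M)
    print(f"{gas}: v_rms = {v_rms:.0f} m/s")
```

Keeping the molar mass in g/mol instead would make the result a factor of sqrt(1000), about 31.6, too small, which is exactly the gap between 15.1 m/s and 478 m/s.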
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8970212936401367, "perplexity": 1059.7562439489377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690376.61/warc/CC-MAIN-20170925074036-20170925094036-00436.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-slope-given-4x-5y-2
Algebra Topics # How do you find the slope given 4x - 5y = -2? Jul 17, 2015 The slope is $\frac{4}{5}$. #### Explanation: $4x - 5y = -2$ The given equation is in the standard form for a linear equation. To determine the slope, you need to solve the standard equation for $y$. This will give you the slope-intercept form for a linear equation, $y = mx + b$, where $m$ is the slope and $b$ is the y-intercept. Solve $4x - 5y = -2$ for $y$. Subtract $4x$ from both sides of the equation. $-5y = -4x - 2$ Divide both sides by $-5$. $y = \frac{4}{5}x + \frac{2}{5}$ The slope is $\frac{4}{5}$.
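A small numeric sanity check of the rearrangement (added as an illustration, not part of the original answer): pick two x values, evaluate y = (4x + 2)/5, and take rise over run.

```python
# Rearranged form: y = (4x + 2) / 5, so the slope should come out to 4/5 = 0.8.
def y(x):
    return (4 * x + 2) / 5

x1, x2 = 0.0, 5.0
slope = (y(x2) - y(x1)) / (x2 - x1)
print(slope)   # prints 0.8
```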
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9761562347412109, "perplexity": 402.29813996701176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00208.warc.gz"}
https://raweb.inria.fr/rapportsactivite/RA2017/deducteam/uid66.html
## Section: New Results ### Proof theory G. Burel developed a general framework, focusing with selection, of which various logical systems are instances: ordinary focusing, refinements of resolution, deduction modulo theory, superdeduction and beyond [20]. This strengthens the links between sequent calculi and resolution methods. F. Gilbert developed a constructivization algorithm, taking as input the classical proof of some formula and generating as output, whenever possible, a constructive proof of the same formula. This result has been published and presented in [14]. F. Gilbert submitted his PhD dissertation (work document [25]), centered on the extension of higher-order logic with predicate subtyping. Predicate subtyping is a key feature of the proof assistant PVS, which allows types to be defined from predicates – for instance, using this feature, the type of even numbers can be defined from the corresponding predicate. The core of this work is the definition of a language of verifiable certificates for predicate subtyping, as well as the proof of two properties of this language: a cut-elimination theorem and a theorem of conservativity over higher-order logic. F. Gilbert presented this language of certificates, as well as the cut-elimination theorem, at the workshop TYPES 2017.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.937065839767456, "perplexity": 1820.1998254293005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00164.warc.gz"}
http://mathoverflow.net/questions/159607/proof-that-the-schwarz-map-defined-as-ratios-of-gauss-hypergeometric-functions-i
# proof that the schwarz map defined as ratios of gauss hypergeometric functions is univalent The ratio of two linearly independent solutions of the Gauss hypergeometric differential equation defines a map from the upper half plane to a Schwarz triangle. Everything I read tells me that this map is injective, but I cannot find a proof. Is there a simple proof? Also, can we prove a similar result for the map $$\sigma:(x,y) \to (\frac{G(x,y)}{F(x,y)},\frac{H(x,y)}{F(x,y)})$$ where $F$, $G$, and $H$ are solutions of Appell's $F_1$ system? - I thought it was clear that I was referring to the case where |1-c|, |c-a-b|, and |a-b| are all less than 1. Since you appeal to the Riemann mapping theorem, I am wondering how you would show injectivity in the two-variable case, where (x,y) is mapped to two ratios of solutions of Appell's F1 system. Any ideas on this problem? (of course with suitable restriction on the parameters a, b1, b2 and c) –  Dan Mar 7 at 2:34 "Everything you read" has it wrong: this map is not necessarily injective. For example, there exists a Schwarz equation whose solution is a ratio of two solutions of a Gauss equation, and the solution is $z^{10}$, which is not injective in the upper half-plane. The "triangle" in question has angles $10\pi,\pi,10\pi$. It is indeed a "triangle" in some sense, and you can even make a paper model. But it does not fit into the plane. The correct result is the following. Suppose that you have an (honest) circular triangle in the plane. That is, the sides are arcs of circles, and the angles are at most $2\pi$. Then (by the Riemann mapping theorem) there exists a conformal (injective) map of the upper half-plane onto this triangle. This map satisfies a Schwarz equation. All other solutions of this Schwarz equation also map the upper half-plane injectively onto triangles (obtained by a fractional-linear transformation of the original one).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9747641086578369, "perplexity": 191.44338588450896}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829421.59/warc/CC-MAIN-20140820021349-00392-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/84632/equivalent-forms-of-the-proper-base-change-isomorphism
# Equivalent forms of the proper base change isomorphism $\DeclareMathOperator{\Nat}{Nat}$ In a current project, I am trying to "commute" $!$ and $*$ functors that are both upper or both lower. (Sheaf-theoretic context: constructible étale sheaves.) The fact that they commute when we have one of each ultimately comes down to proper base change: that is, if we have maps $f \colon X \to Z$, $g \colon Y \to Z$, and their fiber product with projections $p, q \colon P \to X, Y$, then we have a natural isomorphism $$g^* f_! \cong q_! p^*.$$ From each direction of this isomorphism we can derive arrows between un-mixed compositions: \begin{align} g^* f_! \to q_! p^* \implies f_! \to g_* q_! p^* \implies f_! p_* \to g_* q_! \qquad &(1)\\ q_! p^* \to g^* f_! \implies p^* \to q^! g^* f_! \implies p^* f^! \to q^! g^* \qquad &(2) \end{align} (the second implications are valid! Work it out: if $L$ and $R$ are left- and right-adjoints, then for functors $F$ and $G$ we have $\Nat(F, GL) \cong \Nat(FR, G)$.) It is clear that (1) is an isomorphism when $f$, and hence $q$, are proper, and that (2) is an isomorphism when they are open immersions. It is also easy to prove directly that (1) is an isomorphism when $f$ is an open immersion, since we can check that both sides have the same restriction to the open subscheme and their restrictions to its closed complement are zero. So (1) is an isomorphism by Nagata's compactification theorem, which is the basis for defining the lower-$!$ functors anyway. Here's my question: Is (2) an isomorphism when $f$ is a proper map? My one trick, which was a direct computation, is no good here. I normally avoid like the plague dealing with the upper-$!$ functor directly, reducing it to something better by adjunction or duality, but the question is self-dual and I already exhausted my options with adjunction in deriving it. - (1) is not always an isomorphism when $f$ is an open immersion. (Take $X=Y$ equal to an open subscheme of $Z$, with the obvious maps.) Here is why: when you try to show that the restriction of $g_*q_!$ to the closed complement is $0$, you will want to use a base change isomorphism which is not true in general (it is true if $g$ is proper, but then (1) is trivially an isomorphism). Neither is (2), and it doesn't matter whether $f$ is proper or not. Take $Z=\mathbb{A}^2$ (over some field, say), $X=$ one of the axes, $Z=$ the origin, with the obvious maps (so $P=Z$, and $f$ is a closed immersion, hence proper), and look what happens for the constant sheaf $\Lambda$ on $Z$ ($\Lambda$ is, for example, a finite ring of torsion prime to the characteristic of the base field). Then $f^!\Lambda=\Lambda(-1)[-2]$ by purity, so the left-hand side of (2) is $\Lambda(-1)[-2]$, while the right-hand side is $\Lambda$. You're right; I glossed over the base-change business because it does work when checking restriction to the open part. As for (2): thanks, somehow I missed the possibility of the fiber product changing the codimension. (I think you mean $Y = \mathbb{A}^2$ and $P = Y$, though.) – Ryan Reich Jan 16 '12 at 17:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9852899312973022, "perplexity": 399.61175147752806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00168-ip-10-164-35-72.ec2.internal.warc.gz"}
http://uaxa.sitewebpro.fr/derivative-word-problems-worksheet.html
” No one is going to offer you a job because you can take the derivative of a function. Indeed, one could think of inverse trig functions as \creating" right triangles. So, if the first derivative tells us if the function is increasing or decreasing, the second derivative tells us where the graph is curving upward and where it is curving downward. This theorem is sometimes referred to as the small-angle approximation. We can now use derivatives of logarithmic and exponential functions to solve various types of problems eg. The derivative is the natural logarithm of the base times the original function. CLASSNOTES AND SOME ANSWERS MATH 122B FINAL EXAM INFORMATION & STUDY GUIDE 122B FINAL EXAM covers 1. A damsel is in distress and is being held captive in a tower. Below is a part/part/whole word problem. All worksheets are pdf documents and can be printed. How to use the quotient rule for derivatives. An object falls from a high building. This algebra 1 worksheet will produce distance, rate, and time word. Graphing Notes: Finding Absolute Maxs and Mins, Mean Value Theorem, First and Second Derivatives Tests Homework: Extrema Worksheet #1, 2, and 19-24 Extrema Worksheet Extrema Worksheet Key. 20 3 >5 and so is not a physically reasonable answer. Solve the practice problems at the end of the notes. Solving a word problem using derivatives. Problem: Evaluate the following derivatives using the quotient rule. Optimization Problems Practice Solve each optimization problem. Practice problems for sections on September 27th and 29th. Also has three derivative trigonometric problems with solutions. How many smartphones. ©v G2r0Q1 H3O pK nu atEa 9 ZSVoGfutQw5a 5r Xe V RL xLpCW. Provide lesson plans, worksheets, ExamView test banks, links to helpful math websites for high school math courses. Calculus is a subject which you can not understand with out instructor, so be attentive in the class room. X Y 20 35 25 40 6. Worksheet 4. So, we could say that for simple resistor circuits, the instantaneous rate-of-change for a voltage/current function is the resistance of the circuit. 3) Identify the function that you want to maximize/minimize. We will not enter into any correspondence on the content of the worksheets, errors, answers or tuition. Worksheets for MA 113 Worksheet # 10: The Derivative as a Function, Product, and Quotient Rules Note when working through a limit problem that your answers. N a nAml6lR qr1iKgjhit vsJ Fr2ewsse8rYvReSdC. If you're seeing this message, it means we're having trouble loading external resources on our website. Khan Academy Linear Systems Lessons - Homework on Khan Academy Lesson 1 - Solving Equations Review with Answers Lesson 1 - Lines Review Homework: pg. Used by over 7 million students, IXL provides unlimited practice in more than 4 500 maths and English topics. Some Worked Problems on Inverse Trig Functions When we work with inverse trig functions it is especially important to draw a triangle since the output of the inverse trig function is an angle of a right triangle. The power rule for derivatives Usually the first shortcut rule you study for finding derivatives is the power rule. What is the perimeter? Important concept: Hexagon. Get a clean sheet of paper, calm down and take a stress pill and you can do this. where t denotes the number of seconds since the ball has been thrown and v 0 is the initial speed of the ball (also in meters per second). Finding the Projection of One Vector Onto Another k. Rules for derivatives. trig graphs. f(x) = (x3) 5 √. 
2nd grade word problems. How to Memorize the Unit Circle: Summary of how to remember the Radian Measures for each angle. 100% Free Calculus Worksheets, Printables, and Activities. Vanier College Sec V Mathematics Department of Mathematics 201-015-50 Worksheet: Logarithmic Function 1. Take the derivative and set it equal to zero. $! ˇ ˆ ’. in the fields of earthquake measurement, electronics, air resistance on moving objects etc. Derivative Practice Worksheet In problems 1 - 40, find the derivative of the given function. University Of Kentucky > Elementary Calculus and its 1/13 Chapter7. Math 180 Worksheets About this booklet This booklet contains worksheets for the Math 180 Calculus 1 course at the University of Illinois at Chicago. This worksheet is arranged in order of increasing difficulty. The worksheets start out introducing simple powers of ten terms, including ones that should be memorized. Practice solving real world word problems involving perimeter. Definition of the Derivative Instantaneous Rates of Change Power, Constant, and Sum Rules Higher Order Derivatives Product Rule Quotient Rule Chain Rule Differentiation Rules with Tables Chain Rule with Trig Chain Rule with Inverse Trig Chain Rule with Natural Logarithms and Exponentials Chain Rule with Other Base Logs and Exponentials. Quotient Rule. I am passionate about travelling and currently live and work in Paris. Find the general solution of xy0 = y−(y2/x). Each of the derivatives above could also have been found using the chain rule. For example, a student watching their savings account dwindle over time as they. where t denotes the number of seconds since the ball has been thrown and v 0 is the initial speed of the ball (also in meters per second). Bar Graph Worksheet #1 Library Visits 0 100 200 300 400 Monday Tuesday Wednesday Thursday Friday Saturday Days of the week Number of visitors 1. Solve-variable. Some problems require the the chain rule with the product rule, within the quotient rule, or within another chain rule. So much of math is about solving equations properly. 0 z YMPaBd qeA Wwai UtQh z TI SnHfdi wnyi QtTeL yC iailgcnuml5uNsY. Derivatives Worksheet II. Sketch a. Topics you should know: The Intermediate Value Theorem. 0 m/s when the driver decides to pass a slow-moving sled. f(x) = 4x5 −5x4 2. View Applications of Derivatives word problems solutions from CALCULUS AP CALCULU at St Brendan Catholic High School. Financial math has as its foundation many basic finance formulas related to the time value of money. The Problems tend to be computationally intensive. y>=0 b] Minimize y^2 + (y - 5)^2 s. Derivative Of Exponential Function - Displaying top 8 worksheets found for this concept. Derivatives; Applications of Differentiation; Definite Integral; Applications of Integration; Logarithmic & Exponential Functions; Techniques of Integration; Seperable Differential Equations; Taylor’s Theorem, L’Hopital’s Rule & Improper Integrals; Infinite Sequences & Theories; Conic Sections & Polar Coordinates; Curves & Vectors in the Plane. The cable will go under ground along the shoreline from Point A to a Point P between Points. As you study calculus, you will find that many problems have multiple possible approaches. Her knight in shining armor is on the ground below with a ladder. The de nition of the derivative. Finally, a di erential equations problem: Show that for any constant c, y= (c x2) 1=2 is a solution to the di erential equation y0= xy3. 
Here’s why: You know that the derivative of sin x is cos x, and that according to the chain rule, the derivative of sin (x 3) is You could finish that problem by doing the derivative of x 3, but there is a reason for you to. These are problems which (a) provide some review of the material covered in that portion of the course, (b) add a little bit of new material ,and(c)trytotiethingstogether. Math 221 Worksheet: Derivatives of exponential and logarithmic functions November 4, 2014 Find the derivatives of the following functions. Each of the derivatives above could also have been found using the chain rule. They are organized by the type of questions. Steps for solving Derivative max/min word problems: 1) Draw a diagram and label parts. The addition rule helps you solve probability problems that involve two events. - Stats Worksheet #1 - Stats Worksheet #2. Problem 1 A salesman sold twice as much pears in the afternoon than in the morning. Chain Rule: Problems and Solutions. Tangent Problems. 1 Concavity and the Second-Derivative Test Intuition: a curve is concave up on an interval I if it looks like on I. Create Answer Sheet (Pop-Up Window) Show how to solve it! (Pop-Up Window) Mix up the problems. Differential and integral calculus: limits and continuity, the. ) As these examples show, calculating a. They found if they new either some angles or some lengths or a combination of both, then they could calculate the others. Equations: linear, quadratic, simultaneous equations, word problems. This section is less about correct versus incorrect and more about just being neat. $$y = \ln(3x^2 + 5)$$. Example: you've found zeroes of 5, 9 and 16. Problem: Evaluate the following derivatives using the quotient rule. Because the derivative provides information about the gradient or slope of the graph of a function we can use it to locate points on a graph where the gradient is zero. Included is a fully. Good Calculus Word Problems, especially for EE students. Well worth it. Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more. f(x) = 3x2(x3 +1)7 5. 05 x where x is the number of smartphones manufactured per day. Here is a set of practice problems to accompany the Differentiation Formulas section of the Derivatives chapter of the notes for Paul Dawkins Calculus I course at Lamar University. If the driver accelerates to a speed of 19. Max Min Word Problems Our approach to max min word problems is modeled after our approach to related rates word problems. This algebra 1 worksheet will produce distance, rate, and time word. Partial derivative examples. The Questions emphasize qualitative issues and answers for them may vary. Description: Our circle solver lets you enter the area, diameter or circumference of a circle and then solves for the other two. Differential calculus is all about instantaneous rate of change. So using the second derivative, plug in points in between each x coordinate of the zeroes you've found and find out if that value is positive or negative. 8 Y hAnlQl0 vr liJgWh3t qsO drRe8s 5e Yrjv seTdr. I like to spend my time reading, gardening, running, learning languages and exploring new places. *developed by Student Learning, we also acknowledge links to Flinders and RMIT Universities, plus MrPatrick and other websites. Position, velocity, and acceleration problems can be solved by solving differential equations. Differential Problems. 
Try a complete lesson on Volume Word Problems, featuring video examples, interactive practice, self-tests, worksheets and more! Surface Area and Volume – Word. Teach yourself calculus. Lecture 10 - Concavity, The Second Derivative Test, and Opti-mization Word Problems 10. write down the information of the problem in terms of those letters; 4. Partial derivative examples. Some good x values to plug into the function are 3, 7, 11, and 17. Mixed Operations Word Problems Using 1 or 2 Digits. We will not enter into any correspondence on the content of the worksheets, errors, answers or tuition. We can now use derivatives of trigonometric and inverse trigonometric functions to solve various types of problems. Worksheet 3 - Kinematics Equations 1. But htey are super useful. 125-153, Gootman) Chapter Goals: In this Chapter we learn a general strategy on how to approach the two main types of. Worksheet 4. Calculating Derivatives: Problems and Solutions. calculus worksheets calculus derivative, maximum and minimum quadratic word problems worksheet, box and whisker plot worksheets, box and whisker plot worksheets and use a graph to determine where a function is increasing decreasing. There is a world of difference between constant acceleration and constant velocity. Math Word Problems and Solutions - Distance, Speed, Time. Derivative Of Exponential Function - Displaying top 8 worksheets found for this concept. Calculus Derivatives Word Problems And Solutions Sample Calculus Problems Therefore we can not just drop some of the limit signs in the solution above to The derivative is not de ned at x = 0,. Implicit Differentiation Worksheet Use implicit differentiation to find the derivative: 1. To do these problems, you need to remember that the derivative of position is ve-locity (i. The problem reads d (e to the x power over dx equals d to the x power. 2) Write relevant formulas. A ball is thrown at the ground from the top of a tall building. Click "Show Answer" underneath the problem to see the answer. Recall that an expression of the form. Jul 29, the definition of homework 2013 · Wherever the homework debate goes next, be it the front pages or on the back burner, it's worth taking narrative essay outline worksheet a moment to examine if we're asking the right questions about our children's education. Practice Worksheet - The here is with simple products and negatives are not really the focus. Sample Questions with Answers The curriculum changes over the years, so the following old sample quizzes and exams may differ in content and sequence. php(143) : runtime-created function(1) : eval()'d code(156) : runtime-created function(1. Derivative Worksheets include practice handouts based on power rule, product rule, quotient rule, exponents, logarithms, trigonometric angles, hyperbolic functions, implicit differentiation and more. Calculus with Algebra and Trigonometry II Lecture 2 Applied optimization or calculus word problems Jan 22, 2015 Calculus with Algebra and Trigonometry II Lecture 2Applied optimization or calculus word problemsJan 22, 2015 1 / 16. You are simply making typographical mistakes right and left. The given answers are not simplied. 00 seconds then what was the acceleration? b) What distance will be covered by the snowmobile in the time that it takes to accelerate?. Problem-Attic will make your best work look its best, whether it’s delivered online or in print. Word problems involving integrals usually fall into one of two general categories: alien related and non-alien related. 
L-7-Worksheet by Kuta Software LLC Answers to Review Sheet: Exponential and Logorithmic Functions (ID: 1) 1) 6log u − 3log v 2) 4log 6 u + 4log 6 v 3) log 5 8 3 + log 5 7 3 + log 5 11 3 4) 6log 4 u. Below is a part/part/whole word problem. The rst derivative test uses the rst derivative around the critical point. Derivative Worksheets. If a graph is curving up from its tangent lines, the first derivative is increasing (f ''(x) > 0) and the graph is said to be ' '' ' ''. There are 3 questions with an answer key. Some of the worksheets below are Metric Conversion Practice Problems Worksheet, Metric Mania Conversion Practice : Conversions using the ladder method, Conversion Factors, Measuring Worksheet, Unit Conversion and Dimensional Analysis : Rules and guidelines, examples and practice problems, … Once you find your worksheet(s), you can either. h z oMxabdJe g EwriZtah l vIJn qfei1nMi2tLe A TC 7a7l qc GuHlruPs 9. Always start with basics of derivatives. Here are a set of practice problems for the Derivatives chapter of the Calculus I notes. The normal line is defined as the line that is perpendicular to the tangent line at the point of tangency. f(x) = 3x2(x3 +1)7 5. f 8 aAzlUlx kr tiJg ihNtWsW 0r Te1s meOrLvSerdy. Let's see how this can be used to solve real-world word problems. Students begin to work with Polynomial Word Problems in a series of math worksheets, lessons, and homework. Some problems require the the chain rule with the product rule, within the quotient rule, or within another chain rule. edu Note: Feel free to use these problems in your class and share them with others, but please do not publish them without permission. k 2 YMsaBdjeM Sw7ilt1hg 6IrnzfSiYnuit5ew MAYl6gGeJbqraaP G15. Calculus Worksheet − Max. • Introduce the term “derivative” and connect to the above limits (2. Calculus BC Worksheet 50 Word Problem Optimization 1. Derivatives of exponential functions involve the natural logarithm function, which itself is an important limit in Calculus, as well as the initial exponential function. Problem: For each of the following functions, determine the intervals on which the function is increasing or decreasing. ) and finding about the probability of two things happening in that one task. A ball is thrown at the ground from the top of a tall building. Derivatives Sample Word Problems with Solutions - Free download as Word Doc (. We provide a whole lot of high-quality reference materials on topics starting from algebra to math homework. Applications of the definite integral to velocities and rates 4. p x = log x (3) log 4 x2 = log p x 9. x y2 2− = 1 2. Chapter 3 : Derivatives. Calculus Derivatives Word Problems And Solutions Sample Calculus Problems Therefore we can not just drop some of the limit signs in the solution above to The derivative is not de ned at x = 0,. com provides practical advice on Solution Nonlinear Differential Equation, elementary algebra and algebra syllabus and other math topics. So to solve these problems, all you have to do is answer the questions as if they had asked you to determine a rate or a slope instead of a derivative. In problems 41-50, find dy/dx. They are organized by the type of questions. Partial Derivatives Word Problems Practice. If you are viewing the pdf version of this document (as opposed to viewing it on the web) this document contains only the problems themselves and no solutions are included in this document. 
A Collection of Problems in Di erential Calculus Problems Given At the Math 151 - Calculus I and Math 150 - Calculus I With Review Final Examinations Department of Mathematics, Simon Fraser University 2000 - 2010 Veselin Jungic Petra Menz Randall Pyke Department Of Mathematics Simon Fraser University c Draft date December 6, 2011. Some of the worksheets displayed are Math 1a calculus work, Continuity date period, Graphs of polynomial functions, Precalculus, Work 1 precalculus review functions and inverse, Pre calculus review work answers, Functionswork, Precalculus hgt work conics circles. Use 1, 1 or DNEwhere appropriate. Find the equation of the normal to the curve of y=tan^-1(x/2) at x=3. Students begin to work with Polynomial Word Problems in a series of math worksheets, lessons, and homework. The derivative of e with a functional exponent. The point is a local minimum. The Questions emphasize qualitative issues and answers for them may vary. The given answers are not simplified. Solutions to elementary partial derivative problems by Duane Q. The Organic. Printable in convenient PDF format. 2nd grade word problems. I like to spend my time reading, gardening, running, learning languages and exploring new places. I want to see the steps on how to solve the problem. Math word problems worksheets focus only on math related problems where the student must find the correct solution by subtracting from the story. The speed of the ball in meters per second is. Find the limit (if it exists): Find the derivative using the definition of a derivative (a) f. We provide a whole lot of high-quality reference materials on topics starting from algebra to math homework. Solve calculus and algebra problems online with Cymath math problem solver with steps to show your work. Applications of Derivatives Worksheet Name _____ I. The response received a rating of "5/5" from the student who originally posted the question. Every optimization word problem will end the same way. The collections are intended to be self-teaching workbooks that students can study even before high school. Partial Derivatives Word Problems Practice. After 6 hours, he is at an altitude of 700 feet. If you are taking your first Calculus class, derviatives are sort of like little "puzzles" that you have to work out. In class Worksheet 1. (a) Find the velocity of the rock when t = a (b) Find the velocity of the rock after one second. Introduce younger students to the basics of collecting and organizing data. Remember, your answer will have to be multiplied by 10 and added to 2000. The given answers are not simplied. Then nd a solution to the initial value problem y0= xy3, y(0) = 2. 5) Answer question(s) 6) Check your work and the solutions _____ Download Free Max/Min Word problem answers. About the worksheets This booklet contains the worksheets that you will be using in the discussion section of your course. The equation means that the slope equals the y-value, or function value, for all points on the graph. f(x) = 2x4 +3x2 −1 x2 11. Find two positive numbers such that their product is 192 and the sum of the first plus three times the second is a minimum. Jul 29, the definition of homework 2013 · Wherever the homework debate goes next, be it the front pages or on the back burner, it's worth taking narrative essay outline worksheet a moment to examine if we're asking the right questions about our children's education. This instantaneous rate of change is what we call the derivative. Constructed with the help of Alexa Bosse. 
Calculus with Algebra and Trigonometry II Lecture 2 Applied optimization or calculus word problems Jan 22, 2015 Calculus with Algebra and Trigonometry II Lecture 2Applied optimization or calculus word problemsJan 22, 2015 1 / 16. Derivatives Worksheet. Set it to 0 and solve for t. The cable will go under ground along the shoreline from Point A to a Point P between Points. In economics terms, the opportunity cost of employing an editor is very low to anyone who is self-publishing. Z sec2(x) (1+tan(x))3. This section gives a method of di erentiating those functions which are what we. org has 11,239 printable grammar worksheets in different categories. We will use some of these in class. Problem: Evaluate the following derivatives using the quotient rule. Word problems involving integrals usually fall into one of two general categories: alien related and non-alien related. The equation means that the slope equals the y-value, or function value, for all points on the graph. Velocity and Acceleration a. Being able to find a derivative is a "must do" lesson for any student taking Calculus. Check that this value is a minimum or maximum and read exactly what form the answer should be. Financial Math Formulas and Financial Equations. The following word problem comes from Calculus for the Life Sciences by Greenwell, Ritchey, and Lial. t y >= 0 c] Minimize y^2 (y-5) s. Supplemental Instruction. For problems 1-8, find the derivative of the given function:. Most organic chemistry textbooks contain a broad assortment of suitable problems, and paperback collections of practice problems are also available. Also, references to the text are not references to the current text. 3) Identify the function that you want to maximize/minimize. Instructions to tutor: Read instructions and follow all steps for each problem exactly as given. Every optimization word problem will end the same way. Limits & Derivatives Worksheet SOLUTIONS Math 1100-005 01/26/06 1. The derivative of a function at a point is the slope of the tangent line at this point. When the knight stands 15 feet from the base of the tower and looks up at his precious damsel, the angle of elevation to. The easiest rates of change for most people to understand are those dealing with time. Also, it will provide a detailed explanation. one step equations with fractions worksheets ; math word problem calculator ; tx algebra 1 answers ; dilation calculator ; simplify radicals equations and functions in ti 84 ; decimal to least fraction calculator ; consecutive integers calculator ; how to plug formulas into TI-84 plus calculator ; HOW TO SOLVE QUOTIENTS OF RADICALS. This type of word problem comes up often in Algebra and learning this method of solving is very important. recip/inverse/minutes trig. The difference quotient and the definition of the derivative. Ignoring air resistance, what will its velocity be after 6 seconds of falling? _____ 2. List of Derivative Problems (1 - 18) Find the derivative of: (19 - 25) Find the second derivative of: Problem 19 y = 8x - 3 Answer: y'' = 0. How many smartphones. So,try to solve some problems from algebra expression worksheets. The reason is that it is a simple rule to remember and it applies to all different kinds of functions. If the driver accelerates to a speed of 19. 
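For the optimization problem quoted above (two positive numbers with product 192, minimizing the first plus three times the second), here is a sketch of the standard derivative-based solution in SymPy; the variable names and the elimination step are my own.

```python
# Minimize (first) + 3*(second) subject to first*second = 192, both positive.
import sympy as sp

y = sp.symbols('y', positive=True)
first = 192 / y            # eliminate one unknown using the constraint
total = first + 3*y        # quantity to minimize

crit = sp.solve(sp.diff(total, y), y)          # [8]
y0 = crit[0]
assert sp.diff(total, y, 2).subs(y, y0) > 0    # second-derivative test: a minimum
print(first.subs(y, y0), y0, total.subs(y, y0))  # 24  8  48
```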
Operations with Integers Worksheets Subtracting Signed Integers Worksheets Combining Like Terms Variables And Expressions Worksheets Simplify Expressions Worksheets Evaluating Expression Worksheets Pre Algebra Word Problem Worksheets Simplifying Radicals Worksheets - No Variables Solving Decimal Equations Using Multiplications and Divisions. Worksheet Average and Instantaneous Velocity Math 124 Introduction In this worksheet, we introduce what are called the average and instantaneous velocity in the context of a specific physical problem: A golf ball is hit toward the cup from a distance of 50 feet. Derivatives are found all over science and math, and are a measure of how one variable changes with respect to another variable. Steps for solving Derivative max/min word problems: 1) Draw a diagram and label parts. Some of the worksheets for this concept are Math 171, 03, Derivatives, 04, Chapter 3 work packet ap calculus ab name, Derivatives practice work, Math 1a calculus work, Work for ma 113. The quiz is a collection of math problems. ©1995-2001 Lawrence S. Algebra1help. The following problems require the use of the chain rule. f(x) = (x3) 5 √. (1) log 5 25 = y (2) log 3 1 = y (3) log 16 4 = y (4) log. This has the advantage that you can save the worksheet directly from your browser (choose File → Save) and then edit it in Word or other word processing program. The solution is detailed and well presented. Take the derivative and set it equal to zero. Supplemental Instruction. master problem solving one needs a tremendous amount of practice doing problems. 2) Write relevant formulas. So using the second derivative, plug in points in between each x coordinate of the zeroes you've found and find out if that value is positive or negative. Note that this might fail for some functions, because some of my. On your papers, write down all of your givens as well as which variable represents the requested solution. Derivative Of Exponential Function. The cable will go under ground along the shoreline from Point A to a Point P between Points. 1) A company has started selling a new type of smartphone at the price of$ 110 − 0. The derivative of an exponential function can be derived using the definition of the derivative. Topics you should know: The Intermediate Value Theorem. Math courses include algebra, geometry, algebra 2, precalculus, and calculus. For example, "largest * in the world". If you have more than one unknown then you will need to eliminate all but one variable with additional equations or formulas. Derivative worksheet kuta. Let the numbers be y and 5 - y. The cable will go under ground along the shoreline from Point A to a Point P between Points. Math Word Problems and Solutions - Distance, Speed, Time. The following is a list of worksheets and other materials related to Math 122B and 125 at the UA. So much of math is about solving equations properly. For problems 1-8, find the derivative of the given function:. 17Calculus - You CAN ace calculus. Optimization Problems Practice Solve each optimization problem. Derivatives. At the end of the booklet there are 2 review worksheets, covering parts of the course (based on a two-midterm model). What does the derivative mean in calculus word problems? My teacher has been trying to explain what the derivative means with word problems but I still don't understand how he interprets his answers. There is a world of difference between constant acceleration and constant velocity. This is a math PDF printable activity sheet with several exercises. 
maximizing or minimizing some quantity so as to optimize some outcome. Special cases of limits are solved and the related graphs are described. Multiplication word problems for grade 3 students. The second derivative test uses the second derivative at the critical point. Practice problems for sections on September 27th and 29th. If you did the previous exercise then no calculation is required since this function has the same second derivative as that function and thus is concave up and concave down on the same intervals; i. So, if the first derivative tells us if the function is increasing or decreasing, the second derivative tells us where the graph is curving upward and where it is curving downward. in the fields of earthquake measurement, electronics, air resistance on moving objects etc. For each question, draw a diagram to help you. combined work, and mixture word problems college algebra 2 Mixture and Combined work problems involves using a scenario to create an algebraic equation in one variable and then solving it algebraically. 16 25 400x y2 2+ = 6. A ball is thrown at the ground from the top of a tall building. If we write a = b x, then the exponent x is the logarithm of a with log base of b and we can write a = b x as log b a = x The notation x = log b a is called Logarithm Notation. What is the rate of change of the height of water in the tank?(express the answer in cm / sec). If you are viewing the pdf version of this document (as opposed to viewing it on the web) this document. There are 3 questions with an answer key. Solving a word problem using derivatives. However, there are some cases where you have no choice. If you are taking your first Calculus class, derviatives are sort of like little "puzzles" that you have to work out. When we integrate to get Inverse Trigonometric Functions back, we have use tricks to get the functions to look like one of the inverse trig forms and then usually use U-Substitution Integration to perform the integral. In cases where you require advice on multiplying and dividing fractions or maybe squares, Algebra1help. Or click the "Show Answers" button at the bottom of the page to see all the answers at once.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8857617974281311, "perplexity": 596.6671307533594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664567.4/warc/CC-MAIN-20191112024224-20191112052224-00058.warc.gz"}
https://proceedings.neurips.cc/paper_files/paper/2020/hash/eefc9e10ebdc4a2333b42b2dbb8f27b6-Abstract.html
Authors Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, Umut Simsekli Abstract The idea of slicing divergences has been proven to be successful when comparing two probability measures in various machine learning applications including generative modeling, and consists in computing the expected value of a `base divergence' between \emph{one-dimensional random projections} of the two measures. However, the topological, statistical, and computational consequences of this technique have not yet been well-established. In this paper, we aim at bridging this gap and derive various theoretical properties of sliced probability divergences. First, we show that slicing preserves the metric axioms and the weak continuity of the divergence, implying that the sliced divergence will share similar topological properties. We then precise the results in the case where the base divergence belongs to the class of integral probability metrics. On the other hand, we establish that, under mild conditions, the sample complexity of a sliced divergence does not depend on the problem dimension. We finally apply our general results to several base divergences, and illustrate our theory on both synthetic and real data experiments.
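The abstract describes slicing as averaging a one-dimensional base divergence over random projections. The NumPy sketch below illustrates that idea for a sliced 1-Wasserstein estimator; it is my own illustration, not the authors' code, and the sample sizes, seeds, and projection count are arbitrary choices.

```python
# Illustrative Monte Carlo sliced W1: average the 1-D Wasserstein-1 distance
# between projected samples over random unit directions.
# Assumes both samples have the same number of points.
import numpy as np

def sliced_w1(X, Y, n_projections=200, rng=np.random.default_rng(0)):
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)        # random unit direction
        x_proj = np.sort(X @ theta)           # one-dimensional projections
        y_proj = np.sort(Y @ theta)
        total += np.mean(np.abs(x_proj - y_proj))   # 1-D W1 via sorted samples
    return total / n_projections

X = np.random.default_rng(1).normal(0.0, 1.0, size=(500, 10))
Y = np.random.default_rng(2).normal(0.5, 1.0, size=(500, 10))
print(sliced_w1(X, Y))
```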
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532040357589722, "perplexity": 702.2417375959185}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948858.7/warc/CC-MAIN-20230328104523-20230328134523-00525.warc.gz"}
http://cstheory.stackexchange.com/questions/22337/another-variant-of-partition
# Another variant of PARTITION I've got a reduction of the following partition problem to a certain scheduling problem: Input: A list $a_1\leqslant\cdots\leqslant a_n$ of positive integers in non-decreasing order. Question: Does there exist a vector $(x_1,\ldots,x_n)\in\{-1,1\}^n$ such that $\sum_{i=1}^na_ix_i=0\qquad\text{and}$ $\sum_{i=1}^ka_ix_i\geqslant 0\quad\text{for all }k\in\{1,\ldots,n\}$ Without the second condition it's just PARTITION, hence NP-hard. But the second condition seems to provide a lot of additional information. I'm wondering if there is an efficient way of deciding this variant. Or is it still hard? - Here is a reduction from PARTITION to this problem. Let $(a_1,\dots, a_n)$ be an instance of PARTITION. Assume that $a_1\leq a_2\leq \dots \leq a_n$. Let $N$ be a “very large number”, e.g. $N = (\sum_{i=1}^n |a_i|) + 1$. Consider the instance $$\underbrace{N, \dots, N}_{5n \text{ times}}, N + a_1, \dots, N+a_n,\underbrace{4N, \dots, 4N}_{n \text{ times}}$$ of our problem. 1. If there is a solution $x_1,\dots, x_n$ to PARTITION then $$\underbrace{1, \dots, 1}_{4n \text{ times}},-x_1,\dots,-x_n,x_1,\dots,x_n,\underbrace{-1,\dots,-1}_{n \text{ times}}$$ is a solution to our problem. 2. If there is a solution $(x_1,\dots,x_{5n},y_1,\dots, y_n, z_1,\dots,z_n)$ to the instance of our problem (which we reduced an instance of PARTITION to), then $\sum_{i=1}^n a_i y_i \equiv 0 \pmod N$. Thus $$\sum_{i=1}^n a_i y_i = 0.$$ That is, $(y_1,\dots, y_n)$ is a solution to PARTITION. Thanks Yury. In my application it is essential that the input list is ordered non-decreasingly, and the input $(N,a_1,\ldots,a_n,N)$ in your reduction is not. I'll modify the question to make the order requirement more explicit. – Thomas Kalinowski Apr 29 '14 at 3:18
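To experiment with small instances of the variant above (is there a signing with all prefix sums nonnegative and total sum zero?), here is a brute-force checker of mine; it is exponential in n and is only meant for sanity checks, not as an answer to the complexity question.

```python
# Exponential-time checker for the prefix-nonnegative partition variant.
from itertools import product

def has_prefix_nonneg_partition(a):
    # a is assumed sorted in non-decreasing order, as in the problem statement
    n = len(a)
    for signs in product((1, -1), repeat=n):
        prefix = 0
        ok = True
        for ai, xi in zip(a, signs):
            prefix += ai * xi
            if prefix < 0:
                ok = False
                break
        if ok and prefix == 0:
            return signs
    return None

print(has_prefix_nonneg_partition([1, 2, 3]))   # (1, 1, -1): prefixes 1, 3, 0
print(has_prefix_nonneg_partition([1, 1, 3]))   # odd total, no solution -> None
```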
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8970226049423218, "perplexity": 169.559026858019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860118369.35/warc/CC-MAIN-20160428161518-00193-ip-10-239-7-51.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/19745-rational-problems-2.html
1. Originally Posted by Ari YOUR AWSOME!! but shouldnt it be 2x^2?? correct, i'll change it 2. Ok this is the other problem i dont get(btw i love this forum now) simplify and state restrictions x^2+2x-63/7-x 3. Originally Posted by Ari Ok this is the other problem i dont get(btw i love this forum now) simplify and state restrictions x^2+2x-63/7-x do you mean $\frac {x^2 + 2x - 63}{7 - x}$? if so, you should type (x^2 + 2x - 63)/(7 - x) 4. Originally Posted by Jhevon do you mean $\frac {x^2 + 2x - 63}{7 - x}$? if so, you should type (x^2 + 2x - 63)/(7 - x) yeah thats what i meant, BTW how do you type the way you type? i cant get those fractions and stuff properly like you... 5. Originally Posted by Ari Ok this is the other problem i dont get(btw i love this forum now) simplify and state restrictions x^2+2x-63/7-x $\frac {x^2 + 2x - 63}{7 - x} = \frac {(x + 9)(x - 7)}{-(x - 7)} = -(x + 9)$ the restriction is $x \ne 7$ since we have to consider the domain of the first function as well. in the original question, x = 7 makes the denominator zero Originally Posted by Ari yeah thats what i meant, BTW how do you type the way you type? i cant get those fractions and stuff properly like you... i use LaTex. see here 6. Oh OK so you factor the top and then just cancel them out i see. (5m^2+6m-8)/(5m^2-16m-16) * (20m^2+16m)/(4-m^2) $ \frac {5m^2+6m-8}{5m^2-16m-16} \cdot \frac {20m^2+16m}{4-m^2} $ OMG I DID IT! 7. $ \frac {j^2+7j+12}{j^2+6j+9} \div \frac {j^2-16}{j^2+7j+12} $ $ \frac {(j+3)(j+4)}{(j+3)(j+3)} \div \frac {(j-4)(j+4)}{(j+3)(j+4)} $ 8. I hope you're not letting me do your homework for you... where's that suspicious smiley? Originally Posted by Ari Oh OK so you factor the top and then just cancel them out i see. (5m^2+6m-8)/(5m^2-16m-16) * (20m^2+16m)/(4-m^2) $ \frac {5m^2+6m-8}{5m^2-16m-16} \cdot \frac {20m^2+16m}{4-m^2} $ same story here. $\frac {5m^2 + 6m - 8}{5m^2 - 16m - 16} \cdot \frac {20m^2 + 16m}{4 - m^2} = \frac {(5m - 4) (m + 2)}{(m - 4) (5m + 4)} \cdot \frac {4m (5m + 4)}{(2 - m)(2 + m)}$ $= \frac {4m(5m - 4)}{(m - 4)(2 - m)}$ what do you think the restrictions are? remember, consider the original function as well OMG I DID IT! haha, yes you did not so hard was it? 9. Originally Posted by Ari $ \frac {j^2+7j+12}{j^2+6j+9} \div \frac {j^2-16}{j^2+7j+12} $ now that you can type with LaTex, try this on your own, and i will correct you if anything 10. Thats what i was planning on doing but i got an email saying you posted so im reading that ATM thanks! 11. m doesnt = 4 or -2 <---is that right? and im stuck here, do i cancel or recipricate and muiltyply? $ \frac {j^2+7j+12}{j^2+6j+9} \div \frac {j^2-16}{j^2+7j+12} $ $ \frac {(j+3)(j+4)}{(j+3)(j+3)} \div \frac {(j-4)(j+4)}{(j+3)(j+4)} $ 12. Originally Posted by Ari m doesnt = 4 or -2 <---is that right? what about the (2 - x) and the (5m + 4) in the denominators of the original fractions and im stuck here, do i cancel or recipricate and muiltyply? $ \frac {j^2+7j+12}{j^2+6j+9} \div \frac {j^2-16}{j^2+7j+12} $ $ \frac {(j+3)(j+4)}{(j+3)(j+3)} \div \frac {(j-4)(j+4)}{(j+3)(j+4)} $ multiply the first fraction by the reciprocal of the second fraction and see if anything cancels. if not, just multiply 13. so m doesnt = 2 but how do i do the (5m-4)? $ \frac {(j+3)(j+4)}{(j+3)(j+3)} \cdot \frac {(j+3)(j+4)}{(j-4)(j+4)} $ $ \frac {(j+3)(j-3)}{(j+3)(j-4)} $ did i cancel the right stuff? 14. Originally Posted by Ari so m doesnt = 2 but how do i do the (5m-4)? 
set 5m + 4 = 0, solve for m $ \frac {(j+3)(j+4)}{(j+3)(j+3)} \cdot \frac {(j+3)(j+4)}{(j-4)(j+4)} $ $ \frac {(j+3)(j-3)}{(j+3)(j-4)} $ did i cancel the right stuff? no 15. m doesnt = -4/5 then what am i supposed to cancel?
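The answer the thread is converging on can be checked mechanically; the snippet below is mine, not from the forum, and it simplifies the quotient and lists the restrictions.

```python
# Check ((j^2+7j+12)/(j^2+6j+9)) / ((j^2-16)/(j^2+7j+12)) with SymPy.
import sympy as sp

j = sp.symbols('j')
expr = ((j**2 + 7*j + 12)/(j**2 + 6*j + 9)) / ((j**2 - 16)/(j**2 + 7*j + 12))

print(sp.cancel(expr))     # (j + 4)/(j - 4)
# Restrictions come from every denominator in the original expression:
# j != -3 (from j^2+6j+9 and j^2+7j+12), j != 4 and j != -4 (from j^2-16).
```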
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8200674653053284, "perplexity": 1735.1342404587108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661155.56/warc/CC-MAIN-20160924173741-00174-ip-10-143-35-109.ec2.internal.warc.gz"}
http://de.mathworks.com/help/simrf/examples/rf-noise-modeling.html?prodcode=RB&language=en&nocookie=true
Accelerating the pace of engineering and science # SimRF ## RF Noise Modeling This example shows how to use the SimRF™ Circuit Envelope library to simulate noise and calculate noise power. Results are compared against theoretical calculations and a Communications System Toolbox™ reference model. System Architecture The RF system, shown in white, consists of: • A Configuration block, which sets global simulation parameters for the SimRF system. With the Simulate Noise option checked, noise is included in the simulation. • A Noise source with a power spectral density of applied at the input. In this equation, is the Boltzmann constant, is the temperature of the source, and is the noise reference impedance. The calculated noise level of -174 dBm/Hz is used in this example. The Noise source is an explicit signal. • An Amplifier block with a specified power gain and noise figure. • An Outport block, with the Source type parameter set to Voltage. The Communications System Toolbox reference system, shown in green, consists of: • Two Receiver Thermal Noise blocks that model both the external noise and the amplifier noise. Calculate Power block computes RMS noise power. Note, that Communications System Toolbox signal is referenced to 1 Ohm, while SimRF power is computed for the actual load . The example model defines variables for block parameters using a callback function. To access model callbacks, select File > Model Properties > Model Properties and click the Callbacks tab in the Model Properties window. Running the Example 1. Type open_system('simrfV2_noise') at the Command Window prompt. 2. Select Simulation > Run. The Display block verifies that the SimRF and Communications System Toolbox noise models are equivalent. Computing RF System Noise To enable noise in the SimRF circuit envelope environment: • In the Configuration block dialog, select Simulate noise. • Specify a Temperature. SimRF software uses this value to calculate the equivalent noise temperature inside the amplifier. • Specify the Noise figure (dB) parameter of any amplifiers or mixers in the system. In the example, for a specified LNA gain of 4 dB and noise figure of 3 dB, the output noise is calculated using the following equations: The next equation converts the noise factor to an equivalent noise temperature. is the Temperature parameter of the SimRF Parameters block. The final equation calculates the output noise power. is the temperature of the external noise (noise source in this example) The available noise power is the power that can be supplied by a resistive source when it is feeding a noiseless resistive load equal to the source resistance. The Reference External Noise block generates an available power referenced to 50 Ohms. The Gain Front end block models the voltage divider due to the source resistance and the input impedance of the amplifier. The Amplifier noise and the Gain block model the noise added by the amplifier and the amplifier gain respectively. The output of the Amplifier Gain block is equal to the voltage across the load resistor in the SimRF system.
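The block parameters above (gain 4 dB, noise figure 3 dB, a -174 dBm/Hz source) can be sanity-checked with the standard textbook noise relations. The script below is an independent back-of-the-envelope calculation, not the SimRF model itself, and the formulas F = 10^(NF/10), T_eq = T0(F-1), and output power = G k (T_source + T_eq) B are the usual ones rather than anything quoted from the page.

```python
# Back-of-the-envelope output-noise estimate for a single amplifier stage.
import math

k = 1.380649e-23        # Boltzmann constant, J/K
T0 = 290.0              # reference temperature, K
B = 1.0                 # 1 Hz bandwidth -> results are per-Hz densities

NF_dB, gain_dB = 3.0, 4.0
F = 10**(NF_dB/10)                   # noise factor
G = 10**(gain_dB/10)                 # power gain
T_eq = T0*(F - 1)                    # equivalent input noise temperature

source_dBm_per_Hz = 10*math.log10(k*T0*B/1e-3)     # about -174 dBm/Hz
out = G*k*(T0 + T_eq)*B                             # amplified source + amplifier noise
print(round(source_dBm_per_Hz, 1), round(10*math.log10(out/1e-3), 1))  # -174.0 -167.0
```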
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9055221080780029, "perplexity": 2409.1554762043907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195036337.8/warc/CC-MAIN-20150601214356-00072-ip-10-180-206-219.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/332325/if-p-is-prime-a-in-bbb-z-orda-p-3-then-how-to-find-orda1-p
# If $p$ is prime, $a \in \Bbb Z$, $ord^a_p=3$. Then how to find $ord^{a+1}_p=?$ If $p$ is prime, $a \in \Bbb Z$, $ord^a_p=3$. Then how to find $ord^{a+1}_p=?$ about $ord_n^a$ we know that is $(a,n)=1$ and smallest integer number as $d$ such that $a^d \equiv 1$ so $d=ord_n^a$ also we have: if $(a,n)=1$, $a\equiv b \pmod n$then $gcd(b,n)=1$, $ord_n^b=ord_n^a$ if $k \in \Bbb N$ , $a^k \equiv 1 \pmod n$ iff $ord_n^a|k$ $a^{k_1} \equiv a^{k_2} \pmod n$ iff $k_1 \equiv k_2 \pmod { ord_n^a}$ $ord_n^a| \phi(n)$ it's my trying : $a^3\equiv 1 \pmod p$ so $(a-1)(a^2 +a+1) \equiv 0 \pmod p$ so $a \equiv1 \pmod p$ that is impossible. so $a^2+a+1 \equiv 0 \pmod p$ so $a+1 \equiv -a^2 \pmod p$ how to find smallest $d$ such that $gcd(p,a+1)=1$ and $(a+1)^d \equiv 1 \pmod p$ also we have: $(-(a+1))^d \equiv (a^2)^d \equiv 1$ also $ord^a_p=ord^{a^2}_p$so $d=3$, $(a+1)^3 \equiv -1$ so $(a+1)^6 \equiv 1$ the problem is : Is $6$ smallest? how to prove for $2,4,5$ that is not ? in fact how to prove : $(a+1)^i \not \equiv 0 \pmod p$, $i=2,4,5$ - Hint: Note that since $(a+1)^6\equiv 1\pmod{p}$, the order of $a+1$ divides $6$. It follows that the only candidates to be eliminated are $1$, $2$, and $3$. The numbers $4$ and $5$ are not in the game. Added: The fact that the order of $a+1$ is not $1$ is easy to prove, but should be proved. It comes down to the fact that the order of $a$ is $\ne 2$. To show $a+1$ does not have order $2$, suppose that it does. Then from $(a+1)^2\equiv 1\pmod{p}$ we get that $a(a+2)\equiv 0\pmod{p}$. Now show that we cannot have $a\equiv -2\pmod{p}$. To show that the order of $a+1$ is not $3$, suppose it is. Then from $(a+1)^3\equiv 1\pmod{p}$ we obtain $3a^2+3a+1\equiv 0\pmod{p}$. But $p^2+p+1\equiv 0\pmod{p}$. From this one can quickly obtain a contradiction. case$3$ is impossible by attention to question.but why $4,5$ is impossible? also if $a \equiv -2$ then $-8 \equiv 1$ that is true for $p=3$ – elham Mar 22 '13 at 19:57 Of course Case $3$ is impossible, but the proof is not built into the wording of the question. The answer said why $4$ and $5$ are impossible. Suppose say that the order is $4$. You showed that $(a+1)^6\equiv 1\pmod{p}$. In general, if $b^k\equiv 1\pmod{p}$, then the order of $b$ divides $k$. But $4$ does not divide $6$. Same argument works for $5$. To show that the order $b$ divides $k$, use a general argument from group theory. Or let $e$ be the order, and let $k=qe+r$, $0\le r\lt e$. Since $b^k\equiv 1$ and $b^e\equiv 1$, we get $b^r\equiv 1$. Unless $r=0$, we get contradiction. – André Nicolas Mar 22 '13 at 20:06
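A quick numerical check of the fact being proved (order of a equal to 3 forces order of a+1 to be 6) is easy to run; the brute-force order function and the list of small primes below are my own.

```python
# Brute-force multiplicative orders modulo a few small primes p with 3 | p-1.
def order(a, p):
    a %= p
    x, d = a, 1
    while x != 1:
        x = (x * a) % p
        d += 1
    return d

for p in [7, 13, 19, 31, 37, 43]:
    for a in range(2, p):
        if order(a, p) == 3:
            assert order(a + 1, p) == 6, (p, a)
print("ord_p(a) = 3 implies ord_p(a+1) = 6 in all cases tested")
```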
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.992897093296051, "perplexity": 75.50503097944372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451744.67/warc/CC-MAIN-20151124205411-00255-ip-10-71-132-137.ec2.internal.warc.gz"}
http://ccms.claremont.edu/applied-math-seminar/new-approach-regularity-and-singularity-questions-class-nonlinear-evolutionary-
## A new approach to regularity and singularity questions for a class of non-linear evolutionary PDEs (eg, 3-D Navier-Stokes eqns) When Start: 03/09/2010 - 4:15pm End  : 03/09/2010 - 5:15pm Category Applied Math Seminar Speaker Saleh Tanveer (The Ohio State University) Abstract (Joint work with Ovidiu Costin, G. Luo.) Abstract: ------------- We consider a new approach to a class of evolutionary PDEs where question of global existence or lack of it is tied to the asymptotics of solution to a non-linear integral equation in a dual variable whose solution has been shown to exist a priori. This integral equation approach is inspired by Borel summation of a formally divergent series for small time, but has general applicability and is not limited to analytic initial data. In this approach, there is no blow-up in the variable p, which is dual to 1/t or some power 1/t^n; solutions are known to be smooth in p and exist globally for p in R+. Exponential growth in p, for different choice of n, signifies finite time singularity. On the other hand, sub-exponential growth implies global existence. Further, unlike PDE problems where global existence is uncertain, a discretized Galerkin approximation to the associated integral equation has controlled errors. Further, known integral solution for p in [0, p_0], numerically or otherwise, gives sharper analytic bounds on the exponents in p and hence better estimate on the existence time for the associated PDE. We will also discuss particular results for 3-D Navier-Stokes and discuss ways in which this method may be relevant to numerical studies of finite time blow-up problems. Where Third Floor - Sprague Building Proudly Serving Math Community at the Claremont Colleges Since 2007
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9430813193321228, "perplexity": 1047.1062879171302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813109.8/warc/CC-MAIN-20180220224819-20180221004819-00473.warc.gz"}
http://mathhelpforum.com/calculus/128137-complex-power-series-print.html
# Complex Power Series • Feb 10th 2010, 02:12 AM alawrie Complex Power Series Can anyone help with this question? Find the power series expansions of (a) log z about z = 1; (b) 1/(1 + z) about z = −5. What is the radius of convergence in each case? Thanks for any help • Feb 10th 2010, 04:27 AM mr fantastic Quote: Originally Posted by alawrie Can anyone help with this question? Find the power series expansions of (a) log z about z = 1; (b) 1/(1 + z) about z = −5. What is the radius of convergence in each case? Thanks for any help For (a), where are you stuck? What have you tried? (b) Note that $\frac{1}{1 + z} = \frac{1}{(z + 5) -4}$. Region I: $0 < \frac{|z + 5|}{4} < 1$. $\frac{1}{(z + 5) -4} = \frac{-1}{4 - (z + 5)} = -\frac{1}{4} \left( \frac{1}{1 - \left(\frac{z+5}{4}\right)} \right)$ and you can get a series using the sum of an infinite geometric series, noting that $r = \frac{z + 5}{4}$. Region II: $\frac{|z + 5|}{4} >1$. $\frac{1}{(z + 5) -4} = \frac{1}{z + 5} \left( \frac{1}{1 - \left[ \frac{4}{z + 5}\right]} \right)$ and you can get a series using the sum of an infinite geometric series, noting that $r = \frac{4}{z + 5}$.
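Both expansions and radii can be checked with SymPy; the substitution w = z + 5 below is my own shorthand for expanding about z = -5, and the radii are read off from the nearest singularities rather than computed by the library.

```python
import sympy as sp

z, w = sp.symbols('z w')   # w stands for z + 5 in the second expansion

print(sp.series(sp.log(z), z, 1, 5))
# first terms: (z-1) - (z-1)**2/2 + (z-1)**3/3 - (z-1)**4/4 + ...
# nearest singularity of log z is z = 0, so the radius of convergence is 1

print(sp.series(1/(w - 4), w, 0, 4))   # 1/(1+z) rewritten with w = z + 5
# -1/4 - w/16 - w**2/64 - w**3/256 + O(w**4)
# nearest singularity of 1/(1+z) is z = -1, i.e. |w| = 4, so the radius is 4
```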
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.974121630191803, "perplexity": 1386.940333993673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806447.28/warc/CC-MAIN-20171122012409-20171122032409-00734.warc.gz"}
https://www.clutchprep.com/physics/practice-problems/141076/what-is-the-most-likely-method-in-which-jupiter-generates-its-internal-heat-a-ra
Gravitational Force Inside Earth Video Lessons Concept # Problem: What is the most likely method in which Jupiter generates its internal heat? A) radioactive decay B) internal friction due to its high rotation rate C) chemical processes D) nuclear fusion in the core E) by contracting, changing gravitational potential energy into thermal energy ###### Expert Solution Jupiter, Saturn, and Neptune rely mainly on internal heat sources for their thermal energy, since radiation from the Sun is limited by their large distance from it.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8508803248405457, "perplexity": 2287.6040102169477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058222.43/warc/CC-MAIN-20210926235727-20210927025727-00495.warc.gz"}
http://link.springer.com/article/10.1007%2Fs10439-007-9282-1
Annals of Biomedical Engineering , Volume 35, Issue 7, pp 1264–1275 # Electric Fields around and within Single Cells during Electroporation—A Model Study • Brian J. Mossop • Roger C. Barr • Joshua W. Henshaw • Fan Yuan Article DOI: 10.1007/s10439-007-9282-1 Mossop, B.J., Barr, R.C., Henshaw, J.W. et al. Ann Biomed Eng (2007) 35: 1264. doi:10.1007/s10439-007-9282-1 ## Abstract One of the key issues in electric field-mediated molecular delivery into cells is how the intracellular field is altered by electroporation. Therefore, we simulated the electric field in both the extracellular and intracellular domains of spherical cells during electroporation. The electroporated membrane was modeled macroscopically by assuming that its electric resistivity was smaller than that of the intact membrane. The size of the electroporated region on the membrane varied from zero to the entire surface of the cell. We observed that for a range of values of model constants, the intracellular current could vary several orders of magnitude whereas the maximum variations in the extracellular and total currents were less than 8% and 4%, respectively. A similar difference in the variations was observed when comparing the electric fields near the center of the cell and across the permeabilized membrane, respectively. Electroporation also caused redirection of the extracellular field that was significant only within a small volume in the vicinity of the permeabilized regions, suggesting that the electric field can only facilitate passive cellular uptake of charged molecules near the pores. Within the cell, the field was directed radially from the permeabilized regions, which may be important for improving intracellular distribution of charged molecules. ### Keywords Intracellular electric field Single cells Electroporation Molecular delivery ## Authors and Affiliations • Brian J. Mossop • 1 • Roger C. Barr • 1 • Joshua W. Henshaw • 1 • Fan Yuan • 1 1. 1.Department of Biomedical EngineeringDuke UniversityDurhamUSA
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9317684173583984, "perplexity": 2460.179753149166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118743.41/warc/CC-MAIN-20170423031158-00050-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.mathway.com/examples/basic-math/factors-fractions-and-exponents/converting-to-a-fraction?id=214
# Basic Math Examples Convert the decimal number to a fraction by placing the decimal number over a power of ten. Since there are n digits to the right of the decimal point, place the decimal number over 10^n.
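The same place-it-over-a-power-of-ten idea can be illustrated with Python's fractions module; the value 0.35 is an example of mine, since the page's own number was lost in extraction.

```python
# Convert a decimal string to a fraction by placing it over a power of ten.
from fractions import Fraction

decimal_text = "0.35"
digits_after_point = len(decimal_text.split(".")[1])              # 2
numerator = int(decimal_text.replace(".", ""))                    # 35
over_power_of_ten = Fraction(numerator, 10**digits_after_point)   # 35/100
print(over_power_of_ten)                                          # 7/20 after reducing
```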
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8106619119644165, "perplexity": 3407.8772426062224}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816351.97/warc/CC-MAIN-20180225090753-20180225110753-00780.warc.gz"}
http://math.stackexchange.com/questions/178741/additive-quotient-group-mathbbq-mathbbz-is-isomorphic-to-the-multiplicat?answertab=active
# Additive quotient group $\mathbb{Q}/\mathbb{Z}$ is isomorphic to the multiplicative group of roots of unity I would like to prove that the additive quotient group $\mathbb{Q}/\mathbb{Z}$ is isomorphic to the multiplicative group of roots of unity. Now every $X \in \mathbb{Q}/\mathbb{Z}$ is of the form $\frac{p}{q} + \mathbb{Z}$ for $0 \leq \frac{p}{q} < 1$ for a unique $\frac{p}{q} \in \mathbb{Q}.$ This suggest taking the map $f:\mathbb{Q}/\mathbb{Z} \mapsto C^{\times}$ defined with the rule $$f(\frac{p}{q} + \mathbb{Z}) = e^{\frac{2\pi i p}{q}}$$ where $\frac{p}{q}$ is the mentioned representative. Somehow I have problems showing that this is a bijective function in a formal way. I suspect I do not know the properties of the complex roots of unity well enough. Can someone point me out (perhaps with a hint) how to show that $f$ is injective and surjective? - You could use first isomorphism theorem. Then you don't need to prove that the map is bijective - you get this from that theorem. –  Martin Sleziak Aug 4 '12 at 10:45 I will say every $\mathbb{Q/Z}$ is isomorphic to $X = \left\{x : x \in [0,1] \text{ and }x \in \mathbb{Q} \right\}$ –  Jayesh Badwaik Aug 4 '12 at 10:46 From the geometrical form ($e^{i\varphi}$ corresponds to angle $\varphi$) you have: $e^{(2\pi p)/q}=1$ implies $2\pi p/q=2\pi k$ for some $k\in\mathbb Z$; i.e. $\frac pq\in\mathbb Z$. Was this what you had problem with? –  Martin Sleziak Aug 4 '12 at 10:49 You probably mean i.e $\frac{p}{q} \in Q$ right? I don't quite follow your argument here. Could you please elaborate a bit more? Thanks. –  Jernej Aug 4 '12 at 11:59 BTW you can read here how to reply in comments. (It was only accident that I came back to this question and I saw your comment directed to me. If you want to get attention of other users who previously left comment, you can use @username.) –  Martin Sleziak Aug 4 '12 at 13:23 show 1 more comment To prove it is a bijection, one can use rather "primitive" methods. suppose that: $f\left(\frac{p}{q} + \Bbb Z\right) = f\left(\frac{p'}{q'} + \Bbb Z\right)$, then: $e^{2\pi ip/q} = e^{2\pi ip'/q'}$, so $e^{2\pi i(p/q - p'/q')} = 1$. This, in turn, means that $\frac{p}{q} - \frac{p'}{q'} \in \Bbb Z$, so the cosets are equal. Hence $f$ is injective. On the other hand, if $e^{2\pi i p/q}$ is any $q$-th root of unity, it clearly has the pre-image $\frac{p}{q} + \Bbb Z$ in $\Bbb Q/\Bbb Z$ (so $f$ is surjective). One caveat, however. You haven't actually demonstrated $f$ is a function (i.e., that it is well-defined, although if you stare hard at the preceding, I'm sure it will come to you). - This is what I thought. But somehow I wasn't sure that 1. Every q-th root of unity is of the form $e^{\frac{2 \pi i p}{q}}$ and consequently that for every n-th root of unity z, $e^k=1$ if and only if $n|k$. As for the well defined remark isn't that implied by the uniqueness of the representative? –  Jernej Aug 4 '12 at 18:40 Be canonical! You have a morphism of groups $ex:\mathbb R \to S^1: r\mapsto e^{2i\pi r}$, where $S^1$ is the multiplicative group of complex numbers with $\mid z\mid=1$. This morphism is surjective and has kernel $\mathbb Z$. [The wish to have kernel $\mathbb Z$ instead of $2\pi \mathbb Z$ dictated the choice of $ex(r)=e^{2i\pi r}$ instead of $e^{ir}$]. Restricting the morphism to $\mathbb Q$ induces a morphism $res(ex):\mathbb Q\to S^1$ with kernel $\mathbb Q\cap \mathbb Z=\mathbb Z$ and image $\mu_\infty\stackrel {def}{=} e^{2i\pi \mathbb Q} \subset S^1$. 
The crucial observation is that this image is $\mu_\infty=\bigcup_n \mu_n$, where $\mu_n$ is the set of $n$-roots of unity $e^{\frac {2i\pi k}{n}}\quad (k=1,2,...,n)$. Hence $\mu_\infty$ is the set of all roots of unity i.e. the set of complex numbers $z \in \mathbb C$ with $z^n=1$ for some $n\in \mathbb N^*.$ Applying Noether's isomorphism you finally get the required group isomorphism (be attentive to the successive presence and absence of a bar over the $q$ in the formula) $$Ex: \mathbb Q/\mathbb Z \xrightarrow {\cong} \mu_\infty:\overline {q}\mapsto e^{2i\pi q}$$ A cultural note This elementary isomorphism is actually useful in quite advanced mathematics.You will find it, for example, in Grothendieck's Classes de Chern et représentations linéaires des groupes discrets. - Define $\,f: \Bbb Q\to S^1:=\{z\in \Bbb C\;:\;|z|=1\}\,\,,\,f(q):=e^{iq}\,$ , show this is a homomorphism of groups, find its kernel and use the fist isomorphism theorem.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9476327896118164, "perplexity": 259.98828576509203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345768632/warc/CC-MAIN-20131218054928-00043-ip-10-33-133-15.ec2.internal.warc.gz"}
http://www.math.gatech.edu/seminars-colloquia/series/cdsns-colloquium/dmitry-dolgopyat-20130227
## Piecewise linear Fermi-Ulam pingpongs. Series: CDSNS Colloquium Wednesday, February 27, 2013 - 16:00, 1 hour (actually 50 minutes) Location: Skiles Bldg Rm. 005 Speaker: Dmitry Dolgopyat (Univ. of Maryland) Abstract: We consider a particle moving freely between two periodically moving infinitely heavy walls. We assume that one wall is fixed and the second one moves with piecewise linear velocities. We study the question of existence and abundance of accelerating orbits for that model. This is joint work with Jacopo de Simoi.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8813350796699524, "perplexity": 3077.2215649964915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864798.12/warc/CC-MAIN-20180522151159-20180522171159-00440.warc.gz"}
https://proofwiki.org/wiki/Definition:Closed_Ball/P-adic_Numbers
## Definition Let $p$ be a prime number. Let $\struct {\Q_p, \norm {\,\cdot\,}_p}$ be the $p$-adic numbers. Let $a \in \Q_p$. Let $\epsilon \in \R_{>0}$ be a strictly positive real number. The closed $\epsilon$-ball of $a$ in $\struct {\Q_p, \norm {\,\cdot\,}_p }$ is defined as: $\map { {B_\epsilon}^-} a = \set {x \in \Q_p: \norm {x - a}_p \le \epsilon}$ In $\map { {B_\epsilon}^-} a$, the value $\epsilon$ is referred to as the radius of the closed $\epsilon$-ball. In $\map { {B_\epsilon}^-} a$, the value $a$ is referred to as the center of the closed $\epsilon$-ball.
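As an illustration of the definition, here is a small Python sketch (my own helper functions, not ProofWiki code) that computes the p-adic norm of a rational and tests membership in a closed ball.

```python
# p-adic norm of a rational and closed-ball membership test.
from fractions import Fraction

def padic_norm(x: Fraction, p: int) -> Fraction:
    if x == 0:
        return Fraction(0)
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:           # count powers of p in the numerator
        num //= p
        v += 1
    while den % p == 0:           # subtract powers of p in the denominator
        den //= p
        v -= 1
    return Fraction(1, p)**v if v >= 0 else Fraction(p)**(-v)

def in_closed_ball(x, a, eps, p):
    return padic_norm(Fraction(x) - Fraction(a), p) <= eps

# Is 26 in the closed ball of radius 1/5 around 1 in Q_5?  |26 - 1|_5 = 1/25 <= 1/5
print(in_closed_ball(26, 1, Fraction(1, 5), 5))   # True
```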
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9836785197257996, "perplexity": 67.38690897547791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00063.warc.gz"}
http://www.emathzone.com/tutorials/calculus/average-and-instantaneous-rate-of-change.html
# Average and Instantaneous Rate of Change A variable which can assume any value independently is called an independent variable, and a variable which depends on some independent variable is called the dependent variable. For example, if $y = f(x)$ and $x = 0,1,2,3, \ldots$ etc., then $x$ behaves independently, so we call it the independent variable, while the behavior of $y$ or $f(x)$ depends on the variable $x$, so we call it the dependent variable. Increment: Literally the word increment means an increase, but in mathematics this word covers both increase and decrease, for the increment may be positive or negative. Briefly and simply, the word increment in mathematics means "the difference between two values of a variable", i.e. the final value minus the initial value is called an increment in the variable. The increment in $x$ is denoted by the symbols $\delta x$ or $\Delta x$ (read as "delta $x$"). If $y = f(x)$, and $x$ changes from an initial value ${x_0}$ to the final value ${x_1}$, then $y$ changes from an initial value ${y_0} = f({x_0})$ to the final value ${y_1} = f({x_1})$. Thus the increment in $x$, $\Delta x = {x_1} - {x_0}$, produces a corresponding increment in $y$, $\Delta y = {y_1} - {y_0} = f({x_1}) - f({x_0})$. Average Rate of Change: If $y = f(x)$ is a real valued continuous function on the interval $({x_0},{x_1})$, then the average rate of change of $y$ with respect to $x$ over this interval is $\frac{{f({x_1}) - f({x_0})}}{{{x_1} - {x_0}}}$. But $\Delta x = {x_1} - {x_0} \Rightarrow {x_1} = {x_0} + \Delta x$, so this equals $\frac{{f({x_0} + \Delta x) - f({x_0})}}{{\Delta x}}$. Instantaneous Rate of Change: If $y = f(x)$ is a real valued continuous function on the interval $({x_0},{x_1})$, then the instantaneous rate of change of $y$ with respect to $x$ at ${x_0}$ is $\mathop {\lim }\limits_{{x_1} \to {x_0}} \frac{{f({x_1}) - f({x_0})}}{{{x_1} - {x_0}}}$. But $\Delta x = {x_1} - {x_0} \Rightarrow {x_1} = {x_0} + \Delta x$, which shows that $\Delta x \to 0$ as ${x_1} \to {x_0}$, so the instantaneous rate of change may also be written $\mathop {\lim }\limits_{\Delta x \to 0} \frac{{f({x_0} + \Delta x) - f({x_0})}}{{\Delta x}}$. Average or Instantaneous Rate of Change of Distance (Average or Instantaneous Velocity): Suppose a particle (or an object) is moving in a straight line and its positions (from some fixed point) after times ${t_0}$ and ${t_1}$ are given by $S({t_0})$ and $S({t_1})$; then the average rate of change of distance, or the average velocity, is $\frac{{S({t_1}) - S({t_0})}}{{{t_1} - {t_0}}}$, and the instantaneous rate of change of distance, or instantaneous velocity, at ${t_0}$ is $\mathop {\lim }\limits_{{t_1} \to {t_0}} \frac{{S({t_1}) - S({t_0})}}{{{t_1} - {t_0}}}$.
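A short SymPy example makes the two definitions concrete; the function f(x) = x^2 and the interval [1, 3] are my own illustration, not from the page.

```python
# Average rate of change over [1, 3] versus instantaneous rate of change at x = 1
# for y = f(x) = x**2.
import sympy as sp

x, dx = sp.symbols('x dx')
f = x**2

average = (f.subs(x, 3) - f.subs(x, 1)) / (3 - 1)                        # 4
instantaneous = sp.limit((f.subs(x, 1 + dx) - f.subs(x, 1))/dx, dx, 0)   # 2
print(average, instantaneous)
```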
{"extraction_info": {"found_math": true, "script_math_tex": 41, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 46, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9708069562911987, "perplexity": 326.80715345019036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476990033880.51/warc/CC-MAIN-20161020190033-00057-ip-10-142-188-19.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/gravity-as-cause-or-effect.867607/
# B Gravity as cause or effect? Tags: 1. Apr 18, 2016 ### Donald Marks According to current theory, high concentrations of matter warp space-time and create gravity. The Einstein field equations EFE describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy. Would not a reinterpretation of the EFE lead to the following alternative explanation of how matter collects to form planets and stars? Rather than matter collecting, distorting space-time and thereby creating gravity effect, could discontinuous areas of SpaceTime result in concentrated areas of gravity which then attract collections of matter? 2. Apr 18, 2016 ### Staff: Mentor No, because there are no discontinuous areas of spacetime in the theory. The Oppenheimer-Snyder solution to the Einstein field equations describes how infalling matter behaves in general relativity. 3. Apr 18, 2016 ### Staff: Mentor In addition, there would be no reason why those regions of spacetime should exactly follow the matter in literally all experiments (including those where matter is accelerated by other forces, like electromagnetism). Unless the matter (more precisely, the stress energy tensor) itself is the source of gravity. 4. Apr 19, 2016 ### Donald Marks It is my understanding that discontinuous areas of spacetime are not excluded in the theory. Discontinuities could not in practicality be excluded by observation. 5. Apr 19, 2016 ### Staff: Mentor Where are you getting that understanding from? Spacetime is a continuous 4-dimensional manifold in GR. Why not? I think you need to be much more precise in explaining exactly what you mean by "discontinuities in spacetime". Draft saved Draft deleted Similar Discussions: Gravity as cause or effect?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9024788737297058, "perplexity": 2775.973421308418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811655.65/warc/CC-MAIN-20180218042652-20180218062652-00776.warc.gz"}
http://self.gutenberg.org/articles/eng/Zig-zag_lemma
# Zig-zag lemma In mathematics, particularly homological algebra, the zig-zag lemma asserts the existence of a particular long exact sequence in the homology groups of certain chain complexes. The result is valid in every abelian category. ## Statement In an abelian category (such as the category of abelian groups or the category of vector spaces over a given field), let $(\mathcal{A},\partial_{\bullet})$, $(\mathcal{B},\partial_{\bullet}')$ and $(\mathcal{C},\partial_{\bullet}'')$ be chain complexes that fit into the following short exact sequence: $0 \longrightarrow \mathcal{A} \stackrel{\alpha}{\longrightarrow} \mathcal{B} \stackrel{\beta}{\longrightarrow} \mathcal{C} \longrightarrow 0$ Such a sequence is shorthand for a commutative diagram in which the rows are exact sequences and each column is a complex. The zig-zag lemma asserts that there is a collection of boundary maps $\delta_n : H_n(\mathcal{C}) \to H_{n-1}(\mathcal{A})$ that makes the following sequence exact: $\cdots \longrightarrow H_n(\mathcal{A}) \stackrel{\alpha_*}{\longrightarrow} H_n(\mathcal{B}) \stackrel{\beta_*}{\longrightarrow} H_n(\mathcal{C}) \stackrel{\delta_n}{\longrightarrow} H_{n-1}(\mathcal{A}) \stackrel{\alpha_*}{\longrightarrow} H_{n-1}(\mathcal{B}) \longrightarrow \cdots$ The maps $\alpha_*$ and $\beta_*$ are the usual maps induced on homology. The boundary maps $\delta_n$ are explained below. The name of the lemma arises from the "zig-zag" behavior of the maps in the sequence. In an unfortunate overlap in terminology, this theorem is also commonly known as the "snake lemma," although there is another result in homological algebra with that name. Interestingly, the "other" snake lemma can be used to prove the zig-zag lemma, in a manner different from what is described below. ## Construction of the boundary maps The maps $\delta_n$ are defined using a standard diagram chasing argument. Let $c \in C_n$ represent a class in $H_n(\mathcal{C})$, so $\partial_n''(c) = 0$. Exactness of the row implies that $\beta_n$ is surjective, so there must be some $b \in B_n$ with $\beta_n(b) = c$. By commutativity of the diagram, $\beta_{n-1} \partial_n'(b) = \partial_n'' \beta_n(b) = \partial_n''(c) = 0.$ By exactness, $\partial_n'(b) \in \ker \beta_{n-1} = \mathrm{im}\, \alpha_{n-1}.$ Thus, since $\alpha_{n-1}$ is injective, there is a unique element $a \in A_{n-1}$ such that $\alpha_{n-1}(a) = \partial_n'(b)$. This element is a cycle: since $\alpha_{n-2}$ is injective and $\alpha_{n-2} \partial_{n-1}(a) = \partial_{n-1}' \alpha_{n-1}(a) = \partial_{n-1}' \partial_n'(b) = 0$ (because $\partial'^2 = 0$), we have $\partial_{n-1}(a) \in \ker \alpha_{n-2} = \{0\}$. This means $a$ is a cycle, so it represents a class in $H_{n-1}(\mathcal{A})$.
We can now define $\delta_\left\{ \right\}^\left\{ \right\}\left[c\right] = \left[a\right].\,$ With the boundary maps defined, one can show that they are well-defined (that is, independent of the choices of c and b). The proof uses diagram chasing arguments similar to that above. Such arguments are also used to show that the sequence in homology is exact at each group.
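For instance, the chase showing independence of the choice of $b$ runs along the same lines; a brief sketch: if $\beta_n(b) = \beta_n(\tilde{b}) = c$, then $b - \tilde{b} \in \ker \beta_n = \mathrm{im}\, \alpha_n$, say $b - \tilde{b} = \alpha_n(e)$ with $e \in A_n$. Then

$\alpha_{n-1}(a - \tilde{a}) = \partial_n'(b) - \partial_n'(\tilde{b}) = \partial_n' \alpha_n(e) = \alpha_{n-1} \partial_n(e),$

and injectivity of $\alpha_{n-1}$ gives $a - \tilde{a} = \partial_n(e)$, hence $[a] = [\tilde{a}]$ in $H_{n-1}(\mathcal{A})$.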
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 25, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9869439601898193, "perplexity": 354.436305231459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987787444.85/warc/CC-MAIN-20191021194506-20191021222006-00244.warc.gz"}
http://www.edurite.com/kbase/how-to-find-the-domain-of-a-fraction
# how to find the domain of a fraction

Question: g(x) = |x - 1|/(x - 1); the denominator isn't in the absolute value part. How do you find the domain and range of it?

Answers: Domain = R \ {1}, Range = {1, -1}.

Question: Hello, I am having trouble with f(t) = -11/(square root of t); I would like to find the domain and range of the function. Sorry, I couldn't write the sign. Please help me. I thought the domain is zero to positive infinity, but what about the negative sign? I also need the range. Thanks.

Answers: Domain: (a) the bottom of the fraction cannot be zero; (b) the argument of the square root has to be positive or zero. Domain: (0, infinity). Range: (a) the bottom of the fraction is positive. Range: (-infinity, 0).

Question: I don't really understand how to find range. I think for the domain you set the equation equal to 0 and solve, but I'm not positive. How would I find the domain and range of this problem? f(x) = sqrt(4 - 2x). Thanks.

Answers: For the domain, it is basically the restrictions on what x can be. You would only set the expression equal to 0 if it was a fraction and the expression was on the bottom. And it shouldn't be asking you for range with that sort of problem. Basically the domain is the "x"s and the range is the "y" values that fit in that function. Good luck. (Follow-up: oh, so the domain for that function would be all real numbers.)

Question: If the vertex is the point (7, 9) and the end points are the points (6, 8) and (8, 8), how would you write the absolute value equation for this? Also, since there are endpoints to this absolute value, how would I describe or write the domain for this absolute value function? The absolute value function that I got is y = -|x - 7| + 9 (y equals negative the absolute value of x - 7, plus nine).

Answers: Your equation looks correct to me. The domain is the possible x values, namely 6 <= x <= 8.
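Writing the second example out in symbols makes the reasoning explicit: for $f(t) = \frac{-11}{\sqrt{t}}$ we need $t > 0$ (the square root must be defined and the denominator non-zero), so the domain is $(0, \infty)$; and since $\sqrt{t} > 0$, the value $-11/\sqrt{t}$ is always negative and takes every negative value, so the range is $(-\infty, 0)$.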
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8090508580207825, "perplexity": 392.04648213667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422119446463.10/warc/CC-MAIN-20150124171046-00245-ip-10-180-212-252.ec2.internal.warc.gz"}
https://robotics.stackexchange.com/questions/10463/reward-function-for-q-learning-on-a-robot/19404
# Reward Function for q learning on a robot

I have a 2-wheeled differential drive robot which uses PID for low-level control to follow a line. I implemented Q-learning, which collects samples for 16 iterations and then uses them to decide the best position to be on the line, so the car takes the turn from there. This lets the PID settle and gives smooth, fast following. My question is: how can I set up a reward function that improves performance, i.e. lets the Q-learning find the best position?

Edit

What it tries to learn is this: it has 16 inputs, which contain the line positions for the last 15 iterations plus the current one. The line position is between -1 and 1, where -1 means only the left-most sensor sees the line and 0 means the line is in the center. I want it to learn a line position such that, when it faces this input again, it treats that line position as the center and takes the curve accordingly. For example, the error is required position - line position; so say I had 16 zeros as input and calculated the required position as 0.4. After that the car will center itself at 0.4. I hope this helps :)

You asked for my source code; I post it below.

    // Top-level control loop: recover the line if it is lost, otherwise follow it
    // and keep a sliding window of recent line positions.
    void MainController::Control(void){
        if(linePosition == -2.0f){
            lost_line->FindLine(lastPos[1] - lastPos[0]);
        }
        else{
            line_follower->Follow(linePosition);
            lastPos.push_back(linePosition);
            lastPos.erase(lastPos.begin());
        }
    }

My sensor reading returns a value between -1.0f and 1.0f; 1.0f means only the outer sensor on the right sees the line. I have 8 sensors.

    // PD control around the set-point suggested by the learner (QPredictor).
    void LineFollower::Follow(float LinePosition){
        float requiredPos = Qpredictor.Process(LinePosition, CurrentSpeed);
        float error = requiredPos - LinePosition;
        float ErrorDer = error - LastError;
        float diffSpeed = (KpTerm * error + (KdTerm * ErrorDer));
        float RightMotorSpeed = CurrentSpeed - diffSpeed;
        float LeftMotorSpeed = CurrentSpeed + diffSpeed;
        LastError = error;
        driver->Drive(LeftMotorSpeed, RightMotorSpeed);
    }

Here is the logic for the value of the QPredictor (this is what I call the learning part).
And Finally QPredictor float Memory[MemorySize][DataVectorLength] = { {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0}, {0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3}, {0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6}, {0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8}, {0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.012, 0.050, 0.113, 0.200, 0.312, 0.450, 0.613, 0.800, 1.000}, {0.000, 0.025, 0.100, 0.225, 0.400, 0.625, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.050, 0.200, 0.450, 0.800, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000, 1.000}, {0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.100, 0.400, 0.900, 1.000} }; QPredictor::QPredictor(){ for(int i=0;i<MemorySize;i++){ output[i]=0.0f; input[i]=0.0f; } state = 0; PrevState = 0; } float QPredictor::Process(float linePosition,float currentBaseSpeed){ for(int i=1;i<DataVectorLength;i++){ input[i] = input[i-1]; } input[0] = m_abs(linePosition); int MinIndex = 0; float Distance = 10000.0f; float sum = 0.0f; for(int i=0;i<MemorySize;i++){ sum = 0.0f; for(int j=0;j<DataVectorLength;j++){ sum +=m_abs(input[j] - Memory[i][j]); } if(sum <= Distance){ MinIndex = i; Distance = sum; } } sum = 0.0f; for(int i=0;i<DataVectorLength;i++){ sum += input[i]; } float eta = 0.95f; output[MinIndex] = eta * output[MinIndex] + (1 - eta) * sum; return -m_sgn(linePosition) * output[MinIndex]; } float QPredictor::rewardFunction(float 
*inputData, float currentBaseSpeed){
        float sum = 0.0f;
        for(int i = 0; i < DataVectorLength; i++){
            sum += inputData[i];
        }
        sum /= DataVectorLength;
        return sum;
    }

I now only have the average error, and I am currently not using learning because it is not complete without a reward function. How can I adjust it according to the dimensions of my robot?

• I think your question needs some improvement before it can be answered. E.g. what are you trying to learn? The PID parameters? What is your goal for the final system's behaviour? – Jakob Aug 15 '16 at 6:32
• linked question: robotics.stackexchange.com/questions/361/… – Manuel Rodriguez Aug 15 '16 at 8:56
• The memory array stores the past experience of the robot, not the future. Negative rewards must be given if all array elements are white. For simple curves this will work. – Manuel Rodriguez Aug 18 '16 at 21:50
• Yes, that makes sense, but if I haven't lost the line, how can I increase or decrease the reward or punishment? – Ege Keyvan Aug 19 '16 at 6:24
• +1 for the attempt; you can use a simple webcam for validation. Use a simple check to determine how close your robot is to the centre of the line. Then use that and add rewards (like +1, indicating "continue on the lane", every second) and punishments (if one of the wheels crosses the line, -50 for every second). – Prasad Raghavendra Apr 9 '17 at 12:37

Adaptive PID control is based on a manually coded algorithm for line-following which is modified by reinforcement learning. The idea is that in the PID function a parameter is unknown (e.g. the distance from the line which can be tolerated as deviation), and this parameter is calculated on-the-fly while driving the robot. The rewards are given manually by an operator (clicker training) or can be determined in a loop: the robot drives the course and the time is measured; the robot drives the course again and the goal is to decrease the time.

• I have seen it, but I have Kohonen maps and Q-learning: I use the Kohonen map to cluster the input vector and Q-learning to decide the output. Can I adapt this system to mine? – Ege Keyvan Aug 15 '16 at 9:07
• Sure, the self-organizing map can be used as associative memory (Q-KOHON, Claude F. Touzet). – Manuel Rodriguez Aug 15 '16 at 9:18
• Do you suggest that the system should change the parameters according to the input? – Ege Keyvan Aug 15 '16 at 9:45
• @EgeKeyvan: please post your source code or a screenshot of the NXT-G GUI – Manuel Rodriguez Aug 15 '16 at 14:48
• I posted it now – Ege Keyvan Aug 17 '16 at 12:32

I think Q-learning is overkill for such a simple tracking problem. I highly recommend using a simpler method, namely a pure pursuit tracking controller. Here are some links where you can read about the method; Matlab's page on the subject is pretty neat.

Matlab's Pure Pursuit Controller
Implementation of the Pure Pursuit Path Tracking Algorithm
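Building on the suggestions in the comments (reward staying near the centre, punish crossing or losing the line), a minimal per-step reward could look like the sketch below. This is only an illustration, not code from the question: the weights, the -2.0f lost-line sentinel and the function name are assumptions that would need tuning on the actual robot.

    #include <cmath>

    // Hypothetical helper: per-step reward from the latest line reading.
    // linePosition is in [-1, 1] when the line is visible and -2.0f when it is lost,
    // following the convention used in MainController::Control above.
    float stepReward(float linePosition) {
        if (linePosition == -2.0f) {
            return -50.0f;                         // heavy punishment for losing the line
        }
        float deviation = std::fabs(linePosition); // 0 = centred, 1 = outermost sensor
        return 1.0f - 2.0f * deviation;            // +1 at the centre, down to -1 at the edge
    }

The learner would then accumulate (or discount) these per-step values over the 16-sample window, rather than returning the plain average of the inputs as rewardFunction currently does.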
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9553411602973938, "perplexity": 584.0134111524185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153392.43/warc/CC-MAIN-20210727135323-20210727165323-00394.warc.gz"}
http://mathhelpforum.com/math-topics/16458-forces-problem.html
## forces problem

hello, I would like you please to find the solution for all of those; I am finding it difficult to do them alone. I need help!!

For #1:

Recall Newton's 2nd Law: $\sum F = ma$ and the rotational version of Newton's 2nd Law: $\sum \tau = I \alpha$. Since the plank is in static equilibrium, $\sum F = 0$ and $\sum \tau = 0$.

Choose the +y direction to be directly upward. A Free Body Diagram shows that there are four forces present: the forces A and B acting directly upward, the weight w of the woman acting directly downward, and the weight W of the plank also acting directly downward. So we know that
$\sum F_y = A + B - w - W = 0$
$A + B - mg - Mg = 0$
where m and M are the masses of the woman and the plank respectively.

Also choose a positive rotation to be in the counterclockwise sense. I am going to choose an "axis of rotation" at the point where the reaction force A acts. (We may choose any point as our axis, since there is no rotation anyway.) We don't know where the CM of the plank is yet, so I'm going to place it at a distance x to the right of the axis of rotation. So
$\sum \tau _A = -(2)w - (x)W + (6)B = 0$ (the 2 multiplying w is the 2 m lever arm, and so on)
$-2mg - xMg + 6B = 0$

We have three unknowns in these two equations. Well, pick a new axis of rotation and do it again! I'll now pick the axis at the point where B is acting, with the same rotation convention. So
$\sum \tau_B = (6 - x)W + (4~m)w - (6~m)A = 0$
$(6 - x)Mg + 4mg - 6A = 0$

This gives us the system of equations:
$A + B - mg - Mg = 0$
$-2mg - xMg + 6B = 0$
$(6 - x)Mg + 4mg - 6A = 0$

We have three equations in three unknowns (A, B, and x), so we may solve this. I have to get going. If you have a problem solving the system, just post in the thread and someone will help you.

-Dan
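For instance, the two torque equations can be rearranged to isolate B and A in terms of the unknown distance x (a small worked step using only the equations above):

$B = \frac{(2m + xM)g}{6}, \qquad A = \frac{\left((6 - x)M + 4m\right)g}{6}.$

Together with the values given in the attached figure, these expressions can be used to finish the problem.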
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8483330011367798, "perplexity": 454.8170237980037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321410.90/warc/CC-MAIN-20170627115753-20170627135753-00217.warc.gz"}
https://www.scribd.com/doc/62537922/A-K
# Notes on Mathematics - 102

Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam
(Supported by a grant from MHRD)

Contents

Part I Linear Algebra

1 Matrices: 1.1 Definition of a Matrix (1.1.1 Special Matrices); 1.2 Operations on Matrices (1.2.1 Multiplication of Matrices; 1.2.2 Inverse of a Matrix); 1.3 Some More Special Matrices (1.3.1 Submatrix of a Matrix; 1.3.2 Block Matrices); 1.4 Matrices over Complex Numbers

2 Linear System of Equations: 2.1 Introduction (2.1.1 A Solution Method); 2.2 Row Operations and Equivalent Systems (2.2.1 Gauss Elimination Method); 2.3 Row Reduced Echelon Form of a Matrix (2.3.1 Gauss-Jordan Elimination; 2.3.2 Elementary Matrices); 2.4 Rank of a Matrix; 2.5 Existence of Solution of Ax = b (2.5.1 Example; 2.5.2 Main Theorem; 2.5.3 Equivalent conditions for Invertibility; 2.5.4 Inverse and the Gauss-Jordan Method); 2.6 Determinant (2.6.1 Adjoint of a Matrix; 2.6.2 Cramer's Rule); 2.7 Miscellaneous Exercises

3 Finite Dimensional Vector Spaces: 3.1 Vector Spaces (3.1.1 Definition; 3.1.2 Examples; 3.1.3 Subspaces; 3.1.4 Linear Combinations); 3.2 Linear Independence; 3.3 Bases (3.3.1 Important Results); 3.4 Ordered Bases

4 Linear Transformations: 4.1 Definitions and Basic Properties; 4.2 Matrix of a linear transformation; 4.3 Rank-Nullity Theorem; 4.4 Similarity of Matrices

5 Inner Product Spaces: 5.1 Definition and Basic Properties; 5.2 Gram-Schmidt Orthogonalisation Process; 5.3 Orthogonal Projections and Applications (5.3.1 Matrix of the Orthogonal Projection)

6 Eigenvalues, Eigenvectors and Diagonalisation: 6.1 Introduction and Definitions; 6.2 Diagonalisation; 6.3 Diagonalisable matrices; 6.4 Sylvester's Law of Inertia and Applications

Part II Ordinary Differential Equation

7 Differential Equations: 7.1 Introduction and Preliminaries; 7.2 Separable Equations (7.2.1 Equations Reducible to Separable Form); 7.3 Exact Equations (7.3.1 Integrating Factors); 7.4 Linear Equations; 7.5 Miscellaneous Remarks; 7.6 Initial Value Problems (7.6.1 Orthogonal Trajectories); 7.7 Numerical Methods

8 Second Order and Higher Order Equations: 8.1 Introduction; 8.2 More on Second Order Equations (8.2.1 Wronskian; 8.2.2 Method of Reduction of Order); 8.3 Second Order equations with Constant Coefficients; 8.4 Non Homogeneous Equations; 8.5 Variation of Parameters; 8.6 Higher Order Equations with Constant Coefficients; 8.7 Method of Undetermined Coefficients

9 Solutions Based on Power Series: 9.1 Introduction (9.1.1 Properties of Power Series); 9.2 Solutions in terms of Power Series; 9.3 Statement of Frobenius Theorem for Regular (Ordinary) Point; 9.4 Legendre Equations and Legendre Polynomials (9.4.1 Introduction; 9.4.2 Legendre Polynomials)

Part III Laplace Transform

10 Laplace Transform: 10.1 Introduction; 10.2 Definitions and Examples (10.2.1 Examples); 10.3 Properties of Laplace Transform (10.3.1 Inverse Transforms of Rational Functions; 10.3.2 Transform of Unit Step Function); 10.4 Some Useful Results (10.4.1 Limiting Theorems); 10.5 Application to Differential Equations; 10.6 Transform of the Unit-Impulse Function

Part IV Numerical Applications

11 Newton's Interpolation Formulae: 11.1 Introduction; 11.2 Difference Operator (11.2.1 Forward Difference Operator; 11.2.2 Backward Difference Operator; 11.2.3 Central Difference Operator; 11.2.4 Shift Operator; 11.2.5 Averaging Operator); 11.3 Relations between Difference operators; 11.4 Newton's Interpolation Formulae

12 Lagrange's Interpolation Formula: 12.1 Introduction; 12.2 Divided Differences; 12.3 Lagrange's Interpolation formula; 12.4 Gauss's and Stirling's Formulas

13 Numerical Differentiation and Integration: 13.1 Introduction; 13.2 Numerical Differentiation; 13.3 Numerical Integration (13.3.1 A General Quadrature Formula; 13.3.2 Trapezoidal Rule; 13.3.3 Simpson's Rule)

14 Numerical Methods: 14.1 Introduction (14.1.1 Euler's Method); 14.2 Error Estimates and Convergence; 14.3 Runge-Kutta Method (14.3.1 Algorithm: Runge-Kutta Method of Order 2; 14.3.2 Runge-Kutta Method of Order 4); 14.4 Predictor-Corrector Methods (14.4.1 Algorithm for Predictor-Corrector Method)

15 Appendix: 15.1 System of Linear Equations; 15.2 Determinant; 15.3 Properties of Determinant; 15.4 Dimension of M + N; 15.5 Proof of Rank-Nullity Theorem; 15.6 Condition for Exactness

Part I Linear Algebra

Chapter 1 Matrices

1.1 Definition of a Matrix

Definition 1.1.1 (Matrix) A rectangular array of numbers is called a matrix.
We shall mostly be concerned with matrices having real numbers as entries. The horizontal arrays of a matrix are called its rows and the vertical arrays are called its columns. A matrix having m rows and n columns is said to have the order m n. A matrix A of order mn can be represented in the following form: A = a 11 a 12 a 1n a 21 a 22 a 2n . . . . . . . . . . . . a m1 a m2 a mn ¸ ¸ ¸ ¸ ¸ ¸ , where a ij is the entry at the intersection of the i th row and j th column. In a more concise manner, we also denote the matrix A by [a ij ] by suppressing its order. Remark 1.1.2 Some books also use ¸ ¸ ¸ ¸ ¸ a 11 a 12 a 1n a 21 a 22 a 2n . . . . . . . . . . . . a m1 a m2 a mn ¸ to represent a matrix. Let A = ¸ 1 3 7 4 5 6 ¸ . Then a 11 = 1, a 12 = 3, a 13 = 7, a 21 = 4, a 22 = 5, and a 23 = 6. A matrix having only one column is called a column vector; and a matrix with only one row is called a row vector. Whenever a vector is used, it should be understood from the context whether it is a row vector or a column vector. Definition 1.1.3 (Equality of two Matrices) Two matrices A = [a ij ] and B = [b ij ] having the same order mn are equal if a ij = b ij for each i = 1, 2, . . . , m and j = 1, 2, . . . , n. In other words, two matrices are said to be equal if they have the same order and their corresponding entries are equal. 9 10 CHAPTER 1. MATRICES Example 1.1.4 The linear system of equations 2x + 3y = 5 and 3x + 2y = 5 can be identified with the matrix ¸ 2 3 : 5 3 2 : 5 ¸ . 1.1.1 Special Matrices Definition 1.1.5 1. A matrix in which each entry is zero is called a zero-matrix, denoted by 0. For example, 0 2×2 = ¸ 0 0 0 0 ¸ and 0 2×3 = ¸ 0 0 0 0 0 0 ¸ . 2. A matrix for which the number of rows equals the number of columns, is called a square matrix. So, if A is a n n matrix then A is said to have order n. 3. In a square matrix, A = [a ij ], of order n, the entries a 11 , a 22 , . . . , a nn are called the diagonal entries and form the principal diagonal of A. 4. A square matrix A = [a ij ] is said to be a diagonal matrix if a ij = 0 for i = j. In other words, the non-zero entries appear only on the principal diagonal. For example, the zero matrix 0 n and ¸ 4 0 0 1 ¸ are a few diagonal matrices. A diagonal matrix D of order n with the diagonal entries d 1 , d 2 , . . . , d n is denoted by D = diag(d 1 , . . . , d n ). If d i = d for all i = 1, 2, . . . , n then the diagonal matrix D is called a scalar matrix. 5. A diagonal matrix A of order n is called an identity matrix if d i = 1 for all i = 1, 2, . . . , n. This matrix is denoted by I n . For example, I 2 = ¸ 1 0 0 1 ¸ and I 3 = 1 0 0 0 1 0 0 0 1 ¸ ¸ ¸. The subscript n is suppressed in case the order is clear from the context or if no confusion arises. 6. A square matrix A = [a ij ] is said to be an upper triangular matrix if a ij = 0 for i > j. A square matrix A = [a ij ] is said to be an lower triangular matrix if a ij = 0 for i < j. A square matrix A is said to be triangular if it is an upper or a lower triangular matrix. For example 2 1 4 0 3 −1 0 0 −2 ¸ ¸ ¸ is an upper triangular matrix. An upper triangular matrix will be represented by a 11 a 12 a 1n 0 a 22 a 2n . . . . . . . . . . . . 0 0 a nn ¸ ¸ ¸ ¸ ¸ ¸ . 1.2 Operations on Matrices Definition 1.2.1 (Transpose of a Matrix) The transpose of an m n matrix A = [a ij ] is defined as the n m matrix B = [b ij ], with b ij = a ji for 1 ≤ i ≤ m and 1 ≤ j ≤ n. The transpose of A is denoted by A t . 
That is, by the transpose of an mn matrix A, we mean a matrix of order n m having the rows of A as its columns and the columns of A as its rows. 1.2. OPERATIONS ON MATRICES 11 For example, if A = ¸ 1 4 5 0 1 2 ¸ then A t = 1 0 4 1 5 2 ¸ ¸ ¸. Thus, the transpose of a row vector is a column vector and vice-versa. Theorem 1.2.2 For any matrix A, (A t ) t = A. Proof. Let A = [a ij ], A t = [b ij ] and (A t ) t = [c ij ]. Then, the definition of transpose gives c ij = b ji = a ij for all i, j and the result follows. Definition 1.2.3 (Addition of Matrices) let A = [a ij ] and B = [b ij ] be are two mn matrices. Then the sum A +B is defined to be the matrix C = [c ij ] with c ij = a ij +b ij . Note that, we define the sum of two matrices only when the order of the two matrices are same. Definition 1.2.4 (Multiplying a Scalar to a Matrix) Let A = [a ij ] be an m n matrix. Then for any element k ∈ R, we define kA = [ka ij ]. For example, if A = ¸ 1 4 5 0 1 2 ¸ and k = 5, then 5A = ¸ 5 20 25 0 5 10 ¸ . Theorem 1.2.5 Let A, B and C be matrices of order mn, and let k, ∈ R. Then 1. A+B = B +A (commutativity). 2. (A +B) +C = A + (B +C) (associativity). 3. k(A) = (k)A. 4. (k +)A = kA +A. Proof. Part 1. Let A = [a ij ] and B = [b ij ]. Then A +B = [a ij ] + [b ij ] = [a ij +b ij ] = [b ij +a ij ] = [b ij ] + [a ij ] = B +A as real numbers commute. The reader is required to prove the other parts as all the results follow from the properties of real numbers. Exercise 1.2.6 1. Suppose A+B = A. Then show that B = 0. 2. Suppose A+B = 0. Then show that B = (−1)A = [−a ij ]. Definition 1.2.7 (Additive Inverse) Let A be an mn matrix. 1. Then there exists a matrix B with A + B = 0. This matrix B is called the additive inverse of A, and is denoted by −A = (−1)A. 2. Also, for the matrix 0 m×n , A+0 = 0+A = A. Hence, the matrix 0 m×n 12 CHAPTER 1. MATRICES 1.2.1 Multiplication of Matrices Definition 1.2.8 (Matrix Multiplication / Product) Let A = [a ij ] be an mn matrix and B = [b ij ] be an n r matrix. The product AB is a matrix C = [c ij ] of order mr, with c ij = n ¸ k=1 a ik b kj = a i1 b 1j +a i2 b 2j + +a in b nj . That is, if A m×n = a i1 a i2 a in ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ and B n×r = b 1j b 2j . . . . . . . . . b mj ¸ ¸ ¸ ¸ ¸ ¸ then AB = [(AB) ij ] m×r and (AB) ij = a i1 b 1j +a i2 b 2j + +a in b nj . Observe that the product AB is defined if and only if the number of columns of A = the number of rows of B. For example, if A = ¸ 1 2 3 2 4 1 ¸ and B = 1 2 1 0 0 3 1 0 4 ¸ ¸ ¸ then AB = ¸ 1 + 0 + 3 2 + 0 + 0 1 + 6 + 12 2 + 0 + 1 4 + 0 + 0 2 + 12 + 4 ¸ = ¸ 4 2 19 3 4 18 ¸ . Note that in this example, while AB is defined, the product BA is not defined. However, for square matrices A and B of the same order, both the product AB and BA are defined. Definition 1.2.9 Two square matrices A and B are said to commute if AB = BA. Remark 1.2.10 1. Note that if A is a square matrix of order n then AI n = I n A. Also, a scalar matrix of order n commutes with any square matrix of order n. 2. In general, the matrix product is not commutative. For example, consider the following two matrices A = ¸ 1 1 0 0 ¸ and B = ¸ 1 0 1 0 ¸ . Then check that the matrix product AB = ¸ 2 0 0 0 ¸ = ¸ 1 1 1 1 ¸ = BA. Theorem 1.2.11 Suppose that the matrices A, B and C are so chosen that the matrix multiplications are defined. 1. Then (AB)C = A(BC). That is, the matrix multiplication is associative. 2. For any k ∈ R, (kA)B = k(AB) = A(kB). 3. Then A(B +C) = AB +AC. That is, multiplication distributes over addition. 4. 
If A is an n n matrix then AI n = I n A = A. 5. For any square matrix A of order n and D = diag(d 1 , d 2 , . . . , d n ), we have • the first row of DA is d 1 times the first row of A; • for 1 ≤ i ≤ n, the i th row of DA is d i times the i th row of A. 1.2. OPERATIONS ON MATRICES 13 A similar statement holds for the columns of A when A is multiplied on the right by D. Proof. Part 1. Let A = [a ij ] m×n , B = [b ij ] n×p and C = [c ij ] p×q . Then (BC) kj = p ¸ =1 b k c j and (AB) i = n ¸ k=1 a ik b k . Therefore, A(BC) ij = n ¸ k=1 a ik BC kj = n ¸ k=1 a ik p ¸ =1 b k c j = n ¸ k=1 p ¸ =1 a ik b k c j = n ¸ k=1 p ¸ =1 a ik b k c j = p ¸ =1 n ¸ k=1 a ik b k c j = t ¸ =1 AB i c j = (AB)C ij . Part 5. For all j = 1, 2, . . . , n, we have (DA) ij = n ¸ k=1 d ik a kj = d i a ij as d ik = 0 whenever i = k. Hence, the required result follows. The reader is required to prove the other parts. Exercise 1.2.12 1. Let A and B be two matrices. If the matrix addition A + B is defined, then prove that (A +B) t = A t +B t . Also, if the matrix product AB is defined then prove that (AB) t = B t A t . 2. Let A = [a 1 , a 2 , . . . , a n ] and B = b 1 b 2 . . . b n ¸ ¸ ¸ ¸ ¸ ¸ . Compute the matrix products AB and BA. 3. Let n be a positive integer. Compute A n for the following matrices: ¸ 1 1 0 1 ¸ , 1 1 1 0 1 1 0 0 1 ¸ ¸ ¸, 1 1 1 1 1 1 1 1 1 ¸ ¸ ¸. Can you guess a formula for A n and prove it by induction? 4. Find examples for the following statements. (a) Suppose that the matrix product AB is defined. Then the product BA need not be defined. (b) Suppose that the matrix products AB and BA are defined. Then the matrices AB and BA can have different orders. (c) Suppose that the matrices A and B are square matrices of order n. Then AB and BA may or may not be equal. 14 CHAPTER 1. MATRICES 1.2.2 Inverse of a Matrix Definition 1.2.13 (Inverse of a Matrix) Let A be a square matrix of order n. 1. A square matrix B is said to be a left inverse of A if BA = I n . 2. A square matrix C is called a right inverse of A, if AC = I n . 3. A matrix A is said to be invertible (or is said to have an inverse) if there exists a matrix B such that AB = BA = I n . Lemma 1.2.14 Let A be an n n matrix. Suppose that there exist n n matrices B and C such that AB = I n and CA = I n , then B = C. Proof. Note that C = CI n = C(AB) = (CA)B = I n B = B. Remark 1.2.15 1. From the above lemma, we observe that if a matrix A is invertible, then the inverse is unique. 2. As the inverse of a matrix A is unique, we denote it by A −1 . That is, AA −1 = A −1 A = I. Theorem 1.2.16 Let A and B be two matrices with inverses A −1 and B −1 , respectively. Then 1. (A −1 ) −1 = A. 2. (AB) −1 = B −1 A −1 . 3. (A t ) −1 = (A −1 ) t . Proof. Proof of Part 1. By definition AA −1 = A −1 A = I. Hence, if we denote A −1 by B, then we get AB = BA = I. Thus, the definition, implies B −1 = A, or equivalently (A −1 ) −1 = A. Proof of Part 2. Verify that (AB)(B −1 A −1 ) = I = (B −1 A −1 )(AB). Proof of Part 3. We know AA −1 = A −1 A = I. Taking transpose, we get (AA −1 ) t = (A −1 A) t = I t ⇐⇒(A −1 ) t A t = A t (A −1 ) t = I. Hence, by definition (A t ) −1 = (A −1 ) t . Exercise 1.2.17 1. Let A 1 , A 2 , . . . , A r be invertible matrices. Prove that the product A 1 A 2 A r is also an invertible matrix. 2. Let A be an inveritble matrix. Then prove that A cannot have a row or column consisting of only zeros. 1.3. SOME MORE SPECIAL MATRICES 15 1.3 Some More Special Matrices Definition 1.3.1 1. 
A matrix A over R is called symmetric if A t = A and skew-symmetric if A t = −A. 2. A matrix A is said to be orthogonal if AA t = A t A = I. Example 1.3.2 1. Let A = 1 2 3 2 4 −1 3 −1 4 ¸ ¸ ¸ and B = 0 1 2 −1 0 −3 −2 3 0 ¸ ¸ ¸. Then A is a symmetric matrix and B is a skew-symmetric matrix. 2. Let A = 1 3 1 3 1 3 1 2 1 2 0 1 6 1 6 2 6 ¸ ¸ ¸. Then A is an orthogonal matrix. 3. Let A = [a ij ] be an nn matrix with a ij = 1 if i = j + 1 0 otherwise . Then A n = 0 and A = 0 for 1 ≤ ≤ n − 1. The matrices A for which a positive integer k exists such that A k = 0 are called nilpotent matrices. The least positive integer k for which A k = 0 is called the order of nilpotency. 4. Let A = ¸ 1 0 0 0 ¸ . Then A 2 = A. The matrices that satisfy the condition that A 2 = A are called idempotent matrices. Exercise 1.3.3 1. Show that for any square matrix A, S = 1 2 (A + A t ) is symmetric, T = 1 2 (A − A t ) is skew-symmetric, and A = S +T. 2. Show that the product of two lower triangular matrices is a lower triangular matrix. A similar statement holds for upper triangular matrices. 3. Let A and B be symmetric matrices. Show that AB is symmetric if and only if AB = BA. 4. Show that the diagonal entries of a skew-symmetric matrix are zero. 5. Let A, B be skew-symmetric matrices with AB = BA. Is the matrix AB symmetric or skew-symmetric? 6. Let A be a symmetric matrix of order n with A 2 = 0. Is it necessarily true that A = 0? 7. Let A be a nilpotent matrix. Show that there exists a matrix B such that B(I +A) = I = (I +A)B. 1.3.1 Submatrix of a Matrix Definition 1.3.4 A matrix obtained by deleting some of the rows and/or columns of a matrix is said to be a submatrix of the given matrix. For example, if A = ¸ 1 4 5 0 1 2 ¸ , a few submatrices of A are [1], [2], ¸ 1 0 ¸ , [1 5], ¸ 1 5 0 2 ¸ , A. But the matrices ¸ 1 4 1 0 ¸ and ¸ 1 4 0 2 ¸ are not submatrices of A. (The reader is advised to give reasons.) 16 CHAPTER 1. MATRICES 1.3.2 Block Matrices Let A be an n m matrix and B be an m p matrix. Suppose r < m. Then, we can decompose the matrices A and B as A = [P Q] and B = ¸ H K ¸ ; where P has order n r and H has order r p. That is, the matrices P and Q are submatrices of A and P consists of the first r columns of A and Q consists of the last m−r columns of A. Similarly, H and K are submatrices of B and H consists of the first r rows of B and K consists of the last m−r rows of B. We now prove the following important theorem. Theorem 1.3.5 Let A = [a ij ] = [P Q] and B = [b ij ] = ¸ H K ¸ be defined as above. Then AB = PH +QK. Proof. First note that the matrices PH and QK are each of order n p. The matrix products PH and QK are valid as the order of the matrices P, H, Q and K are respectively, n r, r p, n (m−r) and (m−r) p. Let P = [P ij ], Q = [Q ij ], H = [H ij ], and K = [k ij ]. Then, for 1 ≤ i ≤ n and 1 ≤ j ≤ p, we have (AB) ij = m ¸ k=1 a ik b kj = r ¸ k=1 a ik b kj + m ¸ k=r+1 a ik b kj = r ¸ k=1 P ik H kj + m ¸ k=r+1 Q ik K kj = (PH) ij + (QK) ij = (PH +QK) ij . Theorem 1.3.5 is very useful due to the following reasons: 1. The order of the matrices P, Q, H and K are smaller than that of A or B. 2. It may be possible to block the matrix in such a way that a few blocks are either identity matrices or zero matrices. In this case, it may be easy to handle the matrix product using the block form. 3. Or when we want to prove results using induction, then we may assume the result for r r submatrices and then look for (r + 1) (r + 1) submatrices, etc. 
For example, if A = ¸ 1 2 0 2 5 0 ¸ and B = a b c d e f ¸ ¸ ¸, Then AB = ¸ 1 2 2 5 ¸¸ a b c d ¸ + ¸ 0 0 ¸ [e f] = ¸ a + 2c b + 2d 2a + 5c 2b + 5d ¸ . If A = 0 −1 2 3 1 4 −2 5 −3 ¸ ¸ ¸, then A can be decomposed as follows: A = 0 −1 2 3 1 4 −2 5 −3 ¸ ¸ ¸, or A = 0 −1 2 3 1 4 −2 5 −3 ¸ ¸ ¸, or A = 0 −1 2 3 1 4 −2 5 −3 ¸ ¸ ¸ and so on. 1.3. SOME MORE SPECIAL MATRICES 17 Suppose A = m 1 m 2 n 1 n 2 ¸ P Q R S ¸ and B = s 1 s 2 r 1 r 2 ¸ E F G H ¸ . Then the matrices P, Q, R, S and E, F, G, H, are called the blocks of the matrices A and B, respectively. Even if A+B is defined, the orders of P and E may not be same and hence, we may not be able to add A and B in the block form. But, if A+B and P +E is defined then A+B = ¸ P +E Q+F R +G S +H ¸ . Similarly, if the product AB is defined, the product PE need not be defined. Therefore, we can talk of matrix product AB as block product of matrices, if both the products AB and PE are defined. And in this case, we have AB = ¸ PE +QG PF +QH RE +SG RF +SH ¸ . That is, once a partition of A is fixed, the partition of B has to be properly chosen for purposes of block addition or multiplication. Miscellaneous Exercises Exercise 1.3.6 1. Complete the proofs of Theorems 1.2.5 and 1.2.11. 2. Let x = ¸ x 1 x 2 ¸ , y = ¸ y 1 y 2 ¸ , A = ¸ 1 0 0 −1 ¸ and B = ¸ cos θ −sinθ sin θ cos θ ¸ . Geometrically interpret y = Ax and y = Bx. 3. Consider the two coordinate transformations x 1 = a 11 y 1 +a 12 y 2 x 2 = a 21 y 1 +a 22 y 2 and y 1 = b 11 z 1 +b 12 z 2 y 2 = b 21 z 1 +b 22 z 2 . (a) Compose the two transformations to express x 1 , x 2 in terms of z 1 , z 2 . (b) If x t = [x 1 , x 2 ], y t = [y 1 , y 2 ] and z t = [z 1 , z 2 ] then find matrices A, B and C such that x = Ay, y = Bz and x = Cz. (c) Is C = AB? 4. For a square matrix A of order n, we define trace of A, denoted by tr (A) as tr (A) = a 11 +a 22 + a nn . Then for two square matrices, A and B of the same order, show the following: (a) tr (A +B) = tr (A) + tr (B). (b) tr (AB) = tr (BA). 5. Show that, there do not exist matrices A and B such that AB −BA = cI n for any c = 0. 6. Let A and B be two mn matrices and let x be an n 1 column vector. (a) Prove that if Ax = 0 for all x, then A is the zero matrix. (b) Prove that if Ax = Bx for all x, then A = B. 7. Let A be an n n matrix such that AB = BA for all n n matrices B. Show that A = αI for some α ∈ R. 8. Let A = 1 2 2 1 3 1 ¸ ¸ ¸. Show that there exist infinitely many matrices B such that BA = I 2 . Also, show that there does not exist any matrix C such that AC = I 3 . 18 CHAPTER 1. MATRICES 9. Compute the matrix product AB using the block matrix multiplication for the matrices A = 2 6 6 6 4 1 0 0 1 0 1 1 1 0 1 1 0 0 1 0 1 3 7 7 7 5 and B = 2 6 6 6 4 1 2 2 1 1 1 2 1 1 1 1 1 −1 1 −1 1 3 7 7 7 5 . 10. Let A = ¸ P Q R S ¸ . If P, Q, R and S are symmetric, what can you say about A? Are P, Q, R and S symmetric, when A is symmetric? 11. Let A = [a ij ] and B = [b ij ] be two matrices. Suppose a 1 , a 2 , . . . , a n are the rows of A and b 1 , b 2 , . . . , b p are the columns of B. If the product AB is defined, then show that AB = [Ab 1 , Ab 2 , . . . , Ab p ] = a 1 B a 2 B . . . a n B ¸ ¸ ¸ ¸ ¸ ¸ . [That is, left multiplication by A, is same as multiplying each column of B by A. Similarly, right multiplication by B, is same as multiplying each row of A by B.] 1.4 Matrices over Complex Numbers Here the entries of the matrix are complex numbers. All the definitions still hold. One just needs to look at the following additional definitions. 
Definition 1.4.1 (Conjugate Transpose of a Matrix) 1. Let A be an mn matrix over C. If A = [a ij ] then the Conjugate of A, denoted by A, is the matrix B = [b ij ] with b ij = a ij . For example, Let A = ¸ 1 4 + 3i i 0 1 i −2 ¸ . Then A = ¸ 1 4 −3i −i 0 1 −i −2 ¸ . 2. Let A be an mn matrix over C. If A = [a ij ] then the Conjugate Transpose of A, denoted by A , is the matrix B = [b ij ] with b ij = a ji . For example, Let A = ¸ 1 4 + 3i i 0 1 i −2 ¸ . Then A = 1 0 4 −3i 1 −i −i − 2 ¸ ¸ ¸. 3. A square matrix A over C is called Hermitian if A = A. 4. A square matrix A over C is called skew-Hermitian if A = −A. 5. A square matrix A over C is called unitary if A A = AA = I. 1.4. MATRICES OVER COMPLEX NUMBERS 19 6. A square matrix A over C is called Normal if AA = A A. Remark 1.4.2 If A = [a ij ] with a ij ∈ R, then A = A t . Exercise 1.4.3 1. Give examples of Hermitian, skew-Hermitian and unitary matrices that have entries with non-zero imaginary parts. 2. Restate the results on transpose in terms of conjugate transpose. 3. Show that for any square matrix A, S = A+A 2 is Hermitian, T = A−A 2 is skew-Hermitian, and A = S +T. 4. Show that if A is a complex triangular matrix and AA = A A then A is a diagonal matrix. 20 CHAPTER 1. MATRICES Chapter 2 Linear System of Equations 2.1 Introduction Let us look at some examples of linear systems. 1. Suppose a, b ∈ R. Consider the system ax = b. (a) If a = 0 then the system has a unique solution x = b a . (b) If a = 0 and i. b = 0 then the system has no solution. ii. b = 0 then the system has infinite number of solutions, namely all x ∈ R. 2. We now consider a system with 2 equations in 2 unknowns. Consider the equation ax + by = c. If one of the coefficients, a or b is non-zero, then this linear equation represents a line in R 2 . Thus for the system a 1 x +b 1 y = c 1 and a 2 x +b 2 y = c 2 , the set of solutions is given by the points of intersection of the two lines. There are three cases to be considered. Each case is illustrated by an example. (a) Unique Solution x + 2y = 1 and x + 3y = 1. The unique solution is (x, y) t = (1, 0) t . Observe that in this case, a 1 b 2 −a 2 b 1 = 0. (b) Infinite Number of Solutions x + 2y = 1 and 2x + 4y = 2. The set of solutions is (x, y) t = (1 −2y, y) t = (1, 0) t +y(−2, 1) t with y arbitrary. In other words, both the equations represent the same line. Observe that in this case, a 1 b 2 −a 2 b 1 = 0, a 1 c 2 −a 2 c 1 = 0 and b 1 c 2 −b 2 c 1 = 0. (c) No Solution x + 2y = 1 and 2x + 4y = 3. The equations represent a pair of parallel lines and hence there is no point of intersection. Observe that in this case, a 1 b 2 −a 2 b 1 = 0 but a 1 c 2 −a 2 c 1 = 0. 3. As a last example, consider 3 equations in 3 unknowns. A linear equation ax + by + cz = d represent a plane in R 3 provided (a, b, c) = (0, 0, 0). As in the case of 2 equations in 2 unknowns, we have to look at the points of intersection of the given three planes. Here again, we have three cases. The three cases are illustrated by examples. 21 22 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS (a) Unique Solution Consider the system x+y+z = 3, x+4y+2z = 7 and 4x+10y−z = 13. The unique solution to this system is (x, y, z) t = (1, 1, 1) t ; i.e. the three planes intersect at a point. (b) Infinite Number of Solutions Consider the system x + y + z = 3, x + 2y + 2z = 5 and 3x + 4y + 4z = 11. The set of solutions to this system is (x, y, z) t = (1, 2 −z, z) t = (1, 2, 0) t +z(0, −1, 1) t , with z arbitrary: the three planes intersect on a line. 
(c) No Solution The system x + y + z = 3, x + 2y + 2z = 5 and 3x + 4y + 4z = 13 has no solution. In this case, we get three parallel lines as intersections of the above planes taken two at a time. Definition 2.1.1 (Linear System) A linear system of m equations in n unknowns x 1 , x 2 , . . . , x n is a set of equations of the form a 11 x 1 + a 12 x 2 + +a 1n x n = b 1 a 21 x 1 + a 22 x 2 + +a 2n x n = b 2 . . . . . . (2.1.1) a m1 x 1 +a m2 x 2 + +a mn x n = b m where for 1 ≤ i ≤ n, and 1 ≤ j ≤ m; a ij , b i ∈ R. Linear System (2.1.1) is called homogeneous if b 1 = 0 = b 2 = = b m and non-homogeneous otherwise. We rewrite the above equations in the form Ax = b, where A = a 11 a 12 a 1n a 21 a 22 a 2n . . . . . . . . . . . . a m1 a m2 a mn ¸ ¸ ¸ ¸ ¸ ¸ , x = x 1 x 2 . . . x n ¸ ¸ ¸ ¸ ¸ ¸ , and b = b 1 b 2 . . . b m ¸ ¸ ¸ ¸ ¸ ¸ The matrix A is called the coefficient matrix and the block matrix [A b] , is the augmented matrix of the linear system (2.1.1). Remark 2.1.2 Observe that the i th row of the augmented matrix [A b] represents the i th equation and the j th column of the coefficient matrix A corresponds to coefficients of the j th variable x j . That is, for 1 ≤ i ≤ m and 1 ≤ j ≤ n, the entry a ij of the coefficient matrix A corresponds to the i th equation and j th variable x j .. For a system of linear equations Ax = b, the system Ax = 0 is called the associated homogeneous system. Definition 2.1.3 (Solution of a Linear System) A solution of the linear system Ax = b is a column vector y with entries y 1 , y 2 , . . . , y n such that the linear system (2.1.1) is satisfied by substituting y i in place of x i . That is, if y t = [y 1 , y 2 , . . . , y n ] then Ay = b holds. Note: The zero n-tuple x = 0 is always a solution of the system Ax = 0, and is called the trivial solution. A non-zero n-tuple x, if it satisfies Ax = 0, is called a non-trivial solution. 2.2. ROW OPERATIONS AND EQUIVALENT SYSTEMS 23 2.1.1 A Solution Method Example 2.1.4 Let us solve the linear system x + 7y + 3z = 11, x +y +z = 3, and 4x + 10y −z = 13. Solution: 1. The above linear system and the linear system x +y +z = 3 Interchange the first two equations. x + 7y + 3z = 11 (2.1.2) 4x + 10y −z = 13 have the same set of solutions. (why?) 2. Using the 1 st equation, we eliminate x from 2 nd and 3 rd equation to get the linear system x +y +z = 3 6y + 2z = 8 (obtained by subtracting the first equation from the second equation.) 6y −5z = 1 (obtained by subtracting 4 times the first equation from the third equation.) (2.1.3) This system and the system (2.1.2) has the same set of solution. (why?) 3. Using the 2 cd equation, we eliminate y from the last equation of system (2.1.3) to get the system x +y +z = 3 6y + 2z = 8 7z = 7 obtained by subtracting the third equation from the second equation. (2.1.4) which has the same set of solution as the system (2.1.3). (why?) 4. The system (2.1.4) and system x +y +z = 3 3y +z = 4 divide the second equation by 2 z = 1 divide the third equation by 7 (2.1.5) has the same set of solution. (why?) 5. Now, z = 1 implies y = 4 −1 3 = 1 and x = 3 −(1 +1) = 1. Or in terms of a vector, the set of solution is ¦ (x, y, z) t : (x, y, z) = (1, 1, 1)¦. 2.2 Row Operations and Equivalent Systems Definition 2.2.1 (Elementary Operations) The following operations 1, 2 and 3 are called elementary op- erations. 1. interchange of two equations, say “interchange the i th and j th equations”; (compare the system (2.1.2) with the original system.) 24 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS 2. 
multiply a non-zero constant throughout an equation, say “multiply the k th equation by c = 0”; (compare the system (2.1.5) and the system (2.1.4).) 3. replace an equation by itself plus a constant multiple of another equation, say “replace the k th equation by k th equation plus c times the j th equation”. (compare the system (2.1.3) with (2.1.2) or the system (2.1.4) with (2.1.3).) Remark 2.2.2 1. In Example 2.1.4, observe that the elementary operations helped us in getting a linear system (2.1.5), which was easily solvable. 2. Note that at Step 1, if we interchange the first and the second equation, we get back to the linear system from which we had started. This means the operation at Step 1, has an inverse operation. In other words, inverse operation sends us back to the step where we had precisely started. So, in Example 2.1.4, the application of a finite number of elementary operations helped us to obtain a simpler system whose solution can be obtained directly. That is, after applying a finite number of elementary operations, a simpler linear system is obtained which can be easily solved. Note that the three elementary operations defined above, have corresponding inverse operations, namely, 1. “interchange the i th and j th equations”, 2. “divide the k th equation by c = 0”; 3. “replace the k th equation by k th equation minus c times the j th equation”. It will be a useful exercise for the reader to identify the inverse operations at each step in Example 2.1.4. Definition 2.2.3 (Equivalent Linear Systems) Two linear systems are said to be equivalent if one can be obtained from the other by a finite number of elementary operations. The linear systems at each step in Example 2.1.4 are equivalent to each other and also to the original linear system. Lemma 2.2.4 Let Cx = d be the linear system obtained from the linear system Ax = b by a single elementary operation. Then the linear systems Ax = b and Cx = d have the same set of solutions. Proof. We prove the result for the elementary operation “the k th equation is replaced by k th equation plus c times the j th equation.” The reader is advised to prove the result for other elementary operations. In this case, the systems Ax = b and Cx = d vary only in the k th equation. Let (α 1 , α 2 , . . . , α n ) be a solution of the linear system Ax = b. Then substituting for α i ’s in place of x i ’s in the k th and j th equations, we get a k1 α 1 +a k2 α 2 + a kn α n = b k , and a j1 α 1 +a j2 α 2 + a jn α n = b j . Therefore, (a k1 +ca j1 1 + (a k2 +ca j2 2 + + (a kn +ca jn n = b k +cb j . (2.2.1) But then the k th equation of the linear system Cx = d is (a k1 +ca j1 )x 1 + (a k2 +ca j2 )x 2 + + (a kn +ca jn )x n = b k +cb j . (2.2.2) Therefore, using Equation (2.2.1), (α 1 , α 2 , . . . , α n ) is also a solution for the k th Equation (2.2.2). 2.2. ROW OPERATIONS AND EQUIVALENT SYSTEMS 25 Use a similar argument to show that if (β 1 , β 2 , . . . , β n ) is a solution of the linear system Cx = d then it is also a solution of the linear system Ax = b. Hence, we have the proof in this case. Lemma 2.2.4 is now used as an induction step to prove the main result of this section (Theorem 2.2.5). Theorem 2.2.5 Two equivalent systems have the same set of solutions. Proof. Let n be the number of elementary operations performed on Ax = b to get Cx = d. We prove the theorem by induction on n. If n = 1, Lemma 2.2.4 answers the question. If n > 1, assume that the theorem is true for n = m. Now, suppose n = m+1. 
Apply the Lemma 2.2.4 again at the “last step” (that is, at the (m+1) th step from the m th step) to get the required result using induction. Let us formalise the above section which led to Theorem 2.2.5. For solving a linear system of equa- tions, we applied elementary operations to equations. It is observed that in performing the elementary operations, the calculations were made on the coefficients (numbers). The variables x 1 , x 2 , . . . , x n and the sign of equality (that is, “ = ”) are not disturbed. Therefore, in place of looking at the system of equations as a whole, we just need to work with the coefficients. These coefficients when arranged in a rectangular array gives us the augmented matrix [A b]. Definition 2.2.6 (Elementary Row Operations) The elementary row operations are defined as: 1. interchange of two rows, say “interchange the i th and j th rows”, denoted R ij ; 2. multiply a non-zero constant throughout a row, say “multiply the k th row by c = 0”, denoted R k (c); 3. replace a row by itself plus a constant multiple of another row, say “replace the k th row by k th row plus c times the j th row”, denoted R kj (c). Exercise 2.2.7 Find the inverse row operations corresponding to the elementary row operations that have been defined just above. Definition 2.2.8 (Row Equivalent Matrices) Two matrices are said to be row-equivalent if one can be obtained from the other by a finite number of elementary row operations. Example 2.2.9 The three matrices given below are row equivalent. 0 1 1 2 2 0 3 5 1 1 1 3 ¸ ¸ ¸ −−→ R 12 2 0 3 5 0 1 1 2 1 1 1 3 ¸ ¸ ¸ −−−−−→ R 1 (1/2) 1 0 3 2 5 2 0 1 1 2 1 1 1 3 ¸ ¸ ¸. Whereas the matrix 0 1 1 2 2 0 3 5 1 1 1 3 ¸ ¸ ¸ is not row equivalent to the matrix 1 0 1 2 0 2 3 5 1 1 1 3 ¸ ¸ ¸. 2.2.1 Gauss Elimination Method Definition 2.2.10 (Forward/Gauss Elimination Method) Gaussian elimination is a method of solving a linear system Ax = b (consisting of m equations in n unknowns) by bringing the augmented matrix [A b] = a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 . . . . . . . . . . . . . . . a m1 a m2 a mn b m ¸ ¸ ¸ ¸ ¸ ¸ 26 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS to an upper triangular form c 11 c 12 c 1n d 1 0 c 22 c 2n d 2 . . . . . . . . . . . . . . . 0 0 c mn d m ¸ ¸ ¸ ¸ ¸ ¸ . This elimination process is also called the forward elimination method. The following examples illustrate the Gauss elimination procedure. Example 2.2.11 Solve the linear system by Gauss elimination method. y +z = 2 2x + 3z = 5 x +y +z = 3 Solution: In this case, the augmented matrix is 0 1 1 2 2 0 3 5 1 1 1 3 ¸ ¸ ¸. The method proceeds along the fol- lowing steps. 1. Interchange 1 st and 2 nd equation (or R 12 ). 2x + 3z = 5 y +z = 2 x +y +z = 3 2 0 3 5 0 1 1 2 1 1 1 3 ¸ ¸ ¸. 2. Divide the 1 st equation by 2 (or R 1 (1/2)). x + 3 2 z = 5 2 y +z = 2 x +y +z = 3 1 0 3 2 5 2 0 1 1 2 1 1 1 3 ¸ ¸ ¸. 3. Add −1 times the 1 st equation to the 3 rd equation (or R 31 (−1)). x + 3 2 z = 5 2 y +z = 2 y − 1 2 z = 1 2 1 0 3 2 5 2 0 1 1 2 0 1 − 1 2 1 2 ¸ ¸ ¸. 4. Add −1 times the 2 nd equation to the 3 rd equation (or R 32 (−1)). x + 3 2 z = 5 2 y +z = 2 3 2 z = − 3 2 1 0 3 2 5 2 0 1 1 2 0 0 − 3 2 3 2 ¸ ¸ ¸. 5. Multiply the 3 rd equation by −2 3 (or R 3 (− 2 3 )). x + 3 2 z = 5 2 y +z = 2 z = 1 1 0 3 2 5 2 0 1 1 2 0 0 1 1 ¸ ¸ ¸. The last equation gives z = 1, the second equation now gives y = 1. Finally the first equation gives x = 1. Hence the set of solutions is (x, y, z) t = (1, 1, 1) t , a unique solution. 2.2. 
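The forward elimination just carried out is easy to mechanise. The following Python sketch is an illustration added here (it is not part of the original notes): it performs forward elimination on the augmented matrix and then back substitution. For the interchange step it simply picks the largest available entry in the current column (partial pivoting), a common variant of the row interchange used in Step 1 above; the function name gauss_eliminate and the use of numpy are our own choices.

import numpy as np

def gauss_eliminate(A, b):
    # Forward elimination on the augmented matrix [A b], then back substitution.
    # Assumes a square system with a unique solution.
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for k in range(n):
        p = k + int(np.argmax(np.abs(M[k:, k])))   # choose a non-zero pivot (largest in magnitude)
        M[[k, p]] = M[[p, k]]                      # interchange rows k and p
        M[k] = M[k] / M[k, k]                      # scale the pivot row so the pivot becomes 1
        for i in range(k + 1, n):
            M[i] = M[i] - M[i, k] * M[k]           # eliminate the entries below the pivot
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):                 # back substitution
        x[k] = M[k, -1] - M[k, k + 1:n] @ x[k + 1:n]
    return x

# the system of Example 2.2.11:  y + z = 2,  2x + 3z = 5,  x + y + z = 3
A = np.array([[0, 1, 1], [2, 0, 3], [1, 1, 1]])
b = np.array([2, 5, 3])
print(gauss_eliminate(A, b))    # prints [1. 1. 1.]

The printed vector agrees with the solution (x, y, z) = (1, 1, 1) obtained above.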
ROW OPERATIONS AND EQUIVALENT SYSTEMS 27 Example 2.2.12 Solve the linear system by Gauss elimination method. x +y +z = 3 x + 2y + 2z = 5 3x + 4y + 4z = 11 Solution: In this case, the augmented matrix is 1 1 1 3 1 2 2 5 3 4 4 11 ¸ ¸ ¸ and the method proceeds as follows: 1. Add −1 times the first equation to the second equation. x +y +z = 3 y +z = 2 3x + 4y + 4z = 11 1 1 1 3 0 1 1 2 3 4 4 11 ¸ ¸ ¸. 2. Add −3 times the first equation to the third equation. x +y +z = 3 y +z = 2 y +z = 2 1 1 1 3 0 1 1 2 0 1 1 2 ¸ ¸ ¸. 3. Add −1 times the second equation to the third equation x +y +z = 3 y +z = 2 1 1 1 3 0 1 1 2 0 0 0 0 ¸ ¸ ¸. Thus, the set of solutions is (x, y, z) t = (1, 2 −z, z) t = (1, 2, 0) t +z(0, −1, 1) t , with z arbitrary. In other words, the system has infinite number of solutions. Example 2.2.13 Solve the linear system by Gauss elimination method. x +y +z = 3 x + 2y + 2z = 5 3x + 4y + 4z = 12 Solution: In this case, the augmented matrix is 1 1 1 3 1 2 2 5 3 4 4 12 ¸ ¸ ¸ and the method proceeds as follows: 1. Add −1 times the first equation to the second equation. x +y +z = 3 y +z = 2 3x + 4y + 4z = 12 1 1 1 3 0 1 1 2 3 4 4 12 ¸ ¸ ¸. 2. Add −3 times the first equation to the third equation. x +y +z = 3 y +z = 2 y +z = 3 1 1 1 3 0 1 1 2 0 1 1 3 ¸ ¸ ¸. 28 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS 3. Add −1 times the second equation to the third equation x +y +z = 3 y +z = 2 0 = 1 1 1 1 3 0 1 1 2 0 0 0 1 ¸ ¸ ¸. The third equation in the last step is 0x + 0y + 0z = 1. This can never hold for any value of x, y, z. Hence, the system has no solution. Remark 2.2.14 Note that to solve a linear system, Ax = b, one needs to apply only the elementary row operations to the augmented matrix [A b]. 2.3 Row Reduced Echelon Form of a Matrix Definition 2.3.1 (Row Reduced Form of a Matrix) A matrix C is said to be in the row reduced form if 1. the first non-zero entry in each row of C is 1; 2. the column containing this 1 has all its other entries zero. A matrix in the row reduced form is also called a row reduced matrix. Example 2.3.2 1. One of the most important examples of a row reduced matrix is the n n identity matrix, I n . Recall that the (i, j) th entry of the identity matrix is I ij = δ ij = 1 if i = j 0 if i = j. . δ ij is usually referred to as the Kronecker delta function. 2. The matrices 0 1 0 −1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 ¸ ¸ ¸ ¸ ¸ and 0 1 0 4 0 0 0 0 0 1 0 0 1 1 0 0 0 0 0 0 ¸ ¸ ¸ ¸ ¸ are also in row reduced form. 3. The matrix 1 0 0 0 5 0 1 1 1 2 0 0 0 1 1 0 0 0 0 0 ¸ ¸ ¸ ¸ ¸ is not in the row reduced form. (why?) Definition 2.3.3 (Leading Term, Leading Column) For a row-reduced matrix, the first non-zero entry of any row is called a leading term. The columns containing the leading terms are called the leading columns. Definition 2.3.4 (Basic, Free Variables) Consider the linear system Ax = b in n variables and m equa- tions. Let [C d] be the row-reduced matrix obtained by applying the Gauss elimination method to the augmented matrix [A b]. Then the variables corresponding to the leading columns in the first n columns of [C d] are called the basic variables. The variables which are not basic are called free variables. 2.3. ROW REDUCED ECHELON FORM OF A MATRIX 29 The free variables are called so as they can be assigned arbitrary values and the value of the basic variables can then be written in terms of the free variables. Observation: In Example 2.2.12, the solution set was given by (x, y, z) t = (1, 2 −z, z) t = (1, 2, 0) t +z(0, −1, 1) t , with z arbitrary. 
That is, we had two basic variables, x and y, and z as a free variable. Remark 2.3.5 It is very important to observe that if there are r non-zero rows in the row-reduced form of the matrix then there will be r leading terms. That is, there will be r leading columns. Therefore, if there are r leading terms and n variables, then there will be r basic variables and n −r free variables. 2.3.1 Gauss-Jordan Elimination We now start with Step 5 of Example 2.2.11 and apply the elementary operations once again. But this rd row. I. Add −1 times the third equation to the second equation (or R 23 (−1)). x + 3 2 z = 5 2 y = 2 z = 1 1 0 3 2 5 2 0 1 0 1 0 0 1 1 ¸ ¸ ¸. −3 2 times the third equation to the first equation (or R 13 (− 3 2 )). x = 1 y = 1 z = 1 1 0 0 1 0 1 0 1 0 0 1 1 ¸ ¸ ¸. III. From the above matrix, we directly have the set of solution as (x, y, z) t = (1, 1, 1) t . Definition 2.3.6 (Row Reduced Echelon Form of a Matrix) A matrix C is said to be in the row reduced echelon form if 1. C is already in the row reduced form; 2. The rows consisting of all zeros comes below all non-zero rows; and 3. the leading terms appear from left to right in successive rows. That is, for 1 ≤ ≤ k, let i be the th row. Then i 1 < i 2 < < i k . Example 2.3.7 Suppose A = 0 1 0 2 0 0 0 0 0 0 1 1 ¸ ¸ ¸ and B = 0 0 0 1 0 1 1 0 0 0 0 0 0 0 1 ¸ ¸ ¸ are in row reduced form. Then the corresponding matrices in the row reduced echelon form are respectively, 0 1 0 2 0 0 1 1 0 0 0 0 ¸ ¸ ¸ and 1 1 0 0 0 0 0 0 1 0 0 0 0 0 1 ¸ ¸ ¸. Definition 2.3.8 (Row Reduced Echelon Matrix) A matrix which is in the row reduced echelon form is also called a row reduced echelon matrix. 30 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS Definition 2.3.9 (Back Substitution/Gauss-Jordan Method) The procedure to get to Step II of Example 2.2.11 from Step 5 of Example 2.2.11 is called the back substitution. The elimination process applied to obtain the row reduced echelon form of the augmented matrix is called the Gauss-Jordan elimination. That is, the Gauss-Jordan elimination method consists of both the forward elimination and the backward substitution. Method to get the row-reduced echelon form of a given matrix A Let A be an mn matrix. Then the following method is used to obtain the row-reduced echelon form the matrix A. Step 1: Consider the first column of the matrix A. If all the entries in the first column are zero, move to the second column. Else, find a row, say i th row, which contains a non-zero entry in the first column. Now, interchange the first row with the i th row. Suppose the non-zero entry in the (1, 1)-position is α = 0. Divide the whole row by α so that the (1, 1)-entry of the new matrix is 1. Now, use the 1 to make all the entries below this 1 equal to 0. Step 2: If all entries in the first column after the first step are zero, consider the right m (n − 1) submatrix of the matrix obtained in step 1 and proceed as in step 1. Else, forget the first row and first column. Start with the lower (m−1) (n−1) submatrix of the matrix obtained in the first step and proceed as in step 1. Step 3: Keep repeating this process till we reach a stage where all the entries below a particular row, say r, are zero. Suppose at this stage we have obtained a matrix C. Then C has the following form: 1. the first non-zero entry in each row of C is 1. These 1’s are the leading terms of C 2. the entries of C below the leading term are all zero. Step 4: Now use the leading term in the r th row to make all entries in the r th to zero. 
Step 5: Next, use the leading term in the (r − 1) th row to make all entries in the (r − 1) th column equal to zero and continue till we come to the first leading term or column. The final matrix is the row-reduced echelon form of the matrix A. Remark 2.3.10 Note that the row reduction involves only row operations and proceeds from left to right. Hence, if A is a matrix consisting of first s columns of a matrix C, then the row reduced form of A will be the first s columns of the row reduced form of C. The proof of the following theorem is beyond the scope of this book and is omitted. Theorem 2.3.11 The row reduced echelon form of a matrix is unique. Exercise 2.3.12 1. Solve the following linear system. (a) x +y +z +w = 0, x −y +z +w = 0 and −x +y + 3z + 3w = 0. (b) x + 2y + 3z = 1 and x + 3y + 2z = 1. (c) x +y +z = 3, x +y −z = 1 and x +y + 7z = 6. 2.3. ROW REDUCED ECHELON FORM OF A MATRIX 31 (d) x +y +z = 3, x +y −z = 1 and x +y + 4z = 6. (e) x +y +z = 3, x +y −z = 1, x +y + 4z = 6 and x +y −4z = −1. 2. Find the row-reduced echelon form of the following matrices. 1. −1 1 3 5 1 3 5 7 9 11 13 15 −3 −1 13 ¸ ¸ ¸ ¸ ¸ , 2. 10 8 6 4 2 0 −2 −4 −6 −8 −10 −12 −2 −4 −6 −8 ¸ ¸ ¸ ¸ ¸ 2.3.2 Elementary Matrices Definition 2.3.13 A square matrix E of order n is called an elementary matrix if it is obtained by applying exactly one elementary row operation to the identity matrix, I n . Remark 2.3.14 There are three types of elementary matrices. 1. E ij , which is obtained by the application of the elementary row operation R ij to the identity matrix, I n . Thus, the (k, ) th entry of E ij is (E ij ) (k,) = 1 if k = and = i, j 1 if (k, ) = (i, j) or (k, ) = (j, i) 0 otherwise . 2. E k (c), which is obtained by the application of the elementary row operation R k (c) to the identity matrix, I n . The (i, j) th entry of E k (c) is (E k (c)) (i,j) = 1 if i = j and i = k c if i = j = k 0 otherwise . 3. E ij (c), which is obtained by the application of the elementary row operation R ij (c) to the identity matrix, I n . The (k, ) th entry of E ij (c) is (E ij ) (k,) 1 if k = c if (k, ) = (i, j) 0 otherwise . In particular, if we start with a 3 3 identity matrix I 3 , then E 23 = 1 0 0 0 0 1 0 1 0 ¸ ¸ ¸, E 1 (c) = c 0 0 0 1 0 0 0 1 ¸ ¸ ¸, and E 23 (c) = 1 0 0 0 1 c 0 0 1 ¸ ¸ ¸. Example 2.3.15 1. Let A = 1 2 3 0 2 0 3 4 3 4 5 6 ¸ ¸ ¸. Then 1 2 3 0 2 0 3 4 3 4 5 6 ¸ ¸ ¸ −−→ R 23 1 2 3 0 3 4 5 6 2 0 3 4 ¸ ¸ ¸ = 1 0 0 0 0 1 0 1 0 ¸ ¸ ¸A = E 23 A. That is, interchanging the two rows of the matrix A is same as multiplying on the left by the corre- sponding elementary matrix. In other words, we see that the left multiplication of elementary matrices to a matrix results in elementary row operations. 32 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS 2. Consider the augmented matrix [A b] = 0 1 1 2 2 0 3 5 1 1 1 3 ¸ ¸ ¸. Then the result of the steps given below is same as the matrix product E 23 (−1)E 12 (−1)E 3 (1/3)E 32 (2)E 23 E 21 (−2)E 13 [A b]. 2 6 4 0 1 1 2 2 0 3 5 1 1 1 3 3 7 5 −−→ R13 2 6 4 1 1 1 3 2 0 3 5 0 1 1 2 3 7 5 −−−−−→ R21(−2) 2 6 4 1 1 1 3 0 −2 1 −1 0 1 1 2 3 7 5 −−→ R23 2 6 4 1 1 1 3 0 1 1 2 0 −2 1 −1 3 7 5 −−−−→ R32(2) 2 6 4 1 1 1 3 0 1 1 2 0 0 3 3 3 7 5 −−−−−→ R3(1/3) 2 6 4 1 1 1 3 0 1 1 2 0 0 1 1 3 7 5 −−−−−→ R12(−1) 2 6 4 1 0 0 1 0 1 1 2 0 0 1 1 3 7 5 −−−−−→ R23(−1) 2 6 4 1 0 0 1 0 1 0 1 0 0 1 1 3 7 5 Now, consider an m n matrix A and an elementary matrix E of order n. Then multiplying by E on the right to A corresponds to applying column transformation on the matrix A. 
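As a quick numerical check of this correspondence (added for illustration; not part of the notes), one can verify with numpy that multiplying by E_23 on the left interchanges rows, while multiplying by the same matrix on the right interchanges columns.

import numpy as np

# E_23: interchange rows (or columns) 2 and 3 of the identity matrix I_3
E23 = np.array([[1, 0, 0],
                [0, 0, 1],
                [0, 1, 0]])

A = np.array([[1, 2, 3, 0],
              [2, 0, 3, 4],
              [3, 4, 5, 6]])     # the matrix of Example 2.3.15

print(E23 @ A)    # left multiplication: rows 2 and 3 of A are interchanged

B = np.array([[1, 2, 3],
              [2, 0, 3],
              [3, 4, 5]])
print(B @ E23)    # right multiplication: columns 2 and 3 of B are interchanged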
Therefore, for each elementary matrix, there is a corresponding column transformation. We summarize: Definition 2.3.16 The column transformations obtained by right multiplication of elementary matrices are called elementary column operations. Example 2.3.17 Let A = 1 2 3 2 0 3 3 4 5 ¸ ¸ ¸ and consider the elementary column operation f which interchanges the second and the third column of A. Then f(A) = 1 3 2 2 3 0 3 5 4 ¸ ¸ ¸ = A 1 0 0 0 0 1 0 1 0 ¸ ¸ ¸ = AE 23 . Exercise 2.3.18 1. Let e be an elementary row operation and let E = e(I) be the corresponding ele- mentary matrix. That is, E is the matrix obtained from I by applying the elementary row operation e. Show that e(A) = EA. 2. Show that the Gauss elimination method is same as multiplying by a series of elementary matrices on the left to the augmented matrix. Does the Gauss-Jordan method also corresponds to multiplying by elementary matrices on the left? Give reasons. 3. Let A and B be two mn matrices. Then prove that the two matrices A, B are row-equivalent if and only if B = PA, where P is product of elementary matrices. When is this P unique? 4. Show that every elementary matrix is invertible. Is the inverse of an elementary matrix, also an ele- mentary matrix? 2.4 Rank of a Matrix In previous sections, we solved linear systems using Gauss elimination method or the Gauss-Jordan method. In the examples considered, we have encountered three possibilities, namely 1. existence of a unique solution, 2.4. RANK OF A MATRIX 33 2. existence of an infinite number of solutions, and 3. no solution. Based on the above possibilities, we have the following definition. Definition 2.4.1 (Consistent, Inconsistent) A linear system is called consistent if it admits a solution and is called inconsistent if it admits no solution. The question arises, as to whether there are conditions under which the linear system Ax = b is consistent. The answer to this question is in the affirmative. To proceed further, we need a few definitions and remarks. Recall that the row reduced echelon form of a matrix is unique and therefore, the number of non-zero rows is a unique number. Also, note that the number of non-zero rows in either the row reduced form or the row reduced echelon form of a matrix are same. Definition 2.4.2 (Row rank of a Matrix) The number of non-zero rows in the row reduced form of a matrix is called the row-rank of the matrix. By the very definition, it is clear that row-equivalent matrices have the same row-rank. For a matrix A, we write ‘row-rank (A)’ to denote the row-rank of A. Example 2.4.3 1. Determine the row-rank of A = 1 2 1 2 3 1 1 1 2 ¸ ¸ ¸. Solution: To determine the row-rank of A, we proceed as follows. (a) 1 2 1 2 3 1 1 1 2 ¸ ¸ ¸ −−−−−−−−−−−−−→ R 21 (−2), R 31 (−1) 1 2 1 0 −1 −1 0 −1 1 ¸ ¸ ¸. (b) 1 2 1 0 −1 −1 0 −1 1 ¸ ¸ ¸ −−−−−−−−−−−→ R 2 (−1), R 32 (1) 1 2 1 0 1 1 0 0 2 ¸ ¸ ¸. (c) 1 2 1 0 1 1 0 0 2 ¸ ¸ ¸ −−−−−−−−−−−−→ R 3 (1/2), R 12 (−2) 1 0 −1 0 1 1 0 0 1 ¸ ¸ ¸. (d) 1 0 −1 0 1 1 0 0 1 ¸ ¸ ¸ −−−−−−−−−−−→ R 23 (−1), R 13 (1) 1 0 0 0 1 0 0 0 1 ¸ ¸ ¸ The last matrix in Step 1d is the row reduced form of A which has 3 non-zero rows. Thus, row-rank(A) = 3. This result can also be easily deduced from the last matrix in Step 1b. 2. Determine the row-rank of A = 1 2 1 2 3 1 1 1 0 ¸ ¸ ¸. Solution: Here we have (a) 1 2 1 2 3 1 1 1 0 ¸ ¸ ¸ −−−−−−−−−−−−−→ R 21 (−2), R 31 (−1) 1 2 1 0 −1 −1 0 −1 −1 ¸ ¸ ¸. 34 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS (b) 1 2 1 0 −1 −1 0 −1 −1 ¸ ¸ ¸ −−−−−−−−−−−→ R 2 (−1), R 32 (1) 1 2 1 0 1 1 0 0 0 ¸ ¸ ¸. 
From the last matrix in Step 2b, we deduce row-rank(A) = 2. Remark 2.4.4 Let Ax = b be a linear system with m equations and n unknowns. Then the row-reduced echelon form of A agrees with the first n columns of [A b], and hence row-rank(A) ≤ row-rank([A b]). Remark 2.4.5 Consider a matrix A. After application of a finite number of elementary column oper- ations (see Definition 2.3.16) to the matrix A, we can have a matrix, say B, which has the following properties: 1. The first nonzero entry in each column is 1. 2. A column containing only 0’s comes after all columns with at least one non-zero entry. 3. The first non-zero entry (the leading term) in each non-zero column moves down in successive columns. Therefore, we can define column-rank of A as the number of non-zero columns in B. It will be proved later that row-rank(A) = column-rank(A). Thus we are led to the following definition. Definition 2.4.6 The number of non-zero rows in the row reduced form of a matrix A is called the rank of A, denoted rank (A). Theorem 2.4.7 Let A be a matrix of rank r. Then there exist elementary matrices E 1 , E 2 , . . . , E s and F 1 , F 2 , . . . , F such that E 1 E 2 . . . E s A F 1 F 2 . . . F = ¸ I r 0 0 0 ¸ . Proof. Let C be the row reduced echelon matrix obtained by applying elementary row operations to the given matrix A. As rank(A) = r, the matrix C will have the first r rows as the non-zero rows. So by Remark 2.3.5, C will have r leading columns, say i 1 , i 2 , . . . , i r . Note that, for 1 ≤ s ≤ r, the i th s column will have 1 in the s th row and zero elsewhere. We now apply column operations to the matrix C. Let D be the matrix obtained from C by succes- sively interchanging the s th and i th s column of C for 1 ≤ s ≤ r. Then the matrix D can be written in the form ¸ I r B 0 0 ¸ , where B is a matrix of appropriate size. As the (1, 1) block of D is an identity matrix, the block (1, 2) can be made the zero matrix by application of column operations to D. This gives the required result. Corollary 2.4.8 Let A be a n n matrix of rank r < n. Then the system of equations Ax = 0 has infinite number of solutions. 2.4. RANK OF A MATRIX 35 Proof. By Theorem 2.4.7, there exist elementary matrices E 1 , E 2 , . . . , E s and F 1 , F 2 , . . . , F such that E 1 E 2 . . . E s A F 1 F 2 . . . F = ¸ I r 0 0 0 ¸ . Define Q = F 1 F 2 . . . F . Then the matrix AQ = 0 ¸ ¸ ¸ as the elementary martices E i ’s are being multiplied on the left of the matrix ¸ I r 0 0 0 ¸ . Let Q 1 , Q 2 , . . . , Q n be the columns of the matrix Q. Then check that AQ i = 0 for i = r +1, r +2, . . . , n. Hence, we can use the Q i ’s which are non-zero (Use Exercise 1.2.17.2) to generate infinite number of solutions. Exercise 2.4.9 1. Determine the ranks of the coefficient and the augmented matrices that appear in Part 1 and Part 2 of Exercise 2.3.12. 2. Let A be an n n matrix with rank(A) = n. Then prove that A is row-equivalent to I n . 3. If P and Q are invertible matrices and PAQ is defined then show that rank (PAQ) = rank (A). 4. Find matrices P and Q which are product of elementary matrices such that B = PAQ where A = ¸ 2 4 8 1 3 2 ¸ and B = ¸ 1 0 0 0 1 0 ¸ . 5. Let A and B be two matrices. Show that (a) if A +B is defined, then rank(A +B) ≤ rank(A) + rank(B), (b) if AB is defined, then rank(AB) ≤ rank(A) and rank(AB) ≤ rank(B). 6. Let A be any matrix of rank r. 
Then show that there exists invertible matrices B i , C i such that B 1 A = ¸ R 1 R 2 0 0 ¸ , AC 1 = ¸ S 1 0 S 3 0 ¸ , B 2 AC 2 = ¸ A 1 0 0 0 ¸ , and B 3 AC 3 = ¸ I r 0 0 0 ¸ . Also, prove that the matrix A 1 is an r r invertible matrix. 7. Let A be an m n matrix of rank r. Then A can be written as A = BC, where both B and C have rank r and B is a matrix of size mr and C is a matrix of size r n. 8. Let A and B be two matrices such that AB is defined and rank (A) = rank (AB). Then show that A = ABX for some matrix X. Similarly, if BA is defined and rank (A) = rank (BA), then A = Y BA for some matrix Y. [Hint: Choose non-singular matrices P, Q and R such that PAQ = " A1 0 0 0 # and P(AB)R = " C 0 0 0 # . Define X = R " C −1 A1 0 0 0 # Q −1 .] 9. Let A = [a ij ] be an invertible matrix and let B = [p i−j a ij ] for some nonzero real number p. Find the inverse of B. 10. If matrices B and C are invertible and the involved partitioned products are defined, then show that ¸ A B C 0 ¸ −1 = ¸ 0 C −1 B −1 −B −1 AC −1 ¸ . 36 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS 11. Suppose A is the inverse of a matrix B. Partition A and B as follows: A = ¸ A 11 A 12 A 21 A 22 ¸ , B = ¸ B 11 B 12 B 21 B 22 ¸ . If A 11 is invertible and P = A 22 −A 21 (A −1 11 A 12 ), then show that B 11 = A −1 11 + (A −1 11 A 12 )P −1 (A 21 A −1 11 ), B 21 = −P −1 (A 21 A −1 11 ), B 12 = −(A −1 11 A 12 )P −1 , and B 22 = P −1 . 2.5 Existence of Solution of Ax = b We try to understand the properties of the set of solutions of a linear system through an example, using the Gauss-Jordan method. Based on this observation, we arrive at the existence and uniqueness results for the linear system Ax = b. This example is more or less a motivation. 2.5.1 Example Consider a linear system Ax = b which after the application of the Gauss-Jordan method reduces to a matrix [C d] with [C d] = 1 0 2 −1 0 0 2 8 0 1 1 3 0 0 5 1 0 0 0 0 1 0 −1 2 0 0 0 0 0 1 1 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ . For this particular matrix [C d], we want to see the set of solutions. We start with some observations. Observations: 1. The number of non-zero rows in C is 4. This number is also equal to the number of non-zero rows in [C d]. 2. The first non-zero entry in the non-zero rows appear in columns 1, 2, 5 and 6. 3. Thus, the respective variables x 1 , x 2 , x 5 and x 6 are the basic variables. 4. The remaining variables, x 3 , x 4 and x 7 are free variables. 5. We assign arbitrary constants k 1 , k 2 and k 3 to the free variables x 3 , x 4 and x 7 , respectively. 2.5. EXISTENCE OF SOLUTION OF AX = B 37 Hence, we have the set of solutions as x 1 x 2 x 3 x 4 x 5 x 6 x 7 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ = 8 −2k 1 +k 2 −2k 3 1 −k 1 −3k 2 −5k 3 k 1 k 2 2 +k 3 4 −k 3 k 3 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ = 8 1 0 0 2 4 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ +k 1 −2 −1 1 0 0 0 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ +k 2 1 −3 0 1 0 0 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ +k 3 −2 −5 0 0 1 −1 1 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ , where k 1 , k 2 and k 3 are arbitrary. Let u 0 = 8 1 0 0 2 4 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ , u 1 = −2 −1 1 0 0 0 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ , u 2 = 1 −3 0 1 0 0 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ and u 3 = −2 −5 0 0 1 −1 1 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ . Then it can easily be verified that Cu 0 = d, and for 1 ≤ i ≤ 3, Cu i = 0. A similar idea is used in the proof of the next theorem and is omitted. The interested readers can read the proof in Appendix 15.1. 
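Before moving to the main theorem, the claims Cu_0 = d and Cu_i = 0 made in the example above can be checked numerically. The short Python sketch below is an illustration added here (it is not part of the notes); it also confirms that u_0 + k_1 u_1 + k_2 u_2 + k_3 u_3 satisfies Cx = d for an arbitrary choice of k_1, k_2, k_3.

import numpy as np

# The matrix [C d] of the example, split into C and d
C = np.array([[1, 0, 2, -1, 0, 0,  2],
              [0, 1, 1,  3, 0, 0,  5],
              [0, 0, 0,  0, 1, 0, -1],
              [0, 0, 0,  0, 0, 1,  1],
              [0, 0, 0,  0, 0, 0,  0],
              [0, 0, 0,  0, 0, 0,  0]])
d = np.array([8, 1, 2, 4, 0, 0])

u0 = np.array([ 8,  1, 0, 0, 2,  4, 0])   # particular solution (free variables set to 0)
u1 = np.array([-2, -1, 1, 0, 0,  0, 0])   # direction attached to the free variable x3
u2 = np.array([ 1, -3, 0, 1, 0,  0, 0])   # direction attached to the free variable x4
u3 = np.array([-2, -5, 0, 0, 1, -1, 1])   # direction attached to the free variable x7

print(np.array_equal(C @ u0, d))                        # True
print([np.count_nonzero(C @ u) for u in (u1, u2, u3)])  # [0, 0, 0]

k1, k2, k3 = 2.0, -1.0, 0.5                             # any choice of constants works
x = u0 + k1*u1 + k2*u2 + k3*u3
print(np.allclose(C @ x, d))                            # True

Every choice of the constants k_1, k_2, k_3 gives another solution; this is exactly the structure asserted in the theorem that follows.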
2.5.2 Main Theorem

Theorem 2.5.1 [Existence and Non-existence] Consider a linear system Ax = b, where A is an m × n matrix and x, b are vectors of orders n × 1 and m × 1, respectively. Suppose rank(A) = r and rank([A b]) = r_a. Then exactly one of the following statements holds:

1. if r_a = r < n, the set of solutions of the linear system is an infinite set and has the form
{u_0 + k_1 u_1 + k_2 u_2 + · · · + k_{n−r} u_{n−r} : k_i ∈ R, 1 ≤ i ≤ n − r},
where u_0, u_1, . . . , u_{n−r} are n × 1 vectors satisfying Au_0 = b and Au_i = 0 for 1 ≤ i ≤ n − r.

2. if r_a = r = n, the solution set of the linear system consists of a unique n × 1 vector x_0 satisfying Ax_0 = b.

3. if r < r_a, the linear system has no solution.

Remark 2.5.2 Let A be an m × n matrix and consider the linear system Ax = b. Then by Theorem 2.5.1, the linear system Ax = b is consistent if and only if rank(A) = rank([A b]).

The following corollary of Theorem 2.5.1 is a very important result about the homogeneous linear system Ax = 0.

Corollary 2.5.3 Let A be an m × n matrix. Then the homogeneous system Ax = 0 has a non-trivial solution if and only if rank(A) < n.

Proof. Suppose the system Ax = 0 has a non-trivial solution x_0. That is, Ax_0 = 0 and x_0 ≠ 0. Under this assumption, we need to show that rank(A) < n. On the contrary, assume that rank(A) = n. Then n = rank(A) = rank([A 0]) = r_a. Also, A0 = 0 implies that 0 is a solution of the linear system Ax = 0. Hence, by the uniqueness of the solution under the condition r = r_a = n (see Theorem 2.5.1), we get x_0 = 0, a contradiction to the fact that x_0 was a given non-trivial solution.

Now, let us assume that rank(A) < n. Then r_a = rank([A 0]) = rank(A) < n. So, by Theorem 2.5.1, the solution set of the linear system Ax = 0 contains an infinite number of vectors x satisfying Ax = 0. From this infinite set, we can choose any vector x_0 that is different from 0. Thus, we have a solution x_0 ≠ 0. That is, we have obtained a non-trivial solution x_0.

We now state another important result whose proof is immediate from Theorem 2.5.1 and Corollary 2.5.3.

Proposition 2.5.4 Consider the linear system Ax = b. Then the two statements given below cannot hold together.
1. The system Ax = b has a unique solution for every b.
2. The system Ax = 0 has a non-trivial solution.

Remark 2.5.5 1. Suppose x_1, x_2 are two solutions of Ax = 0. Then k_1 x_1 + k_2 x_2 is also a solution of Ax = 0 for any k_1, k_2 ∈ R.

2. If u, v are two solutions of Ax = b then u − v is a solution of the system Ax = 0. That is, u − v = x_h for some solution x_h of Ax = 0. In other words, any two solutions of Ax = b differ by a solution of the associated homogeneous system Ax = 0. In conclusion, for b ≠ 0, the set of solutions of the system Ax = b is of the form {x_0 + x_h}, where x_0 is a particular solution of Ax = b and x_h is a solution of Ax = 0.

Exercise 2.5.6 1. For what values of c and k do the following systems have i) no solution, ii) a unique solution and iii) an infinite number of solutions?
(a) x + y + z = 3, x + 2y + cz = 4, 2x + 3y + 2cz = k.
(b) x + y + z = 3, x + y + 2cz = 7, x + 2y + 3cz = k.
(c) x + y + 2z = 3, x + 2y + cz = 5, x + 2y + 4z = k.
(d) kx + y + z = 1, x + ky + z = 1, x + y + kz = 1.
(e) x + 2y − z = 1, 2x + 3y + kz = 3, x + ky + 3z = 2.
(f) x − 2y = 1, x − y + kz = 1, ky + 4z = 6.

2. Find the condition on a, b, c so that the linear system x + 2y − 3z = a, 2x + 6y − 11z = b, x − 2y + 7z = c is consistent.

3. Let A be an n × n matrix.
If the system A²x = 0 has a non-trivial solution, then show that Ax = 0 also has a non-trivial solution.

2.5.3 Equivalent conditions for Invertibility

Definition 2.5.7 A square matrix A of order n is said to be of full rank if rank(A) = n.

Theorem 2.5.8 For a square matrix A of order n, the following statements are equivalent.
1. A is invertible.
2. A is of full rank.
3. A is row-equivalent to the identity matrix.
4. A is a product of elementary matrices.

Proof. 1 =⇒ 2: Suppose, if possible, that rank(A) = r < n. Then there exists an invertible matrix P (a product of elementary matrices) such that PA = \begin{bmatrix} B_1 & B_2 \\ 0 & 0 \end{bmatrix}, where B_1 is an r × r matrix. Since A is invertible, write A^{−1} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}, where C_1 is an r × n matrix. Then
P = P I_n = P(AA^{−1}) = (PA)A^{−1} = \begin{bmatrix} B_1 & B_2 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} = \begin{bmatrix} B_1 C_1 + B_2 C_2 \\ 0 \end{bmatrix}.   (2.5.1)
Thus the matrix P has n − r zero rows and hence P cannot be invertible, a contradiction to P being a product of invertible matrices. Thus, A is of full rank.

2 =⇒ 3: Suppose A is of full rank. This implies that all rows of the row reduced echelon form of A are non-zero. But A has as many columns as rows and therefore the last row of the row reduced echelon form of A is (0, 0, . . . , 0, 1). Hence, the row reduced echelon form of A is the identity matrix.

3 =⇒ 4: Since A is row-equivalent to the identity matrix, there exist elementary matrices E_1, E_2, . . . , E_k such that A = E_1 E_2 · · · E_k I_n. That is, A is a product of elementary matrices.

4 =⇒ 1: Suppose A = E_1 E_2 · · · E_k, where the E_i's are elementary matrices. Since elementary matrices are invertible and a product of invertible matrices is again invertible, we get the required result.

The ideas of Theorem 2.5.8 will be used in the next subsection to find the inverse of an invertible matrix. The idea used in the proof of the first part also gives the following important theorem. We repeat the argument for the sake of clarity.

Theorem 2.5.9 Let A be a square matrix of order n.
1. Suppose there exists a matrix B such that AB = I_n. Then A^{−1} exists.
2. Suppose there exists a matrix C such that CA = I_n. Then A^{−1} exists.

Proof. Suppose that AB = I_n. We will prove that the matrix A is of full rank, that is, rank(A) = n. Suppose, if possible, that rank(A) = r < n. Then there exists an invertible matrix P (a product of elementary matrices) such that PA = \begin{bmatrix} C_1 & C_2 \\ 0 & 0 \end{bmatrix}. Let B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}, where B_1 is an r × n matrix. Then
P = P I_n = P(AB) = (PA)B = \begin{bmatrix} C_1 & C_2 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} = \begin{bmatrix} C_1 B_1 + C_2 B_2 \\ 0 \end{bmatrix}.   (2.5.2)
Thus the matrix P has n − r zero rows, so P cannot be invertible, a contradiction to P being a product of invertible matrices. Thus, rank(A) = n, that is, A is of full rank. Hence, by Theorem 2.5.8, A is an invertible matrix, and therefore BA = I_n as well.
Using the first part, it is clear that the matrix C in the second part is invertible. Hence AC = I_n = CA. Thus, A is invertible as well.

Remark 2.5.10 This theorem implies the following: if we want to show that a square matrix A of order n is invertible, it is enough to show the existence of
1. either a matrix B such that AB = I_n,
2. or a matrix C such that CA = I_n.

Theorem 2.5.11 The following statements are equivalent for a square matrix A of order n.
1. A is invertible.
2. Ax = 0 has only the trivial solution x = 0.
3. Ax = b has a solution x for every b.

Proof. 1 =⇒ 2: Since A is invertible, by Theorem 2.5.8, A is of full rank.
That is, for the linear system Ax = 0, the number of unknowns is equal to the rank of the matrix A. Hence, by Theorem 2.5.1 the system Ax = 0 has a unique solution x = 0. 2 =⇒ 1 Let if possible A be non-invertible. Then by Theorem 2.5.8, the matrix A is not of full rank. Thus by Corollary 2.5.3, the linear system Ax = 0 has infinite number of solutions. This contradicts the assumption that Ax = 0 has only the trivial solution x = 0. 1 =⇒ 3 Since A is invertible, for every b, the system Ax = b has a unique solution x = A −1 b. 3 =⇒ 1 For 1 ≤ i ≤ n, define e i = (0, . . . , 0, 1 .... i th position , 0, . . . , 0) t , and consider the linear system Ax = e i . By assumption, this system has a solution x i for each i, 1 ≤ i ≤ n. Define a matrix B = [x 1 , x 2 , . . . , x n ]. That is, the i th column of B is the solution of the system Ax = e i . Then AB = A[x 1 , x 2 . . . , x n ] = [Ax 1 , Ax 2 . . . , Ax n ] = [e 1 , e 2 . . . , e n ] = I n . Therefore, by Theorem 2.5.9, the matrix A is invertible. Exercise 2.5.12 1. Show that a triangular matrix A is invertible if and only if each diagonal entry of A is non-zero. 2. Let A be a 1 2 matrix and B be a 2 1 matrix having positive entries. Which of BA or AB is invertible? Give reasons. 3. Let A be an n m matrix and B be an m n matrix. Prove that the matrix I − BA is invertible if and only if the matrix I −AB is invertible. 2.5. EXISTENCE OF SOLUTION OF AX = B 41 2.5.4 Inverse and the Gauss-Jordan Method We first give a consequence of Theorem 2.5.8 and then use it to find the inverse of an invertible matrix. Corollary 2.5.13 Let A be an invertible nn matrix. Suppose that a sequence of elementary row-operations reduces A to the identity matrix. Then the same sequence of elementary row-operations when applied to the identity matrix yields A −1 . Proof. Let A be a square matrix of order n. Also, let E 1 , E 2 , . . . , E k be a sequence of elementary row operations such that E 1 E 2 E k A = I n . Then E 1 E 2 E k I n = A −1 . This implies A −1 = E 1 E 2 E k . Summary: Let A be an n n matrix. Apply the Gauss-Jordan method to the matrix [A I n ]. Suppose the row reduced echelon form of the matrix [A I n ] is [B C]. If B = I n , then A −1 = C or else A is not invertible. Example 2.5.14 Find the inverse of the matrix 2 1 1 1 2 1 1 1 2 ¸ ¸ ¸ using the Gauss-Jordan method. Solution: Consider the matrix 2 1 1 1 0 0 1 2 1 0 1 0 1 1 2 0 0 1 ¸ ¸ ¸. A sequence of steps in the Gauss-Jordan method are: 1. 2 1 1 1 0 0 1 2 1 0 1 0 1 1 2 0 0 1 ¸ ¸ ¸ −−−−−→ R 1 (1/2) 1 1 2 1 2 1 2 0 0 1 2 1 0 1 0 1 1 2 0 0 1 ¸ ¸ ¸ 2. 1 1 2 1 2 1 2 0 0 1 2 1 0 1 0 1 1 2 0 0 1 ¸ ¸ ¸ −−−−−→ R 21 (−1) −−−−−→ R 31 (−1) 1 1 2 1 2 1 2 0 0 0 3 2 1 2 1 2 1 0 0 1 2 3 2 1 2 0 1 ¸ ¸ ¸ 3. 1 1 2 1 2 1 2 0 0 0 3 2 1 2 1 2 1 0 0 1 2 3 2 1 2 0 1 ¸ ¸ ¸ −−−−−→ R 2 (2/3) 1 1 2 1 2 1 2 0 0 0 1 1 3 1 3 2 3 0 0 1 2 3 2 1 2 0 1 ¸ ¸ ¸ 4. 1 1 2 1 2 1 2 0 0 0 1 1 3 1 3 2 3 0 0 1 2 3 2 1 2 0 1 ¸ ¸ ¸ −−−−−−−→ R 32 (−1/2) 1 1 2 1 2 1 2 0 0 0 1 1 3 1 3 2 3 0 0 0 4 3 1 3 1 3 1 ¸ ¸ ¸ 5. 1 1 2 1 2 1 2 0 0 0 1 1 3 1 3 2 3 0 0 0 4 3 1 3 1 3 1 ¸ ¸ ¸ −−−−−→ R 3 (3/4) 1 1 2 1 2 1 2 0 0 0 1 1 3 1 3 2 3 0 0 0 1 − 1 4 1 4 3 4 ¸ ¸ ¸ 6. 2 6 4 1 1 2 1 2 1 2 0 0 0 1 1 3 −1 3 2 3 0 0 0 1 −1 4 −1 4 3 4 3 7 5 −−−−−−−→ R23(−1/3) −−−−−−−→ R13(−1/2) 2 6 4 1 1 2 0 5 8 1 8 −3 8 0 1 0 −1 4 3 4 −1 4 0 0 1 −1 4 −1 4 3 4 3 7 5 7. 2 6 4 1 1 2 0 5 8 1 8 −3 8 0 1 0 −1 4 3 4 −1 4 0 0 1 −1 4 −1 4 3 4 3 7 5 −−−−−−−→ R12(−1/2) 2 6 4 1 0 0 3 4 −1 4 −1 4 0 1 0 −1 4 3 4 −1 4 0 0 1 −1 4 −1 4 3 4 3 7 5. 8. 
Thus, the inverse of the given matrix is 3/4 −1/4 −1/4 −1/4 3/4 −1/4 −1/4 −1/4 3/4 ¸ ¸ ¸. 42 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS Exercise 2.5.15 Find the inverse of the following matrices using the Gauss-Jordan method. (i) 1 2 3 1 3 2 2 4 7 ¸ ¸ ¸, (ii) 1 3 3 2 3 2 2 4 7 ¸ ¸ ¸, (iii) 2 −1 3 −1 3 −2 2 4 1 ¸ ¸ ¸. 2.6 Determinant Notation: For an n n matrix A, by A(α[β), we mean the submatrix B of A, which is obtained by deleting the α th row and β th column. Example 2.6.1 Consider a matrix A = 1 2 3 1 3 2 2 4 7 ¸ ¸ ¸. Then A(1[2) = ¸ 1 2 2 7 ¸ , A(1[3) = ¸ 1 3 2 4 ¸ , and A(1, 2[1, 3) = [4]. Definition 2.6.2 (Determinant of a Square Matrix) Let A be a square matrix of order n. With A, we associate inductively (on n) a number, called the determinant of A, written det(A) (or [A[) by det(A) = a if A = [a] (n = 1), n ¸ j=1 (−1) 1+j a 1j det A(1[j) , otherwise. Definition 2.6.3 (Minor, Cofactor of a Matrix) The number det (A(i[j)) is called the (i, j) th minor of A. We write A ij = det (A(i[j)) . The (i, j) th cofactor of A, denoted C ij , is the number (−1) i+j A ij . Example 2.6.4 1. Let A = ¸ a 11 a 12 a 21 a 22 ¸ . Then, det(A) = [A[ = a 11 A 11 −a 12 A 12 = a 11 a 22 −a 12 a 21 . For example, for A = ¸ 1 2 2 1 ¸ det(A) = [A[ = 1 −2 2 = −3. 2. Let A = a 11 a 12 a 13 a 21 a 22 a 23 a 31 a 32 a 33 ¸ ¸ ¸. Then, det(A) = [A[ = a 11 A 11 −a 12 A 12 +a 13 A 13 = a 11 a 22 a 23 a 32 a 33 −a 12 a 21 a 23 a 31 a 33 +a 13 a 21 a 22 a 31 a 32 = a 11 (a 22 a 33 −a 23 a 32 ) −a 12 (a 21 a 33 −a 31 a 23 ) +a 13 (a 21 a 32 −a 31 a 22 ) = a 11 a 22 a 33 −a 11 a 23 a 32 −a 12 a 21 a 33 +a 12 a 23 a 31 +a 13 a 21 a 32 −a 13 a 22 a 31 (2.6.1) For example, if A = 1 2 3 2 3 1 1 2 2 ¸ ¸ ¸ then det(A) = [A[ = 1 3 1 2 2 −2 2 1 1 2 + 3 2 3 1 2 = 4 −2(3) + 3(1) = 1. Exercise 2.6.5 1. Find the determinant of the following matrices. i) 1 2 7 8 0 4 3 2 0 0 2 3 0 0 0 5 ¸ ¸ ¸ ¸ ¸ , ii) 3 5 2 1 0 2 0 5 6 −7 1 0 2 0 3 0 ¸ ¸ ¸ ¸ ¸ , iii) 1 a a 2 1 b b 2 1 c c 2 ¸ ¸ ¸. 2.6. DETERMINANT 43 2. Show that the determinant of a triangular matrix is the product of its diagonal entries. Definition 2.6.6 A matrix A is said to be a singular matrix if det(A) = 0. It is called non-singular if det(A) = 0. The proof of the next theorem is omitted. The interested reader is advised to go through Appendix 15.3. Theorem 2.6.7 Let A be an n n matrix. Then 1. if B is obtained from A by interchanging two rows, then det(B) = −det(A), 2. if B is obtained from A by multiplying a row by c then det(B) = c det(A), 3. if all the elements of one row or column are 0 then det(A) = 0, 4. if B is obtained from A by replacing the jth row by itself plus k times the ith row, where i = j then det(B) = det(A), 5. if A is a square matrix having two rows equal then det(A) = 0. Remark 2.6.8 1. Many authors define the determinant using “Permutations.” It turns out that the way we have defined determinant is usually called the expansion of the determinant along the first row. 2. Part 1 of Lemma 2.6.7 implies that “one can also calculate the determinant by expanding along any row.” Hence, for an n n matrix A, for every k, 1 ≤ k ≤ n, one also has det(A) = n ¸ j=1 (−1) k+j a kj det A(k[j) . Remark 2.6.9 1. Let u t = (u 1 , u 2 ) and v t = (v 1 , v 2 ) be two vectors in R 2 . Then consider the par- allelogram, PQRS, formed by the vertices ¦P = (0, 0) t , Q = u, S = v, R = u +v¦. We Claim: Area (PQRS) = det ¸ u 1 v 1 u 2 v 2 ¸ = [u 1 v 2 −u 2 v 1 [. Recall that the dot product, u • v = u 1 v 1 + u 2 v 2 , and u • u = (u 2 1 +u 2 2 ), is the length of the vector u. 
We denote the length by (u). With the above notation, if θ is the angle between the vectors u and v, then cos(θ) = u • v (u)(v) . Which tells us, Area(PQRS) = (u)(v) sin(θ) = (u)(v) 1 − u • v (u)(v) 2 = (u) 2 +(v) 2 −(u • v) 2 = (u 1 v 2 −u 2 v 1 ) 2 = [u 1 v 2 −u 2 v 1 [. Hence, the claim holds. That is, in R 2 , the determinant is ± times the area of the parallelogram. 44 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS 2. Let u = (u 1 , u 2 , u 3 ), v = (v 1 , v 2 , v 3 ) and w = (w 1 , w 2 , w 3 ) be three elements of R 3 . Recall that the cross product of two vectors in R 3 is, u v = (u 2 v 3 −u 3 v 2 , u 3 v 1 −u 1 v 3 , u 1 v 2 −u 2 v 1 ). Note here that if A = [u t , v t , w t ], then det(A) = u 1 v 1 w 1 u 2 v 2 w 2 u 3 v 3 w 3 = u • (v w) = v • (wu) = w• (u v). Let P be the parallelopiped formed with (0, 0, 0) as a vertex and the vectors u, v, w as adjacent vertices. Then observe that u v is a vector perpendicular to the plane that contains the paral- lelogram formed by the vectors u and v. So, to compute the volume of the parallelopiped P, we need to look at cos(θ), where θ is the angle between the vector w and the normal vector to the parallelogram formed by u and v. So, volume (P) = [w• (u v)[. Hence, [ det(A)[ = volume (P). 3. Let u 1 , u 2 , . . . , u n ∈ R n×1 and let A = [u 1 , u 2 , . . . , u n ] be an n n matrix. Then the following properties of det(A) also hold for the volume of an n-dimensional parallelopiped formed with 0 ∈ R n×1 as one vertex and the vectors u 1 , u 2 , . . . , u n (a) If u 1 = (1, 0, . . . , 0) t , u 2 = (0, 1, 0, . . . , 0) t , . . . , and u n = (0, . . . , 0, 1) t , then det(A) = 1. Also, volume of a unit n-dimensional cube is 1. (b) If we replace the vector u i by αu i , for some α ∈ R, then the determinant of the new matrix is α det(A). This is also true for the volume, as the original volume gets multiplied by α. (c) If u 1 = u i for some i, 2 ≤ i ≤ n, then the vectors u 1 , u 2 , . . . , u n will give rise to an (n − 1)- dimensional parallelopiped. So, this parallelopiped lies on an (n−1)-dimensional hyperplane. Thus, its n-dimensional volume will be zero. Also, [ det(A)[ = [0[ = 0. In general, for any n n matrix A, it can be proved that [ det(A)[ is indeed equal to the volume of the n-dimensional parallelopiped. The actual proof is beyond the scope of this book. Recall that for a square matrix A, the notations A ij and C ij = (−1) i+j A ij were respectively used to denote the (i, j) th minor and the (i, j) th cofactor of A. Definition 2.6.10 (Adjoint of a Matrix) Let A be an n n matrix. The matrix B = [b ij ] with b ij = C ji , for 1 ≤ i, j ≤ n is called the Adjoint of A, denoted Adj(A). Example 2.6.11 Let A = 1 2 3 2 3 1 1 2 2 ¸ ¸ 4 2 −7 −3 −1 5 1 0 −1 ¸ ¸ ¸; as C 11 = (−1) 1+1 A 11 = 4, C 12 = (−1) 1+2 A 12 = −3, C 13 = (−1) 1+3 A 13 = 1, and so on. Theorem 2.6.12 Let A be an n n matrix. Then 1. for 1 ≤ i ≤ n, n ¸ j=1 a ij C ij = n ¸ j=1 a ij (−1) i+j A ij = det(A), 2.6. DETERMINANT 45 2. for i = , n ¸ j=1 a ij C j = n ¸ j=1 a ij (−1) +j A j = 0, and n . Thus, det(A) = 0 ⇒A −1 = 1 det(A) Proof. Let B = [b ij ] be a square matrix with • the th row of B as the i th row of A, • the other rows of B are the same as that of A. By the construction of B, two rows (i th and th ) are equal. By Part 5 of Lemma 2.6.7, det(B) = 0. By construction again, det A([j) = det B([j) for 1 ≤ j ≤ n. Thus, by Remark 2.6.8, we have 0 = det(B) = n ¸ j=1 (−1) +j b j det B([j) = n ¸ j=1 (−1) +j a ij det B([j) = n ¸ j=1 (−1) +j a ij det A([j) = n ¸ j=1 a ij C j . 
Now, A ij = n ¸ k=1 a ik kj = n ¸ k=1 a ik C jk = 0 if i = j det(A) if i = j n . Since, det(A) = 0, A 1 det(A) n . Therefore, A has a right inverse. Hence, by Theorem 2.5.9 A has an inverse and A −1 = 1 det(A) Example 2.6.13 Let A = 1 −1 0 0 1 1 1 2 1 ¸ ¸ ¸. Then −1 1 −1 1 1 −1 −1 −3 1 ¸ ¸ ¸ and det(A) = −2. By Theorem 2.6.12.3, A −1 = 1/2 −1/2 1/2 −1/2 −1/2 1/2 1/2 3/2 −1/2 ¸ ¸ ¸. The next corollary is an easy consequence of Theorem 2.6.12 (recall Theorem 2.5.9). Corollary 2.6.14 If A is a non-singular matrix, then A = det(A)I n and n ¸ i=1 a ij C ik = det(A) if j = k 0 if j = k . 46 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS Theorem 2.6.15 Let A and B be square matrices of order n. Then det(AB) = det(A) det(B). Proof. Step 1. Let det(A) = 0. This means, A is invertible. Therefore, either A is an elementary matrix or is a product of elementary matrices (see Theorem 2.5.8). So, let E 1 , E 2 , . . . , E k be elementary matrices such that A = E 1 E 2 E k . Then, by using Parts 1, 2 and 4 of Lemma 2.6.7 repeatedly, we get det(AB) = det(E 1 E 2 E k B) = det(E 1 ) det(E 2 E k B) = det(E 1 ) det(E 2 ) det(E 3 E k B) = det(E 1 E 2 ) det(E 3 E k B) = . . . = det(E 1 E 2 E k ) det(B) = det(A) det(B). Thus, we get the required result in case A is non-singular. Step 2. Suppose det(A) = 0. Then A is not invertible. Hence, there exists an invertible matrix P such that PA = C, where C = ¸ C 1 0 ¸ . So, A = P −1 C, and therefore det(AB) = det((P −1 C)B) = det(P −1 (CB)) = det P −1 ¸ C 1 B 0 ¸ = det(P −1 ) det ¸ C 1 B 0 ¸ as P −1 is non-singular = det(P) 0 = 0 = 0 det(B) = det(A) det(B). Thus, the proof of the theorem is complete. Corollary 2.6.16 Let A be a square matrix. Then A is non-singular if and only if A has an inverse. Proof. Suppose A is non-singular. Then det(A) = 0 and therefore, A −1 = 1 det(A) has an inverse. Suppose A has an inverse. Then there exists a matrix B such that AB = I = BA. Taking determinant of both sides, we get det(A) det(B) = det(AB) = det(I) = 1. This implies that det(A) = 0. Thus, A is non-singular. Theorem 2.6.17 Let A be a square matrix. Then det(A) = det(A t ). Proof. If A is a non-singular Corollary 2.6.14 gives det(A) = det(A t ). If A is singular, then det(A) = 0. Hence, by Corollary 2.6.16, A doesn’t have an inverse. There- fore, A t also doesn’t have an inverse (for if A t has an inverse then A −1 = (A t ) −1 t ). Thus again by Corollary 2.6.16, det(A t ) = 0. Therefore, we again have det(A) = 0 = det(A t ). Hence, we have det(A) = det(A t ). 2.6. DETERMINANT 47 2.6.2 Cramer’s Rule Recall the following: • The linear system Ax = b has a unique solution for every b if and only if A −1 exists. • A has an inverse if and only if det(A) = 0. Thus, Ax = b has a unique solution for every b if and only if det(A) = 0. The following theorem gives a direct method of finding the solution of the linear system Ax = b when det(A) = 0. Theorem 2.6.18 (Cramer’s Rule) Let Ax = b be a linear system with n equations in n unknowns. If det(A) = 0, then the unique solution to this system is x j = det(A j ) det(A) , for j = 1, 2, . . . , n, where A j is the matrix obtained from A by replacing the jth column of A by the column vector b. Proof. Since det(A) = 0, A −1 = 1 det(A) Adj(A). Thus, the linear system Ax = b has the solution x = 1 det(A) j , the jth coordinate of x is given by x j = b 1 C 1j +b 2 C 2j + +b n C nj det(A) = det(A j ) det(A) . The theorem implies that x 1 = 1 det(A) b 1 a 12 a 1n b 2 a 22 a 2n . . . . . . . . . . . . 
b n a n2 a nn , and in general x j = 1 det(A) a 11 a 1j−1 b 1 a 1j+1 a 1n a 12 a 2j−1 b 2 a 2j+1 a 2n . . . . . . . . . . . . . . . . . . . . . a 1n a nj−1 b n a nj+1 a nn for j = 2, 3, . . . , n. Example 2.6.19 Suppose that A = 1 2 3 2 3 1 1 2 2 ¸ ¸ ¸ and b = 1 1 1 ¸ ¸ ¸. Use Cramer’s rule to find a vector x such that Ax = b. Solution: Check that det(A) = 1. Therefore x 1 = 1 2 3 1 3 1 1 2 2 = −1, x 2 = 1 1 3 2 1 1 1 1 2 = 1, and x 3 = 1 2 1 2 3 1 1 2 1 = 0. That is, x t = (−1, 1, 0). 48 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS 2.7 Miscellaneous Exercises Exercise 2.7.1 1. Let A be an orthogonal matrix. Show that det A = ±1. 2. If A and B are two n n non-singular matrices, are the matrices A + B and A − B non-singular? 3. For an n n matrix A, prove that the following conditions are equivalent: (a) A is singular (A −1 doesn’t exist). (b) rank(A) = n. (c) det(A) = 0. (d) A is not row-equivalent to I n , the identity matrix of order n. (e) Ax = 0 has a non-trivial solution for x. (f) Ax = b doesn’t have a unique solution, i.e., it has no solutions or it has infinitely many solutions. 4. Let A = 2 0 6 0 4 5 3 2 2 7 2 5 7 5 5 2 0 9 2 7 7 8 4 2 1 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ . We know that the numbers 20604, 53227, 25755, 20927 and 78421 are all divisible by 17. Does this imply 17 divides det(A)? 5. Let A = [a ij ] n×n where a ij = x j−1 i . Show that det(A) = ¸ 1≤i<j≤n (x j −x i ). [The matrix A is usually called the Van-dermonde matrix.] 6. Let A = [a ij ] with a ij = max¦i, j¦ be an n n matrix. Compute det A. 7. Let A = [a ij ] with a ij = 1/(i +j) be an n n matrix. Show that A is invertible. 8. Solve the following system of equations by Cramer’s rule. i) x +y +z −w = 1, x +y −z +w = 2, 2x +y +z −w = 7, x +y +z +w = 3. ii) x −y +z −w = 1, x +y −z +w = 2, 2x +y −z −w = 7, x −y −z +w = 3. 9. Suppose A = [a ij ] and B = [b ij ] are two n n matrices such that b ij = p i−j a ij for 1 ≤ i, j ≤ n for some non-zero real number p. Then compute det(B) in terms of det(A). 10. The position of an element a ij of a determinant is called even or odd according as i +j is even or odd. Show that (a) If all the entries in odd positions are multiplied with −1 then the value of the determinant doesn’t change. (b) If all entries in even positions are multiplied with −1 then the determinant i. does not change if the matrix is of even order. ii. is multiplied by −1 if the matrix is of odd order. 11. Let A be an nn Hermitian matrix, that is, A = A. Show that det A is a real number. [A is a matrix with complex entries and A = A t .] 12. Let A be an n n matrix. Then show that A is invertible ⇐⇒ Adj(A) is invertible. 2.7. MISCELLANEOUS EXERCISES 49 14. Let P = ¸ A B C D ¸ be a rectangular matrix with A a square matrix of order n and [A[ = 0. Then show that rank (P) = n if and only if D = CA −1 B. 50 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS Chapter 3 Finite Dimensional Vector Spaces Consider the problem of finding the set of points of intersection of the two planes 2x + 3y + z + u = 0 and 3x +y + 2z +u = 0. Let V be the set of points of intersection of the two planes. Then V has the following properties: 1. The point (0, 0, 0, 0) is an element of V. 2. For the points (−1, 0, 1, 1) and (−5, 1, 7, 0) which belong to V ; the point (−6, 1, 8, 1) = (−1, 0, 1, 1)+ (−5, 1, 7, 0) ∈ V. 3. Let α ∈ R. Then the point α(−1, 0, 1, 1) = (−α, 0, α, α) also belongs to V. Similarly, for an m n real matrix A, consider the set V, of solutions of the homogeneous linear system Ax = 0. This set satisfies the following properties: 1. 
If Ax = 0 and Ay = 0, then x, y ∈ V. Then x +y ∈ V as A(x +y) = Ax+Ay = 0 +0 = 0. Also, x +y = y +x. 2. It is clear that if x, y, z ∈ V then (x +y) +z = x + (y +z). 3. The vector 0 ∈ V as A0 = 0. 4. If Ax = 0 then A(−x) = −Ax = 0. Hence, −x ∈ V. 5. Let α ∈ R and x ∈ V. Then αx ∈ V as A(αx) = αAx = 0. Thus we are lead to the following. 3.1 Vector Spaces 3.1.1 Definition Definition 3.1.1 (Vector Space) A vector space over F, denoted V (F), is a non-empty set, satisfying the following axioms: 1. Vector Addition: To every pair u, v ∈ V there corresponds a unique element u⊕v in V such that (a) u ⊕v = v ⊕u (Commutative law). (b) (u ⊕v) ⊕w = u ⊕(v ⊕w) (Associative law). (c) There is a unique element 0 in V (the zero vector) such that u ⊕0 = u, for every u ∈ V (called 51 52 CHAPTER 3. FINITE DIMENSIONAL VECTOR SPACES (d) For every u ∈ V there is a unique element −u ∈ V such that u⊕(−u) = 0 (called the additive inverse). 2. Scalar Multiplication: For each u ∈ V and α ∈ F, there corresponds a unique element α u in V such that (a) α (β u) = (αβ) u for every α, β ∈ F and u ∈ V. (b) 1 u = u for every u ∈ V, where 1 ∈ R. 3. Distributive Laws: relating vector addition with scalar multiplication For any α, β ∈ F and u, v ∈ V, the following distributive laws hold: (a) α (u ⊕v) = (α u) ⊕ (α v). (b) (α +β) u = (α u) ⊕ (β u). Note: the number 0 is the element of F whereas 0 is the zero vector. Remark 3.1.2 The elements of F are called scalars, and that of V are called vectors. If F = R, the vector space is called a real vector space. If F = C, the vector space is called a complex vector space. We may sometimes write V for a vector space if F is understood from the context. Some interesting consequences of Definition 3.1.1 is the following useful result. Intuitively, these results seem to be obvious but for better understanding of the axioms it is desirable to go through the proof. Theorem 3.1.3 Let V be a vector space over F. Then 1. u ⊕v = u implies v = 0. 2. α u = 0 if and only if either u is the zero vector or α = 0. 3. (−1) u = −u for every u ∈ V. Proof. Proof of Part 1. For u ∈ V, by Axiom 1d there exists −u ∈ V such that −u ⊕u = 0. Hence, u ⊕v = u is equivalent to −u ⊕(u ⊕v) = −u ⊕u ⇐⇒ (−u ⊕u) ⊕v = 0 ⇐⇒0 ⊕v = 0 ⇐⇒v = 0. Proof of Part 2. As 0 = 0 ⊕0, using the distributive law, we have α 0 = α (0 ⊕0) = (α 0) ⊕ (α 0). Thus, for any α ∈ F, the first part implies α 0 = 0. In the same way, 0 u = (0 + 0) u = (0 u) ⊕(0 u). Hence, using the first part, one has 0 u = 0 for any u ∈ V. Now suppose α u = 0. If α = 0 then the proof is over. Therefore, let us assume α = 0 (note that α is a real or complex number, hence 1 α exists and 0 = 1 α 0 = 1 α (α u) = ( 1 α α) u = 1 u = u 3.1. VECTOR SPACES 53 as 1 u = u for every vector u ∈ V. Thus we have shown that if α = 0 and α u = 0 then u = 0. Proof of Part 3. We have 0 = 0u = (1 + (−1))u = u + (−1)u and hence (−1)u = −u. 3.1.2 Examples Example 3.1.4 1. The set R of real numbers, with the usual addition and multiplication (i.e., ⊕ ≡ + and ≡ ) forms a vector space over R. 2. Consider the set R 2 = ¦(x 1 , x 2 ) : x 1 , x 2 ∈ R¦. For x 1 , x 2 , y 1 , y 2 ∈ R and α ∈ R, define, (x 1 , x 2 ) ⊕(y 1 , y 2 ) = (x 1 +y 1 , x 2 +y 2 ) and α (x 1 , x 2 ) = (αx 1 , αx 2 ). Then R 2 is a real vector space. 3. Let R n = ¦(a 1 , a 2 , . . . , a n ) : a i ∈ R, 1 ≤ i ≤ n¦, be the set of n-tuples of real numbers. For u = (a 1 , . . . , a n ), v = (b 1 , . . . , b n ) in V and α ∈ R, we define u ⊕v = (a 1 +b 1 , . . . , a n +b n ) and α u = (αa 1 , . . . 
, αa n ) (called component wise or coordinate wise operations). Then V is a real vector space with addition and scalar multiplication defined as above. This vector space is denoted by R n , called the real vector space of n-tuples. 4. Let V = R + (the set of positive real numbers). This is not a vector space under usual operations of addition and scalar multiplication (why?). We now define a new vector addition and scalar multiplication as v 1 ⊕v 2 = v 1 v 2 and α v = v α for all v 1 , v 2 , v ∈ R + and α ∈ R. Then R + is a real vector space with 1 as the additive identity. 5. Let V = R 2 . Define (x 1 , x 2 ) ⊕ (y 1 , y 2 ) = (x 1 + y 1 + 1, x 2 + y 2 − 3), α (x 1 , x 2 ) = (αx 1 + α − 1, αx 2 − 3α + 3) for (x 1 , x 2 ), (y 1 , y 2 ) ∈ R 2 and α ∈ R. Then it can be easily verified that the vector (−1, 3) is the additive identity and V is indeed a real vector space. Recall −1 is denoted i. 6. Consider the set C = ¦x +iy : x, y ∈ R¦ of complex numbers. (a) For x 1 +iy 1 , x 2 +iy 2 ∈ C and α ∈ R, define, (x 1 +iy 1 ) ⊕(x 2 +iy 2 ) = (x 1 +x 2 ) +i(y 1 +y 2 ) and α (x 1 +iy 1 ) = (αx 1 ) +i(αy 1 ). Then C is a real vector space. (b) For x 1 +iy 1 , x 2 +iy 2 ∈ C and α +iβ ∈ C, define, (x 1 +iy 1 ) ⊕ (x 2 +iy 2 ) = (x 1 +x 2 ) +i(y 1 +y 2 ) and (α +iβ) (x 1 +iy 1 ) = (αx 1 −βy 1 ) +i(αy 1 +βx 1 ). Then C forms a complex vector space. 54 CHAPTER 3. FINITE DIMENSIONAL VECTOR SPACES 7. Consider the set C n = ¦(z 1 , z 2 , . . . , z n ) : z i ∈ C for 1 ≤ i ≤ n¦. For (z 1 , . . . , z n ), (w 1 , . . . , w n ) ∈ C n and α ∈ F, define, (z 1 , . . . , z n ) ⊕(w 1 , . . . , w n ) = (z 1 +w 1 , . . . , z n +w n ) and α (z 1 , . . . , z n ) = (αz 1 , . . . , αz n ). (a) If the set F is the set C of complex numbers, then C n is a complex vector space having n-tuple of complex numbers as its vectors. (b) If the set F is the set R of real numbers, then C n is a real vector space having n-tuple of complex numbers as its vectors. Remark 3.1.5 In Example 7a, the scalars are Complex numbers and hence i(1, 0) = (i, 0). Whereas, in Example 7b, the scalars are Real Numbers and hence we cannot write i(1, 0) = (i, 0). 8. Fix a positive integer n and let M n (R) denote the set of all n n matrices with real entries. Then M n (R) is a real vector space with vector addition and scalar multiplication defined by A ⊕B = [a ij ] ⊕[b ij ] = [a ij +b ij ], α A = α [a ij ] = [αa ij ]. 9. Fix a positive integer n. Consider the set, { n (R), of all polynomials of degree ≤ n with coefficients from R in the indeterminate x. Algebraically, { n (R) = ¦a 0 +a 1 x +a 2 x 2 + +a n x n : a i ∈ R, 0 ≤ i ≤ n¦. Let f(x), g(x) ∈ { n (R). Then f(x) = a 0 + a 1 x + a 2 x 2 + + a n x n and g(x) = b 0 + b 1 x + b 2 x 2 + +b n x n for some a i , b i ∈ R, 0 ≤ i ≤ n. It can be verified that { n (R) is a real vector space with the addition and scalar multiplication defined by: f(x) ⊕g(x) = (a 0 +b 0 ) + (a 1 +b 1 )x + + (a n +b n )x n , and α f(x) = αa 0 +αa 1 x + +αa n x n for α ∈ R. 10. Consider the set {(R), of all polynomials with real coefficients. Let f(x), g(x) ∈ {(R). Observe that a polynomial of the form a 0 + a 1 x + + a m x m can be written as a 0 + a 1 x + + a m x m + 0 x m+1 + + 0 x p for any p > m. Hence, we can assume f(x) = a 0 +a 1 x + a 2 x 2 + +a p x p and g(x) = b 0 +b 1 x + b 2 x 2 + + b p x p for some a i , b i ∈ R, 0 ≤ i ≤ p, for some large positive integer p. 
We now define the vector addition and scalar multiplication as f(x) ⊕g(x) = (a 0 +b 0 ) + (a 1 +b 1 )x + + (a p +b p )x p , and α f(x) = αa 0 +αa 1 x + +αa p x p for α ∈ R. Then {(R) forms a real vector space. 11. Let C([−1, 1]) be the set of all real valued continuous functions on the interval [−1, 1]. For f, g ∈ C([−1, 1]) and α ∈ R, define (f ⊕g)(x) = f(x) +g(x), and (α f)(x) = αf(x), for all x ∈ [−1, 1]. Then C([−1, 1]) forms a real vector space. The operations defined above are called point wise 3.1. VECTOR SPACES 55 12. Let V and W be real vector spaces with binary operations (+, •) and (⊕, ), respectively. Consider the following operations on the set V W : for (x 1 , y 1 ), (x 2 , y 2 ) ∈ V W and α ∈ R, define (x 1 , y 1 ) ⊕ (x 2 , y 2 ) = (x 1 +x 2 , y 1 ⊕y 2 ), and α ◦ (x 1 , y 1 ) = (α • x 1 , α y 1 ). On the right hand side, we write x 1 + x 2 to mean the addition in V, while y 1 ⊕ y 2 W. Similarly, α • x 1 and α y 1 come from scalar multiplication in V and W, respectively. With the above definitions, V W also forms a real vector space. From now on, we will use ‘u +v’ in place of ‘u ⊕v’ and ‘α u or αu’ in place of ‘α u’. 3.1.3 Subspaces Definition 3.1.6 (Vector Subspace) Let S be a non-empty subset of V. S(F) is said to be a subspace of V (F) if αu+βv ∈ S whenever α, β ∈ F and u, v ∈ S; where the vector addition and scalar multiplication are the same as that of V (F). Remark 3.1.7 Any subspace is a vector space in its own right with respect to the vector addition and scalar multiplication that is defined for V (F). Example 3.1.8 1. Let V (F) be a vector space. Then (a) S = ¦0¦, the set consisting of the zero vector 0, (b) S = V are vector subspaces of V. These are called trivial subspaces. 2. Let S = ¦(x, y, z) ∈ R 3 : x + y − z = 0¦. Then S is a subspace of R 3 . (S is a plane in R 3 passing through the origin.) 3. Let S = ¦(x, y, z) ∈ R 3 : x + y + z = 3¦. Then S is not a subspace of R 3 . (S is again a plane in R 3 but it doesn’t pass through the origin.) 4. Let S = ¦(x, y, z) ∈ R 3 : z = x¦. Then S is a subspace of R 3 . 5. The vector space { n (R) is a subspace of the vector space {(R). Exercise 3.1.9 1. Which of the following are correct statements? (a) Let S = ¦(x, y, z) ∈ R 3 : z = x 2 ¦. Then S is a subspace of R 3 . (b) Let V (F) be a vector space. Let x ∈ V. Then the set ¦αx : α ∈ F¦ forms a vector subspace of V. (c) Let W = ¦f ∈ C([−1, 1]) : f(1/2) = 0¦. Then W is a subspace of the real vector space, C([−1, 1]). 2. Which of the following are subspaces of R n (R)? (a) ¦(x 1 , x 2 , . . . , x n ) : x 1 ≥ 0¦. (b) ¦(x 1 , x 2 , . . . , x n ) : x 1 + 2x 2 = 4x 3 ¦. (c) ¦(x 1 , x 2 , . . . , x n ) : x 1 is rational ¦. (d) ¦(x 1 , x 2 , . . . , x n ) : x 1 = x 2 3 ¦. 56 CHAPTER 3. FINITE DIMENSIONAL VECTOR SPACES (e) ¦(x 1 , x 2 , . . . , x n ) : either x 1 or x 2 or both is0¦. (f) ¦(x 1 , x 2 , . . . , x n ) : [x 1 [ ≤ 1¦. 3. Which of the following are subspaces of i)C n (R) ii)C n (C)? (a) ¦(z 1 , z 2 , . . . , z n ) : z 1 is real ¦. (b) ¦(z 1 , z 2 , . . . , z n ) : z 1 +z 2 = z 3 ¦. (c) ¦(z 1 , z 2 , . . . , z n ) :[ z 1 [=[ z 2 [¦. 3.1.4 Linear Combinations Definition 3.1.10 (Linear Span) Let V (F) be a vector space and let S = ¦u 1 , u 2 , . . . , u n ¦ be a non-empty subset of V. The linear span of S is the set defined by L(S) = ¦α 1 u 1 2 u 2 + + α n u n : α i ∈ F, 1 ≤ i ≤ n¦ If S is an empty set we define L(S) = ¦0¦. Example 3.1.11 1. 
Note that (4, 5, 5) is a linear combination of (1, 0, 0), (1, 1, 0), and (1, 1, 1) as (4, 5, 5) = 5(1, 1, 1) −1(1, 0, 0) + 0(1, 1, 0). For each vector, the linear combination in terms of the vectors (1, 0, 0), (1, 1, 0), and (1, 1, 1) is unique. 2. Is (4, 5, 5) a linear combination of (1, 2, 3), (−1, 1, 4) and (3, 3, 2)? Solution: We want to find α 1 , α 2 , α 3 ∈ R such that α 1 (1, 2, 3) +α 2 (−1, 1, 4) +α 3 (3, 3, 2) = (4, 5, 5). (3.1.1) Check that 3(1, 2, 3)+(−1)(−1, 1, 4)+0(3, 3, 2) = (4, 5, 5). Also, in this case, the vector (4, 5, 5) does not have a unique expression as linear combination of vectors (1, 2, 3), (−1, 1, 4) and (3, 3, 2). 3. Verify that (4, 5, 5) is not a linear combination of the vectors (1, 2, 1) and (1, 1, 0)? 4. The linear span of S = ¦(1, 1, 1), (2, 1, 3)¦ over R is L(S) = ¦α(1, 1, 1) +β(2, 1, 3) : α, β ∈ R¦ = ¦(α + 2β, α +β, α + 3β) : α, β ∈ R¦ = ¦(x, y, z) ∈ R 3 : 2x −y = z¦. as 2(α + 2β) −(α +β) = α + 3β, and if z = 2x −y, take α = 2y −x and β = x −y. Lemma 3.1.12 (Linear Span is a subspace) Let V (F) be a vector space and let S be a non-empty subset of V. Then L(S) is a subspace of V (F). Proof. By definition, S ⊂ L(S) and hence L(S) is non-empty subset of V. Let u, v ∈ L(S). Then, for 1 ≤ i ≤ n there exist vectors w i ∈ S, and scalars α i , β i ∈ F such that u = α 1 w 1 + α 2 w 2 + + α n w n and v = β 1 w 1 2 w 2 + +β n w n . Hence, u +v = (α 1 +β)w 1 + + (α n n )w n ∈ L(S). Thus, L(S) is a vector subspace of V (F). 3.1. VECTOR SPACES 57 Remark 3.1.13 Let V (F) be a vector space and W ⊂ V be a subspace. If S ⊂ W, then L(S) ⊂ W is a subspace of W as W is a vector space in its own right. Theorem 3.1.14 Let S be a non-empty subset of a vector space V. Then L(S) is the smallest subspace of V containing S. Proof. For every u ∈ S, u = 1.u ∈ L(S) and therefore, S ⊆ L(S). To show L(S) is the smallest subspace of V containing S, consider any subspace W of V containing S. Then by Proposition 3.1.13, L(S) ⊆ W and hence the result follows. Definition 3.1.15 Let A be an m n matrix with real entries. Then using the rows a t 1 , a t 2 , . . . , a t m ∈ R n and columns b 1 , b 2 , . . . , b n ∈ R m , we define 1. RowSpace(A) = L(a 1 , a 2 , . . . , a m ), 2. ColumnSpace(A) = L(b 1 , b 2 , . . . , b n ), 3. NullSpace(A), denoted ^(A) as ¦x t ∈ R n : Ax = 0¦. 4. Range(A), denoted Im (A) = ¦y : Ax = y for some x t ∈ R n ¦. Note that the “column space” of a matrix A consists of all b such that Ax = b has a solution. Hence, ColumnSpace(A) = Range(A). Lemma 3.1.16 Let A be a real m n matrix. Suppose B = EA for some elementary matrix E. Then Row Space(A) = Row Space(B). Proof. We prove the result for the elementary matrix E ij (c), where c = 0 and i < j. Let a t 1 , a t 2 , . . . , a t m be the rows of the matrix A. Then B = E ij (c)A gives us Row Space(B) = L(a 1 , . . . , a i−1 , a i +ca j , . . . , a m ) = ¦α 1 a 1 + +α i−1 a i−1 i (a i +ca j ) + m a m : α ∈ R, 1 ≤ ≤ m¦ = m ¸ =1 α a i ca j : α ∈ R, 1 ≤ ≤ m ¸ = m ¸ =1 β a : β ∈ R, 1 ≤ ≤ m ¸ = L(a 1 , . . . , a i−1 , a i , . . . , a m ) = Row Space(A) Theorem 3.1.17 Let A be an mn matrix with real entries. Then 1. ^(A) is a subspace of R n ; 2. the non-zero row vectors of a matrix in row-reduced form, forms a basis for the row-space. Hence dim( Row Space(A)) = row rank of (A). 58 CHAPTER 3. FINITE DIMENSIONAL VECTOR SPACES Proof. Part 1) can be easily proved. Let A be an mn matrix. For part 2), let D be the row-reduced form of A with non-zero rows d t 1 , d t 2 , . . . , d t r . 
Then B = E k E k−1 E 2 E 1 A for some elementary matrices E 1 , E 2 , . . . , E k . Then, a repeated application of Lemma 3.1.16 implies Row Space(A) = Row Space(B). That is, if the rows of the matrix A are a t 1 , a t 2 , . . . , a t m , then L(a 1 , a 2 , . . . , a m ) = L(b 1 , b 2 , . . . , b r ). Hence the required result follows. Exercise 3.1.18 1. Show that any two row-equivalent matrices have the same row space. Give examples to show that the column space of two row-equivalent matrices need not be same. 2. Find all the vector subspaces of R 2 . 3. Let P and Q be two subspaces of a vector space V. Show that P ∩ Q is a subspace of V. Also show that P ∪ Q need not be a subspace of V. When is P ∪ Q a subspace of V ? 4. Let P and Q be two subspaces of a vector space V. Define P + Q = ¦u + v : u ∈ P, v ∈ Q¦. Show that P +Q is a subspace of V. Also show that L(P ∪ Q) = P +Q. 5. Let S = ¦x 1 , x 2 , x 3 , x 4 ¦ where x 1 = (1, 0, 0, 0), x 2 = (1, 1, 0, 0), x 3 = (1, 2, 0, 0), x 4 = (1, 1, 1, 0). Determine all x i such that L(S) = L(S ` ¦x i ¦). 6. Let C([−1, 1]) be the set of all continuous functions on the interval [−1, 1] (cf. Example 3.1.4.11). Let W 1 = ¦f ∈ C([−1, 1]) : f(0.2) = 0¦, and W 2 = ¦f ∈ C([−1, 1]) : f ( 1 4 )exists ¦. Are W 1 , W 2 subspaces of C([−1, 1])? 7. Let V = ¦(x, y) : x, y ∈ R¦ over R. Define (x, y) ⊕ (x 1 , y 1 ) = (x + x 1 , 0) and α (x, y) = (αx, 0). Show that V is not a vector space over R. 8. Recall that M n (R) is the real vector space of all n n real matrices. Prove that the following subsets are subspaces of M n (R). (a) sl n = ¦A ∈ M n (R) : trace(A) = 0¦ (b) Sym n = ¦A ∈ M n (R) : A = A t ¦ (c) Skew n = ¦A ∈ M n (R) : A +A t = 0¦ 9. Let V = R. Define x⊕y = x−y and αx = −αx. Which vector space axioms are not satisfied here? In this section, we saw that a vector space has infinite number of vectors. Hence, one can start with any finite collection of vectors and obtain their span. It means that any vector space contains infinite number of other vector subspaces. Therefore, the following questions arise: 1. What are the conditions under which, the linear span of two distinct sets the same? 2. Is it possible to find/choose vectors so that the linear span of the chosen vectors is the whole vector space itself? 3. Suppose we are able to choose certain vectors whose linear span is the whole space. Can we find the minimum number of such vectors? We try to answer these questions in the subsequent sections. 3.2. LINEAR INDEPENDENCE 59 3.2 Linear Independence Definition 3.2.1 (Linear Independence and Dependence) Let S = ¦u 1 , u 2 , . . . , u m ¦ be any non-empty subset of V. If there exist some non-zero α i ’s 1 ≤ i ≤ m, such that α 1 u 1 2 u 2 + +α m u m = 0, then the set S is called a linearly dependent set. Otherwise, the set S is called linearly independent. Example 3.2.2 1. Let S = ¦(1, 2, 1), (2, 1, 4), (3, 3, 5)¦. Then check that 1(1, 2, 1)+1(2, 1, 4)+(−1)(3, 3, 5) = (0, 0, 0). Since α 1 = 1, α 2 = 1 and α 3 = −1 is a solution of (3.2.1), so the set S is a linearly dependent subset of R 3 . 2. Let S = ¦(1, 1, 1), (1, 1, 0), (1, 0, 1)¦. Suppose there exists α, β, γ ∈ R such that α(1, 1, 1)+β(1, 1, 0)+ γ(1, 0, 1) = (0, 0, 0). Then check that in this case we necessarily have α = β = γ = 0 which shows that the set S = ¦(1, 1, 1), (1, 1, 0), (1, 0, 1)¦ is a linearly independent subset of R 3 . In other words, if S = ¦u 1 , u 2 , . . . 
, u m ¦ is a non-empty subset of a vector space V, then to check whether the set S is linearly dependent or independent, one needs to consider the equation α 1 u 1 2 u 2 + +α m u m = 0. (3.2.1) In case α 1 = α 2 = = α m = 0 is the only solution of (3.2.1), the set S becomes a linearly independent subset of V. Otherwise, the set S becomes a linearly dependent subset of V. Proposition 3.2.3 Let V be a vector space. 1. Then the zero-vector cannot belong to a linearly independent set. 2. If S is a linearly independent subset of V, then every subset of S is also linearly independent. 3. If S is a linearly dependent subset of V then every set containing S is also linearly dependent. Proof. We give the proof of the first part. The reader is required to supply the proof of other parts. Let S = ¦0 = u 1 , u 2 , . . . , u n ¦ be a set consisting of the zero vector. Then for any γ = o, γu 1 + ou 2 + +0u n = 0. Hence, for the system α 1 u 1 2 u 2 + +α m u m = 0, we have a non-zero solution α 1 = γ and o = α 2 = = α n . Therefore, the set S is linearly dependent. Theorem 3.2.4 Let ¦v 1 , v 2 , . . . , v p ¦ be a linearly independent subset of a vector space V. Suppose there exists a vector v p+1 ∈ V, such that the set ¦v 1 , v 2 , . . . , v p , v p+1 ¦ is linearly dependent, then v p+1 is a linear combination of v 1 , v 2 , . . . , v p . Proof. Since the set ¦v 1 , v 2 , . . . , v p , v p+1 ¦ is linearly dependent, there exist scalars α 1 , α 2 , . . . , α p+1 , not all zero such that α 1 v 1 2 v 2 + +α p v p p+1 v p+1 = 0. (3.2.2) Claim: α p+1 = 0. Let if possible α p+1 = 0. Then equation (3.2.2) gives α 1 v 1 + α 2 v 2 + + α p v p = 0 with not all α i , 1 ≤ i ≤ p zero. Hence, by the definition of linear independence, the set ¦v 1 , v 2 , . . . , v p ¦ is linearly dependent which is contradictory to our hypothesis. Thus, α p+1 = 0 and we get v p+1 = − 1 α p+1 1 v 1 + +α p v p ). 60 CHAPTER 3. FINITE DIMENSIONAL VECTOR SPACES Note that α i ∈ F for every i, 1 ≤ i ≤ p +1 and hence − αi αp+1 ∈ F for 1 ≤ i ≤ p. Hence the result follows. We now state two important corollaries of the above theorem. We don’t give their proofs as they are easy consequence of the above theorem. Corollary 3.2.5 Let ¦u 1 , u 2 , . . . , u n ¦ be a linearly dependent subset of a vector space V. Then there exists a smallest k, 2 ≤ k ≤ n such that L(u 1 , u 2 , . . . , u k ) = L(u 1 , u 2 , . . . , u k−1 ). The next corollary follows immediately from Theorem 3.2.4 and Corollary 3.2.5. Corollary 3.2.6 Let ¦v 1 , v 2 , . . . , v p ¦ be a linearly independent subset of a vector space V. Suppose there exists a vector v ∈ V, such that v ∈ L(v 1 , v 2 , . . . , v p ). Then the set ¦v 1 , v 2 , . . . , v p , v¦ is also linearly independent subset of V. Exercise 3.2.7 1. Consider the vector space R 2 . Let u 1 = (1, 0). Find all choices for the vector u 2 such that the set ¦u 1 , u 2 ¦ is linear independent subset of R 2 . Does there exist choices for vectors u 2 and u 3 such that the set ¦u 1 , u 2 , u 3 ¦ is linearly independent subset of R 2 ? 2. If none of the elements appearing along the principal diagonal of a lower triangular matrix is zero, show that the row vectors are linearly independent in R n . The same is true for column vectors. 3. Let S = ¦(1, 1, 1, 1), (1, −1, 1, 2), (1, 1, −1, 1)¦ ⊂ R 4 . Determine whether or not the vector (1, 1, 2, 1) ∈ L(S)? 4. Show that S = ¦(1, 2, 3), (−2, 1, 1), (8, 6, 10)¦ is linearly dependent in R 3 . 5. Show that S = ¦(1, 0, 0), (1, 1, 0), (1, 1, 1)¦ is a linearly independent set in R 3 . 
In general if ¦f 1 , f 2 , f 3 ¦ is a linearly independent set then ¦f 1 , f 1 +f 2 , f 1 +f 2 +f 3 ¦ is also a linearly independent set. 6. In R 3 , give an example of 3 vectors u, v and w such that ¦u, v, w¦ is linearly dependent but any set of 2 vectors from u, v, w is linearly independent. 7. What is the maximum number of linearly independent vectors in R 3 ? 8. Show that any set of k vectors in R 3 is linearly dependent if k ≥ 4. 9. Is the set of vectors (1, 0), ( i, 0) linearly independent subset of C 2 (R)? 10. Under what conditions on α are the vectors (1 + α, 1 − α) and (α − 1, 1 + α) in C 2 (R) linearly independent? 11. Let u, v ∈ V and M be a subspace of V. Further, let K be the subspace spanned by M and u and H be the subspace spanned by M and v. Show that if v ∈ K and v ∈ M then u ∈ H. 3.3 Bases Definition 3.3.1 (Basis of a Vector Space) 1. A non-empty subset B of a vector space V is called a basis of V if (a) B is a linearly independent set, and (b) L(B) = V, i.e., every vector in V can be expressed as a linear combination of the elements of B. 3.3. BASES 61 2. A vector in B is called a basis vector. Remark 3.3.2 Let ¦v 1 , v 2 , . . . , v p ¦ be a basis of a vector space V (F). Then any v ∈ V is a unique linear combination of the basis vectors, v 1 , v 2 , . . . , v p . Observe that if there exists a v ∈ W such that v = α 1 v 1 2 v 2 + +α p v p and v = β 1 v 1 2 v 2 + p v p then 0 = v −v = (α 1 −β 1 )v 1 + (α 2 −β 2 )v 2 + + (α p −β p )v p . But then the set ¦v 1 , v 2 , . . . , v p ¦ is linearly independent and therefore the scalars α i −β i for 1 ≤ i ≤ p must all be equal to zero. Hence, for 1 ≤ i ≤ p, α i = β i and we have the uniqueness. By convention, the linear span of an empty set is ¦0¦. Hence, the empty set is a basis of the vector space ¦0¦. Example 3.3.3 1. Check that if V = ¦(x, y, 0) : x, y ∈ R¦ ⊂ R 3 , then B = ¦(1, 0, 0), (0, 1, 0)¦ or B = ¦(1, 0, 0), (1, 1, 0)¦ or B = ¦(2, 0, 0), (1, 3, 0)¦ or are bases of V. 2. For 1 ≤ i ≤ n, let e i = (0, . . . , 0, 1 .... i th place , 0, . . . , 0) ∈ R n . Then, the set B = ¦e 1 , e 2 , . . . , e n ¦ forms a basis of R n . This set is called the standard basis of R n . That is, if n = 3, then the set ¦(1, 0, 0), (0, 1, 0), (0, 0, 1)¦ forms an standard basis of R 3 . 3. Let V = ¦(x, y, z) : x+y−z = 0, x, y, z ∈ R¦ be a vector subspace of R 3 . Then S = ¦(1, 1, 2), (2, 1, 3), (1, 2, 3)¦ ⊂ V. It can be easily verified that the vector (3, 2, 5) ∈ V and (3, 2, 5) = (1, 1, 2) + (2, 1, 3) = 4(1, 1, 2) −(1, 2, 3). Then by Remark 3.3.2, S cannot be a basis of V. A basis of V can be obtained by the following method: The condition x +y −z = 0 is equivalent to z = x +y. we replace the value of z with x +y to get (x, y, z) = (x, y, x +y) = (x, 0, x) + (0, y, y) = x(1, 0, 1) +y(0, 1, 1). Hence, ¦(1, 0, 1), (0, 1, 1)¦ forms a basis of V. 4. Let V = ¦a + ib : a, b ∈ R¦ and F = C. That is, V is a complex vector space. Note that any element a +ib ∈ V can be written as a +ib = (a +ib)1. Hence, a basis of V is ¦1¦. 5. Let V = ¦a + ib : a, b ∈ R¦ and F = R. That is, V is a real vector space. Any element a + ib ∈ V is expressible as a 1 +b i. Hence a basis of V is ¦1, i¦. Observe that i is a vector in C. Also, i ∈ R and hence i (1 + 0 i) is not defined. 6. Recall the vector space {(R), the vector space of all polynomials with real coefficients. A basis of this vector space is the set ¦1, x, x 2 , . . . , x n , . . .¦. This basis has infinite number of vectors as the degree of the polynomial can be any positive integer. 
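The basis computations in the example above are easy to check by machine. The following sketch, which assumes NumPy is available and uses variable names of our own choosing, verifies that the set {(1, 0, 1), (0, 1, 1)} from item 3 is linearly independent and that the vector (3, 2, 5) of that item has a (necessarily unique, by Remark 3.3.2) expression as a linear combination of these two vectors.

```python
import numpy as np

# Columns are the candidate basis vectors (1, 0, 1) and (0, 1, 1) of
# V = {(x, y, z) : x + y - z = 0} from item 3 above.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Linear independence: a set of columns is independent iff the matrix
# they form has full column rank.
assert np.linalg.matrix_rank(B) == B.shape[1]

# The vector (3, 2, 5) lies in V (since 3 + 2 - 5 = 0); find its coordinates.
v = np.array([3.0, 2.0, 5.0])
coeffs, *_ = np.linalg.lstsq(B, v, rcond=None)
assert np.allclose(B @ coeffs, v)   # (3, 2, 5) is in the span of the columns
print(coeffs)                        # [3., 2.]  -- the unique coordinates
```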
Definition 3.3.4 (Finite Dimensional Vector Space) A vector space V is said to be finite dimensional if there exists a basis consisting of finite number of elements. Otherwise, the vector space V is called infinite dimensional. In Example 3.3.3, the vector space of all polynomials is an example of an infinite dimensional vector space. All the other vector spaces are finite dimensional. 62 CHAPTER 3. FINITE DIMENSIONAL VECTOR SPACES Remark 3.3.5 We can use the above results to obtain a basis of any finite dimensional vector space V as follows: Step 1: Choose a non-zero vector, say, v 1 ∈ V. Then the set ¦v 1 ¦ is linearly independent. Step 2: If V = L(v 1 ), we have got a basis of V. Else there exists a vector, say, v 2 ∈ V such that v 2 ∈ L(v 1 ). Then by Corollary 3.2.6, the set ¦v 1 , v 2 ¦ is linearly independent. Step 3: If V = L(v 1 , v 2 ), then ¦v 1 , v 2 ¦ is a basis of V. Else there exists a vector, say, v 3 ∈ V such that v 3 ∈ L(v 1 , v 2 ). So, by Corollary 3.2.6, the set ¦v 1 , v 2 , v 3 ¦ is linearly independent. At the i th step, either V = L(v 1 , v 2 , . . . , v i ), or L(v 1 , v 2 , . . . , v i ) = V. In the first case, we have ¦v 1 , v 2 , . . . , v i ¦ as a basis of V. In the second case, L(v 1 , v 2 , . . . , v i ) ⊂ V . So, we choose a vector, say, v i+1 ∈ V such that v i+1 L(v 1 , v 2 , . . . , v i ). Therefore, by Corollary 3.2.6, the set ¦v 1 , v 2 , . . . , v i+1 ¦ is linearly independent. This process will finally end as V is a finite dimensional vector space. Exercise 3.3.6 1. Let S = ¦v 1 , v 2 , . . . , v p ¦ be a subset of a vector space V (F). Suppose L(S) = V but S is not a linearly independent set. Then prove that each vector in V can be expressed in more than one way as a linear combination of vectors from S. 2. Show that the set ¦(1, 0, 1), (1, i, 0), (1, 1, 1 −i)¦ is a basis of C 3 (C). 3. Let A be a matrix of rank r. Then show that the r non-zero rows in the row-reduced echelon form of A are linearly independent and they form a basis of the row space of A. 3.3.1 Important Results Theorem 3.3.7 Let ¦v 1 , v 2 , . . . , v n ¦ be a basis of a given vector space V. If ¦w 1 , w 2 , . . . , w m ¦ is a set of vectors from V with m > n then this set is linearly dependent. Proof. Since we want to find whether the set ¦w 1 , w 2 , . . . , w m ¦ is linearly independent or not, we consider the linear system α 1 w 1 2 w 2 + +α m w m = 0 (3.3.1) with α 1 , α 2 , . . . , α m as the m unknowns. If the solution set of this linear system of equations has more than one solution, then this set will be linearly dependent. As ¦v 1 , v 2 , . . . , v n ¦ is a basis of V and w i ∈ V, for each i, 1 ≤ i ≤ m, there exist scalars a ij , 1 ≤ i ≤ n, 1 ≤ j ≤ m, such that w 1 = a 11 v 1 +a 21 v 2 + +a n1 v n w 2 = a 12 v 1 +a 22 v 2 + +a n2 v n . . . = . . . w m = a 1m v 1 +a 2m v 2 + +a nm v n . The set of Equations (3.3.1) can be rewritten as α 1 ¸ n ¸ j=1 a j1 v j ¸ 2 ¸ n ¸ j=1 a j2 v j ¸ + +α m ¸ n ¸ j=1 a jm v j ¸ = 0 i.e., m ¸ i=1 α i a 1i v 1 + m ¸ i=1 α i a 2i v 2 + + m ¸ i=1 α i a ni v n = 0. 3.3. BASES 63 Since the set ¦v 1 , v 2 , . . . , v n ¦ is linearly independent, we have m ¸ i=1 α i a 1i = m ¸ i=1 α i a 2i = = m ¸ i=1 α i a ni = 0. Therefore, finding α i ’s satisfying equation (3.3.1) reduces to solving the system of homogeneous equations Aα = 0 where α t = (α 1 , α 2 , . . . , α m ) and A = a 11 a 12 a 1m a 21 a 22 a 2m . . . . . . . . . . . . a n1 a n2 a nm ¸ ¸ ¸ ¸ ¸ ¸ . 
Since n < m, i.e., the number of equations is strictly less than the number of unknowns, Corollary 2.5.3 implies that the solution set consists of infinite number of elements. Therefore, the equation (3.3.1) has a solution with not all α i , 1 ≤ i ≤ m, zero. Hence, the set ¦w 1 , w 2 , . . . , w m ¦ is a linearly dependent set. Remark 3.3.8 Let V be a vector subspace of R n with spanning set S. We give a method of finding a basis of V from S. 1. Construct a matrix A whose rows are the vectors in S. 2. Use only the elementary row operations R i (c) and R ij (c) to get the row-reduced form B of A (in fact we just need to make as many zero-rows as possible). 3. Let B be the set of vectors in S corresponding to the non-zero rows of B. Then the set B is a basis of L(S) = V. Example 3.3.9 Let S = ¦(1, 1, 1, 1), (1, 1, −1, 1), (1, 1, 0, 1), (1, −1, 1, 1)¦ be a subset of R 4 . Find a basis of L(S). Solution: Here A = 1 1 1 1 1 1 −1 1 1 1 0 1 1 −1 1 1 ¸ ¸ ¸ ¸ ¸ . Applying row-reduction to A, we have 1 1 1 1 1 1 −1 1 1 1 0 1 1 −1 1 1 ¸ ¸ ¸ ¸ ¸ −−−−−−−−−−−−−−−−−−−−→ R 12 (−1), R 13 (−1), R 14 (−1) 1 1 1 1 0 0 −2 0 0 0 −1 0 0 −2 0 0 ¸ ¸ ¸ ¸ ¸ −−−−−→ R 32(−2) 1 1 1 1 0 0 0 0 0 0 −1 0 0 −2 0 0 ¸ ¸ ¸ ¸ ¸ . Observe that the rows 1, 3 and 4 are non-zero. Hence, a basis of L(S) consists of the first, third and fourth vectors of the set S. Thus, B = ¦(1, 1, 1, 1), (1, 1, 0, 1), (1, −1, 1, 1)¦ is a basis of L(S). Observe that at the last step, in place of the elementary row operation R 32 (−2), we can apply R 23 (− 1 2 ) to make the third row as the zero-row. In this case, we get ¦(1, 1, 1, 1), (1, 1, −1, 1), (1, −1, 1, 1)¦ as a basis of L(S). Corollary 3.3.10 Let V be a finite dimensional vector space. Then any two bases of V have the same number of vectors. Proof. Let ¦u 1 , u 2 , . . . , u n ¦ and ¦v 1 , v 2 , . . . , v m ¦ be two bases of V with m > n. Then by the above theorem the set ¦v 1 , v 2 , . . . , v m ¦ is linearly dependent if we take ¦u 1 , u 2 , . . . , u n ¦ as the basis of V. This 1 , v 2 , . . . , v m ¦ is also a basis of V. Hence, we get m = n. Definition 3.3.11 (Dimension of a Vector Space) The dimension of a finite dimensional vector space V is the number of vectors in a basis of V, denoted dim(V ). 64 CHAPTER 3. FINITE DIMENSIONAL VECTOR SPACES Note that the Corollary 3.2.6 can be used to generate a basis of any non-trivial finite dimensional vector space. Example 3.3.12 1. Consider the complex vector space C 2 (C). Then, (a +ib, c +id) = (a +ib)(1, 0) + (c +id)(0, 1). So, ¦(1, 0), (0, 1)¦ is a basis of C 2 (C) and thus dim(V ) = 2. 2. Consider the real vector space C 2 (R). In this case, any vector (a +ib, c +id) = a(1, 0) +b(i, 0) +c(0, 1) +d(0, i). Hence, the set ¦(1, 0), (i, 0), (0, 1), (0, i)¦ is a basis and dim(V ) = 4. Remark 3.3.13 It is important to note that the dimension of a vector space may change if the under- lying field (the set of scalars) is changed. Example 3.3.14 Let V be the set of all functions f : R n −→R with the property that f(x+y) = f(x)+f(y) and f(αx) = αf(x). For f, g ∈ V, and t ∈ R, define (f ⊕g)(x) = f(x) +g(x) and (t f)(x) = f(tx). Then V is a real vector space. For 1 ≤ i ≤ n, consider the functions e i (x) = e i (x 1 , x 2 , . . . , x n ) = x i . Then it can be easily verified that the set ¦e 1 , e 2 , . . . , e n ¦ is a basis of V and hence dim(V ) = n. The next theorem follows directly from Corollary 3.2.6 and Theorem 3.3.7. Hence, the proof is omitted. Theorem 3.3.15 Let S be a linearly independent subset of a finite dimensional vector space V. 
Then the set S can be extended to form a basis of V. Theorem 3.3.15 is equivalent to the following statement: Let V be a vector space of dimension n. Suppose, we have found a linearly independent set S = ¦v 1 , v 2 , . . . , v r ¦ ⊂ V. Then there exist vectors v r+1 , . . . , v n in V such that ¦v 1 , v 2 , . . . , v n ¦ is a basis of V. Corollary 3.3.16 Let V be a vector space of dimension n. Then any set of n linearly independent vectors forms a basis of V. Also, every set of m vectors, m > n, is linearly dependent. Example 3.3.17 Let V = ¦(v, w, x, y, z) ∈ R 5 : v + x − 3y + z = 0¦ and W = ¦(v, w, x, y, z) ∈ R 5 : w −x −z = 0, v = y¦ be two subspaces of R 5 . Find bases of V and W containing a basis of V ∩ W. Solution: Let us find a basis of V ∩ W. The solution set of the linear equations v +x −3y +z = 0, w −x −z = 0 and v = y is given by (v, w, x, y, z) t = (y, 2y, x, y, 2y −x) t = y(1, 2, 0, 1, 2) t +x(0, 0, 1, 0, −1) t . Thus, a basis of V ∩ W is ¦(1, 2, 0, 1, 2), (0, 0, 1, 0, −1)¦. To find a basis of W containing a basis of V ∩ W, we can proceed as follows: 3.3. BASES 65 1. Find a basis of W. 2. Take the basis of V ∩W found above as the first two vectors and that of W as the next set of vectors. Now use Remark 3.3.8 to get the required basis. Heuristically, we can also find the basis in the following way: A vector of W has the form (y, x + z, x, y, z) for x, y, z ∈ R. Substituting y = 1, x = 1, and z = 0 in (y, x +z, x, y, z) gives us the vector (1, 1, 1, 1, 0) ∈ W. It can be easily verified that a basis of W is ¦(1, 2, 0, 1, 2), (0, 0, 1, 0, −1), (1, 1, 1, 1, 0)¦. Similarly, a vector of V has the form (v, w, x, y, 3y−v−x) for v, w, x, y ∈ R. Substituting v = 0, w = 1, x = 0 and y = 0, gives a vector (0, 1, 0, 0, 0) ∈ V. Also, substituting v = 0, w = 1, x = 1 and y = 1, gives another vector (0, 1, 1, 1, 2) ∈ V. So, a basis of V can be taken as ¦(1, 2, 0, 1, 2), (0, 0, 1, 0, −1), (0, 1, 0, 0, 0), (0, 1, 1, 1, 2)¦. Recall that for two vector subspaces M and N of a vector space V (F), the vector subspace M + N is defined by M +N = ¦u +v : u ∈ M, v ∈ N¦. With this definition, we have the following very important theorem (for a proof, see Appendix 15.4.1). Theorem 3.3.18 Let V (F) be a finite dimensional vector space and let M and N be two subspaces of V. Then dim(M) + dim(N) = dim(M +N) + dim(M ∩ N). (3.3.2) Exercise 3.3.19 1. Find a basis of the vector space { n (R). Also, find dim({ n (R)). What can you say 2. Consider the real vector space, C([0, 2π]), of all real valued continuous functions. For each n consider the vector e n defined by e n (x) = sin(nx). Prove that the collection of vectors ¦e n : 1 ≤ n < ∞¦ is a linearly independent set. [Hint: On the contrary, assume that the set is linearly dependent. Then we have a finite set of vectors, say {e k 1 , e k 2 , . . . , e k } that are linearly dependent. That is, there exist scalars αi ∈ R for 1 ≤ i ≤ not all zero such that α1 sin(k1x) +α2 sin(k2x) +· · · +α sin(k x) = 0 for all x ∈ [0, 2π]. Now for different values of m integrate the function Z 0 sin(mx) (α1 sin(k1x) +α2 sin(k2x) +· · · +α sin(k x)) dx to get the required result.] 3. Show that the set ¦(1, 0, 0), (1, 1, 0), (1, 1, 1)¦ is a basis of C 3 (C). Is it a basis of C 3 (R) also? 4. Let W = ¦(x, y, z, w) ∈ R 4 : x +y −z +w = 0¦ be a subspace of R 4 . Find its basis and dimension. 5. Let V = ¦(x, y, z, w) ∈ R 4 : x + y − z + w = 0, x + y + z + w = 0¦ and W = ¦(x, y, z, w) ∈ R 4 : x − y − z + w = 0, x + 2y − w = 0¦ be two subspaces of R 4 . 
Find bases and dimensions of V, W, V ∩ W and V +W. 6. Let V be the set of all real symmetric n n matrices. Find its basis and dimension. What if V is the complex vector space of all n n Hermitian matrices? 66 CHAPTER 3. FINITE DIMENSIONAL VECTOR SPACES 7. If M and N are 4-dimensional subspaces of a vector space V of dimension 7 then show that M and N have at least one vector in common other than the zero vector. 8. Let P = L¦(1, 0, 0), (1, 1, 0)¦ and Q = L¦(1, 1, 1)¦ be vector subspaces of R 3 . Show that P +Q = R 3 and P ∩ Q = ¦0¦. If u ∈ R 3 , determine u P , u Q such that u = u P +u Q where u P ∈ P and u Q ∈ Q. Is it necessary that u P and u Q are unique? 9. Let W 1 be a k-dimensional subspace of an n-dimensional vector space V (F) where k ≥ 1. Prove that there exists an (n −k)-dimensional subspace W 2 of V such that W 1 ∩ W 2 = ¦0¦ and W 1 +W 2 = V. 10. Let P and Q be subspaces of R n such that P + Q = R n and P ∩ Q = ¦0¦. Then show that each u ∈ R n can be uniquely expressed as u = u P +u Q where u P ∈ P and u Q ∈ Q. 11. Let P = L¦(1, −1, 0), (1, 1, 0)¦ and Q = L¦(1, 1, 1), (1, 2, 1)¦ be vector subspaces of R 3 . Show that P +Q = R 3 and P ∩Q = ¦0¦. Show that there exists a vector u ∈ R 3 such that u cannot be written uniquely in the form u = u P +u Q where u P ∈ P and u Q ∈ Q. 12. Recall the vector space { 4 (R). Is the set, W = ¦p(x) ∈ { 4 (R) : p(−1) = p(1) = 0¦ a subspace of { 4 (R)? If yes, find its dimension. 13. Let V be the set of all 2 2 matrices with complex entries and a 11 + a 22 = 0. Show that V is a real vector space. Find its basis. Also let W = ¦A ∈ V : a 21 = −a 12 ¦. Show W is a vector subspace of V, and find its dimension. 14. Let A = 1 2 1 3 2 0 2 2 2 4 2 −2 4 0 8 4 2 5 6 10 ¸ ¸ ¸ ¸ ¸ , and B = 2 4 0 6 −1 0 −2 5 −3 −5 1 −4 −1 −1 1 2 ¸ ¸ ¸ ¸ ¸ be two matrices. For A and B find the following: (a) their row-reduced echelon forms. (b) the matrices P 1 and P 2 such that P 1 A and P 2 B are in row-reduced form. (c) a basis each for the row spaces of A and B. (d) a basis each for the range spaces of A and B. (e) bases of the null spaces of A and B. (f) the dimensions of all the vector subspaces so obtained. 15. Let M(n, R) denote the space of all n n real matrices. For the sets given below, check that they are subspaces of M(n, R) and also find their dimension. (a) sl(n, R) = ¦A ∈ M(n, R) : tr(A) = 0¦, where recall that tr(A) stands for trace of A. (b) S(n, R) = ¦A ∈ M(n, R) : A = A t ¦. (c) A(n, R) = ¦A ∈ M(n, R) : A +A t = 0¦. Before going to the next section, we prove that for any matrix A of order mn Row rank(A) = Column rank(A). 3.3. BASES 67 Proposition 3.3.20 Let A be an mn real matrix. Then Row rank(A) = Column rank(A). Proof. Let R 1 , R 2 , . . . , R m be the rows of A and C 1 , C 2 , . . . , C n be the columns of A. Note that Row rank(A) = r, means that dim L(R 1 , R 2 , . . . , R m ) = r. Hence, there exists vectors u 1 = (u 11 , . . . , u 1n ), u 2 = (u 21 , . . . , u 2n ), . . . , u r = (u r1 , . . . , u rn ) ∈ R n with R i ∈ L(u 1 , u 2 , . . . , u r ) ∈ R n , for all i, 1 ≤ i ≤ m. Therefore, there exist real numbers α ij , 1 ≤ i ≤ m, 1 ≤ j ≤ r such that R 1 = α 11 u 1 12 u 2 + +α 1r u r = ( r ¸ i=1 α 1i u i1 , r ¸ i=1 α 1i u i2 , . . . , r ¸ i=1 α 1i u in ), R 2 = α 21 u 1 22 u 2 + +α 2r u r = ( r ¸ i=1 α 2i u i1 , r ¸ i=1 α 2i u i2 , . . . , r ¸ i=1 α 2i u in ), and so on, till R m = α m1 u 1 + +α mr u r = ( r ¸ i=1 α mi u i1 , r ¸ i=1 α mi u i2 , . . . , r ¸ i=1 α mi u in ). So, C 1 = r ¸ i=1 α 1i u i1 r ¸ i=1 α 2i u i1 . . . 
r ¸ i=1 α mi u i1 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ = u 11 α 11 α 21 . . . α m1 ¸ ¸ ¸ ¸ ¸ ¸ +u 21 α 12 α 22 . . . α m2 ¸ ¸ ¸ ¸ ¸ ¸ + +u r1 α 1r α 2r . . . α mr ¸ ¸ ¸ ¸ ¸ ¸ . In general, for 1 ≤ j ≤ n, we have C j = r ¸ i=1 α 1i u ij r ¸ i=1 α 2i u ij . . . r ¸ i=1 α mi u ij ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ = u 1j α 11 α 21 . . . α m1 ¸ ¸ ¸ ¸ ¸ ¸ +u 2j α 12 α 22 . . . α m2 ¸ ¸ ¸ ¸ ¸ ¸ + +u rj α 1r α 2r . . . α mr ¸ ¸ ¸ ¸ ¸ ¸ . Therefore, we observe that the columns C 1 , C 2 , . . . , C n are linear combination of the r vectors 11 , α 21 , . . . , α m1 ) t , (α 12 , α 22 , . . . , α m2 ) t , . . . , (α 1r , α 2r , . . . , α mr ) t . Therefore, Column rank(A) = dim L(C 1 , C 2 , . . . , C n ) =≤ r = Row rank(A). A similar argument gives Row rank(A) ≤ Column rank(A). Thus, we have the required result. 68 CHAPTER 3. FINITE DIMENSIONAL VECTOR SPACES 3.4 Ordered Bases Let B = ¦u 1 , u 2 , . . . , u n ¦ be a basis of a vector space V (F). As B is a set, there is no ordering of its elements. In this section, we want to associate an order among the vectors in any basis of V. Definition 3.4.1 (Ordered Basis) An ordered basis for a vector space V (F) of dimension n, is a ba- sis ¦u 1 , u 2 , . . . , u n ¦ together with a one-to-one correspondence between the sets ¦u 1 , u 2 , . . . , u n ¦ and ¦1, 2, 3, . . . , n¦. If the ordered basis has u 1 as the first vector, u 2 as the second vector and so on, then we denote this ordered basis by (u 1 , u 2 , . . . , u n ). Example 3.4.2 Consider { 2 (R), the vector space of all polynomials of degree less than or equal to 2 with coefficients from R. The set ¦1 −x, 1 +x, x 2 ¦ is a basis of { 2 (R). For any element a 0 +a 1 x +a 2 x 2 ∈ { 2 (R), we have a 0 +a 1 x +a 2 x 2 = a 0 −a 1 2 (1 −x) + a 0 +a 1 2 (1 +x) +a 2 x 2 . If (1−x, 1+x, x 2 ) is an ordered basis, then a 0 −a 1 2 is the first component, a 0 +a 1 2 is the second component, and a 2 is the third component of the vector a 0 +a 1 x +a 2 x 2 . If we take (1 + x, 1 − x, x 2 ) as an ordered basis, then a 0 +a 1 2 is the first component, a 0 −a 1 2 is the second component, and a 2 is the third component of the vector a 0 +a 1 x +a 2 x 2 . That is, as ordered bases (u 1 , u 2 , . . . , u n ), (u 2 , u 3 , . . . , u n , u 1 ), and (u n , u 1 , u 2 , . . . , u n−1 ) are different even though they have the same set of vectors as elements. Definition 3.4.3 (Coordinates of a Vector) Let B = (v 1 , v 2 , . . . , v n ) be an ordered basis of a vector space V (F) and let v ∈ V. If v = β 1 v 1 2 v 2 + +β n v n then the tuple (β 1 , β 2 , . . . , β n ) is called the coordinate of the vector v with respect to the ordered basis B. Mathematically, we denote it by [v] B = (β 1 , . . . , β n ) t , a column vector. Suppose B 1 = (u 1 , u 2 , . . . , u n ) and B 2 = (u n , u 1 , u 2 , . . . , u n−1 ) are two ordered bases of V. Then for any x ∈ V there exists unique scalars α 1 , α 2 , . . . , α n such that x = α 1 u 1 2 u 2 + +α n u n = α n u n 1 u 1 + +α n−1 u n−1 . Therefore, [x] B1 = (α 1 , α 2 , . . . , α n ) t and [x] B2 = (α n , α 1 , α 2 , . . . , α n−1 ) t . Note that x is uniquely written as n ¸ i=1 α i u i and hence the coordinates with respect to an ordered basis are unique. Suppose that the ordered basis B 1 is changed to the ordered basis B 3 = (u 2 , u 1 , u 3 , . . . , u n ). Then [x] B3 = (α 2 , α 1 , α 3 , . . . , α n ) t . So, the coordinates of a vector depend on the ordered basis chosen. Example 3.4.4 Let V = R 3 . 
Consider the ordered bases B 1 = (1, 0, 0), (0, 1, 0), (0, 0, 1) , B 2 = (1, 0, 0), (1, 1, 0), (1, 1, 1) and B 3 = (1, 1, 1), (1, 1, 0), (1, 0, 0) of V. Then, with respect to the above bases we have (1, −1, 1) = 1 (1, 0, 0) + (−1) (0, 1, 0) + 1 (0, 0, 1). = 2 (1, 0, 0) + (−2) (1, 1, 0) + 1 (1, 1, 1). = 1 (1, 1, 1) + (−2) (1, 1, 0) + 2 (1, 0, 0). 3.4. ORDERED BASES 69 Therefore, if we write u = (1, −1, 1), then [u] B1 = (1, −1, 1) t , [u] B2 = (2, −2, 1) t , [u] B3 = (1, −2, 2) t . In general, let V be an n-dimensional vector space with ordered bases B 1 = (u 1 , u 2 , . . . , u n ) and B 2 = (v 1 , v 2 , . . . , v n ). Since, B 1 is a basis of V, there exists unique scalars a ij , 1 ≤ i, j ≤ n such that v i = n ¸ l=1 a li u l for 1 ≤ i ≤ n. That is, for each i, 1 ≤ i ≤ n, [v i ] B1 = (a 1i , a 2i , . . . , a ni ) t . Let v ∈ V with [v] B2 = (α 1 , α 2 , . . . , α n ) t . As B 2 as ordered basis (v 1 , v 2 , . . . , v n ), we have v = n ¸ i=1 α i v i = n ¸ i=1 α i ¸ n ¸ j=1 a ji u j ¸ = n ¸ j=1 n ¸ i=1 a ji α i u j . Since B 1 is a basis this representation of v in terms of u i ’s is unique. So, [v] B1 = n ¸ i=1 a 1i α i , n ¸ i=1 a 2i α i , . . . , n ¸ i=1 a ni α i t = a 11 a 1n a 21 a 2n . . . . . . . . . a n1 a nn ¸ ¸ ¸ ¸ ¸ ¸ α 1 α 2 . . . α n ¸ ¸ ¸ ¸ ¸ ¸ = A[v] B2 . Note that the i th column of the matrix A is equal to [v i ] B1 , i.e., the i th column of A is the coordinate of the i th vector v i of B 2 with respect to the ordered basis B 1 . Hence, we have proved the following theorem. Theorem 3.4.5 Let V be an n-dimensional vector space with ordered bases B 1 = (u 1 , u 2 , . . . , u n ) and B 2 = (v 1 , v 2 , . . . , v n ). Let A = [[v 1 ] B1 , [v 2 ] B1 , . . . , [v n ] B1 ] . Then for any v ∈ V, [v] B1 = A[v] B2 . Example 3.4.6 Consider two bases B 1 = (1, 0, 0), (1, 1, 0), (1, 1, 1) and B 2 = (1, 1, 1), (1, −1, 1), (1, 1, 0) of R 3 . 1. Then [(x, y, z)] B1 = (x −y) (1, 0, 0) + (y −z) (1, 1, 0) +z (1, 1, 1) = (x −y, y −z, z) t and [(x, y, z)] B2 = ( y −x 2 +z) (1, 1, 1) + x −y 2 (1, −1, 1) +(x −z) (1, 1, 0) = ( y −x 2 +z, x −y 2 , x −z) t . 70 CHAPTER 3. FINITE DIMENSIONAL VECTOR SPACES 2. Let A = [a ij ] = 0 2 0 0 −2 1 1 1 0 ¸ ¸ ¸. The columns of the matrix A are obtained by the following rule: [(1, 1, 1)] B1 = 0 (1, 0, 0) + 0 (1, 1, 0) + 1 (1, 1, 1) = (0, 0, 1) t , [(1, −1, 1)] B1 = 2 (1, 0, 0) + (−2) (1, 1, 0) + 1 (1, 1, 1) = (2, −2, 1) t and [(1, 1, 0)] B1 = 0 (1, 0, 0) + 1 (1, 1, 0) + 0 (1, 1, 1) = (0, 1, 0) t . That is, the elements of B 2 = (1, 1, 1), (1, −1, 1), (1, 1, 0) are expressed in terms of the ordered basis B 1 . 3. Note that for any (x, y, z) ∈ R 3 , [(x, y, z)] B1 = x −y y −z z ¸ ¸ ¸ = 0 2 0 0 −2 1 1 1 0 ¸ ¸ ¸ y−x 2 +z x−y 2 x −z ¸ ¸ ¸ = A [(x, y, z)] B2 . 4. The matrix A is invertible and hence [(x, y, z)] B2 = A −1 [(x, y, z)] B1 . In the next chapter, we try to understand Theorem 3.4.5 again using the ideas of ‘linear transforma- tions / functions’. Exercise 3.4.7 1. Determine the coordinates of the vectors (1, 2, 1) and (4, −2, 2) with respect to the basis B = (2, 1, 0), (2, 1, 1), (2, 2, 1) of R 3 . 2. Consider the vector space { 3 (R). (a) Show that B 1 = (1 −x, 1 +x 2 , 1 −x 3 , 3 +x 2 −x 3 ) and B 2 = (1, 1 −x, 1 +x 2 , 1 −x 3 ) are bases of { 3 (R). (b) Find the coordinates of the vector u = 1 +x +x 2 +x 3 with respect to the ordered basis B 1 and B 2 . (c) Find the matrix A such that [u] B2 = A[u] B1 . (d) Let v = a 0 +a 1 x +a 2 x 2 +a 3 x 3 . 
Then verify the following: [v] B1 = −a 1 −a 0 −a 1 + 2a 2 −a 3 −a 0 −a 1 +a 2 −2a 3 a 0 +a 1 −a 2 +a 3 ¸ ¸ ¸ ¸ ¸ = 0 1 0 0 −1 0 1 0 −1 0 0 1 1 0 0 0 ¸ ¸ ¸ ¸ ¸ a 0 +a 1 −a 2 +a 3 −a 1 a 2 −a 3 ¸ ¸ ¸ ¸ ¸ = [v] B2 . Chapter 4 Linear Transformations 4.1 Definitions and Basic Properties Throughout this chapter, the scalar field F is either always the set R or always the set C. Definition 4.1.1 (Linear Transformation) Let V and W be vector spaces over F. A map T : V −→W is called a linear transformation if T(αu +βv) = αT(u) +βT(v), for all α, β ∈ F, and u, v ∈ V. We now give a few examples of linear transformations. Example 4.1.2 1. Define T : R−→R 2 by T(x) = (x, 3x) for all x ∈ R. Then T is a linear transformation as T(x +y) = (x +y, 3(x +y)) = (x, 3x) + (y, 3y) = T(x) +T(y). 2. Verify that the maps given below from R n to R are linear transformations. Let x = (x 1 , x 2 , . . . , x n ). (a) Define T(x) = n ¸ i=1 x i . (b) For any i, 1 ≤ i ≤ n, define T i (x) = x i . (c) For a fixed vector a = (a 1 , a 2 , . . . , a n ) ∈ R n , define T(x) = n ¸ i=1 a i x i . Note that examples (a) and (b) can be obtained by assigning particular values for the vector a. 3. Define T : R 2 −→R 3 by T((x, y)) = (x +y, 2x −y, x + 3y). Then T is a linear transformation with T((1, 0)) = (1, 2, 1) and T((0, 1)) = (1, −1, 3). 4. Let A be an mn real matrix. Define a map T A : R n −→R m by T A (x) = Ax for every x t = (x 1 , x 2 , . . . , x n ) ∈ R n . Then T A is a linear transformation. That is, every m n real matrix defines a linear transformation from R n to R m . 5. Recall that { n (R) is the set of all polynomials of degree less than or equal to n with real coefficients. Define T : R n+1 −→{ n (R) by T((a 1 , a 2 , . . . , a n+1 )) = a 1 +a 2 x + +a n+1 x n for (a 1 , a 2 , . . . , a n+1 ) ∈ R n+1 . Then T is a linear transformation. 71 72 CHAPTER 4. LINEAR TRANSFORMATIONS Proposition 4.1.3 Let T : V −→W be a linear transformation. Suppose that 0 V is the zero vector in V and 0 W is the zero vector of W. Then T(0 V ) = 0 W . Proof. Since 0 V = 0 V +0 V , we have T(0 V ) = T(0 V +0 V ) = T(0 V ) +T(0 V ). So, T(0 V ) = 0 W as T(0 V ) ∈ W. From now on, we write 0 for both the zero vector of the domain space and the zero vector of the range space. Definition 4.1.4 (Zero Transformation) Let V be a vector space and let T : V −→W be the map defined by T(v) = 0 for every v ∈ V. Then T is a linear transformation. Such a linear transformation is called the zero transformation and is denoted by 0. Definition 4.1.5 (Identity Transformation) Let V be a vector space and let T : V −→V be the map defined by T(v) = v for every v ∈ V. Then T is a linear transformation. Such a linear transformation is called the Identity transformation and is denoted by I. We now prove a result that relates a linear transformation T with its value on a basis of the domain space. Theorem 4.1.6 Let T : V −→W be a linear transformation and B = (u 1 , u 2 , . . . , u n ) be an ordered basis of V. Then the linear transformation T is a linear combination of the vectors T(u 1 ), T(u 2 ), . . . , T(u n ). In other words, T is determined by T(u 1 ), T(u 2 ), . . . , T(u n ). Proof. Since B is a basis of V, for any x ∈ V, there exist scalars α 1 , α 2 , . . . , α n such that x = α 1 u 1 2 u 2 + +α n u n . So, by the definition of a linear transformation T(x) = T(α 1 u 1 + +α n u n ) = α 1 T(u 1 ) + +α n T(u n ). Observe that, given x ∈ V, we know the scalars α 1 , α 2 , . . . , α n . 
Therefore, to know T(x), we just need to know the vectors T(u 1 ), T(u 2 ), . . . , T(u n ) in W. That is, for every x ∈ V, T(x) is determined by the coordinates (α 1 , α 2 , . . . , α n ) of x with respect to the ordered basis B and the vectors T(u 1 ), T(u 2 ), . . . , T(u n ) ∈ W. Exercise 4.1.7 1. Which of the following are linear transformations T : V −→W? Justify your answers. (a) Let V = R 2 and W = R 3 with T (x, y) = (x +y + 1, 2x −y, x + 3y) (b) Let V = W = R 2 with T (x, y) = (x −y, x 2 −y 2 ) (c) Let V = W = R 2 with T (x, y) = (x −y, [x[) (d) Let V = R 2 and W = −→R 4 with T (x, y) = (x +y, x −y, 2x +y, 3x −4y) (e) Let V = W = R 4 with T (x, y, z, w) = (z, x, w, y) 2. Recall that M 2 (R) is the space of all 2 2 matrices with real entries. Then, which of the following are linear transformations T : M 2 (R)−→M 2 (R)? 4.1. DEFINITIONS AND BASIC PROPERTIES 73 (a) T(A) = A t (b) T(A) = I +A (c) T(A) = A 2 (d) T(A) = BAB −1 , where B is some fixed 2 2 matrix. 3. Let T : R −→R be a map. Then T is a linear transformation if and only if there exists a unique c ∈ R such that T(x) = cx for every x ∈ R. 4. Let A be an n n real matrix. Consider the linear transformation T A (x) = Ax for every x ∈ R n . Then prove that T 2 (x) := T(T(x)) = A 2 x. In general, for k ∈ N, prove that T k (x) = A k x. 5. Use the ideas of matrices to give examples of linear transformations T, S : R 3 −→R 3 that satisfy: (a) T = 0, T 2 = 0, T 3 = 0. (b) T = 0, S = 0, S ◦ T = 0, T ◦ S = 0; where T ◦ S(x) = T S(x) . (c) S 2 = T 2 , S = T. (d) T 2 = I, T = I. 6. Let T : R n −→ R n be a linear transformation such that T = 0 and T 2 = 0. Let x ∈ R n such that T(x) = 0. Then prove that the set ¦x, T(x)¦ is linearly independent. In general, if T k = 0 for 1 ≤ k ≤ p and T p+1 = 0, then for any vector x ∈ R n with T p (x) = 0 prove that the set ¦x, T(x), . . . , T p (x)¦ is linearly independent. 7. Let T : R n −→R m be a linear transformation, and let x 0 ∈ R n with T(x 0 ) = y. Consider the sets S = ¦x ∈ R n : T(x) = y¦ and N = ¦x ∈ R n : T(x) = 0¦. Show that for every x ∈ S there exists z ∈ N such that x = x 0 +z. 8. Define a map T : C −→C by T(z) = z, the complex conjugate of z. Is T linear on (a) C over { (b) C over C. 9. Find all functions f : R 2 −→R 2 that satisfy the conditions (a) f( (x, x) ) = (x, x) and (b) f( (x, y) ) = (y, x) for all (x, y) ∈ R 2 . That is, f fixes the line y = x and sends the point (x 1 , y 1 ) for x 1 = y 1 to its mirror image along the line y = x. Theorem 4.1.8 Let T : V −→W be a linear transformation. For w ∈ W, define the set T −1 (w) = ¦v ∈ V : T(v) = w¦. Suppose that the map T is one-one and onto. 1. Then for each w ∈ W, the set T −1 (w) is a set consisting of a single element. 2. The map T −1 : W−→V defined by T −1 (w) = v whenever T(v) = w. is a linear transformation. 74 CHAPTER 4. LINEAR TRANSFORMATIONS Proof. Since T is onto, for each w ∈ W there exists a vector v ∈ V such that T(v) = w. So, the set T −1 (w) is non-empty. Suppose there exist vectors v 1 , v 2 ∈ V such that T(v 1 ) = T(v 2 ). But by assumption, T is one-one and therefore v 1 = v 2 . This completes the proof of Part 1. We now show that T −1 as defined above is a linear transformation. Let w 1 , w 2 ∈ W. Then by Part 1, there exist unique vectors v 1 , v 2 ∈ V such that T −1 (w 1 ) = v 1 and T −1 (w 2 ) = v 2 . Or equivalently, T(v 1 ) = w 1 and T(v 2 ) = w 2 . So, for any α 1 , α 2 ∈ F, we have T(α 1 v 1 2 v 2 ) = α 1 w 1 2 w 2 . 
Thus for any α 1 , α 2 ∈ F, T −1 1 w 1 2 w 2 ) = α 1 v 1 2 v 2 = α 1 T −1 (w 1 ) +α 2 T −1 (w 2 ). Hence T −1 : W−→V, defined as above, is a linear transformation. Definition 4.1.9 (Inverse Linear Transformation) Let T : V −→W be a linear transformation. If the map T is one-one and onto, then the map T −1 : W−→V defined by T −1 (w) = v whenever T(v) = w is called the inverse of the linear transformation T. Example 4.1.10 1. Define T : R 2 −→R 2 by T((x, y)) = (x +y, x −y). Then T −1 : R 2 −→R 2 is defined by T −1 ((x, y)) = ( x +y 2 , x −y 2 ). Note that T ◦ T −1 ((x, y)) = T(T −1 ((x, y))) = T(( x +y 2 , x −y 2 )) = ( x +y 2 + x −y 2 , x +y 2 x −y 2 ) = (x, y). Hence, T ◦ T −1 = I, the identity transformation. Verify that T −1 ◦ T = I. Thus, the map T −1 is indeed the inverse of the linear transformation T. 2. Recall the vector space { n (R) and the linear transformation T : R n+1 −→{ n (R) defined by T((a 1 , a 2 , . . . , a n+1 )) = a 1 +a 2 x + +a n+1 x n for (a 1 , a 2 , . . . , a n+1 ) ∈ R n+1 . Then T −1 : { n (R)−→R n+1 is defined as T −1 (a 1 +a 2 x + +a n+1 x n ) = (a 1 , a 2 , . . . , a n+1 ) for a 1 +a 2 x + +a n+1 x n ∈ { n (R). Verify that T ◦ T −1 = T −1 ◦ T = I. Hence, conclude that the map T −1 is indeed the inverse of the linear transformation T. 4.2 Matrix of a linear transformation In this section, we relate linear transformation over finite dimensional vector spaces with matrices. For this, we ask the reader to recall the results on ordered basis, studied in Section 3.4. Let V and W be finite dimensional vector spaces over the set F with respective dimensions m and n. Also, let T : V −→W be a linear transformation. Suppose B 1 = (v 1 , v 2 , . . . , v n ) is an ordered basis of 4.2. MATRIX OF A LINEAR TRANSFORMATION 75 V. In the last section, we saw that a linear transformation is determined by its image on a basis of the domain space. We therefore look at the images of the vectors v j ∈ B 1 for 1 ≤ j ≤ n. Now for each j, 1 ≤ j ≤ n, the vectors T(v j ) ∈ W. We now express these vectors in terms of an ordered basis B 2 = (w 1 , w 2 , . . . , w m ) of W. So, for each j, 1 ≤ j ≤ n, there exist unique scalars a 1j , a 2j , . . . , a mj ∈ F such that T(v 1 ) = a 11 w 1 +a 21 w 2 + +a m1 w m T(v 2 ) = a 12 w 1 +a 22 w 2 + +a m2 w m . . . T(v n ) = a 1n w 1 +a 2n w 2 + +a mn w m . Or in short, T(v j ) = m ¸ i=1 a ij w i for 1 ≤ j ≤ n. In other words, for each j, 1 ≤ j ≤ n, the coordinates of T(v j ) with respect to the ordered basis B 2 is the column vector [a 1j , a 2j , . . . , a mj ] t . Equivalently, [T(v j )] B2 = a 1j a 2j . . . a mj ¸ ¸ ¸ ¸ ¸ ¸ . Let [x] B1 = [x 1 , x 2 , . . . , x n ] t be the coordinates of a vector x ∈ V. Then T(x) = T( n ¸ j=1 x j v j ) = n ¸ j=1 x j T(v j ) = n ¸ j=1 x j ( m ¸ i=1 a ij w i ) = m ¸ i=1 ( n ¸ j=1 a ij x j )w i . Define a matrix A by A = a 11 a 12 a 1n a 21 a 22 a 2n . . . . . . . . . . . . a m1 a m2 a mn ¸ ¸ ¸ ¸ ¸ ¸ . Then the coordinates of the vector T(x) with respect to the ordered basis B 2 is [T(x)] B2 = ¸ n j=1 a 1j x j ¸ n j=1 a 2j x j . . . ¸ n j=1 a mj x j ¸ ¸ ¸ ¸ ¸ ¸ = a 11 a 12 a 1n a 21 a 22 a 2n . . . . . . . . . . . . a m1 a m2 a mn ¸ ¸ ¸ ¸ ¸ ¸ x 1 x 2 . . . x n ¸ ¸ ¸ ¸ ¸ ¸ = A [x] B1 . The matrix A is called the matrix of the linear transformation T with respect to the ordered bases B 1 and B 2 , and is denoted by T[B 1 , B 2 ]. We thus have the following theorem. Theorem 4.2.1 Let V and W be finite dimensional vector spaces with dimensions n and m, respectively. Let T : V −→W be a linear transformation. 
If B 1 is an ordered basis of V and B 2 is an ordered basis of W, then there exists an mn matrix A = T[B 1 , B 2 ] such that [T(x)] B2 = A [x] B1 . 76 CHAPTER 4. LINEAR TRANSFORMATIONS Remark 4.2.2 Let B 1 = (v 1 , v 2 , . . . , v n ) be an ordered basis of V and B 2 = (w 1 , w 2 , . . . , w m ) be an ordered basis of W. Let T : V −→ W be a linear transformation with A = T[B 1 , B 2 ]. Then the first column of A is the coordinate of the vector T(v 1 ) in the basis B 2 . In general, the i th column of A is the coordinate of the vector T(v i ) in the basis B 2 . We now give a few examples to understand the above discussion and the theorem. Example 4.2.3 1. Let T : R 2 −→R 2 be a linear transformation, given by T( (x, y) ) = (x +y, x −y). We obtain T[B 1 , B 2 ], the matrix of the linear transformation T with respect to the ordered bases B 1 = (1, 0), (0, 1) and B 2 = (1, 1), (1, −1) of R 2 . For any vector (x, y) ∈ R 2 , [(x, y)] B1 = ¸ x y ¸ as (x, y) = x(1, 0) +y(0, 1). Also, by definition of the linear transformation T, we have T( (1, 0) ) = (1, 1) = 1 (1, 1) + 0 (1, −1). So, [T( (1, 0) )] B2 = (1, 0) t and T( (0, 1) ) = (1, −1) = 0 (1, 1) + 1 (1, −1). That is, [T( (0, 1) )] B2 = (0, 1) t . So the T[B 1 , B 2 ] = ¸ 1 0 0 1 ¸ . Observe that in this case, [T( (x, y) )] B2 = [(x +y, x −y)] B2 = x(1, 1) +y(1, −1) = ¸ x y ¸ , and T[B 1 , B 2 ] [(x, y)] B1 = ¸ 1 0 0 1 ¸¸ x y ¸ = ¸ x y ¸ = [T( (x, y) )] B2 . 2. Let B 1 = (1, 0, 0), (0, 1, 0), (0, 0, 1) , B 2 = (1, 0, 0), (1, 1, 0), (1, 1, 1) be two ordered bases of R 3 . Define T : R 3 −→R 3 by T(x) = x. Then T((1, 0, 0)) = 1 (1, 0, 0) + 0 (1, 1, 0) + 0 (1, 1, 1), T((0, 1, 0)) = −1 (1, 0, 0) + 1 (1, 1, 0) + 0 (1, 1, 1), and T((0, 0, 1)) = 0 (1, 0, 0) + (−1) (1, 1, 0) + 1 (1, 1, 1). Thus, we have T[B 1 , B 2 ] = [[T((1, 0, 0))] B2 , [T((0, 1, 0))] B2 , [T((0, 0, 1))] B2 ] = [(1, 0, 0) t , (−1, 1, 0) t , (0, −1, 1) t ] = 1 −1 0 0 1 −1 0 0 1 ¸ ¸ ¸. Similarly check that T[B 1 , B 1 ] = 1 0 0 0 1 0 0 0 1 ¸ ¸ ¸. 4.3. RANK-NULLITY THEOREM 77 3. Let T : R 3 −→R 2 be define by T((x, y, z)) = (x +y −z, x + z). Let B 1 = (1, 0, 0), (0, 1, 0), (0, 0, 1) and B 2 = (1, 0), (0, 1) be the ordered bases of the domain and range space, respectively. Then T[B 1 , B 2 ] = ¸ 1 1 −1 1 0 1 ¸ . Check that that [T(x, y, z)] B2 = T[B 1 , B 2 ] [(x, y, z)] B1 . Exercise 4.2.4 Recall the space { n (R) ( the vector space of all polynomials of degree less than or equal to n). We define a linear transformation D : { n (R)−→{ n (R) by D(a 0 +a 1 x +a 2 x 2 + +a n x n ) = a 1 + 2a 2 x + +na n x n−1 . Find the matrix of the linear transformation D. However, note that the image of the linear transformation is contained in { n−1 (R). Remark 4.2.5 1. Observe that T[B 1 , B 2 ] = [[T(v 1 )] B2 , [T(v 2 )] B2 , . . . , [T(v n )] B2 ]. 2. It is important to note that [T(x)] B2 = T[B 1 , B 2 ] [x] B1 . That is, we multiply the matrix of the linear transformation with the coordinates [x] B1 , of the vector x ∈ V to obtain the coordinates of the vector T(x) ∈ W. 3. If A is an mn matrix, then A induces a linear transformation T A : R n −→R m , defined by T A (x) = Ax. We sometimes write A for T A . Suppose that the standard bases for R n and R m are the ordered bases B 1 and B 2 , respectively. Then observe that T[B 1 , B 2 ] = A. 4.3 Rank-Nullity Theorem Definition 4.3.1 (Range and Null Space) Let V, W be finite dimensional vector spaces over the same set of scalars and T : V −→W be a linear transformation. We define 1. {(T) = ¦T(x) : x ∈ V ¦, and 2. ^(T) = ¦x ∈ V : T(x) = 0¦. 
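For a map of the form T_A(x) = Ax, the range space is the column space of A and the null space is the solution set of Ax = 0, so both can be computed numerically. The sketch below assumes NumPy; the matrix A is our own illustrative choice and is not taken from the text.

```python
import numpy as np

# T_A : R^3 -> R^3, T_A(x) = A x, for an illustrative (hypothetical) matrix A.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])   # third row = first row + second row

rank = np.linalg.matrix_rank(A)    # dim R(T_A) = dimension of the column space
print("rho(T_A) =", rank)          # 2 for this A

# A basis of N(T_A) = {x : A x = 0}: the rows of Vt beyond the rank
# (right singular vectors for the zero singular values) span the null space.
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]
print("nu(T_A) =", len(null_basis))   # 1 here; note rho + nu = 3 = dim(R^3)
print(null_basis)                     # a scalar multiple of (1, -1, 1)
```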
Proposition 4.3.2 Let V and W be finite dimensional vector spaces and let T : V −→ W be a linear transformation. Suppose that (v_1, v_2, . . . , v_n) is an ordered basis of V. Then

1. (a) R(T) is a subspace of W.
   (b) R(T) = L(T(v_1), T(v_2), . . . , T(v_n)).
   (c) dim(R(T)) ≤ dim(W).

2. (a) N(T) is a subspace of V.
   (b) dim(N(T)) ≤ dim(V).

3. T is one-one ⇐⇒ N(T) = {0} is the zero subspace of V ⇐⇒ {T(v_i) : 1 ≤ i ≤ n} is a basis of R(T).

4. dim(R(T)) = dim(V) if and only if N(T) = {0}.

Proof. The results about R(T) and N(T) can be easily proved. We thus leave their proofs to the reader.

We now assume that T is one-one and show that N(T) = {0}. Let u ∈ N(T). Then, by definition, T(u) = 0. Also, for any linear transformation (see Proposition 4.1.3), T(0) = 0. Thus T(u) = T(0), and since T is one-one this implies u = 0. That is, N(T) = {0}.

Conversely, let N(T) = {0}. We need to show that T is one-one. Suppose that for some u, v ∈ V we have T(u) = T(v). Then, by linearity of T, T(u − v) = 0. This implies u − v ∈ N(T) = {0}, which in turn implies u = v. Hence, T is one-one.

The other parts can be proved similarly.

Remark 4.3.3 1. The space R(T) is called the range space of T and N(T) is called the null space of T.
2. We write ρ(T) = dim(R(T)) and ν(T) = dim(N(T)).
3. ρ(T) is called the rank of the linear transformation T and ν(T) is called the nullity of T.

Example 4.3.4 Determine the range and null space of the linear transformation T : R^3 −→ R^4 with T(x, y, z) = (x − y + z, y − z, x, 2x − 5y + 5z).
Solution: By definition, R(T) = L(T(1, 0, 0), T(0, 1, 0), T(0, 0, 1)). We therefore have
R(T) = L((1, 0, 1, 2), (−1, 1, 0, −5), (1, −1, 0, 5))
     = L((1, 0, 1, 2), (1, −1, 0, 5))
     = {α(1, 0, 1, 2) + β(1, −1, 0, 5) : α, β ∈ R}
     = {(α + β, −β, α, 2α + 5β) : α, β ∈ R}
     = {(x, y, z, w) ∈ R^4 : x + y − z = 0, 5y − 2z + w = 0}.
Also, by definition,
N(T) = {(x, y, z) ∈ R^3 : T(x, y, z) = 0}
     = {(x, y, z) ∈ R^3 : (x − y + z, y − z, x, 2x − 5y + 5z) = 0}
     = {(x, y, z) ∈ R^3 : x − y + z = 0, y − z = 0, x = 0, 2x − 5y + 5z = 0}
     = {(x, y, z) ∈ R^3 : y − z = 0, x = 0}
     = {(x, y, z) ∈ R^3 : y = z, x = 0}
     = {(0, y, y) ∈ R^3 : y arbitrary}
     = L((0, 1, 1)).

Exercise 4.3.5 1. Let T : V −→ W be a linear transformation and let {T(v_1), T(v_2), . . . , T(v_n)} be linearly independent in R(T). Prove that {v_1, v_2, . . . , v_n} ⊂ V is linearly independent.
2. Let T : R^2 −→ R^3 be defined by T(1, 0) = (1, 0, 0) and T(0, 1) = (1, 0, 0). Then the vectors (1, 0) and (0, 1) are linearly independent whereas T(1, 0) and T(0, 1) are linearly dependent.
3. Is there a linear transformation T : R^3 −→ R^2 such that T(1, −1, 1) = (1, 2) and T(−1, 1, 2) = (1, 0)?
4. Recall the vector space P_n(R). Define a linear transformation D : P_n(R) −→ P_n(R) by D(a_0 + a_1 x + a_2 x^2 + · · · + a_n x^n) = a_1 + 2a_2 x + · · · + n a_n x^{n−1}. Describe the null space and range space of D. Note that the range space is contained in the space P_{n−1}(R).
5. Let T : R^3 −→ R^3 be defined by T(1, 0, 0) = (0, 0, 1), T(1, 1, 0) = (1, 1, 1) and T(1, 1, 1) = (1, 1, 0).
   (a) Find T(x, y, z) for x, y, z ∈ R.
   (b) Find R(T) and N(T). Also calculate ρ(T) and ν(T).
   (c) Show that T^3 = T and find the matrix of the linear transformation with respect to the standard basis.
6. Let T : R^2 −→ R^2 be a linear transformation with T((3, 4)) = (0, 1) and T((−1, 1)) = (2, 3). Find the matrix representation T[B, B] of T with respect to the ordered basis B = ((1, 0), (1, 1)) of R^2.
7. 
Determine a linear transformation T : R 3 −→R 3 whose range space is L¦(1, 2, 0), (0, 1, 1), (1, 3, 1)¦. 8. Suppose the following chain of matrices is given. A −→B 1 −→B 1 −→B 2 −→B k−1 −→B k −→B. If row space of B is in the row space of B k and the row space of B l is in the row space of B l−1 for 2 ≤ l ≤ k then show that the row space of B is in the row space of A. We now state and prove the rank-nullity Theorem. This result also follows from Proposition 4.3.2. Theorem 4.3.6 (Rank Nullity Theorem) Let T : V −→W be a linear transformation and V be a finite dimensional vector space. Then dim({(T)) + dim(^(T)) = dim(V ), or equivalently ρ(T) +ν(T) = dim(V ). 80 CHAPTER 4. LINEAR TRANSFORMATIONS Proof. Let dim(V ) = n and dim(^(T)) = r. Suppose ¦u 1 , u 2 , . . . , u r ¦ is a basis of ^(T). Since ¦u 1 , u 2 , . . . , u r ¦ is a linearly independent set in V, we can extend it to form a basis of V (see Corollary 3.3.15). So, there exist vectors ¦u r+1 , u r+2 , . . . , u n ¦ such that ¦u 1 , . . . , u r , u r+1 , . . . , u n ¦ is a basis of V. Therefore, by Proposition 4.3.2 {(T) = L(T(u 1 ), T(u 2 ), . . . , T(u n )) = L(0, . . . , 0, T(u r+1 ), T(u r+2 ), . . . , T(u n )) = L(T(u r+1 ), T(u r+2 ), . . . , T(u n )). We now prove that the set ¦T(u r+1 ), T(u r+2 ), . . . , T(u n )¦ is linearly independent. Suppose the set is not linearly independent. Then, there exists scalars, α r+1 , α r+2 , . . . , α n , not all zero such that α r+1 T(u r+1 ) +α r+2 T(u r+2 ) + +α n T(u n ) = 0. That is, T(α r+1 u r+1 r+2 u r+2 + +α n u n ) = 0. So, by definition of ^(T), α r+1 u r+1 r+2 u r+2 + +α n u n ∈ ^(T) = L(u 1 , . . . , u r ). Hence, there exists scalars α i , 1 ≤ i ≤ r such that α r+1 u r+1 r+2 u r+2 + +α n u n = α 1 u 1 2 u 2 + +α r u r . That is, α 1 u 1 + + +α r u r −α r+1 u r+1 − −α n u n = 0. But the set ¦u 1 , u 2 , . . . , u n ¦ is a basis of V and so linearly independent. Thus by definition of linear independence α i = 0 for all i, 1 ≤ i ≤ n. In other words, we have shown that ¦T(u r+1 ), T(u r+2 ), . . . , T(u n )¦ is a basis of {(T). Hence, dim({(T)) + dim(^(T)) = (n −r) +r = n = dim(V ). Using the Rank-nullity theorem, we give a short proof of the following result. Corollary 4.3.7 Let T : V −→V be a linear transformation on a finite dimensional vector space V. Then T is one-one ⇐⇒T is onto ⇐⇒T is invertible. Proof. By Proposition 4.3.2, T is one-one if and only if ^(T) = ¦0¦. By the rank-nullity Theorem 4.3.6 ^(T) = ¦0¦ is equivalent to the condition dim({(T)) = dim(V ). Or equivalently T is onto. By definition, T is invertible if T is one-one and onto. But we have shown that T is one-one if and only if T is onto. Thus, we have the last equivalent condition. Remark 4.3.8 Let V be a finite dimensional vector space and let T : V −→V be a linear transformation. If either T is one-one or T is onto, then T is invertible. The following are some of the consequences of the rank-nullity theorem. The proof is left as an 4.3. RANK-NULLITY THEOREM 81 Corollary 4.3.9 The following are equivalent for an mn real matrix A. 1. Rank (A) = k. 2. There exist exactly k rows of A that are linearly independent. 3. There exist exactly k columns of A that are linearly independent. 4. There is a k k submatrix of A with non-zero determinant and every (k + 1) (k + 1) submatrix of A has zero determinant. 5. The dimension of the range space of A is k. 6. There is a subset of R m consisting of exactly k linearly independent vectors b 1 , b 2 , . . . , b k such that the system Ax = b i for 1 ≤ i ≤ k is consistent. 7. 
The dimension of the null space of A = n −k. Exercise 4.3.10 1. Let T : V −→W be a linear transformation. (a) If V is finite dimensional then show that the null space and the range space of T are also finite dimensional. (b) If V and W are both finite dimensional then show that i. if dim(V ) < dim(W) then T is onto. ii. if dim(V ) > dim(W) then T is not one-one. 2. Let A be an mn real matrix. Then (a) if n > m, then the system Ax = 0 has infinitely many solutions, (b) if n < m, then there exists a non-zero vector b = (b 1 , b 2 , . . . , b m ) t such that the system Ax = b does not have any solution. 3. Let A be an mn matrix. Prove that Row Rank (A) = Column Rank (A). [Hint: Define T A : R n −→R m by T A (v) = Av for all v ∈ R n . Let Row Rank (A) = r. Use Theorem 2.5.1 to show, Ax = 0 has n −r linearly independent solutions. This implies, ν(T A ) = dim(¦v ∈ R n : T A (v) = 0¦) = dim(¦v ∈ R n : Av = 0¦) = n −r. Now observe that {(T A ) is the linear span of columns of A and use the rank-nullity Theorem 4.3.6 to get the required result.] 4. Prove Theorem 2.5.1. [Hint: Consider the linear system of equation Ax = b with the orders of A, x and b, respectively as m n, n 1 and m 1. Define a linear transformation T : R n −→R m by T(v) = Av. First observe that if the solution exists then b is a linear combination of the columns of A and the linear span of the columns of A give us {(T). Note that ρ(A) = column rank(A) = dim({(T)) = (say). Then for part i) one can proceed as follows. i) Let C i1 , C i2 , . . . , C i be the linearly independent columns of A. Then rank(A) < rank([A b]) implies that ¦C i1 , C i2 , . . . , C i , b¦ is linearly independent. Hence b ∈ L(C i1 , C i2 , . . . , C i ). Hence, the system doesn’t have any solution. On similar lines prove the other two parts.] 5. Let T, S : V −→V be linear transformations with dim(V ) = n. 82 CHAPTER 4. LINEAR TRANSFORMATIONS (a) Show that {(T +S) ⊂ {(T) +{(S). Deduce that ρ(T +S) ≤ ρ(T) +ρ(S). Hint: For two subspaces M, N of a vector space V, recall the definition of the vector subspace M +N. (b) Use the above and the rank-nullity Theorem 4.3.6 to prove ν(T +S) ≥ ν(T) +ν(S) −n. 6. Let V be the complex vector space of all complex polynomials of degree at most n. Given k distinct complex numbers z 1 , z 2 , . . . , z k , we define a linear transformation T : V −→C k by T P(z) = P(z 1 ), P(z 2 ), . . . , P(z k ) . For each k ≥ 1, determine the dimension of the range space of T. 7. Let A be an n n real matrix with A 2 = A. Consider the linear transformation T A : R n −→ R n , defined by T A (v) = Av for all v ∈ R n . Prove that (a) T A ◦ T A = T A (use the condition A 2 = A). (b) ^(T A ) ∩ {(T A ) = ¦0¦. Hint: Let x ∈ ^(T A ) ∩ {(T A ). This implies T A (x) = 0 and x = T A (y) for some y ∈ R n . So, x = T A (y) = (T A ◦ T A )(y) = T A T A (y) = T A (x) = 0. (c) R n = ^(T A ) +{(T A ). Hint: Let ¦v 1 , . . . , v k ¦ be a basis of ^(T A ). Extend it to get a basis ¦v 1 , . . . , v k , v k+1 , . . . , v n ¦ of R n . Then by Rank-nullity Theorem 4.3.6, ¦T A (v k+1 ), . . . , T A (v n )¦ is a basis of {(T A ). 4.4 Similarity of Matrices In the last few sections, the following has been discussed in detail: Given a finite dimensional vector space V of dimension n, we fixed an ordered basis B. For any v ∈ V, we calculated the column vector [v] B , to obtain the coordinates of v with respect to the ordered basis B. Also, for any linear transformation T : V −→V, we got an n n matrix T[B, B], the matrix of T with respect to the ordered basis B. 
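For readers who like to compute, the following Python/NumPy sketch (not part of the original notes; the transformation and the basis are arbitrary choices) shows how such a matrix T[B, B] is assembled column by column: the j-th column is the coordinate vector [T(u_j)]_B.

import numpy as np

# Sketch: building T[B, B] for T(x, y, z) = (x + y, y + z, z) and the ordered
# basis B = ((1,0,0), (1,1,0), (1,1,1)).  Both T and B are illustrative choices.
T = lambda v: np.array([v[0] + v[1], v[1] + v[2], v[2]])
B = np.column_stack([[1., 0., 0.], [1., 1., 0.], [1., 1., 1.]])  # basis vectors as columns

# j-th column of T[B, B] is [T(u_j)]_B, i.e. the solution of B c = T(u_j)
T_B = np.column_stack([np.linalg.solve(B, T(B[:, j])) for j in range(3)])
print(T_B)

# Consistency check: [T(v)]_B equals T[B, B] [v]_B for any v
v = np.array([2., -1., 3.])
print(np.allclose(np.linalg.solve(B, T(v)), T_B @ np.linalg.solve(B, v)))   # True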
That is, once an ordered basis of V is fixed, every linear transformation is represented by a matrix with entries from the scalars. In this section, we understand the matrix representation of T in terms of different bases B 1 and B 2 of V. That is, we relate the two n n matrices T[B 1 , B 1 ] and T[B 2 , B 2 important theorem. This theorem also enables us to understand why the matrix product is defined somewhat differently. Theorem 4.4.1 (Composition of Linear Transformations) Let V, W and Z be finite dimensional vec- tor spaces with ordered bases B 1 , B 2 , B 3 , respectively. Also, let T : V −→W and S : W−→Z be linear transformations. Then the composition map S ◦ T : V −→Z is a linear transformation and (S ◦ T) [B 1 , B 3 ] = S[B 2 , B 3 ] T[B 1 , B 2 ]. Proof. Let B 1 = (u 1 , u 2 , . . . , u n ), B 2 = (v 1 , v 2 , . . . , v m ) and B 3 = (w 1 , w 2 , . . . , w p ) be ordered bases of V, W and Z, respectively. Then (S ◦ T) [B 1 , B 3 ] = [[S ◦ T(u 1 )] B3 , [S ◦ T(u 2 )] B3 , . . . , [S ◦ T(u n )] B3 ]. 4.4. SIMILARITY OF MATRICES 83 Now for 1 ≤ t ≤ n, (S ◦ T) (ut) = S(T(ut)) = S m X j=1 (T[B1, B2])jtvj « = m X j=1 (T[B1, B2])jtS(vj) = m X j=1 (T[B1, B2])jt p X k=1 (S[B2, B3]) kj w k = p X k=1 ( m X j=1 (S[B2, B3]) kj (T[B1, B2])jt)w k = p X k=1 (S[B2, B3] T[B1, B2]) kt w k . So, [(S ◦ T) (u t )] B3 = ((S[B 2 , B 3 ] T[B 1 , B 2 ]) 1t , . . . , (S[B 2 , B 3 ] T[B 1 , B 2 ]) pt ) t . Hence, (S ◦ T) [B 1 , B 3 ] = [(S ◦ T) (u 1 )] B3 , . . . , [(S ◦ T) (u n )] B3 = S[B 2 , B 3 ] T[B 1 , B 2 ]. This completes the proof. Proposition 4.4.2 Let V be a finite dimensional vector space and let T, S : V −→V be a linear transforma- tions. Then ν(T) +ν(S) ≥ ν(T ◦ S) ≥ max¦ν(T), ν(S)¦. Proof. We first prove the second inequality. Suppose that v ∈ ^(S). Then T ◦ S(v) = T(S(v)) = T(0) = 0. So, ^(S) ⊂ ^(T ◦ S). Therefore, ν(S) ≤ ν(T ◦ S). Suppose dim(V ) = n. Then using the rank-nullity theorem, observe that ν(T ◦ S) ≥ ν(T) ⇐⇒n −ν(T ◦ S) ≤ n −ν(T) ⇐⇒ρ(T ◦ S) ≤ ρ(T). So, to complete the proof of the second inequality, we need to show that {(T ◦ S) ⊂ {(T). This is true as {(S) ⊂ V. We now prove the first inequality. Let k = ν(S) and let ¦v 1 , v 2 , . . . , v k ¦ be a basis of ^(S). Clearly, ¦v 1 , v 2 , . . . , v k ¦ ⊂ ^(T ◦ S) as T(0) = 0. We extend it to get a basis ¦v 1 , v 2 , . . . , v k , u 1 , u 2 , . . . , u ¦ of ^(T ◦ S). Claim: The set ¦S(u 1 ), S(u 2 ), . . . , S(u )¦ is linearly independent subset of ^(T). As u 1 , u 2 , . . . , u ∈ ^(T ◦ S), the set ¦S(u 1 ), S(u 2 ), . . . , S(u )¦ is a subset of ^(T). Let if possible the given set be linearly dependent. Then there exist non-zero scalars c 1 , c 2 , . . . , c such that c 1 S(u 1 ) +c 2 S(u 2 ) + +c S(u ) = 0. So, the vector ¸ i=1 c i u i ∈ ^(S) and is a linear combination of the basis vectors v 1 , v 2 , . . . , v k of ^(S). Therefore, there exist scalars α 1 , α 2 , α k such that ¸ i=1 c i u i = k ¸ i=1 α i v i . Or equivalently ¸ i=1 c i u i + k ¸ i=1 (−α i )v i = 0. 84 CHAPTER 4. LINEAR TRANSFORMATIONS That is, the 0 vector is a non-trivial linear combination of the basis vectors v 1 , v 2 , . . . , v k , u 1 , u 2 , . . . , u of ^(T ◦ S). A contradiction. Thus, the set ¦S(u 1 ), S(u 2 ), . . . , S(u )¦ is a linearly independent subset of ^(T) and so ν(T) ≥ . Hence, ν(T ◦ S) = k + ≤ ν(S) +ν(T). Recall from Theorem 4.1.8 that if T is an invertible linear Transformation, then T −1 : V −→V is a linear transformation defined by T −1 (u) = v whenever T(v) = u. We now state an important result about inverse of a linear transformation. 
The reader is required to supply the proof (use Theorem 4.4.1). Theorem 4.4.3 (Inverse of a Linear Transformation) Let V be a finite dimensional vector space with ordered bases B 1 and B 2 . Also let T : V −→V be an invertible linear transformation. Then the matrix of T and T −1 are related by T[B 1 , B 2 ] −1 = T −1 [B 2 , B 1 ]. Exercise 4.4.4 For the linear transformations given below, find the matrix T[B, B]. 1. Let B = (1, 1, 1), (1, −1, 1), (1, 1, −1) be an ordered basis of R 3 . Define T : R 3 −→R 3 by T(1, 1, 1) = (1, −1, 1), T(1, −1, 1) = (1, 1, −1), and T(1, 1, −1) = (1, 1, 1). Is T an invertible linear transforma- tion? Give reasons. 2. Let B = 1, x, x 2 , x 3 ) be an ordered basis of { 3 (R). Define T : { 3 (R)−→{ 3 (R) by T(1) = 1, T(x) = 1 +x, T(x 2 ) = (1 +x) 2 , and T(x 3 ) = (1 +x) 3 . Prove that T is an invertible linear transformation. Also, find T −1 [B, B]. Let V be a vector space with dim(V ) = n. Let B 1 = (u 1 , u 2 , . . . , u n ) and B 2 = (v 1 , v 2 , . . . , v n ¦ be two ordered bases of V. Recall from Definition 4.1.5 that I : V −→V is the identity linear transformation defined by I(x) = x for every x ∈ V. Suppose x ∈ V with [x] B1 = (α 1 , α 2 , . . . , α n ) t and [x] B2 = 1 , β 2 , . . . , β n ) t . We now express each vector in B 2 as a linear combination of the vectors from B 1 . Since v i ∈ V, for 1 ≤ i ≤ n, and B 1 is a basis of V, we can find scalars a ij , 1 ≤ i, j ≤ n such that v i = I(v i ) = n ¸ j=1 a ji u j for all i, 1 ≤ i ≤ n. Hence, [I(v i )] B1 = [v i ] B1 = (a 1i , a 2i , , a ni ) t and I[B 2 , B 1 ] = [[I(v 1 )] B1 , [I(v 2 )] B1 , . . . , [I(v n )] B1 ] = a 11 a 12 a 1n a 21 a 22 a 2n . . . . . . . . . . . . a n1 a n2 a nn ¸ ¸ ¸ ¸ ¸ ¸ . Thus, we have proved the following result. Theorem 4.4.5 (Change of Basis Theorem) Let V be a finite dimensional vector space with ordered bases B 1 = (u 1 , u 2 , . . . , u n ¦ and B 2 = (v 1 , v 2 , . . . , v n ¦. Suppose x ∈ V with [x] B1 = (α 1 , α 2 , . . . , α n ) t and [x] B2 = (β 1 , β 2 , . . . , β n ) t . Then [x] B1 = I[B 2 , B 1 ] [x] B2 . 4.4. SIMILARITY OF MATRICES 85 Equivalently, α 1 α 2 . . . α n ¸ ¸ ¸ ¸ ¸ ¸ = a 11 a 12 a 1n a 21 a 22 a 2n . . . . . . . . . . . . a n1 a n2 a nn ¸ ¸ ¸ ¸ ¸ ¸ β 1 β 2 . . . β n ¸ ¸ ¸ ¸ ¸ ¸ . Note: Observe that the identity linear transformation I : V −→V defined by I(x) = x for every x ∈ V is invertible and I[B 2 , B 1 ] −1 = I −1 [B 1 , B 2 ] = I[B 1 , B 2 ]. Therefore, we also have [x] B2 = I[B 1 , B 2 ] [x] B1 . Let V be a finite dimensional vector space and let B 1 and B 2 be two ordered bases of V. Let T : V −→V be a linear transformation. We are now in a position to relate the two matrices T[B 1 , B 1 ] and T[B 2 , B 2 ]. Theorem 4.4.6 Let V be a finite dimensional vector space and let B 1 = (u 1 , u 2 , . . . , u n ) and B 2 = (v 1 , v 2 , . . . , v n ) be two ordered bases of V. Let T : V −→V be a linear transformation with B = T[B 1 , B 1 ] and C = T[B 2 , B 2 ] as matrix representations of T in bases B 1 and B 2 . Also, let A = [a ij ] = I[B 2 , B 1 ], be the matrix of the identity linear transformation with respect to the bases B 1 and B 2 . Then BA = AC. Equivalently B = ACA −1 . Proof. For any x ∈ V , we represent [T(x)] B2 in two ways. Using Theorem 4.2.1, the first expression is [T(x)] B2 = T[B 2 , B 2 ] [x] B2 . (4.4.1) Using Theorem 4.4.5, the other expression is [T(x)] B2 = I[B 1 , B 2 ] [T(x)] B1 = I[B 1 , B 2 ] T[B 1 , B 1 ] [x] B1 = I[B 1 , B 2 ] T[B 1 , B 1 ] I[B 2 , B 1 ] [x] B2 . 
(4.4.2) Hence, using (4.4.1) and (4.4.2), we see that for every x ∈ V, I[B 1 , B 2 ] T[B 1 , B 1 ] I[B 2 , B 1 ] [x] B2 = T[B 2 , B 2 ] [x] B2 . Since the result is true for all x ∈ V, we get I[B 1 , B 2 ] T[B 1 , B 1 ] I[B 2 , B 1 ] = T[B 2 , B 2 ]. (4.4.3) That is, A −1 BA = C or equivalently ACA −1 = B. Another Proof: Let B = [b ij ] and C = [c ij ]. Then for 1 ≤ i ≤ n, T(u i ) = n ¸ j=1 b ji u j and T(v i ) = n ¸ j=1 c ji v j . So, for each j, 1 ≤ j ≤ n, T(vj) = T(I(vj)) = T( n X k=1 a kj u k ) = n X k=1 a kj T(u k ) = n X k=1 a kj ( n X =1 b k u ) = n X =1 ( n X k=1 b k a kj )u 86 CHAPTER 4. LINEAR TRANSFORMATIONS and therefore, [T(v j )] B1 = n ¸ k=1 b 1k a kj n ¸ k=1 b 2k a kj . . . n ¸ k=1 b nk a kj ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ = B a 1j a 2j . . . a nj ¸ ¸ ¸ ¸ ¸ ¸ . Hence T[B 2 , B 1 ] = BA. Also, for each j, 1 ≤ j ≤ n, T(v j ) = n ¸ k=1 c kj v k = n ¸ k=1 c kj I(v k ) = n ¸ k=1 c kj ( n ¸ =1 a k u ) = n ¸ =1 ( n ¸ k=1 a k c kj )u and so [T(v j )] B1 = n ¸ k=1 a 1k c kj n ¸ k=1 a 2k c kj . . . n ¸ k=1 a nk c kj ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ = A c 1j c 2j . . . c nj ¸ ¸ ¸ ¸ ¸ ¸ . This gives us T[B 2 , B 1 ] = AC. We thus have AC = T[B 2 , B 1 ] = BA. Let V be a vector space with dim(V ) = n, and let T : V −→V be a linear transformation. Then for each ordered basis B of V, we get an n n matrix T[B, B]. Also, we know that for any vector space we have infinite number of choices for an ordered basis. So, as we change an ordered basis, the matrix of the linear transformation changes. Theorem 4.4.6 tells us that all these matrices are related. Now, let A and B be two nn matrices such that P −1 AP = B for some invertible matrix P. Recall the linear transformation T A : R n −→R n defined by T A (x) = Ax for all x ∈ R n . Then we have seen that if the standard basis of R n is the ordered basis B, then A = T A [B, B]. Since P is an invertible matrix, its columns are linearly independent and hence we can take its columns as an ordered basis B 1 . Then note that B = T A [B 1 , B 1 ]. The above observations lead to the following remark and the definition. Remark 4.4.7 The identity (4.4.3) shows how the matrix representation of a linear transformation T changes if the ordered basis used to compute the matrix representation is changed. Hence, the matrix I[B 1 , B 2 ] is called the B 1 : B 2 change of basis matrix. Definition 4.4.8 (Similar Matrices) Two square matrices B and C of the same order are said to be similar if there exists a non-singular matrix P such that B = PCP −1 or equivalently BP = PC. Remark 4.4.9 Observe that if A = T[B, B] then ¦S −1 AS : S is n n invertible matrix ¦ is the set of all matrices that are similar to the given matrix A. Therefore, similar matrices are just different matrix representations of a single linear transformation. 4.4. SIMILARITY OF MATRICES 87 Example 4.4.10 1. Consider { 2 (R), with ordered bases B 1 = 1, 1 +x, 1 +x +x 2 and B 2 = 1 +x −x 2 , 1 + 2x +x 2 , 2 + x +x 2 . Then [1 +x −x 2 ] B1 = 0 1 + 2 (1 +x) + (−1) (1 +x +x 2 ) = (0, 2, −1) t , [1 + 2x +x 2 ] B1 = (−1) 1 + 1 (1 +x) + 1 (1 +x +x 2 ) = (−1, 1, 1) t , and [2 +x +x 2 ] B1 = 1 1 + 0 (1 +x) + 1 (1 +x +x 2 ) = (1, 0, 1) t . Therefore, I[B2, B1] = [[I(1 +x −x 2 )]B 1 , [I(1 + 2x +x 2 )]B 1 , [I(2 +x +x 2 )]B 1 ] = [[1 +x −x 2 ]B 1 , [1 + 2x +x 2 ]B 1 , [2 +x +x 2 ]B 1 ] = 2 6 4 0 −1 1 2 1 0 −1 1 1 3 7 5. Find the matrices T[B 1 , B 1 ] and T[B 2 , B 2 ]. Also verify that T[B 2 , B 2 ] = I[B 1 , B 2 ] T[B 1 , B 1 ] I[B 2 , B 1 ] = I −1 [B 2 , B 1 ] T[B 1 , B 1 ] I[B 2 , B 1 ]. 2. 
Consider two bases B 1 = (1, 0, 0), (1, 1, 0), (1, 1, 1) and B 2 = (1, 1, −1), (1, 2, 1), (2, 1, 1) of R 3 . Suppose T : R 3 −→R 3 is a linear transformation defined by T((x, y, z)) = (x +y, x +y + 2z, y −z). Then T[B 1 , B 1 ] = 0 0 −2 1 1 4 0 1 0 ¸ ¸ ¸, and T[B 2 , B 2 ] = −4/5 1 8/5 −2/5 2 9/5 8/5 0 −1/5 ¸ ¸ ¸. Find I[B 1 , B 2 ] and verify, I[B 1 , B 2 ] T[B 1 , B 1 ] I[B 2 , B 1 ] = T[B 2 , B 2 ]. Check that, T[B 1 , B 1 ] I[B 2 , B 1 ] = I[B 2 , B 1 ] T[B 2 , B 2 ] = 2 −2 −2 −2 4 5 2 1 0 ¸ ¸ ¸. Exercise 4.4.11 1. Let V be an n-dimensional vector space and let T : V −→V be a linear transformation. Suppose T has the property that T n−1 = 0 but T n = 0. (a) Then prove that there exists a vector u ∈ V such that the set ¦u, T(u), . . . , T n−1 (u)¦ is a basis of V. (b) Let B = (u, T(u), . . . , T n−1 (u)). Then prove that T[B, B] = 0 0 0 0 1 0 0 0 0 1 0 0 . . . . . . . . . . . . 0 0 1 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ . 88 CHAPTER 4. LINEAR TRANSFORMATIONS (c) Let A be an n n matrix with the property that A n−1 = 0 but A n = 0. Then prove that A is similar to the matrix given above. 2. Let T : R 3 −→R 3 be a linear transformation given by T((x, y, z)) = (x +y + 2z, x −y −3z, 2x + 3y +z). Let B be the standard basis and B 1 = (1, 1, 1), (1, −1, 1), (1, 1, 2) be another ordered basis. (a) Find the matrices T[B, B] and T[B 1 , B 1 ]. (b) Find the matrix P such that P −1 T[B, B] P = T[B 1 , B 1 ]. 3. Let T : R 3 −→R 3 be a linear transformation given by T((x, y, z)) = (x, x +y, x +y +z). Let B be the standard basis and B 1 = (1, 0, 0), (1, 1, 0), (1, 1, 1) be another ordered basis. (a) Find the matrices T[B, B] and T[B 1 , B 1 ]. (b) Find the matrix P such that P −1 T[B, B] P = T[B 1 , B 1 ]. 4. Let B 1 = (1, 2, 0), (1, 3, 2), (0, 1, 3) and B 2 = (1, 2, 1), (0, 1, 2), (1, 4, 6) be two ordered bases of R 3 . (a) Find the change of basis matrix P from B 1 to B 2 . (b) Find the change of basis matrix Q from B 2 to B 1 . (c) Verify that PQ = I = QP. (d) Find the change of basis matrix from the standard basis of R 3 to B 1 . What do you notice? Chapter 5 Inner Product Spaces We had learned that given vectors i and j (which are at an angle of 90 ) in a plane, any vector in the plane is a linear combination of the vectors i and j. In this section, we investigate a method by which any basis of a finite dimensional vector can be transferred to another basis in such a way that the vectors in the new basis are at an angle of 90 to each other. To do this, we start by defining a notion of inner product (dot product) in a vector space. This helps us in finding out whether two vectors are at 90 or not. 5.1 Definition and Basic Properties In R 2 , given two vectors x = (x 1 , x 2 ), y = (y 1 , y 2 ), we know the inner product x y = x 1 y 1 +x 2 y 2 . Note that for any x, y, z ∈ R 2 and α ∈ R, this inner product satisfies the conditions x (y +αz) = x y +αx z, x y = y x, and x x ≥ 0 and x x = 0 if and only if x = 0. Thus, we are motivated to define an inner product on an arbitrary vector space. Definition 5.1.1 (Inner Product) Let V (F) be a vector space over F. An inner product over V (F), denoted by ' , `, is a map, ' , ` : V V −→F such that for u, v, w ∈ V and a, b ∈ F 1. 'au +bv, w` = a'u, w` +b'v, w`, 2. 'u, v` = 'v, u`, the complex conjugate of 'u, v`, and 3. 'u, u` ≥ 0 for all u ∈ V and equality holds if and only if u = 0. Definition 5.1.2 (Inner Product Space) Let V be a vector space with an inner product ' , `. Then (V, ' , `) is called an inner product space, in short denoted by ips. 
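Before the examples, here is a small numerical sanity check (a Python/NumPy sketch with arbitrarily chosen vectors, not a proof) that the standard dot product on R^3 satisfies the three defining conditions: linearity in the first argument, symmetry, and positivity.

import numpy as np

# Sketch: spot-checking the three inner product axioms for the standard dot
# product on R^3 at randomly chosen vectors and scalars.
rng = np.random.default_rng(0)
u, v, w = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
a, b = 2.5, -1.3

ip = np.dot
print(np.isclose(ip(a*u + b*v, w), a*ip(u, w) + b*ip(v, w)))         # linearity in the first slot
print(np.isclose(ip(u, v), ip(v, u)))                                 # symmetry (real case)
print(ip(u, u) > 0 and np.isclose(ip(np.zeros(3), np.zeros(3)), 0))   # positivity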
Example 5.1.3 The first two examples given below are called the standard inner product or the dot product on R n and C n , respectively.. 1. Let V = R n be the real vector space of dimension n. Given two vectors u = (u 1 , u 2 , . . . , u n ) and v = (v 1 , v 2 , . . . , v n ) of V, we define 'u, v` = u 1 v 1 +u 2 v 2 + +u n v n = uv t . Verify ' , ` is an inner product. 89 90 CHAPTER 5. INNER PRODUCT SPACES 2. Let V = C n be a complex vector space of dimension n. Then for u = (u 1 , u 2 , . . . , u n ) and v = (v 1 , v 2 , . . . , v n ) in V, check that 'u, v` = u 1 v 1 +u 2 v 2 + +u n v n = uv is an inner product. 3. Let V = R 2 and let A = ¸ 4 −1 −1 2 ¸ . Define 'x, y` = xAy t . Check that ' , ` is an inner product. Hint: Note that xAy t = 4x 1 y 1 −x 1 y 2 −x 2 y 1 + 2x 2 y 2 . 4. let x = (x 1 , x 2 , x 3 ), y = (y 1 , y 2 , y 3 ) ∈ R 3 ., Show that 'x, y` = 10x 1 y 1 + 3x 1 y 2 + 3x 2 y 1 + 2x 2 y 2 + x 2 y 3 +x 3 y 2 +x 3 y 3 is an inner product in R 3 (R). 5. Consider the real vector space R 2 . In this example, we define three products that satisfy two conditions out of the three conditions for an inner product. Hence the three products are not inner products. (a) Define 'x, y` = '(x 1 , x 2 ), (y 1 , y 2 )` = x 1 y 1 . Then it is easy to verify that the third condition is not valid whereas the first two conditions are valid. (b) Define 'x, y` = '(x 1 , x 2 ), (y 1 , y 2 )` = x 2 1 + y 2 1 + x 2 2 + y 2 2 . Then it is easy to verify that the first condition is not valid whereas the second and third conditions are valid. (c) Define 'x, y` = '(x 1 , x 2 ), (y 1 , y 2 )` = x 1 y 3 1 + x 2 y 3 2 . Then it is easy to verify that the second condition is not valid whereas the first and third conditions are valid. Remark 5.1.4 Note that in parts 1 and 2 of Example 5.1.3, the inner products are uv t and uv , respectively. This occurs because the vectors u and v are row vectors. In general, u and v are taken as column vectors and hence one uses the notation u t v or u v. Exercise 5.1.5 Verify that inner products defined in parts 3 and 4 of Example 5.1.3, are indeed inner products. Definition 5.1.6 (Length/Norm of a Vector) For u ∈ V, we define the length (norm) of u, denoted |u|, by |u| = 'u, u`, the positive square root. A very useful and a fundamental inequality concerning the inner product is due to Cauchy and Schwartz. The next theorem gives the statement and a proof of this inequality. Theorem 5.1.7 (Cauchy-Schwartz inequality) Let V (F) be an inner product space. Then for any u, v ∈ V ['u, v`[ ≤ |u| |v|. The equality holds if and only if the vectors u and v are linearly dependent. Further, if u = 0, then v = 'v, u |u| ` u |u| . Proof. If u = 0, then the inequality holds. Let u = 0. Note that 'λu + v, λu +v` ≥ 0 for all λ ∈ F. In particular, for λ = − 'v, u` |u| 2 , we get 0 ≤ 'λu +v, λu +v` = λλ|u| 2 +λ'u, v` +λ'v, u` +|v| 2 = 'v, u` |u| 2 'v, u` |u| 2 |u| 2 'v, u` |u| 2 'u, v` − 'v, u` |u| 2 'v, u` +|v| 2 = |v| 2 ['v, u`[ 2 |u| 2 . 5.1. DEFINITION AND BASIC PROPERTIES 91 Or, in other words ['v, u`[ 2 ≤ |u| 2 |v| 2 and the proof of the inequality is over. Observe that if u = 0 then the equality holds if and only of λu +v = 0 for λ = − 'v, u` |u| 2 . That is, u and v are linearly dependent. We leave it for the reader to prove v = 'v, u |u| ` u |u| . Definition 5.1.8 (Angle between two vectors) Let V be a real vector space. Then for every u, v ∈ V, by the Cauchy-Schwartz inequality, we have −1 ≤ 'u, v` |u| |v| ≤ 1. We know that cos : [0, π] −→ [−1, 1] is an one-one and onto function. 
Therefore, for every real number 'u, v` |u| |v| , there exists a unique θ, 0 ≤ θ ≤ π, such that cos θ = 'u, v` |u| |v| . 1. The real number θ with 0 ≤ θ ≤ π and satisfying cos θ = 'u, v` |u| |v| is called the angle between the two vectors u and v in V. 2. The vectors u and v in V are said to be orthogonal if 'u, v` = 0. 3. A set of vectors ¦u 1 , u 2 , . . . , u n ¦ is called mutually orthogonal if 'u i , u j ` = 0 for all 1 ≤ i = j ≤ n. Exercise 5.1.9 1. Let ¦e 1 , e 2 , . . . , e n ¦ be the standard basis of R n . Then prove that with respect to the standard inner product on R n , the vectors e i satisfy the following: (a) |e i | = 1 for 1 ≤ i ≤ n. (b) 'e i , e j ` = 0 for 1 ≤ i = j ≤ n. 2. Recall the following inner product on R 2 : for x = (x 1 , x 2 ) t and y = (y 1 , y 2 ) t , 'x, y` = 4x 1 y 1 −x 1 y 2 −x 2 y 1 + 2x 2 y 2 . (a) Find the angle between the vectors e 1 = (1, 0) t and e 2 = (0, 1) t . (b) Let u = (1, 0) t . Find v ∈ R 2 such that 'v, u` = 0. (c) Find two vectors x, y ∈ R 2 , such that |x| = |y| = 1 and 'x, y` = 0. 3. Find an inner product in R 2 such that the following conditions hold: |(1, 2)| = |(2, −1)| = 1, and '(1, 2), (2, −1)` = 0. [Hint: Consider a symmetric matrix A = ¸ a b b c ¸ . Define 'x, y` = y t Ax and solve a system of 3 equations for the unknowns a, b, c.] 92 CHAPTER 5. INNER PRODUCT SPACES 4. Let V be a complex vector space with dim(V ) = n. Fix an ordered basis B = (u 1 , u 2 , . . . , u n ). Define a map ' , ` : V V −→C by 'u, v` = n ¸ i=1 a i b i whenever [u] B = (a 1 , a 2 , . . . , a n ) t and [v] B = (b 1 , b 2 , . . . , b n ) t . Show that the above defined map is indeed an inner product. 5. Let x = (x 1 , x 2 , x 3 ), y = (y 1 , y 2 , y 3 ) ∈ R 3 . Show that 'x, y` = 10x 1 y 1 + 3x 1 y 2 + 3x 2 y 1 + 2x 2 y 2 +x 2 y 3 +x 3 y 2 +x 3 y 3 is an inner product in R 3 (R). With respect to this inner product, find the angle between the vectors (1, 1, 1) and (2, −5, 2). 6. Consider the set M n×n (R) of all real square matrices of order n. For A, B ∈ M n×n (R) we define 'A, B` = tr(AB t ). Then 'A +B, C` = tr (A +B)C t = tr(AC t ) +tr(BC t ) = 'A, C` +'B, C`. 'A, B` = tr(AB t ) = tr( (AB t ) t ) = tr(BA t ) = 'B, A`. Let A = (a ij ). Then 'A, A` = tr(AA t ) = n ¸ i=1 (AA t ) ii = n ¸ i=1 n ¸ j=1 a ij a ij = n ¸ i=1 n ¸ j=1 a 2 ij and therefore, 'A, A` > 0 for all non-zero matrices A. So, it is clear that 'A, B` is an inner product on M n×n (R). 7. Let V be the real vector space of all continuous functions with domain [−2π, 2π]. That is, V = C[−2π, 2π]. Then show that V is an inner product space with inner product 1 −1 f(x)g(x)dx. For different values of m and n, find the angle between the functions cos(mx) and sin(nx). 8. Let V be an inner product space. Prove that |u +v| ≤ |u| +|v| for every u, v ∈ V. This inequality is called the triangle inequality. 9. Let z 1 , z 2 , . . . , z n ∈ C. Use the Cauchy-Schwartz inequality to prove that [z 1 +z 2 + +z n [ ≤ n([z 1 [ 2 +[z 2 [ 2 + +[z n [ 2 ). When does the equality hold? 10. Let x, y ∈ R n . Observe that 'x, y` = 'y, x`. Hence or otherwise prove the following: (a) 'x, y` = 0 ⇐⇒|x −y| 2 = |x| 2 +|y| 2 , (This is called Pythagoras Theorem). (b) |x| = |y| ⇐⇒'x+y, x−y` = 0, (x and y form adjacent sides of a rhombus as the diagonals x +y and x −y are orthogonal). (c) |x +y| 2 + |x −y| 2 = 2|x| 2 + 2|y| 2 , (This is called the Parallelogram Law). (d) 4'x, y` = |x +y| 2 −|x −y| 2 (This is called the polarisation identity). Remark 5.1.10 i. Suppose the norm of a vector is given. 
Then, the polarisation identity can be used to define an inner product. 5.1. DEFINITION AND BASIC PROPERTIES 93 ii. Observe that if 'x, y` = 0 then the parallelogram spanned by the vectors x and y is a rectangle. The above equality tells us that the lengths of the two diagonals are equal. Are these results true if x, y ∈ C n (C)? 11. Let x, y ∈ C n (C). Prove that (a) 4'x, y` = |x +y| 2 −|x −y| 2 +i|x +iy| 2 −i|x −iy| 2 . (b) If x = 0 then |x +ix| 2 = |x| 2 +|ix| 2 , even though 'x, ix` = 0. (c) If |x +y| 2 = |x| 2 +|y| 2 and |x +iy| 2 = |x| 2 +|iy| 2 then show that 'x, y` = 0. 12. Let V be an n-dimensional inner product space, with an inner product ' , `. Let u ∈ V be a fixed vector with |u| = 1. Then give reasons for the following statements. (a) Let S = ¦v ∈ V : 'v, u` = 0¦. Then S is a subspace of V of dimension n −1. (b) Let 0 = α ∈ F and let S = ¦v ∈ V : 'v, u` = α¦. Then S is not a subspace of V. (c) For any v ∈ S, there exists a vector v 0 ∈ S , such that v = v 0 +αu. Theorem 5.1.11 Let V be an inner product space. Let ¦u 1 , u 2 , . . . , u n ¦ be a set of non-zero, mutually orthogonal vectors of V. 1. Then the set ¦u 1 , u 2 , . . . , u n ¦ is linearly independent. 2. | n ¸ i=1 α i u i | 2 = n ¸ i=1 i [ 2 |u i | 2 ; 3. Let dim(V ) = n and also let |u i | = 1 for i = 1, 2, . . . , n. Then for any v ∈ V, v = n ¸ i=1 'v, u i `u i . In particular, 'v, u i ` = 0 for all i = 1, 2, . . . , n if and only if v = 0. Proof. Consider the set of non-zero, mutually orthogonal vectors ¦u 1 , u 2 , . . . , u n ¦. Suppose there exist scalars c 1 , c 2 , . . . , c n not all zero, such that c 1 u 1 +c 2 u 2 + +c n u n = 0. Then for 1 ≤ i ≤ n, we have 0 = '0, u i ` = 'c 1 u 1 +c 2 u 2 + +c n u n , u i ` = n ¸ j=1 c j 'u j , u i ` = c i as 'u j , u i ` = 0 for all j = i and 'u i , u i ` = 1. This gives a contradiction to our assumption that some of the c i ’s are non-zero. This establishes the linear independence of a set of non-zero, mutually orthogonal vectors. For the second part, using 'u i , u j ` = 0 if i = j |u i | 2 if i = j for 1 ≤ i, j ≤ n, we have | n ¸ i=1 α i u i | 2 = ' n ¸ i=1 α i u i , n ¸ i=1 α i u i ` = n ¸ i=1 α i 'u i , n ¸ j=1 α j u j ` = n ¸ i=1 α i n ¸ j=1 α j 'u i , u j ` = n ¸ i=1 α i α i 'u i , u i ` = n ¸ i=1 i [ 2 |u i | 2 . 94 CHAPTER 5. INNER PRODUCT SPACES For the third part, observe from the first part, the linear independence of the non-zero mutually orthogonal vectors u 1 , u 2 , . . . , u n . Since dim(V ) = n, they form a basis of V. Thus, for every vector v ∈ V, there exist scalars α i , 1 ≤ i ≤ n, such that v = ¸ n i=1 α i u n . Hence, 'v, u j ` = ' n ¸ i=1 α i u i , u j ` = n ¸ i=1 α i 'u i , u j ` = α j . Therefore, we have obtained the required result. Definition 5.1.12 (Orthonormal Set) Let V be an inner product space. A set of non-zero, mutually or- thogonal vectors ¦v 1 , v 2 , . . . , v n ¦ in V is called an orthonormal set if |v i | = 1 for i = 1, 2, . . . , n. If the set ¦v 1 , v 2 , . . . , v n ¦ is also a basis of V, then the set of vectors ¦v 1 , v 2 , . . . , v n ¦ is called an orthonormal basis of V. Example 5.1.13 1. Consider the vector space R 2 with the standard inner product. Then the standard ordered basis B = (1, 0), (0, 1) is an orthonormal set. Also, the basis B 1 = 1 2 (1, 1), 1 2 (1, −1) is an orthonormal set. 2. Let R n be endowed with the standard inner product. Then by Exercise 5.1.9.1, the standard ordered basis (e 1 , e 2 , . . . , e n ) is an orthonormal set. 
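The third part of Theorem 5.1.11 is easy to test numerically: with respect to an orthonormal basis, the coordinates of any vector are simply its inner products with the basis vectors. The Python/NumPy sketch below (illustrative only) uses the orthonormal basis from part 1 of the example above.

import numpy as np

# Sketch: for the orthonormal basis v1 = (1,1)/sqrt(2), v2 = (1,-1)/sqrt(2) of R^2,
# any v satisfies v = <v, v1> v1 + <v, v2> v2  (Theorem 5.1.11, part 3).
v1 = np.array([1., 1.]) / np.sqrt(2)
v2 = np.array([1., -1.]) / np.sqrt(2)

print(np.isclose(np.dot(v1, v1), 1), np.isclose(np.dot(v2, v2), 1),
      np.isclose(np.dot(v1, v2), 0))                 # orthonormality

v = np.array([3., -2.])                              # an arbitrary vector
reconstruction = np.dot(v, v1) * v1 + np.dot(v, v2) * v2
print(np.allclose(v, reconstruction))                # True

# Part 2 of the theorem: ||a1 v1 + a2 v2||^2 = |a1|^2 + |a2|^2 here, since ||vi|| = 1
a1, a2 = 2.0, -5.0
print(np.isclose(np.linalg.norm(a1*v1 + a2*v2)**2, a1**2 + a2**2))   # True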
In view of Theorem 5.1.11, we inquire into the question of extracting an orthonormal basis from a given basis. In the next section, we describe a process (called the Gram-Schmidt Orthogonalisation process) that generates an orthonormal set from a given set containing finitely many vectors. Remark 5.1.14 The last part of the above theorem can be rephrased as “suppose ¦v 1 , v 2 , . . . , v n ¦ is an orthonormal basis of an inner product space V. Then for each u ∈ V the numbers 'u, v i ` for 1 ≤ i ≤ n are the coordinates of u with respect to the above basis”. That is, let B = (v 1 , v 2 , . . . , v n ) be an ordered basis. Then for any u ∈ V, [u] B = ('u, v 1 `, 'u, v 2 `, . . . , 'u, v n `) t . 5.2 Gram-Schmidt Orthogonalisation Process Let V be a finite dimensional inner product space. Suppose u 1 , u 2 , . . . , u n is a linearly independent subset of V. Then the Gram-Schmidt orthogonalisation process uses the vectors u 1 , u 2 , . . . , u n to construct new vectors v 1 , v 2 , . . . , v n such that 'v i , v j ` = 0 for i = j, |v i | = 1 and Span ¦u 1 , u 2 , . . . , u i ¦ = Span ¦v 1 , v 2 , . . . , v i ¦ for i = 1, 2, . . . , n. This process proceeds with the following idea. Suppose we are given two vectors u and v in a plane. If we want to get vectors z and y such that z is a unit vector in the direction of u and y is a unit vector perpendicular to z, then they can be obtained in the following way: Take the first vector z = u |u| . Let θ be the angle between the vectors u and v. Then cos(θ) = 'u, v` |u| |v| . Defined α = |v| cos(θ) = 'u, v` |u| = 'z, v`. Then w = v − α z is a vector perpendicular to the unit vector z, as we have removed the component of z from v. So, the vectors that we are interested in are z and y = w |w| . This idea is used to give the Gram-Schmidt Orthogonalisation process which we now describe. 5.2. GRAM-SCHMIDT ORTHOGONALISATION PROCESS 95 u v <v,u> v u u || || Figure 5.1: Gram-Schmidt Process Theorem 5.2.1 (Gram-Schmidt Orthogonalisation Process) Let V be an inner product space. Suppose ¦u 1 , u 2 , . . . , u n ¦ is a set of linearly independent vectors of V. Then there exists a set ¦v 1 , v 2 , . . . , v n ¦ of vectors of V satisfying the following: 1. |v i | = 1 for 1 ≤ i ≤ n, 2. 'v i , v j ` = 0 for 1 ≤ i, j ≤ n, i = j and 3. L(v 1 , v 2 , . . . , v i ) = L(u 1 , u 2 , . . . , u i ) for 1 ≤ i ≤ n. Proof. We successively define the vectors v 1 , v 2 , . . . , v n as follows. v 1 = u 1 |u 1 | . Calculate w 2 = u 2 −'u 2 , v 1 `v 1 , and let v 2 = w 2 |w 2 | . Obtain w 3 = u 3 − 'u 3 , v 1 `v 1 −'u 3 , v 2 `v 2 , and let v 3 = w 3 |w 3 | . In general, if v 1 , v 2 , v 3 , v 4 , . . . , v i−1 w i = u i −'u i , v 1 `v 1 −'u i , v 2 `v 2 − −'u i , v i−1 `v i−1 , (5.2.1) and define v i = w i |w i | . We prove the theorem by induction on n, the number of linearly independent vectors. For n = 1, we have v 1 = u 1 |u 1 | . Since u 1 = 0, v 1 = 0 and |v 1 | 2 = 'v 1 , v 1 ` = ' u 1 |u 1 | , u 1 |u 1 | ` = 'u 1 , u 1 ` |u 1 | 2 = 1. Hence, the result holds for n = 1. Let the result hold for all k ≤ n − 1. That is, suppose we are given any set of k, 1 ≤ k ≤ n − 1 linearly independent vectors ¦u 1 , u 2 , . . . , u k ¦ of V. Then by the inductive assumption, there exists a set ¦v 1 , v 2 , . . . , v k ¦ of vectors satisfying the following: 1. |v i | = 1 for 1 ≤ i ≤ k, 2. 'v i , v j ` = 0 for 1 ≤ i = j ≤ k, and 96 CHAPTER 5. INNER PRODUCT SPACES 3. L(v 1 , v 2 , . . . , v i ) = L(u 1 , u 2 , . . . , u i ) for 1 ≤ i ≤ k. 
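The construction in (5.2.1) translates directly into code. The Python/NumPy sketch below (an illustration, not part of the notes) implements the process and checks the properties claimed by the theorem; the input vectors happen to be the ones treated in the worked example later in this section, processed in the same order, so the output can be compared with the hand computation there.

import numpy as np

# Sketch: the Gram-Schmidt process of (5.2.1).  Input: linearly independent
# rows u_1, ..., u_n; output: orthonormal v_1, ..., v_n with matching spans.
def gram_schmidt(U):
    V = []
    for u in U:
        w = u - sum(np.dot(u, v) * v for v in V)   # remove components along earlier v's
        V.append(w / np.linalg.norm(w))            # normalise (w != 0 by independence)
    return np.array(V)

U = np.array([[1.,  0., 1., 0.],
              [0.,  1., 0., 1.],
              [1., -1., 1., 1.]])                  # a linearly independent set in R^4
V = gram_schmidt(U)

print(np.allclose(V @ V.T, np.eye(3)))             # orthonormal: <v_i, v_j> = delta_ij
# equal spans: each u_i must be a combination of v_1, ..., v_i
for i in range(3):
    coeffs = V[:i+1] @ U[i]
    print(np.allclose(U[i], coeffs @ V[:i+1]))     # True for every i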
Now, let us assume that we are given a set of n linearly independent vectors ¦u 1 , u 2 , . . . , u n ¦ of V. Then by the inductive assumption, we already have vectors v 1 , v 2 , . . . , v n−1 satisfying 1. |v i | = 1 for 1 ≤ i ≤ n −1, 2. 'v i , v j ` = 0 for 1 ≤ i = j ≤ n −1, and 3. L(v 1 , v 2 , . . . , v i ) = L(u 1 , u 2 , . . . , u i ) for 1 ≤ i ≤ n −1. Using (5.2.1), we define w n = u n −'u n , v 1 `v 1 −'u n , v 2 `v 2 − −'u n , v n−1 `v n−1 . (5.2.2) We first show that w n ∈ L(v 1 , v 2 , . . . , v n−1 ). This will also imply that w n = 0 and hence v n = w n |w n | is well defined. On the contrary, assume that w n ∈ L(v 1 , v 2 , . . . , v n−1 ). Then there exist scalars α 1 , α 2 , . . . , α n−1 such that w n = α 1 v 1 2 v 2 + +α n−1 v n−1 . So, by (5.2.2) u n = α 1 +'u n , v 1 ` v 1 + α 2 +'u n , v 2 ` v 2 + + ( α n−1 +'u n , v n−1 ` v n−1 . Thus, by the third induction assumption, u n ∈ L(v 1 , v 2 , . . . , v n−1 ) = L(u 1 , u 2 , . . . , u n−1 ). This gives a contradiction to the given assumption that the set of vectors ¦u 1 , u 2 , . . . , u n ¦ is linear independent. So, w n = 0. Define v n = w n |w n | . Then |v n | = 1. Also, it can be easily verified that 'v n , v i ` = 0 for 1 ≤ i ≤ n −1. Hence, by the principle of mathematical induction, the proof of the theorem is complete. We illustrate the Gram-Schmidt process by the following example. Example 5.2.2 Let ¦(1, −1, 1, 1), (1, 0, 1, 0), (0, 1, 0, 1)¦ be a linearly independent set in R 4 (R). Find an orthonormal set ¦v 1 , v 2 , v 3 ¦ such that L( (1, −1, 1, 1), (1, 0, 1, 0), (0, 1, 0, 1) ) = L(v 1 , v 2 , v 3 ). Solution: Let u 1 = (1, 0, 1, 0). Define v 1 = (1, 0, 1, 0) 2 . Let u 2 = (0, 1, 0, 1). Then w 2 = (0, 1, 0, 1) −'(0, 1, 0, 1), (1, 0, 1, 0) 2 `v 1 = (0, 1, 0, 1). Hence, v 2 = (0, 1, 0, 1) 2 . Let u 3 = (1, −1, 1, 1). Then w3 = (1, −1, 1, 1) −(1, −1, 1, 1), (1, 0, 1, 0) 2 v1 −(1, −1, 1, 1), (0, 1, 0, 1) 2 v2 = (0, −1, 0, 1) and v 3 = (0, −1, 0, 1) 2 . 5.2. GRAM-SCHMIDT ORTHOGONALISATION PROCESS 97 Remark 5.2.3 1. Let ¦u 1 , u 2 , . . . , u k ¦ be any basis of a k-dimensional subspace W of R n . Then by Gram-Schmidt orthogonalisation process, we get an orthonormal set ¦v 1 , v 2 , . . . , v k ¦ ⊂ R n with W = L(v 1 , v 2 , . . . , v k ), and for 1 ≤ i ≤ k, L(v 1 , v 2 , . . . , v i ) = L(u 1 , u 2 , . . . , u i ). 2. Suppose we are given a set of n vectors, ¦u 1 , u 2 , . . . , u n ¦ of V that are linearly dependent. Then by Corollary 3.2.5, there exists a smallest k, 2 ≤ k ≤ n such that L(u 1 , u 2 , . . . , u k ) = L(u 1 , u 2 , . . . , u k−1 ). We claim that in this case, w k = 0. Since, we have chosen the smallest k satisfying L(u 1 , u 2 , . . . , u i ) = L(u 1 , u 2 , . . . , u i−1 ), for 2 ≤ i ≤ n, the set ¦u 1 , u 2 , . . . , u k−1 ¦ is linearly independent (use Corollary 3.2.5). So, by Theorem 5.2.1, there exists an orthonormal set ¦v 1 , v 2 , . . . , v k−1 ¦ such that L(u 1 , u 2 , . . . , u k−1 ) = L(v 1 , v 2 , . . . , v k−1 ). As u k ∈ L(v 1 , v 2 , . . . , v k−1 ), by Remark 5.1.14 u k = 'u k , v 1 `v 1 +'u k , v 2 `v 2 + +'u k , v k−1 `v n−1 . So, by definition of w k , w k = 0. Therefore, in this case, we can continue with the Gram-Schmidt process by replacing u k by u k+1 . 3. Let S be a countably infinite set of linearly independent vectors. Then one can apply the Gram- Schmidt process to get a countably infinite orthonormal set. 4. Let ¦v 1 , v 2 , . . . , v k ¦ be an orthonormal subset of R n . Let B = (e 1 , e 2 , . . . , e n ) be the standard ordered basis of R n . 
Then there exist real numbers α ij , 1 ≤ i ≤ k, 1 ≤ j ≤ n such that [v i ] B = (α 1i , α 2i , . . . , α ni ) t . Let A = [v 1 , v 2 , . . . , v k ]. Then in the ordered basis B, we have A = α 11 α 12 α 1k α 21 α 22 α 2k . . . . . . . . . . . . α n1 α n2 α nk ¸ ¸ ¸ ¸ ¸ ¸ is an n k matrix. Also, observe that the conditions |v i | = 1 and 'v i , v j ` = 0 for 1 ≤ i = j ≤ n, implies that 1 = |v i | = |v i | 2 = 'v i , v i ` = n ¸ j=1 α 2 ji , and 0 = 'v i , v j ` = n ¸ s=1 α si α sj . (5.2.3) 98 CHAPTER 5. INNER PRODUCT SPACES Note that, A t A = 2 6 6 6 6 4 v t 1 v t 2 . . . v t k 3 7 7 7 7 5 [v1, v2, . . . , v k ] = 2 6 6 6 6 4 v1 2 v1, v2 · · · v1, v k v2, v1 v2 2 · · · v2, v k . . . . . . . . . . . . v k , v1 v k , v2 · · · v k 2 3 7 7 7 7 5 = 2 6 6 6 6 4 1 0 · · · 0 0 1 · · · 0 . . . . . . . . . . . . 0 0 · · · 1 3 7 7 7 7 5 = I k . Or using (5.2.3), in the language of matrices, we get A t A = α 11 α 21 α n1 α 12 α 22 α n2 . . . . . . . . . . . . α 1k α 2k α nk ¸ ¸ ¸ ¸ ¸ ¸ α 11 α 12 α 1k α 21 α 22 α 2k . . . . . . . . . . . . α n1 α n2 α nk ¸ ¸ ¸ ¸ ¸ ¸ = I k . Perhaps the readers must have noticed that the inverse of A is its transpose. Such matrices are called orthogonal matrices and they have a special role to play. Definition 5.2.4 (Orthogonal Matrix) A nn real matrix A is said to be an orthogonal matrix if A A t = A t A = I n . It is worthwhile to solve the following exercises. Exercise 5.2.5 1. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. 2. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n . (b) the columns of A form an orthonormal basis of R n . (c) for any two vectors x, y ∈ R n×1 , 'Ax, Ay` = 'x, y`. (d) for any vector x ∈ R n×1 , |Ax| = |x|. 3. Let ¦u 1 , u 2 , . . . , u n ¦ be an orthonormal basis of R n . Let B = (e 1 , e 2 , . . . , e n ) be the standard basis of R n . Construct an n n matrix A by A = [u 1 , u 2 , . . . , u n ] = a 11 a 12 a 1n a 21 a 22 a 2n . . . . . . . . . . . . a n1 a n2 a nn ¸ ¸ ¸ ¸ ¸ ¸ where u i = n ¸ j=1 a ji e j , for 1 ≤ i ≤ n. Prove that A t A = I n . Hence deduce that A is an orthogonal matrix. 4. Let A be an n n upper triangular matrix. If A is also an orthogonal matrix, then prove that A = I n . 5.2. GRAM-SCHMIDT ORTHOGONALISATION PROCESS 99 Theorem 5.2.6 (QR Decomposition) Let A be a square matrix of order n. Then there exist matrices Q and R such that Q is orthogonal and R is upper triangular with A = QR. In case, A is non-singular, the diagonal entries of R can be chosen to be positive. Also, in this case, the decomposition is unique. Proof. We prove the theorem when A is non-singular. The proof for the singular case is left as an exercise. Let the columns of A be x 1 , x 2 , . . . , x n . The Gram-Schmidt orthogonalisation process applied to the vectors x 1 , x 2 , . . . , x n gives the vectors u 1 , u 2 , . . . , u n satisfying L(u 1 , u 2 , . . . , u i ) = L(x 1 , x 2 , . . . , x i ), |u i | = 1, 'u i , u j ` = 0, ¸ for 1 ≤ i = j ≤ n. (5.2.4) Now, consider the ordered basis B = (u 1 , u 2 , . . . , u n ). From (5.2.4), for 1 ≤ i ≤ n, we have L(u 1 , u 2 , . . . , u i ) = L(x 1 , x 2 , . . . , x i ). So, we can find scalars α ji , 1 ≤ j ≤ i such that x i = α 1i u 1 2i u 2 + +α ii u i = 1i , . . . , α ii , 0 . . . , 0) t B . (5.2.5) Let Q = [u 1 , u 2 , . . . , u n ]. Then by Exercise 5.2.5.3, Q is an orthogonal matrix. We now define an n n upper triangular matrix R by R = α 11 α 12 α 1n 0 α 22 α 2n . . . . . . . . . . . . 
0 0 α nn ¸ ¸ ¸ ¸ ¸ ¸ . By using (5.2.5), we get QR = [u 1 , u 2 , . . . , u n ] α 11 α 12 α 1n 0 α 22 α 2n . . . . . . . . . . . . 0 0 α nn ¸ ¸ ¸ ¸ ¸ ¸ = ¸ α 11 u 1 , α 12 u 1 22 u 2 , . . . , n ¸ i=1 α in u i = [x 1 , x 2 , . . . , x n ] = A. Thus, we see that A = QR, where Q is an orthogonal matrix (see Remark 5.2.3.4) and R is an upper triangular matrix. The proof doesn’t guarantee that for 1 ≤ i ≤ n, α ii is positive. But this can be achieved by replacing the vector u i by −u i whenever α ii is negative. Uniqueness: suppose Q 1 R 1 = Q 2 R 2 then Q −1 2 Q 1 = R 2 R −1 1 . Observe the following properties of upper triangular matrices. 1. The inverse of an upper triangular matrix is also an upper triangular matrix, and 2. product of upper triangular matrices is also upper triangular. Thus the matrix R 2 R −1 1 is an upper triangular matrix. Also, by Exercise 5.2.5.1, the matrix Q −1 2 Q 1 is an orthogonal matrix. Hence, by Exercise 5.2.5.4, R 2 R −1 1 = I n . So, R 2 = R 1 and therefore Q 2 = Q 1 . Suppose we have matrix A = [x 1 , x 2 , . . . , x k ] of dimension nk with rank (A) = r. Then by Remark 5.2.3.2, the application of the Gram-Schmidt orthogonalisation process yields a set ¦u 1 , u 2 , . . . , u r ¦ of 100 CHAPTER 5. INNER PRODUCT SPACES orthonormal vectors of R n . In this case, for each i, 1 ≤ i ≤ r, we have L(u 1 , u 2 , . . . , u i ) = L(x 1 , x 2 , . . . , x j ), for some j, i ≤ j ≤ k. Hence, proceeding on the lines of the above theorem, we have the following result. Theorem 5.2.7 (Generalised QR Decomposition) Let A be an n k matrix of rank r. Then A = QR, where 1. Q is an n r matrix with Q t Q = I r . That is, the columns of Q form an orthonormal set, 2. If Q = [u 1 , u 2 , . . . , u r ], then L(u 1 , u 2 , . . . , u r ) = L(x 1 , x 2 , . . . , x k ), and 3. R is an r k matrix with rank (R) = r. Example 5.2.8 1. Let A = 1 0 1 2 0 1 −1 1 1 0 1 1 0 1 1 1 ¸ ¸ ¸ ¸ ¸ . Find an orthogonal matrix Q and an upper triangular matrix R such that A = QR. Solution: From Example 5.2.2, we know that v 1 = 1 2 (1, 0, 1, 0), v 2 = 1 2 (0, 1, 0, 1), v 3 = 1 2 (0, −1, 0, 1). (5.2.6) We now compute w 4 . If we denote u 4 = (2, 1, 1, 1) t then by the Gram-Schmidt process, w 4 = u 4 −'u 4 , v 1 `v 1 −'u 4 , v 2 `v 2 −'u 4 , v 3 `v 3 = 1 2 (1, 0, −1, 0) t . (5.2.7) Thus, using (5.2.6) and (5.2.7), we get Q = v 1 , v 2 , v 3 , v 4 = 1 2 0 0 1 2 0 1 2 −1 2 0 1 2 0 0 −1 2 0 1 2 1 2 0 ¸ ¸ ¸ ¸ ¸ ¸ and R = 2 0 2 3 2 0 2 0 2 0 0 2 0 0 0 0 −1 2 ¸ ¸ ¸ ¸ ¸ ¸ . The readers are advised to check that A = QR is indeed correct. 2. Let A = 1 1 1 0 −1 0 −2 1 1 1 1 0 1 0 2 1 ¸ ¸ ¸ ¸ ¸ . Find a 43 matrix Q satisfying Q t Q = I 3 and an upper triangular matrix R such that A = QR. Solution: Let us apply the Gram Schmidt orthogonalisation to the columns of A. Or equivalently to the rows of A t . So, we need to apply the process to the subset ¦(1, −1, 1, 1), (1, 0, 1, 0), (1, −2, 1, 2), (0, 1, 0, 1)¦ of R 4 . 5.2. GRAM-SCHMIDT ORTHOGONALISATION PROCESS 101 Let u 1 = (1, −1, 1, 1). Define v 1 = u 1 2 . Let u 2 = (1, 0, 1, 0). Then w 2 = (1, 0, 1, 0) −'u 2 , v 1 `v 1 = (1, 0, 1, 0) − v 1 = 1 2 (1, 1, 1, −1). Hence, v 2 = (1, 1, 1, −1) 2 . Let u 3 = (1, −2, 1, 2). Then w 3 = u 3 −'u 3 , v 1 `v 1 −'u 3 , v 2 `v 2 = u 3 −3v 1 +v 2 = 0. So, we again take u 3 = (0, 1, 0, 1). Then w 3 = u 3 −'u 3 , v 1 `v 1 −'u 3 , v 2 `v 2 = u 3 −0v 1 −0v 2 = u 3 . So, v 3 = (0, 1, 0, 1) 2 . Hence, Q = [v 1 , v 2 , v 3 ] = 1 2 1 2 0 −1 2 1 2 1 2 1 2 1 2 0 1 2 −1 2 1 2 ¸ ¸ ¸ ¸ ¸ ¸ , and R = 2 1 3 0 0 1 −1 0 0 0 0 2 ¸ ¸ ¸. 
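These factorisations can also be machine-checked. The Python/NumPy sketch below (illustrative; numpy's qr routine is free to choose signs differently from the hand computation, so the code flips signs to make the diagonal of R positive, as in the QR theorem above) verifies the square matrix of part 1.

import numpy as np

# Sketch: QR decomposition of the matrix in part 1, with the diagonal of R
# made positive so that the factorisation is the unique one of Theorem 5.2.6.
A = np.array([[1., 0.,  1., 2.],
              [0., 1., -1., 1.],
              [1., 0.,  1., 1.],
              [0., 1.,  1., 1.]])

Q, R = np.linalg.qr(A)
signs = np.sign(np.diag(R))
Q, R = Q * signs, R * signs[:, None]       # flip signs so that R has a positive diagonal

print(np.allclose(Q @ R, A))               # A = QR
print(np.allclose(Q.T @ Q, np.eye(4)))     # Q is orthogonal
print(np.allclose(R, np.triu(R)))          # R is upper triangular
print(np.all(np.diag(R) > 0))              # positive diagonal entries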
(a) rank (A) = 3, (b) A = QR with Q t Q = I 3 , and (c) R a 3 4 upper triangular matrix with rank (R) = 3. Exercise 5.2.9 1. Determine an orthonormal basis of R 4 containing the vectors (1, −2, 1, 3) and (2, 1, −3, 1). 2. Prove that the polynomials 1, x, 3 2 x 2 1 2 , 5 2 x 3 3 2 x form an orthogonal set of functions in the in- ner product space C[−1, 1] with the inner product 'f, g` = 1 −1 f(t)g(t)dt. Find the corresponding functions, f(x) with |f(x)| = 1. 3. Consider the vector space C[−π, π] with the standard inner product defined in the above exercise. Find an orthonormal basis for the subspace spanned by x, sin x and sin(x + 1). 4. Let M be a subspace of R n and dimM = m. A vector x ∈ R n is said to be orthogonal to M if 'x, y` = 0 for every y ∈ M. (a) How many linearly independent vectors can be orthogonal to M? (b) If M = ¦(x 1 , x 2 , x 3 ) ∈ R 3 : x 1 +x 2 +x 3 = 0¦, determine a maximal set of linearly independent vectors orthogonal to M in R 3 . 5. Determine an orthogonal basis of vector subspace spanned by ¦(1, 1, 0, 1), (−1, 1, 1, −1), (0, 2, 1, 0), (1, 0, 0, 0)¦ in R 4 . 6. Let S = ¦(1, 1, 1, 1), (1, 2, 0, 1), (2, 2, 4, 0)¦. Find an orthonormal basis of L(S) in R 4 . 7. Let R n be endowed with the standard inner product. Suppose we have a vector x t = (x 1 , x 2 , . . . , x n ) ∈ R n , with |x| = 1. Then prove the following: (a) the set ¦x¦ can always be extended to form an orthonormal basis of R n . (b) Let this basis be ¦x, x 2 , . . . , x n ¦. Suppose B = (e 1 , e 2 , . . . , e n ) is the standard basis of R n . Let A = ¸ [x] B , [x 2 ] B , . . . , [x n ] B . Then prove that A is an orthogonal matrix. 8. Let v, w ∈ R n , n ≥ 1 with |u| = |w| = 1. Prove that there exists an orthogonal matrix A such that Av = w. Prove also that A can be chosen such that det(A) = 1. 102 CHAPTER 5. INNER PRODUCT SPACES 5.3 Orthogonal Projections and Applications Recall that given a k-dimensional vector subspace of a vector space V of dimension n, one can always find an (n −k)-dimensional vector subspace W 0 of V (see Exercise 3.3.19.9) satisfying W +W 0 = V and W ∩ W 0 = ¦0¦. The subspace W 0 is called the complementary subspace of W in V. We now define an important class of linear transformations on an inner product space, called orthogonal projections. Definition 5.3.1 (Projection Operator) Let V be an n-dimensional vector space and let W be a k- dimensional subspace of V. Let W 0 be a complement of W in V. Then we define a map P W : V −→ V by P W (v) = w, whenever v = w+w 0 , w ∈ W, w 0 ∈ W 0 . The map P W is called the projection of V onto W along W 0 . Remark 5.3.2 The map P is well defined due to the following reasons: 1. W +W 0 = V implies that for every v ∈ V, we can find w ∈ W and w 0 ∈ W 0 such that v = w+w 0 . 2. W ∩ W 0 = ¦0¦ implies that the expression v = w+w 0 is unique for every v ∈ V. The next proposition states that the map defined above is a linear transformation from V to V. We omit the proof, as it follows directly from the above remarks. Proposition 5.3.3 The map P W : V −→V defined above is a linear transformation. Example 5.3.4 Let V = R 3 and W = ¦(x, y, z) ∈ R 3 : x +y −z = 0¦. 1. Let W 0 = L( (1, 2, 2) ). Then W ∩ W 0 = ¦0¦ and W +W 0 = R 3 . Also, for any vector (x, y, z) ∈ R 3 , note that (x, y, z) = w+w 0 , where w = (z −y, 2z −2x −y, 3z −2x −2y), and w 0 = (x + y −z)(1, 2, 2). So, by definition, P W ((x, y, z)) = (z −y, 2z −2x −y, 3z −2x −2y) = 0 −1 1 −2 −1 2 −2 −2 3 ¸ ¸ ¸ x y z ¸ ¸ ¸. 2. Let W 0 = L( (1, 1, 1) ). Then W ∩ W 0 = ¦0¦ and W +W 0 = R 3 . 
Also, for any vector (x, y, z) ∈ R 3 , note that (x, y, z) = w+w 0 , where w = (z −y, z −x, 2z −x −y), and w 0 = (x +y −z)(1, 1, 1). So, by definition, P W ( (x, y, z) ) = (z −y, z −x, 2z −x −y) = 0 −1 1 −1 0 1 −1 −1 2 ¸ ¸ ¸ x y z ¸ ¸ ¸. Remark 5.3.5 1. The projection map P W depends on the complementary subspace W 0 . 2. Observe that for a fixed subspace W, there are infinitely many choices for the complementary subspace W 0 . 5.3. ORTHOGONAL PROJECTIONS AND APPLICATIONS 103 3. It will be shown later that if V is an inner product space with inner product, ' , `, then the subspace W 0 is unique if we put an additional condition that W 0 = ¦v ∈ V : 'v, w` = 0 for all w ∈ W¦. We now prove some basic properties about projection maps. Theorem 5.3.6 Let W and W 0 be complementary subspaces of a vector space V. Let P W : V −→ V be a projection operator of V onto W along W 0 . Then 1. the null space of P W , ^(P W ) = ¦v ∈ V : P W (v) = 0¦ = W 0 . 2. the range space of P W , {(P W ) = ¦P W (v) : v ∈ V ¦ = W. 3. P 2 W = P W . The condition P 2 W = P W is equivalent to P W (I −P W ) = 0 = (I −P W )P W . Proof. We only prove the first part of the theorem. Let w 0 ∈ W 0 . Then w 0 = 0 +w 0 for 0 ∈ W. So, by definition, P(w 0 ) = 0. Hence, W 0 ⊂ ^(P W ). Also, for any v ∈ V, let P W (v) = 0 with v = w + w 0 for some w 0 ∈ W 0 and w ∈ W. Then by definition 0 = P W (v) = w. That is, w = 0 and v = w 0 . Thus, v ∈ W 0 . Hence ^(P W ) = W 0 . Exercise 5.3.7 1. Let A be an n n real matrix with A 2 = A. Consider the linear transformation T A : R n −→R n , defined by T A (v) = Av for all v ∈ R n . Prove that (a) T A ◦ T A = T A (use the condition A 2 = A). (b) ^(T A ) ∩ {(T A ) = ¦0¦. Hint: Let x ∈ ^(T A ) ∩ {(T A ). This implies T A (x) = 0 and x = T A (y) for some y ∈ R n . So, x = T A (y) = (T A ◦ T A )(y) = T A T A (y) = T A (x) = 0. (c) R n = ^(T A ) +{(T A ). Hint: Let ¦v 1 , . . . , v k ¦ be a basis of ^(T A ). Extend it to get a basis ¦v 1 , . . . , v k , v k+1 , . . . , v n ¦ of R n . Then by Rank-nullity Theorem 4.3.6, ¦T A (v k+1 ), . . . , T A (v n )¦ is a basis of {(T A ). (d) Define W = {(T A ) and W 0 = ^(T A ). Then T A is a projection operator of R n onto W along W 0 . Recall that the first three parts of this exercise was also given in Exercise 4.3.10.7. 2. Find all 22 real matrices A such that A 2 = A. Hence or otherwise, determine all projection operators of R 2 . The next result uses the Gram-Schmidt orthogonalisation process to get the complementary subspace in such a way that the vectors in different subspaces are orthogonal. Definition 5.3.8 (Orthogonal Subspace of a Set) Let V be an inner product space. Let S be a non-empty subset of V . We define S = ¦v ∈ V : 'v, s` = 0 for all s ∈ S¦. Example 5.3.9 Let V = R. 1. S = ¦0¦. Then S = R. 2. S = R, Then S = ¦0¦. 3. Let S be any subset of R containing a non-zero real number. Then S = ¦0¦. 104 CHAPTER 5. INNER PRODUCT SPACES Theorem 5.3.10 Let S be a subset of a finite dimensional inner product space V, with inner product ' , `. Then 1. S is a subspace of V. 2. Let S be equal to a subspace W. Then the subspaces W and W are complementary. Moreover, if w ∈ W and u ∈ W , then 'u, w` = 0 and V = W +W . Proof. We leave the prove of the first part for the reader. The prove of the second part is as follows: Let dim(V ) = n and dim(W) = k. Let ¦w 1 , w 2 , . . . , w k ¦ be a basis of W. By Gram-Schmidt orthogo- nalisation process, we get an orthonormal basis, say, ¦v 1 , v 2 , . . . , v k ¦ of W. Then, for any v ∈ V, v − k ¸ i=1 'v, v i `v i ∈ W . 
So, V ⊂ W +W . Also, for any v ∈ W ∩W , by definition of W , 0 = 'v, v` = |v| 2 . So, v = 0. That is, W ∩ W = ¦0¦. Definition 5.3.11 (Orthogonal Complement) Let W be a subspace of a vector space V. The subspace W is called the orthogonal complement of W in V. Exercise 5.3.12 1. Let W = ¦(x, y, z) ∈ R 3 : x + y + z = 0¦. Find W with respect to the standard inner product. 2. Let W be a subspace of a finite dimensional inner product space V . Prove that (W ) = W. 3. Let V be the vector space of all n n real matrices. Then Exercise5.1.9.6 shows that V is a real inner product space with the inner product given by 'A, B` = tr(AB t ). If W is the subspace given by W = ¦A ∈ V : A t = A¦, determine W . Definition 5.3.13 (Orthogonal Projection) Let W be a subspace of a finite dimensional inner product space V, with inner product ' , `. Let W be the orthogonal complement of W in V. Define P W : V −→V by P W (v) = w where v = w+u, with w ∈ W, and u ∈ W . Then P W is called the orthogonal projection of V onto W along W . Definition 5.3.14 (Self-Adjoint Transformation/Operator) Let V be an inner product space with inner product ' , `. A linear transformation T : V −→ V is called a self-adjoint operator if 'T(v), u` = 'v, T(u)` for every u, v ∈ V. Example 5.3.15 1. Let A be an nn real symmetric matrix. That is, A t = A. Then show that the linear transformation T A : R n −→R n defined by T A (x) = Ax for every x t ∈ R n Solution: By definition, for every x t , y t ∈ R n , 'T A (x), y` = (y) t Ax = (y) t A t x = (Ay) t x = 'x, T A (y)`. Hence, the result follows. 2. Let A be an n n Hermitian matrix, that is, A = A. Then the linear transformation T A : C n −→C n defined by T A (z) = Az for every z t ∈ C n Remark 5.3.16 1. By Proposition 5.3.3, the map P W defined above is a linear transformation. 5.3. ORTHOGONAL PROJECTIONS AND APPLICATIONS 105 2. P 2 W = P W , (I −P W )P W = 0 = P W (I −P W ). 3. Let u, v ∈ V with u = u 1 +u 2 and v = v 1 +v 2 for some u 1 , v 1 ∈ W and u 2 , v 2 ∈ W . Then we know that 'u i , v j ` = 0 whenever 1 ≤ i = j ≤ 2. Therefore, for every u, v ∈ V, 'P W (u), v` = 'u 1 , v` = 'u 1 , v 1 +v 2 ` = 'u 1 , v 1 ` = 'u 1 +u 2 , v 1 ` = 'u, P W (v)`. Thus, the orthogonal projection operator is a self-adjoint operator. 4. Let v ∈ V and w ∈ W. Then P W (w) = w for all w ∈ W. Therefore, using Remarks 5.3.16.2 and 5.3.16.3, we get v −PW(v), w = ` I −PW ´ (v), PW(w) = PW ` I −PW ´ (v), w = 0(v), w = 0, w = 0 for every w ∈ W. 5. In particular, 'v −P W (v), P W (v) −w` = 0 as P W (v) ∈ W. Thus, 'v −P W (v), P W (v) −w ` = 0, for every w ∈ W. Hence, for any v ∈ V and w ∈ W, we have |v −w| 2 = |v −P W (v) +P W (v) −w| 2 = |v −P W (v)| 2 +|P W (v) −w| 2 +2'v −P W (v), P W (v) −w` = |v −P W (v)| 2 +|P W (v) −w| 2 . Therefore, |v −w| ≥ |v −P W (v)| and the equality holds if and only if w = P W (v). Since P W (v) ∈ W, we see that d(v, W) = inf ¦|v −w| : w ∈ W¦ = |v −P W (v)|. That is, P W (v) is the vector nearest to v ∈ W. This can also be stated as: the vector P W (v) solves the following minimisation problem: inf w∈W |v −w| = |v −P W (v)|. 5.3.1 Matrix of the Orthogonal Projection The minimization problem stated above arises in lot of applications. So, it will be very helpful if the matrix of the orthogonal projection can be obtained under a given basis. To this end, let W be a k-dimensional subspace of R n with W as its orthogonal complement. Let P W : R n −→ R n be the orthogonal projection of R n onto W. Suppose, we are given an orthonormal basis B = (v 1 , v 2 , . . . , v k ) of W. 
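The minimisation property in the last remark is easy to test numerically. The Python/NumPy sketch below (the subspace and the vectors are arbitrary choices) computes P_W(v) for W spanned by the columns of a matrix M by least squares, rather than via the matrix constructed in the next subsection, and checks that no other point of W is closer to v.

import numpy as np

# Sketch: P_W(v) is the point of W closest to v.  Here W is the column space
# of M; least squares gives the orthogonal projection onto that subspace.
rng = np.random.default_rng(3)
M = rng.normal(size=(5, 2))                      # W = column space of M, a plane in R^5
v = rng.normal(size=5)

coeffs, *_ = np.linalg.lstsq(M, v, rcond=None)
p = M @ coeffs                                   # p = P_W(v)

print(np.allclose(M.T @ (v - p), 0))             # v - P_W(v) is orthogonal to W
# no randomly chosen point of W is closer to v than p
others = M @ rng.normal(size=(2, 200))           # 200 random points of W
print(np.all(np.linalg.norm(v - p) <= np.linalg.norm(v[:, None] - others, axis=0) + 1e-12))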
Under the assumption that B is known, we explicitly give the matrix of P W with respect to an extended ordered basis of R n . Let us extend the given ordered orthonormal basis B of W to get an orthonormal ordered basis B 1 = (v 1 , v 2 , . . . , v k , v k+1 . . . , v n ) of R n . Then by Theorem 5.1.11, for any v ∈ R n , v = n ¸ i=1 'v, v i `v i . Thus, by definition, P W (v) = k ¸ i=1 'v, v i `v i . Let A = [v 1 , v 2 , . . . , v k ]. Consider the standard orthogonal 106 CHAPTER 5. INNER PRODUCT SPACES ordered basis B 2 = (e 1 , e 2 , . . . , e n ) of R n . Therefore, if v i = n ¸ j=1 a ji e j , for 1 ≤ i ≤ k, then A = 2 6 6 6 6 4 a11 a12 · · · a 1k a21 a22 · · · a 2k . . . . . . . . . . . . an1 an2 · · · a nk 3 7 7 7 7 5 , [v]B 2 = 2 6 6 6 6 6 6 6 6 6 4 n P i=1 a1iv, vi n P i=1 a2iv, vi . . . n P i=1 aniv, vi 3 7 7 7 7 7 7 7 7 7 5 and [PW(v)]B 2 = 2 6 6 6 6 6 6 6 6 6 6 4 k P i=1 a1iv, vi k P i=1 a2iv, vi . . . k P i=1 aniv, vi 3 7 7 7 7 7 7 7 7 7 7 5 . Then as observed in Remark 5.2.3.4, A t A = I k . That is, for 1 ≤ i, j ≤ k, n ¸ s=1 a si a sj = 1 if i = j 0 if i = j. (5.3.1) Thus, using the associativity of matrix product and (5.3.1), we get (AA t )(v) = A 2 6 6 6 6 4 a11 a21 · · · an1 a12 a22 · · · an2 . . . . . . . . . . . . a 1k a 2k · · · a nk 3 7 7 7 7 5 2 6 6 6 6 6 6 6 6 6 4 n P i=1 a1iv, vi n P i=1 a2iv, vi . . . n P i=1 aniv, vi 3 7 7 7 7 7 7 7 7 7 5 = A 2 6 6 6 6 6 6 6 6 6 4 n P s=1 as1 n P i=1 asiv, vi « n P s=1 as2 n P i=1 asiv, vi « . . . n P s=1 a sk n P i=1 asiv, vi « 3 7 7 7 7 7 7 7 7 7 5 = A 2 6 6 6 6 6 6 6 6 6 4 n P i=1 n P s=1 as1asi « v, vi n P i=1 n P s=1 as2asi « v, vi . . . n P i=1 n P s=1 a sk asi « v, vi 3 7 7 7 7 7 7 7 7 7 5 = A 2 6 6 6 6 4 v, v1 v, v2 . . . v, v k 3 7 7 7 7 5 = 2 6 6 6 6 6 6 6 6 6 6 4 k P i=1 a1iv, vi k P i=1 a2iv, vi . . . k P i=1 aniv, vi 3 7 7 7 7 7 7 7 7 7 7 5 = [PW(v)]B 2 . Thus P W [B 2 , B 2 ] = AA t . Thus, we have proved the following theorem. Theorem 5.3.17 Let W be a k-dimensional subspace of R n and let P W : R n −→ R n be the orthogonal projection of R n onto W along W . Suppose, B = (v 1 , v 2 , . . . , v k ) is an orthonormal ordered basis of W. Define an nk matrix A = [v 1 , v 2 , . . . , v k ]. Then the matrix of the linear transformation P W in the standard orthogonal ordered basis (e 1 , e 2 , . . . , e n ) is AA t . 5.3. ORTHOGONAL PROJECTIONS AND APPLICATIONS 107 Example 5.3.18 Let W = ¦(x, y, z, w) ∈ R 4 : x = y, z = w¦ be a subspace of W. Then an orthonormal ordered basis of W is 1 2 (1, 1, 0, 0), 1 2 (0, 0, 1, 1) , and that of W is 1 2 (1, −1, 0, 0), 1 2 (0, 0, 1, −1) . Therefore, if P W : R 4 −→R 4 is an orthogonal projection of R 4 onto W along W , then the corresponding matrix A is given by A = 1 2 0 1 2 0 0 1 2 0 1 2 ¸ ¸ ¸ ¸ ¸ ¸ . Hence, the matrix of the orthogonal projection P W in the ordered basis B = 1 2 (1, 1, 0, 0), 1 2 (0, 0, 1, 1), 1 2 (1, −1, 0, 0), 1 2 (0, 0, 1, −1) is P W [B, B] = AA t = 1 2 1 2 0 0 1 2 1 2 0 0 0 0 1 2 1 2 0 0 1 2 1 2 ¸ ¸ ¸ ¸ ¸ . It is easy to see that 1. the matrix P W [B, B] is symmetric, 2. P W [B, B] 2 = P W [B, B], and 3. I 4 −P W [B, B] P W [B, B] = 0 = P W [B, B] I 4 −P W [B, B] . Also, for any (x, y, z, w) ∈ R 4 , we have [(x, y, z, w)] B = x +y 2 , z +w 2 , x −y 2 , z −w 2 t . Thus, P W (x, y, z, w) = x +y 2 (1, 1, 0, 0) + z +w 2 (0, 0, 1, 1) is the closest vector to the subspace W for any vector (x, y, z, w) ∈ R 4 . Exercise 5.3.19 1. Show that for any non-zero vector v t ∈ R n , the rank of the matrix vv t is 1. 2. 
Let W be a subspace of a vector space V and let P : V −→ V be the orthogonal projection of V onto W along W . Let B be an orthonormal ordered basis of V. Then prove that corresponding matrix satisfies P[B, B] t = P[B, B]. 3. Let A be an n n matrix with A 2 = A and A t = A. Consider the associated linear transformation T A : R n −→ R n defined by T A (v) = Av for all v t ∈ R n . Then prove that there exists a subspace W of R n such that T A is the orthogonal projection of R n onto W along W . 4. Let W 1 and W 2 be two distinct subspaces of a finite dimensional vector space V. Let P W1 and P W2 be the corresponding orthogonal projection operators of V along W 1 and W 2 , respectively. Then by constructing an example in R 2 , show that the map P W1 ◦ P W2 is a projection but not an orthogonal projection. 108 CHAPTER 5. INNER PRODUCT SPACES 5. Let W be an (n−1)-dimensional vector subspace of R n and let W be its orthogonal complement. Let B = (v 1 , v 2 , . . . , v n−1 , v n ) be an orthogonal ordered basis of R n with (v 1 , v 2 , . . . , v n−1 ) an ordered basis of W. Define a map T : R n −→R n by T(v) = w 0 −w whenever v = w+w 0 for some w ∈ W and w 0 ∈ W . Then (a) prove that T is a linear transformation, (b) find the matrix, T[B, B], and (c) prove that T[B, B] is an orthogonal matrix. T is called the reflection along W . Chapter 6 Eigenvalues, Eigenvectors and Diagonalisation 6.1 Introduction and Definitions In this chapter, the linear transformations are from a given finite dimensional vector space V to itself. Observe that in this case, the matrix of the linear transformation is a square matrix. So, in this chapter, all the matrices are square matrices and a vector x means x = (x 1 , x 2 , . . . , x n ) t for some positive integer n. Example 6.1.1 Let A be a real symmetric matrix. Consider the following problem: Maximize (Minimize) x t Ax such that x ∈ R n and x t x = 1. To solve this, consider the Lagrangian L(x, λ) = x t Ax −λ(x t x −1) = n ¸ i=1 n ¸ j=1 a ij x i x j −λ( n ¸ i=1 x 2 i −1). Partially differentiating L(x, λ) with respect to x i for 1 ≤ i ≤ n, we get ∂L ∂x 1 = 2a 11 x 1 + 2a 12 x 2 + + 2a 1n x n −2λx 1 , ∂L ∂x 2 = 2a 21 x 1 + 2a 22 x 2 + + 2a 2n x n −2λx 2 , and so on, till ∂L ∂x n = 2a n1 x 1 + 2a n2 x 2 + + 2a nn x n −2λx n . Therefore, to get the points of extrema, we solve for (0, 0, . . . , 0) t = ( ∂L ∂x 1 , ∂L ∂x 2 , . . . , ∂L ∂x n ) t = ∂L ∂x = 2(Ax −λx). We therefore need to find a λ ∈ R and 0 = x ∈ R n such that Ax = λx for the extremal problem. Example 6.1.2 Consider a system of n ordinary differential equations of the form d y(t) dt = Ay, t ≥ 0; (6.1.1) 109 110 CHAPTER 6. EIGENVALUES, EIGENVECTORS AND DIAGONALISATION where A is a real n n matrix and y is a column vector. To get a solution, let us assume that y(t) = ce λt (6.1.2) is a solution of (6.1.1) and look into what λ and c has to satisfy, i.e., we are investigating for a necessary condition on λ and c so that (6.1.2) is a solution of (6.1.1). Note here that (6.1.1) has the zero solution, namely y(t) ≡ 0 and so we are looking for a non-zero c. Differentiating (6.1.2) with respect to t and λe λt c = Ae λt c or equivalently (A −λI)c = 0. (6.1.3) So, (6.1.2) is a solution of the given system of differential equations if and only if λ and c satisfy (6.1.3). That is, given an nn matrix A, we are this lead to find a pair (λ, c) such that c = 0 and (6.1.3) is satisfied. Let A be a matrix of order n. 
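A quick numerical illustration of the extremal problem in Example 6.1.1 (a Python/NumPy sketch with an arbitrarily chosen symmetric matrix): for a real symmetric A, the maximum of x^t A x over the unit sphere is attained at an eigenvector, and its value is the largest eigenvalue.

import numpy as np

# Sketch for Example 6.1.1: compare x^t A x at the top eigenvector with its
# value at many random unit vectors.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])                      # an arbitrary symmetric matrix

eigvals, eigvecs = np.linalg.eigh(A)              # ascending eigenvalues, orthonormal eigenvectors
lam_max, x_max = eigvals[-1], eigvecs[:, -1]

print(np.isclose(x_max @ A @ x_max, lam_max))     # the eigenvector attains the value lam_max

rng = np.random.default_rng(2)
samples = rng.normal(size=(1000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
print(np.max(np.einsum('ij,jk,ik->i', samples, A, samples)) <= lam_max + 1e-12)  # no unit vector does better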
In general, we ask the question: For what values of λ ∈ F, there exist a non-zero vector x ∈ F n such that Ax = λx? (6.1.4) Here, F n stands for either the vector space R n over R or C n over C. Equation (6.1.4) is equivalent to the equation (A −λI)x = 0. By Theorem 2.5.1, this system of linear equations has a non-zero solution, if rank (A −λI) < n, or equivalently det(A −λI) = 0. So, to solve (6.1.4), we are forced to choose those values of λ ∈ F for which det(A − λI) = 0. Observe that det(A −λI) is a polynomial in λ of degree n. We are therefore lead to the following definition. Definition 6.1.3 (Characteristic Polynomial) Let A be a matrix of order n. The polynomial det(A −λI) is called the characteristic polynomial of A and is denoted by p(λ). The equation p(λ) = 0 is called the characteristic equation of A. If λ ∈ F is a solution of the characteristic equation p(λ) = 0, then λ is called a characteristic value of A. Some books use the term eigenvalue in place of characteristic value. Theorem 6.1.4 Let A = [a ij ]; a ij ∈ F, for 1 ≤ i, j ≤ n. Suppose λ = λ 0 ∈ F is a root of the characteristic equation. Then there exists a non-zero v ∈ F n such that Av = λ 0 v. Proof. Since λ 0 is a root of the characteristic equation, det(A−λ 0 I) = 0. This shows that the matrix A −λ 0 I is singular and therefore by Theorem 2.5.1 the linear system (A −λ 0 I n )x = 0 has a non-zero solution. Remark 6.1.5 Observe that the linear system Ax = λx has a solution x = 0 for every λ ∈ F. So, we consider only those x ∈ F n that are non-zero and are solutions of the linear system Ax = λx. Definition 6.1.6 (Eigenvalue and Eigenvector) If the linear system Ax = λx has a non-zero solution x ∈ F n for some λ ∈ F, then 1. λ ∈ F is called an eigenvalue of A, 6.1. INTRODUCTION AND DEFINITIONS 111 2. 0 = x ∈ F n is called an eigenvector corresponding to the eigenvalue λ of A, and 3. the tuple (λ, x) is called an eigenpair. Remark 6.1.7 To understand the difference between a characteristic value and an eigenvalue, we give the following example. Consider the matrix A = ¸ 0 1 −1 0 ¸ . Then the characteristic polynomial of A is p(λ) = λ 2 + 1. Given the matrix A, recall the linear transformation T A : F 2 −→F 2 defined by T A (x) = Ax for every x ∈ F 2 . 1. If F = C, that is, if A is considered a complex matrix, then the roots of p(λ) = 0 in C are ±i. So, A has (i, (1, i) t ) and (−i, (i, 1) t ) as eigenpairs. 2. If F = R, that is, if A is considered a real matrix, then p(λ) = 0 has no solution in R. Therefore, if F = R, then A has no eigenvalue but it has ±i as characteristic values. Remark 6.1.8 Note that if (λ, x) is an eigenpair for an nn matrix A then for any non-zero c ∈ F, c = 0, (λ, cx) is also an eigenpair for A. Similarly, if x 1 , x 2 , . . . , x r are eigenvectors of A corresponding to the eigenvalue λ, then for any non-zero (c 1 , c 2 , . . . , c r ) ∈ F r , it is easily seen that if r ¸ i=1 c i x i = 0, then r ¸ i=1 c i x i is also an eigenvector of A corresponding to the eigenvalue λ. Hence, when we talk of eigenvectors corresponding to an eigenvalue λ, we mean linearly independent eigenvectors. Suppose λ 0 ∈ F is a root of the characteristic equation det(A − λ 0 I) = 0. Then A − λ 0 I is singular and rank (A − λ 0 I) < n. Suppose rank (A − λ 0 I) = r < n. Then by Corollary 4.3.9, the linear system (A − λ 0 I)x = 0 has n − r linearly independent solutions. That is, A has n − r linearly independent eigenvectors corresponding to the eigenvalue λ 0 whenever rank (A −λ 0 I) = r < n. Example 6.1.9 1. 
Let A = diag(d 1 , d 2 , . . . , d n ) with d i ∈ R for 1 ≤ i ≤ n. Then p(λ) = ¸ n i=1 (λ − d i ) is the characteristic equation. So, the eigenpairs are (d 1 , (1, 0, . . . , 0) t ), (d 2 , (0, 1, 0, . . . , 0) t ), . . . , (d n , (0, . . . , 0, 1) t ). 2. Let A = ¸ 1 1 0 1 ¸ . Then det(A − λI 2 ) = (1 − λ) 2 . Hence, the characteristic equation has roots 1, 1. That is 1 is a repeated eigenvalue. Now check that the equation (A − I 2 )x = 0 for x = (x 1 , x 2 ) t is equivalent to the equation x 2 = 0. And this has the solution x = (x 1 , 0) t . Hence, from the above remark, (1, 0) t is a representative for the eigenvector. Therefore, here we have two eigenvalues 1, 1 but only one eigenvector. 3. Let A = ¸ 1 0 0 1 ¸ . Then det(A−λI 2 ) = (1 −λ) 2 . The characteristic equation has roots 1, 1. Here, the matrix that we have is I 2 and we know that I 2 x = x for every x t ∈ R 2 and we can choose any two linearly independent vectors x t , y t from R 2 to get (1, x) and (1, y) as the two eigenpairs. In general, if x 1 , x 2 , . . . , x n are linearly independent vectors in R n , then (1, x 1 ), (1, x 2 ), . . . , (1, x n ) are eigenpairs for the identity matrix, I n . 112 CHAPTER 6. EIGENVALUES, EIGENVECTORS AND DIAGONALISATION 4. Let A = ¸ 1 2 2 1 ¸ . Then det(A − λI 2 ) = (λ − 3)(λ + 1). The characteristic equation has roots 3, −1. Now check that the eigenpairs are (3, (1, 1) t ), and (−1, (1, −1) t ). In this case, we have two distinct eigenvalues and the corresponding eigenvectors are also linearly independent. The reader is required to prove the linear independence of the two eigenvectors. 5. Let A = ¸ 1 −1 1 1 ¸ . Then det(A−λI 2 ) = λ 2 −2λ+2. The characteristic equation has roots 1+i, 1−i. Hence, over R, the matrix A has no eigenvalue. Over C, the reader is required to show that the eigenpairs are (1 +i, (i, 1) t ) and (1 −i, (1, i) t ). Exercise 6.1.10 1. Find the eigenvalues of a triangular matrix. 2. Find eigenpairs over C, for each of the following matrices: ¸ 1 0 0 0 ¸ , ¸ 1 1 +i 1 −i 1 ¸ , ¸ i 1 +i −1 +i i ¸ , ¸ cos θ −sinθ sin θ cos θ ¸ , and ¸ cos θ sinθ sin θ −cos θ ¸ . 3. Let A and B be similar matrices. (a) Then prove that A and B have the same set of eigenvalues. (b) Let (λ, x) be an eigenpair for A and (λ, y) be an eigenpair for B. What is the relationship between the vectors x and y? [Hint: Recall that if the matrices A and B are similar, then there exists a non-singular matrix P such that B = PAP −1 .] 4. Let A = (a ij ) be an n n matrix. Suppose that for all i, 1 ≤ i ≤ n, n ¸ j=1 a ij = a. Then prove that a is an eigenvalue of A. What is the corresponding eigenvector? 5. Prove that the matrices A and A t have the same set of eigenvalues. Construct a 2 2 matrix A such that the eigenvectors of A and A t are different. 6. Let A be a matrix such that A 2 = A (A is called an idempotent matrix). Then prove that its eigenvalues are either 0 or 1 or both. 7. Let A be a matrix such that A k = 0 (A is called a nilpotent matrix) for some positive integer k ≥ 1. Then prove that its eigenvalues are all 0. Theorem 6.1.11 Let A = [a ij ] be an n n matrix with eigenvalues λ 1 , λ 2 , . . . , λ n , not necessarily distinct. Then det(A) = n ¸ i=1 λ i and tr(A) = n ¸ i=1 a ii = n ¸ i=1 λ i . Proof. Since λ 1 , λ 2 , . . . , λ n are the n eigenvalues of A, by definition, det(A −λI n ) = p(λ) = (−1) n (λ −λ 1 )(λ −λ 2 ) (λ −λ n ). (6.1.5) (6.1.5) is an identity in λ as polynomials. Therefore, by substituting λ = 0 in (6.1.5), we get det(A) = (−1) n (−1) n n ¸ i=1 λ i = n ¸ i=1 λ i . 6.1. 
INTRODUCTION AND DEFINITIONS 113 Also, det(A−λIn) = 2 6 6 6 6 4 a11 −λ a12 · · · a1n a21 a22 −λ · · · a2n . . . . . . . . . . . . an1 an2 · · · ann −λ 3 7 7 7 7 5 (6.1.6) = a0 −λa1 +λ 2 a2 +· · · +(−1) n−1 λ n−1 an−1 + (−1) n λ n (6.1.7) for some a 0 , a 1 , . . . , a n−1 ∈ F. Note that a n−1 , the coefficient of (−1) n−1 λ n−1 , comes from the product (a 11 −λ)(a 22 −λ) (a nn −λ). So, a n−1 = n ¸ i=1 a ii = tr(A) by definition of trace. But , from (6.1.5) and (6.1.7), we get a 0 −λa 1 2 a 2 + + (−1) n−1 λ n−1 a n−1 + (−1) n λ n = (−1) n (λ −λ 1 )(λ −λ 2 ) (λ −λ n ). (6.1.8) Therefore, comparing the coefficient of (−1) n−1 λ n−1 , we have tr(A) = a n−1 = (−1)¦(−1) n ¸ i=1 λ i ¦ = n ¸ i=1 λ i . Hence, we get the required result. Exercise 6.1.12 1. Let A be a skew symmetric matrix of order 2n+1. Then prove that 0 is an eigenvalue of A. 2. Let A be a 3 3 orthogonal matrix (AA t = I).If det(A) = 1, then prove that there exists a non-zero vector v ∈ R 3 such that Av = v. Let A be an n n matrix. Then in the proof of the above theorem, we observed that the charac- teristic equation det(A − λI) = 0 is a polynomial equation of degree n in λ. Also, for some numbers a 0 , a 1 , . . . , a n−1 ∈ F, it has the form λ n +a n−1 λ n−1 +a n−2 λ 2 + a 1 λ +a 0 = 0. Note that, in the expression det(A −λI) = 0, λ is an element of F. Thus, we can only substitute λ by elements of F. It turns out that the expression A n +a n−1 A n−1 +a n−2 A 2 + a 1 A+a 0 I = 0 holds true as a matrix identity. This is a celebrated theorem called the Cayley Hamilton Theorem. We state this theorem without proof and give some implications. Theorem 6.1.13 (Cayley Hamilton Theorem) Let A be a square matrix of order n. Then A satisfies its characteristic equation. That is, A n +a n−1 A n−1 +a n−2 A 2 + a 1 A+a 0 I = 0 holds true as a matrix identity. 114 CHAPTER 6. EIGENVALUES, EIGENVECTORS AND DIAGONALISATION Some of the implications of Cayley Hamilton Theorem are as follows. Remark 6.1.14 1. Let A = ¸ 0 1 0 0 ¸ . Then its characteristic polynomial is p(λ) = λ 2 . Also, for the function, f(x) = x, f(0) = 0, and f(A) = A = 0. This shows that the condition f(λ) = 0 for each eigenvalue λ of A does not imply that f(A) = 0. 2. Suppose we are given a square matrix A of order n and we are interested in calculating A where is large compared to n. Then we can use the division algorithm to find numbers α 0 , α 1 , . . . , α n−1 and a polynomial f(λ) such that λ = f(λ) λ n +a n−1 λ n−1 +a n−2 λ 2 + a 1 λ +a 0 0 +λα 1 + +λ n−1 α n−1 . Hence, by the Cayley Hamilton Theorem, A = α 0 I +α 1 A + +α n−1 A n−1 . That is, we just need to compute the powers of A till n −1. In the language of graph theory, it says the following: “Let G be a graph on n vertices. Suppose there is no path of length n − 1 or less from a vertex v to a vertex u of G. Then there is no path from v to u of any length. That is, the graph G is disconnected and v and u are in different components.” 3. Let A be a non-singular matrix of order n. Then note that a n = det(A) = 0 and A −1 = −1 a n [A n−1 +a n−1 A n−2 + +a 1 I]. This matrix identity can be used to calculate the inverse. Note that the vector A −1 (as an element of the vector space of all n ×n matrices) is a linear combination of the vectors I, A, . . . , A n−1 . Exercise 6.1.15 Find inverse of the following matrices by using the Cayley Hamilton Theorem i) 2 3 4 5 6 7 1 1 2 ¸ ¸ ¸ ii) −1 −1 1 1 −1 1 0 1 1 ¸ ¸ ¸ iii) 1 −2 −1 −2 1 −1 0 −1 2 ¸ ¸ ¸. Theorem 6.1.16 If λ 1 , λ 2 , . . . 
, λ k are distinct eigenvalues of a matrix A with corresponding eigenvectors x 1 , x 2 , . . . , x k , then the set ¦x 1 , x 2 , . . . , x k ¦ is linearly independent. Proof. The proof is by induction on the number m of eigenvalues. The result is obviously true if m = 1 as the corresponding eigenvector is non-zero and we know that any set containing exactly one non-zero vector is linearly independent. Let the result be true for m, 1 ≤ m < k. We prove the result for m+ 1. We consider the equation c 1 x 1 +c 2 x 2 + +c m+1 x m+1 = 0 (6.1.9) for the unknowns c 1 , c 2 , . . . , c m+1 . We have 0 = A0 = A(c 1 x 1 +c 2 x 2 + +c m+1 x m+1 ) = c 1 Ax 1 +c 2 Ax 2 + +c m+1 Ax m+1 = c 1 λ 1 x 1 +c 2 λ 2 x 2 + +c m+1 λ m+1 x m+1 . (6.1.10) 6.2. DIAGONALISATION 115 From Equations (6.1.9) and (6.1.10), we get c 2 2 −λ 1 )x 2 +c 3 3 −λ 1 )x 3 + +c m+1 m+1 −λ 1 )x m+1 = 0. This is an equation in m eigenvectors. So, by the induction hypothesis, we have c i i −λ 1 ) = 0 for 2 ≤ i ≤ m+ 1. But the eigenvalues are distinct implies λ i − λ 1 = 0 for 2 ≤ i ≤ m + 1. We therefore get c i = 0 for 2 ≤ i ≤ m+ 1. Also, x 1 = 0 and therefore (6.1.9) gives c 1 = 0. Thus, we have the required result. We are thus lead to the following important corollary. Corollary 6.1.17 The eigenvectors corresponding to distinct eigenvalues of an n n matrix A are linearly independent. Exercise 6.1.18 1. For an n n matrix A, prove the following. (a) A and A t have the same set of eigenvalues. (b) If λ is an eigenvalue of an invertible matrix A then 1 λ is an eigenvalue of A −1 . (c) If λ is an eigenvalue of A then λ k is an eigenvalue of A k for any positive integer k. (d) If A and B are n n matrices with A nonsingular then BA −1 and A −1 B have the same set of eigenvalues. In each case, what can you say about the eigenvectors? 2. Let A and B be 2 2 matrices for which det(A) = det(B) and tr(A) = tr(B). (a) Do A and B have the same set of eigenvalues? (b) Give examples to show that the matrices A and B need not be similar. 3. Let (λ 1 , u) be an eigenpair for a matrix A and let (λ 2 , u) be an eigenpair for another matrix B. (a) Then prove that (λ 1 2 , u) is an eigenpair for the matrix A +B. (b) Give an example to show that if λ 1 , λ 2 are respectively the eigenvalues of A and B, then λ 1 2 need not be an eigenvalue of A +B. 4. Let λ i , 1 ≤ i ≤ n be distinct non-zero eigenvalues of an n n matrix A. Let u i , 1 ≤ i ≤ n be the corresponding eigenvectors. Then show that B = ¦u 1 , u 2 , . . . , u n ¦ forms a basis of F n (F). If [b] B = (c 1 , c 2 , . . . , c n ) t then show that Ax = b has the unique solution x = c 1 λ 1 u 1 + c 2 λ 2 u 2 + + c n λ n u n . 6.2 Diagonalisation Let A be a square matrix of order n and let T A : F n −→F n be the corresponding linear transformation. In this section, we ask the question “does there exist a basis B of F n such that T A [B, B], the matrix of the linear transformation T A , is in the simplest possible form.” We know that, the simplest form for a matrix is the identity matrix and the diagonal matrix. In this section, we show that for a certain class of matrices A, we can find a basis B such that T A [B, B] is a diagonal matrix, consisting of the eigenvalues of A. This is equivalent to saying that A is similar to a diagonal matrix. To show the above, we need the following definition. 116 CHAPTER 6. 
EIGENVALUES, EIGENVECTORS AND DIAGONALISATION Definition 6.2.1 (Matrix Diagonalisation) A matrix A is said to be diagonalisable if there exists a non- singular matrix P such that P −1 AP is a diagonal matrix. Remark 6.2.2 Let A be an n n diagonalisable matrix with eigenvalues λ 1 , λ 2 , . . . , λ n . By definition, A is similar to a diagonal matrix D. Observe that D = diag(λ 1 , λ 2 , . . . , λ n ) as similar matrices have the same set of eigenvalues and the eigenvalues of a diagonal matrix are its diagonal entries. Example 6.2.3 Let A = ¸ 0 1 −1 0 ¸ . Then we have the following: 1. Let V = R 2 . Then A has no real eigenvalue (see Example 6.1.8 and hence A doesn’t have eigenvectors that are vectors in R 2 . Hence, there does not exist any non-singular 2 2 real matrix P such that P −1 AP is a diagonal matrix. 2. In case, V = C 2 (C), the two complex eigenvalues of A are −i, i and the corresponding eigenvectors are (i, 1) t and (−i, 1) t , respectively. Also, (i, 1) t and (−i, 1) t can be taken as a basis of C 2 (C). Define a 2 2 complex matrix by U = 1 2 ¸ i −i 1 1 ¸ . Then U AU = ¸ −i 0 0 i ¸ . Theorem 6.2.4 let A be an nn matrix. Then A is diagonalisable if and only if A has n linearly independent eigenvectors. Proof. Let A be diagonalisable. Then there exist matrices P and D such that P −1 AP = D = diag(λ 1 , λ 2 , . . . , λ n ). Or equivalently, AP = PD. Let P = [u 1 , u 2 , . . . , u n ]. Then AP = PD implies that Au i = d i u i for 1 ≤ i ≤ n. Since u i ’s are the columns of a non-singular matrix P, they are non-zero and so for 1 ≤ i ≤ n, we get the eigenpairs (d i , u i ) of A. Since, u i ’s are columns of the non-singular matrix P, using Corollary 4.3.9, we get u 1 , u 2 , . . . , u n are linearly independent. Thus we have shown that if A is diagonalisable then A has n linearly independent eigenvectors. Conversely, suppose A has n linearly independent eigenvectors u i , 1 ≤ i ≤ n with eigenvalues λ i . Then Au i = λ i u i . Let P = [u 1 , u 2 , . . . , u n ]. Since u 1 , u 2 , . . . , u n are linearly independent, by Corollary 4.3.9, P is non-singular. Also, AP = [Au 1 , Au 2 , . . . , Au n ] = [λ 1 u 1 , λ 2 u 2 , . . . , λ n u n ] = [u 1 , u 2 , . . . , u n ] λ 1 0 0 0 λ 2 0 . . . . . . . . . 0 0 λ n ¸ ¸ ¸ ¸ ¸ ¸ = PD. Therefore the matrix A is diagonalisable. Corollary 6.2.5 let A be an n n matrix. Suppose that the eigenvalues of A are distinct. Then A is diagonalisable. 6.2. DIAGONALISATION 117 Proof. As A is an nn matrix, it has n eigenvalues. Since all the eigenvalues of A are distinct, by Corol- lary 6.1.17, the n eigenvectors are linearly independent. Hence, by Theorem 6.2.4, A is diagonalisable. Corollary 6.2.6 Let A be an n n matrix with λ 1 , λ 2 , . . . , λ k as its distinct eigenvalues and p(λ) as its characteristic polynomial. Suppose that for each i, 1 ≤ i ≤ k, (x − λ i ) mi divides p(λ) but (x − λ i ) mi+1 does not divides p(λ) for some positive integers m i . Then A is diagonalisable if and only if dim ker(A −λ i I) = m i for each i, 1 ≤ i ≤ k. Or equivalently A is diagonalisable if and only if rank(A −λ i I) = n −m i for each i, 1 ≤ i ≤ k. Proof. As A is diagonalisable, by Theorem 6.2.4, A has n linearly independent eigenvalues. Also, k ¸ i=1 m i = n as deg(p(λ)) = n. Hence, for each eigenvalue λ i , 1 ≤ i ≤ k, A has exactly m i linearly independent eigenvectors. Thus, for each i, 1 ≤ i ≤ k, the homogeneous linear system (A − λ i I)x = 0 has exactly m i linearly independent vectors in its solution set. Therefore, dim ker(A − λ i I) ≥ m i . 
Indeed dim ker(A −λ i I) = m i for 1 ≤ i ≤ k follows from a simple counting argument. Now suppose that for each i, 1 ≤ i ≤ k, dim ker(A−λ i I) = m i . Then for each i, 1 ≤ i ≤ k, we can choose m i linearly independent eigenvectors. Also by Corollary 6.1.17, the eigenvectors corresponding to distinct eigenvalues are linearly independent. Hence A has n = k ¸ i=1 m i linearly independent eigenvectors. Hence by Theorem 6.2.4, A is diagonalisable. Example 6.2.7 1. Let A = 2 1 1 1 2 1 0 −1 1 ¸ ¸ ¸. Then det(A − λI) = (2 − λ) 2 (1 − λ). Hence, A has eigenvalues 1, 2, 2. It is easily seen that 1, (1, 0, −1) t and ( 2, (1, 1, −1) t are the only eigenpairs. That is, the matrix A has exactly one eigenvector corresponding to the repeated eigenvalue 2. Hence, by Theorem 6.2.4, the matrix A is not diagonalisable. 2. Let A = 2 1 1 1 2 1 1 1 2 ¸ ¸ ¸. Then det(A − λI) = (4 − λ)(1 − λ) 2 . Hence, A has eigenvalues 1, 1, 4. It can be easily verified that (1, −1, 0) t and (1, 0, −1) t correspond to the eigenvalue 1 and (1, 1, 1) t corresponds to the eigenvalue 4. Note that the set ¦(1, −1, 0) t , (1, 0, −1) t ¦ consisting of eigenvectors corresponding to the eigenvalue 1 are not orthogonal. This set can be replaced by the orthogonal set ¦(1, 0, −1) t , (1, −2, 1) t ¦ which still consists of eigenvectors corresponding to the eigenvalue 1 as (1, −2, 1) = 2(1, −1, 0) − (1, 0, −1). Also, the set ¦(1, 1, 1), (1, 0, −1), (1, −2, 1)¦ forms a basis of R 3 . So, by Theorem 6.2.4, the matrix A is diagonalisable. Also, if U = 1 3 1 2 1 6 1 3 0 −2 6 1 3 1 2 1 6 ¸ ¸ ¸ is the corresponding unitary matrix then U AU = diag(4, 1, 1). Observe that the matrix A is a symmetric matrix. In this case, the eigenvectors are mutually orthogonal. In general, for any nn real symmetric matrix A, there always exist n eigenvectors and they are mutually orthogonal. This result will be proved later. Exercise 6.2.8 1. By finding the eigenvalues of the following matrices, justify whether or not A = PDP −1 for some real non-singular matrix P and a real diagonal matrix D. i) ¸ cos θ sin θ −sinθ cos θ ¸ ii) ¸ cos θ sin θ sinθ −cos θ ¸ for any θ with 0 ≤ θ ≤ 2π. 118 CHAPTER 6. EIGENVALUES, EIGENVECTORS AND DIAGONALISATION 2. Let A be an n n matrix and B an m m matrix. Suppose C = ¸ A 0 0 B ¸ . Then show that C is diagonalisable if and only if both A and B are diagonalisable. 3. Let T : R 5 −→R 5 be a linear transformation with rank (T −I) = 3 and ^(T) = ¦(x 1 , x 2 , x 3 , x 4 , x 5 ) ∈ R 5 [ x 1 +x 4 +x 5 = 0, x 2 +x 3 = 0¦. Then (a) determine the eigenvalues of T? (b) find the number of linearly independent eigenvectors corresponding to each eigenvalue? 4. Let A be a non-zero square matrix such that A 2 = 0. Show that A cannot be diagonalised. [Hint: Use Remark 6.2.2.] 5. Are the following matrices diagonalisable? i) 1 3 2 1 0 2 3 1 0 0 −1 1 0 0 0 4 ¸ ¸ ¸ ¸ ¸ , ii) 1 0 −1 0 1 0 0 0 2 ¸ ¸ ¸, iii) 1 −3 3 0 −5 6 0 −3 4 ¸ ¸ ¸. 6.3 Diagonalisable matrices In this section, we will look at some special classes of square matrices which are diagonalisable. We will also be dealing with matrices having complex entries and hence for a matrix A = [a ij ], recall the following definitions. Definition 6.3.1 (Special Matrices) 1. A = (a ji ), is called the conjugate transpose of the matrix A. Note that A = A t = A t . 2. A square matrix A with complex entries is called (a) a Hermitian matrix if A = A. (b) a unitary matrix if A A = A A = I n . (c) a skew-Hermitian matrix if A = −A. (d) a normal matrix if A A = AA . 3. 
A square matrix A with real entries is called (a) a symmetric matrix if A t = A. (b) an orthogonal matrix if A A t = A t A = I n . (c) a skew-symmetric matrix if A t = −A. Note that a symmetric matrix is always Hermitian, a skew-symmetric matrix is always skew-Hermitian and an orthogonal matrix is always unitary. Each of these matrices are normal. If A is a unitary matrix then A = A −1 . Example 6.3.2 1. Let B = ¸ i 1 −1 i ¸ . Then B is skew-Hermitian. 6.3. DIAGONALISABLE MATRICES 119 2. Let A = 1 2 ¸ 1 i i 1 ¸ and B = ¸ 1 1 −1 1 ¸ . Then A is a unitary matrix and B is a normal matrix. Note that 2A is also a normal matrix. Definition 6.3.3 (Unitary Equivalence) Let A and B be two n n matrices. They are called unitarily equivalent if there exists a unitary matrix U such that A = U BU. Exercise 6.3.4 1. Let A be any matrix. Then A = 1 2 (A + A ) + 1 2 (A − A ) where 1 2 (A + A ) is the Hermitian part of A and 1 2 (A −A ) is the skew-Hermitian part of A. 2. Every matrix can be uniquely expressed as A = S +iT where both S and T are Hermitian matrices. 3. Show that A−A is always skew-Hermitian. 4. Does there exist a unitary matrix U such that UAU −1 = B where A = 1 1 4 0 2 2 0 0 3 ¸ ¸ ¸ and B = 2 −1 3 2 0 1 2 0 0 3 ¸ ¸ ¸. Proposition 6.3.5 Let A be an n n Hermitian matrix. Then all the eigenvalues of A are real. Proof. Let (λ, x) be an eigenpair. Then Ax = λx and A = A implies x A = x A = (Ax) = (λx) = λx . Hence λx x = x (λx) = x (Ax) = (x A)x = (λx )x = λx x. But x is an eigenvector and hence x = 0 and so the real number |x| 2 = x x is non-zero as well. Thus λ = λ. That is, λ is a real number. Theorem 6.3.6 Let A be an n n Hermitian matrix. Then A is unitarily diagonalisable. That is, there exists a unitary matrix U such that U AU = D; where D is a diagonal matrix with the eigenvalues of A as the diagonal entries. In other words, the eigenvectors of A form an orthonormal basis of C n . Proof. We will prove the result by induction on the size of the matrix. The result is clearly true if n = 1. Let the result be true for n = k − 1. we will prove the result in case n = k. So, let A be a k k matrix and let (λ 1 , x) be an eigenpair of A with |x| = 1. We now extend the linearly independent set ¦x¦ to form an orthonormal basis ¦x, u 2 , u 3 , . . . , u k ¦ (using Gram-Schmidt Orthogonalisation) of C k . As ¦x, u 2 , u 3 , . . . , u k ¦ is an orthonormal set, u i x = 0 for all i = 2, 3, . . . , k. Therefore, observe that for all i, 2 ≤ i ≤ k, (Au i ) x = (u i ∗ A )x = u i (A x) = u i (Ax) = u i 1 x) = λ 1 (u i x) = 0. 120 CHAPTER 6. EIGENVALUES, EIGENVECTORS AND DIAGONALISATION Hence, we also have x (Au i ) = 0 for 2 ≤ i ≤ k. Now, define U 1 = [x, u 2 , , u k ] (with x, u 2 , . . . , u k as columns of U 1 ). Then the matrix U 1 is a unitary matrix and U −1 1 AU 1 = U 1 AU 1 = U 1 [Ax Au 2 Au k ] = x u 2 . . . u k ¸ ¸ ¸ ¸ ¸ ¸ 1 x Au 2 Au k ] = λ 1 x x x Au k u 2 1 x) u 2 (Au k ) . . . . . . . . . u k 1 x) u k (Au k ) ¸ ¸ ¸ ¸ ¸ ¸ = λ 1 0 0 . . . B 0 ¸ ¸ ¸ ¸ ¸ ¸ , where B is a (k − 1) (k − 1) matrix. As the matrix U 1 is unitary, U 1 = U −1 1 . So, A = A gives (U −1 1 AU 1 ) = U −1 1 AU 1 . This condition, together with the fact that λ 1 is a real number (use Propo- sition 6.3.5), implies that B = B. That is, B is also a Hermitian matrix. Therefore, by induction hypothesis there exists a (k −1) (k −1) unitary matrix U 2 such that U −1 2 BU 2 = D 2 = diag(λ 2 , . . . , λ k ). Recall that , the entries λ i , for 2 ≤ i ≤ k are the eigenvalues of the matrix B. 
We also know that two similar matrices have the same set of eigenvalues. Hence, the eigenvalues of A are λ 1 , λ 2 , . . . , λ k . Define U = U 1 ¸ 1 0 0 U 2 ¸ . Then U is a unitary matrix and U −1 AU = U1 " 1 0 0 U2 #! −1 A U1 " 1 0 0 U2 #! = " 1 0 0 U −1 2 # U −1 1 ! A U1 " 1 0 0 U2 #! = " 1 0 0 U −1 2 # ` U −1 1 AU1 ´ " 1 0 0 U2 # = " 1 0 0 U −1 2 #" λ1 0 0 B #" 1 0 0 U2 # = " λ1 0 0 U −1 2 BU2 # = " λ1 0 0 D2 # . Thus, U −1 AU is a diagonal matrix with diagonal entries λ 1 , λ 2 , . . . , λ k , the eigenvalues of A. Hence, the result follows. Corollary 6.3.7 Let A be an n n real symmetric matrix. Then 1. the eigenvalues of A are all real, 2. the corresponding eigenvectors can be chosen to have real entries, and 3. the eigenvectors also form an orthonormal basis of R n . Proof. As A is symmetric, A is also an Hermitian matrix. Hence, by Proposition 6.3.5, the eigenvalues of A are all real. Let (λ, x) be an eigenpair of A. Suppose x t ∈ C n . Then there exist y t , z t ∈ R n such that x = y +iz. So, Ax = λx =⇒A(y +iz) = λ(y +iz). 6.3. DIAGONALISABLE MATRICES 121 Comparing the real and imaginary parts, we get Ay = λy and Az = λz. Thus, we can choose the eigenvectors to have real entries. To prove the orthonormality of the eigenvectors, we proceed on the lines of the proof of Theorem Exercise 6.3.8 1. Let A be a skew-Hermitian matrix. Then all the eigenvalues of A are either zero or purely imaginary. Also, the eigenvectors corresponding to distinct eigenvalues are mutually orthogonal. [Hint: Carefully study the proof of Theorem 6.3.6.] 2. Let A be an n n unitary matrix. Then (a) the rows of A form an orthonormal basis of C n . (b) the columns of A form an orthonormal basis of C n . (c) for any two vectors x, y ∈ C n×1 , 'Ax, Ay` = 'x, y`. (d) for any vector x ∈ C n×1 , |Ax| = |x|. (e) for any eigenvalue λ A, [λ[ = 1. (f) the eigenvectors x, y corresponding to distinct eigenvalues λ and µ satisfy 'x, y` = 0. That is, if (λ, x) and (µ, y) are eigenpairs, with λ = µ, then x and y are mutually orthogonal. 3. Let A be a normal matrix. Then, show that if (λ, x) is an eigenpair for A then (λ, x) is an eigenpair for A . 4. Show that the matrices A = ¸ 4 4 0 4 ¸ and B = ¸ 10 9 −4 −2 ¸ are similar. Is it possible to find a unitary matrix U such that A = U BU? 5. Let A be a 2 2 orthogonal matrix. Then prove the following: (a) if det(A) = 1, then A = ¸ cos θ −sinθ sinθ cos θ ¸ for some θ, 0 ≤ θ < 2π. (b) if det A = −1, then there exists a basis of R 2 in which the matrix of A looks like ¸ 1 0 0 −1 ¸ . 6. Describe all 2 2 orthogonal matrices. 7. Let A = 2 1 1 1 2 1 1 1 2 ¸ ¸ ¸. Determine A 301 . 8. Let A be a 3 3 orthogonal matrix. Then prove the following: (a) if det(A) = 1, then A is a rotation about a fixed axis, in the sense that A has an eigenpair (1, x) such that the restriction of A to the plane x is a two dimensional rotation of x . (b) if det A = −1, then the action of A corresponds to a reflection through a plane P, followed by a rotation about the line through the origin that is perpendicular to P. Remark 6.3.9 In the previous exercise, we saw that the matrices A = ¸ 4 4 0 4 ¸ and B = ¸ 10 9 −4 −2 ¸ are similar but not unitarily equivalent, whereas unitary equivalence implies similarity equivalence as U = U −1 . But in numerical calculations, unitary transformations are preferred as compared to similarity transformations. The main reasons being: 122 CHAPTER 6. EIGENVALUES, EIGENVECTORS AND DIAGONALISATION 1. 
Exercise 6.3.8.2 implies that an orthonormal change of basis leaves unchanged the sum of squares of the absolute values of the entries which need not be true under a non-orthonormal change of basis. 2. As U = U −1 for a unitary matrix U, unitary equivalence is computationally simpler. 3. Also in doing “conjugate transpose”, the loss of accuracy due to round-off errors doesn’t occur. We next prove the Schur’s Lemma and use it to show that normal matrices are unitarily diagonalis- able. Lemma 6.3.10 (Schur’s Lemma) Every n n complex matrix is unitarily similar to an upper triangular matrix. Proof. We will prove the result by induction on the size of the matrix. The result is clearly true if n = 1. Let the result be true for n = k − 1. we will prove the result in case n = k. So, let A be a k k matrix and let (λ 1 , x) be an eigenpair for A with |x| = 1. Now the linearly independent set ¦x¦ is extended, using the Gram-Schmidt Orthogonalisation, to get an orthonormal basis ¦x, u 2 , u 3 , . . . , u k ¦. Then U 1 = [x u 2 u k ] (with x, u 2 , . . . , u k as the columns of the matrix U 1 ) is a unitary matrix and U −1 1 AU 1 = U 1 AU 1 = U 1 [Ax Au 2 Au k ] = x u 2 . . . u k ¸ ¸ ¸ ¸ ¸ ¸ 1 x Au 2 Au k ] = λ 1 0 . . . B 0 ¸ ¸ ¸ ¸ ¸ ¸ where B is a (k −1) (k −1) matrix. By induction hypothesis there exists a (k −1) (k −1) unitary matrix U 2 such that U −1 2 BU 2 is an upper triangular matrix with diagonal entries λ 2 , . . . , λ k , the eigen values of the matrix B. Observe that since the eigenvalues of B are λ 2 , . . . , λ k the eigenvalues of A are λ 1 , λ 2 , . . . , λ k . Define U = U 1 ¸ 1 0 0 U 2 ¸ . Then check that U is a unitary matrix and U −1 AU is an upper triangular matrix with diagonal entries λ 1 , λ 2 , . . . , λ k , the eigenvalues of the matrix A. Hence, the result follows. Exercise 6.3.11 1. Let A be an nn real invertible matrix. Prove that there exists an orthogonal matrix P and a diagonal matrix D with positive diagonal entries such that AA t = PDP −1 . 2. Show that matrices A = 1 1 1 0 2 1 0 0 3 ¸ ¸ ¸ and B = 2 −1 2 0 1 0 0 0 3 ¸ ¸ ¸ are unitarily equivalent via the unitary matrix U = 1 2 1 1 0 1 −1 0 0 0 2 ¸ ¸ ¸. Hence, conclude that the upper triangular matrix obtained in the ”Schur’s Lemma” need not be unique. 3. Show that the normal matrices are diagonalisable. [Hint: Show that the matrix B in the proof of the above theorem is also a normal matrix and if T is an upper triangular matrix with T T = TT then T has to be a diagonal matrix]. Remark 6.3.12 (The Spectral Theorem for Normal Matrices) Let A be an n n normal matrix. Then the above exercise shows that there exists an orthonormal basis ¦x 1 , x 2 , . . . , x n ¦ of C n (C) such that Ax i = λ i x i for 1 ≤ i ≤ n. 6.4. SYLVESTER’S LAW OF INERTIA AND APPLICATIONS 123 4. Let A be a normal matrix. Prove the following: (a) if all the eigenvalues of A are 0, then A = 0, (b) if all the eigenvalues of A are 1, then A = I. We end this chapter with an application of the theory of diagonalisation to the study of conic sections in analytic geometry and the study of maxima and minima in analysis. 6.4 Sylvester’s Law of Inertia and Applications Definition 6.4.1 (Bilinear Form) Let A be a n n matrix with real entries. A bilinear form in x = (x 1 , x 2 , . . . , x n ) t , y = (y 1 , y 2 , . . . , y n ) t is an expression of the type Q(x, y) = x t Ay = n ¸ i,j=1 a ij x i y j . Observe that if A = I (the identity matrix) then the bilinear form reduces to the standard real inner product. 
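A small numerical check of this definition may help fix ideas before the discussion continues. The sketch below uses illustrative choices of the matrix $A$ and the vectors $x, y$ (none of them taken from the text) and assumes numpy is available.

```python
import numpy as np

# A sketch (illustrative choices, not from the text): evaluating the
# bilinear form Q(x, y) = x^t A y for a sample symmetric matrix A.
A = np.array([[3.0, 2.0],
              [2.0, 3.0]])
x = np.array([1.0, -1.0])
y = np.array([2.0, 0.5])

def Q(u, v):
    return u @ A @ v  # the bilinear form u^t A v

print(Q(x, y))                                # 1.5
print(np.isclose(Q(x, y), Q(y, x)))           # True here, since A^t = A
print(np.isclose(x @ np.eye(2) @ y, x @ y))   # A = I gives the usual inner product
```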
Also, if we want it to be symmetric in x and y then it is necessary and sufficient that a ij = a ji for all i, j = 1, 2, . . . , n. Why? Hence, any symmetric bilinear form is naturally associated with a real symmetric matrix. Definition 6.4.2 (Sesquilinear Form) Let A be a n n matrix with complex entries. A sesquilinear form in x = (x 1 , x 2 , . . . , x n ) t , y = (y 1 , y 2 , . . . , y n ) t is given by H(x, y) = n ¸ i,j=1 a ij x i y j . Note that if A = I (the identity matrix) then the sesquilinear form reduces to the standard complex inner product. Also, it can be easily seen that this form is ‘linear’ in the first component and ‘conjugate linear’ in the second component. Also, if we want H(x, y) = H(y, x) then the matrix A need to be an Hermitian matrix. Note that if a ij ∈ R and x, y ∈ R n , then the sesquilinear form reduces to a bilinear form. The expression Q(x, x) is called the quadratic form and H(x, x) the Hermitian form. We generally write Q(x) and H(x) in place of Q(x, x) and H(x, x), respectively. It can be easily shown that for any choice of x, the Hermitian form H(x) is a real number. Therefore, in matrix notation, for a Hermitian matrix A, the Hermitian form can be rewritten as H(x) = x t Ax, where x = (x 1 , x 2 , . . . , x n ) t , and A = [a ij ]. Example 6.4.3 Let A = ¸ 1 2 −i 2 +i 2 ¸ . Then check that A is an Hermitian matrix and for x = (x 1 , x 2 ) t , the Hermitian form H(x) = x Ax = (x 1 , x 2 ) ¸ 1 2 −i 2 +i 2 ¸ x 1 x 2 = x 1 x 1 + 2x 2 x 2 + (2 −i)x 1 x 2 + (2 +i)x 2 x 1 = [x 1 [ 2 + 2[x 2 [ 2 + 2Re[(2 −i)x 1 x 2 ] where ‘Re’ denotes the real part of a complex number. This shows that for every choice of x the Hermitian form is always real. Why? 124 CHAPTER 6. EIGENVALUES, EIGENVECTORS AND DIAGONALISATION The main idea is to express H(x) as sum of squares and hence determine the possible values that it can take. Note that if we replace x by cx, where c is any complex number, then H(x) simply gets multiplied by [c[ 2 and hence one needs to study only those x for which |x| = 1, i.e., x is a normalised vector. From Exercise 6.3.11.3 one knows that if A = A (A is Hermitian) then there exists a unitary matrix U such that U AU = D (D = diag(λ 1 , λ 2 , . . . , λ n ) with λ i ’s the eigenvalues of the matrix A which we know are real). So, taking z = U x (i.e., choosing z i ’s as linear combination of x j ’s with coefficients coming from the entries of the matrix U ), one gets H(x) = x Ax = z U AUz = z Dz = n ¸ i=1 λ i [z i [ 2 = n ¸ i=1 λ i n ¸ j=1 u ji x j 2 . (6.4.1) Thus, one knows the possible values that H(x) can take depending on the eigenvalues of the matrix A in case A is a Hermitian matrix. Also, for 1 ≤ i ≤ n, n ¸ j=1 u ji x j represents the principal axes of the conic that they represent in the n-dimensional space. Equation (6.4.1) gives one method of writing H(x) as a sum of n absolute squares of linearly inde- pendent linear forms. One can easily show that there are more than one way of writing H(x) as sum of squares. The question arises, “what can we say about the coefficients when H(x) has been written as sum of absolute squares”. This question is answered by ‘Sylvester’s law of inertia’ which we state as the next lemma. Lemma 6.4.4 Every Hermitian form H(x) = x Ax (with A an Hermitian matrix) in n variables can be written as H(x) = [y 1 [ 2 +[y 2 [ 2 + +[y p [ 2 −[y p+1 [ 2 − −[y r [ 2 where y 1 , y 2 , . . . , y r are linearly independent linear forms in x 1 , x 2 , . . . , x n , and the integers p and r, 0 ≤ p ≤ r ≤ n, depend only on A. Proof. 
From Equation (6.4.1) it is easily seen that H(x) has the required form. Need to show that p and r are uniquely given by A. Hence, let us assume on the contrary that there exist positive integers p, q, r, s with p > q such that H(x) = [y 1 [ 2 +[y 2 [ 2 + +[y p [ 2 −[y p+1 [ 2 − −[y r [ 2 = [z 1 [ 2 +[z 2 [ 2 + +[z q [ 2 −[z q+1 [ 2 − −[z s [ 2 . Since, y = (y 1 , y 2 , . . . , y n ) t and z = (z 1 , z 2 , . . . , z n ) t are linear combinations of x 1 , x 2 , . . . , x n , we can find a matrix B such that z = By. Choose y p+1 = y p+2 = = y r = 0. Since p > q, Theorem 2.5.1, gives the existence of finding nonzero values of y 1 , y 2 , . . . , y p such that z 1 = z 2 = = z q = 0. Hence, we get [y 1 [ 2 +[y 2 [ 2 + +[y p [ 2 = −([z q+1 [ 2 + +[z s [ 2 ). Now, this can hold only if y 1 = y 2 = = y p = 0, which gives a contradiction. Hence p = q. Similarly, the case r > s can be resolved. Note: The integer r is the rank of the matrix A and the number r − 2p is sometimes called the inertial degree of A. We complete this chapter by understanding the graph of ax 2 + 2hxy +by 2 + 2fx + 2gy +c = 0 for a, b, c, f, g, h ∈ R. We first look at the following example. 6.4. SYLVESTER’S LAW OF INERTIA AND APPLICATIONS 125 Example 6.4.5 Sketch the graph of 3x 2 + 4xy + 3y 2 = 5. Solution: Note that 3x 2 + 4xy + 3y 2 = [x, y] ¸ 3 2 2 3 ¸¸ x y ¸ . The eigenpairs for ¸ 3 2 2 3 ¸ are (5, (1, 1) t ), (1, (1, −1) t ). Thus, ¸ 3 2 2 3 ¸ = ¸ 1 2 1 2 1 2 1 2 ¸¸ 5 0 0 1 ¸ ¸ 1 2 1 2 1 2 1 2 ¸ . Let ¸ u v ¸ = ¸ 1 2 1 2 1 2 1 2 ¸¸ x y ¸ = ¸ x+y 2 x−y 2 ¸ . Then 3x 2 + 4xy + 3y 2 = [x, y] ¸ 3 2 2 3 ¸¸ x y ¸ = [x, y] ¸ 1 2 1 2 1 2 1 2 ¸¸ 5 0 0 1 ¸¸ 1 2 1 2 1 2 1 2 ¸¸ x y ¸ = u, v ¸ 5 0 0 1 ¸¸ u v ¸ = 5u 2 +v 2 . Thus the given graph reduces to 5u 2 +v 2 = 5 or equivalently u 2 + v 2 5 = 1. Therefore, the given graph represents an ellipse with the principal axes u = 0 and v = 0. That is, the principal axes are y +x = 0 and x −y = 0. The eccentricity of the ellipse is e = 2 5 , the foci are at the points S 1 = (− 2, 2) and S 2 = ( 2, − 2), and the equations of the directrices are x −y = ± 5 2 . S 1 S 2 Figure 6.1: Ellipse 126 CHAPTER 6. EIGENVALUES, EIGENVECTORS AND DIAGONALISATION Definition 6.4.6 (Associated Quadratic Form) Let ax 2 +2hxy +by 2 +2gx+2fy +c = 0 be the equation of a general conic. The quadratic expression ax 2 + 2hxy +by 2 = x, y ¸ a h h b ¸ ¸ x y ¸ is called the quadratic form associated with the given conic. We now consider the general conic. We obtain conditions on the eigenvalues of the associated quadratic form to characterise the different conic sections in R 2 (endowed with the standard inner product). Proposition 6.4.7 Consider the general conic ax 2 + 2hxy +by 2 + 2gx + 2fy +c = 0. Prove that this conic represents 1. an ellipse if ab −h 2 > 0, 2. a parabola if ab −h 2 = 0, and 3. a hyperbola if ab −h 2 < 0. Proof. Let A = ¸ a h h b ¸ . Then the associated quadratic form ax 2 + 2hxy +by 2 = x y A ¸ x y ¸ . As A is a symmetric matrix, by Corollary 6.3.7, the eigenvalues λ 1 , λ 2 of A are both real, the corre- sponding eigenvectors u 1 , u 2 are orthonormal and A is unitarily diagonalisable with A = ¸ u t 1 u t 2 ¸¸ λ 1 0 0 λ 2 ¸ u 1 u 2 . (6.4.2) Let ¸ u v ¸ = u 1 u 2 ¸ x y ¸ . Then ax 2 + 2hxy +by 2 = λ 1 u 2 + λ 2 v 2 and the equation of the conic section in the (u, v)-plane, reduces to λ 1 u 2 2 v 2 + 2g 1 u + 2f 1 v +c = 0. Now, depending on the eigenvalues λ 1 , λ 2 , we consider different cases: 1. λ 1 = 0 = λ 2 . Substituting λ 1 = λ 2 = 0 in (6.4.2) gives A = 0. 
Thus, the given conic reduces to a straight line 2g 1 u + 2f 1 v +c = 0 in the (u, v)-plane. 2. λ 1 = 0, λ 2 = 0. In this case, the equation of the conic reduces to λ 2 (v +d 1 ) 2 = d 2 u +d 3 for some d 1 , d 2 , d 3 ∈ R. (a) If d 2 = d 3 = 0, then in the (u, v)-plane, we get the pair of coincident lines v = −d 1 . 6.4. SYLVESTER’S LAW OF INERTIA AND APPLICATIONS 127 (b) If d 2 = 0, d 3 = 0. i. If λ 2 d 3 > 0, then we get a pair of parallel lines v = −d 1 ± d 3 λ 2 . ii. If λ 2 d 3 < 0, the solution set corresponding to the given conic is an empty set. (c) If d 2 = 0. Then the given equation is of the form Y 2 = 4aX for some translates X = x + α and Y = y +β and thus represents a parabola. Also, observe that λ 1 = 0 implies that the det(A) = 0. That is, ab −h 2 = det(A) = 0. 3. λ 1 > 0 and λ 2 < 0. Let λ 2 = −α 2 . Then the equation of the conic can be rewritten as λ 1 (u +d 1 ) 2 −α 2 (v +d 2 ) 2 = d 3 for some d 1 , d 2 , d 3 ∈ R. In this case, we have the following: (a) suppose d 3 = 0. Then the equation of the conic reduces to λ 1 (u +d 1 ) 2 −α 2 (v +d 2 ) 2 = 0. The terms on the left can be written as product of two factors as λ 1 , α 2 > 0. Thus, in this case, the given equation represents a pair of intersecting straight lines in the (u, v)-plane. (b) suppose d 3 = 0. As d 3 = 0, we can assume d 3 > 0. So, the equation of the conic reduces to λ 1 (u +d 1 ) 2 d 3 α 2 (v +d 2 ) 2 d 3 = 1. This equation represents a hyperbola in the (u, v)-plane, with principal axes u +d 1 = 0 and v +d 2 = 0. As λ 1 λ 2 < 0, we have ab −h 2 = det(A) = λ 1 λ 2 < 0. 4. λ 1 , λ 2 > 0. In this case, the equation of the conic can be rewritten as λ 1 (u +d 1 ) 2 2 (v +d 2 ) 2 = d 3 , for some d 1 , d 2 , d 3 ∈ R. we now consider the following cases: (a) suppose d 3 = 0. Then the equation of the ellipse reduces to a pair of perpendicular lines u +d 1 = 0 and v +d 2 = 0 in the (u, v)-plane. (b) suppose d 3 < 0. Then there is no solution for the given equation. Hence, we do not get any real ellipse in the (u, v)-plane. (c) suppose d 3 > 0. In this case, the equation of the conic reduces to λ 1 (u +d 1 ) 2 d 3 + α 2 (v +d 2 ) 2 d 3 = 1. This equation represents an ellipse in the (u, v)-plane, with principal axes u +d 1 = 0 and v +d 2 = 0. 128 CHAPTER 6. EIGENVALUES, EIGENVECTORS AND DIAGONALISATION Also, the condition λ 1 λ 2 > 0 implies that ab −h 2 = det(A) = λ 1 λ 2 > 0. Remark 6.4.8 Observe that the condition ¸ u v ¸ = u 1 u 2 ¸ x y ¸ implies that the principal axes of the conic are functions of the eigenvectors u 1 and u 2 . Exercise 6.4.9 Sketch the graph of the following surfaces: 1. x 2 + 2xy +y 2 −6x −10y = 3. 2. 2x 2 + 6xy + 3y 2 −12x −6y = 5. 3. 4x 2 −4xy + 2y 2 + 12x −8y = 10. 4. 2x 2 −6xy + 5y 2 −10x + 4y = 7. As a last application, we consider the following problem that helps us in understanding the quadrics. Let ax 2 +by 2 +cz 2 + 2dxy + 2exz + 2fyz + 2lx + 2my + 2nz +q = 0 (6.4.3) be a general quadric. Then we need to follow the steps given below to write the above quadric in the standard form and thereby get the picture of the quadric. The steps are: 1. Observe that this equation can be rewritten as x t Ax +b t x +q = 0, where A = a d e d b f e f c ¸ ¸ ¸, b = 2l 2m 2n ¸ ¸ ¸, and x = x y z ¸ ¸ ¸. 2. As the matrix A is symmetric matrix, find an orthogonal matrix P such that P t AP is a diagonal matrix. 3. Replace the vector x by y = P t x. 
Then writing y t = (y 1 , y 2 , y 3 ), the equation (6.4.3) reduces to λ 1 y 2 1 2 y 2 2 3 y 2 3 + 2l 1 y 1 + 2l 2 y 2 + 2l 3 y 3 +q = 0 (6.4.4) where λ 1 , λ 2 , λ 3 are the eigenvalues of A. 4. Complete the squares, if necessary, to write the equation (6.4.4) in terms of the variables z 1 , z 2 , z 3 so that this equation is in the standard form. 5. Use the condition y = P t x to determine the centre and the planes of symmetry of the quadric in terms of the original system. 6.4. SYLVESTER’S LAW OF INERTIA AND APPLICATIONS 129 Example 6.4.10 Determine the quadric 2x 2 + 2y 2 + 2z 2 + 2xy + 2xz + 2yz + 4x + 2y + 4z + 2 = 0. Solution: In this case, A = 2 1 1 1 2 1 1 1 2 ¸ ¸ ¸ and b = 4 2 4 ¸ ¸ ¸ and q = 2. Check that for the orthonormal matrix P = 1 3 1 2 1 6 1 3 −1 2 1 6 1 3 0 −2 6 ¸ ¸ ¸, P t AP = 4 0 0 0 1 0 0 0 1 ¸ ¸ ¸. So, the equation of the quadric reduces to 4y 2 1 +y 2 2 + y 2 3 + 10 3 y 1 + 2 2 y 2 2 6 y 3 + 2 = 0. Or equivalently, 4(y 1 + 5 4 3 ) 2 + (y 2 + 1 2 ) 2 + (y 3 1 6 ) 2 = 9 12 . So, the equation of the quadric in standard form is 4z 2 1 +z 2 2 +z 2 3 = 9 12 , where the point (x, y, z) t = P( −5 4 3 , −1 2 , 1 6 ) t = ( −3 4 , 1 4 , −3 4 ) t is the centre. The calculation of the planes of symmetry is left as an exercise to the reader. 130 CHAPTER 6. EIGENVALUES, EIGENVECTORS AND DIAGONALISATION Part II Ordinary Differential Equation 131 Chapter 7 Differential Equations 7.1 Introduction and Preliminaries There are many branches of science and engineering where differential equations arise naturally. Now a days, it finds applications in many areas including medicine, economics and social sciences. In this context, the study of differential equations assumes importance. In addition, in the study of differential equations, we also see the applications of many tools from analysis and linear algebra. Without spending more time on motivation, (which will be clear as we go along) let us start with the following notations. Let x be an independent variable and let y be a dependent variable of x. The derivatives of y (with respect to x) are denoted by y = dy dx , y = d 2 y dx 2 , . . . , y (k) = d (k) y dx (k) for k ≥ 3. The independent variable will be defined for an interval I; where I is either R or an interval a < x < b ⊂ R. With these notations, we are ready to define the term “differential equation”. A differential equation is a relationship between the independent variable and the unknown dependent function along with its derivatives. More precisely, we have the following definition. Definition 7.1.1 (Ordinary Differential Equation, ODE) An equation of the form f x, y, y , . . . , y (n) = 0 for x ∈ I (7.1.1) is called an Ordinary Differential Equation; where f is a known function from I R n+1 to R. Remark 7.1.2 1. The aim of studying the ODE (7.1.1) is to determine the unknown function y which satisfies the differential equation under suitable conditions. 2. Usually (7.1.1) is written as f x, y, y , . . . , y (n) = 0, and the interval I is not mentioned in most of the examples. Some examples of differential equations are 1. y = 6 sinx + 9; 2. y + 2y 2 = 0; 3. y = x + cos y; 4. (y ) 2 +y = 0. 133 134 CHAPTER 7. DIFFERENTIAL EQUATIONS 5. y +y = 0. 6. y +y = 0. 7. y (3) = 0. 8. y +msin y = 0. Definition 7.1.3 (Order of a Differential Equation) The order of a differential equation is the order of the highest derivative occurring in the equation. In Example 7.1, the order of Equations 1, 3, 4, 5 are one, that of Equations 2, 6 and 8 are two and the Equation 7 has order three. 
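Before we formalise what a solution means, note that statements of this kind can be checked mechanically with a computer algebra system. The sketch below assumes sympy is available; the equation $y'' + y = 0$ and the candidate $a\sin x + b\cos x$ are illustrative choices, not examples from the text.

```python
import sympy as sp

# A sketch (assuming sympy): verify that y = a*sin(x) + b*cos(x)
# satisfies the second order equation y'' + y = 0.
x, a, b = sp.symbols('x a b')
y = a * sp.sin(x) + b * sp.cos(x)

residual = sp.diff(y, x, 2) + y   # substitute the candidate into y'' + y
print(sp.simplify(residual))      # 0, so the candidate is a solution

# The order of the equation is the order of the highest derivative present: here 2.
```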
Definition 7.1.4 (Solution) A function y = φ(x) is called a solution of the differential equation (7.1.1) on I if 1. φ(x) is differentiable (as many times as the order of the equation) on I and 2. φ(x) satisfies the differential equation for all x ∈ I. That is, f x, φ(x), φ (x), . . . , φ (n) (x) = 0 for all x ∈ I. If y = φ(x) is a solution of an ODE (7.1.1) on I, we also say that φ(x) satisfies (7.1.1). Sometimes a solution y = φ(x) is also called an integral. Example 7.1.5 1. Consider the differential equation y + 2y = 0 on R. We see that if we take y(x) = ce −2x , then y(x) is differentiable, y (x) = −2ce −2x and therefore y (x) + 2y(x) = −2ce −2x + 2ce −2x = 0 for all x ∈ R. Hence, y(x) = ce −2x is a solution of the given differential equation for all x ∈ R. 2. It can be easily verified that for any constant a ∈ R, y = a 1 −x is a solution of the differential equation (1 −x)y −y = 0 on any interval that does not contain the point x = 1 as the function y = a 1 −x is not defined at x = 1. Furthere it can be shown that y(x) ≡ 0 is the only solution for this equation whenever the interval I contains the point x = 1. 3. Consider the differential equation (x −1) +yy = 0 on [−1, 1]. It can be easily verified that a solution y = φ(x) of this differential equation satisfies the relation (x −1) 2 2 (x) = 1. Definition 7.1.6 (Explicit/Implicit Solution) A solution of the form y = φ(x) is called an explicit so- lution (e.g., see Examples 7.1.5.1 and 7.1.5.2). If y is given by an implicit relation h(x, y) = 0 and satisfies the differential equation, then y is called an implicit solution (e.g., see Example 7.1.5.3). Remark 7.1.7 Since the solution is obtained by integration, we may expect a constant of integration (for each integration) to appear in a solution of a differential equation. If the order of the ODE is n, we expect n(n ≥ 1) arbitrary constants. To start with, let us try to understand the structure of a first order differential equation of the form f(x, y, y ) = 0 (7.1.2) and move to higher orders later. 7.1. INTRODUCTION AND PRELIMINARIES 135 Definition 7.1.8 (General Solution) A function φ(x, c) is called a general solution of (7.1.2) on an interval I ⊂ R, if φ(x, c) is a solution of (7.1.2) for each x ∈ I, for an arbitrary constant c. Remark 7.1.9 The family of functions ¦φ(, c) : c is a constant such that φ(, c) is well defined¦ is called a one parameter family of functions and c is called a parameter. In other words, a general solution of (7.1.2) is nothing but a one parameter family of solutions of (7.1.2). Example 7.1.10 1. Determine a differential equation for which a family of circles with center at (1, 0) and arbitrary radius, a is an implicit solution. Solution: This family is represented by the implicit relation (x −1) 2 +y 2 = a 2 , (7.1.3) where a is a real constant. Then y is a solution of the differential equation (x −1) +y dy dx = 0. (7.1.4) The function y satisfying (7.1.3) is a one parameter family of solutions or a general solution of (7.1.4). 2. Consider the one parameter family of circles with center at (c, 0) and unit radius. The family is represented by the implicit relation (x −c) 2 +y 2 = 1, (7.1.5) where c is a real constant. Show that y satisfies yy 2 +y 2 = 1. Solution: We note that, differentiation of the given equation, leads to (x −c) +yy = 0. Now, eliminating c from the two equations, we get (yy ) 2 +y 2 = 1. In Example 7.1.10.1, we see that y is not defined explicitly as a function of x but implicitly defined by (7.1.3). 
On the other hand y = 1 1 −x is an explicit solution in Example 7.1.5.2. Let us now look at some geometrical interpretations of the differential Equation (7.1.2). The Equation (7.1.2) is a relation between x, y and the slope of the function y at the point x. For instance, let us find the equation of the curve passing through (0, 1 2 ) and whose slope at each point (x, y) is − x 4y . If y is the required curve, then y satisfies dy dx = − x 4y , y(0) = 1 2 . It is easy to verify that y satisfies the equation x 2 + 4y 2 = 1. Exercise 7.1.11 1. Find the order of the following differential equations: (a) y 2 + sin(y ) = 1. (b) y + (y ) 2 = 2x. (c) (y ) 3 +y −2y 4 = −1. 2. Show that for each k ∈ R, y = ke x is a solution of y = y. 136 CHAPTER 7. DIFFERENTIAL EQUATIONS 3. Find a differential equation satisfied by the given family of curves: (a) y = mx, m real (family of lines). (b) y 2 = 4ax, a real (family of parabolas). (c) x = r 2 cos θ, y = r 2 sin θ, θ is a parameter of the curve and r is a real number (family of circles in parametric representation). 4. Find the equation of the curve C which passes through (1, 0) and whose slope at each point (x, y) is −x y . 7.2 Separable Equations In general, it may not be possible to find solutions of y = f(x, y), where f is an arbitrary continuous function. But there are special cases of the function f for which the above equation can be solved. One such set of equations is y = g(y)h(x). (7.2.1) The Equation (7.2.1) is called a Separable Equation and is equivalent to 1 g(y) dy dx = h(x). Integrating with respect to x, we get H(x) = h(x)dx = 1 g(y) dy dx dy = dy g(y) = G(y) +c, where c is a constant. Hence, its implicit solution is G(y) +c = H(x). Example 7.2.1 1. Solve: y = y(y −1). Solution: Here, g(y) = y (y −1) and h(x) = 1. Then dy y (y −1) = dx. By using partial fractions and integrating, we get y = 1 1 −e x+c , where c is a constant of integration. 2. Solve y = y 2 . Solution: It is easy to deduce that y = − 1 x +c , where c is a constant; is the required solution. Observe that the solution is defined, only if x +c = 0 for any x. For example, if we let y(0) = a, then y = − a ax −1 exists as long as ax −1 = 0. 7.2. SEPARABLE EQUATIONS 137 7.2.1 Equations Reducible to Separable Form There are many equations which are not of the form 7.2.1, but by a suitable substitution, they can be reduced to the separable form. One such class of equation is y = g 1 (x, y) g 2 (x, y) or equivalently y = g( y x ) where g 1 and g 2 are homogeneous functions of the same degree in x and y, and g is a continuous function. In this case, we use the substitution, y = xu(x) to get y = xu + u. Thus, the above equation after substitution becomes xu +u(x) = g(u), which is a separable equation in u. For illustration, we consider some examples. Example 7.2.2 1. Find the general solution of 2xyy −y 2 +x 2 = 0. Solution: Let I be any interval not containing 0. Then 2 y x y −( y x ) 2 + 1 = 0. Letting y = xu(x), we have 2u(u x +u) −u 2 + 1 = 0 or 2xuu +u 2 + 1 = 0 or equivalently 2u 1 +u 2 du dx = − 1 x . On integration, we get 1 +u 2 = c x or x 2 +y 2 −cx = 0. The general solution can be re-written in the form (x − c 2 ) 2 +y 2 = c 2 4 . This represents a family of circles with center ( c 2 c 2 . 2. Find the equation of the curve passing through (0, 1) and whose slope at each point (x, y) is − x 2y . Solution: If y is such a curve then we have dy dx = − x 2y and y(0) = 1. Notice that it is a separable equation and it is easy to verify that y satisfies x 2 + 2y 2 = 2. 3. 
The equations of the type dy dx = a 1 x +b 1 y +c 1 a 2 x +b 2 y +c 2 can also be solved by the above method by replacing x by x +h and y by y +k, where h and k are to be chosen such that a 1 h +b 1 k +c 1 = 0 = a 2 h +b 2 k +c 2 . This condition changes the given differential equation into dy dx = a 1 x +b 1 y a 2 x +b 2 y . Thus, if x = 0 then the equation reduces to the form y = g( y x ). Exercise 7.2.3 1. Find the general solutions of the following: 138 CHAPTER 7. DIFFERENTIAL EQUATIONS (a) dy dx = −x(ln x)(ln y). (b) y −1 cos −1 +(e x + 1) dy dx = 0. 2. Find the solution of (a) (2a 2 +r 2 ) = r 2 cos dr , r(0) = a. (b) xe x+y = dy dx , y(0) = 0. 3. Obtain the general solutions of the following: (a) ¦y −xcosec ( y x )¦ = x dy dx . (b) xy = y + x 2 +y 2 . (c) dy dx = x −y + 2 −x +y + 2 . 4. Solve y = y −y 2 and use it to determine lim x−→∞ y. This equation occurs in a model of population. 7.3 Exact Equations As remarked, there are no general methods to find a solution of (7.1.2). The Exact Equations is yet another class of equations that can be easily solved. In this section, we introduce this concept. Let D be a region in xy-plane and let M and N be real valued functions defined on D. Consider an equation M(x, y) +N(x, y) dy dx = 0, (x, y) ∈ D. (7.3.1) In most of the books on Differential Equations, this equation is also written as M(x, y)dx +N(x, y)dy = 0, (x, y) ∈ D. (7.3.2) Definition 7.3.1 (Exact Equation) The Equation (7.3.1) is called Exact if there exists a real valued twice continuously differentiable function f : R 2 −→R (or the domain is an open subset of R 2 ) such that ∂f ∂x = M and ∂f ∂y = N. (7.3.3) Remark 7.3.2 If (7.3.1) is exact, then ∂f ∂x + ∂f ∂y dy dx = df(x, y) dx = 0. This implies that f(x, y) = c (where c is a constant) is an implicit solution of (7.3.1). In other words, the left side of (7.3.1) is an exact differential. Example 7.3.3 The equation y +x dy dx = 0 is an exact equation. Observe that in this example, f(x, y) = xy. The proof of the next theorem is given in Appendix 15.6.2. Theorem 7.3.4 Let M and N be twice continuously differentiable function in a region D. The Equation (7.3.1) is exact if and only if ∂M ∂y = ∂N ∂x . (7.3.4) 7.3. EXACT EQUATIONS 139 Note: If (7.3.1) or (7.3.2) is exact, then there is a function f(x, y) satisfying f(x, y) = c for some constant c, such that d(f(x, y)) = M(x, y)dx +N(x, y)dy = 0. Let us consider some examples, where Theorem 7.3.4 can be used to easily find the general solution. Example 7.3.5 1. Solve 2xe y + (x 2 e y + cos y ) dy dx = 0. Solution: With the above notations, we have M = 2xe y , N = x 2 e y + cos y, ∂M ∂y = 2xe y and ∂N ∂x = 2xe y . Therefore, the given equation is exact. Hence, there exists a function G(x, y) such that ∂G ∂x = 2xe y and ∂G ∂y = x 2 e y + cos y. The first partial differentiation when integrated with respect to x (assuming y to be a constant) gives, G(x, y) = x 2 e y +h(y). But then ∂G ∂y = ∂(x 2 e y + h(y)) ∂y = N implies dh dy = cos y or h(y) = sin y + c where c is an arbitrary constant. Thus, the general solution of the given equation is x 2 e y + siny = c. The solution in this case is in implicit form. 2. Find values of and m such that the equation y 2 +mxy dy dx = 0 is exact. Also, find its general solution. Solution: In this example, we have M = y 2 , N = mxy, ∂M ∂y = 2y and ∂N ∂x = my. Hence for the given equation to be exact, m = 2. With this condition on and m, the equation reduces to y 2 + 2xy dy dx = 0. This equation is not meaningful if = 0. 
Thus, the above equation reduces to d dx (xy 2 ) = 0 whose solution is xy 2 = c for some arbitrary constant c. 140 CHAPTER 7. DIFFERENTIAL EQUATIONS 3. Solve the equation (3x 2 e y −x 2 )dx + (x 3 e y +y 2 )dy = 0. Solution: Here M = 3x 2 e y −x 2 and N = x 3 e y +y 2 . Hence, ∂M ∂y = ∂N ∂x = 3x 2 e y . Thus the given equation is exact. Therefore, G(x, y) = (3x 2 e y −x 2 )dx = x 3 e y x 3 3 +h(y) (keeping y as constant). To determine h(y), we partially differentiate G(x, y) with respect to y and compare with N to get h(y) = y 3 3 . Hence G(x, y) = x 3 e y x 3 3 + y 3 3 = c is the required implicit solution. 7.3.1 Integrating Factors On may occasions, M(x, y) +N(x, y) dy dx = 0, or equivalently M(x, y)dx +N(x, y)dy = 0 may not be exact. But the above equation may become exact, if we multiply it by a proper factor. For example, the equation ydx −dy = 0 is not exact. But, if we multiply it with e −x , then the equation reduces to e −x ydx −e −x dy = 0, or equivalently d e −x y = 0, an exact equation. Such a factor (in this case, e −x ) is called an integrating factor for the given equation. Formally Definition 7.3.6 (Integrating Factor) A function Q(x, y) is called an integrating factor for the (7.3.1), if the equation Q(x, y)M(x, y)dx +Q(x, y)N(x, y)dy = 0 is exact. Example 7.3.7 1. Solve the equation ydx −xdy = 0, x, y > 0. Solution: It can be easily verified that the given equation is not exact. Multiplying by 1 xy , the equation reduces to 1 xy ydx − 1 xy xdy = 0, or equivalently d (ln x −ln y) = 0. Thus, by definition, 1 xy is an integrating factor. Hence, a general solution of the given equation is G(x, y) = 1 xy = c, for some constant c ∈ R. That is, y = cx, for some constant c ∈ R. 7.3. EXACT EQUATIONS 141 2. Find a general solution of the differential equation 4y 2 + 3xy dx − 3xy + 2x 2 dy = 0. Solution: It can be easily verified that the given equation is not exact. Method 1: Here the terms M = 4y 2 + 3xy and N = −(3xy + 2x 2 ) are homogeneous functions of degree 2. It may be checked that an integrating factor for the given differential equation is 1 Mx +Ny = 1 xy x +y . Hence, we need to solve the partial differential equations ∂G(x, y) ∂x = y 4y + 3x xy x +y = 4 x 1 x +y and (7.3.5) ∂G(x, y) ∂y = −x(3y + 2x) xy x +y = − 2 y 1 x +y . (7.3.6) Integrating (keeping y constant) (7.3.5), we have G(x, y) = 4 ln[x[ −ln [x +y[ +h(y) (7.3.7) and integrating (keeping x constant) (7.3.6), we get G(x, y) = −2 ln[y[ −ln [x +y[ +g(x). (7.3.8) Comparing (7.3.7) and (7.3.8), the required solution is G(x, y) = 4 ln[x[ −ln [x +y[ −2 ln [y[ = ln c for some real constant c. Or equivalently, the solution is x 4 = c x +y y 2 . Method 2: Here the terms M = 4y 2 + 3xy and N = −(3xy + 2x 2 ) are polynomial in x and y. Therefore, we suppose that x α y β is an integrating factor for some α, β ∈ R. We try to find this α and β. Multiplying the terms M(x, y) and N(x, y) with x α y β , we get M(x, y) = x α y β 4y 2 + 3xy , and N(x, y) = −x α y β (3xy + 2x 2 ). For the new equation to be exact, we need ∂M(x, y) ∂y = ∂N(x, y) ∂x . That is, the terms 4(2 +β)x α y 1+β + 3(1 +β)x 1+α y β and −3(1 +α)x α y 1+β −2(2 +α)x 1+α y β must be equal. Solving for α and β, we get α = −5 and β = 1. That is, the expression y x 5 is also an integrating factor for the given differential equation. This integrating factor leads to G(x, y) = − y 3 x 4 y 2 x 3 +h(y) and G(x, y) = − y 3 x 4 y 2 x 3 +g(x). Thus, we need h(y) = g(x) = c, for some constant c ∈ R. Hence, the required solution by this method is y 2 y +x = cx 4 . 
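The exactness computations in Method 2 can be verified mechanically. The sketch below assumes sympy is available and simply re-checks the criterion of Theorem 7.3.4, first for the original equation and then after multiplying by the integrating factor $y/x^5$ obtained above.

```python
import sympy as sp

# A sketch (assuming sympy): re-check the exactness criterion dM/dy = dN/dx
# for the equation of this example, before and after multiplying by y/x**5.
x, y = sp.symbols('x y', positive=True)
M = 4*y**2 + 3*x*y
N = -(3*x*y + 2*x**2)

print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))      # 7*x + 11*y, so not exact
Q = y / x**5                                            # integrating factor found above
print(sp.simplify(sp.diff(Q*M, y) - sp.diff(Q*N, x)))   # 0, so the scaled equation is exact
```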
142 CHAPTER 7. DIFFERENTIAL EQUATIONS Remark 7.3.8 1. If (7.3.1) has a general solution, then it can be shown that (7.3.1) admits an integrating factor. 2. If (7.3.1) has an integrating factor, then it has many (in fact infinitely many) integrating factors. 3. Given (7.3.1), whether or not it has an integrating factor, is a tough question to settle. 4. In some cases, we use the following rules to find the integrating factors. (a) Consider a homogeneous equation M(x, y)dx +N(x, y)dy = 0. If Mx +Ny = 0, then 1 Mx +Ny is an Integrating Factor. (b) If the functions M(x, y) and N(x, y) are polynomial functions in x, y; then x α y β works as an integrating factor for some appropriate values of α and β. (c) The equation M(x, y)dx + N(x, y)dy = 0 has e R f(x)dx as an integrating factor, if f(x) = 1 N ∂M ∂y ∂N ∂x is a function of x alone. (d) The equation M(x, y)dx + N(x, y)dy = 0 has e R g(y)dy as an integrating factor, if g(y) = 1 M ∂M ∂y ∂N ∂x is a function of y alone. (e) For the equation yM 1 (xy)dx +xN 1 (xy)dy = 0 with Mx −Ny = 0, the function 1 Mx −Ny is an integrating factor. Exercise 7.3.9 1. Show that the following equations are exact and hence solve them. (a) (r + sinθ + cos θ) dr +r(cos θ −sin θ) = 0. (b) (e −x −ln y + y x ) + (− x y + ln x + cos y) dy dx = 0. 2. Find conditions on the function g(x, y) so that the equation (x 2 +xy 2 ) +¦ax 2 y 2 +g(x, y)¦ dy dx = 0 is exact. 3. What are the conditions on f(x), g(y), φ(x), and ψ(y) so that the equation (φ(x) +ψ(y)) + (f(x) +g(y)) dy dx = 0 is exact. 4. Verify that the following equations are not exact. Further find suitable integrating factors to solve them. (a) y + (x +x 3 y 2 ) dy dx = 0. (b) y 2 + (3xy +y 2 −1) dy dx = 0. (c) y + (x +x 3 y 2 ) dy dx = 0. 7.4. LINEAR EQUATIONS 143 (d) y 2 + (3xy +y 2 −1) dy dx = 0. 5. Find the solution of (a) (x 2 y + 2xy 2 ) + 2(x 3 + 3x 2 y) dy dx = 0 with y(1) = 0. (b) y(xy + 2x 2 y 2 ) +x(xy −x 2 y 2 ) dy dx = 0 with y(1) = 1. 7.4 Linear Equations Some times we might think of a subset or subclass of differential equations which admit explicit solutions. This question is pertinent when we say that there are no means to find the explicit solution of dy dx = f(x, y) where f is an arbitrary continuous function in (x, y) in suitable domain of definition. In this context, we have a class of equations, called Linear Equations (to be defined shortly) which admit explicit solutions. Definition 7.4.1 (Linear/Nonlinear Equations) Let p(x) and q(x) be real-valued piecewise continuous functions defined on interval I = [a, b]. The equation y +p(x)y = q(x), x ∈ I (7.4.1) is called a linear equation, where y stands for dy dx . The Equation (7.4.1) is called Linear non-homogeneous if q(x) = 0 and is called Linear homogeneous if q(x) = 0 on I. A first order equation is called a non-linear equation (in the independent variable) if it is neither a linear homogeneous nor a non-homogeneous linear equation. Example 7.4.2 1. The equation y = sin y is a non-linear equation. 2. The equation y +y = sin x is a linear non-homogeneous equation. 3. The equation y +x 2 y = 0 is a linear homogeneous equation. Define the indefinite integral P(x) = p(x)dx ( or x a p(s)ds). Multiplying (7.4.1) by e P(x) , we get e P(x) y +e P(x) p(x)y = e P(x) q(x) or equivalently d dx (e P(x) y) = e P(x) q(x). On integration, we get e P(x) y = c + e P(x) q(x)dx. In other words, y = ce −P(x) +e −P(x) e P(x) q(x)dx (7.4.2) where c is an arbitrary constant is the general solution of (7.4.1). 
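The formula (7.4.2) is easy to test on a concrete equation. The sketch below, which assumes sympy and uses an equation of our own choosing, y' + 2y = x, builds the solution through the integrating factor e^{P(x)} and compares it with sympy's dsolve.

    import sympy as sp

    x, c = sp.symbols('x c')
    yf = sp.Function('y')

    p, q = sp.Integer(2), x          # the illustrative equation y' + 2y = x

    P = sp.integrate(p, x)                                    # P(x) = 2x
    y_general = sp.exp(-P)*(c + sp.integrate(sp.exp(P)*q, x))
    print(sp.simplify(y_general))    # c*exp(-2*x) + x/2 - 1/4

    # dsolve should return an equivalent expression.
    print(sp.dsolve(sp.Eq(yf(x).diff(x) + p*yf(x), q)))

    # Verify that y_general satisfies the equation identically.
    print(sp.simplify(sp.diff(y_general, x) + p*y_general - q))   # 0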
Remark 7.4.3 If we let P(x) = x a p(s)ds in the above discussion, (7.4.2) also represents y = y(a)e −P(x) +e −P(x) x a e P(s) q(s)ds. (7.4.3) As a simple consequence, we have the following proposition. 144 CHAPTER 7. DIFFERENTIAL EQUATIONS Proposition 7.4.4 y = ce −P(x) (where c is any constant) is the general solution of the linear homogeneous equation y +p(x)y = 0. (7.4.4) In particular, when p(x) = k, is a constant, the general solution is y = ce −kx , with c an arbitrary constant. Example 7.4.5 1. Comparing the equation y = y with (7.4.1), we have p(x) = −1 and q(x) = 0. Hence, P(x) = (−1)dx = −x. Substituting for P(x) in (7.4.2), we get y = ce x as the required general solution. We can just use the second part of the above proposition to get the above result, as k = −1. 2. The general solution of xy = −y, x ∈ I (0 ∈ I) is y = cx −1 , where c is an arbitrary constant. Notice that no non-zero solution exists if we insist on the condition lim x→0,x>0 y = 0. A class of nonlinear Equations (7.4.1) (named after Bernoulli (1654−1705)) can be reduced to linear equation. These equations are of the type y +p(x)y = q(x)y a . (7.4.5) If a = 0 or a = 1, then (7.4.5) is a linear equation. Suppose that a = 0, 1. We then define u(x) = y 1−a and therefore u = (1 −a)y y −a = (1 −a)(q(x) −p(x)u) or equivalently u + (1 −a)p(x)u = (1 −a)q(x), (7.4.6) a linear equation. For illustration, consider the following example. Example 7.4.6 For m, n constants and m = 0, solve y −my +ny 2 = 0. Solution: Let u = y −1 . Then u(x) satisfies u +mu = n and its solution is u = Ae −mx +e −mx ne mx dx = Ae −mx + n m . Equivalently y = 1 Ae −mx + n m with m = 0 and A an arbitrary constant, is the general solution. Exercise 7.4.7 1. In Example 7.4.6, show that u +mu = n. 2. Find the genral solution of the following: (a) y +y = 4. (b) y −3y = 10. (c) y −2xy = 0. (d) y −xy = 4x. (e) y +y = e −x . 7.5. MISCELLANEOUS REMARKS 145 (f) sinh xy +y coshx = e x . (g) (x 2 + 1)y + 2xy = x 2 . 3. Solve the following IVP’s: (a) y −4y = 5, y(0) = 0. (b) y + (1 +x 2 )y = 3, y(0) = 0. (c) y +y = cos x, y(π) = 0. (d) y −y 2 = 1, y(0) = 0. (e) (1 +x)y +y = 2x 2 , y(1) = 1. 4. Let y 1 be a solution of y +a(x)y = b 1 (x) and y 2 be a solution of y +a(x)y = b 2 (x). Then show that y 1 +y 2 is a solution of y +a(x)y = b 1 (x) +b 2 (x). 5. Reduce the following to linear equations and hence solve: (a) y + 2y = y 2 . (b) (xy +x 3 e y )y = y 2 . (c) y sin(y) +xcos(y) = x. (d) y −y = xy 2 . 6. Find the solution of the IVP y + 4xy +xy 3 = 0, y(0) = 1 2 . 7.5 Miscellaneous Remarks In Section 7.4, we have learned to solve the linear equations. There are many other equations, though not linear, which are also amicable for solving. Below, we consider a few classes of equations which can be solved. In this section or in the sequel, p denotes dy dx or y . A word of caution is needed here. The method described below are more or less ad hoc methods. 1. Equations solvable for y: Consider an equation of the form y = f(x, p). (7.5.1) Differentiating with respect to x, we get dy dx = p = ∂f(x, p) ∂x + ∂f(x, p) ∂p dp dx of equivalently p = g(x, p, dp dx ). (7.5.2) The Equation (7.5.2) can be viewed as a differential equation in p and x. We now assume that (7.5.2) can be solved for p and its solution is h(x, p, c) = 0. (7.5.3) If we are able to eliminate p between (7.5.1) and (7.5.3), then we have an implicit solution of the (7.5.1). Solve y = 2px −xp 2 . 
Solution: Differentiating with respect to x and replacing dy dx by p, we get p = 2p −p 2 + 2x dp dx −2xp dp dx or (p + 2x dp dx )(1 −p) = 0. 146 CHAPTER 7. DIFFERENTIAL EQUATIONS So, either p + 2x dp dx = 0 or p = 1. That is, either p 2 x = c or p = 1. Eliminating p from the given equation leads to an explicit solution y = 2x c x −c or y = x. The first solution is a one-parameter family of solutions, giving us a general solution. The latter one is a solution but not a general solution since it is not a one parameter family of solutions. 2. Equations in which the independent variable x is missing: These are equations of the type f(y, p) = 0. If possible we solve for y and we proceed. Sometimes introducing an arbitrary parameter helps. We illustrate it below. Solve y 2 +p 2 = a 2 where a is a constant. Solution: We equivalently rewrite the given equation, by (arbitrarily) introducing a new param- eter t by y = a sin t, p = a cos t from which it follows dy dt = a cos t; p = dy dx = dy dt dx dt and so dx dt = 1 p dy dt = 1 or x = t +c. Therefore, a general solution is y = a sin(t +c). 3. Equations in which y (dependent variable or the unknown) is missing: We illustrate this case by an example. Find the general solution of x = p 3 −p −1. Solution: Recall that p = dy dx . Now, from the given equation, we have dy dp = dy dx dx dp = p(3p 2 −1). Therefore, y = 3 4 p 4 1 2 p 2 +c (regarding p as a parameter). The desired solution in this case is in the parametric form, given by x = t 3 −t −1 and y = 3 4 t 4 1 2 t 2 +c where c is an arbitrary constant. Remark 7.5.1 The readers are again informed that the methods discussed in 1), 2), 3) are more or less ad hoc methods. It may not work in all cases. Exercise 7.5.2 1. Find the general solution of y = (1 +p)x +p 2 . Hint: Differentiate with respect to x to get dx dp = −(x + 2p) ( a linear equation in x). Express the solution in the parametric form y(p) = (1 +p)x +p 2 , x(p) = 2(1 −p) +ce −p . 7.6. INITIAL VALUE PROBLEMS 147 2. Solve the following differential equations: (a) 8y = x 2 +p 2 . (b) y +xp = x 4 p 2 . (c) y 2 log y −p 2 = 2xyp. (d) 2y +p 2 + 2p = 2x(p + 1). (e) 2y = 2x 2 + 4px +p 2 . 7.6 Initial Value Problems As we had seen, there are no methods to solve a general equation of the form y = f(x, y) (7.6.1) and in this context two questions may be pertinent. 1. Does (7.6.1) admit solutions at all (i.e., the existence problem)? 2. Is there a method to find solutions of (7.6.1) in case the answer to the above question is in the affirmative? The answers to the above two questions are not simple. But there are partial answers if some additional restrictions on the function f are imposed. The details are discussed in this section. For a, b ∈ R with a > 0, b > 0, we define S = ¦(x, y) ∈ R 2 : [x −x 0 [ ≤ a, [y −y 0 [ ≤ b¦ ⊂ I R. Definition 7.6.1 (Initial Value Problems) Let f : S −→R be a continuous function on a S. The problem of finding a solution y of y = f(x, y), (x, y) ∈ S, x ∈ I with y(x 0 ) = y 0 (7.6.2) in a neighbourhood I of x 0 (or an open interval I containing x 0 ) is called an Initial Value Problem, henceforth denoted by IVP. The condition y(x 0 ) = y 0 in (7.6.2) is called the initial condition stated at x = x 0 and y 0 is called the initial value. Further, we assume that a and b are finite. Let M = max¦[f(x, y)[ : (x, y) ∈ S¦. Such an M exists since S is a closed and bounded set and f is a continuous function and let h = min(a, b M ). The ensuing proposition is simple and hence the proof is omitted. 
Proposition 7.6.2 A function y is a solution of IVP (7.6.2) if and only if y satisfies y = y 0 + x x0 f(s, y(s))ds. (7.6.3) In the absence of any knowledge of a solution of IVP (7.6.2), we now try to find an approximate solution. Any solution of the IVP (7.6.2) must satisfy the initial condition y(x 0 ) = y 0 . Hence, as a crude approximation to the solution of IVP (7.6.2), we define y 0 = y 0 for all x ∈ [x 0 −h, x 0 +h]. 148 CHAPTER 7. DIFFERENTIAL EQUATIONS Now the Equation (7.6.3) appearing in Proposition 7.6.2, helps us to refine or improve the approximate solution y 0 with a hope of getting a better approximate solution. We define y 1 = y o + x x0 f(s, y 0 )ds and for n = 2, 3, . . . , we inductively define y n = y 0 + x x0 f(s, y n−1 (s))ds for all x ∈ [x 0 −h, x 0 +h]. As yet we have not checked a few things, like whether the point (s, y n (s)) ∈ S or not. We formalise the theory in the latter part of this section. To get ourselves motivated, let us apply the above method to the following IVP. Example 7.6.3 Solve the IVP y = −y, y(0) = 1, −1 ≤ x ≤ 1. Solution: From Proposition 7.6.2, a function y is a solution of the above IVP if and only if y = 1 − x x0 y(s)ds. We have y 0 = y(0) ≡ 1 and y 1 = 1 − x 0 ds = 1 −x. So, y 2 = 1 − x 0 (1 −s)ds = 1 −x + x 2 2! . By induction, one can easily verify that y n = 1 −x + x 2 2! x 3 3! + + (−1) n x n n! . Note: The solution of the given IVP is y = e −x and that lim n−→∞ y n = e −x . This example justifies the use of the word approximate solution for the y n ’s. We now formalise the above procedure. Definition 7.6.4 (Picard’s Successive Approximations) Consider the IVP (7.6.2). For x ∈ I with [x − x 0 [ ≤ a, define inductively y 0 (x) = y 0 and for n = 1, 2, . . . , y n (x) = y 0 + x x0 f(s, y n−1 (s))ds. (7.6.4) Then y 0 , y 1 , . . . , y n , . . . are called Picard’s successive approximations to the IVP (7.6.2). Whether (7.6.4) is well defined or not is settled in the following proposition. Proposition 7.6.5 The Picard’s approximates y n ’s, for the IVP (7.6.2) defined by (7.6.4) is well defined on the interval [x −x 0 [ ≤ h = min¦a, b M ¦, i.e., for x ∈ [x 0 −h, x 0 +h]. 7.6. INITIAL VALUE PROBLEMS 149 Proof. We have to verify that for each n = 0, 1, 2, . . . , (s, y n ) belongs to the domain of definition of f for [s −x 0 [ ≤ h. This is needed due to the reason that f(s, y n ) appearing as integrand in (7.6.4) may not be defined. For n = 0, it is obvious that f(s, y 0 ) ∈ S as [s − x 0 [ ≤ a and [y 0 − y 0 [ = 0 ≤ b. For n = 1, we notice that, if [x −x 0 [ ≤ h then [y 1 −y 0 [ ≤ M[x −x 0 [ ≤ Mh ≤ b. So, (x, y 1 ) ∈ S whenever [x −x 0 [ ≤ h. The rest of the proof is by the method of induction. We have established the result for n = 1, namely (x, y 1 ) ∈ S if [x −x 0 [ ≤ h. Assume that for k = 1, 2, . . . , n−1, (x, y k ) ∈ S whenever [x−x 0 [ ≤ h. Now, by definition of y n , we have y n −y 0 = x x0 f(s, y n−1 )ds. But then by induction hypotheses (s, y n−1 ) ∈ S and hence [y n −y 0 [ ≤ M[x −x 0 [ ≤ Mh ≤ b. This shows that (x, y n ) ∈ S whenever [x −x 0 [ ≤ h. Hence (x, y k ) ∈ S for k = n holds and therefore the proof of the proposition is complete. Let us again come back to Example 7.6.3 in the light of Proposition 7.6.2. Example 7.6.6 Compute the successive approximations to the IVP y = −y, −1 ≤ x ≤ 1, [y −1[ ≤ 1 and y(0) = 1. (7.6.5) Solution: Note that x 0 = 0, y 0 = 1, f(x, y) = −y, and a = b = 1. The set S on which we are studying the differential equation is S = ¦(x, y) : [x[ ≤ 1, [y −1[ ≤ 1¦. 
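The successive approximations of Example 7.6.3 can also be generated symbolically. A minimal sketch, assuming sympy and using illustrative names, is given below; it simply iterates the integral relation (7.6.3).

    import sympy as sp

    x, s = sp.symbols('x s')

    # IVP of Example 7.6.3:  y' = -y, y(0) = 1, so f(x, y) = -y.
    f = lambda t, w: -w
    x0, y0 = 0, sp.Integer(1)

    yn = y0
    for n in range(1, 6):
        yn = y0 + sp.integrate(f(s, yn.subs(x, s)), (s, x0, x))
        print(n, sp.expand(yn))

    # The iterates 1 - x, 1 - x + x**2/2, ... are the partial sums of the
    # Taylor series of exp(-x), in line with the text.
    print(sp.series(sp.exp(-x), x, 0, 6))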
By Proposition 7.6.2, on this set M = max¦[y[ : (x, y) ∈ S¦ = 2 and h = min¦1, 1/2¦ = 1/2. Therefore, the approximate solutions y n ’s are defined only for the interval [− 1 2 , 1 2 ], if we use Proposition 7.6.2. Observe that the exact solution y = e −x and the approximate solutions y n ’s of Example 7.6.3 exist on [−1, 1]. But the approximate solutions as seen above are defined in the interval [− 1 2 , 1 2 ]. That is, for any IVP, the approximate solutions y n ’s may exist on a larger interval as compared to the interval obtained by the application of the Proposition 7.6.2. We now consider another example. Example 7.6.7 Find the Picard’s successive approximations for the IVP y = f(y), 0 ≤ x ≤ 1, y ≥ 0 and y(0) = 0; (7.6.6) where f(y) = y for y ≥ 0. 150 CHAPTER 7. DIFFERENTIAL EQUATIONS Solution: By definition y 0 (x) = y 0 ≡ 0 and y 1 (x) = y 0 + x 0 f(y 0 )ds = 0 + x 0 0ds = 0. A similar argument implies that y n (x) ≡ 0 for all n = 2, 3, . . . and lim n−→∞ y n (x) ≡ 0. Also, it can be easily verified that y(x) ≡ 0 is a solution of the IVP (7.6.6). Also y(x) = x 2 4 , 0 ≤ x ≤ 1 is a solution of (7.6.6) and the ¦y n ¦’s do not converge to x 2 4 . Note here that the IVP (7.6.6) has at least two solutions. The following result is about the existence of a unique solution to a class of IVPs. We state the theorem without proof. Theorem 7.6.8 (Picard’s Theorem on Existence and Uniqueness) Let S = ¦(x, y) : [x −x 0 [ ≤ a, [y − y 0 [ ≤ b¦, and a, b > 0. Let f : S−→R be such that f as well as ∂f ∂y are continuous on S. Also, let M, K ∈ R be constants such that [f[ ≤ M, [ ∂f ∂y [ ≤ K on S. Let h = min¦a, b/M¦. Then the sequence of successive approximations ¦y n ¦ (defined by (7.6.4)) for the IVP (7.6.2) uniformly converges on [x −x 0 [ ≤ h to a solution of IVP (7.6.2). Moreover the solution to IVP (7.6.2) is unique. Remark 7.6.9 The theorem asserts the existence of a unique solution on a subinterval [x −x 0 [ ≤ h of the given interval [x − x 0 [ ≤ a. In a way it is in a neighbourhood of x 0 and so this result is also called the local existence of a unique solution. A natural question is whether the solution exists on the whole of the interval [x −x 0 [ ≤ a. The answer to this question is beyond the scope of this book. Whenever we talk of the Picard’s theorem, we mean it in this local sense. Exercise 7.6.10 1. Compute the sequence ¦y n ¦ of the successive approximations to the IVP y = y (y −1), y(x 0 ) = 0, x 0 ≥ 0. 2. Show that the solution of the IVP y = y (y −1), y(x 0 ) = 1, x 0 ≥ 0 is y ≡ 1, x ≥ x 0 . 3. The IVP y = y, y(0) = 0, x ≥ 0 has solutions y 1 ≡ 0 as well as y 2 = x 2 4 , x ≥ 0. Why does the existence of the two solutions not 4. Consider the IVP y = y, y(0) = 1 in ¦(x, y) : [x[ ≤ a, [y[ ≤ b¦ for any a, b > 0. (a) Compute the interval of existence of the solution of the IVP by using Theorem 7.6.8. (b) Show that y = e x is the solution of the IVP which exists on whole of R. This again shows that the solution to an IVP may exist on a larger interval than what is being implied by Theorem 7.6.8. 7.6. INITIAL VALUE PROBLEMS 151 7.6.1 Orthogonal Trajectories One among the many applications of differential equations is to find curves that intersect a given family of curves at right angles. In other words, given a family F, of curves, we wish to find curve (or curves) Γ which intersect orthogonally with any member of F (whenever they intersect). 
It is important to note that we are not insisting that Γ should intersect every member of F, but if they intersect, the angle between their tangents, at every point of intersection, is 90 . Such a family of curves Γ is called “orthogonal trajectories” of the family F. That is, at the common point of intersection, the tangents are orthogonal. In case, the family F 1 and F 2 are identical, we say that the family is self-orthogonal. Before procedding to an example, let us note that at the common point of intersection, the product of the slopes of the tangent is −1. In order to find the orthogonal trajectories of a family of curves F, parametrized by a constant c, we eliminate c between y and y . This gives the slope at any point (x, y) and is independent of the choice of the curve. Below, we illustrate, how to obtain the orthogonal trajectories. Example 7.6.11 Compute the orthogonal trajectories of the family F of curves given by F : y 2 = cx 3 , (7.6.7) where c is an arbitrary constant. Solution: Differentiating (7.6.7), we get 2yy = 3cx 2 . (7.6.8) Elimination of c between (7.6.7) and (7.6.8), leads to y = 3cx 2 2y = 3 2x cx 3 y = 3y 2x . (7.6.9) At the point (x, y), if any curve intersects orthogonally, then (if its slope is y ) we must have y = − 2x 3y . Solving this differential equation, we get y 2 = − x 2 3 +c. Or equivalently, y 2 + x 2 3 = c is a family of curves which intersects the given family F orthogonally. Below, we summarize how to determine the orthogonal trajectories. Step 1: Given the family F(x, y, c) = 0, determine the differential equation, y = f(x, y), (7.6.10) for which the given family F are a general solution. The Equation (7.6.10) is obtained by the elimination of the constant c appearing in F(x, y, c) = 0 “using the equation obtained by differentiating this equation with respect to x”. Step 2: The differential equation for the orthogonal trajectories is then given by y = − 1 f(x, y) . (7.6.11) Final Step: The general solution of (7.6.11) is the orthogonal trajectories of the given family. In the following, let us go through the steps. 152 CHAPTER 7. DIFFERENTIAL EQUATIONS Example 7.6.12 Find the orthogonal trajectories of the family of stright lines y = mx + 1, (7.6.12) where m is a real parameter. Solution: Differentiating (7.6.12), we get y = m. So, substituting m in (7.6.12), we have y = y x + 1. Or equivalently, y = y −1 x . So, by the final step, the orthogonal trajectories satisfy the differential equation y = x 1 −y . (7.6.13) It can be easily verified that the general solution of (7.6.13) is x 2 +y 2 −2y = c, (7.6.14) where c is an arbitrary constant. In other words, the orthogonal trajectories of the family of straight lines (7.6.12) is the family of circles given by (7.6.14). Exercise 7.6.13 1. Find the orthogonal trajectories of the following family of curves (the constant c appearing below is an arbitrary constant). (a) y = x + c. (b) x 2 +y 2 = c. (c) y 2 = x +c. (d) y = cx 2 . (e) x 2 −y 2 = c. 2. Show that the one parameter family of curves y 2 = 4k(k +x), k ∈ R are self orthogonal. 3. Find the orthogonal trajectories of the family of circles passing through the points (1, −2) and (1, 2). 7.7 Numerical Methods All said and done, the Picard’s Successive approximations is not suitable for computations on computers. In the absence of methods for closed form solution (in the explicit form), we wish to explore “how computers can be used to find approximate solutions of IVP” of the form y = f(x, y), y(x 0 ) = y 0 . 
(7.7.1) In this section, we study a simple method to find the “numerical solutions” of (7.7.1). The study of dif- ferential equations has two important aspects (among other features) namely, the qualitative theory, the latter is called ”Numerical methods” for solving (7.7.1). What is presented here is at a very rudimentary level nevertheless it gives a flavour of the numerical method. To proceed further, we assume that f is a “good function” (there by meaning “sufficiently differen- tiable”). In such case, we have y(x +h) = y +hy + h 2 2! y + 7.7. NUMERICAL METHODS 153 x n = x x 0 x 1 x 2 Figure 7.1: Partitioning the interval which suggests a “crude” approximation y(x + h) · y + hf(x, y) (if h is small enough), the symbol · means “approximately equal to”. With this in mind, let us think of finding y, where y is the solution of (7.7.1) with x > x 0 . Let h = x −x 0 n and define x i = x 0 +ih, i = 0, 1, 2, . . . , n. That is, we have divided the interval [x 0 , x] into n equal intervals with end points x 0 , x 1 , . . . , x = x n . Our aim is to calculate y : At the first step, we have y(x + h) · y 0 + hf x 0 , y 0 . Define y 1 = y 0 +hf(x 0 , y 0 ). Error at first step is [y(x 0 +h) −y 1 [ = E 1 . Similarly, we define y 2 = y 1 +hf(x 1 , y 1 ) and we approximate y(x 0 +2h) = y(x 2 ) · y 1 +hf(x 1 , y 1 ) = y 2 and so on. In general, by letting y k = y k−1 +hf(x k−1 , y k−1 ), we define (inductively) y(x 0 + (k + 1)h) = y k+1 · y k +hf(x k , y k ), k = 0, 1, 2, . . . , n −1. This method of calculation of y 1 , y 2 , . . . , y n is called the Euler’s method. The approximate solution of (7.7.1) is obtained by “linear elements” joining (x 0 , y 0 ), (x 1 , y 1 ), . . . , (x n , y n ). x x x x x 4 3 2 1 0 x x n−1 n 1 2 n−1 n y y y y y 0 y 3 Figure 7.2: Approximate Solution 154 CHAPTER 7. DIFFERENTIAL EQUATIONS Chapter 8 Second Order and Higher Order Equations 8.1 Introduction Second order and higher order equations occur frequently in science and engineering (like pendulum problem etc.) and hence has its own importance. It has its own flavour also. We devote this section for an elementary introduction. Definition 8.1.1 (Second Order Linear Differential Equation) The equation p(x)y +q(x)y +r(x)y = c(x), x ∈ I (8.1.1) is called a second order linear differential equation. Here I is an interval contained in R; and the functions p(), q(), r(), and c() are real valued continuous functions defined on R. The functions p(), q(), and r() are called the coefficients of Equation (8.1.1) and c(x) is called the non-homogeneous term or the force function. Equation (8.1.1) is called linear homogeneous if c(x) ≡ 0 and non-homogeneous if c(x) = 0. Recall that a second order equation is called nonlinear if it is not linear. Example 8.1.2 1. The equation y + 9 sin y = 0 is a second order equation which is nonlinear. 2. y −y = 0 is an example of a linear second order equation. 3. y +y +y = sin x is a non-homogeneous linear second order equation. 4. ax 2 y + bxy + cy = 0 c = 0 is a homogeneous second order linear equation. This equation is called Euler Equation of order 2. Here a, b, and c are real constants. Definition 8.1.3 A function y defined on I is called a solution of Equation (8.1.1) if y is twice differentiable and satisfies Equation (8.1.1). Example 8.1.4 1. e x and e −x are solutions of y −y = 0. 2. sinx and cos x are solutions of y +y = 0. 155 156 CHAPTER 8. SECOND ORDER AND HIGHER ORDER EQUATIONS We now state an important theorem whose proof is simple and is omitted. 
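As a computational aside to Section 7.7, the Euler scheme described there takes only a few lines of code. The sketch below is illustrative (plain Python, function names of our own choosing) and applies the scheme to the IVP y' = -y, y(0) = 1, whose exact solution e^{-x} was obtained earlier, so the error of the polygonal approximation can be seen directly.

    import math

    def euler(f, x0, y0, x_end, n):
        # Return the nodes x_k and the Euler values y_k on [x0, x_end].
        h = (x_end - x0) / n
        xs, ys = [x0], [y0]
        for k in range(n):
            ys.append(ys[-1] + h*f(xs[-1], ys[-1]))   # y_{k+1} = y_k + h f(x_k, y_k)
            xs.append(xs[-1] + h)
        return xs, ys

    xs, ys = euler(lambda x, y: -y, 0.0, 1.0, 1.0, 10)
    for xk, yk in zip(xs, ys):
        print(f"x = {xk:.1f}   euler = {yk:.6f}   exact = {math.exp(-xk):.6f}")

Halving the step size roughly halves the error, which is the expected behaviour of a first-order method.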
Theorem 8.1.5 (Superposition Principle) Let y 1 and y 2 be two given solutions of p(x)y +q(x)y +r(x)y = 0, x ∈ I. (8.1.2) Then for any two real number c 1 , c 2 , the function c 1 y 1 +c 2 y 2 is also a solution of Equation (8.1.2). It is to be noted here that Theorem 8.1.5 is not an existence theorem. That is, it does not assert the existence of a solution of Equation (8.1.2). Definition 8.1.6 (Solution Space) The set of solutions of a differential equation is called the solution space. For example, all the solutions of the Equation (8.1.2) form a solution space. Note that y(x) ≡ 0 is also a solution of Equation (8.1.2). Therefore, the solution set of a Equation (8.1.2) is non-empty. A moments reflection on Theorem 8.1.5 tells us that the solution space of Equation (8.1.2) forms a real vector space. Remark 8.1.7 The above statements also hold for any homogeneous linear differential equation. That is, the solution space of a homogeneous linear differential equation is a real vector space. The natural question is to inquire about its dimension. This question will be answered in a sequence of results stated below. We first recall the definition of Linear Dependence and Independence. Definition 8.1.8 (Linear Dependence and Linear Independence) Let I be an interval in R and let f, g : I −→R be continuous functions. we say that f, g are said to be linearly dependent if there are real numbers a and b (not both zero) such that af(t) +bg(t) = 0 for all t ∈ I. The functions f(), g() are said to be linearly independent if f(), g() are not linear dependent. To proceed further and to simplify matters, we assume that p(x) ≡ 1 in Equation (8.1.2) and that the function q(x) and r(x) are continuous on I. In other words, we consider a homogeneous linear equation y +q(x)y +r(x)y = 0, x ∈ I, (8.1.3) where q and r are real valued continuous functions defined on I. The next theorem, given without proof, deals with the existence and uniqueness of solutions of Equation (8.1.3) with initial conditions y(x 0 ) = A, y (x 0 ) = B for some x 0 ∈ I. Theorem 8.1.9 (Picard’s Theorem on Existence and Uniqueness) Consider the Equation (8.1.3) along with the conditions y(x 0 ) = A, y (x 0 ) = B, for some x 0 ∈ I (8.1.4) where A and B are prescribed real constants. Then Equation (8.1.3), with initial conditions given by Equation (8.1.4) has a unique solution on I. A word of Caution: Note that the coefficient of y in Equation (8.1.3) is 1. Before we apply Theorem 8.1.9, we have to ensure this condition. An important application of Theorem 8.1.9 is that the equation (8.1.3) has exactly 2 linearly inde- pendent solutions. In other words, the set of all solutions over R, forms a real vector space of dimension 2. 8.1. INTRODUCTION 157 Theorem 8.1.10 Let q and r be real valued continuous functions on I. Then Equation (8.1.3) has exactly two linearly independent solutions. Moreover, if y 1 and y 2 are two linearly independent solutions of Equation (8.1.3), then the solution space is a linear combination of y 1 and y 2 . Proof. Let y 1 and y 2 be two unique solutions of Equation (8.1.3) with initial conditions y 1 (x 0 ) = 1, y 1 (x 0 ) = 0, and y 2 (x 0 ) = 0, y 2 (x 0 ) = 1 for some x 0 ∈ I. (8.1.5) The unique solutions y 1 and y 2 exist by virtue of Theorem 8.1.9. We now claim that y 1 and y 2 are linearly independent. Consider the system of linear equations αy 1 (x) +βy 2 (x) = 0, (8.1.6) where α and β are unknowns. 
If we can show that the only solution for the system (8.1.6) is α = β = 0, then the two solutions y 1 and y 2 will be linearly independent. Use initial condition on y 1 and y 2 to show that the only solution is indeed α = β = 0. Hence the result follows. We now show that any solution of Equation (8.1.3) is a linear combination of y 1 and y 2 . Let ζ be any solution of Equation (8.1.3) and let d 1 = ζ(x 0 ) and d 2 = ζ (x 0 ). Consider the function φ defined by φ(x) = d 1 y 1 (x) +d 2 y 2 (x). By Definition 8.1.3, φ is a solution of Equation (8.1.3). Also note that φ(x 0 ) = d 1 and φ (x 0 ) = d 2 . So, φ and ζ are two solution of Equation (8.1.3) with the same initial conditions. Hence by Picard’s Theorem on Existence and Uniqueness (see Theorem 8.1.9), φ(x) ≡ ζ(x) or ζ(x) = d 1 y 1 (x) +d 2 y 2 (x). Thus, the equation (8.1.3) has two linearly independent solutions. Remark 8.1.11 1. Observe that the solution space of Equation (8.1.3) forms a real vector space of dimension 2. 2. The solutions y 1 and y 2 corresponding to the initial conditions y 1 (x 0 ) = 1, y 1 (x 0 ) = 0, and y 2 (x 0 ) = 0, y 2 (x 0 ) = 1 for some x 0 ∈ I, are called a fundamental system of solutions for Equation (8.1.3). 3. Note that the fundamental system for Equation (8.1.3) is not unique. Consider a 22 non-singular matrix A = ¸ a b c d ¸ with a, b, c, d ∈ R. Let ¦y 1 , y 2 ¦ be a fundamental system for the differential Equation 8.1.3 and y t = [y 1 , y 2 ]. Then the rows of the matrix Ay = ¸ ay 1 +by 2 cy 1 + dy 2 ¸ also form a fundamental system for Equation 8.1.3. That is, if ¦y 1 , y 2 ¦ is a fundamental system for Equation 8.1.3 then ¦ay 1 + by 2 , cy 1 + dy 2 ¦ is also a fundamental system whenever ad −bc = det(A) = 0. Example 8.1.12 ¦1, x¦ is a fundamental system for y = 0. Note that ¦1 −x, 1 +x¦ is also a fundamental system. Here the matrix is ¸ 1 −1 1 1 ¸ . Exercise 8.1.13 1. State whether the following equations are second-order linear or second- order non-linear equaitons. 158 CHAPTER 8. SECOND ORDER AND HIGHER ORDER EQUATIONS (a) y +y sin x = 5. (b) y + (y ) 2 +y sinx = 0. (c) y +yy = −2. (d) (x 2 + 1)y + (x 2 + 1) 2 y −5y = sin x. 2. By showing that y 1 = e x and y 2 = e −x are solutions of y −y = 0 conclude that sinh x and coshx are also solutions of y − y = 0. Do sinh x and coshx form a fundamental set of solutions? 3. Given that ¦sinx, cos x¦ forms a basis for the solution space of y +y = 0, find another basis. 8.2 More on Second Order Equations In this section, we wish to study some more properties of second order equations which have nice applications. They also have natural generalisations to higher order equations. Definition 8.2.1 (General Solution) Let y 1 and y 2 be a fundamental system of solutions for y +q(x)y +r(x)y = 0, x ∈ I. (8.2.1) The general solution y of Equation (8.2.1) is defined by y = c 1 y 1 +c 2 y 2 , x ∈ I where c 1 and c 2 are arbitrary real constants. Note that y is also a solution of Equation (8.2.1). In other words, the general solution of Equation (8.2.1) is a 2-parameter family of solutions, the parameters being c 1 and c 2 . 8.2.1 Wronskian In this subsection, we discuss the linear independence or dependence of two solutions of Equation (8.2.1). Definition 8.2.2 (Wronskian) Let y 1 and y 2 be two real valued continuously differentiable function on an interval I ⊂ R. For x ∈ I, define W(y 1 , y 2 ) := y 1 y 1 y 2 y 2 = y 1 y 2 −y 1 y 2 . W is called the Wronskian of y 1 and y 2 . Example 8.2.3 1. Let y 1 = cos x and y 2 = sin x, x ∈ I ⊂ R. 
Then W(y 1 , y 2 ) = sin x cos x cos x −sinx ≡ −1 for all x ∈ I. (8.2.2) Hence ¦cos x, sin x¦ is a linearly independent set. 8.2. MORE ON SECOND ORDER EQUATIONS 159 2. Let y 1 = x 2 [x[, and y 2 = x 3 for x ∈ (−1, 1). Let us now compute y 1 and y 2 . From analysis, we know that y 1 is differentiable at x = 0 and y 1 (x) = −3x 2 if x < 0 and y 1 (x) = 3x 2 if x ≥ 0. Therefore, for x ≥ 0, W(y 1 , y 2 ) = y 1 y 1 y 2 y 2 = x 3 3x 2 x 3 3x 2 = 0 and for x < 0, W(y 1 , y 2 ) = y 1 y 1 y 2 y 2 = −x 3 −3x 2 x 3 3x 2 = 0. That is, for all x ∈ (−1, 1), W(y 1 , y 2 ) = 0. It is also easy to note that y 1 , y 2 are linearly independent on (−1, 1). In fact,they are linearly independent on any interval (a, b) containing 0. Given two solutions y 1 and y 2 of Equation (8.2.1), we have a characterisation for y 1 and y 2 to be linearly independent. Theorem 8.2.4 Let I ⊂ R be an interval. Let y 1 and y 2 be two solutions of Equation (8.2.1). Fix a point x 0 ∈ I. Then for any x ∈ I, W(y 1 , y 2 ) = W(y 1 , y 2 )(x 0 ) exp(− x x0 q(s)ds). (8.2.3) Consequently, W(y 1 , y 2 )(x 0 ) = 0 ⇐⇒W(y 1 , y 2 ) = 0 for all x ∈ I. Proof. First note that, for any x ∈ I, W(y 1 , y 2 ) = y 1 y 2 −y 1 y 2 . So d dx W(y 1 , y 2 ) = y 1 y 2 −y 1 y 2 (8.2.4) = y 1 (−q(x)y 2 − r(x)y 2 ) −(−q(x)y 1 −r(x)y 1 ) y 2 (8.2.5) = q(x) y 1 y 2 −y 1 y 2 (8.2.6) = −q(x)W(y 1 , y 2 ). (8.2.7) So, we have W(y 1 , y 2 ) = W(y 1 , y 2 )(x 0 ) exp x x0 q(s)ds . This completes the proof of the first part. The second part follows the moment we note that the exponential function does not vanish. Alter- natively, W(y 1 , y 2 ) satisfies a first order linear homogeneous equation and therefore W(y 1 , y 2 ) ≡ 0 if and only if W(y 1 , y 2 )(x 0 ) = 0. Remark 8.2.5 1. If the Wronskian W(y 1 , y 2 ) of two solutions y 1 , y 2 of (8.2.1) vanish at a point x 0 ∈ I, then W(y 1 , y 2 ) is identically zero on I. 160 CHAPTER 8. SECOND ORDER AND HIGHER ORDER EQUATIONS 2. If any two solutions y 1 , y 2 of Equation (8.2.1) are linearly dependent (on I), then W(y 1 , y 2 ) ≡ 0 on I. Theorem 8.2.6 Let y 1 and y 2 be any two solutions of Equation (8.2.1). Let x 0 ∈ I be arbitrary. Then y 1 and y 2 are linearly independent on I if and only if W(y 1 , y 2 )(x 0 ) = 0. Proof. Let y 1 , y 2 be linearly independent on I. To show: W(y 1 , y 2 )(x 0 ) = 0. Suppose not. Then W(y 1 , y 2 )(x 0 ) = 0. So, by Theorem 2.5.1 the equations c 1 y 1 (x 0 ) +c 2 y 2 (x 0 ) = 0 and c 1 y 1 (x 0 ) +c 2 y 2 (x 0 ) = 0 (8.2.8) 1 , d 2 . (as 0 = W(y 1 , y 2 )(x 0 ) = y 1 (x 0 )y 2 (x 0 ) −y 1 (x 0 )y 2 (x 0 ).) Let y = d 1 y 1 +d 2 y 2 . Note that Equation (8.2.8) now implies y(x 0 ) = 0 and y (x 0 ) = 0. Therefore, by Picard’s Theorem on existence and uniqueness of solutions (see Theorem 8.1.9), the solu- tion y ≡ 0 on I. That is, d 1 y 1 + d 2 y 2 ≡ 0 for all x ∈ I with [d 1 [ + [d 2 [ = 0. That is, y 1 , y 2 is linearly dependent on I. A contradiction. Therefore, W(y 1 , y 2 )(x 0 ) = 0. This proves the first part. Suppose that W(y 1 , y 2 )(x 0 ) = 0 for some x 0 ∈ I. Therefore, by Theorem 8.2.4, W(y 1 , y 2 ) = 0 for all x ∈ I. Suppose that c 1 y 1 (x) + c 2 y 2 (x) = 0 for all x ∈ I. Therefore, c 1 y 1 (x) + c 2 y 2 (x) = 0 for all x ∈ I. Since x 0 ∈ I, in particular, we consider the linear system of equations c 1 y 1 (x 0 ) +c 2 y 2 (x 0 ) = 0 and c 1 y 1 (x 0 ) +c 2 y 2 (x 0 ) = 0. (8.2.9) But then by using Theorem 2.5.1 and the condition W(y 1 , y 2 )(x 0 ) = 0, the only solution of the linear system (8.2.9) is c 1 = c 2 = 0. 
So, by Definition 8.1.8, y 1 , y 2 are linearly independent. Remark 8.2.7 Recall the following from Example 2: 1. The interval I = (−1, 1). 2. y 1 = x 2 [x[, y 2 = x 3 and W(y 1 , y 2 ) ≡ 0 for all x ∈ I. 3. The functions y 1 and y 2 are linearly independent. This example tells us that Theorem 8.2.6 may not hold if y 1 and y 2 are not solutions of Equation (8.2.1) but are just some arbitrary functions on (−1, 1). The following corollary is a consequence of Theorem 8.2.6. Corollary 8.2.8 Let y 1 , y 2 be two linearly independent solutions of Equation (8.2.1). Let y be any solution of Equation (8.2.1). Then there exist unique real numbers d 1 , d 2 such that y = d 1 y 1 +d 2 y 2 on I. Proof. Let x 0 ∈ I. Let y(x 0 ) = a, y (x 0 ) = b. Here a and b are known since the solution y is given. Also for any x 0 ∈ I, by Theorem 8.2.6, W(y 1 , y 2 )(x 0 ) = 0 as y 1 , y 2 are linearly independent solutions of Equation (8.2.1). Therefore by Theorem 2.5.1, the system of linear equations c 1 y 1 (x 0 ) +c 2 y 2 (x 0 ) = a and c 1 y 1 (x 0 ) +c 2 y 2 (x 0 ) = b (8.2.10) has a unique solution d 1 , d 2 . Define ζ(x) = d 1 y 1 + d 2 y 2 for x ∈ I. Note that ζ is a solution of Equation (8.2.1) with ζ(x 0 ) = a and ζ (x 0 ) = b. Hence, by Picard’s Theorem on existence and uniqueness (see Theorem 8.1.9), ζ = y for all x ∈ I. That is, y = d 1 y 1 +d 2 y 2 . 8.2. MORE ON SECOND ORDER EQUATIONS 161 Exercise 8.2.9 1. Let y 1 and y 2 be any two linearly independent solutions of y + a(x)y = 0. Find W(y 1 , y 2 ). 2. Let y 1 and y 2 be any two linearly independent solutions of y +a(x)y + b(x)y = 0, x ∈ I. Show that y 1 and y 2 cannot vanish at any x = x 0 ∈ I. 3. Show that there is no equation of the type y +a(x)y +b(x)y = 0, x ∈ [0, 2π] 1 = sin x and y 2 = x −π as its solutions; where a(x) and b(x) are any continuous functions on [0, 2π]. [Hint: Use Exercise 8.2.9.2.] 8.2.2 Method of Reduction of Order We are going to show that in order to find a fundamental system for Equation (8.2.1), it is sufficient to have the knowledge of a solution of Equation (8.2.1). In other words, if we know one (non-zero) solution y 1 of Equation (8.2.1), then we can determine a solution y 2 of Equation (8.2.1), so that ¦y 1 , y 2 ¦ forms a fundamental system for Equation (8.2.1). The method is described below and is usually called the method of reduction of order. Let y 1 be an every where non-zero solution of Equation (8.2.1). Assume that y 2 = u(x)y 1 is a solution of Equation (8.2.1), where u is to be determined. Substituting y 2 in Equation (8.2.1), we have (after a bit of simplification) u y 1 +u (2y 1 +py 1 ) +u(y 1 +py 1 +qy 1 ) = 0. By letting u = v, and observing that y 1 is a solution of Equation (8.2.1), we have v y 1 +v(2y 1 +py 1 ) = 0 which is same as d dx (vy 2 1 ) = −p(vy 2 1 ). This is a linear equation of order one (hence the name, reduction of order) in v whose solution is vy 2 1 = exp(− x x0 p(s)ds), x 0 ∈ I. Substituting v = u and integrating we get u = x x0 1 y 2 1 (s) exp(− s x0 p(t)dt)ds, x 0 ∈ I and hence a second solution of Equation (8.2.1) is y 2 = y 1 x x0 1 y 2 1 (s) exp(− s x0 p(t)dt)ds. It is left as an exercise to show that y 1 , y 2 are linearly independent. That is, ¦y 1 , y 2 ¦ form a funda- mental system for Equation (8.2.1). We illustrate the method by an example. 162 CHAPTER 8. 
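Before the worked example, the reduction-of-order formula itself can be checked symbolically on a simpler equation of our own choosing, y'' - y = 0 with the known solution y1 = e^x (so q(x) = 0 in the standard form). The sketch assumes sympy.

    import sympy as sp

    x, s = sp.symbols('x s')

    # Illustrative check: y'' - y = 0 with y1 = e**s, q(s) = 0.
    q = sp.Integer(0)
    y1 = sp.exp(s)

    u = sp.integrate(sp.exp(-sp.integrate(q, s)) / y1**2, (s, 0, x))
    y2 = sp.exp(x)*u
    print(sp.simplify(y2))     # (e**x - e**(-x))/2, i.e. sinh(x)

    # y2 solves the equation, and {e**x, y2} is a fundamental system:
    print(sp.simplify(sp.diff(y2, x, 2) - y2))                   # 0
    # Wronskian W = y1*y2' - y1'*y2, with y1 = y1' = e**x here.
    W = sp.simplify(sp.exp(x)*sp.diff(y2, x) - sp.exp(x)*y2)
    print(W)                                                     # 1 (nonzero)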
SECOND ORDER AND HIGHER ORDER EQUATIONS Example 8.2.10 Given that e y 1 = 1 x , x ≥ 1 is a solution of x 2 y + 4xy + 2y = 0, (8.2.11) determine another solution y 2 of (8.2.11), such that the solutions y 1 , y 2 , for x ≥ 1 are linearly independent. Solution: With the notations used above, note that x 0 = 1, p(x) = 4 x , and y 2 (x) = u(x)y 1 (x), where u is given by u = x 1 1 y 2 1 (s) exp s 1 p(t)dt ds = x 1 1 y 2 1 (s) exp ln(s 4 ) ds = x 1 s 2 s 4 ds = 1 − 1 x ; where A and B are constants. So, y 2 (x) = 1 x 1 x 2 . Since the term 1 x 1 , we can take y 2 = 1 x 2 . So, 1 x and 1 x 2 are the required two linearly independent solutions of (8.2.11). Exercise 8.2.11 In the following, use the given solution y 1 , to find another solution y 2 so that the two solutions y 1 and y 2 are linearly independent. 1. y = 0, y 1 = 1, x ≥ 0. 2. y + 2y +y = 0, y 1 = e x , x ≥ 0. 3. x 2 y −xy +y = 0, y 1 = x, x ≥ 1. 4. xy +y = 0, y 1 = 1, x ≥ 1. 5. y +xy −y = 0, y 1 = x, x ≥ 1. 8.3 Second Order equations with Constant Coefficients Definition 8.3.1 Let a and b be constant real numbers. An equation y +ay +by = 0 (8.3.1) is called a second order homogeneous linear equation with constant coefficients. Let us assume that y = e λx to be a solution of Equation (8.3.1) (where λ is a constant, and is to be determined). To simplify the matter, we denote L(y) = y +ay +by and p(λ) = λ 2 +aλ +b. It is easy to note that L(e λx ) = p(λ)e λx . Now, it is clear that e λx is a solution of Equation (8.3.1) if and only if p(λ) = 0. (8.3.2) 8.3. SECOND ORDER EQUATIONS WITH CONSTANT COEFFICIENTS 163 Equation (8.3.2) is called the characteristic equation of Equation (8.3.1). Equation (8.3.2) is a Case 1: Let λ 1 , λ 2 be real roots of Equation (8.3.2) with λ 1 = λ 2 . Then e λ1x and e λ2x are two solutions of Equation (8.3.1) and moreover they are linearly independent (since λ 1 = λ 2 ). That is, ¦e λ1x , e λ2x ¦ forms a fundamental system of solutions of Equation (8.3.1). Case 2: Let λ 1 = λ 2 be a repeated root of p(λ) = 0. Then p 1 ) = 0. Now, d dx (L(e λx )) = L(xe λx ) = p (λ)e λx +xp(λ)e λx . But p 1 ) = 0 and therefore, L(xe λ1x ) = 0. Hence, e λ1x and xe λ1x are two linearly independent solutions of Equation (8.3.1). In this case, we have a fundamental system of solutions of Equation (8.3.1). Case 3: Let λ = α +iβ be a complex root of Equation (8.3.2). So, α −iβ is also a root of Equation (8.3.2). Before we proceed, we note: Lemma 8.3.2 Let y = u + iv be a solution of Equation (8.3.1), where u and v are real valued functions. Then u and v are solutions of Equation (8.3.1). In other words, the real part and the imaginary part of a complex valued solution (of a real variable ODE Equation (8.3.1)) are themselves solution of Equation (8.3.1). Proof. exercise. Let λ = α +iβ be a complex root of p(λ) = 0. Then e αx (cos(βx) +i sin(βx)) is a complex solution of Equation (8.3.1). By Lemma 8.3.2, y 1 = e αx cos(βx) and y 2 = sin(βx) are solutions of Equation (8.3.1). It is easy to note that y 1 and y 2 are linearly independent. It is as good as saying ¦e λx cos(βx), e λx sin(βx)¦ forms a fundamental system of solutions of Equation (8.3.1). Exercise 8.3.3 1. Find the general solution of the follwoing equations. (a) y −4y + 3y = 0. (b) 2y + 5y = 0. (c) y −9y = 0. (d) y +k 2 y = 0, where k is a real constant. 2. Solve the following IVP’s. (a) y +y = 0, y(0) = 0, y (0) = 1. (b) y −y = 0, y(0) = 1, y (0) = 1. (c) y + 4y = 0, y(0) = −1, y (0) = −3. (d) y + 4y + 4y = 0, y(0) = 1, y (0) = 0. 3. 
Find two linearly independent solutions y 1 and y 2 of the following equations. (a) y −5y = 0. (b) y + 6y + 5y = 0. (c) y + 5y = 0. 164 CHAPTER 8. SECOND ORDER AND HIGHER ORDER EQUATIONS (d) y + 6y + 9y = 0. Also, in each case, find W(y 1 , y 2 ). 4. Show that the IVP y +y = 0, y(0) = 0 and y (0) = B has a unique solution for any real number B. 5. Consider the problem y +y = 0, y(0) = 0 and y (π) = B. (8.3.3) Show that it has a solution if and only if B = 0. Compare this with Exercise 4. Also, show that if B = 0, then there are infinitely many solutions to (8.3.3). 8.4 Non Homogeneous Equations Throughout this section, I denotes an interval in R. we assume that q(), r() and f() are real valued continuous function defined on I. Now, we focus the attention to the study of non-homogeneous equation of the form y +q(x)y +r(x)y = f(x). (8.4.1) We assume that the functions q(), r() and f() are known/given. The non-zero function f() in (8.4.1) is also called the non-homogeneous term or the forcing function. The equation y +q(x)y +r(x)y = 0. (8.4.2) is called the homogeneous equation corresponding to (8.4.1). Consider the set of all twice differentiable functions defined on I. We define an operator L on this set by L(y) = y +q(x)y + r(x)y. Then (8.4.1) and (8.4.2) can be rewritten in the (compact) form L(y) = f (8.4.3) L(y) = 0. (8.4.4) The ensuing result relates the solutions of (8.4.1) and (8.4.2). Theorem 8.4.1 1. Let y 1 and y 2 be two solutions of (8.4.1) on I. Then y = y 1 − y 2 is a solution of (8.4.2). 2. Let z be any solution of (8.4.1) on I and let z 1 be any solution of (8.4.2). Then y = z +z 1 is a solution of (8.4.1) on I. Proof. Observe that L is a linear transformation on the set of twice differentiable function on I. We therefore have L(y 1 ) = f and L(y 2 ) = f. The linearity of L implies that L(y 1 −y 2 ) = 0 or equivalently, y = y 1 −y 2 is a solution of (8.4.2). For the proof of second part, note that L(z) = f and L(z 1 ) = 0 implies that L(z +z 1 ) = L(z) +L(z 1 ) = f. Thus, y = z +z 1 is a solution of (8.4.1). The above result leads us to the following definition. 8.4. NON HOMOGENEOUS EQUATIONS 165 Definition 8.4.2 (General Solution) A general solution of (8.4.1) on I is a solution of (8.4.1) of the form y = y h +y p , x ∈ I where y h = c 1 y 1 + c 2 y 2 is a general solution of the corresponding homogeneous equation (8.4.2) and y p is any solution of (8.4.1) (preferably containing no arbitrary constants). We now prove that the solution of (8.4.1) with initial conditions is unique. Theorem 8.4.3 (Uniqueness) Suppose that x 0 ∈ I. Let y 1 and y 2 be two solutions of the IVP y +qy +ry = f, y(x 0 ) = a, y (x 0 ) = b. (8.4.5) Then y 1 = y 2 for all x ∈ I. Proof. Let z = y 1 −y 2 . Then z satisfies L(z) = 0, z(x 0 ) = 0, z (x 0 ) = 0. By the uniqueness theorem 8.1.9, we have z ≡ 0 on I. Or in other words, y 1 ≡ y 2 on I. Remark 8.4.4 The above results tell us that to solve (i.e., to find the general solution of (8.4.1)) or the IVP (8.4.5), we need to find the general solution of the homogeneous equation (8.4.2) and a particular solution y p of (8.4.1). To repeat, the two steps needed to solve (8.4.1), are: 1. compute the general solution of (8.4.2), and 2. compute a particular solution of (8.4.1). Step 1. has been dealt in the previous sections. The remainder of the section is devoted to step 2., i.e., we elaborate some methods for computing a particular solution y p of (8.4.1). Exercise 8.4.5 1. Find the general solution of the following equations: (a) y + 5y = −5. 
(You may note here that y = −x is a particular solution.) (b) y −y = −2 sinx. (First show that y = sin x is a particular solution.) 2. Solve the following IVPs: (a) y +y = 2e x , y(0) = 0 = y (0). (It is given that y = e x is a particular solution.) (b) y −y = −2 cos x, y(0) = 0, y (0) = 1. (First guess a particular solution using the idea given in Exercise 8.4.5.1b ) 3. Let f 1 (x) and f 2 (x) be two continuous functions. Let y i ’s be particular solutions of y +q(x)y +r(x)y = f i (x), i = 1, 2; where q(x) and r(x) are continuous functions. Show that y 1 + y 2 is a particular solution of y + q(x)y +r(x)y = f 1 (x) +f 2 (x). 166 CHAPTER 8. SECOND ORDER AND HIGHER ORDER EQUATIONS 8.5 Variation of Parameters In the previous section, calculation of particular integrals/solutions for some special cases have been studied. Recall that the homogeneous part of the equation had constant coefficients. In this section, we deal with a useful technique of finding a particular solution when the coefficients of the homogeneous part are continuous functions and the forcing function f(x) (or the non-homogeneous term) is piecewise continuous. Suppose y 1 and y 2 are two linearly independent solutions of y +q(x)y +r(x)y = 0 (8.5.1) on I, where q(x) and r(x) are arbitrary continuous functions defined on I. Then we know that y = c 1 y 1 +c 2 y 2 is a solution of (8.5.1) for any constants c 1 and c 2 . We now “vary” c 1 and c 2 to functions of x, so that y = u(x)y 1 +v(x)y 2 , x ∈ I (8.5.2) is a solution of the equation y +q(x)y +r(x)y = f(x), on I, (8.5.3) where f is a piecewise continuous function defined on I. The details are given in the following theorem. Theorem 8.5.1 (Method of Variation of Parameters) Let q(x) and r(x) be continuous functions defined on I and let f be a piecewise continuous function on I. Let y 1 and y 2 be two linearly independent solutions of (8.5.1) on I. Then a particular solution y p of (8.5.3) is given by y p = −y 1 y 2 f(x) W dx +y 2 y 1 f(x) W dx, (8.5.4) where W = W(y 1 , y 2 ) is the Wronskian of y 1 and y 2 . (Note that the integrals in (8.5.4) are the indefinite integrals of the respective arguments.) Proof. Let u(x) and v(x) be continuously differentiable functions (to be determined) such that y p = uy 1 +vy 2 , x ∈ I (8.5.5) is a particular solution of (8.5.3). Differentiation of (8.5.5) leads to y p = uy 1 +vy 2 +u y 1 +v y 2 . (8.5.6) We choose u and v so that u y 1 +v y 2 = 0. (8.5.7) Substituting (8.5.7) in (8.5.6), we have y p = uy 1 +vy 2 , and y p = uy 1 +vy 2 +u y 1 +v y 2 . (8.5.8) Since y p is a particular solution of (8.5.3), substitution of (8.5.5) and (8.5.8) in (8.5.3), we get u y 1 +q(x)y 1 +r(x)y 1 +v y 2 +q(x)y 2 +r(x)y 2 +u y 1 +v y 2 = f(x). As y 1 and y 2 are solutions of the homogeneous equation (8.5.1), we obtain the condition u y 1 +v y 2 = f(x). (8.5.9) 8.5. VARIATION OF PARAMETERS 167 We now determine u and v from (8.5.7) and (8.5.9). By using the Cramer’s rule for a linear system of equations, we get u = − y 2 f(x) W and v = y 1 f(x) W (8.5.10) (note that y 1 and y 2 are linearly independent solutions of (8.5.1) and hence the Wronskian, W = 0 for any x ∈ I). Integration of (8.5.10) give us u = − y 2 f(x) W dx and v = y 1 f(x) W dx (8.5.11) ( without loss of generality, we set the values of integration constants to zero). Equations (8.5.11) and (8.5.5) yield the desired results. Thus the proof is complete. Before, we move onto some examples, the following comments are useful. Remark 8.5.2 1. 
The integrals in (8.5.11) exist, because y 2 and W(= 0) are continuous functions and f is a piecewise continuous function. Sometimes, it is useful to write (8.5.11) in the form u = − x x0 y 2 (s)f(s) W(s) ds and v = x x0 y 1 (s)f(s) W(s) ds where x ∈ I and x 0 is a fixed point in I. In such a case, the particular solution y p as given by (8.5.4) assumes the form y p = −y 1 x x0 y 2 (s)f(s) W(s) ds +y 2 ) x x0 y 1 (s)f(s) W(s) ds (8.5.12) for a fixed point x 0 ∈ I and for any x ∈ I. 2. Again, we stress here that, q and r are assumed to be continuous. They need not be constants. Also, f is a piecewise continuous function on I. 3. A word of caution. While using (8.5.4), one has to keep in mind that the coefficient of y in (8.5.3) is 1. Example 8.5.3 1. Find the general solution of y +y = 1 2 + sin x , x ≥ 0. Solution: The general solution of the corresponding homogeneous equation y +y = 0 is given by y h = c 1 cos x +c 2 sin x. Here, the solutions y 1 = sinx and y 2 = cos x are linearly independent over I = [0, ∞) and W = W(sin x, cos x) = 1. Therefore, a particular solution, y h , by Theorem 8.5.1, is y p = −y 1 y 2 2 + sinx dx +y 2 y 1 2 + sin x dx = sin x cos x 2 + sin x dx + cos x sin x 2 + sinx dx = −sinx ln(2 + sin x) + cos x (x − 2 1 2 + sinx dx). (8.5.13) So, the required general solution is y = c 1 cos x +c 2 sin x +y p where y p is given by (8.5.13). 168 CHAPTER 8. SECOND ORDER AND HIGHER ORDER EQUATIONS 2. Find a particular solution of x 2 y −2xy + 2y = x 3 , x > 0. Solution: Verify that the given equation is y 2 x y + 2 x 2 y = x and two linearly independent solutions of the corresponding homogeneous part are y 1 = x and y 2 = x 2 . Here W = W(x, x 2 ) = x x 2 1 2x = x 2 , x > 0. By Theorem 8.5.1, a particular solution y p is given by y p = −x x 2 x x 2 dx +x 2 x x x 2 dx = − x 3 2 +x 3 = x 3 2 . The readers should note that the methods of Section 8.7 are not applicable as the given equation is not an equation with constant coefficients. Exercise 8.5.4 1. Find a particular solution for the following problems: (a) y +y = f(x), 0 ≤ x ≤ 1 where f(x) = 0 if 0 ≤ x < 1 2 1 if 1 2 ≤ x ≤ 1. (b) y +y = 2 sec x for all x ∈ (0, π 2 ). (c) y −3y + 2y = −2 cos(e −x ), x > 0. (d) x 2 y +xy −y = 2x, x > 0. 2. Use the method of variation of parameters to find the general solution of (a) y −y = −e x for all x ∈ R. (b) y +y = sin x for all x ∈ R. 3. Solve the following IVPs: (a) y +y = f(x), x ≥ 0 where f(x) = 0 if 0 ≤ x < 1 1 if x ≥ 1. with y(0) = 0 = y (0). (b) y −y = [x[ for all x ∈ [−1, ∞) with y(−1) = 0 and y (−1) = 1. 8.6 Higher Order Equations with Constant Coefficients This section is devoted to an introductory study of higher order linear equations with constant coeffi- cients. This is an extension of the study of 2 nd order linear equations with constant coefficients (see, Section 8.3). The standard form of a linear n th order differential equation with constant coefficients is given by L n (y) = f(x) on I, (8.6.1) where L n d n dx n +a 1 d n−1 dx n−1 + +a n−1 d dx +a n 8.6. HIGHER ORDER EQUATIONS WITH CONSTANT COEFFICIENTS 169 is a linear differential operator of order n with constant coefficients, a 1 , a 2 , . . . , a n being real constants (called the coefficients of the linear equation) and the function f(x) is a piecewise continuous function defined on the interval I. We will be using the notation y (n) for the n th derivative of y. 
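Before proceeding, the variation-of-parameters formula (8.5.4) of the previous section can be verified mechanically on Example 8.5.3.2; the sketch below assumes sympy.

    import sympy as sp

    x = sp.symbols('x', positive=True)

    # Example 8.5.3.2 in standard form: y'' - (2/x) y' + (2/x**2) y = x,
    # with the fundamental system y1 = x, y2 = x**2.
    y1, y2, f = x, x**2, x
    W = sp.simplify(y1*sp.diff(y2, x) - sp.diff(y1, x)*y2)    # Wronskian = x**2

    yp = -y1*sp.integrate(y2*f/W, x) + y2*sp.integrate(y1*f/W, x)
    print(sp.expand(yp))                                      # x**3/2, as in the text

    # Substitute y_p back into the equation; the residual should vanish.
    residual = sp.diff(yp, x, 2) - (2/x)*sp.diff(yp, x) + (2/x**2)*yp - f
    print(sp.simplify(residual))                              # 0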
If f(x) ≡ 0, then (8.6.1) which reduces to L n (y) = 0 on I, (8.6.2) is called a homogeneous linear equation, otherwise (8.6.1) is called a non-homogeneous linear equation. The function f is also known as the non-homogeneous term or a forcing term. Definition 8.6.1 A function y defined on I is called a solution of (8.6.1) if y is n times differentiable and y along with its derivatives satisfy (8.6.1). Remark 8.6.2 1. If u and v are any two solutions of (8.6.1), then y = u − v is also a solution of (8.6.2). Hence, if v is a solution of (8.6.2) and y p is a solution of (8.6.1), then u = v + y p is a solution of (8.6.1). 2. Let y 1 and y 2 be two solutions of (8.6.2). Then for any constants (need not be real) c 1 , c 2 , y = c 1 y 1 +c 2 y 2 is also a solution of (8.6.2). The solution y is called the superposition of y 1 and y 2 . 3. Note that y ≡ 0 is a solution of (8.6.2). This, along with the super-position principle, ensures that the set of solutions of (8.6.2) forms a vector space over R. This vector space is called the solution space or space of solutions of (8.6.2). As in Section 8.3, we first take up the study of (8.6.2). It is easy to note (as in Section 8.3) that for a constant λ, L n (e λx ) = p(λ)e λx where, p(λ) = λ n +a 1 λ n−1 + + a n (8.6.3) Definition 8.6.3 (Characteristic Equation) The equation p(λ) = 0, where p(λ) is defined in (8.6.3), is called the characteristic equation of (8.6.2). Note that p(λ) is of polynomial of degree n with real coefficients. Thus, it has n zeros (counting with multiplicities). Also, in case of complex roots, they will occur in conjugate pairs. In view of this, we have the following theorem. The proof of the theorem is omitted. Theorem 8.6.4 e λx is a solution of (8.6.2) on any interval I ⊂ R if and only if λ is a root of (8.6.3) 1. If λ 1 , λ 2 , . . . , λ n are distinct roots of p(λ) = 0, then e λ1x , e λ2x , . . . , e λnx are the n linearly independent solutions of (8.6.2). 2. If λ 1 is a repeated root of p(λ) = 0 of multiplicity k, i.e., λ 1 is a zero of (8.6.3) repeated k times, then e λ1x , xe λ1x , . . . , x k−1 e λ1x are linearly independent solutions of (8.6.2), corresponding to the root λ 1 of p(λ) = 0. 170 CHAPTER 8. SECOND ORDER AND HIGHER ORDER EQUATIONS 3. If λ 1 = α +iβ is a complex root of p(λ) = 0, then so is the complex conjugate λ 1 = α −iβ. Then the corresponding linearly independent solutions of (8.6.2) are y 1 = e αx cos(βx) +i sin(βx) and y 2 = e αx cos(βx) −i sin(βx) . These are complex valued functions of x. However, using super-position principle, we note that y 1 +y 2 2 = e αx cos(βx) and y 1 −y 2 2i = e αx sin(βx) are also solutions of (8.6.2). Thus, in the case of λ 1 = α + iβ being a complex root of p(λ) = 0, we have the linearly independent solutions e αx cos(βx) and e αx sin(βx). Example 8.6.5 1. Find the solution space of the differential equation y −6y + 11y −6y = 0. Solution: Its characteristic equation is p(λ) = λ 3 −6λ 2 + 11λ −6 = 0. By inspection, the roots of p(λ) = 0 are λ = 1, 2, 3. So, the linearly independent solutions are e x , e 2x , e 3x and the solution space is ¦c 1 e x +c 2 e 2x + c 3 e 3x : c 1 , c 2 , c 3 ∈ R¦. 2. Find the solution space of the differential equation y −2y +y = 0. Solution: Its characteristic equation is p(λ) = λ 3 −2λ 2 +λ = 0. By inspection, the roots of p(λ) = 0 are λ = 0, 1, 1. So, the linearly independent solutions are 1, e x , xe x and the solution space is ¦c 1 +c 2 e x +c 3 xe x : c 1 , c 2 , c 3 ∈ R¦. 3. 
Find the solution space of the differential equation y (4) + 2y +y = 0. Solution: Its characteristic equation is p(λ) = λ 4 + 2λ 2 + 1 = 0. By inspection, the roots of p(λ) = 0 are λ = i, i, −i, −i. So, the linearly independent solutions are sinx, xsin x, cos x, xcos x and the solution space is ¦c 1 sin x +c 2 cos x +c 3 xsin x +c 4 xcos x : c 1 , c 2 , c 3 , c 4 ∈ R¦. 8.6. HIGHER ORDER EQUATIONS WITH CONSTANT COEFFICIENTS 171 From the above discussion, it is clear that the linear homogeneous equation (8.6.2), admits n lin- early independent solutions since the algebraic equation p(λ) = 0 has exactly n roots (counting with multiplicity). Definition 8.6.6 (General Solution) Let y 1 , y 2 , . . . , y n be any set of n linearly independent solution of (8.6.2). Then y = c 1 y 1 + c 2 y 2 + +c n y n is called a general solution of (8.6.2), where c 1 , c 2 , . . . , c n are arbitrary real constants. Example 8.6.7 1. Find the general solution of y = 0. Solution: Note that 0 is the repeated root of the characteristic equation λ 3 = 0. So, the general solution is y = c 1 +c 2 x +c 3 x 2 . 2. Find the general solution of y +y +y +y = 0. Solution: Note that the roots of the characteristic equation λ 3 + λ 2 + λ + 1 = 0 are −1, i, −i. So, the general solution is y = c 1 e −x +c 2 sin x +c 3 cos x. Exercise 8.6.8 1. Find the general solution of the following differential equations: (a) y +y = 0. (b) y + 5y −6y = 0. (c) y iv + 2y +y = 0. 2. Find a linear differential equation with constant coefficients and of order 3 which admits the following solutions: (a) cos x, sin x and e −3x . (b) e x , e 2x and e 3x . (c) 1, e x and x. 3. Solve the following IVPs: (a) y iv −y = 0, y(0) = 0, y (0) = 0, y (0) = 0, y (0) = 1. (b) 2y +y + 2y +y = 0, y(0) = 0, y (0) = 1, y (0) = 0. 4. Euler Cauchy Equations: Let a 0 , a 1 , . . . , a n−1 ∈ R be given constants. The equation x n d n y dx n +a n−1 x n−1 d n−1 y dx n−1 + +a 0 y = 0, x ∈ I (8.6.4) is called the homogeneous Euler-Cauchy Equation (or just Euler’s Equation) of degree n. (8.6.4) is also called the standard form of the Euler equation. We define L(y) = x n d n y dx n +a n−1 x n−1 d n−1 y dx n−1 + +a 0 y. 172 CHAPTER 8. SECOND ORDER AND HIGHER ORDER EQUATIONS Then substituting y = x λ , we get L(x λ ) = λ(λ −1) (λ −n + 1) +a n−1 λ(λ −1) (λ −n + 2) + +a 0 x λ . So, x λ is a solution of (8.6.4), if and only if λ(λ −1) (λ −n + 1) +a n−1 λ(λ −1) (λ −n + 2) + +a 0 = 0. (8.6.5) Essentially, for finding the solutions of (8.6.4), we need to find the roots of (8.6.5), which is a polynomial in λ. With the above understanding, solve the following homogeneous Euler equations: (a) x 3 y + 3x 2 y + 2xy = 0. (b) x 3 y −6x 2 y + 11xy −6y = 0. (c) x 3 y −x 2 y +xy −y = 0. For an alternative method of solving (8.6.4), see the next exercise. 5. Consider the Euler equation (8.6.4) with x > 0 and x ∈ I. Let x = e t or equivalently t = ln x. Let D = d dt and d = d dx . Then (a) show that xd(y) = Dy(t), or equivalently x dy dx = dy) dt . (b) using mathematical induction, show that x n d n y = D(D −1) (D −n + 1) y(t). (c) with the new (independent) variable t, the Euler equation (8.6.4) reduces to an equation with constant coefficients. So, the questions in the above part can be solved by the method just explained. We turn our attention toward the non-homogeneous equation (8.6.1). If y p is any solution of (8.6.1) and if y h is the general solution of the corresponding homogeneous equation (8.6.2), then y = y h +y p is a solution of (8.6.1). 
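Before turning to particular solutions, the homogeneous computations of Examples 8.6.5 and 8.6.7 can be reproduced with a computer algebra system, since they reduce to finding the roots of the characteristic polynomial. The sketch below, assuming sympy, treats Example 8.6.5.1.

    import sympy as sp

    lam, x = sp.symbols('lambda x')
    yf = sp.Function('y')

    # Example 8.6.5.1:  y''' - 6y'' + 11y' - 6y = 0.
    p = lam**3 - 6*lam**2 + 11*lam - 6
    print(sp.roots(p, lam))          # {1: 1, 2: 1, 3: 1}: three simple roots

    # Hence e**x, e**(2x), e**(3x) span the solution space; dsolve agrees
    # (up to how the arbitrary constants are grouped in the output).
    ode = sp.Eq(yf(x).diff(x, 3) - 6*yf(x).diff(x, 2)
                + 11*yf(x).diff(x) - 6*yf(x), 0)
    print(sp.dsolve(ode))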
The solution y involves n arbitrary constants. Such a solution is called the general solution of (8.6.1). Solving an equation of the form (8.6.1) usually means to find a general solution of (8.6.1). The solution y p is called a particular solution which may not involve any arbitrary constants. Solving (8.6.1) essentially involves two steps (as we had seen in detail in Section 8.3). Step 1: a) Calculation of the homogeneous solution y h and b) Calculation of the particular solution y p . In the ensuing discussion, we describe the method of undetermined coefficients to determine y p . Note that a particular solution is not unique. In fact, if y p is a solution of (8.6.1) and u is any solution of (8.6.2), then y p +u is also a solution of (8.6.1). The undetermined coefficients method is applicable for equations (8.6.1). 8.7 Method of Undetermined Coefficients In the previous section, we have seen than a general solution of L n (y) = f(x) on I (8.7.6) can be written in the form y = y h +y p , where y h is a general solution of L n (y) = 0 and y p is a particular solution of (8.7.6). In view of this, in this section, we shall attempt to obtain y p for (8.7.6) using the method of undetermined coefficients in the following particular cases of f(x); 8.7. METHOD OF UNDETERMINED COEFFICIENTS 173 1. f(x) = ke αx ; k = 0, α a real constant 2. f(x) = e αx k 1 cos(βx) +k 2 sin(βx) ; k 1 , k 2 , α, β ∈ R 3. f(x) = x m . Case I. f(x) = ke αx ; k = 0, α a real constant. We first assume that α is not a root of the characteristic equation, i.e., p(α) = 0. Note that L n (e αx ) = p(α)e αx . Therefore, let us assume that a particular solution is of the form y p = Ae αx , where A, an unknown, is an undetermined coefficient. Thus L n (y p ) = Ap(α)e αx . Since p(α) = 0, we can choose A = k p(α) to obtain L n (y p ) = ke αx . Thus, y p = k p(α) e αx is a particular solution of L n (y) = ke αx . Modification Rule: If α is a root of the characteristic equation, i.e., p(α) = 0, with multiplicity r, (i.e., p(α) = p (α) = = p (r−1) (α) = 0 and p (r) (α) = 0) then we take, y p of the form y p = Ax r e αx and obtain the value of A by substituting y p in L n (y) = ke αx . Example 8.7.1 1. Find a particular solution of y −4y = 2e x . Solution: Here f(x) = 2e x with k = 2 and α = 1. Also, the characteristic polynomial, p(λ) = λ 2 −4. Note that α = 1 is not a root of p(λ) = 0. Thus, we assume y p = Ae x . This on substitution gives Ae x −4Ae x = 2e x =⇒−3Ae x = 2e x . So, we choose A = −2 3 , which gives a particular solution as y p = −2e x 3 . 2. Find a particular solution of y −3y + 3y −y = 2e x . Solution: The characteristic polynomial is p(λ) = λ 3 −3λ 2 + 3λ −1 = (λ −1) 3 and α = 1. Clearly, p(1) = 0 and λ = α = 1 has multiplicity r = 3. Thus, we assume y p = Ax 3 e x . Substituting it in the given equation,we have Ae x x 3 + 9x 2 + 18x + 6 − 3Ae x x 3 + 6x 2 + 6x + 3Ae x x 3 + 3x 2 − Ax 3 e x = 2e x . Solving for A, we get A = 1 3 , and thus a particular solution is y p = x 3 e x 3 . 174 CHAPTER 8. SECOND ORDER AND HIGHER ORDER EQUATIONS 3. Find a particular solution of y −y = e 2x . Solution: The characteristic polynomial is p(λ) = λ 3 −λ and α = 2. Thus, using y p = Ae 2x , we get A = 1 p(α) = 1 6 , and hence a particular solution is y p = e 2x 6 . 4. Solve y −3y + 3y −y = 2e 2x . Exercise 8.7.2 Find a particular solution for the following differential equations: 1. y −3y + 2y = e x . 2. y −9y = e 3x . 3. y −3y + 6y −4y = e 2x . Case II. 
f(x) = e αx k 1 cos(βx) +k 2 sin(βx) ; k 1 , k 2 , α, β ∈ R We first assume that α + iβ is not a root of the characteristic equation, i.e., p(α + iβ) = 0. Here, we assume that y p is of the form y p = e αx Acos(βx) +Bsin(βx) , and then comparing the coefficients of e αx cos x and e αx sin x (why!) in L n (y) = f(x), obtain the values of A and B. Modification Rule: If α+iβ is a root of the characteristic equation, i.e., p(α+iβ) = 0, with multiplicity r, then we assume a particular solution as y p = x r e αx Acos(βx) +Bsin(βx) , and then comparing the coefficients in L n (y) = f(x), obtain the values of A and B. Example 8.7.3 1. Find a particular solution of y + 2y + 2y = 4e x sin x. Solution: Here, α = 1 and β = 1. Thus α + iβ = 1 + i, which is not a root of the characteristic equation p(λ) = λ 2 + 2λ + 2 = 0. Note that the roots of p(λ) = 0 are −1 ±i. Thus, let us assume y p = e x (Asinx +Bcos x) . This gives us (−4B + 4A)e x sin x + (4B + 4A)e x cos x = 4e x sinx. Comparing the coefficients of e x cos x and e x sin x on both sides, we get A −B = 1 and A + B = 0. On solving for A and B, we get A = −B = 1 2 . So, a particular solution is y p = e x 2 (sin x −cos x) . 2. Find a particular solution of y +y = sinx. Solution: Here, α = 0 and β = 1. Thus α + iβ = i, which is a root with multiplicity r = 1, of the characteristic equation p(λ) = λ 2 + 1 = 0. So, let y p = x(Acos x +Bsin x) . Substituting this in the given equation and comparing the coefficients of cos x and sin x on both sides, we get B = 0 and A = − 1 2 . Thus, a particular solution is y p = −1 2 xcos x. 8.7. METHOD OF UNDETERMINED COEFFICIENTS 175 Exercise 8.7.4 Find a particular solution for the following differential equations: 1. y −y +y −y = e x cos x. 2. y + 2y +y = sin x. 3. y −2y + 2y = e x cos x. Case III. f(x) = x m . Suppose p(0) = 0. Then we assume that y p = A m x m +A m−1 x m−1 + +A 0 and then compare the coefficient of x k in L n (y p ) = f(x) to obtain the values of A i for 0 ≤ i ≤ m. Modification Rule: If λ = 0 is a root of the characteristic equation, i.e., p(0) = 0, with multiplicity r, then we assume a particular solution as y p = x r A m x m +A m−1 x m−1 + +A 0 and then compare the coefficient of x k in L n (y p ) = f(x) to obtain the values of A i for 0 ≤ i ≤ m. Example 8.7.5 Find a particular solution of y −y +y −y = x 2 . Solution: As p(0) = 0, we assume y p = A 2 x 2 +A 1 x +A 0 which on substitution in the given differential equation gives −2A 2 + (2A 2 x +A 1 ) −(A 2 x 2 +A 1 x +A 0 ) = x 2 . Comparing the coefficients of different powers of x and solving, we get A 2 = −1, A 1 = −2 and A 0 = 0. Thus, a particular solution is y p = −(x 2 + 2x). Finally, note that if y p1 is a particular solution of L n (y) = f 1 (x) and y p2 is a particular solution of L n (y) = f 2 (x), then a particular solution of L n (y) = k 1 f 1 (x) +k 2 f 2 (x) is given by y p = k 1 y p1 +k 2 y p2 . In view of this, one can use method of undetermined coefficients for the cases, where f(x) is a linear combination of the functions described above. Example 8.7.6 Find a particular soltution of y +y = 2 sinx + sin 2x. Solution: We can divide the problem into two problems: 1. y +y = 2 sinx. 176 CHAPTER 8. SECOND ORDER AND HIGHER ORDER EQUATIONS 2. y +y = sin2x. For the first problem, a particular solution (Example 8.7.3.2) is y p1 = 2 −1 2 xcos x = −xcos x. For the second problem, one can check that y p2 = −1 3 sin(2x) is a particular solution. Thus, a particular solution of the given problem is y p1 +y p2 = −xcos x − 1 3 sin(2x). 
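The particular solutions obtained above by the method of undetermined coefficients are easy to verify symbolically. A minimal sketch in Python using sympy (the library choice is an editorial assumption, not part of the text):

import sympy as sp

x = sp.symbols('x')

def residual(y, rhs, op):
    # returns op(y) - rhs, which should simplify to 0 for a particular solution
    return sp.simplify(op(y) - rhs)

# Example 8.7.3.1:  y'' + 2y' + 2y = 4 e^x sin x
yp = sp.exp(x)*(sp.sin(x) - sp.cos(x))/2
print(residual(yp, 4*sp.exp(x)*sp.sin(x),
               lambda y: sp.diff(y, x, 2) + 2*sp.diff(y, x) + 2*y))    # 0

# Example 8.7.5:  y''' - y'' + y' - y = x^2
yp = -(x**2 + 2*x)
print(residual(yp, x**2,
               lambda y: sp.diff(y, x, 3) - sp.diff(y, x, 2) + sp.diff(y, x) - y))  # 0

# Example 8.7.6:  y'' + y = 2 sin x + sin 2x  (superposition of two particular solutions)
yp = -x*sp.cos(x) - sp.sin(2*x)/3
print(residual(yp, 2*sp.sin(x) + sp.sin(2*x),
               lambda y: sp.diff(y, x, 2) + y))                        # 0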
Exercise 8.7.7 Find a particular solution for the following differential equations: 1. y −y +y −y = 5e x cos x + 10e 2x . 2. y + 2y +y = x +e −x . 3. y + 3y −4y = 4e x +e 4x . 4. y + 9y = cos x +x 2 +x 3 . 5. y −3y + 4y = x 2 +e 2x sin x. 6. y + 4y + 6y + 4y + 5y = 2 sinx +x 2 . Chapter 9 Solutions Based on Power Series 9.1 Introduction In the previous chapter, we had a discussion on the methods of solving y +ay +by = f(x); where a, b were real numbers and f was a real valued continuous function. We also looked at Euler Equations which can be reduced to the above form. The natural question is: what if a and b are functions of x? In this chapter, we have a partial answer to the above question. In general, there are no methods of finding a solution of an equation of the form y +q(x)y +r(x)y = f(x), x ∈ I where q(x) and r(x) are real valued continuous functions defined on an interval I ⊂ R. In such a situation, we look for a class of functions q(x) and r(x) for which we may be able to solve. One such class of functions is called the set of analytic functions. Definition 9.1.1 (Power Series) Let x 0 ∈ R and a 0 , a 1 , . . . , a n , . . . ∈ R be fixed. An expression of the type ¸ n=0 a n (x −x 0 ) n (9.1.1) is called a power series in x around x 0 . The point x 0 is called the centre, and a n ’s are called the coefficients. In short, a 0 , a 1 , . . . , a n , . . . are called the coefficient of the power series and x 0 is called the centre. Note here that a n ∈ R is the coefficient of (x −x 0 ) n and that the power series converges for x = x 0 . So, the set S = ¦x ∈ R : ¸ n=0 a n (x −x 0 ) n converges¦ is a non-empty. It turns out that the set S is an interval in R. We are thus led to the following definition. Example 9.1.2 1. Consider the power series x − x 3 3! + x 5 5! x 7 7! + . In this case, x 0 = 0 is the centre, a 0 = 0 and a 2n = 0 for n ≥ 1. Also, a 2n+1 = (−1) n (2n + 1)! , n = 1, 2, . . . . Recall that the Taylor series expansion around x 0 = 0 of sin x is same as the above power series. 177 178 CHAPTER 9. SOLUTIONS BASED ON POWER SERIES 2. Any polynomial a 0 +a 1 x +a 2 x 2 + +a n x n is a power series with x 0 = 0 as the centre, and the coefficients a m = 0 for m ≥ n + 1. Definition 9.1.3 (Radius of Convergence) A real number R ≥ 0 is called the radius of convergence of the power series (9.1.1), if ¸ n≥0 a n (x − x 0 ) n converges absolutely for all x satisfying [x − x 0 [ < R and diverges for all x satisfying [x −x 0 [ > R. From what has been said earlier, it is clear that the set of points x where the power series (9.1.1) is convergent is the interval (−R + x 0 , x 0 + R), whenever R is the radius of convergence. If R = 0, the power series is convergent only at x = x 0 . Let R > 0 be the radius of convergence of the power series (9.1.1). Let I = (−R + x 0 , x 0 + R). In the interval I, the power series (9.1.1) converges. Hence, it defines a real valued function and we denote it by f(x), i.e., f(x) = ¸ n=1 a n (x −x 0 ) n , x ∈ I. Such a function is well defined as long as x ∈ I. f is called the function defined by the power series (9.1.1) on I. Sometimes, we also use the terminology that (9.1.1) induces a function f on I. It is a natural question to ask how to find the radius of convergence of a power series (9.1.1). We state one such result below but we do not intend to give a proof. Theorem 9.1.4 1. Let ¸ n=1 a n (x−x 0 ) n be a power series with centre x 0 . Then there exists a real number R ≥ 0 such that ¸ n=1 a n (x −x 0 ) n converges for all x ∈ (−R +x 0 , x 0 + R). 
In this case, the power series ¸ n=1 a n (x −x 0 ) n converges absolutely and uniformly on [x −x 0 [ ≤ r for all r < R and diverges for all x with [x −x 0 [ > R. 2. Suppose R is the radius of convergence of the power series (9.1.1). Suppose lim n−→∞ n [a n [ exists and equals . (a) If = 0, then R = 1 . (b) If = 0, then the power series (9.1.1) converges for all x ∈ R. Note that lim n−→∞ n [a n [ exists if lim n−→∞ a n+1 a n exists and lim n−→∞ n [a n [ = lim n−→∞ a n+1 a n . Remark 9.1.5 If the reader is familiar with the concept of limsup of a sequence, then we have a modification of the above theorem. In case, n [a n [ does not tend to a limit as n −→ ∞, then the above theorem holds if we replace lim n−→∞ n [a n [ by limsup n−→∞ n [a n [. 9.1. INTRODUCTION 179 Example 9.1.6 1. Consider the power series ¸ n=0 (x +1) n . Here x 0 = −1 is the centre and a n = 1 for all n ≥ 0. So, n [a n [ = n 1 = 1. Hence, by Theorem 9.1.4, the radius of convergenceR = 1. 2. Consider the power series ¸ n≥0 (−1) n (x + 1) 2n+1 (2n + 1)! . In this case, the centre is x 0 = −1, a n = 0 for n even and a 2n+1 = (−1) n (2n + 1)! . So, lim n−→∞ 2n+1 [a 2n+1 [ = 0 and lim n−→∞ 2n [a 2n [ = 0. Thus, lim n−→∞ n [a n [ exists and equals 0. Therefore, the power series converges for all x ∈ R. Note that the series converges to sin(x + 1). 3. Consider the power series ¸ n=1 x 2n . In this case, we have a 2n = 1 and a 2n+1 = 0 for n = 0, 1, 2, . . . . So, lim n−→∞ 2n+1 [a 2n+1 [ = 0 and lim n−→∞ 2n [a 2n [ = 1. Thus, lim n−→∞ n [a n [ does not exist. We let u = x 2 . Then the power series ¸ n=1 x 2n reduces to ¸ n=1 u n . But then from Example 9.1.6.1, we learned that ¸ n=1 u n converges for all u with [u[ < 1. Therefore, the original power series converges whenever [x 2 [ < 1 or equivalently whenever [x[ < 1. So, the radius of convergence is R = 1. Note that 1 1 −x 2 = ¸ n=1 x 2n for [x[ < 1. 4. Consider the power series ¸ n≥0 n n x n . In this case, n [a n [ = n n n = n. doesn’t have any finite limit as n −→∞. Hence, the power series converges only for x = 0. 5. The power series ¸ n≥0 x n n! has coefficients a n = 1 n! and it is easily seen that lim n−→∞ 1 n! 1 n = 0 and the power series converges for all x ∈ R. Recall that it represents e x . Definition 9.1.7 Let f : I −→ R be a function and x 0 ∈ I. f is called analytic around x 0 if there exists a δ > 0 such that f(x) = ¸ n≥0 a n (x −x 0 ) n for every x with [x −x 0 [ < δ. That is, f has a power series representation in a neighbourhood of x 0 . 9.1.1 Properties of Power Series Now we quickly state some of the important properties of the power series. Consider two power series ¸ n=0 a n (x −x 0 ) n and ¸ n=0 b n (x −x 0 ) n 180 CHAPTER 9. SOLUTIONS BASED ON POWER SERIES 1 > 0 and R 2 > 0, respectively. Let F(x) and G(x) be the functions defined by the two power series defined for all x ∈ I, where I = (−R+x 0 , x 0 +R) with R = min¦R 1 , R 2 ¦. Note that both the power series converge for all x ∈ I. With F(x), G(x) and I as defined above, we have the following properties of the power series. 1. Equality of Power Series The two power series defined by F(x) and G(x) are equal for all x ∈ I if and only if a n = b n for all n = 0, 1, 2, . . . . In particular, if ¸ n=0 a n (x −x 0 ) n = 0 for all x ∈ I, then a n = 0 for all n = 0, 1, 2, . . . . For all x ∈ I, we have F(x) +G(x) = ¸ n=0 (a n +b n )(x −x 0 ) n Essentially, it says that in the common part of the regions of convergence, the two power series can be added term by term. 3. 
Multiplication of Power Series Let us define c 0 = a 0 b 0 , and inductively c n = n ¸ j=1 a n−j b j . Then for all x ∈ I, the product of F(x) and G(x) is defined by H(x) = F(x)G(x) = ¸ n=0 c n (x −x 0 ) n . H(x) is called the “Cauchy Product” of F(x) and G(x). Note that for any n ≥ o, the coefficient of x n in ¸ ¸ j=0 a j (x −x 0 ) j ¸ ¸ k=0 b k (x −x 0 ) k is c n = n ¸ j=1 a n−j b j . 4. Term by Term Differentiation The term by term differentiation of the power series function F(x) is ¸ n=1 na n (x −x 0 ) n . Note that it also has R 1 as the radius of convergence as by Theorem 9.1.4 lim n−→∞ n [a n [ = 1 R1 and lim n−→∞ n [na n [ = lim n−→∞ n [n[ lim n−→∞ n [a n [ = 1 1 R 1 . Let 0 < r < R 1 . Then for all x ∈ (−r +x 0 , x 0 +r), we have d dx F(x) = F (x) = ¸ n=1 na n (x −x 0 ) n . In other words, inside the region of convergence, the power series can be differentiated term by term. 9.2. SOLUTIONS IN TERMS OF POWER SERIES 181 In the following, we shall consider power series with x 0 = 0 as the centre. Note that by a transfor- mation of X = x −x 0 , the centre of the power series can be shifted to the origin. Exercise 9.1.1 1. which of the following represents a power series (with centre x 0 indicated in the brack- ets) in x? (a) 1 +x 2 +x 4 + +x 2n + (x 0 = 0). (b) 1 + sin x + (sin x) 2 + + (sin x) n + (x 0 = 0). (c) 1 +x[x[ +x 2 [x 2 [ + +x n [x n [ + (x 0 = 0). 2. Let f(x) and g(x) be two power series around x 0 = 0, defined by f(x) = x − x 3 3! + x 5 5! − + (−1) n x 2n+1 (2n + 1)! + and g(x) = 1 − x 2 2! + x 4 4! − + (−1) n x 2n (2n)! + . Find the radius of convergence of f(x) and g(x). Also, for each x in the domain of convergence, show that f (x) = g(x) and g (x) = −f(x). [Hint: Use Properties 1, 2, 3 and 4 mentioned above. Also, note that we usually call f(x) by sin x and g(x) by cos x.] 3. Find the radius of convergence of the following series centred at x 0 = −1. (a) 1 + (x + 1) + (x+1) 2 2! + + (x+1) n n! + . (b) 1 + (x + 1) + 2(x + 1) 2 + +n(x + 1) n + . 9.2 Solutions in terms of Power Series Consider a linear second order equation of the type y +a(x)y +b(x)y = 0. (9.2.1) Let a and b be analytic around the point x 0 = 0. In such a case, we may hope to have a solution y in terms of a power series, say y = ¸ k=0 c k x k . (9.2.2) In the absence of any information, let us assume that (9.2.1) has a solution y represented by (9.2.2). We substitute (9.2.2) in Equation (9.2.1) and try to find the values of c k ’s. Let us take up an example for illustration. Example 9.2.1 Consider the differential equation y +y = 0 (9.2.3) Here a(x) ≡ 0, b(x) ≡ 1, which are analytic around x 0 = 0. Solution: Let y = ¸ n=0 c n x n . (9.2.4) 182 CHAPTER 9. SOLUTIONS BASED ON POWER SERIES Then y = ¸ n=0 nc n x n−1 and y = ¸ n=0 n(n − 1)c n x n−2 . Substituting the expression for y, y and y in Equation (9.2.3), we get ¸ n=0 n(n −1)c n x n−2 + ¸ n=0 c n x n = 0 or, equivalently 0 = ¸ n=0 (n + 2)(n + 1)c n+2 x n + ¸ n=0 c n x n = ¸ n=0 ¦(n + 1)(n + 2)c n+2 +c n ¦x n . Hence for all n = 0, 1, 2, . . . , (n + 1)(n + 2)c n+2 +c n = 0 or c n+2 = − c n (n + 1)(n + 2) . Therefore, we have c 2 = − c0 2! , c 3 = − c1 3! , c 4 = (−1) 2 c0 4! , c 5 = (−1) 2 c1 5! , . . . . . . c 2n = (−1) n c0 (2n)! , c 2n+1 = (−1) n c1 (2n+1)! . Here, c 0 and c 1 are arbitrary. So, y = c 0 ¸ n=0 (−1) n x 2n (2n)! +c 1 ¸ n=0 (−1) n x 2n+1 (2n + 1)! or y = c 0 cos(x) + c 1 sin(x) where c 0 and c 1 can be chosen arbitrarily. For c 0 = 1 and c 1 = 0, we get y = cos(x). That is, cos(x) is a solution of the Equation (9.2.3). 
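The coefficient recurrence c_{n+2} = -c_n/((n+1)(n+2)) can also be generated mechanically and compared against the Taylor series of cos x. A short sketch (Python with sympy; the library choice is an editorial assumption):

import sympy as sp

N = 10
c = [sp.Integer(1), sp.Integer(0)]            # c0 = 1, c1 = 0, i.e. the cos(x) branch
for n in range(N - 1):
    c.append(-c[n] / ((n + 1)*(n + 2)))       # c_{n+2} = -c_n / ((n+1)(n+2))

x = sp.symbols('x')
series_y = sum(c[n]*x**n for n in range(len(c)))
cos_series = sp.series(sp.cos(x), x, 0, len(c)).removeO()
print(sp.simplify(series_y - cos_series))     # 0: the two truncated series agree

Choosing c0 = 0 and c1 = 1 instead reproduces the truncated series of sin x in the same way.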
Similarly, y = sin(x) is also a solution of Equation (9.2.3). Exercise 9.2.2 Assuming that the solutions y of the following differential equations admit power series representation, find y in terms of a power series. 1. y = −y, (centre at x 0 = 0). 2. y = 1 +y 2 , (centre at x 0 = 0). 3. Find two linearly independent solutions of (a) y −y = 0, (centre at x 0 = 0). (b) y + 4y = 0, (centre at x 0 = 0). 9.3 Statement of Frobenius Theorem for Regular (Ordinary) Point Earlier, we saw a few properties of a power series and some uses also. Presently, we inquire the question, namely, whether an equation of the form y +a(x)y +b(x)y = f(x), x ∈ I (9.3.1) admits a solution y which has a power series representation around x ∈ I. In other words, we are interested in looking into an existence of a power series solution of (9.3.1) under certain conditions on a(x), b(x) and f(x). The following is one such result. We omit its proof. 9.4. LEGENDRE EQUATIONS AND LEGENDRE POLYNOMIALS 183 Theorem 9.3.1 Let a(x), b(x) and f(x) admit a power series representation around a point x = x 0 ∈ I, with non-zero radius of convergence r 1 , r 2 and r 3 , respectively. Let R = min¦r 1 , r 2 , r 3 ¦. Then the Equation (9.3.1) has a solution y which has a power series representation around x 0 Remark 9.3.2 We remind the readers that Theorem 9.3.1 is true for Equations (9.3.1), whenever the coefficient of y is 1. Secondly, a point x 0 is called an ordinary point for (9.3.1) if a(x), b(x) and f(x) admit power series expansion (with non-zero radius of convergence) around x = x 0 . x 0 is called a singular point for (9.3.1) if x 0 is not an ordinary point for (9.3.1). The following are some examples for illustration of the utility of Theorem 9.3.1. Exercise 9.3.3 1. Examine whether the given point x 0 is an ordinary point or a singular point for the following differential equations. (a) (x −1)y + sin xy = 0, x 0 = 0. (b) y + sin x x−1 y = 0, x 0 = 0. (c) Find two linearly independent solutions of (d) (1 −x 2 )y −2xy +n(n + 1)y = 0, x 0 = 0, n is a real constant. 2. Show that the following equations admit power series solutions around a given x 0 . Also, find the power series solutions if it exists. (a) y +y = 0, x 0 = 0. (b) xy +y = 0, x 0 = 0. (c) y + 9y = 0, x 0 = 0. 9.4 Legendre Equations and Legendre Polynomials 9.4.1 Introduction Legendre Equation plays a vital role in many problems of mathematical Physics and in the theory of quadratures (as applied to Numerical Integration). Definition 9.4.1 The equation (1 −x 2 )y −2xy +p(p + 1)y = 0, −1 < x < 1 (9.4.1) where p ∈ R, is called a Legendre Equation of order p. Equation (9.4.1) was studied by Legendre and hence the name Legendre Equation. Equation (9.4.1) may be rewritten as y 2x (1 −x 2 ) y + p(p + 1) (1 −x 2 ) y = 0. The functions 2x 1 −x 2 and p(p + 1) 1 −x 2 are analytic around x 0 = 0 (since they have power series expressions with centre at x 0 = 0 and with R = 1 as the radius of convergence). By Theorem 9.3.1, a solution y of (9.4.1) admits a power series solution (with centre at x 0 = 0) with radius of convergence R = 1. Let us 184 CHAPTER 9. SOLUTIONS BASED ON POWER SERIES assume that y = ¸ k=0 a k x k is a solution of (9.4.1). We have to find the value of a k ’s. Substituting the expression for y = ¸ k=0 ka k x k−1 and y = ¸ k=0 k(k −1)a k x k−2 in Equation (9.4.1), we get ¸ k=0 ¦(k + 1)(k + 2)a k+2 +a k (p −k)(p + k + 1)¦ x k = 0. Hence, for k = 0, 1, 2, . . . a k+2 = − (p − k)(p +k + 1) (k + 1)(k + 2) a k . It now follows that a 2 = − p(p+1) 2! 
a 0 , a 3 = − (p−1)(p+2) 3! a 1 , a 4 = − (p−2)(p+3) 3·4 a 2 a 5 = (−1) 2 (p−1)(p−3)(p+2)(p+4) 5! a 1 = (−1) 2 p(p−2)(p+1)(p+3) 4! a 0 , etc. In general, a 2m = (−1) m p(p −2) (p −2m+ 2)(p + 1)(p + 3) (p + 2m−1) (2m)! a 0 and a 2m+1 = (−1) m (p −1)(p −3) (p −2m+ 1)(p + 2)(p + 4) (p + 2m) (2m+ 1)! a 1 . It turns out that both a 0 and a 1 are arbitrary. So, by choosing a 0 = 1, a 1 = 0 and a 0 = 0, a 1 = 1 in the above expressions, we have the following two solutions of the Legendre Equation (9.4.1), namely, y 1 = 1 − p(p + 1) 2! x 2 + + (−1) m (p −2m+ 2) (p + 2m−1) (2m)! x 2m + (9.4.2) and y 2 = x − (p −1)(p + 2) 3! x 3 + + (−1) m (p −2m+ 1) (p + 2m) (2m+ 1)! x 2m+1 + . (9.4.3) Remark 9.4.2 y 1 and y 2 are two linearly independent solutions of the Legendre Equation (9.4.1). It now follows that the general solution of (9.4.1) is y = c 1 y 1 +c 2 y 2 (9.4.4) where c 1 and c 2 are arbitrary real numbers. 9.4.2 Legendre Polynomials In many problems, the real number p, appearing in the Legendre Equation (9.4.1), is a non-negative integer. Suppose p = n is a non-negative integer. Recall a k+2 = − (n −k)(n +k + 1) (k + 1)(k + 2) a k , k = 0, 1, 2, . . . . (9.4.5) Therefore, when k = n, we get a n+2 = a n+4 = = a n+2m = = 0 for all positive integer m. Case 1: Let n be a positive even integer. Then y 1 in Equation (9.4.2) is a polynomial of degree n. In fact, y 1 is an even polynomial in the sense that the terms of y 1 are even powers of x and hence y 1 (−x) = y 1 (x). 9.4. LEGENDRE EQUATIONS AND LEGENDRE POLYNOMIALS 185 Case 2: Now, let n be a positive odd integer. Then y 2 (x) in Equation (9.4.3) is a polynomial of degree n. In this case, y 2 is an odd polynomial in the sense that the terms of y 2 are odd powers of x and hence y 2 (−x) = −y 2 (x). In either case, we have a polynomial solution for Equation (9.4.1). Definition 9.4.3 A polynomial solution P n (x) of (9.4.1) is called a Legendre Polynomial whenever P n (1) = 1. Fix a positive integer n and consider P n (x) = a 0 + a 1 x + + a n x n . Then it can be checked that P n (1) = 1 if we choose a n = (2n)! 2 n (n!) 2 = 1 3 5 (2n − 1) n! . Using the recurrence relation, we have a n−2 = − (n −1)n 2(2n −1) a n = − (2n −2)! 2 n (n −1)!(n −2)! by the choice of a n . In general, if n −2m ≥ 0, then a n−2m = (−1) m (2n −2m)! 2 n m!(n −m)!(n −2m)! . Hence, M ¸ m=0 (−1) m (2n −2m)! 2 n m!(n −m)!(n −2m)! x n−2m , (9.4.6) where M = n 2 when n is even and M = n −1 2 when n is odd. Proposition 9.4.4 Let p = n be a non-negative even integer. Then any polynomial solution y of (9.4.1) which has only even powers of x is a multiple of P n (x). Similarly, if p = n is a non-negative odd integer, then any polynomial solution y of (9.4.1) which has only odd powers of x is a multiple of P n (x). Proof. Suppose that n is a non-negative even integer. Let y be a polynomial solution of (9.4.1). By (9.4.4) y = c 1 y 1 +c 2 y 2 , where y 1 is a polynomial of degree n (with even powers of x) and y 2 is a power series solution with odd powers only. Since y is a polynomial, we have c 2 = 0 or y = c 1 y 1 with c 1 = 0. Similarly, P n (x) = c 1 y 1 with c 1 = 0. which implies that y is a multiple of P n (x). A similar proof holds when n is an odd positive integer. We have an alternate way of evaluating P n (x). They are used later for the orthogonality properties of the Legendre polynomials, P n (x)’s. Theorem 9.4.5 (Rodrigu˙ es Formula) The Legendre polynomials P n (x) for n = 1, 2, . . . , are given by P n (x) = 1 2 n n! d n dx n (x 2 −1) n . (9.4.7) Proof. 
Let V (x) = (x 2 −1) n . Then d dx V (x) = 2nx(x 2 −1) n−1 or (x 2 −1) d dx V (x) = 2nx(x 2 −1) n = 2nxV (x). 186 CHAPTER 9. SOLUTIONS BASED ON POWER SERIES Now differentiating (n + 1) times (by the use of the Leibniz rule for differentiation), we get (x 2 −1) d n+2 dx n+2 V (x) + 2(n + 1)x d n+1 dx n+1 V (x) + 2n(n + 1) 1 2 d n dx n V (x) − 2nx d n+1 dx n+1 V (x) −2n(n + 1) d n dx n V (x) = 0. By denoting, U(x) = d n dx n V (x), we have (x 2 −1)U +U ¦2(n + 1)x −2nx¦ +U¦n(n + 1) −2n(n + 1)¦ = 0 or (1 −x 2 )U −2xU +n(n + 1)U = 0. This tells us that U(x) is a solution of the Legendre Equation (9.4.1). So, by Proposition 9.4.4, we have P n (x) = αU(x) = α d n dx n (x 2 −1) n for some α ∈ R. Also, let us note that d n dx n (x 2 −1) n = d n dx n ¦(x −1)(x + 1)¦ n = n!(x + 1) n + terms containing a factor of (x −1). Therefore, d n dx n (x 2 −1) n x=1 = 2 n n! or, equivalently 1 2 n n! d n dx n (x 2 − 1) n x=1 = 1 and thus P n (x) = 1 2 n n! d n dx n (x 2 −1) n . Example 9.4.6 1. When n = 0, P 0 (x) = 1. 2. When n = 1, P 1 (x) = 1 2 d dx (x 2 −1) = x. 3. When n = 2, P 2 (x) = 1 2 2 2! d 2 dx 2 (x 2 −1) 2 = 1 8 ¦12x 2 −4¦ = 3 2 x 2 1 2 . One may observe that the Rodrigu˙ es formula is very useful in the computation of P n (x) for “small” values of n. Theorem 9.4.7 Let P n (x) denote, as usual, the Legendre Polynomial of degree n. Then 1 −1 P n (x)P m (x) dx = 0 if m = n. (9.4.8) Proof. We know that the polynomials P n (x) and P m (x) satisfy (1 −x 2 )P n (x) +n(n + 1)P n (x) = 0 and (9.4.9) (1 −x 2 )P m (x) +m(m + 1)P m (x) = 0. (9.4.10) Multiplying Equation (9.4.9) by P m (x) and Equation (9.4.10) by P n (x) and subtracting, we get n(n + 1) −m(m + 1) P n (x)P m (x) = (1 −x 2 )P m (x) P n (x) − (1 −x 2 )P n (x) P m (x). 9.4. LEGENDRE EQUATIONS AND LEGENDRE POLYNOMIALS 187 Therefore, n(n + 1) − m(m+ 1) 1 −1 P n (x)P m (x)dx = 1 −1 (1 −x 2 )P m (x) P n (x) − (1 −x 2 )P n (x) P m (x) dx = − 1 −1 (1 −x 2 )P m (x)P n (x)dx + (1 −x 2 )P m (x)P n (x) x=1 x=−1 + 1 −1 (1 −x 2 )P n (x)P m (x)dx + (1 −x 2 )P n (x)P m (x) x=1 x=−1 = 0. Since n = m, n(n + 1) = m(m + 1) and therefore, we have 1 −1 P n (x)P m (x) dx = 0 if m = n. Theorem 9.4.8 For n = 0, 1, 2, . . . 1 −1 P 2 n (x) dx = 2 2n + 1 . (9.4.11) Proof. Let us write V (x) = (x 2 −1) n . By the Rodrigue’s formula, we have 1 −1 P 2 n (x) dx = 1 −1 1 n!2 n 2 d n dx n V (x) d n dx n V (x)dx. Let us call I = 1 −1 d n dx n V (x) d n dx n V (x)dx. Note that for 0 ≤ m < n, d m dx m V (−1) = d m dx m V (1) = 0. (9.4.12) Therefore, integrating I by parts and using (9.4.12) at each step, we get I = 1 −1 d 2n dx 2n V (x) (−1) n V (x)dx = (2n)! 1 −1 (1 −x 2 ) n dx = (2n)! 2 1 0 (1 −x 2 ) n dx. Now substitute x = cos θ and use the value of the integral π 2 0 sin 2n θ dθ, to get the required result. We now state an important expansion theorem. The proof is beyond the scope of this book. Theorem 9.4.9 Let f(x) be a real valued continuous function defined in [−1, 1]. Then f(x) = ¸ n=0 a n P n (x), x ∈ [−1, 1] where a n = 2n + 1 2 1 −1 f(x)P n (x)dx. Legendre polynomials can also be generated by a suitable function. To do that, we state the following result without proof. 188 CHAPTER 9. SOLUTIONS BASED ON POWER SERIES Theorem 9.4.10 Let P n (x) be the Legendre polynomial of degree n. Then 1 1 −2xt +t 2 = ¸ n=0 P n (x)t n , t = 1. (9.4.13) The function h(t) = 1 1 −2xt +t 2 admits a power series expansion in t (for small t) and the coefficient of t n in P n (x). The function h(t) is called the generating function for the Legendre polynomials. Exercise 9.4.11 1. 
By using the Rodrigue’s formula, find P 0 (x), P 1 (x) and P 2 (x). 2. Use the generating function (9.4.13) (a) to find P 0 (x), P 1 (x) and P 2 (x). (b) to show that P n (x) is an odd function whenever n is odd and is an even function whenevern is even. Using the generating function (9.4.13), we can establish the following relations: (n + 1)P n+1 (x) = (2n + 1) x P n (x) −n P n−1 (x) (9.4.14) nP n (x) = xP n (x) −P n−1 (x) (9.4.15) P n+1 (x) = xP n (x) + (n + 1)P n (x). (9.4.16) The relations (9.4.14), (9.4.15) and (9.4.16) are called recurrence relations for the Legendre polyno- mials, P n (x). The relation (9.4.14) is also known as Bonnet’s recurrence relation. We will now give the proof of (9.4.14) using (9.4.13). The readers are required to proof the other two recurrence relations. Differentiating the generating function (9.4.13) with respect to t (keeping the variable x fixed), we get 1 2 (1 −2xt +t 2 ) 3 2 (−2x + 2t) = ¸ n=0 nP n (x)t n−1 . Or equivalently, (x −t)(1 −2xt +t 2 ) 1 2 = (1 −2xt +t 2 ) ¸ n=0 nP n (x)t n−1 . We now substitute ¸ n=0 P n (x)t n in the left hand side for (1 −2xt +t 2 ) 1 2 , to get (x −t) ¸ n=0 P n (x)t n = (1 −2xt +t 2 ) ¸ n=0 nP n (x)t n−1 . The two sides and power series in t and therefore, comparing the coefficient of t n , we get xP n (x) − P n−1 (x) = (n + 1)P n (x) + (n −1)P n−1 (x) −2n x P n (x). This is clearly same as (9.4.14). To prove (9.4.15), one needs to differentiate the generating function with respect to x (keeping t fixed) and doing a similar simplification. Now, use the relations (9.4.14) and (9.4.15) to get the relation (9.4.16). These relations will be helpful in solving the problems given below. Exercise 9.4.12 1. Find a polynomial solution y(x) of (1 −x 2 )y −2xy +20y = 0 such that y(1) = 10. 2. Prove the following: 9.4. LEGENDRE EQUATIONS AND LEGENDRE POLYNOMIALS 189 (a) 1 −1 P m (x)dx = 0 for all positive integers m ≥ 1. (b) 1 −1 x 2n+1 P 2m (x)dx = 0 whenever m and n are positive integers with m = n. (c) 1 −1 x m P n (x)dx = 0 whenever m and n are positive integers with m < n. 3. Show that P n (1) = n(n + 1) 2 and P n (−1) = (−1) n−1 n(n + 1) 2 . 4. Establish the following recurrence relations. (a) (n + 1)P n (x) = P n+1 (x) −xP n (x). (b) (1 −x 2 )P n (x) = n P n−1 (x) −xP n (x) . 190 CHAPTER 9. SOLUTIONS BASED ON POWER SERIES Part III Laplace Transform 191 Chapter 10 Laplace Transform 10.1 Introduction In many problems, a function f(t), t ∈ [a, b] is transformed to another function F(s) through a relation of the type: F(s) = b a K(t, s)f(t)dt where K(t, s) is a known function. Here, F(s) is called integral transform of f(t). Thus, an integral transform sends a given function f(t) into another function F(s). This transformation of f(t) into F(s) provides a method to tackle a problem more readily. In some cases, it affords solutions to otherwise difficult problems. In view of this, the integral transforms find numerous applications in engineering problems. Laplace transform is a particular case of integral transform (where f(t) is defined on [0, ∞) and K(s, t) = e −st ). As we will see in the following, application of Laplace transform reduces a linear differential equation with constant coefficients to an algebraic equation, which can be solved by algebraic methods. Thus, it provides a powerful tool to solve differential equations. It is important to note here that there is some sort of analogy with what we had learnt during the study of logarithms in school. 
That is, to multiply two numbers, we first calculate their logarithms, add them and then use the table of antilogarithm to get back the original product. In a similar way, we first transform the problem that was posed as a function of f(t) to a problem in F(s), make some calculations and then use the table of inverse Laplace transform to get the solution of the actual problem. In this chapter, we shall see same properties of Laplace transform and its applications in solving differential equations. 10.2 Definitions and Examples Definition 10.2.1 (Piece-wise Continuous Function) 1. A function f(t) is said to be a piece-wise con- tinuous function on a closed interval [a, b] ⊂ R, if there exists finite number of points a = t 0 < t 1 < t 2 < < t N = b such that f(t) is continuous in each of the intervals (t i−1 , t i ) for 1 ≤ i ≤ N and has finite limits as t approaches the end points, see the Figure 10.1. 2. A function f(t) is said to be a piece-wise continuous function for t ≥ 0, if f(t) is a piece-wise continuous function on every closed interval [a, b] ⊂ [0, ∞). For example, see Figure 10.1. 193 194 CHAPTER 10. LAPLACE TRANSFORM Figure 10.1: Piecewise Continuous Function Definition 10.2.2 (Laplace Transform) Let f : [0, ∞) −→ R and s ∈ R. Then F(s), for s ∈ R is called the Laplace transform of f(t), and is defined by L(f(t)) = F(s) = 0 f(t)e −st dt whenever the integral exists. (Recall that 0 g(t)dt exists if lim b−→∞ b 0 g(t)d(t) exists and we define 0 g(t)dt = lim b−→∞ b 0 g(t)d(t).) Remark 10.2.3 1. Let f(t) be an exponentially bounded function, i.e., [f(t)[ ≤ Me αt for all t > 0 and for some real numbers α and M with M > 0. Then the Laplace transform of f exists. 2. Suppose F(s) exists for some function f. Then by definition, lim b−→∞ b 0 f(t)e −st dt exists. Now, one can use the theory of improper integrals to conclude that lim s−→∞ F(s) = 0. Hence, a function F(s) satisfying lim s−→∞ F(s) does not exist or lim s−→∞ F(s) = 0, cannot be a Laplace transform of a function f. Definition 10.2.4 (Inverse Laplace Transform) Let L(f(t)) = F(s). That is, F(s) is the Laplace trans- form of the function f(t). Then f(t) is called the inverse Laplace transform of F(s). In that case, we write f(t) = L −1 (F(s)). 10.2.1 Examples Example 10.2.5 1. Find F(s) = L(f(t)), where f(t) = 1, t ≥ 0. Solution: F(s) = 0 e −st dt = lim b−→∞ e −st −s b 0 = 1 s − lim b−→∞ e −sb s . Note that if s > 0, then lim b−→∞ e −sb s = 0. 10.2. DEFINITIONS AND EXAMPLES 195 Thus, F(s) = 1 s , for s > 0. In the remaining part of this chapter, whenever the improper integral is calculated, we will not explicitly write the limiting process. However, the students are advised to provide the details. 2. Find the Laplace transform F(s) of f(t), where f(t) = t, t ≥ 0. Solution: Integration by parts gives F(s) = 0 te −st dt = −te −st s 0 + 0 e −st s dt = 1 s 2 for s > 0. 3. Find the Laplace transform of f(t) = t n , n a positive integer. Solution: Substituting st = τ, we get F(s) = 0 e −st t n dt = 1 s n+1 0 e −τ τ n = n! s n+1 for s > 0. 4. Find the Laplace transform of f(t) = e at , t ≥ 0. Solution: We have L(e at ) = 0 e at e −st dt = 0 e −(s−a)t dt = 1 s −a for s > a. 5. Compute the Laplace transform of cos(at), t ≥ 0. Solution: L(cos(at)) = 0 cos(at)e −st dt = cos(at) e −st −s 0 0 −a sin(at) e −st −s dt = 1 s a sin(at) s e −st −s 0 0 a 2 cos(at) s e −st −s dt Note that the limits exist only when s > 0. Hence, a 2 +s 2 s 2 0 cos(at)e −st dt = 1 s . Thus L(cos(at)) = s a 2 +s 2 ; s > 0. 6. 
Similarly, one can show that L(sin(at)) = a s 2 +a 2 , s > 0. 7. Find the Laplace transform of f(t) = 1 t , t > 0. Solution: Note that f(t) is not a bounded function near t = 0 (why!). We will still show that the 196 CHAPTER 10. LAPLACE TRANSFORM Laplace transform of f(t) exists. L( 1 t ) = 0 1 t e −st dt = 0 s τ e −τ s ( substitute τ = st) = 1 s 0 τ 1 2 e −τ dτ = 1 s 0 τ 1 2 −1 e −τ dτ. Recall that for calculating the integral 0 τ 1 2 −1 e −τ dτ, one needs to consider the double integral 0 0 e −(x 2 +y 2 ) dxdy = 0 e −x 2 dx 2 = 1 2 0 τ 1 2 −1 e −τ 2 . It turns out that 0 τ 1 2 −1 e −τ dτ = π. Thus, L( 1 t ) = π s for s > 0. We now put the above discussed examples in tabular form as they constantly appear in applications of Laplace transform to differential equations. f(t) L(f(t)) f(t) L(f(t)) 1 1 s , s > 0 t 1 s 2 , s > 0 t n n! s n+1 , s > 0 e at 1 s −a , s > a sin(at) a s 2 +a 2 , s > 0 cos(at) s s 2 +a 2 , s > 0 sinh(at) a s 2 −a 2 , s > a cosh(at) s s 2 −a 2 , s > a Table 10.1: Laplace transform of some Elementary Functions 10.3 Properties of Laplace Transform Lemma 10.3.1 (Linearity of Laplace Transform) 1. Let a, b ∈ R. Then L af(t) +bg(t) = 0 af(t) +bg(t) e −st dt = aL(f(t)) +bL(g(t)). 2. If F(s) = L(f(t)), and G(s) = L(g(t)), then L −1 aF(s) +bG(s) = af(t) +bg(t). The above lemma is immediate from the definition of Laplace transform and the linearity of the definite integral. Example 10.3.2 1. Find the Laplace transform of cosh(at). Solution: cosh(at) = e at +e −at 2 . Thus L(cosh(at)) = 1 2 1 s −a + 1 s +a = s s 2 −a 2 , s > [a[. 10.3. PROPERTIES OF LAPLACE TRANSFORM 197 a 2a 1 1 a 2a 1 a Figure 10.2: f(t) 2. Similarly, L(sinh(at)) = 1 2 1 s −a 1 s +a = a s 2 −a 2 , s > [a[. 3. Find the inverse Laplace transform of 1 s(s + 1) . Solution: L −1 1 s(s + 1) = L −1 1 s 1 s + 1 = L −1 1 s −L −1 1 s + 1 = 1 −e −t . Thus, the inverse Laplace transform of 1 s(s + 1) is f(t) = 1 −e −t . Theorem 10.3.3 (Scaling by a) Let f(t) be a piecewise continuous function with Laplace transform F(s). Then for a > 0, L(f(at)) = 1 a F( s a ). Proof. By definition and the substitution z = at, we get L(f(at)) = 0 e −st f(at)dt = 1 a 0 e −s z a f(z)dz = 1 a 0 e s a z f(z)dz = 1 a F( s a ). Exercise 10.3.4 1. Find the Laplace transform of t 2 +at +b, cos(wt +θ), cos 2 t, sinh 2 t; where a, b, w and θ are arbitrary constants. 2. Find the Laplace transform of the function f() given by the graphs in Figure 10.2. 3. If L(f(t)) = 1 s 2 + 1 + 1 2s + 1 , find f(t). The next theorem relates the Laplace transform of the function f (t) with that of f(t). Theorem 10.3.5 (Laplace Transform of Differentiable Functions) Let f(t), for t > 0, be a differentiable function with the derivative, f (t), being continuous. Suppose that there exist constants M and T such that [f(t)[ ≤ Me αt for all t ≥ T. If L(f(t)) = F(s) then L(f (t)) = sF(s) −f(0) for s > α. (10.3.1) 198 CHAPTER 10. LAPLACE TRANSFORM Proof. Note that the condition [f(t)[ ≤ Me αt for all t ≥ T implies that lim b−→∞ f(b)e −sb = 0 for s > α. So, by definition, L f (t) = 0 e −st f (t)dt = lim b−→∞ b 0 e −st f (t)dt = lim b−→∞ f(t)e −st b 0 − lim b−→∞ b 0 f(t)(−s)e −st dt = −f(0) +sF(s). We can extend the above result for n th derivative of a function f(t), if f (t), . . . , f (n−1) (t), f (n) (t) exist and f (n) (t) is continuous for t ≥ 0. In this case, a repeated use of Theorem 10.3.5, gives the following corollary. Corollary 10.3.6 Let f(t) be a function with L(f(t)) = F(s). If f (t), . . . 
, f (n−1) (t), f (n) (t) exist and f (n) (t) is continuous for t ≥ 0, then L f (n) (t) = s n F(s) −s n−1 f(0) −s n−2 f (0) − −f (n−1) (0). (10.3.2) In particular, for n = 2, we have L f (t) = s 2 F(s) −sf(0) −f (0). (10.3.3) Corollary 10.3.7 Let f (t) be a piecewise continuous function for t ≥ 0. Also, let f(0) = 0. Then L(f (t)) = sF(s) or equivalently L −1 (sF(s)) = f (t). Example 10.3.8 1. Find the inverse Laplace transform of s s 2 + 1 . Solution: We know that L −1 ( 1 s 2 + 1 ) = sint. Then sin(0) = 0 and therefore, L −1 ( s s 2 + 1 ) = cos t. 2. Find the Laplace transform of f(t) = cos 2 (t). Solution: Note that f(0) = 1 and f (t) = −2 cos t sin t = −sin(2t). Also, L(−sin(2t)) = −2 s 2 + 4 . Now, using Theorem 10.3.5, we get L(f(t)) = 1 s 2 s 2 + 4 + 1 = s 2 + 2 s(s 2 + 4) . Lemma 10.3.9 (Laplace Transform of tf(t)) Let f(t) be a piecewise continuous function with L(f(t)) = F(s). If the function F(s) is differentiable, then L(tf(t)) = − d ds F(s). Equivalently, L −1 (− d ds F(s)) = tf(t). 10.3. PROPERTIES OF LAPLACE TRANSFORM 199 Proof. By definition, F(s) = 0 e −st f(t)dt. The result is obtained by differentiating both sides with respect to s. Suppose we know the Laplace transform of a f(t) and we wish to find the Laplace transform of the function g(t) = f(t) t . Suppose that G(s) = L(g(t)) exists. Then writing f(t) = tg(t) gives F(s) = L(f(t)) = L(tg(t)) = − d ds G(s). Thus, G(s) = − s a F(p)dp for some real number a. As lim s−→∞ G(s) = 0, we get G(s) = s F(p)dp. Hence,we have the following corollary. Corollary 10.3.10 Let L(f(t)) = F(s) and g(t) = f(t) t . Then L(g(t)) = G(s) = s F(p)dp. Example 10.3.11 1. Find L(t sin(at)). Solution: We know L(sin(at)) = a s 2 +a 2 . Hence L(t sin(at)) = 2as (s 2 + a 2 ) 2 . 2. Find the function f(t) such that F(s) = 4 (s −1) 3 . Solution: We know L(e t ) = 1 s −1 and 4 (s −1) 3 = 2 d ds 1 (s −1) 2 = 2 d 2 ds 2 1 s −1 . By lemma 10.3.9, we know that L(tf(t)) = − d ds F(s). Suppose d ds F(s) = G(s). Then g(t) = L −1 G(s) = L −1 d ds F(s) = −tf(t). Therefore, L −1 d 2 ds 2 F(s) = L −1 d ds G(s) = −tg(t) = t 2 f(t). Thus we get f(t) = 2t 2 e t . Lemma 10.3.12 (Laplace Transform of an Integral) If F(s) = L(f(t)) then L ¸ t 0 f(τ)dτ = F(s) s . Equivalently, L −1 F(s) s = t 0 f(τ)dτ. Proof. By definition, L t 0 f(τ) dτ = 0 e −st t 0 f(τ) dτ dt = 0 t 0 e −st f(τ) dτdt. We don’t go into the details of the proof of the change in the order of integration. We assume that the order of the integrations can be changed and therefore 0 t 0 e −st f(τ) dτdt = 0 τ e −st f(τ) dt dτ. 200 CHAPTER 10. LAPLACE TRANSFORM Thus, L t 0 f(τ) dτ = 0 t 0 e −st f(τ) dτdt = 0 τ e −st f(τ) dt dτ = 0 τ e −s(t−τ)−sτ f(τ) dt dτ = 0 e −sτ f(τ)dτ τ e −s(t−τ) dt = 0 e −sτ f(τ)dτ 0 e −sz dz = F(s) 1 s . Example 10.3.13 1. Find L( t 0 sin(az)dz). Solution: We know L(sin(at)) = a s 2 +a 2 . Hence L( t 0 sin(az)dz) = 1 s a (s 2 +a 2 ) = a s(s 2 +a 2 ) . 2. Find L t 0 τ 2 . Solution: By Lemma 10.3.12 L t 0 τ 2 = L t 2 s = 1 s 2! s 3 = 2 s 4 . 3. Find the function f(t) such that F(s) = 4 s(s −1) . Solution: We know L(e t ) = 1 s −1 . So, L −1 4 s(s −1) = 4L −1 1 s 1 s −1 = 4 t 0 e τ dτ = 4(e t −1). Lemma 10.3.14 (s-Shifting) Let L(f(t)) = F(s). Then L(e at f(t)) = F(s −a) for s > a. Proof. L(e at f(t)) = 0 e at f(t)e −st dt = 0 f(t)e −(s−a)t dt = F(s −a) s > a. Example 10.3.15 1. Find L(e at sin(bt)). Solution: We know L(sin(bt)) = b s 2 +b 2 . Hence L(e at sin(bt)) = b (s −a) 2 +b 2 . 2. Find L −1 s−5 (s−5) 2 +36 . Solution: By s-Shifting, if L(f(t)) = F(s) then L(e at f(t)) = F(s −a). 
Here, a = 5 and L −1 s s 2 + 36 = L −1 s s 2 + 6 2 = cos(6t). Hence, f(t) = e 5t cos(6t). 10.3. PROPERTIES OF LAPLACE TRANSFORM 201 10.3.1 Inverse Transforms of Rational Functions Let F(s) be a rational function of s. We give a few examples to explain the methods for calculating the inverse Laplace transform of F(s). Example 10.3.16 1. Denominator of F has Distinct Real Roots: If F(s) = (s + 1)(s + 3) s(s + 2)(s + 8) find f(t). Solution: F(s) = 3 16s + 1 12(s + 2) + 35 48(s + 8) . Thus, f(t) = 3 16 + 1 12 e −2t + 35 48 e −8t . 2. Denominator of F has Distinct Complex Roots: If F(s) = 4s + 3 s 2 + 2s + 5 find f(t). Solution: F(s) = 4 s + 1 (s + 1) 2 + 2 2 1 2 2 (s + 1) 2 + 2 2 . Thus, f(t) = 4e −t cos(2t) − 1 2 e −t sin(2t). 3. Denominator of F has Repeated Real Roots: If F(s) = 3s + 4 (s + 1)(s 2 + 4s + 4) find f(t). Solution: Here, F(s) = 3s + 4 (s + 1)(s 2 + 4s + 4) = 3s + 4 (s + 1)(s + 2) 2 = a s + 1 + b s + 2 + c (s + 2) 2 . Solving for a, b and c, we get F(s) = 1 s+1 1 s+2 + 2 (s+2) 2 = 1 s+1 1 s+2 + 2 d ds 1 (s+2) . Thus, f(t) = e −t −e −2t + 2te −2t . 10.3.2 Transform of Unit Step Function Definition 10.3.17 (Unit Step Function) The Unit-Step function is defined by U a (t) = 0 if 0 ≤ t < a 1 if t ≥ a . Example 10.3.18 L U a (t) = a e −st dt = e −sa s , s > 0. Lemma 10.3.19 (t-Shifting) Let L(f(t)) = F(s). Define g(t) by g(t) = 0 if 0 ≤ t < a f(t −a) if t ≥ a . Then g(t) = U a (t)f(t −a) and L g(t) = e −as F(s). 202 CHAPTER 10. LAPLACE TRANSFORM a f(t) c d d+a c g(t) Figure 10.3: Graphs of f(t) and U a (t)f(t −a) Proof. Let 0 ≤ t < a. Then U a (t) = 0 and so, U a (t)f(t −a) = 0 = g(t). If t ≥ a, then U a (t) = 1 and U a (t)f(t −a) = f(t −a) = g(t). Since the functions g(t) and U a (t)f(t −a) take the same value for all t ≥ 0, we have g(t) = U a (t)f(t −a). Thus, L(g(t)) = 0 e −st g(t)dt = a e −st f(t −a)dt = 0 e −s(t+a) f(t)dt = e −as 0 e −st f(t)dt = e −as F(s). Example 10.3.20 Find L −1 e −5s s 2 −4s−5 . Solution: Let G(s) = e −5s s 2 −4s−5 = e −5s F(s), with F(s) = 1 s 2 −4s−5 . Since s 2 −4s −5 = (s −2) 2 −3 2 L −1 (F(s)) = L −1 1 3 3 (s −2) 2 −3 2 = 1 3 sinh(3t)e 2t . Hence, by Lemma 10.3.19 L −1 (G(s)) = 1 3 U 5 (t) sinh 3(t −5) e 2(t−5) . Example 10.3.21 Find L(f(t)), where f(t) = 0 t < 2π t cos t t > 2π. Solution: Note that f(t) = 0 t < 2π (t −2π) cos(t −2π) + 2π cos(t −2π) t > 2π. Thus, L(f(t)) = e −2πs s 2 −1 (s 2 + 1) 2 + 2π s s 2 + 1 Note: To be filled by a graph 10.4 Some Useful Results 10.4.1 Limiting Theorems The following two theorems give us the behaviour of the function f(t) when t −→0 + and when t −→∞. 10.4. SOME USEFUL RESULTS 203 Theorem 10.4.1 (First Limit Theorem) Suppose L(f(t)) exists. Then lim t−→0 + f(t) = lim s−→∞ sF(s). Proof. We know sF(s) −f(0) = L(f (t)) . Therefore lim s−→∞ sF(s) = f(0) + lim s−→∞ 0 e −st f (t)dt = f(0) + 0 lim s−→∞ e −st f (t)dt = f(0). as lim s−→∞ e −st = 0. Example 10.4.2 1. For t ≥ 0, let Y (s) = L(y(t)) = a(1 +s 2 ) −1/2 . Determine a such that y(0) = 1. Solution: Theorem 10.4.1 implies 1 = lim s−→∞ sY (s) = lim s−→∞ as (1 +s 2 ) 1/2 = lim s−→∞ a ( 1 s 2 + 1) 1/2 . Thus, a = 1. 2. If F(s) = (s + 1)(s + 3) s(s + 2)(s + 8) find f(0 + ). Solution: Theorem 10.4.1 implies f(0 + ) = lim s−→∞ sF(s) = lim s−→∞ s (s + 1)(s + 3) s(s + 2)(s + 8) = 1. On similar lines, one has the following theorem. But this theorem is valid only when f(t) is bounded as t approaches infinity. Theorem 10.4.3 (Second Limit Theorem) Suppose L(f(t)) exists. Then lim t−→∞ f(t) = lim s−→0 sF(s) provided that sF(s) converges to a finite limit as s tends to 0. 
Proof. lim s−→0 sF(s) = f(0) + lim s−→0 0 e −st f (t)dt = f(0) + lim s−→0 lim t−→∞ t 0 e −sτ f (τ)dτ = f(0) + lim t−→∞ t 0 lim s−→0 e −sτ f (τ)dτ = lim t−→∞ f(t). Example 10.4.4 If F(s) = 2(s + 3) s(s + 2)(s + 8) find lim t−→∞ f(t). Solution: From Theorem 10.4.3, we have lim t−→∞ f(t) = lim s−→0 sF(s) = lim s−→0 s 2(s + 3) s(s + 2)(s + 8) = 6 16 = 3 8 . We now generalise the lemma on Laplace transform of an integral as convolution theorem. Definition 10.4.5 (Convolution of Functions) Let f(t) and g(t) be two smooth functions. The convolu- tion, f g, is a function defined by (f g)(t) = t 0 f(τ)g(t −τ)dτ. 204 CHAPTER 10. LAPLACE TRANSFORM Check that 1. (f g)(t) = g f(t). 2. If f(t) = cos(t) then (f f)(t) = t cos(t) + sin(t) 2 . Theorem 10.4.6 (Convolution Theorem) If F(s) = L(f(t)) and G(s) = L(g(t)) then L ¸ t 0 f(τ)g(t −τ)dτ = F(s) G(s). Remark 10.4.7 Let g(t) = 1 for all t ≥ 0. Then we know that L(g(t)) = G(s) = 1 s . Thus, the Convolution Theorem 10.4.6 reduces to the Integral Lemma 10.3.12. 10.5 Application to Differential Equations Consider the following example. Example 10.5.1 Solve the following Initial Value Problem: af (t) +bf (t) +cf(t) = g(t) with f(0) = f 0 , f (0) = f 1 . Solution: Let L(g(t)) = G(s). Then G(s) = a(s 2 F(s) −sf(0) −f (0)) +b(sF(s) −f(0)) +cF(s) and the initial conditions imply G(s) = (as 2 +bs +c)F(s) −(as +b)f 0 −af 1 . Hence, F(s) = G(s) as 2 +bs +c . .. . non−homogeneous part + (as +b)f 0 as 2 +bs +c + af 1 as 2 +bs +c . .. . initial conditions . (10.5.1) Now, if we know that G(s) is a rational function of s then we can compute f(t) from F(s) by using the method of partial fractions (see Subsection 10.3.1 ). Example 10.5.2 1. Solve the IVP y −4y −5y = f(t) = t if 0 ≤ t < 5 t + 5 if t ≥ 5 . with y(0) = 1 and y (0) = 4. Solution: Note that f(t) = t +U 5 (t). Thus, L(f(t)) = 1 s 2 + e −5s s . Taking Laplace transform of the above equation, we get s 2 Y (s) −sy(0) −y (0) −4 (sY (s) −y(0)) −5Y (s) = L(f(t)) = 1 s 2 + e −5s s . 10.5. APPLICATION TO DIFFERENTIAL EQUATIONS 205 Which gives Y (s) = s (s + 1)(s −5) + e −5s s(s + 1)(s −5) + 1 s 2 (s + 1)(s −5) = 1 6 ¸ 5 s −5 + 1 s + 1 + e −5s 30 ¸ 6 s + 5 s + 1 + 1 s −5 + 1 150 ¸ 30 s 2 + 24 s 25 s + 1 + 1 s −5 . Hence, y(t) = 5e 5t 6 + e −t 6 +U 5 (t) ¸ 1 5 + e −(t−5) 6 + e 5(t−5) 30 + 1 150 −30t + 24 −25e −t +e 5t . Remark 10.5.3 Even though f(t) is a discontinuous function at t = 5, the solution y(t) and y (t) are continuous functions of t, as y exists. In general, the following is always true: Let y(t) be a solution of ay +by +cy = f(t). Then both y(t) and y (t) are continuous functions of time. Example 10.5.4 1. Consider the IVP ty (t) + y (t) + ty(t) = 0, with y(0) = 1 and y (0) = 0. Find L(y(t)). Solution: Applying Laplace transform, we have d ds s 2 Y (s) −sy(0) −y (0) + (sY (s) −y(0)) − d ds Y (s) = 0. Using initial conditions, the above equation reduces to d ds (s 2 + 1)Y (s) −s − sY (s) + 1 = 0. This equation after simplification can be rewritten as Y (s) Y (s) = − s s 2 + 1 . Therefore, Y (s) = a(1 +s 2 ) 1 2 . From Example 10.4.2.1, we see that a = 1 and hence Y (s) = (1 +s 2 ) 1 2 . 2. Show that y(t) = t 0 f(τ)g(t −τ)dτ is a solution of y (t) +ay (t) +by(t) = f(t), with y(0) = y (0) = 0; where L[g(t)] = 1 s 2 +as +b . Solution: Here, Y (s) = F(s) s 2 +as +b = F(s) 1 s 2 +as +b . Hence, y(t) = (f g)(t) = t 0 f(τ)g(t −τ)dτ. 3. Show that y(t) = 1 a t 0 f(τ) sin(a(t −τ))dτ is a solution of y (t) +a 2 y(t) = f(t), with y(0) = y (0) = 0. 206 CHAPTER 10. 
LAPLACE TRANSFORM Solution: Here, Y (s) = F(s) s 2 +a 2 = 1 a F(s) a s 2 +a 2 . Hence, y(t) = 1 a f(t) sin(at) = 1 a t 0 f(τ) sin(a(t −τ))dτ. 4. Solve the following IVP. y (t) = t 0 y(τ)dτ +t −4 sint, with y(0) = 1. Solution: Taking Laplace transform of both sides and using Theorem 10.3.5, we get sY (s) −1 = Y (s) s + 1 s 2 −4 1 s 2 + 1 . Solving for Y (s), we get Y (s) = s 2 −1 s(s 2 + 1) = 1 s −2 1 s 2 + 1 . So, y(t) = 1 −2 t 0 sin(τ)dτ = 1 + 2(cos t −1) = 2 cos t −1. 10.6 Transform of the Unit-Impulse Function Consider the following example. Example 10.6.1 Find the Laplace transform, D h (s), of δ h (t) = 0 t < 0 1 h 0 ≤ t < h 0 t > h. Solution: Note that δ h (t) = 1 h U 0 (t) −U h (t) . By linearity of the Laplace transform, we get D h (s) = 1 h 1 −e −hs s . Remark 10.6.2 1. Observe that in Example 10.6.1, if we allow h to approach 0, we obtain a new function, say δ(t). That is, let δ(t) = lim h−→0 δ h (t). This new function is zero everywhere except at the origin. At origin, this function tends to infinity. In other words, the graph of the function appears as a line of infinite height at the origin. This new function, δ(t), is called the unit-impulse function (or Dirac’s delta function). 2. We can also write δ(t) = lim h−→0 δ h (t) = lim h−→0 1 h U 0 (t) −U h (t) . 3. In the strict mathematical sense lim h−→0 δ h (t) does not exist. Hence, mathematically speaking, δ(t) does not represent a function. 4. However, note that 0 δ h (t)dt = 1, for all h. 10.6. TRANSFORM OF THE UNIT-IMPULSE FUNCTION 207 5. Also, observe that L(δ h (t)) = 1 −e −hs hs . Now, if we take the limit of both sides, as h approaches zero (apply L’Hospital’s rule), we get L(δ(t)) = lim h−→0 1 −e −hs hs = lim h−→0 se −hs s = 1. 208 CHAPTER 10. LAPLACE TRANSFORM Part IV Numerical Applications 209 Chapter 11 Newton’s Interpolation Formulae 11.1 Introduction In many practical situations, for a function y = f(x), which either may not be explicitly specified or may be difficult to handle, we often have a tabulated data (x i , y i ), where y i = f(x i ), and x i < x i+1 for i = 0, 1, 2, . . . , N. In such cases, it may be required to represent or replace the given function by a simpler function, which coincides with the values of f at the N + 1 tabular points x i . This process is known as Interpolation. Interpolation is also used to estimate the value of the function at the non tabular points. Here, we shall consider only those functions which are sufficiently smooth, i.e., they are differentiable sufficient number of times. Many of the interpolation methods, where the tabular points are equally spaced, use difference operators. Hence, in the following we introduce various difference operators and study their properties before looking at the interpolation methods. We shall assume here that the tabular points x 0 , x 1 , x 2 , . . . , x N are equally spaced, i.e., x k x k−1 = h for each k = 1, 2, . . . , N. The real number h is called the step length. This gives us x k = x 0 + kh. Further, y k = f(x k ) gives the value of the function y = f(x) at the k th tabular point. The points y 1 , y 2 , . . . , y N are known as nodes or nodal values. 11.2 Difference Operator 11.2.1 Forward Difference Operator Definition 11.2.1 (First Forward Difference Operator) We define the forward difference opera- tor, denoted by ∆, as ∆f(x) = f(x +h) −f(x). The expression f(x + h) − f(x) gives the first forward difference of f(x) and the operator ∆ is called the first forward difference operator. 
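As a quick symbolic check of the operator just defined (it anticipates the hand computation in Example 11.2.7 further below), the forward difference of a quadratic can be computed directly. A minimal sketch in Python with sympy; the library choice is an editorial assumption, not part of the text.

import sympy as sp

x, h, a, b = sp.symbols('x h a b')
f = sp.Lambda(x, x**2 + a*x + b)

def delta(g):
    # first forward difference operator: (Delta g)(x) = g(x + h) - g(x)
    return sp.Lambda(x, sp.expand(g(x + h) - g(x)))

d1 = delta(f)           # 2*h*x + h**2 + a*h
d2 = delta(d1)          # 2*h**2  (constant)
d3 = delta(d2)          # 0
print(d1(x), d2(x), d3(x))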
Given the step size h, this formula uses the values at x and x + h, the point at the next step. As it is moving in the forward direction, it is called the forward difference operator. 1 k−1 k k+1 0 n Forward Backward x x x x x x 211 212 CHAPTER 11. NEWTON’S INTERPOLATION FORMULAE Definition 11.2.2 (Second Forward Difference Operator) The second forward difference operator, ∆ 2 , is defined as 2 f(x) = ∆ ∆f(x) = ∆f(x +h) −∆f(x). We note that 2 f(x) = ∆f(x +h) −∆f(x) = f(x + 2h) −f(x +h) f(x +h) −f(x) = f(x + 2h) −2f(x +h) +f(x). In particular, for x = x k , we get, ∆y k = y k+1 −y k and 2 y k = ∆y k+1 −∆y k = y k+2 −2y k+1 +y k . Definition 11.2.3 (r th Forward Difference Operator) The r th forward difference operator, ∆ r , is defined as r f(x) = ∆ r−1 f(x +h) −∆ r−1 f(x), r = 1, 2, . . . , with ∆ 0 f(x) = f(x). Exercise 11.2.4 Show that ∆ 3 y k = ∆ 2 (∆y k ) = ∆(∆ 2 y k ). In general, show that for any positive integers r and m with r > m, r y k = ∆ r−m (∆ m y k ) = ∆ m (∆ r−m y k ). Example 11.2.5 For the tabulated values of y = f(x) find ∆y 3 and ∆ 3 y 2 i 0 1 2 3 4 5 x i 0 0.1 0.2 0.3 0.4 0.5 y i 0.05 0.11 0.26 0.35 0.49 0.67 . Solution: Here, ∆y 3 = y 4 −y 3 = 0.49 −0.35 = 0.14, and 3 y 2 = ∆(∆ 2 y 2 ) = ∆(y 4 −2y 3 +y 2 ) = (y 5 −y 4 ) −2(y 4 −y 3 ) + (y 3 −y 2 ) = y 5 −3y 4 + 3y 3 −y 2 = 0.67 −3 0.49 + 3 0.35 −0.26 = −0.01. Remark 11.2.6 Using mathematical induction, it can be shown that r y k = r ¸ j=0 (−1) r−j r j y k+j . Thus the r th forward difference at y k uses the values at y k , y k+1 , . . . , y k+r . Example 11.2.7 If f(x) = x 2 +ax +b, where a and b are real constants, calculate ∆ r f(x). 11.2. DIFFERENCE OPERATOR 213 Solution: We first calculate ∆f(x) as follows: ∆f(x) = f(x +h) −f(x) = (x +h) 2 +a(x +h) +b x 2 +ax +b = 2xh +h 2 +ah. Now, 2 f(x) = ∆f(x +h) −∆f(x) = [2(x +h)h +h 2 +ah] −[2xh +h 2 +ah] = 2h 2 , and ∆ 3 f(x) = ∆ 2 f(x) −∆ 2 f(x) = 2h 2 −2h 2 = 0. Thus, ∆ r f(x) = 0 for all r ≥ 3. Remark 11.2.8 In general, if f(x) = x n +a 1 x n−1 +a 2 x n−2 + +a n−1 x+a n is a polynomial of degree n, then it can be shown that n f(x) = n! h n and ∆ n+r f(x) = 0 for r = 1, 2, . . . . Remark 11.2.9 1. For a set of tabular values, the horizontal forward difference table is written as: x0 y0 ∆y0 = y1 −y0 ∆ 2 y0 = ∆y1 −∆y0 · · · ∆ n y0 = ∆ n−1 y1 −∆ n−1 y0 x1 y1 ∆y1 = y2 −y1 ∆ 2 y1 = ∆y2 −∆y1 · · · x2 y2 ∆y2 = y3 −y2 ∆ 2 y2 = ∆y3 −∆y2 . . . xn−1 yn−1 ∆yn−1 = yn −yn−1 xn yn 2. In many books, a diagonal form of the difference table is also used. This is written as: x0 y0 ∆y0 x1 y1 ∆ 2 y0 ∆y1 ∆ 3 y0 x2 y2 ∆ 2 y1 . . . ∆yn−1 xn−2 yn−2 ∆ 2 yn−3 ∆yn−2 ∆ 3 yn−3 xn−1 yn−1 ∆ 2 yn−2 ∆yn−1 xn yn However, in the following, we shall mostly adhere to horizontal form only. 11.2.2 Backward Difference Operator Definition 11.2.10 (First Backward Difference Operator) The first backward difference oper- ator, denoted by ∇, is defined as ∇f(x) = f(x) −f(x −h). Given the step size h, note that this formula uses the values at x and x − h, the point at the previous step. As it moves in the backward direction, it is called the backward difference operator. 214 CHAPTER 11. NEWTON’S INTERPOLATION FORMULAE Definition 11.2.11 (r th Backward Difference Operator) The r th backward difference operator, ∇ r , is defined as r f(x) = ∇ r−1 f(x) −∇ r−1 f(x −h), r = 1, 2, . . . , with ∇ 0 f(x) = f(x). In particular, for x = x k , we get ∇y k = y k −y k−1 and ∇ 2 y k = y k −2y k−1 +y k−2 . Note that ∇ 2 y k = ∆ 2 y k−2 . Example 11.2.12 Using the tabulated values in Example 11.2.5, find ∇y 4 and ∇ 3 y 3 . 
Solution: We have ∇y 4 = y 4 −y 3 = 0.49 −0.35 = 0.14, and 3 y 3 = ∇ 2 y 3 −∇ 2 y 2 = (y 3 −2y 2 +y 1 ) −(y 2 −2y 1 +y 0 ) = y 3 −3y 2 + 3y 1 −y 0 = 0.35 −3 0.26 + 3 0.11 −0.05 = −0.15. Example 11.2.13 If f(x) = x 2 +ax +b, where a and b are real constants, calculate ∇ r f(x). Solution: We first calculate ∇f(x) as follows: ∇f(x) = f(x) −f(x −h) = x 2 +ax +b (x −h) 2 +a(x −h) +b = 2xh −h 2 +ah. Now, 2 f(x) = ∇f(x) −∆f(x −h) = [2xh −h 2 +ah] −[2(x −h)h −h 2 +ah] = 2h 2 , and ∇ 3 f(x) = ∇ 2 f(x) −∇ 2 f(x) = 2h 2 −2h 2 = 0. Thus, ∇ r f(x) = 0 for all r ≥ 3. Remark 11.2.14 For a set of tabular values, backward difference table in the horizontal form is written as: x0 y0 x1 y1 ∇y1 = y1 −y0 x2 y2 ∇y2 = y2 −y1 ∇ 2 y2 = ∇y2 −∇y1 . . . xn−2 yn−2 · · · · · · xn−1 yn−1 ∇yn−1 = yn−1 −yn−2 · · · · · · xn yn ∇yn = yn −yn−1 ∇ 2 yn = ∇yn −∇yn−1 · · · ∇ n yn = ∇ n−1 yn −∇ n−1 yn−1 Example 11.2.15 For the following set of tabular values (x i , y i ), write the forward and backward difference tables. x i 9 10 11 12 13 14 y i 5.0 5.4 6.0 6.8 7.5 8.7 Solution: The forward difference table is written as 11.2. DIFFERENCE OPERATOR 215 x y ∆y ∆ 2 y ∆ 3 y ∆ 4 y ∆ 5 y 9 5 0.4 = 5.4 - 5 0.2 = 0.6 - 0.4 0= 0.2-0.2 -.3 = -0.3 - 0.0 0.6 = 0.3 - (-0.3) 10 5.4 0.6 0.2 -0.3 0.3 11 6.0 0.8 -0.1 0.0 12 6.8 0.7 -0.1 13 7.5 0.6 14 8.1 In the similar manner, the backward difference table is written as follows: x y ∇y ∇ 2 y ∇ 3 y ∇ 4 y ∇ 5 y 9 5 10 5.4 0.4 11 6 0.6 0.2 12 6.8 0.8 0.2 0.0 13 7.5 0.7 -0.1 - 0.3 -0.3 14 8.1 0.6 -0.1 0.0 0.3 0.6 Observe from the above two tables that ∆ 3 y 1 = ∇ 3 y 4 , ∆ 2 y 3 = ∇ 2 y 5 , ∆ 4 y 1 = ∇ 4 y 5 etc. Exercise 11.2.16 1. Show that ∆ 3 y 4 = ∇ 3 y 7 . 2. Prove that ∆(∇y k ) = ∆ 2 y k+1 = ∇ 2 y k−1 . 3. Obtain ∇ k y k in terms of y 0 , y 1 , y 2 , . . . , y k . Hence show that ∇ k y k = ∆ k y 0 . Remark 11.2.17 In general it can be shown that ∆ k f(x) = ∇ k f(x +kh) or ∆ k y m = ∇ k y k+m Remark 11.2.18 In view of the remarks (11.2.8) and (11.2.17) it is obvious that, if y = f(x) is a polynomial function of degree n, then ∇ n f(x) is constant and ∇ n+r f(x) = 0 for r > 0. 11.2.3 Central Difference Operator Definition 11.2.19 (Central Difference Operator) The first central difference operator, de- noted by δ, is defined by δf(x) = f(x + h 2 ) −f(x − h 2 ) and the r th central difference operator is defined as δ r f(x) = δ r−1 f(x + h 2 ) −δ r−1 f(x − h 2 ) with δ 0 f(x) = f(x). Thus, δ 2 f(x) = f(x +h) −2f(x) +f(x −h). In particular, for x = x k , define y k+ 1 2 = f(x k + h 2 ), and y k− 1 2 = f(x k h 2 ), then δy k = y k+ 1 2 −y k− 1 2 and δ 2 y k = y k+1 −2y k +y k−1 . Thus, δ 2 uses the table of (x k , y k ). It is easy to see that only the even central differences use the tabular point values (x k , y k ). 216 CHAPTER 11. NEWTON’S INTERPOLATION FORMULAE 11.2.4 Shift Operator Definition 11.2.20 (Shift Operator) A shift operator, denoted by E, is the operator which shifts the value at the next point with step h, i.e., Ef(x) = f(x +h). Thus, Ey i = y i+1 , E 2 y i = y i+2 , and E k y i = y i+k . 11.2.5 Averaging Operator Definition 11.2.21 (Averaging Operator) The averaging operator, denoted by µ, gives the average value between two central points, i.e., µf(x) = 1 2 f(x + h 2 ) +f(x − h 2 ) . Thus µy i = 1 2 (y i+ 1 2 +y i− 1 2 ) and µ 2 y i = 1 2 µy i+ 1 2 +µy i− 1 2 = 1 4 [y i+1 + 2y i +y i−1 ] . 11.3 Relations between Difference operators 1. We note that Ef(x) = f(x +h) = [f(x +h) −f(x)] +f(x) = ∆f(x) +f(x) = (∆ + 1)f(x). Thus, E ≡ 1 + ∆ or ∆ ≡ E −1. 2. 
Further, ∇(E(f(x)) = ∇(f(x +h)) = f(x +h) −f(x). Thus, (1 −∇)Ef(x) = E(f(x)) −∇(E(f(x)) = f(x +h) −[f(x +h) −f(x)] = f(x). Thus E ≡ 1 + ∆, gives us (1 −∇)(1 + ∆)f(x) = f(x) for all x. So we write, (1 + ∆) −1 = 1 −∇ or ∇ = 1 −(1 + ∆) −1 , and (1 −∇) −1 = 1 + ∆ = E. Similarly, ∆ = (1 −∇) −1 −1. 3. Let us denote by E 1 2 f(x) = f(x + h 2 ). Then, we see that δf(x) = f(x + h 2 ) −f(x − h 2 ) = E 1 2 f(x) −E 1 2 f(x). Thus, δ = E 1 2 −E 1 2 . Recall, δ 2 f(x) = f(x +h) −2f(x) +f(x −h) = [f(x +h) + 2f(x) +f(x −h)] −4f(x) = 4(µ 2 −1)f(x). 11.4. NEWTON’S INTERPOLATION FORMULAE 217 So, we have, µ 2 δ 2 4 + 1 or µ ≡ 1 + δ 2 4 . That is, the action of 1 + δ 2 4 is same as that of µ. 4. We further note that, ∆f(x) = f(x +h) −f(x) = 1 2 f(x +h) −2f(x) +f(x −h) + 1 2 f(x +h) −f(x −h) = 1 2 δ 2 (f(x)) + 1 2 f(x +h) −f(x −h) and δµf(x) = δ ¸ 1 2 f(x + h 2 ) +f(x − h 2 ) = 1 2 ¦f(x +h) −f(x)¦ +¦f(x) −f(x −h)¦ = 1 2 [f(x +h) −f(x −h)] . Thus, ∆f(x) = ¸ 1 2 δ 2 +δµ f(x), i.e., ∆ ≡ 1 2 δ 2 +δµ ≡ 1 2 δ 2 1 + δ 2 4 . In view of the above discussion, we have the following table showing the relations between various difference operators: E ∆ ∇ δ E E ∆ + 1 (1 −∇) −1 1 2 δ 2 1 + δ 2 4 + 1 ∆ E −1 ∆ (1 −∇) −1 −1 1 2 δ 2 1 + 1 4 δ 2 ∇ 1 −E −1 1 −(1 +∇) −1 ∇ − 1 2 δ 2 1 + 1 4 δ 2 δ E 1/2 −E −1/2 ∆(1 + ∆) −1/2 ∇(1 −∇) −1/2 δ Exercise 11.3.1 1. Verify the validity of the above table. 2. Obtain the relations between the averaging operator and other difference operators. 3. Find ∆ 2 y 2 , ∇ 2 y 2 , δ 2 y 2 and µ 2 y 2 for the following tabular values: i 0 1 2 3 4 x i 93.0 96.5 100.0 103.5 107.0 y i 11.3 12.5 14.0 15.2 16.0 11.4 Newton’s Interpolation Formulae As stated earlier, interpolation is the process of approximating a given function, whose values are known at N+1 tabular points, by a suitable polynomial, P N (x), of degree N which takes the values y i at x = x i for i = 0, 1, . . . , N. Note that if the given data has errors, it will also be reflected in the polynomial so obtained. In the following, we shall use forward and backward differences to obtain polynomial function ap- proximating y = f(x), when the tabular points x i ’s are equally spaced. Let f(x) ≈ P N (x), 218 CHAPTER 11. NEWTON’S INTERPOLATION FORMULAE where the polynomial P N (x) is given in the following form: P N (x) = a 0 + a 1 (x −x 0 ) +a 2 (x −x 0 )(x −x 1 ) + +a k (x −x 0 )(x −x 1 ) (x −x k−1 ) +a N (x −x 0 )(x −x 1 ) (x −x N−1 ). (11.4.1) for some constants a 0 , a 1 , ...a N , to be determined using the fact that P N (x i ) = y i for i = 0, 1, . . . , N. So, for i = 0, substitute x = x 0 in (11.4.1) to get P N (x 0 ) = y 0 . This gives us a 0 = y 0 . Next, P N (x 1 ) = y 1 ⇒y 1 = a 0 + (x 1 −x 0 )a 1 . So, a 1 = y1−y0 h = ∆y 0 h . For i = 2, y 2 = a 0 + (x 2 −x 0 )a 1 + (x 2 −x 1 )(x 2 −x 0 )a 2 , or equivalently 2h 2 a 2 = y 2 −y 0 −2h( ∆y 0 h ) = y 2 −2y 1 +y 0 = ∆ 2 y 0 . Thus, a 2 = 2 y 0 2h 2 . Now, using mathematical induction, we get a k = k y 0 k! h k for k = 0, 1, 2, . . . , N. Thus, P N (x) = y 0 + ∆y 0 h (x −x 0 ) + 2 y 0 2! h 2 (x −x 0 )(x −x 1 ) + + k y 0 k! h k (x −x 0 ) (x −x k−1 ) + N y 0 N! h N (x −x 0 )...(x −x N−1 ). As this uses the forward differences, it is called Newton’s Forward difference formula for inter- polation, or simply, forward interpolation formula. Exercise 11.4.1 Show that a 3 = 3 y 0 3! h 3 and a 4 = 4 y 0 4! h 2 and in general, a k = k y 0 k!h k , for k = 0, 1, 2, . . . , N. For the sake of numerical calculations, we give below a convenient form of the forward interpolation formula. 
Let u = x −x 0 h , then x −x 1 = hu +x 0 −(x 0 +h) = h(u −1), x −x 2 = h(u −2), . . . , x −x k = h(u −k), etc.. With this transformation the above forward interpolation formula is simplified to the following form: P N (u) = y 0 + ∆y 0 h (hu) + 2 y 0 2! h 2 ¦(hu)(h(u −1))¦ + + k y 0 h k k! h k u(u −1) (u −k + 1) + + N y 0 N! h N ¸ (hu) h(u −1) h(u −N + 1) . = y 0 + ∆y 0 (u) + 2 y 0 2! (u(u −1)) + + k y 0 k! ¸ u(u −1) (u −k + 1) + + N y 0 N! ¸ u(u −1)...(u −N + 1) . (11.4.2) If N=1, we have a linear interpolation given by f(u) ≈ y 0 + ∆y 0 (u). (11.4.3) 11.4. NEWTON’S INTERPOLATION FORMULAE 219 For N = 2, we get a quadratic interpolating polynomial: f(u) ≈ y 0 + ∆y 0 (u) + 2 y 0 2! [u(u −1)] (11.4.4) and so on. It may be pointed out here that if f(x) is a polynomial function of degree N then P N (x) coincides with f(x) on the given interval. Otherwise, this gives only an approximation to the true values of f(x). If we are given additional point x N+1 also, then the error, denoted by R N (x) = [P N (x) − f(x)[, is estimated by R N (x) · N+1 y 0 h N+1 (N + 1)! (x −x 0 ) (x −x N ) . Similarly, if we assume, P N (x) is of the form P N (x) = b 0 +b 1 (x −x N ) +b 1 (x −x N )(x −x N−1 ) + +b N (x −x N )(x −x N−1 ) (x −x 1 ), then using the fact that P N (x i ) = y i , we have b 0 = y N b 1 = 1 h (y N −y N−1 ) = 1 h ∇y N b 2 = y N −2y N−1 +y N−2 2h 2 = 1 2h 2 (∇ 2 y N ) . . . b k = 1 k! h k k y N . Thus, using backward differences and the transformation x = x N + hu, we obtain the Newton’s backward interpolation formula as follows: P N (u) = y N +u∇y N + u(u + 1) 2! 2 y N + + u(u + 1) (u +N −1) N! N y N . (11.4.5) Exercise 11.4.2 Derive the Newton’s backward interpolation formula (11.4.5) for N = 3. Remark 11.4.3 If the interpolating point lies closer to the beginning of the interval then one uses the Newton’s forward formula and if it lies towards the end of the interval then Newton’s backward formula is used. Remark 11.4.4 For a given set of n tabular points, in general, all the n points need not be used for interpolating polynomial. In fact N is so chosen that N th forward/backward difference almost remains constant. Thus N is less than or equal to n. Example 11.4.5 1. Obtain the Newton’s forward interpolating polynomial, P 5 (x) for the following tab- ular data and interpolate the value of the function at x = 0.0045. x 0 0.001 0.002 0.003 0.004 0.005 y 1.121 1.123 1.1255 1.127 1.128 1.1285 Solution: For this data, we have the Forward difference difference table x i y i ∆y i 2 y 3 3 y i 4 y i 5 y i 0 1.121 0.002 0.0005 -0.0015 0.002 -.0025 .001 1.123 0.0025 -0.0010 0.0005 -0.0005 .002 1.1255 0.0015 -0.0005 0.0 .003 1.127 0.001 -0.0005 .004 1.128 0.0005 .005 1.1285 220 CHAPTER 11. NEWTON’S INTERPOLATION FORMULAE Thus, for x = x 0 +hu, where x 0 = 0, h = 0.001 and u = x −x 0 h , we get P 5 (x) = 1.121 +u .002 + u(u −1) 2 (.0005) + u(u −1)(u −2) 3! (−.0015) + u(u −1)(u −2)(u −3) 4! (.002) + u(u −1)(u −2)(u −3)(u −4) 5! (−.0025). Thus, P 5 (0.0045) = P 5 (0 + 0.001 4.5) = 1.121 + 0.002 4.5 + 0.0005 2 4.5 3.5 − 0.0015 6 4.5 3.5 2.5 + 0.002 24 4.5 3.5 2.5 1.5 − 0.0025 120 4.5 3.5 2.5 1.5 0.5 = 1.12840045. 2. Using the following table for tan x, approximate its value at 0.71. Also, find an error estimate (Note tan(0.71) = 0.85953). x i 0.70 72 0.74 0.76 0.78 tan x i 0.84229 0.87707 0.91309 0.95045 0.98926 Solution: As the point x = 0.71 lies towards the initial tabular values, we shall use Newton’s Forward formula. 
The forward difference table is: x i y i ∆y i 2 y i 3 y i 4 y i 0.70 0.84229 0.03478 0.00124 0.0001 0.00001 0.72 0.87707 0.03602 0.00134 0.00011 0.74 0.91309 0.03736 0.00145 0.76 0.95045 0.03881 0.78 0.98926 In the above table, we note that ∆ 3 y is almost constant, so we shall attempt 3 rd degree polynomial interpolation. Note that x 0 = 0.70, h = 0.02 gives u = 0.71 −0.70 0.02 = 0.5. Thus, using forward interpolating polynomial of degree 3, we get P 3 (u) = 0.84229 + 0.03478u + 0.00124 2! u(u −1) + 0.0001 3! u(u −1)(u −2). Thus, tan(0.71) ≈ 0.84229 + 0.03478(0.5) + 0.00124 2! 0.5 (−0.5) + 0.0001 3! 0.5 (−0.5) (−1.5) = 0.859535. An error estimate for the approximate value is 4 y 0 4! u(u −1)(u −2)(u −3) u=0.5 = 0.00000039. Note that exact value of tan(0.71) (upto 5 decimal place) is 0.85953. and the approximate value, obtained using the Newton’s interpolating polynomial is very close to this value. This is also reflected by the error estimate given above. 11.4. NEWTON’S INTERPOLATION FORMULAE 221 3. Apply 3 rd degree interpolation polynomial for the set of values given in Example 11.2.15, to estimate the value of f(10.3) by taking (i) x 0 = 9.0, (ii) x 0 = 10.0. Also, find approximate value of f(13.5). Solution: Note that x = 10.3 is closer to the values lying in the beginning of tabular values, while x = 13.5 is towards the end of tabular values. Therefore, we shall use forward difference formula for x = 10.3 and the backward difference formula for x = 13.5. Recall that the interpolating polynomial of degree 3 is given by f(x 0 +hu) = y 0 + ∆y 0 u + 2 y 0 2! u(u −1) + 3 y 0 3! u(u − 1)(u −2). Therefore, (a) for x 0 = 9.0, h = 1.0 and x = 10.3, we have u = 10.3 −9.0 1 = 1.3. This gives, f(10.3) ≈ 5 +.4 1.3 + .2 2! (1.3) .3 + .0 3! (1.3) .3 (−0.7) = 5.559. (b) for x 0 = 10.0, h = 1.0 and x = 10.3, we have u = 10.3 −10.0 1 = .3. This gives, f(10.3) ≈ 5.4 +.6 .3 + .2 2! (.3) (−0.7) + −0.3 3! (.3) (−0.7) (−1.7) = 5.54115. Note: as x = 10.3 is closer to x = 10.0, we may expect estimate calculated using x 0 = 10.0 to be a better approximation. (c) for x 0 = 13.5, we use the backward interpolating polynomial, which gives, f(x N +hu) ≈ y 0 +∇y N u + 2 y N 2! u(u + 1) + 3 y N 3! u(u + 1)(u + 2). Therefore, taking x N = 14, h = 1.0 and x = 13.5, we have u = 13.5 −14 1 = −0.5. This gives, f(13.5) ≈ 8.1 +.6 (−0.5) + −0.1 2! (−0.5) 0.5 + 0.0 3! (−0.5) 0.5 (1.5) = 7.8125. Exercise 11.4.6 1. Following data is available for a function y = f(x) x 0 0.2 0.4 0.6 0.8 1.0 y 1.0 0.808 0.664 0.616 0.712 1.0 Compute the value of the function at x = 0.3 and x = 1.1 2. The speed of a train, running between two station is measured at different distances from the starting station. If x is the distance in km. from the starting station, then v(x), the speed (in km/hr) of the train at the distance x is given by the following table: x 0 50 100 150 200 250 v(x) 0 60 80 110 90 0 Find the approximate speed of the train at the mid point between the two stations. 222 CHAPTER 11. NEWTON’S INTERPOLATION FORMULAE 3. Following table gives the values of the function S(x) = x 0 sin( π 2 t 2 )dt at the different values of the tabular points x, x 0 0.04 0.08 0.12 0.16 0.20 S(x) 0 0.00003 0.00026 0.00090 0.00214 0.00419 Obtain a fifth degree interpolating polynomial for S(x). Compute S(0.02) and also find an error estimate for it. 4. Following data gives the temperatures (in o C) between 8.00 am to 8.00 pm. 
on May 10, 2005 in Kanpur: Time 8 am 12 noon 4 pm 8pm Temperature 30 37 43 38 Obtain Newton’s backward interpolating polynomial of degree 3 to compute the temperature in Kanpur on that day at 5.00 pm. Chapter 12 Lagrange’s Interpolation Formula 12.1 Introduction In the previous chapter, we derived the interpolation formula when the values of the function are given at equidistant tabular points x 0 , x 1 , . . . , x N . However, it is not always possible to obtain values of the function, y = f(x) at equidistant interval points, x i . In view of this, it is desirable to derive an in- terpolating formula, which is applicable even for unequally distant points. Lagrange’s Formula is one such interpolating formula. Unlike the previous interpolating formulas, it does not use the notion of differences, however we shall introduce the concept of divided differences before coming to it. 12.2 Divided Differences Definition 12.2.1 (First Divided Difference) The ratio f(x i ) −f(x j ) x i −x j for any two points x i and x j is called the first divided difference of f(x) relative to x i and x j . It is denoted by δ[x i , x j ]. Let us assume that the function y = f(x) is linear. Then δ[x i , x j ] is constant for any two tabular points x i and x j , i.e., it is independent of x i and x j . Hence, δ[x i , x j ] = f(x i ) −f(x j ) x i −x j = δ[x j , x i ]. Thus, for a linear function f(x), if we take the points x, x 0 and x 1 then, δ[x 0 , x] = δ[x 0 , x 1 ], i.e., f(x) −f(x 0 ) x −x 0 = δ[x 0 , x 1 ]. Thus, f(x) = f(x 0 ) + (x −x 0 )δ[x 0 , x 1 ]. So, if f(x) is approximated with a linear polynomial, then the value of the function at any point x can be calculated by using f(x) ≈ P 1 (x) = f(x 0 ) + (x − x 0 )δ[x 0 , x 1 ], where δ[x 0 , x 1 ] is the first divided difference of f relative to x 0 and x 1 . Definition 12.2.2 (Second Divided Difference) The ratio δ[x i , x j , x k ] = δ[x j , x k ] −δ[x i , x j ] x k −x i is defined as second divided difference of f(x) relative to x i , x j and x k . 223 224 CHAPTER 12. LAGRANGE’S INTERPOLATION FORMULA If f(x) is a second degree polynomial then δ[x 0 , x] is a linear function of x. Hence, δ[x i , x j , x k ] = δ[x j , x k ] −δ[x i , x j ] x k −x i is constant. In view of the above, for a polynomial function of degree 2, we have δ[x, x 0 , x 1 ] = δ[x 0 , x 1 , x 2 ]. Thus, δ[x, x 0 ] −δ[x 0 , x 1 ] x −x 1 = δ[x 0 , x 1 , x 2 ]. This gives, δ[x, x 0 ] = δ[x 0 , x 1 ] + (x −x 1 )δ[x 0 , x 1 , x 2 ]. From this we obtain, f(x) = f(x 0 ) + (x −x 0 )δ[x 0 , x 1 ] + (x −x 0 )(x −x 1 )δ[x 0 , x 1 , x 2 ]. So, whenever f(x) is approximated with a second degree polynomial, the value of f(x) at any point x can be computed using the above polynomial, which uses the values at three points x 0 , x 1 and x 2 . Example 12.2.3 Using the following tabular values for a function y = f(x), obtain its second degree poly- nomial approximation. i 0 1 2 x i 0.1 0.16 0.2 f(x i ) 1.12 1.24 1.40 Also, find the approximate value of the function at x = 0.13. Solution: We shall first calculate the desired divided differences. δ[x 0 , x 1 ] = (1.24 −1.12)/(0.16 −0.1) = 2, δ[x 1 , x 2 ] = (1.40 −1.24)/(0.2 −0.16) = 4, and δ[x 0 , x 1 , x 2 ] = δ[x 1 , x 2 ] −δ[x 0 , x 1 ] x 2 −x 0 = (4 −2)/(0.2 −0.1) = 20. Thus, f(x) ≈ P 2 (x) = 1.12 + 2(x −0.1) + 20(x −0.1)(x −0.16). Therefore f(0.13) ≈ 1.12 + 2(0.13 −0.1) + 20(0.13 −0.1)(0.13 −0.16) = 1.162. Exercise 12.2.4 1. 
Using the following table, which gives values of log(x) corresponding to certain values of x, find approximate value of log(323.5) with the help of a second degree polynomial. x 322.8 324.2 325 log(x) 2.50893 2.51081 2.5118 2. Show that δ[x 0 , x 1 , x 2 ] = f(x 0 ) (x 0 −x 1 )(x 0 −x 2 ) + f(x 1 ) (x 1 −x 0 )(x 1 −x 2 ) + f(x 2 ) (x 2 −x 0 )(x 2 −x 1 ) . So, δ[x 0 , x 1 , x 2 ] = δ[x 0 , x 2 , x 1 ] = δ[x 1 , x 0 , x 2 ] = δ[x 1 , x 2 , x 0 ] = δ[x 2 , x 0 , x 1 ] = δ[x 2 , x 1 , x 0 ]. That is, the second divided difference remains unchanged regardless of how its arguments are interchanged. 3. Show that for equidistant points x 0 , x 1 and x 2 , δ[x 0 , x 1 , x 2 ] = 2 y 0 2h 2 = 2 y 2 2h 2 , where y k = f(x k ), and h = x 1 −x 0 = x 2 −x 1 . 12.2. DIVIDED DIFFERENCES 225 4. Show that for a linear function, the second divided difference with respect to any three points, x i , x j and x k , is always zero. Now, we define the k th divided difference. Definition 12.2.5 (k th Divided Difference) The k th divided difference of f(x) relative to the tab- ular points x 0 , x 1 , . . . , x k , is defined recursively as δ[x 0 , x 1 , . . . , x k ] = δ[x 1 , x 2 , . . . , x k ] −δ[x 0 , x 1 , . . . , x k−1 ] x k −x 0 . It can be shown by mathematical induction that for equidistant points, δ[x 0 , x 1 , . . . , x k ] = k y 0 k!h k = k y k k!h k (12.2.1) where, y 0 = f(x 0 ), and h = x 1 −x 0 = x 2 −x 1 = = x k −x k−1 . In general, δ[x i , x i+1 , . . . , x i+n ] = n y i n!h n , where y i = f(x i ) and h is the length of the interval for i = 0, 1, 2, . . . . Remark 12.2.6 In view of the remark (11.2.18) and (12.2.1), it is easily seen that for a polynomial function of degree n, the n th divided difference is constant and the (n + 1) th divided difference is zero. Example 12.2.7 Show that f(x) can be written as f(x) = f(x 0 ) +δ[x 0 , x 1 ](x −x 0 ) +δ[x, x 0 , x 1 ](x −x 0 )(x −x 1 ). Solution:By definition, we have δ[x, x 0 , x 1 ] = δ[x, x 0 ] −δ[x 0 , x 1 ] (x −x 1 ) , so, δ[x, x 0 ] = δ[x 0 , x 1 ] + (x −x 0 )δ[x, x 0 , x 1 ]. Now since, δ[x, x 0 ] = f(x) −f(x 0 ) (x −x 0 ) , we get the desired result. Exercise 12.2.8 Show that f(x) can be written in the following form: f(x) = P 2 (x) +R 3 (x), where, P 2 (x) = f(x 0 ) +δ[x 0 , x 1 ](x −x 0 ) +δ[x 0 , x 1 , x 2 ](x −x 0 )(x −x 1 ) and R 3 (x) = δ[x, x 0 , x 1 , x 2 ](x −x 0 )(x −x 1 )(x −x 2 ). Further show that P 2 (x i ) = f(x i ) for i = 0, 1. Remark 12.2.9 In general it can be shown that f(x) = P n (x) +R n+1 (x), where, P n (x) = f(x 0 ) +δ[x 0 , x 1 ](x −x 0 ) +δ[x 0 , x 1 , x 2 ](x −x 0 )(x −x 1 ) + +δ[x 0 , x 1 , x 2 , . . . , x n ](x −x 0 )(x −x 1 )(x −x 2 ) (x −x n−1 ), and R n+1 (x) = (x −x 0 )(x −x 1 )(x −x 2 ) (x −x n )δ[x, x 0 , x 1 , x 2 , . . . , x n ]. Here, R n+1 (x) is called the remainder term. It may be observed here that the expression P n (x) is a polynomial of degree n and P n (x i ) = f(x i ) for i = 0, 1, , (n −1). Further, if f(x) is a polynomial of degree n, then in view of the Remark 12.2.6, the remainder term, R n+1 (x) = 0, as it is a multiple of the (n + 1) th divided difference, which is 0. 226 CHAPTER 12. LAGRANGE’S INTERPOLATION FORMULA 12.3 Lagrange’s Interpolation formula In this section, we shall obtain an interpolating polynomial when the given data has unequal tabular points. However, before going to that, we see below an important result. Theorem 12.3.1 The k th divided difference δ[x 0 , x 1 , . . . , x k ] can be written as: δ[x0, x1, . . . 
, x k ] = f(x0) (x0 −x1)(x0 −x2) · · · (x0 −x k ) + f(x1) (x1 −x0)(x1 −x2) · · · (x1 −x k ) +· · · + f(x k ) (x k −x0)(x k −x1) · · · (x k −x k−1 ) = f(x0) k Q j=1 (x0 −xj) +· · · + f(x l ) k Q j=0, j=l (x l −xj) +· · · + f(x k ) k Q j=0, j=k (x k −xj) Proof. We will prove the result by induction on k. The result is trivially true for k = 0. For k = 1, δ[x 0 , x 1 ] = f(x 1 ) −f(x 0 ) x 1 −x 0 = f(x 0 ) x 0 −x 1 + f(x 1 ) x 1 −x 0 . Let us assume that the result is true for k = n, i.e., δ[x0, x1, . . . , xn] = f(x0) (x0 −x1)(x0 −x2) · · · (x0 −xn) + f(x1) (x1 −x0)(x1 −x2) · · · (x1 −xn) +· · · + f(xn) (xn −x0)(xn −x1) · · · (xn −xn−1) . Consider k = n + 1, then the (n + 1) th divided difference is δ[x0, x1, . . . , xn+1] = δ[x1, x2, . . . , xn+1] −δ[x0, x1, . . . , xn] xn+1 −x0 = 1 xn+1 −x0 » f(x1) (x1 −x2) · · · (x1 −xn+1) + f(x2) (x2 −x1)(x2 −x3) · · · (x2 −xn+1) + · · · + f(xn+1) (xn+1 −x1) · · · (xn+1 −xn) 1 xn+1 −x0 » f(x0) (x0 −x1) · · · (x0 −xn) + f(x1) (x1 −x0)(x1 −x2) · · · (x1 −xn) +· · · + f(xn) (xn −x0) · · · (xn −xn−1) which on rearranging the terms gives the desired result. Therefore, by mathematical induction, the proof of the theorem is complete. Remark 12.3.2 In view of the theorem 12.3.1 the k th divided difference of a function f(x), remains unchanged regardless of how its arguments are interchanged, i.e., it is independent of the order of its arguments. Now, if a function is approximated by a polynomial of degree n, then , its (n+1) th divided difference relative to x, x 0 , x 1 , . . . , x n will be zero,(Remark 12.2.6) i.e., δ[x, x 0 , x 1 , . . . , x n ] = 0 Using this result, Theorem 12.3.1 gives f(x) (x −x0)(x −x1) · · · (x −xn) + f(x0) (x0 −x)(x0 −x1) · · · (x0 −xn) + f(x1) (x1 −x)(x1 −x2) · · · (x1 −xn) +· · · + f(xn) (xn −x)(xn −x0) · · · (xn −xn−1) = 0, 12.3. LAGRANGE’S INTERPOLATION FORMULA 227 or, f(x) (x −x0)(x −x1) · · · (x −xn) = − » f(x0) (x0 −x)(x0 −x1) · · · (x0 −xn) + f(x1) (x1 −x)(x1 −x0)(x1 −x2) · · · (x1 −xn) +· · · + f(xn) (xn −x)(xn −x0) · · · (xn −xn−1) , which gives , f(x) = (x −x1)(x −x2) · · · (x −xn) (x0 −x1) · · · (x0 −xn) f(x0) + (x −x0)(x −x2) · · · (x −xn) (x1 −x0)(x1 −x2) · · · (x1 −xn) f(x1) + · · · + (x −x0)(x −x1) · · · (x −xn−1) (xn −x0)(xn −x1) · · · (xn −xn−1) f(xn) = n X i=0 0 @ n Y j=0, j=i x −xj xi −xj 1 A f(xi) = n X i=0 n Q j=0 (x −xj) (x −xi) n Q j=0, j=i (xi −xj) f(xi) = n Y j=0 (x −xj) n X i=0 f(xi) (x −xi) n Q j=0, j=i (xi −xj) . Note that the expression on the right is a polynomial of degree n and takes the value f(x i ) at x = x i for i = 0, 1, , (n −1). This polynomial approximation is called Lagrange’s Interpolation Formula. Remark 12.3.3 In view of the Remark (12.2.9), we can observe that P n (x) is another form of Lagrange’s Interpolation polynomial formula as obtained above. Also the remainder term R n+1 gives an estimate of error between the true value and the interpolated value of the function. Remark 12.3.4 We have seen earlier that the divided differences are independent of the order of its arguments. As the Lagrange’s formula has been derived using the divided differences, it is not necessary here to have the tabular points in the increasing order. Thus one can use Lagrange’s formula even when the points x 0 , x 1 , , x k , , x n are in any order, which was not possible in the case of Newton’s Difference formulae. Remark 12.3.5 One can also use the Lagrange’s Interpolating Formula to compute the value of x for a given value of y = f(x). This is done by interchanging the roles of x and y, i.e. 
while using the table of values, we take tabular points as y k and nodal points are taken as x k . Example 12.3.6 Using the following data, find by Lagrange’s formula, the value of f(x) at x = 10 : i 0 1 2 3 4 x i 9.3 9.6 10.2 10.4 10.8 y i = f(x i ) 11.40 12.80 14.70 17.00 19.80 Also find the value of x where f(x) = 16.00. Solution: To compute f(10), we first calculate the following products: 4 ¸ j=0 (x −x j ) = 4 ¸ j=0 (10 −x j ) = (10 −9.3)(10 −9.6)(10 −10.2)(10 −10.4)(10 −10.8) = −0.01792, 4 ¸ j=1 (x 0 −x j ) = 0.4455, n ¸ j=0, j=1 (x 1 −x j ) = −0.1728, n ¸ j=0, j=2 (x 2 −x j ) = 0.0648, n ¸ j=0, j=3 (x 3 −x j ) = −0.0704, and n ¸ j=0, j=4 (x 4 −x j ) = 0.4320. 228 CHAPTER 12. LAGRANGE’S INTERPOLATION FORMULA Thus, f(10) ≈ −0.01792 ¸ 11.40 0.7 0.4455 + 12.80 0.4 (−0.1728) + 14.70 (−0.2) 0.0648 + 17.00 (−0.4) (−0.0704) + 19.80 (−0.8) 0.4320 = 13.197845. Now to find the value of x such that f(x) = 16, we interchange the roles of x and y and calculate the following products: 4 ¸ j=0 (y −y j ) = 4 ¸ j=0 (16 −y j ) = (16 −11.4)(16 −12.8)(16 −14.7)(16 − 17.0)(16 −19.8) = 72.7168, 4 ¸ j=1 (y 0 −y j ) = 217.3248, n ¸ j=0, j=1 (y 1 −y j ) = −78.204, n ¸ j=0, j=2 (y 2 −y j ) = 73.5471, n ¸ j=0, j=3 (y 3 −y j ) = −151.4688, and n ¸ j=0, j=4 (y 4 −y j ) = 839.664. Thus,the required value of x is obtained as: x ≈ 217.3248 ¸ 9.3 4.6 217.3248 + 9.6 3.2 (−78.204) + 10.2 1.3 73.5471 + 10.40 (−1.0) (−151.4688) + 10.80 (−3.8) 839.664 ≈ 10.39123. Exercise 12.3.7 The following table gives the data for steam pressure P vs temperature T: T 360 365 373 383 390 P = f(T) 154.0 165.0 190.0 210.0 240.0 Compute the pressure at T = 375. Exercise 12.3.8 Compute from following table the value of y for x = 6.20 : x 5.60 5.90 6.50 6.90 7.20 y 2.30 1.80 1.35 1.95 2.00 Also find the value of x where y = 1.00 12.4 Gauss’s and Stirling’s Formulas In case of equidistant tabular points a convenient form for interpolating polynomial can be derived from Lagrange’s interpolating polynomial. The process involves renaming or re-designating the tabular points. We illustrate it by deriving the interpolating formula for 6 tabular points. This can be generalized for more number of points. Let the given tabular points be x 0 , x 1 = x 0 +h, x 2 = x 0 −h, x 3 = x 0 +2h, x 4 = x 0 − 2h, x 5 = x 0 + 3h. These six points in the given order are not equidistant. We re-designate them for the sake of convenience as follows: x −2 = x 4 , x −1 = x 2 , x 0 = x 0 , x 1 = x 1 , x 2 = x 3 , x 3 = x 5 . These 12.4. GAUSS’S AND STIRLING’S FORMULAS 229 re-designated tabular points in their given order are equidistant. Now recall from remark (12.3.3) that Lagrange’s interpolating polynomial can also be written as : f(x) ≈ f(x 0 ) +δ[x 0 , x 1 ](x −x 0 ) +δ[x 0 , x 1 , x 2 ](x −x 0 )(x −x 1 ) +δ[x 0 , x 1 , x 2 , x 3 ](x −x 0 )(x −x 1 )(x −x 2 ) +δ[x 0 , x 1 , x 2 , x 3 , x 4 ](x −x 0 )(x −x 1 )(x −x 2 )(x −x 3 ) +δ[x 0 , x 1 , x 2 , x 3 , x 4 , x 5 ](x −x 0 )(x −x 1 )(x −x 2 )(x −x 3 )(x −x 4 ), which on using the re-designated points give: f(x) ≈ f(x 0 ) +δ[x 0 , x 1 ](x −x 0 ) +δ[x 0 , x 1 , x −1 ](x − x 0 )(x −x 1 ) +δ[x 0 , x 1 , x −1 , x 2 ](x −x 0 )(x −x 1 )(x −x −1 ) +δ[x 0 , x 1 , x −1 , x 2 , x −2 ](x −x 0 )(x −x 1 )(x −x −1 )(x −x 2 ) +δ[x 0 , x 1 , x −1 , x 2 , x −2 , x 3 ](x −x 0 )(x −x 1 )(x −x −1 )(x −x 2 )(x −x −2 ). Now note that the points x −2 , x −1 , x 0 , x 1 , x 2 and x 3 are equidistant and the divided difference are independent of the order of their arguments. 
Thus, we have δ[x 0 , x 1 ] = ∆y 0 h , δ[x 0 , x 1 , x −1 ] = δ[x −1 , x 0 , x 1 ] = 2 y −1 2h 2 , δ[x 0 , x 1 , x −1 , x 2 ] = δ[x −1 , x 0 , x 1 , x 2 ] = 3 y −1 3!h 3 , δ[x 0 , x 1 , x −1 , x 2 , x −2 ] = δ[x −2 , x −1 , x 0 , x 1 , x 2 ] = 4 y −2 4!h 4 , δ[x 0 , x 1 , x −1 , x 2 , x −2 , x 3 ] = δ[x −2 , x −1 , x 0 , x 1 , x 2 , x 3 ] = 5 y −2 5!h 5 , where y i = f(x i ) for i = −2, −1, 0, 1, 2. Now using the above relations and the transformation x = x 0 +hu, we get f(x 0 +hu) ≈ y 0 + ∆y 0 h (hu) + 2 y −1 2h 2 (hu)(hu −h) + 3 y −1 3!h 3 (hu)(hu −h)(hu +h) + 4 y −2 4!h 4 (hu)(hu −h)(hu +h)(hu −2h) + 5 y −2 5!h 5 (hu)(hu −h)(hu +h)(hu −2h)(hu + 2h). Thus we get the following form of interpolating polynomial f(x 0 +hu) ≈ y 0 +u∆y 0 +u(u −1) 2 y −1 2! +u(u 2 −1) 3 y −1 3! +u(u 2 −1)(u −2) 4 y −2 4! +u(u 2 −1)(u 2 −2 2 ) 5 y −2 5! . (12.4.1) Similarly using the tabular points x 0 , x 1 = x 0 −h, x 2 = x 0 +h, x 3 = x 0 −2h, x 4 = x 0 +2h, x 5 = x 0 −3h, and the re-designating them, as x −3 , x −2 , x −1 , x 0 , x 1 and x 2 , we get another form of interpolating polynomial as follows: f(x 0 +hu) ≈ y 0 +u∆y −1 +u(u + 1) 2 y −1 2! +u(u 2 −1) 3 y −2 3! +u(u 2 −1)(u + 2) 4 y −2 4! +u(u 2 −1)(u 2 −2 2 ) 5 y −3 5! . (12.4.2) 230 CHAPTER 12. LAGRANGE’S INTERPOLATION FORMULA Now taking the average of the two interpoating polynomials (12.4.1) and (12.4.2) (called Gauss’s first and second interpolating formulas, respectively), we obtain Sterling’s Formula of interpolation: f(x 0 +hu) ≈ y 0 +u ∆y −1 + ∆y 0 2 +u 2 2 y −1 2! + u(u 2 −1) 2 ¸ 3 y −2 + ∆ 3 y −1 3! +u 2 (u 2 −1) 4 y −2 4! + u(u 2 −1)(u 2 −2 2 ) 2 ¸ 5 y −3 + ∆ 5 y −2 5! + . (12.4.3) These are very helpful when, the point of interpolation lies near the middle of the interpolating interval. In this case one usually writes the diagonal form of the difference table. Example 12.4.1 Using the following data, find by Sterling’s formula, the value of f(x) = cot(πx) at x = 0.225 : x 0.20 0.21 0.22 0.23 0.24 f(x) 1.37638 1.28919 1.20879 1.13427 1.06489 Here the point x = 0.225 lies near the central tabular point x = 0.22. Thus , we define x −2 = 0.20, x −1 = 0.21, x 0 = 0.22, x 1 = 0.23, x 2 = 0.24, to get the difference table in diagonal form as: x −2 = 0.20 y −2 = 1.37638 ∆y −2 = −.08719 x −1 = .021 y −1 = 1.28919 ∆ 2 y −2 = .00679 ∆y −1 = −.08040 ∆ 3 y −2 = −.00091 x 0 = 0.22 y 0 = 1.20879 ∆ 2 y −1 = .00588 ∆ 4 y −2 = .00017 ∆y 0 = −.07452 ∆ 3 y −1 = −.00074 x 1 = 0.23 y 1 = 1.13427 ∆ 2 y 0 = .00514 ∆y 1 = −.06938 x 2 = 0.24 y 2 = 1.06489 (here, ∆y 0 = y 1 − y 0 = 1.13427 − 1.20879 = −.07452; ∆y −1 = 1.20879 − 1.28919 = −0.08040; and 2 y −1 = ∆y 0 −∆y −1 = .00588, etc.). Using the Sterling’s formula with u = 0.225 −0.22 0.01 = 0.5, we get f(0.225) as follows: f(0.225) = 1.20879 + 0.5 −.08040 −.07452 2 + (−0.5) 2 .00588 2 + −0.5(0.5 2 −1) 2 (−.00091 −.00074) 3! 0.5 2 (0.5 2 −1) .00017 4! = 1.1708457 Note that tabulated value of cot(πx) at x = 0.225 is 1.1708496. 
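The arithmetic in Example 12.4.1 is easy to mechanize. The following short sketch (an illustration added here, not part of the original notes; plain Python, no external libraries) builds the forward difference table and evaluates Stirling's formula (12.4.3) truncated after the fourth-difference term, for the cot(πx) table above. It reproduces the worked value to about five decimal places.

```python
# Sketch of Stirling's interpolation formula (12.4.3), truncated after the
# fourth-difference term, applied to the cot(pi*x) table of Example 12.4.1.

def forward_differences(y):
    """Return the forward difference table: table[k][i] = Delta^k y_i."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def stirling(x, xs, ys):
    """Interpolate f(x) from five equidistant points, the middle one being x0."""
    h = xs[1] - xs[0]
    m = len(xs) // 2                     # index of the central point x0
    u = (x - xs[m]) / h
    d = forward_differences(ys)
    # Differences used by (12.4.3):
    #   Dy_{-1}, Dy_0, D^2y_{-1}, D^3y_{-2}, D^3y_{-1}, D^4y_{-2}
    return (ys[m]
            + u * (d[1][m - 1] + d[1][m]) / 2
            + u**2 * d[2][m - 1] / 2
            + u * (u**2 - 1) / 2 * (d[3][m - 2] + d[3][m - 1]) / 6
            + u**2 * (u**2 - 1) * d[4][m - 2] / 24)

xs = [0.20, 0.21, 0.22, 0.23, 0.24]
ys = [1.37638, 1.28919, 1.20879, 1.13427, 1.06489]    # cot(pi*x)
print(stirling(0.225, xs, ys))   # about 1.17085; tabulated cot(0.225*pi) = 1.1708496
```

The same helper can of course be reused for Newton's forward formula; only the combination of differences changes.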
Exercise 12.4.2 Compute from the following table the value of y for x = 0.05 : x 0.00 0.02 0.04 0.06 0.08 y 0.00000 0.02256 0.04511 0.06762 0.09007 Chapter 13 Numerical Differentiation and Integration 13.1 Introduction Numerical differentiation/ integration is the process of computing the value of the derivative of a function, whose analytical expression is not available, but is specified through a set of values at certain tabular points x 0 , x 1 , , x n In such cases, we first determine an interpolating polynomial approximating the function (either on the whole interval or in sub-intervals) and then differentiate/integrate this polynomial to approximately compute the value of the derivative at the given point. 13.2 Numerical Differentiation In the case of differentiation, we first write the interpolating formula on the interval (x 0 , x n ). and the differentiate the polynomial term by term to get an approximated polynomial to the derivative of the function. When the tabular points are equidistant, one uses either the Newton’s Forward/ Backward Formula or Sterling’s Formula; otherwise Lagrange’s formula is used. Newton’s Forward/ Backward formula is used depending upon the location of the point at which the derivative is to be computed. In case the given point is near the mid point of the interval, Sterling’s formula can be used. We illustrate the process by taking (i) Newton’s Forward formula, and (ii) Sterling’s formula. Recall, that the Newton’s forward interpolating polynomial is given by f(x) = f(x 0 +hu) ≈ y 0 + ∆y 0 u + 2 y 0 2! (u(u −1)) + + k y 0 k! ¦u(u −1) (u −k + 1)¦ + + n y 0 n! ¦u(u −1)...(u −n + 1)¦. (13.2.1) Differentiating (13.2.1), we get the approximate value of the first derivative at x as df dx = 1 h df du 1 h ¸ ∆y 0 + 2 y 0 2! (2u −1) + 3 y 0 3! (3u 2 −6u + 2) + + n y 0 n! nu n−1 n(n −1) 2 2 u n−2 + + (−1) (n−1) (n −1)! . (13.2.2) where, u = x −x 0 h . 231 232 CHAPTER 13. NUMERICAL DIFFERENTIATION AND INTEGRATION Thus, an approximation to the value of first derivative at x = x 0 i.e. u = 0 is obtained as : df dx x=x0 = 1 h ¸ ∆y 0 2 y 0 2 + 3 y 0 3 − + (−1) (n−1) n y 0 n . (13.2.3) Similarly, using Stirling’s formula: f(x 0 +hu) ≈ y 0 +u ∆y −1 + ∆y 0 2 +u 2 2 y −1 2! + u(u 2 −1) 2 3 y −2 + ∆ 3 y −1 3! +u 2 (u 2 −1) 4 y −2 4! + u(u 2 −1)(u 2 −2 2 ) 2 5 y −3 + ∆ 5 y −2 5! + (13.2.4) Therefore, df dx = 1 h df du 1 h ¸ ∆y −1 + ∆y 0 2 +u∆ 2 y −1 + (3u 2 −1) 2 (∆ 3 y −2 + ∆ 3 y −1 ) 3! +2u(2u 2 −1) 4 y −2 4! + (5u 4 −15u 2 + 4)(∆ 5 y −3 + ∆ 5 y −2 ) 2 5! + (13.2.5) Thus, the derivative at x = x 0 is obtained as: df dx x=x 0 = 1 h ¸ ∆y −1 + ∆y 0 2 (1) 2 (∆ 3 y −2 + ∆ 3 y −1 ) 3! + 4 (∆ 5 y −3 + ∆ 5 y −2 ) 2 5! + . (13.2.6) Remark 13.2.1 Numerical differentiation using Stirling’s formula is found to be more accurate than that with the Newton’s difference formulae. Also it is more convenient to use. Now higher derivatives can be found by successively differentiating the interpolating polynomials. Thus e.g. using (13.2.2), we get the second derivative at x = x 0 as d 2 f dx 2 x=x0 = 1 h 2 ¸ 2 y 0 −∆ 3 y 0 + 2 11 ∆ 4 y 0 4! . 
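Formula (13.2.3) is likewise straightforward to program. The sketch below (added here as an illustration, not from the original notes; plain Python) sums the alternating series of leading forward differences. The sample table holds values of e^x rounded to four decimal places with h = 0.05 and x_0 = 1.05, so the output can be checked against the exact derivative e^1.05 ≈ 2.85765.

```python
# Sketch of formula (13.2.3): first derivative at the initial tabular point x0,
#   f'(x0) ~ (1/h) [ Dy0 - D^2 y0 / 2 + D^3 y0 / 3 - ... ]

def derivative_at_x0(y, h):
    diffs, s, sign = list(y), 0.0, 1.0
    for k in range(1, len(y)):
        # replace diffs by the next order of forward differences
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        s += sign * diffs[0] / k          # (-1)^(k-1) * D^k y0 / k
        sign = -sign
    return s / h

# Illustrative table: y = e^x rounded to 4 places at x = 1.05, 1.10, ..., 1.25.
y = [2.8577, 3.0042, 3.1582, 3.3201, 3.4903]
print(derivative_at_x0(y, 0.05))          # about 2.85767; exact value 2.85765...
```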
Example 13.2.2 Compute from following table the value of the derivative of y = f(x) at x = 1.7489 : x 1.73 1.74 1.75 1.76 1.77 y 1.772844100 1.155204006 1.737739435 1.720448638 1.703329888 Solution: We note here that x 0 = 1.75, h = 0.01, so u = (1.7489 − 1.75)/0.01 = −0.11, and ∆y 0 = −.0017290797, ∆ 2 y 0 = .0000172047, ∆ 3 y 0 = −.0000001712, ∆y −1 = −.0017464571, ∆ 2 y −1 = .0000173774, ∆ 3 y −1 = −.0000001727, 3 y −2 = −.0000001749, ∆ 4 y −2 = −.0000000022 Thus, f (1.7489) is obtained as: (i) Using Newton’s Forward difference formula, f (1.4978) ≈ 1 0.01 ¸ −0.0017290797 + (2 −0.11 −1) 0.0000172047 2 + (3 (−0.11) 2 −6 −0.11 + 2) −0.0000001712 3! = −0.173965150143. (ii) Using Stirling’s formula, we get: f (1.4978) ≈ 1 .01 ¸ (−.0017464571) + (−.0017290797) 2 + (−0.11) .0000173774 + (3 (−0.11) 2 −1) 2 ((−.0000001749) + (−.0000001727)) 3! + 2 (−0.11) (2(−0.11) 2 −1) (−.0000000022) 4! = −0.17396520185 13.2. NUMERICAL DIFFERENTIATION 233 It may be pointed out here that the above table is for f(x) = e −x , whose derivative has the value -0.1739652000 at x = 1.7489. Example 13.2.3 Using only the first term in the formula (13.2.6) show that f (x 0 ) ≈ y 1 −y −1 2h . Hence compute from following table the value of the derivative of y = e x at x = 1.15 : x 1.05 1.15 1.25 e x 2.8577 3.1582 3.4903 Solution: Truncating the formula (13.2.6)after the first term, we get: f (x 0 ) ≈ 1 h ¸ ∆y −1 + ∆y 0 2 = (y 0 −y −1 ) + (y 1 −y 0 ) 2h = y 1 −y −1 2h . Now from the given table, taking x 0 = 1.15, we have f (1.15) ≈ 3.4903 −2.8577 2 0.1 = 3.1630. Note the error between the computed value and the true value is 3.1630 −3.1582 = 0.0048. Exercise 13.2.4 Retaining only the first two terms in the formula (13.2.3), show that f (x 0 ) ≈ −3y 0 + 4y 1 −y 2 2h . Hence compute the derivative of y = e x at x = 1.15 from the following table: x 1.15 1.20 1.25 e x 3.1582 3.3201 3.4903 Also compare your result with the computed value in the example (13.2.3). Exercise 13.2.5 Retaining only the first two terms in the formula (13.2.6), show that f (x 0 ) ≈ y −2 −8y −1 + 8y 1 −y 2 12h . Hence compute from following table the value of the derivative of y = e x at x = 1.15 : x 1.05 1.10 1.15 1.20 1.25 e x 2.8577 3.0042 3.1582 3.3201 3.4903 Exercise 13.2.6 Following table gives the values of y = f(x) at the tabular points x = 0 + 0.05 k, k = 0, 1, 2, 3, 4, 5. x 0.00 0.05 0.10 0.15 0.20 0.25 y 0.00000 0.10017 0.20134 0.30452 0.41075 0.52110 Compute (i)the derivatives y/ and y// at x = 0.0 by using the formula (13.2.2). (ii)the second derivative y// at x = 0.1 by using the formula (13.2.6). 234 CHAPTER 13. NUMERICAL DIFFERENTIATION AND INTEGRATION Similarly, if we have tabular points which are not equidistant, one can use Lagrange’s interpolating polynomial, which is differentiated to get an estimate of first derivative. We shall see the result for four tabular points and then give the general formula. 
Let x 0 , x 1 , x 2 , x 3 be the tabular points, then the corresponding Lagrange’s formula gives us: f(x) ≈ (x −x1)(x −x2)(x −x3) (x0 −x1)(x0 −x2)(x0 −x3) f(x0) + (x −x0)(x −x2)(x −x3) (x1 −x0)(x1 −x2)(x1 −x3) f(x1) + (x −x0)(x −x1)(x −x3) (x2 −x0)(x2 −x1)(x2 −x3) f(x2) + (x −x0)(x −x1)(x −x2) (x3 −x0)(x3 −x1)(x3 −x2) f(x3) Differentiation of the above interpolating polynomial gives: df(x) dx (x −x2)(x −x3) + (x −x1)(x −x2) + (x −x1)(x −x3) (x0 −x1)(x0 −x2)(x0 −x3) f(x0) + (x −x2)(x −x3) + (x −x0)(x −x2) + (x −x0)(x −x3) (x1 −x0)(x1 −x2)(x1 −x3) f(x1) + (x −x1)(x −x2) + (x −x0)(x −x1) + (x −x0)(x −x3) (x2 −x0)(x2 −x1)(x2 −x3) f(x2) + (x −x1)(x −x2) + (x −x0)(x −x2) + (x −x0)(x −x1) (x3 −x0)(x3 −x1)(x3 −x2) f(x3) = 3 Y r=0 (x −xr) 2 6 6 6 4 3 X i=0 f(xi) (x −xi) 3 Q j=0, j=i (xi −xj) 0 @ 3 X k=0, k=i 1 (x −x k ) 1 A 3 7 7 7 5 . (13.2.7) In particular, the value of the derivative at x = x 0 is given by df dx ˛ ˛ ˛ ˛ x=x 0 » 1 (x0 −x1) + 1 (x0 −x2) + 1 (x0 −x3) f(x0) + (x0 −x2)(x0 −x3) (x1 −x0)(x1 −x2)(x1 −x3) f(x1) + (x0 −x1)(x0 −x3) (x2 −x0)(x2 −x1)(x2 −x3) f(x2) + (x0 −x1)(x0 −x2) (x3 −x0)(x3 −x1)(x3 −x2) f(x3). Now, generalizing Equation (13.2.7) for n + 1 tabular points x 0 , x 1 , , x n we get: df dx = n Y r=0 (x −xr) 2 6 6 4 n X i=0 f(xi) (x −xi) n Q j=0, j=i (xi −xj) 0 @ n X k=0, k=i 1 (x −x k ) 1 A 3 7 7 5 . Example 13.2.7 Compute from following table the value of the derivative of y = f(x) at x = 0.6 : x 0.4 0.6 0.7 y 3.3836494 4.2442376 4.7275054 Solution: The given tabular points are not equidistant, so we use Lagrange’s interpolating polynomial with three points: x 0 = 0.4, x 1 = 0.6, x 2 = 0.7 . Now differentiating this polynomial the derivative of the function at x = x 1 is obtained in the following form: df dx ˛ ˛ ˛ ˛ x=x 1 (x1 −x2) (x0 −x1)(x0 −x2) f(x0) + » 1 (x1 −x2) + 1 (x1 −x0) f(x1) + (x1 −x0) (x2 −x0)(x2 −x1) f(x2). Now, using the values from the table, we get: df dx x=0.6 (0.6 −0.7) (0.4 −0.6)(0.4 −0.7) 3.3836494 + ¸ 1 (0.6 −0.7) + 1 (0.6 −0.4) 4.2442376 + (0.6 −0.4) (0.7 −0.4)(0.7 −0.6) 4.7225054 = −5.63941567 −21.221188 + 31.48336933 = 4.6227656. 13.3. NUMERICAL INTEGRATION 235 For the sake of comparison, it may be pointed out here that the above table is for the function f(x) = 2e x +x, and the value of its derivative at x = 0.6 is 4.6442376. Exercise 13.2.8 For the function, whose tabular values are given in the above example(13.2.8), compute the value of its derivative at x = 0.5. Remark 13.2.9 It may be remarked here that the numerical differentiation for higher derivatives does not give very accurate results and so is not much preferred. 13.3 Numerical Integration Numerical Integration is the process of computing the value of a definite integral, b a f(x)dx, when the values of the integrand function, y = f(x) are given at some tabular points. As in the case of Numerical differentiation, here also the integrand is first replaced with an interpolating polynomial, and then the integrating polynomial is integrated to compute the value of the definite integral. This gives us ’quadrature formula’ for numerical integration. In the case of equidistant tabular points, either the Newton’s formulae or Stirling’s formula are used. Otherwise, one uses Lagrange’s formula for the interpolating polynomial. We shall consider below the case of equidistant points: x 0 , x 1 , , x n . Let f(x k ) = y k be the nodal value at the tabular point x k for k = 0, 1, , x n , where x 0 = a and x n = x 0 + nh = b. 
Now, a general quadrature formula is obtained by replacing the integrand by Newton’s forward difference interpolating polynomial. Thus, we get, b a f(x)dx = b a ¸ y 0 + ∆y 0 h (x −x 0 ) + 2 y 0 2!h 2 (x −x 0 )(x −x 1 ) + 3 y 0 3!h 3 (x −x 0 )(x −x 1 )(x −x 2 ) + 4 y 0 4!h 4 (x −x 0 )(x −x 1 )(x −x 2 )(x − x 3 ) + dx This on using the transformation x = x 0 +hu gives: b a f(x)dx = h n 0 ¸ y 0 +u∆y 0 + 2 y 0 2! u(u −1) + 3 y 0 3! u(u −1)(u −2) + 4 y 0 4! u(u −1)(u −2)(u −3) + du which on term by term integration gives, b a f(x)dx = h ¸ ny 0 + n 2 2 ∆y 0 + 2 y 0 2! n 3 3 n 2 2 + 3 y 0 3! n 4 4 −n 3 +n 2 + 4 y 0 4! n 5 5 3n 4 2 + 11n 3 3 −3n 2 + (13.3.1) For n = 1, i.e., when linear interpolating polynomial is used then, we have b a f(x)dx = h ¸ y 0 + ∆y 0 2 = h 2 [y 0 +y 1 ] . (13.3.2) 236 CHAPTER 13. NUMERICAL DIFFERENTIATION AND INTEGRATION Similarly, using interpolating polynomial of degree 2 (i.e. n = 2), we obtain, b a f(x)dx = h ¸ 2y 0 + 2∆y 0 + 8 3 4 2 2 y 0 2 = 2h ¸ y 0 + (y 1 −y 0 ) + 1 3 y 2 −2y 1 +y 0 2 = h 3 [y 0 + 4y 1 +y 2 ] . (13.3.3) In the above we have replaced the integrand by an interpolating polynomial over the whole interval [a, b] and then integrated it term by term. However, this process is not very useful. More useful Numerical integral formulae are obtained by dividing the interval [a, b] in n sub-intervals [x k , x k+1 ], where, x k = x 0 +kh for k = 0, 1, , n with x 0 = a, x n = x 0 +nh = b. 13.3.2 Trapezoidal Rule Here, the integral is computed on each of the sub-intervals by using linear interpolating formula, i.e. for n = 1 and then summing them up to obtain the desired integral. Note that b a f(x)dx = x1 x0 f(x)dx + x2 x1 f(x)dx + + x k x k+1 f(x)dx + + xn−1 xn f(x)dx Now using the formula ( 13.3.2) for n = 1 on the interval [x k , x k+1 ], we get, x k+1 x k f(x)dx = h 2 [y k +y k+1 ] . Thus, we have, b a f(x)dx = h 2 [y 0 +y 1 ] + h 2 [y 1 +y 2 ] + + h 2 [y k +y k+1 ] + + h 2 [y n−2 +y n−1 ] + h 2 [y n−1 +y n ] i.e. b a f(x)dx = h 2 [y 0 + 2y 1 + 2y 2 + + 2y k + + 2y n−1 +y n ] = h ¸ y 0 +y n 2 + n−1 ¸ i=1 y i ¸ . (13.3.4) This is called Trapezoidal Rule. It is a simple quadrature formula, but is not very accurate. Remark 13.3.1 An estimate for the error E 1 in numerical integration using the Trapezoidal rule is given by E 1 = − b −a 12 2 y, where ∆ 2 y is the average value of the second forward differences. Recall that in the case of linear function, the second forward differences is zero, hence, the Trapezoidal rule gives exact value of the integral if the integrand is a linear function. 13.3. NUMERICAL INTEGRATION 237 Example 13.3.2 Using Trapezoidal rule compute the integral 1 0 e x 2 dx, where the table for the values of y = e x 2 is given below: x 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 y 1.00000 1.01005 1.04081 1.09417 1.17351 1.28402 1.43332 1.63231 1.89648 2.2479 2.71828 Solution: Here, h = 0.1, n = 10, y 0 +y 10 2 = 1.0 + 2.71828 2 = 1.85914, and 9 ¸ i=1 y i = 12.81257. Thus, 1 0 e x 2 dx = 0.1 [1.85914 + 12.81257] = 1.467171 13.3.3 Simpson’s Rule If we are given odd number of tabular points,i.e. n is even, then we can divide the given integral of integration in even number of sub-intervals [x 2k , x 2k+2 ]. Note that for each of these sub-intervals, we have the three tabular points x 2k , x 2k+1 , x 2k+2 and so the integrand is replaced with a quadratic interpolating polynomial. Thus using the formula (13.3.3), we get, x 2k+2 x 2k f(x)dx = h 3 [y 2k + 4y 2k+1 +y 2k+2 ] . 
In view of this, we have b a f(x)dx = x2 x0 f(x)dx + x4 x2 f(x)dx + + x 2k+2 x 2k f(x)dx + + xn xn−2 f(x)dx = h 3 [(y 0 + 4y 1 +y 2 ) + (y 2 + 4y 3 +y 4 ) + + (y n−2 + 4y n−1 +y n )] = h 3 [y 0 + 4y 1 + 2y 2 + 4y 3 + 2y 4 + + 2y n−2 + 4y n−1 +y n ] , which gives the second quadrature formula as follows: b a f(x)dx = h 3 [(y 0 +y n ) + 4 (y 1 +y 3 + +y 2k+1 + +y n−1 ) + 2 (y 2 +y 4 + +y 2k + + y n−2 )] = h 3 (y 0 +y n ) + 4 ¸ n−1 ¸ i=1, i−odd y i ¸ + 2 ¸ n−2 ¸ i=2, i−even y i ¸ ¸ ¸ . (13.3.5) This is known as Simpson’s rule. Remark 13.3.3 An estimate for the error E 2 in numerical integration using the Simpson’s rule is given by E 2 = − b −a 180 4 y, (13.3.6) where ∆ 4 y is the average value of the forth forward differences. 238 CHAPTER 13. NUMERICAL DIFFERENTIATION AND INTEGRATION Example 13.3.4 Using the table for the values of y = e x 2 as is given in Example 13.3.2, compute the integral 1 0 e x 2 dx, by Simpson’s rule. Also estimate the error in its calculation and compare it with the error using Trapezoidal rule. Solution: Here, h = 0.1, n = 10, thus we have odd number of nodal points. Further, y 0 +y 10 = 1.0 + 2.71828 = 3.71828, 9 ¸ i=1, i−odd y i = y 1 +y 3 +y 5 +y 7 +y 9 = 7.26845, and 8 ¸ i=2, i−even y i = y 2 +y 4 +y 6 +y 8 = 5.54412. Thus, 1 0 e x 2 dx = 0.1 3 [3.71828 + 4 7.268361 + 2 5.54412] = 1.46267733 To find the error estimates, we consider the forward difference table, which is given below: x i y i ∆y i 2 y i 3 y i 4 y i 0.0 1.00000 0.01005 0.02071 0.00189 0.00149 0.1 1.01005 0.03076 0.02260 0.00338 0.00171 0.2 1.04081 0.05336 0.02598 0.00519 0.00243 0.3 1.09417 0.07934 0.03117 0.00762 0.00320 0.4 1.17351 0.11051 0.3879 0.01090 0.00459 0.5 1.28402 0.14930 0.04969 0.01549 0.00658 0.6 1.43332 0.19899 0.06518 0.02207 0.00964 0.7 1.63231 0.26417 0.08725 0.03171 0.8 1.89648 0.35142 0.11896 0.9 2.24790 0.47038 1.0 2.71828 Thus, error due to Trapezoidal rule is, E 1 = − 1 −0 12 2 y = − 1 12 0.02071 + 0.02260 + 0.02598 + 0.03117 + 0.03879 + 0.04969 + 0.06518 + 0.08725 + 0.11896 9 = −0.004260463. Similarly, error due to Simpson’s rule is, E 2 = − 1 −0 180 4 y = − 1 180 0.00149 + 0.00171 + 0.00243 + 0.00328 + 0.00459 + 0.00658 + 0.00964 7 = −2.35873 10 −5 . It shows that the error in numerical integration is much less by using Simpson’s rule. Example 13.3.5 Compute the integral 1 0.05 f(x)dx, where the table for the values of y = f(x) is given below: x 0.05 0.1 0.15 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 y 0.0785 0.1564 0.2334 0.3090 0.4540 0.5878 0.7071 0.8090 0.8910 0.9511 0.9877 1.0000 Solution: Note that here the points are not given to be equidistant, so as such we can not use any of the above two formulae. However, we notice that the tabular points 0.05, 0.10, 0, 15 and 0.20 are equidistant 13.3. NUMERICAL INTEGRATION 239 and so are the tabular points 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 and 1.0. Now we can divide the interval in two subinterval: [0.05, 0.2] and [0.2, 1.0]; thus, 1 0.05 f(x)dx = 0.2 0.05 f(x)dx + 1 0.2 f(x)dx . The integrals then can be evaluated in each interval. We observe that the second set has odd number of points. Thus, the first integral is evaluated by using Trapezoidal rule and the second one by Simpson’s rule (of course, one could have used Trapezoidal rule in both the subintervals). For the first integral h = 0.05 and for the second one h = 0.1. 
Thus, 0.2 0.05 f(x)dx = 0.05 ¸ 0.0785 + 0.3090 2 + 0.1564 + 0.2334 = 0.0291775, and 1.0 0.2 f(x)dx = 0.1 3 ¸ (0.3090 + 1.0000) + 4 (0.4540 + 0.7071 + 0.8910 + 0.9877) +2 (0.5878 + 0.8090 + 0.9511) = 0.6054667, which gives, 1 0.05 f(x)dx = 0.0291775 + 0.6054667 = 0.6346442 It may be mentioned here that in the above integral, f(x) = sin(πx/2) and that the value of the integral is 0.6346526. It will be interesting for the reader to compute the two integrals using Trapezoidal rule and compare the values. Exercise 13.3.6 1. Using Trapezoidal rule, compute the integral b a f(x)dx, where the table for the values of y = f(x) is given below. Also find an error estimate for the computed value. (a) x a=1 2 3 4 5 6 7 8 9 b=10 y 0.09531 0.18232 0.26236 0.33647 0.40546 0.47000 0.53063 0.58779 0.64185 0.69314 (b) x a=1.50 1.55 1.60 1.65 1.70 1.75 b=1.80 y 0.40546 0.43825 0.47000 0.5077 0.53063 0.55962 0.58779 (c) x a = 1.0 1.5 2.0 2.5 3.0 b = 3.5 y 1.1752 2.1293 3.6269 6.0502 10.0179 16.5426 2. Using Simpson’s rule, compute the integral b a f(x)dx. Also get an error estimate of the computed integral. (a) Use the table given in Exercise 13.3.6.1b. (b) x a = 0.5 1.0 1.5 2.0 2.5 3.0 b = 3.5 y 0.493 0.946 1.325 1.605 1.778 1.849 1.833 3. Compute the integral 1.5 0 f(x)dx, where the table for the values of y = f(x) is given below: x 0.0 0.5 0.7 0.9 1.1 1.2 1.3 1.4 1.5 y 0.00 0.39 0.77 1.27 1.90 2.26 2.65 3.07 3.53 252 CHAPTER 13. NUMERICAL DIFFERENTIATION AND INTEGRATION Chapter 15 Appendix 15.1 System of Linear Equations Theorem 15.1.1 (Existence and Non-existence) Consider a linear system Ax = b, where A is a m n matrix, and x, b are vectors with orders n 1, and m 1, respectively. Suppose rank (A) = r and rank([A b]) = r a . Then exactly one of the following statement holds: 1. if r a = r < n, the set of solutions of the linear system is an infinite set and has the form ¦u 0 +k 1 u 1 +k 2 u 2 + +k n−r u n−r : k i ∈ R, 1 ≤ i ≤ n −r¦, where u 0 , u 1 , . . . , u n−r are n 1 vectors satisfying Au 0 = b and Au i = 0 for 1 ≤ i ≤ n −r. 2. if r a = r = n, the solution set of the linear system has a unique n 1 vector x 0 satisfying Ax 0 = 0. 3. If r < r a , the linear system has no solution. Proof. Suppose [C d] is the row reduced echelon form of the augmented matrix [A b]. Then by Theorem 2.2.5, the solution set of the linear system [C d] is same as the solution set of the linear system [A b]. So, the proof consists of understanding the solution set of the linear system Cx = d. 1. Let r = r a < n. Then [C d] has its first r rows as the non-zero rows. So, by Remark 2.3.5, the matrix C = [c ij ] has r leading columns. Let the leading columns be 1 ≤ i 1 < i 2 < < i r ≤ n. Then we observe the following: (a) the entries c li l for 1 ≤ l ≤ r are leading terms. That is, for 1 ≤ l ≤ r, all entries in the i th l column of C is zero, except the entry c li l . The entry c li l = 1; (b) corresponding is each leading column, we have r basic variables, x i1 , x i2 , . . . , x ir ; (c) the remaining n − r columns correspond to the n − r free variables (see Remark 2.3.5), x j1 , x j2 , . . . , x jn−r . So, the free variables correspond to the columns 1 ≤ j 1 < j 2 < < j n−r ≤ n. For 1 ≤ l ≤ r, consider the l th row of [C d]. The entry c li l = 1 and is the leading term. Also, the first r rows of the augmented matrix [C d] give rise to the linear equations x i l + n−r ¸ k=1 c lj k x j k = d l , for 1 ≤ l ≤ r. 253 254 CHAPTER 15. APPENDIX These equations can be rewritten as x i l = d l n−r ¸ k=1 c lj k x j k = d l , for 1 ≤ l ≤ r. 
Let y t = (x i1 , . . . , x ir , x j1 , . . . , x jn−r ). Then the set of solutions consists of y = x i1 . . . x ir x j1 . . . x jn−r ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ = d 1 n−r ¸ k=1 c 1j k x j k . . . d r n−r ¸ k=1 c rj k x j k x j1 . . . x jn−r ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ . (15.1.1) As x js for 1 ≤ s ≤ n−r are free variables, let us assign arbitrary constants k s ∈ R to x js . That is, for 1 ≤ s ≤ n −r, x js = k s . Then the set of solutions is given by y = d 1 n−r ¸ s=1 c 1js x js . . . d r n−r ¸ s=1 c rjs x js x j1 . . . x jn−r ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ = d 1 n−r ¸ s=1 c 1js k s . . . d r n−r ¸ s=1 c rjs k s k 1 . . . k n−r ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ = d 1 . . . d r 0 0 . . . 0 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ −k 1 c 1j1 . . . c rj1 −1 0 . . . 0 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ −k 2 c 1j2 . . . c rj2 0 −1 . . . 0 0 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ − −k n−r c 1jn−r . . . c rjn−r 0 0 . . . 0 −1 ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ ¸ . Let us write v 0 t = (d 1 , d 2 , . . . , d r , 0, . . . , 0) t . Also, for 1 ≤ i ≤ n −r, let v i be the vector associated with k i in the above representation of the solution y. Observe the following: (a) if we assign k s = 0, for 1 ≤ s ≤ n −r, we get Cv 0 = Cy = d. (15.1.2) (b) if we assign k 1 = 1 and k s = 0, for 2 ≤ s ≤ n −r, we get d = Cy = C(v 0 +v 1 ). (15.1.3) So, using (15.1.2), we get Cv 1 = 0. (c) in general, if we assign k t = 1 and k s = 0, for 1 ≤ s = t ≤ n −r, we get d = Cy = C(v 0 +v t ). (15.1.4) So, using (15.1.2), we get Cv t = 0. 15.1. SYSTEM OF LINEAR EQUATIONS 255 Note that a rearrangement of the entries of y will give us the solution vector x t = (x 1 , x 2 , . . . , x n ) t . Suppose that for 0 ≤ i ≤ n −r, the vectors u i ’s are obtained by applying the same rearrangement to the entries of v i ’s which when applied to y gave x. Therefore, we have Cu 0 = d and for 1 ≤ i ≤ n −r, Cu i = 0. Now, using equivalence of the linear system Ax = b and Cx = d gives Au 0 = b and for 1 ≤ i ≤ n −r, Au i = 0. Thus, we have obtained the desired result for the case r = r 1 < n. 2. r = r a = n, m ≥ n. Here the first n rows of the row reduced echelon matrix [C d] are the non-zero rows. Also, the number of columns in C equals n = rank (A) = rank (C). So, by Remark 2.3.5, all the columns of C are leading columns and all the variables x 1 , x 2 , . . . , x n are basic variables. Thus, the row reduced echelon form [C d] of [A b] is given by [C d] = ¸ I n ˜ d 0 0 ¸ . Therefore, the solution set of the linear system Cx = d is obtained using the equation I n x = ˜ d. This gives us, a solution as x 0 = ˜ d. Also, by Theorem 2.3.11, the row reduced form of a given matrix is unique, the solution obtained above is the only solution. That is, the solution set consists of a single vector ˜ d. 3. r < r a . As C has n columns, the row reduced echelon matrix [C d] has n + 1 columns. The condition, r < r a implies that r a = r + 1. We now observe the following: (a) as rank(C) = r, the (r + 1)th row of C consists of only zeros. (b) Whereas the condition r a = r + 1 implies that the (r + 1) th row of the matrix [C d] is non-zero. Thus, the (r + 1) th row of [C d] is of the form (0, . . . , 0, 1). Or in other words, d r+1 = 1. Thus, for the equivalent linear system Cx = d, the (r + 1) th equation is 0 x 1 + 0 x 2 + + 0 x n = 1. This linear equation has no solution. Hence, in this case, the linear system Cx = d has no solution. Therefore, by Theorem 2.2.5, the linear system Ax = b has no solution. We now state a corollary whose proof is immediate from previous results. 
Corollary 15.1.2 Consider the linear system Ax = b. Then the two statements given below cannot hold together. 1. The system Ax = b has a unique solution for every b. 2. The system Ax = 0 has a non-trivial solution. 256 CHAPTER 15. APPENDIX 15.2 Determinant In this section, S denotes the set ¦1, 2, . . . , n¦. Definition 15.2.1 1. A function σ : S−→S is called a permutation on n elements if σ is both one to one and onto. 2. The set of all functions σ : S−→S that are both one to one and onto will be denoted by o n . That is, o n is the set of all permutations of the set ¦1, 2, . . . , n¦. Example 15.2.2 1. In general, we represent a permutation σ by σ = 1 2 n σ(1) σ(2) σ(n) . This representation of a permutation is called a two row notation for σ. 2. For each positive integer n, o n has a special permutation called the identity permutation, denoted Id n , such that Id n (i) = i for 1 ≤ i ≤ n. That is, Id n = 1 2 n 1 2 n . 3. Let n = 3. Then o 3 = τ 1 = 1 2 3 1 2 3 , τ 2 = 1 2 3 1 3 2 , τ 3 = 1 2 3 2 1 3 , τ 4 = 1 2 3 2 3 1 , τ 5 = 1 2 3 3 1 2 , τ 6 = 1 2 3 3 2 1 ¸ (15.2.5) Remark 15.2.3 1. Let σ ∈ o n . Then σ is determined if σ(i) is known for i = 1, 2, . . . , n. As σ is both one to one and onto, ¦σ(1), σ(2), . . . , σ(n)¦ = S. So, there are n choices for σ(1) (any element of S), n − 1 choices for σ(2) (any element of S different from σ(1)), and so on. Hence, there are n(n − 1)(n − 2) 3 2 1 = n! possible permutations. Thus, the number of elements in o n is n!. That is, [o n [ = n!. 2. Suppose that σ, τ ∈ o n . Then both σ and τ are one to one and onto. So, their composition map σ ◦ τ, defined by (σ ◦ τ)(i) = σ τ(i) , is also both one to one and onto. Hence, σ ◦ τ is also a permutation. That is, σ ◦ τ ∈ o n . 3. Suppose σ ∈ o n . Then σ is both one to one and onto. Hence, the function σ −1 : S−→S defined by σ −1 (m) = if and only if σ() = m for 1 ≤ m ≤ n, is well defined and indeed σ −1 is also both one to one and onto. Hence, for every element σ ∈ o n , σ −1 ∈ o n and is the inverse of σ. 4. Observe that for any σ ∈ o n , the compositions σ ◦ σ −1 = σ −1 ◦ σ = Id n . Proposition 15.2.4 Consider the set of all permutations o n . Then the following holds: 1. Fix an element τ ∈ o n . Then the sets ¦σ ◦ τ : σ ∈ o n ¦ and ¦τ ◦ σ : σ ∈ o n ¦ have exactly n! elements. Or equivalently, o n = ¦τ ◦ σ : σ ∈ o n ¦ = ¦σ ◦ τ : σ ∈ o n ¦. 2. o n = ¦σ −1 : σ ∈ o n ¦. Proof. For the first part, we need to show that given any element α ∈ o n , there exists elements β, γ ∈ o n such that α = τ ◦ β = γ ◦ τ. It can easily be verified that β = τ −1 ◦ α and γ = α ◦ τ −1 . For the second part, note that for any σ ∈ o n , (σ −1 ) −1 = σ. Hence the result holds. 15.2. DETERMINANT 257 Definition 15.2.5 Let σ ∈ o n . Then the number of inversions of σ, denoted n(σ), equals [¦(i, j) : i < j, σ(i) > σ(j) ¦[. Note that, for any σ ∈ o n , n(σ) also equals n ¸ i=1 [¦σ(j) < σ(i), for j = i + 1, i + 2, . . . , n¦[. Definition 15.2.6 A permutation σ ∈ o n is called a transposition if there exists two positive integers m, r ∈ ¦1, 2, . . . , n¦ such that σ(m) = r, σ(r) = m and σ(i) = i for 1 ≤ i = m, r ≤ n. For the sake of convenience, a transposition σ for which σ(m) = r, σ(r) = m and σ(i) = i for 1 ≤ i = m, r ≤ n will be denoted simply by σ = (m r) or (r m). Also, note that for any transposition σ ∈ o n , σ −1 = σ. That is, σ ◦ σ = Id n . Example 15.2.7 1. The permutation τ = 1 2 3 4 3 2 1 4 is a transposition as τ(1) = 3, τ(3) = 1, τ(2) = 2 and τ(4) = 4. Here note that τ = (1 3) = (3 1). 
Also, check that n(τ) = [¦(1, 2), (1, 3), (2, 3)¦[ = 3. 2. Let τ = 1 2 3 4 5 6 7 8 9 4 2 3 5 1 9 8 7 6 . Then check that n(τ) = 3 + 1 + 1 + 1 + 0 + 3 + 2 + 1 = 12. 3. Let , m and r be distinct element from ¦1, 2, . . . , n¦. Suppose τ = (m r) and σ = (m ). Then (τ ◦ σ)() = τ σ() = τ(m) = r, (τ ◦ σ)(m) = τ σ(m) = τ() = (τ ◦ σ)(r) = τ σ(r) = τ(r) = m, and (τ ◦ σ)(i) = τ σ(i) = τ(i) = i if i = , m, r. Therefore, τ ◦ σ = (m r) ◦ (m ) = 1 2 m r n 1 2 r m n = (r l) ◦ (r m). Similarly check that σ ◦ τ = 1 2 m r n 1 2 m r n . With the above definitions, we state and prove two important results. Theorem 15.2.8 For any σ ∈ o n , σ can be written as composition (product) of transpositions. Proof. We will prove the result by induction on n(σ), the number of inversions of σ. If n(σ) = 0, then σ = Id n = (1 2) ◦ (1 2). So, let the result be true for all σ ∈ o n with n(σ) ≤ k. For the next step of the induction, suppose that τ ∈ o n with n(τ) = k + 1. Choose the smallest positive number, say , such that τ(i) = i, for i = 1, 2, . . . , −1 and τ() = . As τ is a permutation, there exists a positive number, say m, such that τ() = m. Also, note that m > . Define a transposition σ by σ = ( m). Then note that (σ ◦ τ)(i) = i, for i = 1, 2, . . . , . 258 CHAPTER 15. APPENDIX So, the definition of “number of inversions” and m > implies that n(σ ◦ τ) = n ¸ i=1 [¦(σ ◦ τ)(j) < (σ ◦ τ)(i), for j = i + 1, i + 2, . . . , n¦[ = ¸ i=1 [¦(σ ◦ τ)(j) < (σ ◦ τ)(i), for j = i + 1, i + 2, . . . , n¦[ + n ¸ i=+1 [¦(σ ◦ τ)(j) < (σ ◦ τ)(i), for j = i + 1, i + 2, . . . , n¦[ = n ¸ i=+1 [¦(σ ◦ τ)(j) < (σ ◦ τ)(i), for j = i + 1, i + 2, . . . , n¦[ n ¸ i=+1 [¦τ(j) < τ(i), for j = i + 1, i + 2, . . . , n¦[ as m > , < (m−) + n ¸ i=+1 [¦τ(j) < τ(i), for j = i + 1, i + 2, . . . , n¦[ = n(τ). Thus, n(σ ◦ τ) < k + 1. Hence, by the induction hypothesis, the permutation σ ◦ τ is a composition of transpositions. That is, there exist transpositions, say α i , 1 ≤ i ≤ t such that σ ◦ τ = α 1 ◦ α 2 ◦ ◦ α t . Hence, τ = σ ◦ α 1 ◦ α 2 ◦ ◦ α t as σ ◦ σ = Id n for any transposition σ ∈ o n . Therefore, by mathematical induction, the proof of the theorem is complete. Before coming to our next important result, we state and prove the following lemma. Lemma 15.2.9 Suppose there exist transpositions α i , 1 ≤ i ≤ t such that Id n = α 1 ◦ α 2 ◦ ◦ α t , then t is even. Proof. Observe that t = 1 as the identity permutation is not a transposition. Hence, t ≥ 2. If t = 2, we are done. So, let us assume that t ≥ 3. We will prove the result by the method of mathematical induction. The result clearly holds for t = 2. Let the result be true for all expressions in which the number of transpositions t ≤ k. Now, let t = k + 1. Suppose α 1 = (m r). Note that the possible choices for the composition α 1 ◦ α 2 are (m r) ◦ (m r) = Idn, (m r) ◦ (m ) = (r ) ◦ (r m), (m r) ◦ (r ) = ( r) ◦ ( m) and (m r) ◦ ( s) = ( s) ◦ (m r), where and s are distinct elements of ¦1, 2, . . . , n¦ and are different from m, r. In the first case, we can remove α 1 ◦ α 2 and obtain Id n = α 3 ◦ α 4 ◦ ◦ α t . In this expression for identity, the number of transpositions is t −2 = k −1 < k. So, by mathematical induction, t −2 is even and hence t is also even. In the other three cases, we replace the original expression for α 1 ◦ α 2 by their counterparts on the right to obtain another expression for identity in terms of t = k +1 transpositions. 
Before coming to our next important result, we state and prove the following lemma.

Lemma 15.2.9 Suppose there exist transpositions α_i, 1 ≤ i ≤ t, such that Id_n = α_1 ◦ α_2 ◦ · · · ◦ α_t. Then t is even.

Proof. Observe that t ≠ 1, as the identity permutation is not a transposition. Hence, t ≥ 2. If t = 2, we are done. So, let us assume that t ≥ 3. We will prove the result by the method of mathematical induction. The result clearly holds for t = 2. Let the result be true for all expressions in which the number of transpositions is t ≤ k. Now, let t = k + 1.

Suppose α_1 = (m r). Note that the possible choices for the composition α_1 ◦ α_2 are
(m r) ◦ (m r) = Id_n,  (m r) ◦ (m ℓ) = (r ℓ) ◦ (r m),  (m r) ◦ (r ℓ) = (ℓ r) ◦ (ℓ m)  and  (m r) ◦ (ℓ s) = (ℓ s) ◦ (m r),
where ℓ and s are distinct elements of {1, 2, . . . , n} and are different from m, r. In the first case, we can remove α_1 ◦ α_2 and obtain Id_n = α_3 ◦ α_4 ◦ · · · ◦ α_t. In this expression for identity, the number of transpositions is t − 2 = k − 1 < k. So, by mathematical induction, t − 2 is even and hence t is also even. In the other three cases, we replace the original expression for α_1 ◦ α_2 by its counterpart on the right to obtain another expression for identity in terms of t = k + 1 transpositions. But note that in the new expression for identity, the positive integer m does not appear in the first transposition, but appears in the second transposition. We can continue the above process with the second and third transpositions. At this step, either the number of transpositions will reduce by 2 (giving us the result by mathematical induction) or the positive integer m will get shifted to the third transposition. The continuation of this process will at some stage lead to an expression for identity in which the number of transpositions is t − 2 = k − 1 (which will give us the desired result by mathematical induction), or else we will have an expression in which the positive integer m has been shifted to the right most transposition. In the latter case, the positive integer m appears exactly once in the expression for identity, so this expression does not fix m, whereas the identity permutation satisfies Id_n(m) = m. So the latter case leads us to a contradiction. Hence, the process will surely lead to an expression in which the number of transpositions at some stage is t − 2 = k − 1. Therefore, by mathematical induction, the proof of the lemma is complete.

Theorem 15.2.10 Let α ∈ S_n. Suppose there exist transpositions τ_1, τ_2, . . . , τ_k and σ_1, σ_2, . . . , σ_ℓ such that
α = τ_1 ◦ τ_2 ◦ · · · ◦ τ_k = σ_1 ◦ σ_2 ◦ · · · ◦ σ_ℓ.
Then either k and ℓ are both even or both odd.

Proof. The condition τ_1 ◦ τ_2 ◦ · · · ◦ τ_k = σ_1 ◦ σ_2 ◦ · · · ◦ σ_ℓ, together with σ ◦ σ = Id_n for any transposition σ ∈ S_n, implies that
Id_n = τ_1 ◦ τ_2 ◦ · · · ◦ τ_k ◦ σ_ℓ ◦ σ_{ℓ−1} ◦ · · · ◦ σ_1.
Hence by Lemma 15.2.9, k + ℓ is even. Hence, either k and ℓ are both even or both odd. Thus the result follows.

Definition 15.2.11 A permutation σ ∈ S_n is called an even permutation if σ can be written as a composition (product) of an even number of transpositions. A permutation σ ∈ S_n is called an odd permutation if σ can be written as a composition (product) of an odd number of transpositions.

Remark 15.2.12 Observe that if σ and τ are both even or both odd permutations, then the permutations σ ◦ τ and τ ◦ σ are both even. Whereas if one of them is odd and the other even, then the permutations σ ◦ τ and τ ◦ σ are both odd. We use this to define a function on S_n, called the sign of a permutation, as follows:

Definition 15.2.13 Let sgn : S_n → {1, −1} be the function defined by
sgn(σ) = 1 if σ is an even permutation, and sgn(σ) = −1 if σ is an odd permutation.

Example 15.2.14
1. The identity permutation Id_n is an even permutation, whereas every transposition is an odd permutation. Thus, sgn(Id_n) = 1 and for any transposition σ ∈ S_n, sgn(σ) = −1.
2. Using Remark 15.2.12, sgn(σ ◦ τ) = sgn(σ) sgn(τ) for any two permutations σ, τ ∈ S_n.

We are now ready to define the determinant of a square matrix A.

Definition 15.2.15 Let A = [a_{ij}] be an n × n matrix with entries from F. The determinant of A, denoted det(A), is defined as
det(A) = Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · a_{nσ(n)} = Σ_{σ ∈ S_n} sgn(σ) Π_{i=1}^{n} a_{iσ(i)}.

Remark 15.2.16
1. Observe that det(A) is a scalar quantity. The expression for det(A) seems complicated at first glance, but it is very helpful in proving the results related with "properties of determinant".
2. If A = [a_{ij}] is a 3 × 3 matrix, then using (15.2.5),
det(A) = Σ_{σ ∈ S_3} sgn(σ) Π_{i=1}^{3} a_{iσ(i)}
       = sgn(τ_1) Π_{i=1}^{3} a_{iτ_1(i)} + sgn(τ_2) Π_{i=1}^{3} a_{iτ_2(i)} + sgn(τ_3) Π_{i=1}^{3} a_{iτ_3(i)} + sgn(τ_4) Π_{i=1}^{3} a_{iτ_4(i)} + sgn(τ_5) Π_{i=1}^{3} a_{iτ_5(i)} + sgn(τ_6) Π_{i=1}^{3} a_{iτ_6(i)}
       = a_{11}a_{22}a_{33} − a_{11}a_{23}a_{32} − a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} − a_{13}a_{22}a_{31}.
Observe that this expression for det(A) for a 3 × 3 matrix A is the same as that given in (2.6.1).
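Definition 15.2.15 can be evaluated directly for small matrices. The sketch below is my own illustration (it assumes NumPy is available); it implements the permutation expansion and compares the result with numpy.linalg.det. The sign is computed from the parity of the number of inversions, which is consistent with Definition 15.2.13. Since the sum has n! terms, this is instructive rather than practical for large n.

```python
import numpy as np
from itertools import permutations

def sgn(sigma):
    """Sign of a permutation via the parity of its number of inversions."""
    n = len(sigma)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def det_by_permutations(A):
    """det(A) = sum over sigma in S_n of sgn(sigma) * a_{1 sigma(1)} ... a_{n sigma(n)}."""
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for i in range(n):
            term *= A[i, sigma[i]]
        total += term
    return total

A = np.array([[2.0, 1.0, 4.0], [0.0, 3.0, -1.0], [1.0, 1.0, 1.0]])
print(det_by_permutations(A), np.linalg.det(A))   # the two values agree up to round-off
```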
15.3 Properties of Determinant

Theorem 15.3.1 (Properties of Determinant) Let A = [a_{ij}] be an n × n matrix. Then
1. if B is obtained from A by interchanging two rows, then det(B) = −det(A).
2. if B is obtained from A by multiplying a row by c, then det(B) = c det(A).
3. if all the elements of one row are 0, then det(A) = 0.
4. if A is a square matrix having two rows equal, then det(A) = 0.
5. Let B = [b_{ij}] and C = [c_{ij}] be two matrices which differ from the matrix A = [a_{ij}] only in the m-th row for some m. If c_{mj} = a_{mj} + b_{mj} for 1 ≤ j ≤ n, then det(C) = det(A) + det(B).
6. if B is obtained from A by replacing the ℓ-th row by itself plus k times the m-th row, for ℓ ≠ m, then det(B) = det(A).
7. if A is a triangular matrix, then det(A) = a_{11} a_{22} · · · a_{nn}, the product of the diagonal elements.
8. If E is an elementary matrix of order n, then det(EA) = det(E) det(A).
9. A is invertible if and only if det(A) ≠ 0.
10. If B is an n × n matrix, then det(AB) = det(A) det(B).
11. det(A) = det(A^t), where recall that A^t is the transpose of the matrix A.

Proof. Proof of Part 1. Suppose B = [b_{ij}] is obtained from A = [a_{ij}] by the interchange of the ℓ-th and m-th rows. Then b_{ℓj} = a_{mj}, b_{mj} = a_{ℓj} for 1 ≤ j ≤ n, and b_{ij} = a_{ij} for 1 ≤ i ≠ ℓ, m ≤ n, 1 ≤ j ≤ n.

Let τ = (ℓ m) be a transposition. Then by Proposition 15.2.4, S_n = {σ ◦ τ : σ ∈ S_n}. Hence by the definition of determinant and Example 15.2.14.2, we have
det(B) = Σ_{σ ∈ S_n} sgn(σ) Π_{i=1}^{n} b_{iσ(i)}
       = Σ_{σ◦τ ∈ S_n} sgn(σ ◦ τ) Π_{i=1}^{n} b_{i(σ◦τ)(i)}
       = Σ_{σ◦τ ∈ S_n} sgn(τ) sgn(σ) b_{1(σ◦τ)(1)} b_{2(σ◦τ)(2)} · · · b_{ℓ(σ◦τ)(ℓ)} · · · b_{m(σ◦τ)(m)} · · · b_{n(σ◦τ)(n)}
       = sgn(τ) Σ_{σ ∈ S_n} sgn(σ) b_{1σ(1)} b_{2σ(2)} · · · b_{ℓσ(m)} · · · b_{mσ(ℓ)} · · · b_{nσ(n)}
       = − Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · a_{mσ(m)} · · · a_{ℓσ(ℓ)} · · · a_{nσ(n)}    (as sgn(τ) = −1)
       = −det(A).

Proof of Part 2. Suppose that B = [b_{ij}] is obtained by multiplying the m-th row of A by c. Then b_{mj} = c a_{mj} and b_{ij} = a_{ij} for 1 ≤ i ≠ m ≤ n, 1 ≤ j ≤ n. Then
det(B) = Σ_{σ ∈ S_n} sgn(σ) b_{1σ(1)} b_{2σ(2)} · · · b_{mσ(m)} · · · b_{nσ(n)} = Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · (c a_{mσ(m)}) · · · a_{nσ(n)} = c Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · a_{mσ(m)} · · · a_{nσ(n)} = c det(A).

Proof of Part 3. Note that det(A) = Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · a_{nσ(n)}, so each term in the expression for the determinant contains exactly one entry from each row. Hence, from the condition that A has a row consisting of all zeros, the value of each term is 0. Thus, det(A) = 0.

Proof of Part 4. Suppose that the ℓ-th and m-th rows of A are equal. Let B be the matrix obtained from A by interchanging the ℓ-th and m-th rows. Then by the first part, det(B) = −det(A). But the assumption implies that B = A. Hence, det(B) = det(A). So, we have det(B) = −det(A) = det(A). Hence, det(A) = 0.

Proof of Part 5. By definition and the given assumption (C differs from A and B only in the m-th row, with c_{mj} = a_{mj} + b_{mj}), we have
det(C) = Σ_{σ ∈ S_n} sgn(σ) c_{1σ(1)} c_{2σ(2)} · · · c_{mσ(m)} · · · c_{nσ(n)}
       = Σ_{σ ∈ S_n} sgn(σ) c_{1σ(1)} c_{2σ(2)} · · · (b_{mσ(m)} + a_{mσ(m)}) · · · c_{nσ(n)}
       = Σ_{σ ∈ S_n} sgn(σ) b_{1σ(1)} b_{2σ(2)} · · · b_{mσ(m)} · · · b_{nσ(n)} + Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · a_{mσ(m)} · · · a_{nσ(n)}
       = det(B) + det(A).
Proof of Part 6. Suppose that B = [b_{ij}] is obtained from A by replacing the ℓ-th row by itself plus k times the m-th row, for ℓ ≠ m. Then b_{ℓj} = a_{ℓj} + k a_{mj} and b_{ij} = a_{ij} for 1 ≤ i ≠ ℓ ≤ n, 1 ≤ j ≤ n. Then
det(B) = Σ_{σ ∈ S_n} sgn(σ) b_{1σ(1)} b_{2σ(2)} · · · b_{ℓσ(ℓ)} · · · b_{mσ(m)} · · · b_{nσ(n)}
       = Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · (a_{ℓσ(ℓ)} + k a_{mσ(ℓ)}) · · · a_{mσ(m)} · · · a_{nσ(n)}
       = Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · a_{ℓσ(ℓ)} · · · a_{mσ(m)} · · · a_{nσ(n)} + k Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · a_{mσ(ℓ)} · · · a_{mσ(m)} · · · a_{nσ(n)}
       = Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · a_{ℓσ(ℓ)} · · · a_{nσ(n)}    (the second sum is the determinant of a matrix with two equal rows, hence 0 by Part 4)
       = det(A).

Proof of Part 7. First let us assume that A is an upper triangular matrix. Observe that if σ ∈ S_n is different from the identity permutation, then n(σ) ≥ 1. So, for every σ ≠ Id_n in S_n, there exists a positive integer m, 1 ≤ m ≤ n − 1 (depending on σ), such that m > σ(m). As A is an upper triangular matrix, a_{mσ(m)} = 0 for each such σ, so every term other than the one corresponding to Id_n vanishes. Hence the result follows. A similar reasoning holds true in case A is a lower triangular matrix.

Proof of Part 8. Let I_n be the identity matrix of order n. Then using Part 7, det(I_n) = 1. Also, recalling the notations for the elementary matrices given in Remark 2.3.14, we have det(E_{ij}) = −1 (using Part 1), det(E_i(c)) = c (using Part 2) and det(E_{ij}(k)) = 1 (using Part 6). Again using Parts 1, 2 and 6, we get det(EA) = det(E) det(A).

Proof of Part 9. Suppose A is invertible. Then by Theorem 2.5.8, A is a product of elementary matrices. That is, there exist elementary matrices E_1, E_2, . . . , E_k such that A = E_1 E_2 · · · E_k. Now a repeated application of Part 8 implies that det(A) = det(E_1) det(E_2) · · · det(E_k). But det(E_i) ≠ 0 for 1 ≤ i ≤ k. Hence, det(A) ≠ 0.

Now assume that det(A) ≠ 0. We show that A is invertible. On the contrary, assume that A is not invertible. Then by Theorem 2.5.8, the matrix A is not of full rank. That is, there exists a positive integer r < n such that rank(A) = r. So, there exist elementary matrices E_1, E_2, . . . , E_k such that
E_1 E_2 · · · E_k A = [ B ; 0 ],
a matrix whose rows below the block B consist entirely of zeros. Therefore, by Part 3 and a repeated application of Part 8,
det(E_1) det(E_2) · · · det(E_k) det(A) = det(E_1 E_2 · · · E_k A) = det([ B ; 0 ]) = 0.
But det(E_i) ≠ 0 for 1 ≤ i ≤ k. Hence, det(A) = 0. This contradicts our assumption that det(A) ≠ 0. Hence our assumption is false, and therefore A is invertible.

Proof of Part 10. Suppose A is not invertible. Then by Part 9, det(A) = 0. Also, the product matrix AB is then not invertible. So, again by Part 9, det(AB) = 0. Thus, det(AB) = det(A) det(B). Now suppose that A is invertible. Then by Theorem 2.5.8, A is a product of elementary matrices: there exist elementary matrices E_1, E_2, . . . , E_k such that A = E_1 E_2 · · · E_k. Now a repeated application of Part 8 implies
det(AB) = det(E_1 E_2 · · · E_k B) = det(E_1) det(E_2) · · · det(E_k) det(B) = det(E_1 E_2 · · · E_k) det(B) = det(A) det(B).

Proof of Part 11. Let B = [b_{ij}] = A^t. Then b_{ij} = a_{ji} for 1 ≤ i, j ≤ n. By Proposition 15.2.4, we know that S_n = {σ^{−1} : σ ∈ S_n}. Also, sgn(σ) = sgn(σ^{−1}). Hence,
det(B) = Σ_{σ ∈ S_n} sgn(σ) b_{1σ(1)} b_{2σ(2)} · · · b_{nσ(n)}
       = Σ_{σ ∈ S_n} sgn(σ^{−1}) b_{σ^{−1}(1) 1} b_{σ^{−1}(2) 2} · · · b_{σ^{−1}(n) n}
       = Σ_{σ ∈ S_n} sgn(σ^{−1}) a_{1 σ^{−1}(1)} a_{2 σ^{−1}(2)} · · · a_{n σ^{−1}(n)}
       = det(A).
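The properties above are easy to spot-check numerically. The snippet below is my own illustration (it assumes NumPy); it verifies Parts 1, 4, 6, 10 and 11 on random 4 × 4 matrices, using np.isclose because floating point determinants are only approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Part 1: interchanging two rows changes the sign of the determinant.
A_swapped = A.copy()
A_swapped[[0, 2]] = A_swapped[[2, 0]]
assert np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))

# Part 6: adding k times row m to row l leaves the determinant unchanged.
A_row_op = A.copy()
A_row_op[1] += 3.5 * A_row_op[3]
assert np.isclose(np.linalg.det(A_row_op), np.linalg.det(A))

# Parts 10 and 11: det(AB) = det(A) det(B) and det(A^t) = det(A).
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# Part 4: a matrix with two equal rows has determinant 0.
C = A.copy()
C[2] = C[0]
assert np.isclose(np.linalg.det(C), 0.0)
```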
Remark 15.3.2
1. The result that det(A) = det(A^t) implies that in the statements made in Theorem 15.3.1, wherever the word "row" appears it can be replaced by "column".
2. Let A = [a_{ij}] be a matrix satisfying a_{11} = 1 and a_{1j} = 0 for 2 ≤ j ≤ n. Let B be the submatrix of A obtained by removing the first row and the first column. Then it can be easily shown that det(A) = det(B). The reason is as follows: a term in the sum defining det(A) can be non-zero only if σ(1) = 1 (otherwise the factor a_{1σ(1)} = 0), and a permutation σ ∈ S_n with σ(1) = 1 is nothing but a permutation of the elements {2, 3, . . . , n}, that is, an element of S_{n−1}. Hence,
det(A) = Σ_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} · · · a_{nσ(n)} = Σ_{σ ∈ S_n, σ(1)=1} sgn(σ) a_{2σ(2)} · · · a_{nσ(n)} = Σ_{σ ∈ S_{n−1}} sgn(σ) b_{1σ(1)} · · · b_{(n−1)σ(n−1)} = det(B).

We are now ready to relate this definition of determinant with the one given in Definition 2.6.2.

Theorem 15.3.3 Let A be an n × n matrix. Then
det(A) = Σ_{j=1}^{n} (−1)^{1+j} a_{1j} det(A(1|j)),
where recall that A(1|j) is the submatrix of A obtained by removing the 1st row and the j-th column.

Proof. For 1 ≤ j ≤ n, define the n × n matrix B_j whose first row is (0, . . . , 0, a_{1j}, 0, . . . , 0), with the entry a_{1j} in the j-th column, and whose remaining rows are the corresponding rows of A. Then by Theorem 15.3.1.5,
det(A) = Σ_{j=1}^{n} det(B_j).    (15.3.6)

We now compute det(B_j) for 1 ≤ j ≤ n. Let C_j be the matrix obtained from B_j by moving its j-th column to the front, so that the first row of C_j is (a_{1j}, 0, . . . , 0), its first column is (a_{1j}, a_{2j}, . . . , a_{nj})^t, and the remaining columns are the other columns of A in their original order. Note that B_j can be transformed into C_j by j − 1 interchanges of adjacent columns, done in the following manner: first interchange the (j − 1)-th and j-th columns, then the (j − 2)-th and (j − 1)-th columns, and so on, the last step interchanging the 1st and 2nd columns. Then by Remark 15.3.2 and Parts 1 and 2 of Theorem 15.3.1, we have
det(B_j) = (−1)^{j−1} det(C_j) = (−1)^{j−1} a_{1j} det(A(1|j)).
Therefore, by (15.3.6),
det(A) = Σ_{j=1}^{n} (−1)^{j−1} a_{1j} det(A(1|j)) = Σ_{j=1}^{n} (−1)^{j+1} a_{1j} det(A(1|j)).

15.4 Dimension of M + N

Theorem 15.4.1 Let V(F) be a finite dimensional vector space and let M and N be two subspaces of V. Then
dim(M) + dim(N) = dim(M + N) + dim(M ∩ N).    (15.4.7)

Proof. Since M ∩ N is a vector subspace of V, consider a basis B_1 = {u_1, u_2, . . . , u_k} of M ∩ N. As M ∩ N is a subspace of the vector spaces M and N, we extend the basis B_1 to form a basis B_M = {u_1, u_2, . . . , u_k, v_1, . . . , v_r} of M and also a basis B_N = {u_1, u_2, . . . , u_k, w_1, . . . , w_s} of N. We now proceed to prove that the set B_2 = {u_1, u_2, . . . , u_k, w_1, . . . , w_s, v_1, v_2, . . . , v_r} is a basis of M + N. To do this, we show that
1. the set B_2 is a linearly independent subset of V, and
2. L(B_2) = M + N.
The second part can be easily verified. To prove the first part, we consider the linear system of equations
α_1 u_1 + · · · + α_k u_k + β_1 w_1 + · · · + β_s w_s + γ_1 v_1 + · · · + γ_r v_r = 0.    (15.4.8)
This system can be rewritten as
α_1 u_1 + · · · + α_k u_k + β_1 w_1 + · · · + β_s w_s = −(γ_1 v_1 + · · · + γ_r v_r).
The vector v = −(γ_1 v_1 + · · · + γ_r v_r) ∈ M, as v_1, . . . , v_r ∈ B_M. But we also have v = α_1 u_1 + · · · + α_k u_k + β_1 w_1 + · · · + β_s w_s ∈ N, as the vectors u_1, u_2, . . . , u_k, w_1, . . . , w_s ∈ B_N. Hence, v ∈ M ∩ N and therefore there exist scalars δ_1, . . . , δ_k such that v = δ_1 u_1 + δ_2 u_2 + · · · + δ_k u_k. Substituting this representation of v in Equation (15.4.8), we get
(α_1 − δ_1) u_1 + · · · + (α_k − δ_k) u_k + β_1 w_1 + · · · + β_s w_s = 0.
But then, the vectors u_1, u_2, . . . , u_k, w_1, . . . , w_s are linearly independent as they form a basis. Therefore, by the definition of linear independence, we get α_i − δ_i = 0 for 1 ≤ i ≤ k and β_j = 0 for 1 ≤ j ≤ s. Thus the linear system of Equations (15.4.8) reduces to
α_1 u_1 + · · · + α_k u_k + γ_1 v_1 + · · · + γ_r v_r = 0.
The only solution for this linear system is α_i = 0 for 1 ≤ i ≤ k and γ_j = 0 for 1 ≤ j ≤ r, as these vectors form the basis B_M of M. Thus we see that the linear system of Equations (15.4.8) has no non-zero solution, and therefore the vectors are linearly independent. Hence, the set B_2 is a basis of M + N. We now count the vectors in the sets B_1, B_2, B_M and B_N to get the required result: dim(M + N) = k + r + s, dim(M) = k + r, dim(N) = k + s and dim(M ∩ N) = k, so both sides of (15.4.7) equal 2k + r + s.
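Identity (15.4.7) can be illustrated numerically for column spaces of R^d. The sketch below is my own illustration (it assumes NumPy); the subspaces M and N are built exactly as in the proof, from k shared vectors plus r and s extra vectors, and dim(M ∩ N) is computed independently through principal angles, by counting singular values of Q_M^t Q_N that equal 1. Random vectors are used so that all the chosen bases are generic.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, r, s = 10, 2, 3, 4                 # ambient dimension and the k, r, s of the proof

U = rng.standard_normal((d, k))          # basis of M ∩ N
V = rng.standard_normal((d, r))          # extra vectors completing a basis of M
W = rng.standard_normal((d, s))          # extra vectors completing a basis of N
BM, BN = np.hstack([U, V]), np.hstack([U, W])

dim_M = np.linalg.matrix_rank(BM)                        # k + r
dim_N = np.linalg.matrix_rank(BN)                        # k + s
dim_sum = np.linalg.matrix_rank(np.hstack([BM, BN]))     # dim(M + N) = k + r + s here

# dim(M ∩ N) via principal angles between the two column spaces.
QM, _ = np.linalg.qr(BM)
QN, _ = np.linalg.qr(BN)
svals = np.linalg.svd(QM.T @ QN, compute_uv=False)
dim_int = int(np.sum(np.isclose(svals, 1.0)))            # equals k here

assert dim_M + dim_N == dim_sum + dim_int                # Theorem 15.4.1 / (15.4.7)
```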
15.5 Proof of Rank-Nullity Theorem

Theorem 15.5.1 Let T : V → W be a linear transformation and let {u_1, u_2, . . . , u_n} be a basis of V. Then
1. Range(T) = L(T(u_1), T(u_2), . . . , T(u_n)).
2. T is one-one ⇐⇒ N(T) = {0} is the zero subspace of V ⇐⇒ {T(u_i) : 1 ≤ i ≤ n} is a basis of Range(T).
3. If V is a finite dimensional vector space, then dim(Range(T)) ≤ dim(V). The equality holds if and only if N(T) = {0}.

Proof. Part 1) can be easily proved. For 2), let T be one-one. Suppose u ∈ N(T). This means that T(u) = 0 = T(0). But then T being one-one implies that u = 0. Conversely, if N(T) = {0}, then T(u) = T(v) ⇐⇒ T(u − v) = 0 implies that u = v. Hence, T is one-one. The other parts can be similarly proved. Part 3) follows from the previous two parts.

The proof of the next theorem is immediate from the fact that T(0) = 0 and the definition of linear independence/dependence.

Theorem 15.5.2 Let T : V → W be a linear transformation. If {T(u_1), T(u_2), . . . , T(u_n)} is linearly independent in Range(T), then {u_1, u_2, . . . , u_n} ⊂ V is linearly independent.

Theorem 15.5.3 (Rank Nullity Theorem) Let T : V → W be a linear transformation and let V be a finite dimensional vector space. Then
dim(Range(T)) + dim(N(T)) = dim(V), or equivalently ρ(T) + ν(T) = n.

Proof. Let dim(V) = n and dim(N(T)) = r. Suppose {u_1, u_2, . . . , u_r} is a basis of N(T). Since {u_1, u_2, . . . , u_r} is a linearly independent set in V, we can extend it to form a basis of V: there exist vectors {u_{r+1}, u_{r+2}, . . . , u_n} such that the set {u_1, . . . , u_r, u_{r+1}, . . . , u_n} is a basis of V. Therefore,
Range(T) = L(T(u_1), T(u_2), . . . , T(u_n)) = L(0, . . . , 0, T(u_{r+1}), T(u_{r+2}), . . . , T(u_n)) = L(T(u_{r+1}), T(u_{r+2}), . . . , T(u_n)),
which shows that Range(T) is the span of {T(u_{r+1}), T(u_{r+2}), . . . , T(u_n)}.

We now prove that the set {T(u_{r+1}), T(u_{r+2}), . . . , T(u_n)} is a linearly independent set. Suppose the set is linearly dependent. Then there exist scalars α_{r+1}, α_{r+2}, . . . , α_n, not all zero, such that
α_{r+1} T(u_{r+1}) + α_{r+2} T(u_{r+2}) + · · · + α_n T(u_n) = 0.
That is, T(α_{r+1} u_{r+1} + α_{r+2} u_{r+2} + · · · + α_n u_n) = 0, which in turn implies
α_{r+1} u_{r+1} + α_{r+2} u_{r+2} + · · · + α_n u_n ∈ N(T) = L(u_1, . . . , u_r).
So, there exist scalars α_i, 1 ≤ i ≤ r, such that
α_{r+1} u_{r+1} + α_{r+2} u_{r+2} + · · · + α_n u_n = α_1 u_1 + α_2 u_2 + · · · + α_r u_r.
That is, α_1 u_1 + · · · + α_r u_r − α_{r+1} u_{r+1} − · · · − α_n u_n = 0. Thus α_i = 0 for 1 ≤ i ≤ n, as {u_1, u_2, . . . , u_n} is a basis of V; this contradicts the choice of the scalars α_{r+1}, . . . , α_n, not all zero. In other words, we have shown that the set {T(u_{r+1}), T(u_{r+2}), . . . , T(u_n)} is a basis of Range(T). Now the required result follows.

We now state another important implication of the Rank-Nullity theorem.

Corollary 15.5.4 Let T : V → V be a linear transformation on a finite dimensional vector space V. Then
T is one-one ⇐⇒ T is onto ⇐⇒ T has an inverse.

Proof. Let dim(V) = n and let T be one-one. Then dim(N(T)) = 0. Hence, by the rank-nullity Theorem 15.5.3, dim(Range(T)) = n = dim(V). Also, Range(T) is a subspace of V. Hence, Range(T) = V. That is, T is onto.

Suppose T is onto. Then Range(T) = V. Hence, dim(Range(T)) = n. But then by the rank-nullity Theorem 15.5.3, dim(N(T)) = 0. That is, T is one-one.

So we may now assume that T is one-one and onto. Hence, for every vector u in the range there is a unique vector v in the domain such that T(v) = u. Therefore, for every u in the range, we define T^{−1}(u) = v. That is, T has an inverse. Conversely, let us now assume that T has an inverse. Then it is clear that T is one-one and onto.
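For the linear map T(x) = Ax from R^9 to R^6, the theorem says that the rank of A plus the dimension of its null space equals 9. The sketch below is my own illustration (it assumes NumPy); it builds a matrix of known rank 4 and reads both numbers off its singular value decomposition.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 9))   # a 6 x 9 matrix of rank 4

_, svals, Vt = np.linalg.svd(A)            # full SVD: Vt is 9 x 9
tol = 1e-10
rank = int(np.sum(svals > tol))            # dim Range(T) = rho(T)
null_basis = Vt[rank:]                     # rows spanning N(T), the null space of A

assert np.allclose(A @ null_basis.T, 0.0)          # these vectors really lie in N(T)
assert rank == np.linalg.matrix_rank(A) == 4
assert rank + null_basis.shape[0] == A.shape[1]    # rho(T) + nu(T) = dim(V) = 9
```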
15.6 Condition for Exactness

Let D be a region in the xy-plane and let M and N be real valued functions defined on D. Consider an equation
M(x, y(x)) dx + N(x, y(x)) dy = 0,  (x, y(x)) ∈ D.    (15.6.9)

Definition 15.6.1 (Exact Equation) The Equation (15.6.9) is called Exact if there exists a real valued twice continuously differentiable function f such that
∂f/∂x = M and ∂f/∂y = N.

Theorem 15.6.2 Let M and N be "smooth" in a region D. The equation (15.6.9) is exact if and only if
∂M/∂y = ∂N/∂x.    (15.6.10)

Proof. Let Equation (15.6.9) be exact. Then there is a "smooth" function f (defined on D) such that M = ∂f/∂x and N = ∂f/∂y. So,
∂M/∂y = ∂²f/∂y∂x = ∂²f/∂x∂y = ∂N/∂x,
and so Equation (15.6.10) holds.

Conversely, let Equation (15.6.10) hold. We now show that Equation (15.6.9) is exact. Define G(x, y) on D by
G(x, y) = ∫ M(x, y) dx + g(y),
where g is any arbitrary smooth function. Then ∂G/∂x = M(x, y), which shows that
∂/∂x (∂G/∂y) = ∂/∂y (∂G/∂x) = ∂M/∂y = ∂N/∂x.
So ∂/∂x (N − ∂G/∂y) = 0, or N − ∂G/∂y is independent of x. Let φ(y) = N − ∂G/∂y, or N = φ(y) + ∂G/∂y. Now
M(x, y) + N dy/dx = ∂G/∂x + ( ∂G/∂y + φ(y) ) dy/dx
                  = ( ∂G/∂x + ∂G/∂y · dy/dx ) + ( d/dy ∫ φ(y) dy ) dy/dx
                  = d/dx [ G(x, y(x)) ] + d/dx [ ∫ φ(y) dy ]    (where y = y(x))
                  = d/dx f(x, y),  where f(x, y) = G(x, y) + ∫ φ(y) dy.
Note that for this f we have ∂f/∂x = ∂G/∂x = M and ∂f/∂y = ∂G/∂y + φ(y) = N, so Equation (15.6.9) is exact.
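The construction of f in the converse half of the proof can be carried out symbolically. The example below is my own illustration (it assumes SymPy is available, and the particular M and N are chosen for the example rather than taken from the notes): it checks condition (15.6.10) and then builds f by the recipe G = ∫ M dx, φ = N − ∂G/∂y, f = G + ∫ φ dy.

```python
import sympy as sp

x, y = sp.symbols('x y')

M = 2*x*y + y**2
N = x**2 + 2*x*y

# Theorem 15.6.2: M dx + N dy = 0 is exact iff dM/dy = dN/dx.
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Construct f as in the proof: G = int M dx, phi(y) = N - dG/dy, f = G + int phi dy.
G = sp.integrate(M, x)
phi = sp.simplify(N - sp.diff(G, y))          # independent of x, by the theorem
f = G + sp.integrate(phi, y)

assert sp.simplify(sp.diff(f, x) - M) == 0    # df/dx = M
assert sp.simplify(sp.diff(f, y) - N) == 0    # df/dy = N
print(f)                                       # x**2*y + x*y**2
```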
Contents

I Linear Algebra
1 Matrices: 1.1 Definition of a Matrix; 1.1.1 Special Matrices; 1.2 Operations on Matrices; 1.2.1 Multiplication of Matrices; 1.2.2 Inverse of a Matrix; 1.3 Some More Special Matrices; 1.3.1 Submatrix of a Matrix; 1.3.2 Block Matrices; 1.4 Matrices over Complex Numbers
2 Linear System of Equations: 2.1 Introduction; 2.1.1 A Solution Method; 2.2 Row Operations and Equivalent Systems; 2.2.1 Gauss Elimination Method; 2.3 Row Reduced Echelon Form of a Matrix; 2.3.1 Gauss-Jordan Elimination; 2.3.2 Elementary Matrices; 2.4 Rank of a Matrix; 2.5 Existence of Solution of Ax = b; 2.5.1 Example; 2.5.2 Main Theorem; 2.5.3 Equivalent conditions for Invertibility; 2.5.4 Inverse and the Gauss-Jordan Method; 2.6 Determinant; 2.6.1 Adjoint of a Matrix; 2.6.2 Cramer's Rule; 2.7 Miscellaneous Exercises
3 Finite Dimensional Vector Spaces: 3.1 Vector Spaces; 3.1.1 Definition; 3.1.2 Examples; 3.1.3 Subspaces; 3.1.4 Linear Combinations; 3.2 Linear Independence; 3.3 Bases; 3.3.1 Important Results; 3.4 Ordered Bases
4 Linear Transformations: 4.1 Definitions and Basic Properties; 4.2 Matrix of a linear transformation; 4.3 Rank-Nullity Theorem; 4.4 Similarity of Matrices
5 Inner Product Spaces: 5.1 Definition and Basic Properties; 5.2 Gram-Schmidt Orthogonalisation Process; 5.3 Orthogonal Projections and Applications; 5.3.1 Matrix of the Orthogonal Projection
6 Eigenvalues, Eigenvectors and Diagonalisation: 6.1 Introduction and Definitions; 6.2 Diagonalisation; 6.3 Diagonalisable matrices; 6.4 Sylvester's Law of Inertia and Applications

II Ordinary Differential Equation
7 Differential Equations: 7.1 Introduction and Preliminaries; 7.2 Separable Equations; 7.2.1 Equations Reducible to Separable Form; 7.3 Exact Equations; 7.3.1 Integrating Factors; 7.4 Linear Equations; 7.5 Miscellaneous Remarks; 7.6 Initial Value Problems; 7.6.1 Orthogonal Trajectories; 7.7 Numerical Methods
8 Second Order and Higher Order Equations: 8.1 Introduction; 8.2 More on Second Order Equations; 8.2.1 Wronskian; 8.2.2 Method of Reduction of Order; 8.3 Second Order equations with Constant Coefficients; 8.4 Non Homogeneous Equations; 8.5 Variation of Parameters; 8.6 Higher Order Equations with Constant Coefficients; 8.7 Method of Undetermined Coefficients
9 Solutions Based on Power Series: 9.1 Introduction; 9.1.1 Properties of Power Series; 9.2 Solutions in terms of Power Series; 9.3 Statement of Frobenius Theorem for Regular (Ordinary) Point; 9.4 Legendre Equations and Legendre Polynomials; 9.4.1 Introduction; 9.4.2 Legendre Polynomials

III Laplace Transform
10 Laplace Transform: 10.1 Introduction; 10.2 Definitions and Examples; 10.2.1 Examples; 10.3 Properties of Laplace Transform; 10.3.1 Inverse Transforms of Rational Functions; 10.3.2 Transform of Unit Step Function; 10.4 Some Useful Results; 10.4.1 Limiting Theorems; 10.5 Application to Differential Equations; 10.6 Transform of the Unit-Impulse Function

IV Numerical Applications
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 211 . 211 . 211 . 211 . 213 . 215 . 216 . 216 . 216 . 217 223 223 223 226 228 11 Newton’s Interpolation Formulae 11.1 Introduction . . . . . . . . . . . . . . . 11.2 Difference Operator . . . . . . . . . . 11.2.1 Forward Difference Operator . 11.2.2 Backward Difference Operator 11.2.3 Central Difference Operator . . 11.2.4 Shift Operator . . . . . . . . . 11.2.5 Averaging Operator . . . . . . 11.3 Relations between Difference operators 11.4 Newton’s Interpolation Formulae . . . 12 Lagrange’s Interpolation Formula 12.1 Introduction . . . . . . . . . . . . 12.2 Divided Differences . . . . . . . . 12.3 Lagrange’s Interpolation formula 12.4 Gauss’s and Stirling’s Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Numerical Differentiation and Integration 13.1 Introduction . . . . . . . . . . . . . . . . . . 13.2 Numerical Differentiation . . . . . . . . . . 13.3 Numerical Integration . . . . . . . . . . . . 13.3.1 A General Quadrature Formula . . . 13.3.2 Trapezoidal Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231 . 231 . 231 . 235 . 235 . 236 . . . . 15. . . . . . . . .2 Determinant . . . . . . . . . . . .6 Condition for Exactness . . . . . . . . . . . . . . . .5 Proof of Rank-Nullity Theorem 15. . . . . . 241 241 242 244 245 247 248 248 250 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 . . . . . . . . . . . . . . . . . . . . . 266 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Error Estimates and Convergence . . . . . . . . . . . .3 Simpson’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Runge-Kutta Method of Order 4 . . . .3.3. . . . . . . . . . . . . . . . . . . . 253 . . . . . . . . . . . . .1 System of Linear Equations . . . . . 14. . 14. . . . . . . . 260 . 15. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Algorithm for Predictor-Corrector Method . . . . . . . . . . 264 . .3 Runge-Kutta Method . . . . . . . . . . . . . 14. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15. 265 . .4. . . . . . . . . . . . . . . . . . 14. . . . . . . . . 14. . . . . . 14. . . . . . . . . . . . .1. . . . .1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . .1 Euler’s Method . . . . . . . . . . . . . . .3 Properties of Determinant . . . . . . . . . . . . . . . .3. . . .1 Algorithm: Runge-Kutta Method of Order 2 14.4 Predictor-Corrector Methods . . . . . . . . . . . . . .4 Dimension of M + N . . 237 14 Numerical Methods 14. . . . . 15 Appendix 15. . . . . . . . . . . . . . . . . . . . 256 . . 15. . . .6 CONTENTS 13. . . . . . . . . . . . . . Part I Linear Algebra 7 . . .1 (Matrix) A rectangular array of numbers is called a matrix..  . am2 ··· ··· .  . . .1. a21 = 4.  to represent a matrix. ···  a1n  a2n  . 
a13 = 7. 4 5 6 A matrix having only one column is called a column vector.  . A matrix having m rows and n columns is said to have the order m × n.2 Some books also use  . . Then a11 = 1. . Definition 1. it should be understood from the context whether it is a row vector or a column vector. .Chapter 1 Matrices 1. we also denote the matrix A by [aij ] by suppressing its order. Whenever a vector is used. In other words.1 Definition of a Matrix Definition 1. A matrix A of order m × n can be represented in the following form: a11   a21 A= .1. a22 = 5. The horizontal arrays of a matrix are called its rows and the vertical arrays are called its columns. and a matrix with only one row is called a row vector. am1  a12 a22 . . am2 ··· ··· . .3 (Equality of two Matrices) Two matrices A = [aij ] and B = [bij ] having the same order m × n are equal if aij = bij for each i = 1. 2. .  amn 1 3 7 . . a12 = 3. We shall mostly be concerned with matrices having real numbers as entries. . am1 Let A =  a12 a22 .  . two matrices are said to be equal if they have the same order and their corresponding entries are equal. 2. 9 . and a23 = 6. ···  a1n  a2n  . In a more concise manner. m and j = 1.1. . . .  . a11  a21  Remark 1. . .. n. .  amn where aij is the entry at the intersection of the ith row and j th column.  . For example. the 4 0 non-zero entries appear only on the principal diagonal. .10 CHAPTER 1. is called a square matrix.2. .1 Special Matrices 1. ann are called the diagonal entries and form the principal diagonal of A. with bij = aji for 1 ≤ i ≤ m and 1 ≤ j ≤ n. An upper triangular matrix will be represented 0 0 −2   a11 a12 · · · a1n    0 a22 · · · a2n   . by the transpose of an m × n matrix A.  . . I2 = 0 1 0 an identity matrix if di = 1 for all i = 1. .1. . dn is denoted by D = diag(d1 . . MATRICES Example 1. That is. . denoted by 0. .1. . .  . . . we mean a matrix of order n × m having the rows of A as its columns and the columns of A as its rows.  0 0 · · · ann 1. . 4. If di = d for all i = 1. So. 3 2 : 5 1.. In a square matrix.  1 1 0  and I3 = 0 For example. In other words. . The transpose of A is denoted by At . dn ). of order n. . . A diagonal matrix D of order n with the diagonal entries d1 .2 Operations on Matrices Definition 1. 2.1 (Transpose of a Matrix) The transpose of an m × n matrix A = [aij ] is defined as the n × m matrix B = [bij ].4 The linear system of equations 2x + 3y = 5 and 3x + 2y = 5 can be identified with the 2 3 : 5 matrix . by  . . This  0 0  1 0 . 02×2 = and 02×3 = 2. A = [aij ]. . . . 0 1 The subscript n is suppressed in case the order is clear from the context or if no confusion arises. . . .   2 1 4   For example 0 3 −1 is an upper triangular matrix. A matrix in which each entry is zero is called a zero-matrix. A diagonal matrix A of order n is called matrix is denoted by In . For 0 0 0 0 0 0 0 0 0 . 2. . n then the diagonal matrix D is called a scalar matrix.1. A matrix for which the number of rows equals the number of columns.5 example. the entries a11 . if A is a n × n matrix then A is said to have order n. n. . . 5. A square matrix A = [aij ] is said to be a diagonal matrix if aij = 0 for i = j. d2 . A square matrix A = [aij ] is said to be an lower triangular matrix if aij = 0 for i < j. . a22 . A square matrix A = [aij ] is said to be an upper triangular matrix if aij = 0 for i > j. . A square matrix A is said to be triangular if it is an upper or a lower triangular matrix. 
the zero matrix 0n and 0 1 are a few diagonal matrices. . 0 Definition 1. 3. 6. Then A + B = [aij ] + [bij ] = [aij + bij ] = [bij + aij ] = [bij ] + [aij ] = B + A as real numbers commute. 2. 0 1 2 5 2 Thus. The reader is required to prove the other parts as all the results follow from the properties of real numbers. 4. Suppose A + B = 0.3 (Addition of Matrices) let A = [aij ] and B = [bij ] be are two m × n matrices. B and C be matrices of order m × n. For example.1. we define kA = [kaij ]. if A = then At = 4 1 .2. Then the sum A + B is defined to be the matrix C = [cij ] with cij = aij + bij . j and the result follows. A + 0 = 0 + A = A. This matrix B is called the additive inverse of A. Proof. Then show that B = (−1)A = [−aij ]. Let A = [aij ] and B = [bij ]. (associativity). Definition 1.7 (Additive Inverse) Let A be an m × n matrix. Hence. 11 Definition 1. the transpose of a row vector is a column vector and vice-versa. for the matrix 0m×n . .5 Let A. Suppose A + B = A.2 For any matrix A.4 (Multiplying a Scalar to a Matrix) Let A = [aij ] be an m × n matrix. we define the sum of two matrices only when the order of the two matrices are same.2. 1. and let k.2. ∈ R.2. OPERATIONS ON MATRICES  1 0 1 4 5   For example. Also. the definition of transpose gives cij = bji = aij for all i. (A + B) + C = A + (B + C) 3. and k = 5. and is denoted by −A = (−1)A. Let A = [aij ]. Part 1. Then. Exercise 1. Proof. A + B = B + A 2. Then show that B = 0. k( A) = (k )A. 2. if A = 1 4 0 1 5 20 25 5 . Then for any element k ∈ R. Then there exists a matrix B with A + B = 0. (k + )A = kA + A. Definition 1. Then 1.2. Note that.  Theorem 1.6 1.2. then 5A = 0 5 10 2 Theorem 1. (commutativity). the matrix 0m×n is called the additive identity.2. (At )t = A. At = [bij ] and (At )t = [cij ]. 2. MATRICES 1. Then A(B + C) = AB + AC. . Definition 1. both the product AB and BA are defined.2. we have • the first row of DA is d1 times the first row of A. ···  That is. a scalar matrix of order n commutes with any square matrix of order n.10 1. 1. for square matrices A and B of the same order.2. . . the matrix multiplication is associative. the product BA is not defined. 3. The product AB is a matrix C = [cij ] of order m × r.9 Two square matrices A and B are said to commute if AB = BA. the matrix product is not commutative. If A is an n × n matrix then AIn = In A = A. . • for 1 ≤ i ≤ n.2.12 CHAPTER 1. B and C are so chosen that the matrix multiplications are defined.  1 2 1 1 2 3   For example. .  ain  ···      ···  and Bn×r =  . Remark 1. while AB is defined. (kA)B = k(AB) = A(kB). Then (AB)C = A(BC). That is. That is. . Observe that the product AB is defined if and only if the number of columns of A = the number of  rows of B. 5. multiplication distributes over addition. if Am×n AB = [(AB)ij ]m×r and (AB)ij = ai1 b1j + ai2 b2j + · · · + ain bnj . bmj ··· ··· . consider the following two 1 0 1 1 . with n cij = k=1 aik bkj = ai1 b1j + ai2 b2j + · · · + ain bnj .11 Suppose that the matrices A.2. . Also. d2 . . 4.1 Multiplication of Matrices Definition 1. For any k ∈ R. Then check that the matrix product and B = matrices A = 1 0 0 0 AB = 2 0 0 1 = 0 1 1 = BA. However. . 4 18    =  ai1     ai2 ··· ··· ··· ··· ···    then   Note that in this example. For example. 2. if A = and B = 0 0 3 then 2 4 1 1 0 4 AB = 1+0+3 2+0+0 2+0+1 4+0+0 1 + 6 + 12 4 = 2 + 12 + 4 3 2 19 .  ··· b1j b2j .2. dn ).   . In general. For any square matrix A of order n and D = diag(d1 . 1 Theorem 1. 
Note that if A is a square matrix of order n then AIn = In A.8 (Matrix Multiplication / Product) Let A = [aij ] be an m × n matrix and B = [bij ] be an n × r matrix.   . the ith row of DA is di times the ith row of A. 1  1 1  0 1 0 0  1  1 . . (a) Suppose that the matrix product AB is defined.2. then prove t that (A + B) = At + B t . 1  1 1  1 1 1 1  1  1 . Exercise 1. . an ] and B =  . Let n be a positive integer. Compute the matrix products AB and BA. OPERATIONS ON MATRICES A similar statement holds for the columns of A when A is multiplied on the right by D. . 2. . Let A = [aij ]m×n . B = [bij ]n×p and C = [cij ]p×q . n. Also. (b) Suppose that the matrix products AB and BA are defined. If the matrix addition A + B is defined. Let A = [a1 . a2 . For all j = 1. . . . Proof. the required result follows. Then the product BA need not be defined. The reader is required to prove the other parts. n n p A(BC) ij = = aik BC k=1 p n kj = k=1 aik n =1 p bk c j aik bk c k=1 =1 p n j = k=1 =1 t aik bk c AB =1 j = =1 k=1 aik bk c (AB)C ij j = i c j = Part 5. . . . if the matrix product AB is defined then prove that (AB)t = B t At . bn 3.12 1. . Hence. we have n (DA)ij = k=1 dik akj = di aij as dik = 0 whenever i = k. Find examples for the following statements. Then AB and BA may or may not be equal. 1 Can you guess a formula for An and prove it by induction? 4. Then the matrices AB and BA can have different orders. Let A and B be two matrices. Then p n 13 (BC)kj = =1 bk c j and (AB)i = k=1 aik bk . Part 1. Compute An for the following matrices: 1 0 1 . Therefore.2.  .   b1    b2  2.1. . (c) Suppose that the matrices A and B are square matrices of order n. 2. then we get AB = BA = I. A matrix A is said to be invertible (or is said to have an inverse) if there exists a matrix B such that AB = BA = In . Thus. the definition. From the above lemma.2. Proof. Let A be an inveritble matrix.2. . Theorem 1. . . or equivalently (A−1 )−1 = A. Then 1. Lemma 1. Then prove that A cannot have a row or column consisting of only zeros. Remark 1. 2. . (AB)−1 = B −1 A−1 . Let A1 . By definition AA−1 = A−1 A = I.14 Let A be an n × n matrix. Proof of Part 1.15 1. Hence. if we denote A−1 by B. Exercise 1. AA−1 = A−1 A = I. Ar be invertible matrices. 3. 3. we get (AA−1 )t = (A−1 A)t = I t ⇐⇒ (A−1 )t At = At (A−1 )t = I. That is. 2. we observe that if a matrix A is invertible. Taking transpose.13 (Inverse of a Matrix) Let A be a square matrix of order n.2.14 CHAPTER 1.17 1. Suppose that there exist n × n matrices B and C such that AB = In and CA = In . MATRICES 1. then B = C. As the inverse of a matrix A is unique. implies B −1 = A. if AC = In . A square matrix C is called a right inverse of A. We know AA−1 = A−1 A = I. Verify that (AB)(B −1 A−1 ) = I = (B −1 A−1 )(AB). Proof. Note that C = CIn = C(AB) = (CA)B = In B = B. .2 Inverse of a Matrix Definition 1. (At )−1 = (A−1 )t .2. (A−1 )−1 = A. Hence. Proof of Part 2. by definition (At )−1 = (A−1 )t . 1.2.16 Let A and B be two matrices with inverses A−1 and B −1 . Prove that the product A1 A2 · · · Ar is also an invertible matrix. then the inverse is unique. respectively. A2 .2. Proof of Part 3. A square matrix B is said to be a left inverse of A if BA = In . we denote it by A−1 . 2. Let A = [aij ] be an n × n matrix with aij = n − 1. A.3. Then A is a symmetric matrix and 0  1 2. 5. T = 2 (A − At ) is 2 skew-symmetric. Show that for any square matrix A. 
2.2 1.4 A matrix obtained by deleting some of the rows and/or columns of a matrix is said to be a submatrix of the given matrix.    0 1 1 2 3    Example 1. Let A = 1 0 . if A = 1 0 4 5 . Is the matrix AB symmetric or skew-symmetric? 6. Then A is an orthogonal matrix. Let A =  √2 1 √ 6 3 3 1 − √2 1 √ 6 √ √ 3 2 − √6  0  . 1 1 1 1 .3. Definition 1. A similar statement holds for upper triangular matrices.3. Let A and B be symmetric matrices. Let A = 2 4 −1 and B = −1 0 −2 3 3 −1 4 B is a skew-symmetric matrix. The least positive integer k for which Ak = 0 is called the order of nilpotency.1 2. The matrices A for which a positive integer k exists such that Ak = 0 are called nilpotent matrices.3 1. 1 Exercise 1.3. Show that the product of two lower triangular matrices is a lower triangular matrix. The matrices that satisfy the condition that A2 = A are called 0 0 idempotent matrices.  1 1 1  √  2  −3 . Let A be a symmetric matrix of order n with A2 = 0. 3. (The reader is advised to give reasons. A matrix A over R is called symmetric if At = A and skew-symmetric if At = −A. SOME MORE SPECIAL MATRICES 15 1.1 Submatrix of a Matrix Definition 1.3. For example. A matrix A is said to be orthogonal if AAt = At A = I. 2 But the matrices 4 1 4 and are not submatrices of A. 1. Then A2 = A. Is it necessarily true that A = 0? 7. Show that AB is symmetric if and only if AB = BA. Then An = 0 and A = 0 for 1 ≤ ≤ 3.) 0 0 2 . Show that the diagonal entries of a skew-symmetric matrix are zero. Show that there exists a matrix B such that B(I + A) = I = (I + A)B. 4.  1 0 if i = j + 1 otherwise .3 Some More Special Matrices 1. 0 0 5 .3.1. Let A be a nilpotent matrix. Let A. a few submatrices of A are 1 2 [1]. and A = S + T. S = 1 (A + At ) is symmetric. 4. B be skew-symmetric matrices with AB = BA. [2]. [1 5]. we can decompose the H matrices A and B as A = [P Q] and B = .   a b 1 2 0   For example. Theorem 1. 2. n × r. Theorem 1. H and K are submatrices of B and H consists of the first r rows of B and K consists of the last m − r rows of B. or A =  3 1 4 −2 5 −3 as follows:    . In this case. Or when we want to prove results using induction. Proof. It may be possible to block the matrix in such a way that a few blocks are either identity matrices or zero matrices. r × p. where P has order n × r and H has order r × p. then A can be decomposed −3   2 0 −1 2   4  . H.3. Then. etc.3. Let P = [Pij ]. H = [Hij ]. We now prove the following important theorem.16 CHAPTER 1. n × (m − r) and (m − r) × p. Q and K are respectively.2 Block Matrices Let A be an n × m matrix and B be an m × p matrix. That K is. −3  2  4  . d 0 2a + 5c 2b + 5d −3  2  4  and so on. Then 2 5 0 e f AB = 0 −1  If A =  3 1 −2 5  0 −1  A= 3 1 −2 5  0 −1  A= 3 1 −2 5  1 2 2 5 a c b 0 a + 2c b + 2d + [e f ] = . then we may assume the result for r × r submatrices and then look for (r + 1) × (r + 1) submatrices. if A = and B =  c d  . the matrices P and Q are submatrices of A and P consists of the first r columns of A and Q consists of the last m − r columns of A.5 is very useful due to the following reasons: 1. we have m r m (AB)ij = k=1 r aik bkj = Pik Hkj + k=1 aik bkj + k=1 m k=r+1 aik bkj = Qik Kkj k=r+1 = (P H)ij + (QK)ij = (P H + QK)ij .3. Q.5 Let A = [aij ] = [P Q] and B = [bij ] = H be defined as above. 3. Then. and K = [kij ]. it may be easy to handle the matrix product using the block form. The order of the matrices P. for 1 ≤ i ≤ n and 1 ≤ j ≤ p. 
The matrix products P H and QK are valid as the order of the matrices P. Suppose r < m. H and K are smaller than that of A or B. Similarly. MATRICES 1. Then K AB = P H + QK. First note that the matrices P H and QK are each of order n × p. or . Q = [Qij ]. Show that. are called the blocks of the matrices A and B.   1 2   8. H. denoted by tr (A) as tr (A) = a11 + a22 + · · · ann . respectively. we can talk of matrix product AB as block product of matrices.3. we define trace of A. show 3 1 that there does not exist any matrix C such that AC = I3 . Let A = 2 1 . R+G S+H Similarly. Let x = 1. Miscellaneous Exercises Exercise 1. Let A and B be two m × n matrices and let x be an n × 1 column vector.2. the product P E need not be defined. Then for two square matrices. show the following: (a) tr (A + B) = tr (A) + tr (B). (c) Is C = AB? 4. if the product AB is defined. F. G. (b) Prove that if Ax = Bx for all x. y= and B = . But. y2 ] and zt = [z1 . (a) Prove that if Ax = 0 for all x. there do not exist matrices A and B such that AB − BA = cIn for any c = 0. Then the matrices P. A and B of the same order. Show that A = αI for some α ∈ R.3.11. then A = B. For a square matrix A of order n. and y2 = b21 z1 + b22 z2 x2 = a21 y1 + a22 y2 (a) Compose the two transformations to express x1 . yt = [y1 . (b) If xt = [x1 . z2 . (b) tr (AB) = tr (BA). 5. RE + SG RF + SH That is. And P E + QG P F + QH in this case. Complete the proofs of Theorems 1. then A is the zero matrix. R. 7. the orders of P and E may not be same and hence. Consider the two coordinate transformations y1 = b11 z1 + b12 z2 x1 = a11 y1 + a12 y2 .1. . the partition of B has to be properly chosen for purposes of block addition or multiplication. Therefore. Q. once a partition of A is fixed. S and n2 R S r2 G H E. Let A be an n × n matrix such that AB = BA for all n × n matrices B. − sin θ . 6. we may not be able P +E Q+F to add A and B in the block form.5 and 1. we have AB = . Even if A + B is defined. SOME MORE SPECIAL MATRICES m1 m2 Suppose A = s1 s2 17 and B = r1 P Q n1 E F . x2 in terms of z1 .6 2. if A + B and P + E is defined then A + B = . B and C such that x = Ay. 3. if both the products AB and P E are defined. z2 ] then find matrices A. Show that there exist infinitely many matrices B such that BA = I2 . y = Bz and x = Cz. x2 ]. Also. A= x2 y2 sin θ 0 −1 and y = Bx. Geometrically interpret y = Ax cos θ x1 y1 cos θ 1 0 .2. . then show that   a1 B    a2 B  AB = [Ab1 . R and S are symmetric. If P. what can you say about A? Are P. is same as multiplying each column of B by A. left multiplication by A. 1 −i − 2 2. right multiplication by B. denoted by A.4 Matrices over Complex Numbers Here the entries of the matrix are complex numbers. when A is symmetric? 11. Let A = [aij ] and B = [bij ] be two matrices. Then 1 i−2  1 0   A∗ = 4 − 3i 1 . A square matrix A over C is called skew-Hermitian if A∗ = −A. Let A = 1 0 4 + 3i i . Q. One just needs to look at the following additional definitions. 1 5 3 2 0 1 1 1 0 1 1 0 3 1 1 7 7 7 0 5 1 1 10. is the matrix B = [bij ] with bij = aji . All the definitions still hold. denoted by A∗ . Abp ] =  .4.  . MATRICES 1 6 0 9.   . 4. . . is the matrix B = [bij ] with bij = aij . .  an B [That is. Similarly. Q. R and S R S symmetric. . For example. If the product AB is defined. . Suppose a1 . . Definition 1. A square matrix A over C is called Hermitian if A∗ = A. b2 . . Ab2 . If A = [aij ] then the Conjugate of A. If A = [aij ] then the Conjugate Transpose of A.] 1. 
bp are the columns of B.18 CHAPTER 1. is same as multiplying each row of A by B. . . A square matrix A over C is called unitary if A∗ A = AA∗ = I. . . Let A = 1 0 4 + 3i i . Then 1 i−2 A= 1 0 4 − 3i −i . 5. −i −i − 2  3. For example. . an are the rows of A and b1 . Let A be an m × n matrix over C. Let A be an m×n matrix over C. Compute the matrix product AB using the block matrix multiplication for the matrices A = 6 6 4 0 0 1 6 1 6 and B = 6 4 1 −1 2 2 1 2 2 1 1 1 −1 1 1 7 7 7.  . a2 .1 (Conjugate Transpose of a Matrix) 1. Let A = P Q . Show that for any square matrix A. A+A∗ 2 is Hermitian. 19 Exercise 1. MATRICES OVER COMPLEX NUMBERS 6. T = A−A∗ 2 is skew-Hermitian. 3. skew-Hermitian and unitary matrices that have entries with non-zero imaginary parts.2 If A = [aij ] with aij ∈ R. . and 4. Show that if A is a complex triangular matrix and AA∗ = A∗ A then A is a diagonal matrix.4.4. Remark 1.1. Give examples of Hermitian.4. 2. then A∗ = At . Restate the results on transpose in terms of conjugate transpose. S = A = S + T.3 1. A square matrix A over C is called Normal if AA∗ = A∗ A. 20 CHAPTER 1. MATRICES . 0). In other words. 1)t with y arbitrary. Observe that in this case. b (a) If a = 0 then the system has a unique solution x = a . a1 b2 − a2 b1 = 0. 2. Consider the system ax = b. consider 3 equations in 3 unknowns.Chapter 2 Linear System of Equations 2. Thus for the system a1 x + b1 y = c1 and a2 x + b2 y = c2 . (b) If a = 0 and i.1 Introduction Let us look at some examples of linear systems. We now consider a system with 2 equations in 2 unknowns. y)t = (1. 0. Here again. the set of solutions is given by the points of intersection of the two lines. If one of the coefficients. a1 c2 − a2 c1 = 0 and b1 c2 − b2 c1 = 0. ii. The unique solution is (x. we have three cases. Observe that in this case. c) = (0. A linear equation ax + by + cz = d represent a plane in R3 provided (a. There are three cases to be considered. 1. Each case is illustrated by an example. b = 0 then the system has no solution. 0)t + y(−2. y)t = (1 − 2y. The equations represent a pair of parallel lines and hence there is no point of intersection. 3. As in the case of 2 equations in 2 unknowns. a1 b2 − a2 b1 = 0. (b) Infinite Number of Solutions x + 2y = 1 and 2x + 4y = 2. (a) Unique Solution x + 2y = 1 and x + 3y = 1. The set of solutions is (x. Observe that in this case. Suppose a. namely all x ∈ R. As a last example. The three cases are illustrated by examples. 21 . b = 0 then the system has infinite number of solutions. both the equations represent the same line. y)t = (1. a or b is non-zero. we have to look at the points of intersection of the given three planes. b ∈ R. (c) No Solution x + 2y = 1 and 2x + 4y = 3. 0)t . Consider the equation ax + by = c. b. then this linear equation represents a line in R2 . a1 b2 − a2 b1 = 0 but a1 c2 − a2 c1 = 0. 3 (Solution of a Linear System) A solution of the linear system Ax = b is a column vector y with entries y1 . The unique solution to this system is (x.1) is called homogeneous if b1 = 0 = b2 = · · · = bm and non-homogeneous otherwise.  . . for 1 ≤ i ≤ m and 1 ≤ j ≤ n.. the three planes intersect at a point.1) is satisfied by substituting yi in place of xi . 1. and b =  . . . bi ∈ R.1) = where for 1 ≤ i ≤ n. That is.1. . 2. . . if it satisfies Ax = 0. In this case. and is called the trivial solution. . the system Ax = 0 is called the associated homogeneous system. . x + 2y + 2z = 5 and 3x + 4y + 4z = 11. For a system of linear equations Ax = b. 
The set of solutions to this system is (x.22 CHAPTER 2. z)t = (1. yn such that the linear system (2. x =  .1.1.. . y2 . 0)t + z(0. (b) Infinite Number of Solutions Consider the system x + y + z = 3. A non-zero n-tuple x. x+ 4y + 2z = 7 and 4x+ 10y − z = 13.  .  .  . i. 1)t .  . . . z)t = (1. is the augmented matrix of the linear system (2.  am1 am2 · · · amn xn bm The matrix A is called the coefficient matrix and the block matrix [A b] . y. aij . and 1 ≤ j ≤ m. We rewrite the above equations in the form Ax = where b. . xn is a set of equations of the form a11 x1 + a12 x2 + · · · + a1n xn a21 x1 + a22 x2 + · · · + a2n xn .   . with z arbitrary: the three planes intersect on a line.   . 1)t. (c) No Solution The system x + y + z = 3. .e. .  . y2 .1. bm (2. if yt = [y1 . z)t = (1. the entry aij of the coefficient matrix A corresponds to the ith equation and j th variable xj .  . .1. . Definition 2. Remark 2.2 Observe that the ith row of the augmented matrix [A b] represents the ith equation and the j th column of the coefficient matrix A corresponds to coefficients of the j th variable xj . .  . is called a non-trivial solution. LINEAR SYSTEM OF EQUATIONS (a) Unique Solution Consider the system x+ y + z = 3.  . Definition 2.1. . y. Note: The zero n-tuple x = 0 is always a solution of the system Ax = 0. −1. x2 .      a11 a12 · · · a1n x1 b1        a21 a22 · · · a2n   x2   b2  A= .1 (Linear System) A linear system of m equations in n unknowns x1 . .1). 2 − z. .  . we get three parallel lines as intersections of the above planes taken two at a time. am1 x1 + am2 x2 + · · · + amn xn = = b1 b2 . yn ] then Ay = b holds. That is. x + 2y + 2z = 5 and 3x + 4y + 4z = 13 has no solution. .1. The readers are advised to supply the proof. Linear System (2. 2 and 3 are called elementary operations. interchange of two equations. Now. z) = (1. 1. ROW OPERATIONS AND EQUIVALENT SYSTEMS 23 2. Or in terms of a vector.2) have the same set of solutions. and 4x + 10y − z = 13. y. x + y + z = 3.) (obtained by subtracting 4 times the first equation from the third equation.1.1. (why?) 3. 1)}. which has the same set of solution as the system (2.4 Let us solve the linear system x + 7y + 3z = 11. (why?) 2. z = 1 implies y = 4−1 = 1 and x = 3 − (1 + 1) = 1. (2.1 (Elementary Operations) The following operations 1. =3 =4 =1 divide the second equation by 2 divide the third equation by 7 (2. (why?) 5. we eliminate x from 2nd and 3rd equation to get the linear system x+y+z 6y + 2z 6y − 5z =3 =8 =1 (obtained by subtracting the first equation from the second equation.1.1.2.1 A Solution Method Example 2.2.1.) This system and the system (2.5) (2. say “interchange the ith and j th equations”.3) 2.3) to get the system x+y+z 6y + 2z 7z =3 =8 =7 obtained by subtracting the third equation from the second equation.1. (why?) 4.) .2 Row Operations and Equivalent Systems Definition 2.4) and system x+y+z 3y + z z has the same set of solution.3). Solution: 1. y.1. 1. we eliminate y from the last equation of system (2. The system (2.2. The above linear system and the linear system x+y+z x + 7y + 3z 4x + 10y − z =3 = 11 = 13 Interchange the first two equations. the set of solution 3 t is { (x.1.2) with the original system.1. (compare the system (2.1.1.2) has the same set of solution.4) (2. z) : (x. Using the 1st equation. Using the 2cd equation. αn ) is also a solution for the k th Equation (2. and aj1 α1 + aj2 α2 + · · · ajn αn = bj . (compare the system (2. 
we get back to the linear system from which we had started. . (2. It will be a useful exercise for the reader to identify the inverse operations at each step in Example 2. in Example 2.4). inverse operation sends us back to the step where we had precisely started.2. “divide the k th equation by c = 0”. 2. Let (α1 .2. Definition 2. 2.2.2. using Equation (2. .1. (α1 . Proof.) Remark 2. .1) Therefore. 3. Then substituting for αi ’s in place of xi ’s in the k th and j th equations. 1.4. The linear systems at each step in Example 2. if we interchange the first and the second equation.” The reader is advised to prove the result for other elementary operations. This means the operation at Step 1. . after applying a finite number of elementary operations.1. has an inverse operation. But then the k th equation of the linear system Cx = d is (ak1 + caj1 )x1 + (ak2 + caj2 )x2 + · · · + (akn + cajn )xn = bk + cbj .2) or the system (2.1.1. . multiply a non-zero constant throughout an equation. the application of a finite number of elementary operations helped us to obtain a simpler system whose solution can be obtained directly. So. In other words. Note that at Step 1.5).1). . namely.1.5) and the system (2. αn ) be a solution of the linear system Ax = b. In this case. (ak1 + caj1 )α1 + (ak2 + caj2 )α2 + · · · + (akn + cajn )αn = bk + cbj .1. say “multiply the k th equation by c = 0”. replace an equation by itself plus a constant multiple of another equation.2. Therefore. (compare the system (2. observe that the elementary operations helped us in getting a linear system (2. Lemma 2. In Example 2. which was easily solvable.2 1. have corresponding inverse operations.3) with (2.2.3).2. LINEAR SYSTEM OF EQUATIONS 2.4.4) with (2.1.1. “interchange the ith and j th equations”. We prove the result for the elementary operation “the k th equation is replaced by k th equation plus c times the j th equation. . Then the linear systems Ax = b and Cx = d have the same set of solutions. a simpler linear system is obtained which can be easily solved.4 Let Cx = d be the linear system obtained from the linear system Ax = b by a single elementary operation.24 CHAPTER 2. .1. .1. we get ak1 α1 + ak2 α2 + · · · akn αn = bk .3 (Equivalent Linear Systems) Two linear systems are said to be equivalent if one can be obtained from the other by a finite number of elementary operations. say “replace the k th equation by k th equation plus c times the j th equation”.4 are equivalent to each other and also to the original linear system. Note that the three elementary operations defined above.2) (2. the systems Ax = b and Cx = d vary only in the k th equation. “replace the k th equation by k th equation minus c times the j th equation”. That is.) 3.2). α2 .4. α2 .1. say “replace the k th row by k th row plus c times the j th row”. .2.2.2. Exercise 2.2.  . [A b] =  . multiply a non-zero constant throughout a row.2. .5). We prove the theorem by induction on n. Proof.       2 0 3 5 1 0 3 5 0 1 1 2 2 2 − −→ −  −− −   −→   2 0 3 5 R12 0 1 1 2 R1 (1/2) 0 1 1 2  .5. For solving a linear system of equations. Lemma 2. we applied elementary operations to equations.2. interchange of two rows. .4 again at the “last step” (that is.9 The three matrices given below are row equivalent. 1 1 1 3 1 1 1 3 1 1 1 3   0 1 1 2   Whereas the matrix 2 0 3 5 is not row equivalent to the matrix 1 1 1 3  1  0 1 0 1 2 3 1 1  2  5 . assume that the theorem is true for n = m. suppose n = m + 1.  . 
say “interchange the ith and j th rows”. we have the proof in this case.4 answers the question. xn and the sign of equality (that is. If n = 1. . . . . Lemma 2.6 (Elementary Row Operations) The elementary row operations are defined as: 1. 2.2.2.2.1 Gauss Elimination Method Definition 2. ROW OPERATIONS AND EQUIVALENT SYSTEMS 25 Use a similar argument to show that if (β1 . Now. Therefore.. . Example 2. “ = ”) are not disturbed. Let us formalise the above section which led to Theorem 2. denoted Rk (c).  .7 Find the inverse row operations corresponding to the elementary row operations that have been defined just above. It is observed that in performing the elementary operations. denoted Rij . replace a row by itself plus a constant multiple of another row. at the (m + 1)th step from the mth step) to get the required result using induction. If n > 1.10 (Forward/Gauss Elimination Method) Gaussian elimination is a method of solving a linear system Ax = b (consisting of m equations in n unknowns) by bringing the augmented matrix   a11 a12 · · · a1n b1    a21 a22 · · · a2n b2   . 3. Definition 2. say “multiply the k th row by c = 0”. Definition 2.4 is now used as an induction step to prove the main result of this section (Theorem 2. . denoted Rkj (c).  . . the calculations were made on the coefficients (numbers).2.8 (Row Equivalent Matrices) Two matrices are said to be row-equivalent if one can be obtained from the other by a finite number of elementary row operations. we just need to work with the coefficients. Apply the Lemma 2. x2 . βn ) is a solution of the linear system Cx = d then it is also a solution of the linear system Ax = b. β2 . am1 am2 · · · amn bm .5 Two equivalent systems have the same set of solutions. . in place of looking at the system of equations as a whole. . Let n be the number of elementary operations performed on Ax = b to get Cx = d. Hence. These coefficients when arranged in a rectangular array gives us the augmented matrix [A b].2. 3 2. Theorem 2.2. .2. . The variables x1 . . .2. 3 = 5 2 =2 =1  1 0  0 1 0 0 3 2 3 x + 2z y+z z 1 1  2 . . 2x + 3z y+z x+y+z =5 =2 =3  2  0 1  0 1 1  3 5  1 2 . Interchange 1st and 2nd equation (or R12 ). the augmented matrix is 2 0 1 1 lowing steps. LINEAR SYSTEM OF EQUATIONS       c11 0 . 1 5 2 The last equation gives z = 1. the second equation now gives y = 1. Hence the set of solutions is (x.  3 5 1 0 x + 3z = 2 2 2  y+z =2 0 1 1 1 1 y − 1z = 2 0 1 −2 2 4. 1)t . a unique solution. y+z 2x + 3z = = 2 5 3 1 3 1  2  5 . Finally the first equation gives x = 1. . 1 3  2.  3 5 1 0 x + 3z = 2 2 2  y+z =2 0 1 1 3 = −2 −3z 0 0 −3 2 2 5.2. cmn d1 d2 .26 to an upper triangular form CHAPTER 2. 1. . Multiply the 3rd equation by −2 3  2 . Example 2. Add −1 times the 1st equation to the 3rd equation (or R31 (−1)). . 1. Divide the 1st equation by 2 (or R1 (1/2)). Add −1 times the 2nd equation to the 3rd equation (or R32 (−1)).. 0 c12 c22 . . .11 Solve the linear system by Gauss elimination method. y. .   x+y+z =  0 1  Solution: In this case. −3 2  5 2  (or R3 (− 2 )). dm  This elimination process is also called the forward elimination method. . 3  5 2 3. The following examples illustrate the Gauss elimination procedure. 1 2 5 2  2 . z)t = (1. 3 x + 2z y+z x+y+z = 5 2 =2 =3 1 0  0 1 1 1 3 2 1 1  2 . . . 0 ··· ··· . The method proceeds along the fol3   . ··· c1n c2n . 1 2  1 3  1 2 . Add −3 times the first equation to the third equation.2. z)t = (1. the set of solutions is (x. 
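A minimal Python version of the forward elimination and back substitution just described is sketched below; it is not part of the notes. To keep it short it assumes every pivot encountered is non-zero, which is the case for the worked examples of this section; a complete implementation would interchange rows whenever a pivot vanishes.

    import numpy as np

    def gauss_solve(A, b):
        # forward elimination to an upper triangular form, then back substitution
        # (simplified sketch: assumes all pivots are non-zero, so no row interchanges)
        M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
        n = len(b)
        for k in range(n):
            for i in range(k + 1, n):
                M[i] -= (M[i, k] / M[k, k]) * M[k]          # eliminate x_k from equation i
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):                      # back substitution
            x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
        return x

    # the system 2x + 3z = 5, y + z = 2, x + y + z = 3 has the solution (1, 1, 1)
    print(gauss_solve([[2, 0, 3], [0, 1, 1], [1, 1, 1]], [5, 2, 3]))    # [1. 1. 1.]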
x+y+z y+z 3x + 4y + 4z =3 =2 = 11  1 1  0 1 3 4  1  0 0  1  0 0  1 3  1 2 .12 Solve the linear system by Gauss elimination method. 0 0 2. 4 11  1 3  1 2 .2.2. Add −3 times the first equation to the third equation.13 Solve the linear system by Gauss elimination method. x+y+z x + 2y + 2z = 3 = 5 27 3x + 4y + 4z = 11   1 1 1 3   Solution: In this case. 1)t.2. z)t = (1. with z arbitrary. In other words. Example 2. x+y+z x + 2y + 2z = 3 = 5 3x + 4y + 4z = 12   1 1 1 3   Solution: In this case. x+y+z y+z y+z =3 =2 =2 1 1 1 3. 2 − z. 2. x+y+z y+z y+z =3 =2 =3 1 1 1 . y. −1. 0)t + z(0. the augmented matrix is 1 2 2 5  and the method proceeds as follows: 3 4 4 11 1. 1 3 2. Add −1 times the first equation to the second equation. Add −1 times the second equation to the third equation x+y+z y+z =3 =2 1 1 0 Thus. x+y+z y+z 3x + 4y + 4z =3 =2 = 12  1 1  0 1 3 4  1  0 0  1 3  1 2 . the system has infinite number of solutions. ROW OPERATIONS AND EQUIVALENT SYSTEMS Example 2. Add −1 times the first equation to the second equation. 4 12  1 3  1 2 . the augmented matrix is 1 2 2 5  and the method proceeds as follows: 3 4 4 12 1. δij is usually referred to as the Kronecker delta function.3 Row Reduced Echelon Form of a Matrix Definition 2. Example 2. z. 0 0  5 2   is not in the row reduced form. (why?) 1 0 Definition 2. 2. the column containing this 1 has all its other entries zero.2. In . Hence.3 (Leading Term.3.3. The matrix  0 0  0 1   are also in row reduced form. This can never hold for any value of x. Let [C d] be the row-reduced matrix obtained by applying the Gauss elimination method to the augmented matrix [A b].4 (Basic.3. Leading Column) For a row-reduced matrix.1 (Row Reduced Form of a Matrix) A matrix C is said to be in the row reduced form if 1. Then the variables corresponding to the leading columns in the first n columns of [C d] are called the basic variables. 2. Recall that the (i. The variables which are not basic are called free variables. One of the most important examples of a row reduced matrix is the n × n identity matrix. 3. y. the system has no solution. 0 1 The third equation in the last step is 0x + 0y + 0z = 1. 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 1 0 −1 0 1 0   0 0  0 0   and  0 0 0 1 1 0 0 0 0 0 1 0 4 0 1 0  0 0  2. j)th entry of the identity matrix is  1 if i = j . . Add −1 times the second equation to the third equation x+y+z y+z 0 =3 =2 =1 LINEAR SYSTEM OF EQUATIONS  1  0 0 1 1 0  1 3  1 2 . Definition 2.28 CHAPTER 2.2 1. A matrix in the row reduced form is also called a row reduced matrix. the first non-zero entry in each row of C is 1. Iij = δij = 0 if i = j.14 Note that to solve a linear system.3. the first non-zero entry of any row is called a leading term. one needs to apply only the elementary row operations to the augmented matrix [A b]. The columns containing the leading terms are called the leading columns. Free Variables) Consider the linear system Ax = b in n variables and m equations. The matrices  0 0  1 0  3. Ax = b. Remark 2. 3.3. j) or (k. (e) x + y + z = 3. if we start with a 3 × 3 identity matrix I3 . j if (k. x + y + 4z = 6 and x + y − 4z = −1. . which is obtained by the application of the elementary  1   th entry of E is (E ) matrix. In .15  1  1. Eij (c). 1 0 That is.3. ) = (j. 9 11 13 15 −3 −1 13 10 2  2. Let A = 2 3  1 2  2 0 3 4 3 3 5   1 2 0 −  −→  4 R23 3 4 2 0 6   1 3 0   5 6 = 0 0 3 4  0 0  0 1 A = E23 A. 
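For readers who wish to reproduce these computations symbolically, SymPy's linsolve returns the complete solution set, exhibiting the free parameter in the infinite case and the empty set in the inconsistent case. The snippet is illustrative only and assumes SymPy is installed.

    from sympy import symbols, linsolve

    x, y, z = symbols('x y z')
    # infinitely many solutions: x+y+z=3, x+2y+2z=5, 3x+4y+4z=11
    eqs = [x + y + z - 3, x + 2*y + 2*z - 5, 3*x + 4*y + 4*z - 11]
    print(linsolve(eqs, x, y, z))        # {(1, 2 - z, z)} : z is the free parameter

    # changing the last right-hand side to 13 makes the system inconsistent
    eqs_bad = [x + y + z - 3, x + 2*y + 2*z - 5, 3*x + 4*y + 4*z - 13]
    print(linsolve(eqs_bad, x, y, z))    # EmptySet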
the (k.14 There are three types of elementary matrices. 1 2. 2.2 Elementary Matrices Definition 2. ) c if (k.j)    0 otherwise Example 2. In . j) = c if i = j = k .13 A square matrix E of order n is called an elementary matrix if it is obtained by applying exactly one elementary row operation to the identity matrix.  . j) . Eij . Thus. k k (i. which is obtained by the application of the elementary row operation Rij (c) to the identity  1 if k =   th entry of E (c) is (E ) matrix. )    0 otherwise In particular. x + y − z = 1. interchanging the two rows of the matrix A is same as multiplying on the left by the corresponding elementary matrix. In . ) =    0 row operation Rij to the identity if k = and = i. In . we see that the left multiplication of elementary matrices to a matrix results in elementary row operations. x + y − z = 1 and x + y + 4z = 6. which is obtained by the application of the elementary row operation Rk (c) to the identity  1 if i = j and i = k   th entry of E (c) is (E (c)) matrix.  −6 −2   8 6 4 0 −2 −4    −8 −10 −12 −4 −6 −8 31 2. then E23  1  = 0 0   c 0 0 0   0 1 . The (k. 1.3. Then 6   1 0 0   0 . otherwise 3. The (i. In other words. E1 (c) = 0 1 0 0 1 0 2 3 0 3 4 5  0  4 . ) = (i. ROW REDUCED ECHELON FORM OF A MATRIX (d) x + y + z = 3. Ek (c).   −1 1 3 5 1 3 5 7   1. Find the row-reduced echelon form of the following matrices.3. i) .2. ) = (i. Remark 2. and E23 (c) = 0 1 0 0 1  0  c . ) 1 ij ij (k. ij ij (k. When is this P unique? 4. 3.32 CHAPTER 2. That is.  0 1  2.3. Let A and B be two m × n matrices.18 1. Then f (A) = 2 3 0 = A 0 0 1 = AE23 .16 The column transformations obtained by right multiplication of elementary matrices are called elementary column operations. consider an m × n matrix A and an elementary matrix E of order n. namely 1.  1 2  Example 2. E is the matrix obtained from I by applying the elementary row operation e. Does the Gauss-Jordan method also corresponds to multiplying by elementary matrices on the left? Give reasons. we solved linear systems using Gauss elimination method or the Gauss-Jordan method. In the examples considered. there is a corresponding column transformation. We summarize: Definition 2. Is the inverse of an elementary matrix. 2 0 6 42 1 1 0 1 1 3 1 3 2 7 55 3 2 1 6 42 0 2 1 6 40 0 2 1 6 40 0 1 0 1 1 1 0 0 1 0 1 3 1 1 1 3 0 0 1 3 2 3 2 3 1 1 1 3 1 1 1 3 3 −−− 6 −−→ −→ 6 − 7 7 7 1 1 25 55 R21 (−2) 40 −2 1 −15 R23 40 0 −2 1 −1 0 1 1 2 2 3 2 3 2 3 1 0 0 1 1 1 1 3 3 −−→ − −→ 7 7− − − 6 7−− − 6 25 R3 (1/3) 40 1 1 25 R12 (−1) 40 1 1 25 0 0 1 1 0 0 1 1 3 3 1 7 15 1 −→ − R13 −−→ −− R32 (2) −−− −−→ R23 (−1) Now.3. Then multiplying by E on the right to A corresponds to applying column transformation on the matrix A. . 2.3. Let e be an elementary row operation and let E = e(I) be the corresponding elementary matrix. Show that e(A) = EA. 0 1 0 3 5 4 Exercise 2. Show that the Gauss elimination method is same as multiplying by a series of elementary matrices on the left to the augmented matrix. we have encountered three possibilities. where P is product of elementary matrices.17 Let A = 2 0 3 4  3  3 and consider the elementary column operation f which interchanges 5     1 0 0 1 3 2     the second and the third column of A.4 Rank of a Matrix In previous sections. Show that every elementary matrix is invertible. Consider the augmented matrix [A b] = 2 0 1 1 same as the matrix product LINEAR SYSTEM OF EQUATIONS  1 2  3 5 . for each elementary matrix. 
also an elementary matrix? 2. Then prove that the two matrices A. Then the result of the steps given below is 1 3 E23 (−1)E12 (−1)E3 (1/3)E32 (2)E23 E21 (−2)E13 [A b]. B are row-equivalent if and only if B = P A. existence of a unique solution. Therefore. 1 1 0 Solution: Here we have     1 2 1 1 2 1 − − − − − −→   −− − − − − −   (a) 2 3 1 R21 (−2). it is clear that row-equivalent matrices have the same row-rank.1 (Consistent. 2 as follows. 0 −1 1 1 1 2     1 2 1 1 2 1 −− − − − −  − − − − −→    (b) 0 −1 −1 R2 (−1). Based on the above possibilities. we write ‘row-rank (A)’ to denote the row-rank of A. Thus. and 3. The question arises. 0 0 2 0 −1 1     1 0 −1 1 2 1 −−−−−−  − − − − − − →  (c) 0 1 1 R3 (1/2).3 1. 0 −1 −1 1 1 0 . RANK OF A MATRIX 2. row-rank(A) = 3. as to whether there are conditions under which the linear system Ax = b is consistent. The last matrix in Step 1d is the row reduced form of A which has 3 non-zero rows. R13 (1) 0 1 0 0 0 1 0 0 1  1  1 .4.2 (Row rank of a Matrix) The number of non-zero rows in the row reduced form of a matrix is called the row-rank of the matrix.4. By the very definition. no solution. we have the following definition. Determine the row-rank of A = 2 3 1 1 Solution: To determine the row-rank of A. existence of an infinite number of solutions. 33 Definition 2.  1 2  Example 2. 0 0 1 0 0 2     1 0 0 1 0 −1 −−−−−→  − − − − − −   (d) 0 1 1  R23 (−1). This result can also be easily deduced from the last matrix in Step 1b. we proceed     1 2 1 1 2 1 − − − − − −→   −− − − − − −   (a) 2 3 1 R21 (−2). To proceed further. note that the number of non-zero rows in either the row reduced form or the row reduced echelon form of a matrix are same. R31 (−1) 0 −1 −1 . R32 (1) 0 1 1 . Determine the row-rank of A = 2 3 1 .4. Also.4. For a matrix A. Inconsistent) A linear system is called consistent if it admits a solution and is called inconsistent if it admits no solution. we need a few definitions and remarks. R12 (−2) 0 1 1  . Definition 2. R31 (−1) 0 −1 −1 .   1 2 1   2. Recall that the row reduced echelon form of a matrix is unique and therefore. the number of non-zero rows is a unique number. The answer to this question is in the affirmative.2. Then the matrix D can be written in the s Ir B form . . Then there exist elementary matrices E1 . Note that. Then the row-reduced echelon form of A agrees with the first n columns of [A b].5. The first non-zero entry (the leading term) in each non-zero column moves down in successive columns. i2 . We now apply column operations to the matrix C. . 1) block of D is an identity matrix. the ith column s will have 1 in the sth row and zero elsewhere.3. . It will be proved later that row-rank(A) = column-rank(A). ir . Therefore. . F = . As the (1. C will have r leading columns.7 Let A be a matrix of rank r. Let C be the row reduced echelon matrix obtained by applying elementary row operations to the given matrix A. Let D be the matrix obtained from C by successively interchanging the sth and ith column of C for 1 ≤ s ≤ r. which has the following properties: 1. denoted rank (A). This gives the required result. Es and F1 . . we can have a matrix. Thus we are led to the following definition. we deduce row-rank(A) = 2. . . The first nonzero entry in each column is 1. F such that Ir 0 E1 E2 .34   1 2 1 1 2 − − − − −→  −− − − − −   (b) 0 −1 −1 R2 (−1). the matrix C will have the first r rows as the non-zero rows. E2 . 
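Right multiplication by an elementary matrix performs the corresponding column operation, while left multiplication performs the row operation. A short NumPy check, with a sample matrix, makes the contrast explicit (illustrative addition, not part of the notes).

    import numpy as np

    A = np.array([[1., 2., 3.], [2., 0., 3.], [3., 4., 5.]])
    E23 = np.eye(3); E23[[1, 2]] = E23[[2, 1]]      # elementary matrix for the interchange
    print(A @ E23)      # columns 2 and 3 of A interchanged (right multiplication)
    print(E23 @ A)      # rows 2 and 3 of A interchanged (left multiplication)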
Then the system of equations Ax = 0 has infinite number of solutions. . 2.3. we can define column-rank of A as the number of non-zero columns in B.4. 0 0 Proof. for 1 ≤ s ≤ r. 0 LINEAR SYSTEM OF EQUATIONS From the last matrix in Step 2b. Corollary 2. 2) can be made the zero matrix by application of column operations to D.16) to the matrix A.  1  1 . and hence row-rank(A) ≤ row-rank([A b]).6 The number of non-zero rows in the row reduced form of a matrix A is called the rank of A. The reader is advised to supply a proof. Remark 2. After application of a finite number of elementary column operations (see Definition 2. . Definition 2. 3. . 0 0 the block (1.4 Let Ax = b be a linear system with m equations and n unknowns. As rank(A) = r.8 Let A be a n × n matrix of rank r < n. . say B. . Theorem 2. F2 . So by Remark 2. say i1 .4.4.4. R32 (1) 0 1 0 −1 −1 0 0  CHAPTER 2. A column containing only 0’s comes after all columns with at least one non-zero entry. .4. .5 Consider a matrix A. . Remark 2. Es A F1 F2 . where B is a matrix of appropriate size. . . prove B1 A = 0 0 S3 0 0 0 0 0 that the matrix A1 is an r × r invertible matrix. B2 AC2 = . F2 . Then A can be written as A = BC. (b) if AB is defined.2. Let A and B be two matrices. If P and Q are invertible matrices and P AQ is defined then show that rank (P AQ) = rank (A). . Then show that A = ABX for some matrix X. [Hint: Choose non-singular matrices P.17. Then show that there exists invertible matrices Bi . Let A be an m × n matrix of rank r.2. Let A be an n × n matrix with rank(A) = n.] 0 A1 0 0 0 and P (AB)R = 9. Determine the ranks of the coefficient and the augmented matrices that appear in Part 1 and Part 2 of Exercise 2. . Show that (a) if A + B is defined. 3. Q2 . AC1 = . . . Let A be any matrix of rank r. Let Q1 . . RANK OF A MATRIX 35 Proof. By Theorem 2. if BA is defined and rank (A) = rank (BA). . . then rank(AB) ≤ rank(A) and rank(AB) ≤ rank(B). . Hence. Let A and B be two matrices such that AB is defined and rank (A) = rank (AB).7. . r + 2. . . Define Q = F1 F2 . E2 . 1 3 2 0 1 0 5. 8.12. and B3 AC3 = . . Then check that AQi = 0 for i = r + 1. . Also. as the elementary martices Ei ’s are being multiplied on the left of the matrix Exercise 2. 7. then show that A C B 0 −1 = 0 B −1 C −1 .4. . . we can use the Qi ’s which are non-zero (Use Exercise 1. . Q and R such that P AQ = " C 0 0 C −1 A1 . Find matrices P and Q which are product of elementary matrices such that B = P AQ where A = 2 4 8 1 0 0 and B = . Similarly. . 4.3. 10. n. If matrices B and C are invertible and the involved partitioned products are defined. Ci such that S1 0 A1 0 Ir 0 R1 R2 . Let A = [aij ] be an invertible matrix and let B = [pi−j aij ] for some nonzero real number p.4.2) to generate infinite number of solutions. Then the matrix 0 0    AQ =   0  Ir 0 . . Es and F1 . where both B and C have rank r and B is a matrix of size m × r and C is a matrix of size r × n. . . there exist elementary matrices E1 . 6. Define X = R 0 0 # " # 0 Q−1 . F . Es A F1 F2 . then rank(A + B) ≤ rank(A) + rank(B). −B −1 AC −1 . .9 1. F such that Ir 0 E1 E2 . F = . then A = # BA Y " for some matrix Y. Qn 0 0 be the columns of the matrix Q. . Then prove that A is row-equivalent to In . 2. Find the inverse of B.4. the respective variables x1 . we arrive at the existence and uniqueness results for the linear system Ax = b. Observations: 1. 2. B12 = −(A−1 A12 )P −1 . x5 and x6 are the basic variables.36 CHAPTER 2. Suppose A is the inverse of a matrix B. 
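The two row-rank computations above can be reproduced with SymPy, whose rref method returns the row reduced echelon form together with the pivot columns; counting the non-zero rows gives the row-rank. (Sketch only; SymPy assumed.)

    from sympy import Matrix

    A1 = Matrix([[1, 2, 1], [2, 3, 1], [1, 1, 2]])
    A2 = Matrix([[1, 2, 1], [2, 3, 1], [1, 1, 0]])
    R1, pivots1 = A1.rref()            # row reduced echelon form and pivot columns
    print(R1, A1.rank())               # the identity matrix; row-rank 3
    print(A2.rref()[0], A2.rank())     # one zero row; row-rank 2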
Partition A and B as follows: A11 A21 A12 B11 . k2 and k3 to the free variables x3 . then show that −1 B11 = A−1 + (A−1 A12 )P −1 (A21 A11 ). B22 A= −1 If A11 is invertible and P = A22 − A21 (A11 A12 ). x4 and x7 . B21 = −P −1 (A21 A−1 ). Thus. 3.1 Example Consider a linear system Ax = b which after the application of the Gauss-Jordan method reduces to a matrix [C d] with  1  0  0 [C d] =  0   0 0 0 1 0 0 0 0 2 1 0 0 0 0 −1 3 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 2 5 −1 1 0 0  8  1  2 . . We start with some observations. This number is also equal to the number of non-zero rows in [C d]. 4. x4 and x7 are free variables. The remaining variables. x3 . Based on this observation.5 Existence of Solution of Ax = b We try to understand the properties of the set of solutions of a linear system through an example. B= A22 B21 B12 . 2. LINEAR SYSTEM OF EQUATIONS 11. This example is more or less a motivation. we want to see the set of solutions. 2. The number of non-zero rows in C is 4. 2. 5. respectively. 11 11 11 11 and B22 = P −1 .5. The first non-zero entry in the non-zero rows appear in columns 1. 4   0 0 For this particular matrix [C d]. using the Gauss-Jordan method. x2 . 5 and 6. We assign arbitrary constants k1 . the solution set of the linear system has a unique n × 1 vector x0 satisfying Ax0 = b. The interested readers can read the proof in Appendix 15. The following corollary of Theorem 2.5. Cui = 0. Then exactly one of the following statement holds: 1.1. . EXISTENCE OF SOLUTION OF AX = B Hence.1 [Existence and Non-existence] Consider a linear system Ax = b.5. respectively. and m×1. and for 1 ≤ i ≤ 3. u2 =  1  and u3 =  0  .5. un−r are n × 1 vectors satisfying Au0 = b and Aui = 0 for 1 ≤ i ≤ n − r. Suppose rank (A) = r and rank([A b]) = ra . 3.1 is a very important result about the homogeneous linear system Ax = 0. where u0 . . A similar idea is used in the proof of the next theorem and is omitted. b are vectors with orders n×1. if ra = r = n.2 Let A be an m × n matrix and consider the linear system Ax = b. and x. the linear system has no solution. 2.5. we have the set of solutions as   x1 x   2   x3    x4  =     x5    x6  x7   8 − 2k1 + k2 − 2k3 1 − k − 3k − 5k  1 2 3      k1     k2     2 + k3       4 − k3 37 = k3         8 −2 1 −2 1 −1 −3 −5                 0 1 0 0         0 + k1  0  + k2  1  + k3  0  . u1 =  0  . . .2. If r < ra . 2.                 1 0 0 2         −1 0 0 4 1 0 0 0 Then it can easily be verified that Cu0 = d. u1 . .         −2 1 −2 8 −5 −3 −1 1                 0 0 1 0         Let u0 = 0 . if ra = r < n.5. Remark 2.1.5.                 2 0 0 1         4 0 0 −1 0 0 0 1 where k1 . we see that the linear system Ax = b is consistent if and only if rank (A) = rank([A b]). 1 ≤ i ≤ n − r}.2 Main Theorem Theorem 2. the set of solutions of the linear system is an infinite set and has the form {u0 + k1 u1 + k2 u2 + · · · + kn−r un−r : ki ∈ R. k2 and k3 are arbitrary. where A is a m × n matrix. Then by Theorem 2. we need to show that rank(A) < n. 2. we have obtained a non-trivial solution x0 . 2x + 3y + kz = 3. x + 2y + cz = 4. x − 2y + 7z = c is consistent. where x0 is a particular solution of Ax = b and xh is a solution Ax = 0. b.4 Consider the linear system Ax = b. That is. 2x + 6y − 11z = b.3. So. 
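The structure of the solution set described above is easy to verify numerically. Reassembling the displayed matrix [C d] and the vectors u0, u1, u2, u3 of the example (NumPy assumed; this check is an illustrative addition):

    import numpy as np

    C = np.array([[1, 0, 2, -1, 0, 0,  2],
                  [0, 1, 1,  3, 0, 0,  5],
                  [0, 0, 0,  0, 1, 0, -1],
                  [0, 0, 0,  0, 0, 1,  1]], float)
    d  = np.array([8, 1, 2, 4], float)
    u0 = np.array([ 8,  1, 0, 0, 2,  4, 0], float)   # particular solution
    u1 = np.array([-2, -1, 1, 0, 0,  0, 0], float)   # direction for the free variable x3
    u2 = np.array([ 1, -3, 0, 1, 0,  0, 0], float)   # direction for the free variable x4
    u3 = np.array([-2, -5, 0, 0, 1, -1, 1], float)   # direction for the free variable x7

    print(np.allclose(C @ u0, d))                            # True : C u0 = d
    print([np.allclose(C @ u, 0) for u in (u1, u2, u3)])     # [True, True, True]
    k1, k2, k3 = 2.0, -1.0, 3.0                              # any choice of the parameters
    print(np.allclose(C @ (u0 + k1*u1 + k2*u2 + k3*u3), d))  # True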
Then the homogeneous system Ax = 0 has a non-trivial solution if and only if rank(A) < n. Under this assumption. From this infinite set.5. c so that the linear system x + 2y − 3z = a. ii) a unique . (c) x + y + 2z = 3. k2 ∈ R. u − v = xh for some solution xh of Ax = 0.1). (b) x + y + z = 3.5.5. by Theorem 2. 2. by the uniqueness of the solution under the condition r = ra = n (see Theorem 2. (f) x − 2y = 1. x + y + 2cz = 7. n = rank(A) = rank [A 0] = ra . The system Ax = 0 has a non-trivial solution. (a) x + y + z = 3. 2x + 3y + 2cz = k. Proof. Ax0 = 0 and x0 = 0.6 1. (d) kx + y + z = 1. x2 are two solutions of Ax = 0. The system Ax = b has a unique solution for every b.38 CHAPTER 2. Proposition 2. Exercise 2. (e) x + 2y − z = 1. That is.5. Find the condition on a. x0 . Let A be an n × n matrix. solution and iii) infinite number of solutions. If u.5. we get x0 = 0. x + y + kz = 1. x + ky + 3z = 2. for b = 0. Now. Then ra = rank [A 0] = rank(A) < n.5. x − y + kz = 1. 3. we can choose any vector x0 that is different from 0. That is. x + 2y + 3cz = k. In conclusion. the set of solutions of the system Ax = b is of the form. So. Then the two statements given below cannot hold together. Suppose x1 .1. Suppose the system Ax = 0 has a non-trivial solution. we have a solution x0 = 0. That is. Thus. A contradiction to the fact that x0 was a given non-trivial solution. For what values of c and k-the following systems have i) no solution. v are two solutions of Ax = b then u − v is a solution of the system Ax = 0.3 Let A be an m × n matrix. x + 2y + 4z = k. If the system A2 x = 0 has a non trivial solution then show that Ax = 0 also has a non trivial solution. On the contrary. let us assume that rank(A) < n. 2. LINEAR SYSTEM OF EQUATIONS Corollary 2. the solution set of the linear system Ax = 0 has infinite number of vectors x satisfying Ax = 0. Then k1 x1 + k2 x2 is also a solution of Ax = 0 for any k1 . Hence. any two solutions of Ax = b differ by a solution of the associated homogeneous system Ax = 0.5. x + 2y + cz = 5. We now state another important result whose proof is immediate from Theorem 2. 1.5.1 and Corollary 2. Remark 2.5 1. assume that rank(A) = n. x + ky + z = 1. {x0 + xh }. Also A0 = 0 implies that 0 is a solution of the linear system Ax = 0. ky + 4z = 6. 2. rank (A) = n. That is. 1 =⇒ 2 Let if possible rank(A) = r < n. Since A is invertible.5. where B1 is an r × n matrix. Then there exists an invertible matrix P (a product of elementary B1 B2 C1 matrices) such that P A = .5.5. 0. E2 . . Hence. .5. 0.5. the last row of the row reduced echelon form of A will be (0. = 0 C2 (2. 4.2. Let B = matrices) such that P A = B2 0 0 P = P In = P (AB) = (P A)B = C1 0 C2 0 C1 B1 + C2 B2 B1 . That is. 2 =⇒ 3 Suppose A is of full rank.1) Thus the matrix P has n − r rows as zero rows. Thus. 1). 3 =⇒ 4 Since A is row-equivalent to the identity matrix there exist elementary matrices E1 . 0 0 C2 where C1 is an r × n matrix.5.9 Let A be a square matrix of order n. Then A−1 exists. Then . A is of full rank. A is a product of elementary matrices. the row reduced echelon form of A is the identity matrix. Proof. 1. A is invertible.8 will be used in the next subsection to find the inverse of an invertible matrix. EXISTENCE OF SOLUTION OF AX = B 39 2. the row reduced echelon form of A has all non-zero rows. Then A−1 exists.8 For a square matrix A of order n. We will prove that the matrix A is of full rank. A is row-equivalent to the identity matrix. A contradiction to P being a product of invertible matrices. 
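The statement on homogeneous systems can be illustrated with SymPy: a coefficient matrix whose rank is smaller than the number of unknowns has a non-trivial null space. The matrix below is one of the earlier examples; the snippet itself is an illustrative addition.

    from sympy import Matrix

    A = Matrix([[1, 1, 1], [1, 2, 2], [3, 4, 4]])   # 3 unknowns, rank 2
    print(A.rank())                                  # 2
    null_vecs = A.nullspace()                        # a basis of {x : Ax = 0}
    print(null_vecs)                                 # one basis vector, proportional to (0, -1, 1)
    print(A * null_vecs[0])                          # the zero vector, so the solution is non-trivial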
rank(A) = r < n. let A−1 = . where B1 is an r×r matrix. . Let if possible. the following statements are equivalent. This implies. Ek such that A = E1 E2 · · · Ek In . Suppose there exists a matrix B such that AB = In . We know that elementary matrices are invertible and product of invertible matrices is also invertible. Suppose that AB = In . where the Ei ’s are elementary matrices. .5. A is of full rank. Proof. P cannot be invertible. Then there exists an invertible matrix P (a product of elementary B1 C1 C2 . A is product of elementary matrices. We repeat the proof for the sake of clarity. But A has as many columns as rows and therefore. 4 =⇒ 1 Suppose A = E1 E2 · · · Ek . Then P = P In = P (AA−1 ) = (P A)A−1 = B1 0 B2 0 B1 C1 + B2 C2 C1 . . 1. The ideas of Theorem 2. Theorem 2. . The idea used in the proof of the first part also gives the following important Theorem. Suppose there exists a matrix C such that CA = In . Hence. Theorem 2. we get the required result. 2.2) . . = 0 B2 (2. 3.5.3 Equivalent conditions for Invertibility Definition 2.7 A square matrix A or order n is said to be of full rank if rank (A) = n. . . 2.11 The following statements are equivalent for a square matrix A of order n. A is invertible as well. for the linear system Ax = 0. 1.8.8 A is of full rank. 3.5. 3 =⇒ 1 For 1 ≤ i ≤ n.5. . So. 3. the system Ax = b has a unique solution x = A−1 b. Hence. x2 . it is enough to show the existence of 1. . . . Using the first part.12 is non-zero. 1 . Let A be a 1 × 2 matrix and B be a 2 × 1 matrix having positive entries. . A is of full rank. Therefore. Remark 2. That is. This contradicts the assumption that Ax = 0 has only the trivial solution x = 0. xn ]. en ] = In . for every b. A is an invertible matrix. BA = In as well. the linear system Ax = 0 has infinite number of solutions. by Theorem 2. Show that a triangular matrix A is invertible if and only if each diagonal entry of A 2. Thus. Then by Theorem 2. Proof. A contradiction to P being a product of invertible matrices. Then AB = A[x1 . . x2 . Ax = b has a solution x for every b. Thus. . Hence. 0. 1 ≤ i ≤ n.9. Axn ] = [e1 . Exercise 2. Let A be an n × m matrix and B be an m × n matrix.10 This theorem implies the following: “if we want to show that a square matrix A of order n is invertible. . ith position By assumption. either a matrix B such that AB = In 2. xn ] = [Ax1 . Ax = 0 has only the trivial solution x = 0. 1. . rank (A) = n. That is.40 CHAPTER 2.5. Define a matrix B = [x1 . this system has a solution xi for each i. . Ax2 . . Thus by Corollary 2. it is clear that the matrix C in the second part. . the ith column of B is the solution of the system Ax = ei . . 1 =⇒ 2 Since A is invertible. That is.1 the system Ax = 0 has a unique solution x = 0. . . by Theorem 2. . and consider the linear system Ax = ei . Theorem 2. the number of unknowns is equal to the rank of the matrix A.5. . or a matrix C such that CA = In .5.5.5. 1 =⇒ 3 Since A is invertible. the matrix A is invertible. P cannot be invertible.3. . . . 2 =⇒ 1 Let if possible A be non-invertible. 0)t . by Theorem 2.8. LINEAR SYSTEM OF EQUATIONS Thus the matrix P has n − r rows as zero rows. define ei = (0. . Prove that the matrix I − BA is invertible if and only if the matrix I − AB is invertible. 0. using Theorem 2. the matrix A is not of full rank. is invertible.5. Hence AC = In = CA. e2 .5. That is. Which of BA or AB is invertible? Give reasons. A is invertible. 
−1/4 −1/4 3/4  3 2 − − −→ 1 1 0 0 −− − − 2 7 R23 (−1/3) 6 0 5 − − − − 40 1 0 − − −→ 3 R13 (−1/2) 0 0 1 4 2 3 −3 1 0 0 8 −− − − 6 − − −→ −1 7 5 R12 (−1/2) 40 1 0 4 3 0 0 1 4 5 8 −1 4 −1 4 1 8 3 4 −1 4 3 4 −1 4 −1 4 −1 4 3 4 −1 4 −3 8 −1 7 4 5 3 4 3 −1 4 −1 7 . Thus.2. Let A be a square matrix of order n.5. 1 1 2   2 1 1 1 0 0   Solution: Consider the matrix 1 2 1 0 1 0 . If B = In . 0 2 2 − 2 1 0 R2 (2/3) 0 1 1 − 3 2 0 3 3 1 1 3 1 0 1 2 −2 0 1 0 2 3 −2 0 1 2 2     1 1 1 0 0 0 0 1 1 2 1 1 1 2 2 2 2 2 − − −→  − − − −   1 1 1 1 2 4. then A−1 = C or else A is not invertible. Also.5.14 Find the inverse of the matrix 1 2 1 using the Gauss-Jordan method.4 Inverse and the Gauss-Jordan Method We first give a consequence of Theorem 2. 0 1 3 − 3 2 0 R32 (−1/2) 0 1 3 − 3 0 3 3 3 1 1 0 1 2 −2 0 1 0 0 4 −3 −1 1 2 3 3     1 1 1 1 1 1 1 2 2 0 0 0 0 1 2 2 2 2 − −→  −− −   1 1 2 2 1 5. Suppose the row reduced echelon form of the matrix [A In ] is [B C]. E2 . This implies A−1 = E1 E2 · · · Ek .5. 4 5 3 4 3 . .   2 1 1   Example 2. 60 4 0 2 1 7. . . Corollary 2. EXISTENCE OF SOLUTION OF AX = B 41 2. Suppose that a sequence of elementary row-operations reduces A to the identity matrix. Summary: Let A be an n × n matrix. Proof. Then E1 E2 · · · Ek In = A−1 .8 and then use it to find the inverse of an invertible matrix. A sequence of steps in the Gauss-Jordan method 1 1 2 0 0 1 are:     1 1 2 1 1 0 0 2 1 1 1 0 0 2 2 − −→  −− −   1. 0 1 3 − 3 0 R3 (3/4) 0 1 1 − 3 0 3 3 3 1 3 4 1 0 0 3 −3 −1 1 0 0 1 −4 −1 4 3 4 2 1 6. let E1 .5. Then the same sequence of elementary row-operations when applied to the identity matrix yields A−1 .5. the inverse of the given matrix is −1/4 3/4 −1/4 . 1 2 1 0 1 0 − − − 0 2 1 − 2 1 0 −−→ 2 R31 (−1) 1 1 0 2 3 −2 0 1 1 1 2 0 0 1 2     1 1 1 1 1 1 2 1 2 1 0 0 0 0 2 2 2 2 −− −  − −→    1 1 1 3 3. 60 4 0 1 2 1 0 1 2 1 2 1 3 1 0 0 1 1 2 −1 3 −1 4 0 2 3 −1 4 1 8 3 4 −1 4 1 0 5 8 −1 4 −1 4  3/4 −1/4 −1/4   8. Apply the Gauss-Jordan method to the matrix [A In ]. . Ek be a sequence of elementary row operations such that E1 E2 · · · Ek A = In .13 Let A be an invertible n× n matrix. 1 2 1 0 1 0 R1 (1/2) 1 2 1 0 1 0 1 1 2 0 0 1 1 1 2 0 0 1     1 1 1 −−→ 1 1 1 1 2 2 1 0 0 −−− 0 0 2 2 2 2    R21 (−1)  1 3 2. we mean the submatrix B of A. Cofactor of a Matrix) The number det (A(i|j)) is called the (i. ii)   6 −7 1 3 2 0 3 5 of the following matrices. we associate inductively (on n) a number.6. 2|1. We write Aij = det (A(i|j)) .   2 −1 3 3 3    3 2 . Definition 2.5  1 2 0 4  i)  0 0 0 0 2 2 1 3 1 +3· −2· 1 1 2 2 2 3 = 4 − 2(3) + 3(1) = 1.6.6. LINEAR SYSTEM OF EQUATIONS inverse of the following matrices using the Gauss-Jordan method. Then. a33 = a11 (a22 a33 − a23 a32 ) − a12 (a21 a33 − a31 a23 ) + a13 (a21 a32 − a31 a22 ) = a11 a22 a33 − a11 a23 a32 − a12 a21 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 (2. and 2 4 Definition 2. Then. written det(A) (or |A|) by  if A = [a] (n = 1).1)  1  For example. otherwise. (ii) 2 2 2 4 7 CHAPTER 2.6 Determinant Notation: For an n × n matrix A.2 (Determinant of a Square Matrix) Let A be a square matrix of order n. Let A = 1 2 a11 a21 2 1 a12 .6. is the number (−1)i+j Aij . by A(α|β). The (i. denoted Cij . called the determinant of A. for A =  a11  2. (iii) −1 3 −2 . 1 2  Example 2. A(1|3) = 7 1 3 .4 1.5. which is obtained by deleting the αth row and β th column. Example 2. j)th cofactor of A. 
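The same Gauss-Jordan computation of an inverse can be carried out with SymPy by row reducing the block matrix [A I3]: once the left block becomes the identity, the right block is A^{-1}. The sketch below is an illustrative addition (SymPy assumed) and reproduces the inverse obtained in the worked 3 x 3 example.

    from sympy import Matrix, eye

    A = Matrix([[2, 1, 1], [1, 2, 1], [1, 1, 2]])
    aug = Matrix.hstack(A, eye(3))       # form the block matrix [A | I3]
    R, _ = aug.rref()                    # Gauss-Jordan: row reduce the whole block
    A_inv = R[:, 3:]                     # right block is A^{-1} once the left block is I3
    print(A_inv)                         # entries 3/4 on the diagonal, -1/4 elsewhere
    print(A * A_inv == eye(3))           # True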
if A = 2 1  2 3  3 1 then 2 2 det(A) = |A| = 1 · Exercise 2.1 Consider a matrix A = 1 3 2 4 A(1. iii) 1 b b2  . 3) = [4]. det(A) = |A| = a11 A11 − a12 A12 = a11 a22 − a12 a21 .  a n det(A) = (−1)1+j a1j det A(1|j) .  j=1 For example. Let A = a21 a31 a12 a22 a32 det(A) = |A| = a11 A11 − a12 A12 + a13 A13 = a11 a22 a32 a21 a23 − a12 a31 a33 a21 a23 + a13 a31 a33 a22 a32  a13  a23  . Then A(1|2) = 7 1 2 2 .6. a22 det(A) = |A| = 1 − 2 · 2 = −3.   3  2 . 2 4 1 4 7 2. j)th minor of A.  0 1 c c2 0 .15 Find the    1 1 2 3    (i) 1 3 2 . 2 7 3 2 0 1. Find the determinant   3 5 2 8  0 2 0 2   .3 (Minor.42 Exercise 2.6.    1 1 a a2  5    . With A. Then consider the parallelogram. Part 1 of Lemma 2. u2 ) and vt = (v1 . It is called non-singular if det(A) = 0. Area(P QRS) = = (u) (v) sin(θ) = (u) (v) 1− u•v (u) (v) 2 (u)2 + (v)2 − (u • v)2 = (u1 v2 − u2 v1 )2 = |u1 v2 − u2 v1 |.” Hence. Many authors define the determinant using “Permutations. v2 ) be two vectors in R2 . Q = u. Remark 2. the determinant is ± times the area of the parallelogram. P QRS. is the length of the 2 1 vector u. .6.6. DETERMINANT 2.3. The proof of the next theorem is omitted. if all the elements of one row or column are 0 then det(A) = 0. 4. if B is obtained from A by interchanging two rows. 2. if θ is the angle between the vectors u and v.2. With the above notation. We denote the length by (u).7 implies that “one can also calculate the determinant by expanding along any row. Show that the determinant of a triangular matrix is the product of its diagonal entries. one also has n det(A) = j=1 (−1)k+j akj det A(k|j) . formed by the vertices {P = (0.6.6 A matrix A is said to be a singular matrix if det(A) = 0.8 1. the claim holds. if B is obtained from A by multiplying a row by c then det(B) = c det(A).6. S = v. R = u + v}. Let ut = (u1 . where i = j then det(B) = det(A). 2.6. and u • u = (u2 + u2 ). if A is a square matrix having two rows equal then det(A) = 0.6. That is. in R2 . if B is obtained from A by replacing the jth row by itself plus k times the ith row. 3. Hence. for every k. √ Recall that the dot product.7 Let A be an n × n matrix. Remark 2. cos(θ) = (u) (v) Which tells us. 43 Definition 2.” It turns out that the way we have defined determinant is usually called the expansion of the determinant along the first row. 1 ≤ k ≤ n. 0)t . Theorem 2. for an n × n matrix A. u • v = u1 v1 + u2 v2 . Then 1. then det(B) = − det(A). then u•v .9 1. The interested reader is advised to go through Appendix 15. 5. We Claim: Area (P QRS) = det u1 u2 v1 v2 = |u1 v2 − u2 v1 |. . . denoted Adj(A). and so on. Definition 2. to compute the volume of the parallelopiped P. Then the following properties of det(A) also hold for the volume of an n-dimensional parallelopiped formed with 0 ∈ Rn×1 as one vertex and the vectors u1 . | det(A)| = volume (P ). then the vectors u1 . v. 0. 1.6.10 (Adjoint of a Matrix) Let A be an n × n matrix. . volume (P ) = |w • (u × v)|. 0) as a vertex and the vectors u. . this parallelopiped lies on an (n − 1)-dimensional hyperplane. volume of a unit n-dimensional cube is 1. j)th minor and the (i. w3 Let P be the parallelopiped formed with (0. w as adjacent vertices. v3 ) and w = (w1 . Recall that the cross product of two vectors in R3 is. v2 . 0)t . . C12     4 2 −7 2 3    3 1 . for 1 ≤ i.6. where θ is the angle between the vector w and the normal vector to the parallelogram formed by u and v. LINEAR SYSTEM OF EQUATIONS 2. . 0)t . 
then the determinant of the new matrix is α · det(A).6. for any n × n matrix A. . Note here that if A = [ut . 2. | det(A)| = |0| = 0. Then Adj(A) = −3 −1 5  .6. So. . u3 ). un ] be an n × n matrix. for some α ∈ R. u2 = (0. 0. 1 0 −1 2 2 1+2 = (−1) A12 = −3. wt ]. (c) If u1 = ui for some i. . u2 . C13 = (−1)1+3 A13 = 1. its n-dimensional volume will be zero. . 0. w3 ) be three elements of R3 . v = (v1 . . u2 . for 1 ≤ i ≤ n. vt . u2 . In general. un will give rise to an (n − 1)dimensional parallelopiped.12 Let A be an n × n matrix. . then det(A) = 1. . u3 v1 − u1 v3 . un ∈ Rn×1 and let A = [u1 . . . and un = (0. . n aij Cij = j=1 j=1 aij (−1)i+j Aij = det(A). . . . 1)t . . 3. . u2 . j)th cofactor of A. Hence. . Also. w2 . . . . u2 . Thus. . Let u1 . as the original volume gets multiplied by α. So. 1  Example 2. u1 v2 − u2 v1 ). Then 1.11 Let A = 2 1 1+1 as C11 = (−1) A11 = 4. . This is also true for the volume. n Theorem 2. 2 ≤ i ≤ n. So. . (b) If we replace the vector ui by αui . . u × v = (u2 v3 − u3 v2 . 0. un as adjacent vertices: (a) If u1 = (1. The matrix B = [bij ] with bij = Cji .1 Adjoint of a Matrix Recall that for a square matrix A. Then observe that u × v is a vector perpendicular to the plane that contains the parallelogram formed by the vectors u and v. . . .44 CHAPTER 2. The actual proof is beyond the scope of this book. Let u = (u1 . Also. j ≤ n is called the Adjoint of A. we need to look at cos(θ). it can be proved that | det(A)| is indeed equal to the volume of the n-dimensional parallelopiped. then u1 det(A) = u2 u3 v1 v2 v3 w1 w2 = u • (v × w) = v • (w × u) = w • (u × v). the notations Aij and Cij = (−1)i+j Aij were respectively used to denote the (i. . 6. By construction again.6.5. j=1 aij C j aij (−1) +j A j = 0. Thus.12 (recall Theorem 2. j=1 n n A Adj(A) ij = k=1 aik Adj(A) 0 det(A) kj = k=1 aik Cjk = if i = j if i = j Thus. Since.2) 0 = det(B) = j=1 n (−1) (−1) j=1 +j b j det B( |j) = aij det A( |j) = (−1) j=1 n +j aij det B( |j) = Now. By the construction of B.6. • the other rows of B are the same as that of A.6. by Theorem 2.14 If A is a non-singular matrix. for i = . two rows (ith and th ) are equal. then n det(A) aij Cik = Adj(A) A = det(A)In and 0 i=1  1/2 −1/2 1/2   = −1/2 −1/2 1/2  . Hence. A(Adj(A)) = det(A)In . det(A) (2.2. By Part 5 of Lemma 2. det(A) = 0 ⇒ A−1 = Proof.5. DETERMINANT n n 45 = j=1 2.9 A has an inverse and A−1 = 1 Adj(A) = In . we have n n 1 Adj(A). A has a right det(A) 1 Adj(A). Let B = [bij ] be a square matrix with • the th row of B as the ith row of A. A−1 The next corollary is an easy consequence of Theorem 2. and 3.12. +j aij C j . det(A) = 0.6. Corollary 2.13 Let A = 0 1 1 . Then 1 2 1    −1 1 −1   Adj(A) =  1 1 −1 −1 −3 1  and det(A) = −2. A inverse.6. by Remark 2.7.6. By Theorem 2.3. det(B) = 0. Therefore. det(A)  1 −1 0   Example 2. det A( |j) = det B( |j) for 1 ≤ j ≤ n.8. if j = k . A(Adj(A)) = det(A)In .6.9). Thus. 1/2 3/2 −1/2 if j = k . A is non-singular.6.6. . 2 and 4 of Lemma 2. Hence. This implies that det(A) = 0.5.6. Then det(A) = 0 and therefore. Ek be elementary matrices such that A = E1 E2 · · · Ek . . we get det(AB) = = = = = = det(E1 E2 · · · Ek B) = det(E1 ) det(E2 · · · Ek B) det(E1 E2 ) det(E3 · · · Ek B) . A = P −1 C. If A is a non-singular Corollary 2. we get det(A) det(B) = det(AB) = det(I) = 1.46 CHAPTER 2. This means. Suppose A is non-singular. let E1 . . by using Parts 1. Hence.15 Let A and B be square matrices of order n. A−1 = has an inverse. 
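SymPy's adjugate method computes the transpose of the cofactor matrix, so both the identity A Adj(A) = det(A) I_n and the inverse formula can be checked on the matrix of Example 2.6.13. (Illustrative sketch; SymPy assumed.)

    from sympy import Matrix, eye

    A = Matrix([[1, -1, 0], [0, 1, 1], [1, 2, 1]])
    adj = A.adjugate()                      # transpose of the cofactor matrix
    print(adj)                              # Matrix([[-1, 1, -1], [1, 1, -1], [-1, -3, 1]])
    print(A * adj == A.det() * eye(3))      # True : A Adj(A) = det(A) I_n, here with det(A) = -2
    print(adj / A.det() == A.inv())         # True : A^{-1} = Adj(A)/det(A) since det(A) != 0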
we get the required result in case A is non-singular. 0 as P −1 is non-singular det(P ) · 0 = 0 = 0 · det(B) = det(A) det(B). det(At ) = 0. Proof. either A is an elementary matrix or is a product of elementary matrices (see Theorem 2. Thus. Then A is non-singular if and only if A has an inverse.6.7 repeatedly. Step 1. Then A is not invertible. Corollary 2. Step 2. . Proof. E2 .6. At also doesn’t have an inverse (for if At has an inverse then A−1 = (At )−1 ). . Taking determinant of both sides. Therefore. Therefore.8). there exists an invertible matrix P such that P A = C. Then. Hence. Thus again by Corollary 2. Theret fore. Suppose det(A) = 0. Proof. . where C = So.16. Then det(A) = det(At ). Suppose A has an inverse.16 Let A be a square matrix. .14 gives det(A) = det(At ). 1 Adj(A). Then there exists a matrix B such that AB = I = BA. A is invertible. Thus. det(E1 ) det(E2 ) det(E3 · · · Ek B) Thus. we again have det(A) = 0 = det(At ). Let det(A) = 0. we have det(A) = det(At ).17 Let A be a square matrix. A doesn’t have an inverse. and therefore det(AB) = = = det((P −1 C)B) = det(P −1 (CB)) = det P −1 det(P −1 ) · det C1 B 0 C1 B 0 C1 .6.6. A det(A) Theorem 2. If A is singular. then det(A) = 0. the proof of the theorem is complete. So. Then det(AB) = det(A) det(B). Thus.16. det(E1 E2 · · · Ek ) det(B) det(A) det(B). LINEAR SYSTEM OF EQUATIONS Theorem 2. by Corollary 2. 2.6. DETERMINANT 47 2.6.2 Cramer’s Rule Recall the following: • The linear system Ax = b has a unique solution for every b if and only if A−1 exists. • A has an inverse if and only if det(A) = 0. Thus, Ax = b has a unique solution for every b if and only if det(A) = 0. The following theorem gives a direct method of finding the solution of the linear system Ax = b when det(A) = 0. Theorem 2.6.18 (Cramer’s Rule) Let Ax = b be a linear system with n equations in n unknowns. If det(A) = 0, then the unique solution to this system is xj = det(Aj ) , det(A) for j = 1, 2, . . . , n, where Aj is the matrix obtained from A by replacing the jth column of A by the column vector b. Proof. Since det(A) = 0, A−1 = x= 1 Adj(A). Thus, the linear system Ax = b has the solution det(A) 1 Adj(A)b. Hence, xj , the jth coordinate of x is given by det(A) xj = det(Aj ) b1 C1j + b2 C2j + · · · + bn Cnj = . det(A) det(A) The theorem implies that b1 b2 1 x1 = . det(A) . . bn and in general a11 a12 1 xj = . . det(A) . a1n for j = 2, 3, . . . , n.  1  Example 2.6.19 Suppose that A = 2 1 that Ax = b.    1 2 3    3 1 and b = 1 . Use Cramer’s rule to find a vector x such 1 2 2 3 1 = −1, 2 ··· ··· .. . ··· a1j−1 a2j−1 . . . anj−1 b1 b2 . . . bn a1j+1 a2j+1 . . . anj+1 ··· ··· .. . ··· a1n a2n . . . ann a12 a22 . . . an2 ··· ··· .. . ··· a1n a2n . , . . ann 1 2 Solution: Check that det(A) = 1. Therefore x1 = 1 3 1 2 1 x2 = 2 1 1 2 1 3 1 1 = 1, and x3 = 2 3 1 2 1 2 1 1 = 0. That is, xt = (−1, 1, 0). 1 48 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS 2.7 Miscellaneous Exercises 1. Let A be an orthogonal matrix. Show that det A = ±1. Exercise 2.7.1 2. If A and B are two n × n non-singular matrices, are the matrices A + B and A − B non-singular? Justify your answer. 3. For an n × n matrix A, prove that the following conditions are equivalent: (a) A is singular (A−1 doesn’t exist). (b) rank(A) = n. (c) det(A) = 0. (d) A is not row-equivalent to In , the identity matrix of order n. (e) Ax = 0 has a non-trivial solution for x. (f) Ax = b doesn’t have a unique solution, i.e., it has no solutions or it has infinitely many solutions. 
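A direct implementation of Cramer's rule replaces one column of A by b at a time. The sketch below (NumPy assumed; the function name cramer is ad hoc) reproduces the solution (-1, 1, 0) of the worked example.

    import numpy as np

    def cramer(A, b):
        A, b = np.asarray(A, float), np.asarray(b, float)
        dA = np.linalg.det(A)                 # must be non-zero for the rule to apply
        x = np.empty(len(b))
        for j in range(len(b)):
            Aj = A.copy()
            Aj[:, j] = b                      # replace the j-th column of A by b
            x[j] = np.linalg.det(Aj) / dA
        return x

    A = [[1, 2, 3], [2, 3, 1], [1, 2, 2]]
    print(cramer(A, [1, 1, 1]).round(10))     # [-1.  1.  0.]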
  2 0 6 0 4   5 3 2 2 7   4. Let A = 2 5 7 5 5 . We know that the numbers 20604, 53227, 25755, 20927 and 78421 are     2 0 9 2 7 7 8 4 2 1 all divisible by 17. Does this imply 17 divides det(A)? j−1 5. Let A = [aij ]n×n where aij = xi . Show that det(A) = 1≤i<j≤n (xj − xi ). [The matrix A is usually called the Van-dermonde matrix.] 6. Let A = [aij ] with aij = max{i, j} be an n × n matrix. Compute det A. 7. Let A = [aij ] with aij = 1/(i + j) be an n × n matrix. Show that A is invertible. 8. Solve the following system of equations by Cramer’s rule. i) x + y + z − w = 1, x + y − z + w = 2, 2x + y + z − w = 7, x + y + z + w = 3. ii) x − y + z − w = 1, x + y − z + w = 2, 2x + y − z − w = 7, x − y − z + w = 3. 9. Suppose A = [aij ] and B = [bij ] are two n × n matrices such that bij = pi−j aij for 1 ≤ i, j ≤ n for some non-zero real number p. Then compute det(B) in terms of det(A). 10. The position of an element aij of a determinant is called even or odd according as i + j is even or odd. Show that (a) If all the entries in odd positions are multiplied with −1 then the value of the determinant doesn’t change. (b) If all entries in even positions are multiplied with −1 then the determinant i. does not change if the matrix is of even order. ii. is multiplied by −1 if the matrix is of odd order. 11. Let A be an n × n Hermitian matrix, that is, A∗ = A. Show that det A is a real number. [A is a matrix with complex entries and A∗ = At .] 12. Let A be an n × n matrix. Then show that A is invertible ⇐⇒ Adj(A) is invertible. 2.7. MISCELLANEOUS EXERCISES 13. Let A and B be invertible matrices. Prove that Adj(AB) = Adj(B)Adj(A). 14. Let P = 49 A B be a rectangular matrix with A a square matrix of order n and |A| = 0. Then show C D that rank (P ) = n if and only if D = CA−1 B. 50 CHAPTER 2. LINEAR SYSTEM OF EQUATIONS Chapter 3 Finite Dimensional Vector Spaces Consider the problem of finding the set of points of intersection of the two planes 2x + 3y + z + u = 0 and 3x + y + 2z + u = 0. Let V be the set of points of intersection of the two planes. Then V has the following properties: 1. The point (0, 0, 0, 0) is an element of V. 2. For the points (−1, 0, 1, 1) and (−5, 1, 7, 0) which belong to V ; the point (−6, 1, 8, 1) = (−1, 0, 1, 1)+ (−5, 1, 7, 0) ∈ V. 3. Let α ∈ R. Then the point α(−1, 0, 1, 1) = (−α, 0, α, α) also belongs to V. Similarly, for an m × n real matrix A, consider the set V, of solutions of the homogeneous linear system Ax = 0. This set satisfies the following properties: 1. If Ax = 0 and Ay = 0, then x, y ∈ V. Then x + y ∈ V as A(x + y) = Ax + Ay = 0 + 0 = 0. Also, x + y = y + x. 2. It is clear that if x, y, z ∈ V then (x + y) + z = x + (y + z). 3. The vector 0 ∈ V as A0 = 0. 4. If Ax = 0 then A(−x) = −Ax = 0. Hence, −x ∈ V. 5. Let α ∈ R and x ∈ V. Then αx ∈ V as A(αx) = αAx = 0. Thus we are lead to the following. 3.1 3.1.1 Vector Spaces Definition Definition 3.1.1 (Vector Space) A vector space over F, denoted V (F), is a non-empty set, satisfying the following axioms: 1. Vector Addition: To every pair u, v ∈ V there corresponds a unique element u ⊕ v in V such that (a) u ⊕ v = v ⊕ u (Commutative law). (b) (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w) (Associative law). (c) There is a unique element 0 in V (the zero vector) such that u ⊕ 0 = u, for every u ∈ V (called the additive identity). 51 where 1 ∈ R. Intuitively. 2.3 Let V be a vector space over F.1 is the following useful result. Proof of Part 1. (−1) Proof. 3. using the distributive law. In the same way. 
the first part implies α 0 u = (0 + 0) 0 = 0. the vector space is called a real vector space. Hence. Proof of Part 2. 2. Some interesting consequences of Definition 3. these results seem to be obvious but for better understanding of the axioms it is desirable to go through the proof.1. using the first part. β ∈ F and u ∈ V. Hence. ⊕ is called vector addition. Theorem 3. let us assume α = 0 (note that 1 exists and α is a real or complex number. u in u = u for every u ∈ V. Now suppose α u = 0. u ⊕ v = u is equivalent to −u ⊕ (u ⊕ v) = −u ⊕ u ⇐⇒ (−u ⊕ u) ⊕ v = 0 ⇐⇒ 0 ⊕ v = 0 ⇐⇒ v = 0. For u ∈ V. v ∈ V. the vector space is called a complex vector space. Scalar Multiplication: For each u ∈ V and α ∈ F. β ∈ F and u. Distributive Laws: relating vector addition with scalar multiplication For any α. We may sometimes write V for a vector space if F is understood from the context. α u = 0 if and only if either u is the zero vector or α = 0. u ⊕ v = u implies v = 0. Therefore. u = (0 u) ⊕ (0 u). If F = R.1. for any α ∈ F. one has 0 u = 0 for any u ∈ V.2 The elements of F are called scalars. If α = 0 then the proof is over. Remark 3. we have α 0=α (0 ⊕ 0) = (α 0) ⊕ (α 0). As 0 = 0 ⊕ 0. there corresponds a unique element α V such that (a) α · (β (b) 1 u) = (αβ) u for every α. hence α 0= 1 α 0= 1 α (α u) = ( 1 α) α u=1 u=u . FINITE DIMENSIONAL VECTOR SPACES (d) For every u ∈ V there is a unique element −u ∈ V such that u ⊕ (−u) = 0 (called the additive inverse). and that of V are called vectors. u = −u for every u ∈ V. by Axiom 1d there exists −u ∈ V such that −u ⊕ u = 0. (b) (α + β) Note: the number 0 is the element of F whereas 0 is the zero vector.52 CHAPTER 3. the following distributive laws hold: (a) α (u ⊕ v) = (α u = (α u) ⊕ (α u) ⊕ (β v). 3.1. u). Thus. If F = C. Then 1. VECTOR SPACES as 1 u = u for every vector u ∈ V.3. αx2 − 3α + 3) for (x1 . x2 + iy2 ∈ C and α ∈ R. . x2 + y2 ) and α Then R2 is a real vector space. . x2 ) = (αx1 . The set R of real numbers. (x1 . x2 ) ⊕ (y1 . bn ) in V and α ∈ R. . .1. x2 ) = (αx1 + α − 1. y2 ) = (x1 + y1 . .4 1. Let V = R+ (the set of positive real numbers). x2 + y2 − 3). be the set of n-tuples of real numbers. y2 ) ∈ R2 and α ∈ R. Then it can be easily verified that the vector (−1. . define. 53 3. an ). αan ) (x1 . define. y ∈ R} of complex numbers. α (x1 . called the real vector space of n-tuples. define. We now define a new vector addition and scalar multiplication as v1 ⊕ v2 = v1 · v2 and α v = vα for all v1 . (b) For x1 + iy1 . . y2 ) = (x1 + y1 + 1. Recall 6. This is not a vector space under usual operations of addition and scalar multiplication (why?). This vector space is denoted by Rn . v2 . (x1 + iy1 ) ⊕ (x2 + iy2 ) α Then C is a real vector space. Let V = R2 . 4. αx2 ). (x1 + iy1 ) ⊕ (x2 + iy2 ) = (x1 + x2 ) + i(y1 + y2 ) and (α + iβ) (x1 + iy1 ) = (αx1 − βy1 ) + i(αy1 + βx1 ). (x1 + iy1 ) = (x1 + x2 ) + i(y1 + y2 ) and = (αx1 ) + i(αy1 ). ⊕ ≡ + and ≡ ·) forms a vector space over R. . For u = (a1 . x2 ) : x1 .1. with the usual addition and multiplication (i.. . Then C forms a complex vector space. . 5. . 3) is the additive identity and V is indeed a real vector space. . . an + bn ) and α u = (αa1 . 2. Then V is a real vector space with addition and scalar multiplication defined as above. x2 ).e. Thus we have shown that if α = 0 and α u = 0 then u = 0. x2 ) ⊕ (y1 . 3. x2 ∈ R}. Consider the set R2 = {(x1 . v = (b1 . Let Rn = {(a1 . v ∈ R+ and α ∈ R. . y1 . Proof of Part 3.1. Define (x1 . . . (called component wise or coordinate wise operations). x2 . 
1 ≤ i ≤ n}. . x2 + iy2 ∈ C and α + iβ ∈ C. We have 0 = 0u = (1 + (−1))u = u + (−1)u and hence (−1)u = −u. . we define u ⊕ v = (a1 + b1 . (a) For x1 + iy1 . √ −1 is denoted i. For x1 . y2 ∈ R and α ∈ R. .2 Examples Example 3. . Then R+ is a real vector space with 1 as the additive identity. a2 . . Consider the set C = {x + iy : x. (y1 . an ) : ai ∈ R. The operations defined above are called point wise addition and scalar multiplication. zn ) ⊕ (w1 . 10. . wn ) α (z1 . . . Then P(R) forms a real vector space. Then C([−1. It can be verified that Pn (R) is a real vector space with the addition and scalar multiplication defined by: f (x) ⊕ g(x) α f (x) = = (a0 + b0 ) + (a1 + b1 )x + · · · + (an + bn )xn . define. . Pn (R). Then f (x) = a0 + a1 x + a2 x2 + · · · + an xn and g(x) = b0 + b1 x + b2 x2 + · · · + bn xn for some ai . then Cn is a real vector space having n-tuple of complex numbers as its vectors. . Consider the set P(R). . zn ) = (z1 + w1 . g(x) ∈ P(R). Algebraically. . 1]. 11. α A=α [aij ] = [αaij ]. . Let f (x). . . . . . 1]) and α ∈ R. Observe that a polynomial of the form a0 + a1 x + · · · + am xm can be written as a0 + a1 x + · · · + am xm + 0 · xm+1 + · · · + 0 · xp for any p > m. of all polynomials of degree ≤ n with coefficients from R in the indeterminate x. . . FINITE DIMENSIONAL VECTOR SPACES 7. αzn ). we can assume f (x) = a0 + a1 x + a2 x2 + · · · + ap xp and g(x) = b0 + b1 x + b2 x2 + · · · + bp xp for some ai . Whereas. . . We now define the vector addition and scalar multiplication as f (x) ⊕ g(x) = α f (x) = (a0 + b0 ) + (a1 + b1 )x + · · · + (ap + bp )xp . then Cn is a complex vector space having n-tuple of complex numbers as its vectors. Remark 3. Hence. . Fix a positive integer n and let Mn (R) denote the set of all n × n matrices with real entries. 8. . Let f (x). . For f. . For (z1 . (z1 . bi ∈ R. . Consider the set. define (f ⊕ g)(x) (α f )(x) = f (x) + g(x). the scalars are Complex numbers and hence i(1. zn ) : zi ∈ C for 1 ≤ i ≤ n}. 0 ≤ i ≤ p. (a) If the set F is the set C of complex numbers.5 In Example 7a.54 CHAPTER 3. g(x) ∈ Pn (R). . . 0). Consider the set Cn = {(z1 . 9. 1]. zn + wn ) and = (αz1 . and αa0 + αa1 x + · · · + αan xn for α ∈ R. . Then Mn (R) is a real vector space with vector addition and scalar multiplication defined by A ⊕ B = [aij ] ⊕ [bij ] = [aij + bij ]. . 0) = (i. . (b) If the set F is the set R of real numbers. . wn ) ∈ Cn and α ∈ F. 0). of all polynomials with real coefficients. the scalars are Real Numbers and hence we cannot write i(1. for some large positive integer p. . . in Example 7b. 0 ≤ i ≤ n. z2 . Pn (R) = {a0 + a1 x + a2 x2 + · · · + an xn : ai ∈ R. 0 ≤ i ≤ n}. .1. . bi ∈ R. and = αf (x). for all x ∈ [−1. and αa0 + αa1 x + · · · + αap xp for α ∈ R. . g ∈ C([−1. (w1 . 1]) forms a real vector space. Fix a positive integer n. 1]) be the set of all real valued continuous functions on the interval [−1. Let C([−1. 0) = (i. zn ). .9 1. . z) ∈ R3 : z = x}. VECTOR SPACES 55 12. Then (a) S = {0}. From now on.1. . Let V (F) be a vector space. z) ∈ R3 : x + y + z = 3}. Let V and W be real vector spaces with binary operations (+. x2 . 2. . Then S is a subspace of R3 . V × W also forms a real vector space.1. Then S is not a subspace of R3 . (b) Let V (F) be a vector space. Then W is a subspace of the real vector space.1. and y1 ). •) and (⊕. ). (b) {(x1 . Let S = {(x. α • x1 and α y1 come from scalar multiplication in V and W. . On the right hand side. (S is again a plane in R3 but it doesn’t pass through the origin.1. 
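The non-standard operations on R2 described above can be tested numerically. The sketch below uses the addition (x1, x2) + (y1, y2) = (x1 + y1 + 1, x2 + y2 - 3) from the example, together with the scalar multiplication a . (x1, x2) = (a x1 + a - 1, a x2 - 3a + 3); the latter formula is the companion rule assumed for this sketch. With these operations the vector (-1, 3) indeed acts as the additive identity and the distributive law holds.

    def vadd(u, v):                   # (x1,x2) + (y1,y2) = (x1+y1+1, x2+y2-3)
        return (u[0] + v[0] + 1, u[1] + v[1] - 3)

    def smul(a, u):                   # assumed companion rule: a.(x1,x2) = (a*x1+a-1, a*x2-3a+3)
        return (a * u[0] + a - 1, a * u[1] - 3 * a + 3)

    zero = (-1, 3)                    # the claimed additive identity
    u, v, a = (2.0, 5.0), (-4.0, 1.0), 3.0
    print(vadd(u, zero) == u)                                   # True
    print(smul(a, vadd(u, v)) == vadd(smul(a, u), smul(a, v)))  # True : distributive law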
The vector space Pn (R) is a subspace of the vector space P(R). xn ) : x1 + 2x2 = 4x3 }. .3 Subspaces Definition 3. 1]) : f (1/2) = 0}. y2 ) = α ◦ (x1 .3. v ∈ S. Consider the following operations on the set V × W : for (x1 . With the above definitions. Which of the following are correct statements? (a) Let S = {(x. define (x1 . Then S is a subspace of R3 . 5. Then the set {αx : α ∈ F} forms a vector subspace of V. . Let S = {(x. 2. y1 ) = (α • x1 . . x2 . Remark 3. (d) {(x1 . we will use ‘u + v’ in place of ‘u ⊕ v’ and ‘α · u or αu’ in place of ‘α u’. . . (x2 . .) 4. y1 ⊕ y2 ). .8 1. . α (x1 + x2 . z) ∈ R3 : z = x2 }. y. y. the set consisting of the zero vector 0. 3. 3 (c) {(x1 . z) ∈ R3 : x + y − z = 0}. . 1]). The readers are advised to justify the statements made in the above examples. C([−1. respectively. x2 .1. S(F) is said to be a subspace of V (F) if αu + βv ∈ S whenever α. (S is a plane in R3 passing through the origin. Which of the following are subspaces of Rn (R)? (a) {(x1 . y. y1 ). Then S is a subspace of R3 . x2 . Let x ∈ V.) 3. Let S = {(x. (b) S = V are vector subspaces of V. Similarly. y2 ) ∈ V × W and α ∈ R. .1. xn ) : x1 = x2 }. These are called trivial subspaces. respectively. y1 ) ⊕ (x2 . while y1 ⊕ y2 is the addition in W. Example 3. . . xn ) : x1 is rational }.7 Any subspace is a vector space in its own right with respect to the vector addition and scalar multiplication that is defined for V (F). we write x1 + x2 to mean the addition in V.6 (Vector Subspace) Let S be a non-empty subset of V. β ∈ F and u. where the vector addition and scalar multiplication are the same as that of V (F). y. (c) Let W = {f ∈ C([−1. Exercise 3. xn ) : x1 ≥ 0}. . (−1. 4)+ 0(3. 1. Then. z) ∈ R3 : 2x − y = z}. 1. 3) : α. α + β. 0). 5.10 (Linear Span) Let V (F) be a vector space and let S = {u1 . . .1. . 5. x2 . 5) does not have a unique expression as linear combination of vectors (1. . in this case. . 0).1. 2. un } be a non-empty subset of V. . 2)? Solution: We want to find α1 . 1. β ∈ R} {(α + 2β. zn ) :| z1 |=| z2 |}. 3). Proof. 3)+ (−1)(−1. The linear span of S = {(1. u + v = (α1 + β)w1 + · · · + (αn + βn )wn ∈ L(S). 5) is not a linear combination of the vectors (1. Which of the following are subspaces of i)Cn (R) ii)Cn (C)? (a) {(z1 . 1. βi ∈ F such that u = α1 w1 + α2 w2 + · · · + αn wn and v = β1 w1 + β2 w2 + · · · + βn wn . 0.1) Check that 3(1. . 5. . v ∈ L(S). for 1 ≤ i ≤ n there exist vectors wi ∈ S. Thus. . The linear span of S is the set defined by L(S) = {α1 u1 + α2 u2 + · · · + αn un : αi ∈ F. and if z = 2x − y. L(S) is a vector subspace of V (F). . 1. y. α2 . 5) a linear combination of (1. α + 3β) : α. Also. Let u. 5. 2. zn ) : z1 is real }. 2. (f) {(x1 . Hence. 3)} over R is L(S) = = = {α(1. . x2 . (b) {(z1 . 1 ≤ i ≤ n} If S is an empty set we define L(S) = {0}. Is (4.56 CHAPTER 3. Verify that (4. xn ) : |x1 | ≤ 1}. (1. 2. . (3. 0). β ∈ R} {(x. 1) − 1(1. u2 . S ⊂ L(S) and hence L(S) is non-empty subset of V. 1. . Then L(S) is a subspace of V (F). 0) + 0(1.1. 2) = (4. 2. the linear combination in terms of the vectors (1. 0)? 4.1. α3 ∈ R such that α1 (1. . Example 3. . 1.1. 5. 1. 0. 1. 3. 1. 5. (1. 1. and scalars αi . 1.4 Linear Combinations Definition 3. . z2 . take α = 2y − x and β = x − y. 5) = 5(1. 4) and (3.11 1. the vector (4. z2 . zn ) : z1 + z2 = z3 }. 3. . 5) is a linear combination of (1. (2. 3) + α2 (−1. 3). 0).12 (Linear Span is a subspace) Let V (F) be a vector space and let S be a non-empty subset of V. 3. 3. 4) + α3 (3. xn ) : either x1 or x2 or both is0}. 
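A quick numerical check cannot prove that a set is a subspace, but it can expose a failure of closure. The sketch below (an illustrative addition) tests closure under addition and scalar multiplication on a few sample points for S = {(x, y, z) : z = x}, which is a subspace, and for {(x, y, z) : x + y + z = 3}, which is not.

    def in_S1(v): return v[2] == v[0]               # S1 = {(x, y, z) : z = x}
    def in_S2(v): return sum(v) == 3                # S2 = {(x, y, z) : x + y + z = 3}

    def closed_under_operations(member, points, alpha=2):
        # a subspace must contain u + v and alpha*u for all of its elements u, v
        for u in points:
            for v in points:
                s = tuple(a + b for a, b in zip(u, v))
                if not member(s) or not member(tuple(alpha * a for a in u)):
                    return False
        return True

    pts1 = [(1, 4, 1), (-2, 0, -2), (3, -1, 3)]     # sample points with z = x
    pts2 = [(1, 1, 1), (3, 0, 0), (0, 2, 1)]        # sample points with x + y + z = 3
    print(closed_under_operations(in_S1, pts1))     # True, consistent with S1 being a subspace
    print(closed_under_operations(in_S2, pts2))     # False: e.g. (1,1,1) + (3,0,0) leaves S2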
Note that (4. 1) + β(2. 2). . (c) {(z1 . 4) and (3. 1) and (1. Lemma 3. as 2(α + 2β) − (α + β) = α + 3β. 1) as (4. 1) is unique. FINITE DIMENSIONAL VECTOR SPACES (e) {(x1 . For each vector. . . . and (1. 1. 3. and (1. . By definition. 2) = (4. 3. 1). 5). z2 . (−1. 5. 0. 1. 0). 5). . . 3. 1. 2. 1. . a2 . am ) = {α1 a1 + · · · + αi−1 ai−1 + αi (ai + caj ) + · · · m +αm am : α ∈ R. . ColumnSpace(A) = Range(A). NullSpace(A). ai + caj .1. the non-zero row vectors of a matrix in row-reduced form. . ai . Range(A). Definition 3. We prove the result for the elementary matrix Eij (c). . b2 . ai−1 . . . Proof. Lemma 3. forms a basis for the row-space. . denoted N (A) as {xt ∈ Rn : Ax = 0}. 3. . ColumnSpace(A) = L(b1 . bn ∈ Rm . . b2 . at ∈ Rn 1 2 m and columns b1 . Let at .u ∈ L(S) and therefore. Theorem 3. . 2. To show L(S) is the smallest subspace of V containing S. Then Row Space(A) = Row Space(B). 4. L(S) ⊆ W and hence the result follows. . . consider any subspace W of V containing S. Hence dim( Row Space(A)) = row rank of (A). .16 Let A be a real m × n matrix. . denoted Im (A) = {y : Ax = y for some xt ∈ Rn }.1. 1 ≤ ≤ m β a : β ∈ R. am ). Hence. RowSpace(A) = L(a1 . Then by Proposition 3. . . 1 ≤ ≤ m = =1 = L(a1 .13 Let V (F) be a vector space and W ⊂ V be a subspace. at 1 2 m be the rows of the matrix A.1. . . bn ). . . .14 Let S be a non-empty subset of a vector space V. . . If S ⊂ W. . S ⊆ L(S). Then 1.3. at . . 1 ≤ ≤ m} = =1 m α a + αi · caj : α ∈ R. .13. where c = 0 and i < j. at . .1. For every u ∈ S. N (A) is a subspace of Rn . 2. . Proof. then L(S) ⊂ W is a subspace of W as W is a vector space in its own right. . am ) = Row Space(A) Theorem 3. . . .1. Then using the rows at . ai−1 .17 Let A be an m × n matrix with real entries. . Then L(S) is the smallest subspace of V containing S. Note that the “column space” of a matrix A consists of all b such that Ax = b has a solution. Suppose B = EA for some elementary matrix E.1. Then B = Eij (c)A gives us Row Space(B) = L(a1 .15 Let A be an m × n matrix with real entries. . we define 1. . . u = 1. . VECTOR SPACES 57 Remark 3. 1.1. b2 .2) = 0}. 2. . . x3 = (1. we saw that a vector space has infinite number of vectors. x2 . 8. Give examples to show that the column space of two row-equivalent matrices need not be same. . dt . Show that P ∩ Q is a subspace of V.4. a repeated application of Lemma 3. (x. . Exercise 3. Let C([−1. Define P + Q = {u + v : u ∈ P. 5. Find all the vector subspaces of R2 . . Hence. Recall that Mn (R) is the real vector space of all n × n real matrices. W2 subspaces of C([−1. y1 ) = (x + x1 .18 1. 0). Let P and Q be two subspaces of a vector space V. Let P and Q be two subspaces of a vector space V. 2. Part 1) can be easily proved. then 1 2 m L(a1 . am ) = L(b1 . 0). Suppose we are able to choose certain vectors whose linear span is the whole space. It means that any vector space contains infinite number of other vector subspaces. x2 = (1. Hence the required result follows. . E2 . v ∈ Q}. y) ⊕ (x1 . . 1] (cf. For part 2). 0). 0). and 1 {f ∈ C([−1. the following questions arise: 1.58 CHAPTER 3. y) = (αx. at . 1])? 7. Let S = {x1 . Can we find the minimum number of such vectors? We try to answer these questions in the subsequent sections. 1. Let V = R. x4 } where x1 = (1. 1]) : f (0. Define (x. Show that P + Q is a subspace of V. let D be the row-reduced form of A with non-zero rows dt . . Is it possible to find/choose vectors so that the linear span of the chosen vectors is the whole vector space itself? 3. Example 3. 
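Whether a given vector lies in L(S) is a consistency question and can be settled by a rank comparison: b belongs to L(S) exactly when appending b as an extra column does not increase the rank. Here S = {(1, 1, 1), (2, 1, 3)} is the set whose span was computed above; the two test vectors are illustrative choices. (NumPy assumed.)

    import numpy as np

    def in_span(vectors, b):
        # b is in L(S) iff adjoining b as a column does not raise the rank
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(np.column_stack(vectors + [b])) == np.linalg.matrix_rank(A)

    S = [np.array([1., 1., 1.]), np.array([2., 1., 3.])]
    print(in_span(S, np.array([3., 2., 4.])))   # True : 2*3 - 2 = 4, so 2x - y = z holds
    print(in_span(S, np.array([4., 5., 5.])))   # False: 2*4 - 5 = 3 != 5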
When is P ∪ Q a subspace of V ? 4. 0. 0). if the rows of the matrix A are at . Show that any two row-equivalent matrices have the same row space. . y) : x. . 6. . Define x ⊕ y = x − y and α In this section. 3. Determine all xi such that L(S) = L(S \ {xi }). Then. . . . Therefore. 1]) be the set of all continuous functions on the interval [−1. Also show that L(P ∪ Q) = P + Q. Let V = {(x. FINITE DIMENSIONAL VECTOR SPACES Proof. Let W1 W2 = = {f ∈ C([−1. one can start with any finite collection of vectors and obtain their span. That is. 1.1. br ). .11).16 implies Row Space(A) = Row Space(B). . What are the conditions under which. . dt . x3 . 1. Ek . y ∈ R} over R. . Then B = Ek Ek−1 · · · E2 E1 A for some elementary matrices 1 2 r E1 . Which vector space axioms are not satisfied here? (b) Symn = {A ∈ Mn (R) : A = At } 9. 0) and α Show that V is not a vector space over R. 0. Let A be an m × n matrix. 1]) : f ( )exists }. . x4 = (1. the linear span of two distinct sets the same? 2. at . Prove that the following subsets are subspaces of Mn (R). (a) sln = {A ∈ Mn (R) : trace(A) = 0} (c) Skewn = {A ∈ Mn (R) : A + At = 0} x = −αx. 0. a2 . . 4 Are W1 . Also show that P ∪ Q need not be a subspace of V. 0. . vp+1 } is linearly dependent. β. Then check that 1(1. one needs to consider the equation α1 u1 + α2 u2 + · · · + αm um = 0.3. Therefore. Let S = {0 = u1 .2. . . Hence. . Then equation (3. . Let S = {(1. . 1). γu1 + ou2 + · · · + 0un = 0. (3.1) In case α1 = α2 = · · · = αm = 0 is the only solution of (3. vp . then vp+1 is a linear combination of v1 . Then check that in this case we necessarily have α = β = γ = 0 which shows that the set S = {(1. the set S is linearly dependent. 2. vp } be a linearly independent subset of a vector space V. 1)} is a linearly independent subset of R3 . αp+1 . Proof. . then every subset of S is also linearly independent. LINEAR INDEPENDENCE 59 3. Let S = {(1. 0). vp .2 1. If there exist some non-zero αi ’s 1 ≤ i ≤ m. 1. 1. un } be a set consisting of the zero vector. such that α1 u1 + α2 u2 + · · · + αm um = 0.4 Let {v1 . . v2 . 4)+(−1)(3. . 0).1 (Linear Independence and Dependence) Let S = {u1 . 1)+1(2. . . Let if possible αp+1 = 0. (1. .2) Claim: αp+1 = 0. . 0. . .2. . 2. Otherwise. . v2 . 1. . so the set S is a linearly dependent subset of R3 . for the system α1 u1 + α2 u2 + · · · + αm um = 0.2. if S = {u1 . the set S becomes a linearly independent subset of V. 0.2. 0. 2. . . um } is a non-empty subset of a vector space V.2. um } be any non-empty subset of V. . 1. 1. Then the zero-vector cannot belong to a linearly independent set. then to check whether the set S is linearly dependent or independent. .2 Linear Independence Definition 3. Otherwise. u2 . . α2 . 0)+ γ(1. (3. vp . The reader is required to supply the proof of other parts. 0. Since α1 = 1. vp+1 } is linearly dependent.1). . u2 . 2.2. We give the proof of the first part. then the set S is called a linearly dependent set.2) gives α1 v1 + α2 v2 + · · · + αp vp = 0 with not all αi . If S is a linearly independent subset of V. .2. . the set S becomes a linearly dependent subset of V.2. . 1 ≤ i ≤ p zero. 1. Since the set {v1 . (1. u2 . 1). . Thus. v2 . Then for any γ = o. 0).1). such that the set {v1 . 1)}. . . we have a non-zero solution α1 = γ and o = α2 = · · · = αn . vp } is linearly dependent which is contradictory to our hypothesis. Suppose there exists a vector vp+1 ∈ V. . . . Theorem 3. α2 = 1 and α3 = −1 is a solution of (3. the set S is called linearly independent. Example 3. In other words. 
Proof.3 Let V be a vector space. 1). 1)+β(1. 3. 3. . (1. If S is a linearly dependent subset of V then every set containing S is also linearly dependent. v2 . 5) = (0. there exist scalars α1 . by the definition of linear independence. αp+1 = 0 and we get vp+1 = − 1 αp+1 (α1 v1 + · · · + αp vp ). 1. Suppose there exists α. .2. 5)}. the set {v1 . γ ∈ R such that α(1. (3. 1) = (0. . . v2 . 4). . 1.2. not all zero such that α1 v1 + α2 v2 + · · · + αp vp + αp+1 vp+1 = 0. 0. Hence. (1. 3. 1. Proposition 3. 0). (2. v} is also linearly independent subset of V. (1. Show that if v ∈ K and v ∈ M then u ∈ H.2. . .6 Let {v1 . 1 ≤ i ≤ p + 1 and hence − ααi ∈ F for 1 ≤ i ≤ p. The same is true for column vectors. w} is linearly dependent but any set of 2 vectors from u.3. Show that S = {(1. v. . f2 . 1.5 Let {u1 . We don’t give their proofs as they are easy consequence of the above theorem. w is linearly independent. Corollary 3. 0). f1 + f2 . . v2 . Show that any set of k vectors in R3 is linearly dependent if k ≥ 4. uk ) = L(u1 . u2 } is linear independent subset of R2 . .2. 2. ( i. In general if {f1 . Suppose there exists a vector v ∈ V. . 1). 0). Then the set {v1 . 1. uk−1 ). 1)} is a linearly independent set in R3 . . Corollary 3. 1. . 1. Is the set of vectors (1. FINITE DIMENSIONAL VECTOR SPACES Note that αi ∈ F for every i.60 CHAPTER 3. 2. . p+1 We now state two important corollaries of the above theorem. Let u1 = (1. Exercise 3. 1)} ⊂ R4 . . Determine whether or not the vector (1. Let S = {(1. −1. . v and w such that {u. Under what conditions on α are the vectors (1 + α. . such that v ∈ L(v1 .4 and Corollary 3. 2). . 1. 1. 3. In R3 . 0).2. 7. and (b) L(B) = V. . (8. A non-empty subset B of a vector space V is called a Definition 3. (1. 5. v. u3 } is linearly independent subset of R2 ? 2. .2. 6. give an example of 3 vectors u. Further. . . 1.e. 0. Then there exists a smallest k.7 1. 1 + α) in C2 (R) linearly independent? 11. . v2 . v2 .2. un } be a linearly dependent subset of a vector space V. Find all choices for the vector u2 such that the set {u1 .1 (Basis of a Vector Space) basis of V if (a) B is a linearly independent set. v ∈ V and M be a subspace of V. vp } be a linearly independent subset of a vector space V. i. Consider the vector space R2 . . (1. 1) ∈ L(S)? 4. vp . . u2 . 3. u2 . 0) linearly independent subset of C2 (R)? 10. u2 . Show that S = {(1. (−2. If none of the elements appearing along the principal diagonal of a lower triangular matrix is zero. −1. . 0). Let u. vp ). . . every vector in V can be expressed as a linear combination of the elements of B. . 9. show that the row vectors are linearly independent in Rn . f3 } is a linearly independent set then {f1 .. Hence the result follows. Does there exist choices for vectors u2 and u3 such that the set {u1 . 2 ≤ k ≤ n such that L(u1 . . The next corollary follows immediately from Theorem 3. 3). (1. What is the maximum number of linearly independent vectors in R3 ? 8.3 Bases 1. 10)} is linearly dependent in R3 . u2 . 1). 1 − α) and (α − 1.5. let K be the subspace spanned by M and u and H be the subspace spanned by M and v. 1. f1 + f2 + f3 } is also a linearly independent set. 6. y. Definition 3. . 0). 0. Example 3. 1. 1)} forms an standard basis of R3 . then B = {(1. A vector in B is called a basis vector. e2 . A basis of V can be obtained by the following method: The condition x + y − z = 0 is equivalent to z = x + y.2 Let {v1 . 61 Remark 3. 0. 0) ∈ Rn . . 3). y. V is a complex vector space. . v2 . Otherwise. 
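Both parts of Example 3.2.2 reduce to a rank computation: a set of vectors is linearly independent exactly when the homogeneous system formed with them as columns has only the trivial solution. A sketch, assuming sympy; the helper name is_independent is introduced here and is not from the notes.

```python
from sympy import Matrix

def is_independent(vectors):
    """True when the only solution of a1*u1 + ... + am*um = 0 is a1 = ... = am = 0."""
    A = Matrix.hstack(*[Matrix(list(v)) for v in vectors])
    return A.rank() == len(vectors)

# Example 3.2.2: {(1,2,1), (2,1,4), (3,3,5)} is dependent, since
# 1*(1,2,1) + 1*(2,1,4) + (-1)*(3,3,5) = (0,0,0), while
# {(1,1,1), (1,1,0), (1,0,1)} is independent.
print(is_independent([(1, 2, 1), (2, 1, 4), (3, 3, 5)]))   # False
print(is_independent([(1, 1, 1), (1, 1, 0), (1, 0, 1)]))   # True
```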
It can be easily verified that the vector (3. 0). 3. 1)} forms a basis of V. Any element a + ib ∈ V is expressible as a · 1 + b · i. a basis of V is {1}. {(1. By convention. the set B = {e1 . b ∈ R} and F = C. 3). . 3. x2 . 0. 0)} or B = {(1. Hence a basis of V is {1. This set is called the standard basis of Rn . . That is. y. . vp . (0. .3. (2. . 2) + (2. z ∈ R} be a vector subspace of R3 . Hence. This basis has infinite number of vectors as the degree of the polynomial can be any positive integer. the empty set is a basis of the vector space {0}. . 0. S cannot be a basis of V. . x. vp } is linearly independent and therefore the scalars αi − βi for 1 ≤ i ≤ p must all be equal to zero. Let V = {a + ib : a. 1. A basis of this vector space is the set {1. Note that any element a + ib ∈ V can be written as a + ib = (a + ib)1. . 2. 5) ∈ V and (3. 2. 1 i th place a basis of Rn . 0) : x. All the other vector spaces are finite dimensional. b ∈ R} and F = R. Also. Hence. (1. 1. the vector space of all polynomials with real coefficients.3. 0. 1) + y(0. . Then any v ∈ V is a unique linear combination of the basis vectors. . 2. xn . (0. 0).2. . That is. 3) = 4(1. x) + (0. z) : x+y−z = 0. 1. V is a real vector space. . 0.4 (Finite Dimensional Vector Space) A vector space V is said to be finite dimensional if there exists a basis consisting of finite number of elements. Let V = {(x. . the linear span of an empty set is {0}. v2 . . 1.3. αi = βi and we have the uniqueness. . Hence. y. . Recall the vector space P(R).3.3. vp } be a basis of a vector space V (F). 1). . Check that if V = {(x. 1. (0. Observe that i is a vector in C. . 3)} ⊂ V. for 1 ≤ i ≤ p. z) = (x. v2 . Then. . 1. Hence. (1. x.}. 1. the vector space of all polynomials is an example of an infinite dimensional vector space. 1). 0)} or · · · are bases of V. we replace the value of z with x + y to get (x. 1. For 1 ≤ i ≤ n. x + y) = (x. . 4. Then S = {(1. if n = 3. 5) = (1. 1.3 1. i}.3. . i ∈ R and hence i · (1 + 0 · i) is not defined. y. 0. . 0). Then by Remark 3. y. But then the set {v1 . . In Example 3. 2. v1 . Observe that if there exists a v ∈ W such that v = α1 v1 + α2 v2 + · · · + αp vp and v = β1 v1 + β2 v2 + · · · + βp vp then 0 = v − v = (α1 − β1 )v1 + (α2 − β2 )v2 + · · · + (αp − βp )vp . . . (1. . y ∈ R} ⊂ R3 . 0. 0)} or B = {(2.3. let ei = (0. . 0). . . 0. 2). BASES 2. Let V = {a + ib : a. y) = x(1. 0. (0. then the set {(1. 5. . the vector space V is called infinite dimensional. 6. 2) − (1. 2.3. en } forms That is. So. α2 .7 Let {v1 . wm } is linearly independent or not. 3. At the ith step. . we have got a basis of V. we choose a vector.1) with α1 . v2 ). . . . say. . . 1 ≤ j ≤ m. 1 ≤ i ≤ m. v1 ∈ V. . Then prove that each vector in V can be expressed in more than one way as a linear combination of vectors from S. then this set will be linearly dependent. αm as the m unknowns. . Exercise 3.6 1. . wm } is a set of vectors from V with m > n then this set is linearly dependent. . v2 . v2 . Let A be a matrix of rank r. the set {v1 . w2 . 2. . (1. So. . . vn } be a basis of a given vector space V. . vn } is a basis of V and wi ∈ V. v2 . or L(v1 .e. v2 . . vi ). (1. . . . v2 . . L(v1 . Suppose L(S) = V but S is not a linearly independent set. In the first case. v2 . v2 ∈ V such that v2 ∈ L(v1 ). . Therefore. 0). Else there exists a vector. . .3. Then the set {v1 } is linearly independent.6. . .3. 1). 1.  i. .. i. the set {v1 . we consider the linear system α1 w1 + α2 w2 + · · · + αm wm = 0 (3. Since we want to find whether the set {w1 . 
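For the subspace V = {(x, y, z) : x + y - z = 0} of Example 3.3.3, the basis {(1, 0, 1), (0, 1, 1)} obtained by eliminating z can be verified directly: both vectors satisfy the defining equation, they are independent, and a general point (x, y, x + y) of the plane is exactly x(1, 0, 1) + y(0, 1, 1). A small sketch, assuming sympy:

```python
from sympy import Matrix, symbols

x, y = symbols('x y')
b1, b2 = Matrix([1, 0, 1]), Matrix([0, 1, 1])

# Both proposed basis vectors satisfy x + y - z = 0.
print(b1[0] + b1[1] - b1[2], b2[0] + b2[1] - b2[2])   # 0 0

# They are linearly independent,
print(Matrix.hstack(b1, b2).rank())                   # 2

# and an arbitrary point (x, y, x + y) of the plane equals x*b1 + y*b2.
print(Matrix([x, y, x + y]) == x * b1 + y * b2)       # True
```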
we have {v1 . . vi+1 ∈ V such that vi+1 ∈ L(v1 . In the second case. vi ) ⊂ V . . by Corollary 3. v3 } is linearly independent. say. vi ).2. such that w1 = a11 v1 + a21 v2 + · · · + an1 vn a12 v1 + a22 v2 + · · · + an2 vn . . . vi } as a basis of V. Show that the set {(1. wm = The set of Equations (3. Else there exists a vector. . vp } be a subset of a vector space V (F). FINITE DIMENSIONAL VECTOR SPACES Remark 3. there exist scalars aij . 1 − i)} is a basis of C3 (C). If {w1 . .1 Important Results Theorem 3.6. w2 = . v2 } is a basis of V. 1 ≤ i ≤ n. v2 .3. Step 3: If V = L(v1 . . .3. vi+1 } is linearly independent. This process will finally end as V is a finite dimensional vector space. for each i. . say. by Corollary 3. v3 ∈ V such that v3 ∈ L(v1 . . As {v1 . . .2. Then show that the r non-zero rows in the row-reduced echelon form of A are linearly independent and they form a basis of the row space of A. Step 2: If V = L(v1 ). the set {v1 . v2 .1) can be rewritten as    n n α1  m j=1 aj1 vj  + α2  m j=1 aj2 vj  + · · · + αm  m   n j=1 ajm vj  = 0 vn = 0. .5 We can use the above results to obtain a basis of any finite dimensional vector space V as follows: Step 1: Choose a non-zero vector. . then {v1 . .62 CHAPTER 3. say. Proof.6. 3. v2 . a1m v1 + a2m v2 + · · · + anm vn . Let S = {v1 . . 0. . v2 } is linearly independent. = . either V = L(v1 . . . . vi ) = V. Then by Corollary 3.2. If the solution set of this linear system of equations has more than one solution.3. . . . i=1 αi a1i v1 + i=1 αi a2i v2 + · · · + αi ani i=1 . v2 . . v2 ). w2 . .3. . 1.3. . . . . v2 . we can apply R23 (− 1 ) 2 to make the third row as the zero-row. 3. Aα = 0 where α = (α1 .3. . . 0. . In this case. vm } is linearly dependent if we take {u1 . R14 (−1)   0 0 −1 0  0 0 −1 0 1 1 0 1 0 −2 0 0 0 −2 0 0 1 −1 1 1 Observe that the rows 1. Then by the above theorem the set {v1 . 3 and 4 are non-zero. Then the set B is a basis of L(S) = V. (1. . . . 1. 1). BASES Since the set {v1 . . (1. Corollary 2. 1)} as a basis of L(S). αm ) and A =  . . . u2 . Hence.5. 1).  . . we get m = n. the equation (3. 1. Proof. Find a basis of L(S). R13 (−1). 1. 1. 2. third and fourth vectors of the set S. −1. denoted dim(V ). 1. 1. . Let B be the set of vectors in S corresponding to the non-zero rows of B. . 1). 1.  . Therefore. 1).  an1 an2 · · · anm of equations is strictly less than the number of unknowns. 1. u2 . in place of the elementary row operation R32 (−2). Hence. 1.1) has a solution with not all αi . . 1)} be a subset of R4 . we have m m m 63 αi a1i = i=1 i=1 αi a2i = · · · = αi ani = 0. . Construct a matrix A whose rows are the vectors in S. . (1. Definition 3. i. 1. 1). Use only the elementary row operations Ri (c) and Rij (c) to get the row-reduced form B of A (in fact we just need to make as many zero-rows as possible). vm } be two bases of V with m > n.3.3. .e. . . (1. wm } is a linearly dependent set. Thus. This contradicts the assumption that {v1 . . .10 Let V be a finite dimensional vector space. 1)} is a basis of L(S). B = {(1.  R32(−2)   R12 (−1). (1.9 Let S = {(1. i=1 Therefore. . v2 .8 Let V be a vector subspace of Rn with spanning set S. 1. zero. (1. (1. −1. . un } and {v1 . . . 1. . vn } is linearly independent. . 1). un } as the basis of V. Then any two bases of V have the same number of vectors. w2 . −1.11 (Dimension of a Vector Space) The dimension of a finite dimensional vector space V is the number of vectors in a basis of V.3. . .3. 
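The key step in Theorem 3.3.7 is that the homogeneous system formed by the coordinates of w1, ..., wm with respect to a basis of size n has more unknowns than equations when m > n, and therefore a non-trivial solution. The toy computation below (illustrative vectors, sympy assumed) shows this for four vectors in a three dimensional space.

```python
from sympy import Matrix

# Coordinates of four vectors w1, ..., w4 with respect to a basis of a
# 3-dimensional space, placed as columns: the system A*alpha = 0 is 3 x 4.
A = Matrix([[1, 0, 2,  1],
            [0, 1, 1,  3],
            [1, 1, 0, -1]])

# More unknowns than equations, so a non-trivial solution exists and the
# four vectors must be linearly dependent.
print(A.nullspace())    # a non-zero alpha with A*alpha = 0
```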
Observe that at the last step.3 implies that the solution set consists of infinite number of elements.   1 1 1 1 1 1 −1 1   Solution: Here A =   . v2 . . . . . We give a method of finding a basis of V from S. Remark 3. . the set {w1 . α2 .. 1).1) reduces to solving the system of homogeneous equations   a11 a12 · · · a1m    a21 a22 · · · a2m  t  . a basis of L(S) consists of the first. . Example 3. Since n < m.  . v2 . 0. 1 ≤ i ≤ m. finding αi ’s satisfying equation (3. Hence. Applying row-reduction to A. 1.3. the number . . we have 1 1 0 1 1 −1 1 1       1 1 1 1 1 1 1 1 1 1 1 1 1 1 −1 1 − − − − − − − − − − → 0 0 −2 0 − − − 0 0 0 0   − −→   −−−−−−−−−−   . we get {(1. −1. Corollary 3. Let {u1 . vm } is also a basis of V. −1.3.. x. 1). . 1). Then any set of n linearly independent vectors forms a basis of V. The solution set of the linear equations v + x − 3y + z = 0. . . v = y} be two subspaces of R5 . Then the set S can be extended to form a basis of V. v2 . x. 1. m > n. . .2. For 1 ≤ i ≤ n. −1)}. 0. c + id) = (a + ib)(1. 0. v2 . Then it can be easily verified that the set {e1 . define (f ⊕ g)(x) (t Then V is a real vector space.13 It is important to note that the dimension of a vector space may change if the underlying field (the set of scalars) is changed. .3. any vector (a + ib. The next theorem follows directly from Corollary 3. . y. i)} is a basis and dim(V ) = 4. c + id) = a(1. . vn } is a basis of V. (0. z)t = (y. 2. 0.3. 0. e2 . y. −1)t. 2). Also. .15 Let S be a linearly independent subset of a finite dimensional vector space V. {(1.3. we have found a linearly independent set S = {v1 .17 Let V = {(v. Then. 0). Hence. 2)t + x(0. w. z) ∈ R5 : w − x − z = 0. 1)} is a basis of C2 (C) and thus dim(V ) = 2. 2. Suppose.3. Example 3.3.12 1. i). and t ∈ R. . Solution: Let us find a basis of V ∩ W. Remark 3. f )(x) = = f (x) + g(x) and f (tx). In this case. Consider the real vector space C2 (R). 0).3.3.3. (0. Corollary 3. . 2y − x)t = y(1. z) ∈ R5 : v + x − 3y + z = 0} and W = {(v. FINITE DIMENSIONAL VECTOR SPACES Note that the Corollary 3. x2 . (a + ib. g ∈ V.16 Let V be a vector space of dimension n. consider the functions ei (x) = ei (x1 . x. 2. . 0. xn ) = xi . Hence. vr } ⊂ V. (0. .64 CHAPTER 3. (i. .15 is equivalent to the following statement: Let V be a vector space of dimension n. w − x − z = 0 and v = y is given by (v.2. the set {(1. For f. 0. 0). 0) + (c + id)(0. we can proceed as follows: . Example 3. Theorem 3. Find bases of V and W containing a basis of V ∩ W. vn in V such that {v1 . . y.7. 1.6 can be used to generate a basis of any non-trivial finite dimensional vector space. . . 0) + b(i. . 1. y.14 Let V be the set of all functions f : Rn −→R with the property that f (x+y) = f (x)+f (y) and f (αx) = αf (x). .6 and Theorem 3. w. 0) + c(0. To find a basis of W containing a basis of V ∩ W. w. en } is a basis of V and hence dim(V ) = n. every set of m vectors. x. a basis of V ∩ W is {(1. 2y. Then there exist vectors vr+1 . is linearly dependent. the proof is omitted. . 1) + d(0. Example 3. So. Thus. 1. Consider the complex vector space C2 (C). Theorem 3. (0. [Hint: On the contrary. see Appendix 15.1). y. Let V = {(x.3.3. Take the basis of V ∩ W found above as the first two vectors and that of W as the next set of vectors. w) ∈ R4 : x + y − z + w = 0. (0. z ∈ R. (0. 0). 1. find dim(Pn (R)). z. (1. W. Substituting y = 1. So. Find a basis of the vector space Pn (R). 0. 1. 1. 65 2. 2. y ∈ R. (0. (1. 2π]. gives a vector (0. x. 1. w = 1. 
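The procedure of Example 3.3.9, writing the vectors of S as the rows of a matrix, row-reducing, and keeping the non-zero rows, is easy to mechanise. In the sketch below (sympy assumed) the four vectors are taken to be (1,1,1,1), (1,1,-1,1), (1,1,0,1) and (1,-1,1,1), as far as they can be read off from the example; the method, rather than the particular data, is the point.

```python
from sympy import Matrix

# Rows of A are the vectors of the spanning set S.
S = [(1, 1, 1, 1), (1, 1, -1, 1), (1, 1, 0, 1), (1, -1, 1, 1)]
A = Matrix(S)

R, pivots = A.rref()
dim = A.rank()
print(dim)                                   # dimension of L(S); here 3

# The non-zero rows of the row-reduced form span the same row space,
# so they give one basis of L(S).
print([list(R.row(i)) for i in range(dim)])
```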
3y −v−x) for v. y. What if V is the complex vector space of all n × n Hermitian matrices? .3. 0. 1. V ∩ W and V + W. Heuristically. 0. Is it a basis of C3 (R) also? 4. ek } that are linearly dependent. 0.3. Prove that the collection of vectors {en : 1 ≤ n < ∞} is a linearly independent set. 2. (3. 1)} is a basis of C3 (C). Consider the real vector space. 0. z) gives us the vector (1. z. 0)}. we have the following very important theorem (for a proof. 2π]). 2). ek2 . That is. 2) ∈ V. (1. w) ∈ R4 : x + y − z + w = 0} be a subspace of R4 . 0. Also. w. z. −1).18 Let V (F) be a finite dimensional vector space and let M and N be two subspaces of V. w. Find bases and dimensions of V.2) Exercise 3. y. 1. It can be easily verified that a basis of W is {(1. 0) ∈ V. x + z. Then we have a finite set of vectors.4. Similarly. (0. 2). gives another vector (0. 1. 1. What can you say about the dimension of P(R)? 2. Show that the set {(1. x. 0. there exist scalars αi ∈ R for 1 ≤ i ≤ zero such that α1 sin(k1 x) + α2 sin(k2 x) + · · · + α sin(k x) = 0 for all x ∈ [0. we can also find the basis in the following way: A vector of W has the form (y. x = 1. and z = 0 in (y. . x + y + z + w = 0} and W = {(x. 0) ∈ W.3. Now for different values of m integrate the function Z 2π sin(mx) (α1 sin(k1 x) + α2 sin(k2 x) + · · · + α sin(k x)) dx 0 not all to get the required result. Substituting v = 0. 0). y. 0). Let W = {(x. 1. Then dim(M ) + dim(N ) = dim(M + N ) + dim(M ∩ N ).19 1. 1. 1. 1. 6. Let V be the set of all real symmetric n × n matrices. y. With this definition. a basis of V can be taken as {(1. substituting v = 0. x = 1 and y = 1.8 to get the required basis. 2)}. y. Now use Remark 3. 1. x. . v ∈ N }. 5. 1. For each n consider the vector en defined by en (x) = sin(nx). Find its basis and dimension. 1. Find a basis of W. 1. w = 1. z) for x. 1. y. w) ∈ R4 : x − y − z + w = 0. a vector of V has the form (v. Theorem 3. 1.3.] 3. 1. x + z. the vector subspace M + N is defined by M + N = {u + v : u ∈ M. −1). 0. Recall that for two vector subspaces M and N of a vector space V (F). C([0. 0. Also. Find its basis and dimension. . x. say {ek1 . 0. 0. of all real valued continuous functions. BASES 1. assume that the set is linearly dependent. . x + 2y − w = 0} be two subspaces of R4 . 1. x = 0 and y = 0. uQ such that u = uP + uQ where uP ∈ P and uQ ∈ Q. Let P = L{(1. Then show that each u ∈ Rn can be uniquely expressed as u = uP + uQ where uP ∈ P and uQ ∈ Q. 10. (c) A(n. (1. (1. Is the set. Let M (n. Let A =   be two matrices. For A and B find  . R) = {A ∈ M (n. 0. check that they are subspaces of M (n. R) : A + At = 0}. find its dimension. Is it necessary that uP and uQ are unique? 9. Show that P + Q = R3 and P ∩ Q = {0}. 11. Let P = L{(1. (f) the dimensions of all the vector subspaces so obtained. 15. Before going to the next section. (d) a basis each for the range spaces of A and B. (e) bases of the null spaces of A and B. determine uP . Show that there exists a vector u ∈ R3 such that u cannot be written uniquely in the form u = uP + uQ where uP ∈ P and uQ ∈ Q. 1)} be vector subspaces of R3 .66 CHAPTER 3. Also let W = {A ∈ V : a21 = −a12 }. R) : tr(A) = 0}. Let V be the set of all 2 × 2 matrices with complex entries and a11 + a22 = 0. 8. 13. R) denote the space of all n × n real matrices. If M and N are 4-dimensional subspaces of a vector space V of dimension 7 then show that M and N have at least one vector in common other than the zero vector. (c) a basis each for the row spaces of A and B. 1). 1. 
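Theorem 3.3.18 can be tested numerically for any two subspaces given by spanning vectors: dim(M + N) is the rank of the matrix whose rows are all the spanning vectors taken together, and the formula then yields dim(M ∩ N). The subspaces below are illustrative, not those of Example 3.3.17; sympy assumed.

```python
from sympy import Matrix

# Rows span two subspaces M and N of R^4.
M = Matrix([[1, 0, 1, 0],
            [0, 1, 0, 1]])
N = Matrix([[1, 1, 1, 1],
            [1, 0, 0, 0]])

dim_M, dim_N = M.rank(), N.rank()
dim_sum = Matrix.vstack(M, N).rank()       # dim(M + N)
dim_cap = dim_M + dim_N - dim_sum          # dim(M ∩ N) by Theorem 3.3.18

print(dim_M, dim_N, dim_sum, dim_cap)      # 2 2 3 1; indeed (1,1,1,1) lies in both
```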
FINITE DIMENSIONAL VECTOR SPACES 7. Show that V is a real vector space. 12. Show W is a vector subspace of V. −1. and find its dimension. R) = {A ∈ M (n. Show that P + Q = R3 and P ∩ Q = {0}. and B =  −3 −5 1 −4 2 −2 4 0 8  −1 −1 1 2 4 2 5 6 10 the following: (a) their row-reduced echelon forms. Let P and Q be subspaces of Rn such that P + Q = Rn and P ∩ Q = {0}. Let W1 be a k-dimensional subspace of an n-dimensional vector space V (F) where k ≥ 1. Find its basis. 2. 0). 0)} and Q = L{(1. (a) sl(n. Recall the vector space P4 (R). (b) S(n. 1. 0). 1. 1. For the sets given below. R) and also find their dimension. Prove that there exists an (n − k)-dimensional subspace W2 of V such that W1 ∩ W2 = {0} and W1 + W2 = V. 1)} be vector subspaces of R3 . where recall that tr(A) stands for trace of A. If u ∈ R3 . R) = {A ∈ M (n. . (1. 0)} and Q = L{(1.     2 4 0 6 1 2 1 3 2 −1 0 −2 5  0 2 2 2 4      14. we prove that for any matrix A of order m × n Row rank(A) = Column rank(A). W = {p(x) ∈ P4 (R) : p(−1) = p(1) = 0} a subspace of P4 (R)? If yes. R) : A = At }. (b) the matrices P1 and P2 such that P1 A and P2 B are in row-reduced form.   r αm1 αm2 αmr   αmi uij i=1        i=1 α1i ui1  α1r α12 α11   r            α2r   α22   α21   i=1 α2i ui1   . Note that Row rank(A) = r. . . . . 1 ≤ i ≤ m. R2 . C2 . (α12 . R2 . . α22 .    . .   . Cn are linear combination of the r vectors (α11 . ur ) ∈ Rn . Therefore. Cn be the columns of A.   .   . . C1 =   = u11   . i=1 r i=1 r α1i ui2 . i=1 α2i uin ). . .  .20 Let A be an m × n real matrix. . . 67 Proof.3. (α1r . we have the required result. . . i=1 r α1i uin ). . .   . . . . . C2 . . . r r Rm = αm1 u1 + · · · + αmr ur = ( So. C2 . . Thus. Therefore. . till r α1i ui1 . Rm ) = r. for all i. . . . .  . . . Column rank(A) = dim L(C1 . u2n ).   .3. . . for 1 ≤ j ≤ n. . BASES Proposition 3. .  + u2j  . there exists vectors u1 = (u11 . .   . i=1 αmi uin ). .  . .  + u21  . 1 ≤ i ≤ m. α21 . . there exist real numbers αij . . α2r . Rm be the rows of A and C1 . ur = (ur1 .    . . . u1n ). u2 = (u21 .   r αmr αm2 αm1   αmi ui1 i=1  Therefore.  .   .3. . . . . . α2i ui1 . we observe that the columns C1 . i=1 i=1 αmi ui2 . . Then Row rank(A) = Column rank(A). .  + · · · + ur1  . i=1 i=1 α2i ui2 . Let R1 . . αm1 )t . . 1 ≤ j ≤ r such that r r r R1 = α11 u1 + α12 u2 + · · · + α1r ur = ( R2 = α21 u1 + α22 u2 + · · · + α2r ur = ( and so on. . Hence. . . . . In general. means that dim L(R1 . . . αmr )t . we have   r α1i uij         i=1 α11 α12 α1r   r            α21   α22   α2r   i=1 α2i uij   . . . A similar argument gives Row rank(A) ≤ Column rank(A). . αm2 )t . . . . . urn ) ∈ Rn with Ri ∈ L(u1 . Cn ) =≤ r = Row rank(A). . . . u2 . Cj =   = u1j   . .  + · · · + urj  .  r αmi ui1 .    .  . . . . .   . The set {1 − x. . . un−1 ) are different even though they have the same set of vectors as elements. . Then [x]B3 = (α2 . . 1+x. Example 3. and (un . (1. 0. un ). x2 ) is an ordered basis. . 1) . αn )t and [x]B2 = (αn . . is the second component. . . 1. 0). . 1. . αn−1 )t . . As B is a set. . 0) + 1 · (1. x2 } is a basis of P2 (R). . . . we denote it by [v]B = (β1 . If v = β1 v1 + β2 v2 + · · · + βn vn then the tuple (β1 . the vector space of all polynomials of degree less than or equal to 2 with coefficients from R. u2 . So. . un ). . Consider the ordered bases B1 = (1. 0. a0 + a1 a0 − a1 If we take (1 + x. . 1). . 1). Suppose B1 = (u1 . 
For any element a0 + a1 x + a2 x2 ∈ P2 (R). 0). vn ) be an ordered basis of a vector space V (F) and let v ∈ V. . . . 1. . then is the first component. un−1 ) are two ordered bases of V. there is no ordering of its elements. 2 · (1. un ) and B2 = (un . v2 . u2 . α2 . n Note that x is uniquely written as i=1 αi ui and hence the coordinates with respect to an ordered basis are unique. . . 0). 0). un ). . 1. a column vector. . 1. α2 . α3 . If (1−x. 0) + (−2) · (1. 1 − x. . . Mathematically. Then. . FINITE DIMENSIONAL VECTOR SPACES 3. 0. . 2 2 a0 + a1 a0 − a1 is the first component. . . u2 as the second vector and so on. . u2 . If the ordered basis has u1 as the first vector.1 (Ordered Basis) An ordered basis for a vector space V (F) of dimension n. α1 . then we denote this ordered basis by (u1 .3 (Coordinates of a Vector) Let B = (v1 . . un } and {1. 2. . we have a0 + a1 x + a2 x2 = a0 + a1 a0 − a1 (1 − x) + (1 + x) + a2 x2 . α1 . . 0. 1. . as ordered bases (u1 . βn ) is called the coordinate of the vector v with respect to the ordered basis B. u1 ). . βn )t .2 Consider P2 (R). . 3.4 Let V = R3 . 1. . [x]B1 = (α1 . In this section. u2 . (1. 1.4. Definition 3.4 Ordered Bases Let B = {u1 . . . (0. . we want to associate an order among the vectors in any basis of V. Then for any x ∈ V there exists unique scalars α1 . . 1) = = = 1 · (1.4. 1. is a basis {u1 . 0. then 2 2 2 and a2 is the third component of the vector a0 + a1 x + a2 x . (0. Therefore. 1) and B3 = (1. . 0) + 2 · (1. . u1 . un . . . . . Suppose that the ordered basis B1 is changed to the ordered basis B3 = (u2 . . un } be a basis of a vector space V (F). . . . . α2 . u3 . . That is.4. . . . u2 . 1. with respect to the above bases we have (1. n}. 1). . . −1. 0. (1. 0. (1. 0) + (−1) · (0. . u2 . 0). . . . u3 . (u2 . 1 · (1. . 0. 0) of V. . 1 + x. 1) + (−2) · (1. . . . u1 . 0) + 1 · (0.4.68 CHAPTER 3. is the 2 2 2 second component. Definition 3. the coordinates of a vector depend on the ordered basis chosen. αn such that x = α1 u1 + α2 u2 + · · · + αn un = αn un + α1 u1 + · · · + αn−1 un−1 . αn )t . Example 3. . un } together with a one-to-one correspondence between the sets {u1 . u2 . . x2 ) as an ordered basis. u2 . 0). and a2 is the third component of the vector a0 + a1 x + a2 x . β2 . u1 . B2 = (1. . Then [(x. there exists unique scalars aij .4. . .  .4. 1. −2. 1). . [v]B1 = A[v]B2 . . j ≤ n such that n vi = l=1 ali ul for 1 ≤ i ≤ n. [vi ]B1 = (a1i . α2 . 0.. 0). 0. . . . y. 1. 1. vn ). (1. . ( 2 2 (x − y) · (1.  . . . 1). 1 ≤ i. 1. That is. 1). . i=1 a2i αi . y. . ···   α1 a1n   a2n   α2  . a2i .  . 1)t . 1) (x − y. v2 . . −1. j=1 i=1 [v]B1 = i=1 a1i αi . 0) + z · (1. (1. Example 3. we have   n n n n n v= αi vi = i=1 i=1 Since B1 is a basis this representation of v in terms of ui ’s is unique. for each i.3. . n n n t αi  j=1 aji uj  = aji αi uj . . .  . Hence. 1. . αn )t . 2)t . Theorem 3.. z)t = . . 1 ≤ i ≤ n. . if we write u = (1. [u]B2 = (2. [v2 ]B1 . −1. .e. B1 is a basis of V. . (1. ORDERED BASES Therefore. .  . . we have proved the following theorem. ani )t . 1) 2 2 +(x − z) · (1. y − z. −2. 0) x−y y−x + z. the ith column of A is the coordinate of the ith vector vi of B2 with respect to the ordered basis B1 .6 Consider two bases B1 = (1. Let A = [[v1 ]B1 . u2 . . 69 In general. . i. a11   a21  . 1. [u]B3 = (1. 1) + · (1. 0) of R3 . . let V be an n-dimensional vector space with ordered bases B1 = (u1 . Let v ∈ V with [v]B2 = (α1 . −1. . v2 .  . 1) and B2 = (1. . Then for any v ∈ V. 1)t . 
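Proposition 3.3.20 (row rank equals column rank) admits a quick computational sanity check: the number of pivots of A equals the number of pivots of its transpose. A sketch with sympy on an illustrative matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [2, 4, 6, 2],
            [1, 0, 1, 1]])

row_rank = len(A.rref()[1])      # pivots of A: dimension of the row space
col_rank = len(A.T.rref()[1])    # pivots of A^t: dimension of the column space
print(row_rank, col_rank)        # 2 2
```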
un ) and B2 = (v1 . 1. . an1 ··· ··· . . .  . . −1. So. Since. [vn ]B1 ] . . x − z)t . vn ). z)]B2 = ( y−x x−y + z) · (1. u2 .  αn ann Note that the ith column of the matrix A is equal to [vi ]B1 . un ) and B2 = (v1 . z)]B1 = = and [(x. . As B2 as ordered basis (v1 . 0) + (y − z) · (1. vn ).5 Let V be an n-dimensional vector space with ordered bases B1 = (u1 . 1. (1.4. . then [u]B1 = (1. . 1. i=1 ani αi =  = A[v]B2 . v2 . . 0). z) ∈ R3 .4. 1. Exercise 3. 1. 3 + x2 − x3 ) and B2 = (1. z)]B1 =  y − z  = 0 −2 1  x−y  = A [(x. −1. y. 1). 1). 1.4. (a) Show that B1 = (1 − x. 1)t . z)]B1 . y. 1) of R3 . 1) and (4. 1 − x. z)]B2 = A−1 [(x. That is. 2. 1 + x2 . −1. 1)]B1 = 0 · (1. (2.7 1. 0) + 0 · (1. 1) = (2.   y−x     0 2 0 x−y 2 +z      [(x. 1)t and [(1. 2. 1 − x3 . 1. 2 1 1 0 z x−z 4. (2. Let A = [aij ] = 0 −2 1 . 0) + 1 · (1. 1)]B1 = 2 · (1. In the next chapter. FINITE DIMENSIONAL VECTOR SPACES  0 2 0   2. 2) with respect to the basis B = (2. Consider the vector space P3 (R). 2. 0) + 1 · (1. 0)]B1 = 0 · (1. 1) = (0. 3. −2. (d) Let v = a0 + a1 x + a2 x2 + a3 x3 . (b) Find the coordinates of the vector u = 1 + x + x2 + x3 with respect to the ordered basis B1 and B2 . 1. 0). Determine the coordinates of the vectors (1. 0. −a3 [v]B1 = = = . 0. Then verify the following:   −a1 −a − a + 2a − a   0 1 2 3   −a0 − a1 + a2 − 2a3  a0 + a1 − a2 + a3     a0 + a1 − a2 + a3 0 1 0 0  −1 0 1 0  −a1          −1 0 0 1  a2 1 0 0 0 [v]B2 . 1. 0) + 0 · (1. 1). 1. Note that for any (x. 1. y. y. 0)t . 1 + x2 . 1. 1) = (0. 1 − x3 ) are bases of P3 (R). 1. 0) + (−2) · (1. 0. 1.5 again using the ideas of ‘linear transformations / functions’. z)]B2 . (c) Find the matrix A such that [u]B2 = A[u]B1 . y. The matrix A is invertible and hence [(x. −2.70  CHAPTER 3. [(1. the elements of B2 = (1. 0) + 1 · (1. we try to understand Theorem 3. (1. 1. 1. 0) are expressed in terms of the ordered basis B1 . The columns of the matrix A are obtained by the following rule: 1 1 0 [(1. (1. 0. Define a map TA : Rn −→Rm by TA (x) = Ax for every xt = (x1 . 2. 0)) = (1. . Recall that Pn (R) is the set of all polynomials of degree less than or equal to n with real coefficients. Define T : Rn+1 −→Pn (R) by T ((a1 . .1 Definitions and Basic Properties Throughout this chapter. 3. 3y) = T (x) + T (y). 2. .2 as 1. We now give a few examples of linear transformations. . Then TA is a linear transformation. Let x = (x1 . .Chapter 4 Linear Transformations 4. 2x − y. x2 . 3(x + y)) = (x. 5. 3). . β ∈ F. . n (a) Define T (x) = i=1 xi . . Define T : R−→R2 by T (x) = (x. define T (x) = ai xi . Example 4. . y)) = (x + y. . 1)) = (1. 71 . every m × n real matrix defines a linear transformation from Rn to Rm . −1. Let A be an m × n real matrix. a2 . That is. 4. A map T : V −→W is called a linear transformation if T (αu + βv) = αT (u) + βT (v). . 3x) for all x ∈ R. an+1 ) ∈ Rn+1 . . Note that examples (a) i=1 and (b) can be obtained by assigning particular values for the vector a. . (c) For a fixed vector a = (a1 . Then T is a linear transformation T (x + y) = (x + y. . Define T : R2 −→R3 by T ((x.1 (Linear Transformation) Let V and W be vector spaces over F. define Ti (x) = xi . . a2 . 1 ≤ i ≤ n. 1) and T ((0. x2 . . 3x) + (y. an+1 )) = a1 + a2 x + · · · + an+1 xn for (a1 . a2 . v ∈ V. Then T is a linear transformation. xn ). . the scalar field F is either always the set R or always the set C.1. xn ) ∈ Rn . n (b) For any i. Verify that the maps given below from Rn to R are linear transformations. 
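Finding the coordinates [v]_B of Definition 3.4.3 is itself a linear system: the coefficient matrix has the basis vectors as columns. The sketch below (sympy assumed, illustrative data) also shows how reordering the basis permutes the coordinates, in the spirit of Example 3.4.4.

```python
from sympy import Matrix

def coordinates(v, basis):
    """Coordinates of v with respect to the ordered basis (a list of vectors)."""
    B = Matrix.hstack(*[Matrix(list(u)) for u in basis])
    return B.solve(Matrix(list(v)))   # unique, since the basis vectors are independent

v = (1, 1, 1)
B1 = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]
B2 = [(1, 1, 1), (1, 1, 0), (1, 0, 0)]   # the same vectors in a different order

print(coordinates(v, B1).T)   # [0, 0, 1]
print(coordinates(v, B2).T)   # [1, 0, 0]: the coordinates move with the ordering
```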
for all α. and u. x + 3y). an ) ∈ Rn .1. . . Definition 4. Then T is a linear transformation with T ((1. . T (x) is determined by the coordinates (α1 . α2 .72 CHAPTER 4. = (x − y. That is. which of the following are linear transformations T : M2 (R)−→M2 (R)? .4 (Zero Transformation) Let V be a vector space and let T : V −→W be the map defined by T (v) = 0 for every v ∈ V. we just need to know the vectors T (u1 ). Then T (0V ) = 0W . . |x|) = (x + y. 3x − 4y) = (z. . Such a linear transformation is called the Identity transformation and is denoted by I. Therefore. . for every x ∈ V.1. αn such that x = α1 u1 + α2 u2 + · · · + αn un .1. . T (u2 ). y) (c) Let V = W = R2 with T (x. T (u2 ). T (un ) ∈ W. . . . we write 0 for both the zero vector of the domain space and the zero vector of the range space. Since B is a basis of V. From now on.1. So. T (un ). 2x − y. there exist scalars α1 . z. u2 .1. αn . . T (u2 ). Which of the following are linear transformations T : V −→W ? Justify your answers. x2 − y 2 ) = (x − y. LINEAR TRANSFORMATIONS Proposition 4. x. Such a linear transformation is called the zero transformation and is denoted by 0. w) 2. . Proof. . Proof. T (0V ) = 0W as T (0V ) ∈ W. Then the linear transformation T is a linear combination of the vectors T (u1 ). We now prove a result that relates a linear transformation T with its value on a basis of the domain space. we have T (0V ) = T (0V + 0V ) = T (0V ) + T (0V ). . . Then T is a linear transformation. . . y) (d) Let V = R2 and W = −→R4 with T (x. T is determined by T (u1 ). In other words. Then. y) (b) Let V = W = R2 with T (x. αn ) of x with respect to the ordered basis B and the vectors T (u1 ). Observe that. α2 . by the definition of a linear transformation T (x) = T (α1 u1 + · · · + αn un ) = α1 T (u1 ) + · · · + αn T (un ). we know the scalars α1 . 2x + y.1. . . . Definition 4.7 1. x − y. α2 . . . So. Then T is a linear transformation.3 Let T : V −→W be a linear transformation. . . . Since 0V = 0V + 0V . . x + 3y) (a) Let V = R2 and W = R3 with T (x. y) = (x + y + 1. un ) be an ordered basis of V. Exercise 4. y) (e) Let V = W = R4 with T (x. given x ∈ V. Definition 4. . Theorem 4.5 (Identity Transformation) Let V be a vector space and let T : V −→V be the map defined by T (v) = v for every v ∈ V. . . . . . T (u2 ). for any x ∈ V. y. to know T (x). T (un ) in W. . Suppose that 0V is the zero vector in V and 0W is the zero vector of W. . T (un ). .6 Let T : V −→W be a linear transformation and B = (u1 . w. Recall that M2 (R) is the space of all 2 × 2 matrices with real entries. T 3 = 0. x) and (b) f ( (x. 7. and let x0 ∈ Rn with T (x0 ) = y. DEFINITIONS AND BASIC PROPERTIES (a) T (A) = At (b) T (A) = I + A (c) T (A) = A2 73 (d) T (A) = BAB −1 . Is this function a linear transformation? Justify your answer. x) ) = (x. In general. (d) T 2 = I.8 Let T : V −→W be a linear transformation. for k ∈ N. T (x). if T k = 0 for 1 ≤ k ≤ p and T p+1 = 0. T 2 = 0. 8. T p (x)} is linearly independent. Let T : Rn −→ Rm be a linear transformation. . That is. 4. Find all functions f : R2 −→ R2 that satisfy the conditions (a) f ( (x. S = T. Then for each w ∈ W. y) ∈ R2 . prove that T k (x) = Ak x. x) for all (x. Then prove that T 2 (x) := T (T (x)) = A2 x. Consider the linear transformation TA (x) = Ax for every x ∈ Rn . . 3. Show that for every x ∈ S there exists z ∈ N such that x = x0 + z. Suppose that the map T is one-one and onto. S = 0. Then prove that the set {x. In general.1.1. 2. Let x ∈ Rn such that T (x) = 0. 
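The map T(x, y) = (x + y, 2x - y, x + 3y) of Example 4.1.2 is of the form T_A(v) = Av for a 3 x 2 matrix A, and the defining property of linearity can be spot-checked numerically. A sketch, assuming numpy:

```python
import numpy as np

# T(x, y) = (x + y, 2x - y, x + 3y) written as T(v) = A @ v.
A = np.array([[1,  1],
              [2, -1],
              [1,  3]])

def T(v):
    return A @ v

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alpha, beta = 2.0, -3.0

# T(alpha*u + beta*v) = alpha*T(u) + beta*T(v): the defining property of linearity.
print(np.allclose(T(alpha * u + beta * v), alpha * T(u) + beta * T(v)))   # True
print(T(np.array([1, 0])), T(np.array([0, 1])))   # (1, 2, 1) and (1, -1, 3), as in the text
```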
Is T linear on (a) C over R (b) C over C. Define a map T : C −→ C by T (z) = z. . is a linear transformation. y) ) = (y. . T = I. (b) T = 0. . then for any vector x ∈ Rn with T p (x) = 0 prove that the set {x. The map T −1 : W −→V defined by T −1 (w) = v whenever T (v) = w. 9. y1 ) for x1 = y1 to its mirror image along the line y = x.4. T ◦ S = 0. Use the ideas of matrices to give examples of linear transformations T. the set T −1 (w) is a set consisting of a single element. Let T : R −→ R be a map. T (x)} is linearly independent. Theorem 4. Let T : Rn −→ Rn be a linear transformation such that T = 0 and T 2 = 0. the complex conjugate of z. Then T is a linear transformation if and only if there exists a unique c ∈ R such that T (x) = cx for every x ∈ R. (c) S 2 = T 2 . f fixes the line y = x and sends the point (x1 . For w ∈ W. 6. where T ◦ S(x) = T S(x) . define the set T −1 (w) = {v ∈ V : T (v) = w}. 1. S ◦ T = 0. 5. Let A be an n × n real matrix. where B is some fixed 2 × 2 matrix. S : R3 −→R3 that satisfy: (a) T = 0. Consider the sets S = {x ∈ Rn : T (x) = y} and N = {x ∈ Rn : T (x) = 0}. T ◦ T −1 = I. x+y x−y . So. . an+1 ) for a1 + a2 x + · · · + an+1 xn ∈ Pn (R). defined as above. for each w ∈ W there exists a vector v ∈ V such that T (v) = w. Let V and W be finite dimensional vector spaces over the set F with respective dimensions m and n. Recall the vector space Pn (R) and the linear transformation T : Rn+1 −→Pn (R) defined by T ((a1 . . conclude that the map T −1 is indeed the inverse of the linear transformation T. then the map T −1 : W −→V defined by T −1 (w) = v whenever T (v) = w is called the inverse of the linear transformation T. for any α1 . Thus. . . an+1 ) ∈ Rn+1 . a2 . y)) = (x + y. . Also. Definition 4. Suppose B1 = (v1 . Then T −1 : Pn (R)−→Rn+1 is defined as T −1 (a1 + a2 x + · · · + an+1 xn ) = (a1 . v2 ∈ V such that T −1 (w1 ) = v1 and T −1 (w2 ) = v2 . x − y).1. Then T −1 : R2 −→R2 is defined T −1 ((x. But by assumption. Let w1 . vn ) is an ordered basis of . Verify that T ◦ T −1 = T −1 ◦ T = I. y)) = ( Note that T ◦ T −1 ((x. 2.2 Matrix of a linear transformation In this section. a2 . )) 2 2 x+y x−y x+y x−y + . we relate linear transformation over finite dimensional vector spaces with matrices. . α2 ∈ F. y)) = = = T (T −1 ((x. .10 by 1. we ask the reader to recall the results on ordered basis. We now show that T −1 as defined above is a linear transformation. Verify that T −1 ◦ T = I. This completes the proof of Part 1. y). 2 2 Hence. Hence T −1 : W −→V. y))) = T (( x+y x−y . For this. . ). . Or equivalently. . . a2 . is a linear transformation. v2 . . there exist unique vectors v1 . Then by Part 1. If the map T is one-one and onto. Example 4. let T : V −→W be a linear transformation. Thus for any α1 . 4. T −1 (α1 w1 + α2 w2 ) = α1 v1 + α2 v2 = α1 T −1 (w1 ) + α2 T −1 (w2 ). α2 ∈ F. Since T is onto. So. the identity transformation. w2 ∈ W.9 (Inverse Linear Transformation) Let T : V −→W be a linear transformation. Hence. LINEAR TRANSFORMATIONS Proof. T is one-one and therefore v1 = v2 . T (v1 ) = w1 and T (v2 ) = w2 . studied in Section 3.1. . Suppose there exist vectors v1 .4. .74 CHAPTER 4. the set T −1 (w) is non-empty. we have T (α1 v1 + α2 v2 ) = α1 w1 + α2 w2 . Define T : R2 −→R2 by T ((x. . . an+1 )) = a1 + a2 x + · · · + an+1 xn for (a1 . v2 ∈ V such that T (v1 ) = T (v2 ). the map T −1 is indeed the inverse of the linear transformation T. − ) ( 2 2 2 2 (x. . . B2 ]. 
n am1 j=1 amj xj j=1 n j=1 The matrix A is called the matrix of the linear transformation T with respect to the ordered bases B1 and B2 . ··· ( i=1 j=1  a1n  a2n  .  . . Let T : V −→W be a linear transformation. amj ∈ F such that T (v1 ) = T (v2 ) = . am2 ··· ··· . . . 1 ≤ j ≤ n. If B1 is an ordered basis of V and B2 is an ordered basis of W. . x2 . . B2 ] such that [T (x)]B2 = A [x]B1 .  . the vectors T (vj ) ∈ W. . then there exists an m × n matrix A = T [B1 . . .  . In other words.   .4.   a1j    a2j   . and is denoted by T [B1 .  . respectively. . amj ]t . We therefore look at the images of the vectors vj ∈ B1 for 1 ≤ j ≤ n. .2. T (vj ) = i=1 aij wi for 1 ≤ j ≤ n. .  . . for each j. .  .1 Let V and W be finite dimensional vector spaces with dimensions n and m. . MATRIX OF A LINEAR TRANSFORMATION 75 V. for each j. Equivalently.  . In the last section. . the coordinates of T (vj ) with respect to the ordered basis B2 is the column vector [a1j .. w2 . . amj Let [x]B1 = [x1 . Then the coordinates of the vector T (x) with .  . . a2j . . We thus have the following theorem. . 1 ≤ j ≤ n.  . am1 am2 respect to the ordered basis B2 is  n [T (x)]B2   =     ··· ··· .  xn amn = A [x]B1 .2. . 1 ≤ j ≤ n.   . . . . . . .   a1j xj a11   a2j xj   a21 = . T (vn ) = m a12 w1 + a22 w2 + · · · + am2 wm a11 w1 + a21 w2 + · · · + am1 wm a1n w1 + a2n w2 + · · · + amn wm . we saw that a linear transformation is determined by its image on a basis of the domain space. So. ···   x1 a1n   a2n   x2  . Theorem 4. Now for each j. . Or in short. a2j .  . xn ]t be the coordinates of a vector x ∈ V. Then n n T (x) = T ( j=1 n xj vj ) = j=1 m xj T (vj ) = j=1 m xj ( i=1 n aij wi ) aij xj )wi . We now express these vectors in terms of an ordered basis B2 = (w1 . wm ) of W. = a11 a12   a21 a22 Define a matrix A by A =  . . there exist unique scalars a1j . [T (vj )]B2 =    .  amn a12 a22 .. 76 CHAPTER 4. LINEAR TRANSFORMATIONS Remark 4.2.2 Let B1 = (v1 , v2 , . . . , vn ) be an ordered basis of V and B2 = (w1 , w2 , . . . , wm ) be an ordered basis of W. Let T : V −→ W be a linear transformation with A = T [B1 , B2 ]. Then the first column of A is the coordinate of the vector T (v1 ) in the basis B2 . In general, the ith column of A is the coordinate of the vector T (vi ) in the basis B2 . We now give a few examples to understand the above discussion and the theorem. Example 4.2.3 1. Let T : R2 −→R2 be a linear transformation, given by T ( (x, y) ) = (x + y, x − y). We obtain T [B1 , B2 ], the matrix of the linear transformation T with respect to the ordered bases B1 = (1, 0), (0, 1) For any vector (x, y) ∈ R2 , [(x, y)]B1 = x y and B2 = (1, 1), (1, −1) of R2 . as (x, y) = x(1, 0) + y(0, 1). Also, by definition of the linear transformation T, we have T ( (1, 0) ) = (1, 1) = 1 · (1, 1) + 0 · (1, −1). So, [T ( (1, 0) )]B2 = (1, 0)t and T ( (0, 1) ) = (1, −1) = 0 · (1, 1) + 1 · (1, −1). That is, [T ( (0, 1) )]B2 = (0, 1)t . So the T [B1, B2 ] = 1 0 0 . Observe that in this case, 1 x , and y [T ( (x, y) )]B2 = [(x + y, x − y)]B2 = x(1, 1) + y(1, −1) = T [B1 , B2 ] [(x, y)]B1 = 1 0 0 1 x x = [T ( (x, y) )]B2 . = y y 2. Let B1 = (1, 0, 0), (0, 1, 0), (0, 0, 1) , B2 = (1, 0, 0), (1, 1, 0), (1, 1, 1) be two ordered bases of R3 . Define T : R3 −→R3 by T (x) = x. 
Then T ((1, 0, 0)) = T ((0, 1, 0)) = T ((0, 0, 1)) = Thus, we have T [B1 , B2 ] = [[T ((1, 0, 0))]B2 , [T ((0, 1, 0))]B2 , [T ((0, 0, 1))]B2 ] 1 · (1, 0, 0) + 0 · (1, 1, 0) + 0 · (1, 1, 1), −1 · (1, 0, 0) + 1 · (1, 1, 0) + 0 · (1, 1, 1), and 0 · (1, 0, 0) + (−1) · (1, 1, 0) + 1 · (1, 1, 1). = [(1, 0, 0)t , (−1, 1, 0)t, (0, −1, 1)t ]   1 −1 0   = 0 1 −1 . 0 0 1   1 0 0   Similarly check that T [B1 , B1 ] = 0 1 0 . 0 0 1 4.3. RANK-NULLITY THEOREM 77 3. Let T : R3 −→R2 be define by T ((x, y, z)) = (x + y − z, x + z). Let B1 = (1, 0, 0), (0, 1, 0), (0, 0, 1) and B2 = (1, 0), (0, 1) be the ordered bases of the domain and range space, respectively. Then T [B1 , B2 ] = 1 1 1 −1 . 0 1 Check that that [T (x, y, z)]B2 = T [B1 , B2 ] [(x, y, z)]B1 . Exercise 4.2.4 Recall the space Pn (R) ( the vector space of all polynomials of degree less than or equal to n). We define a linear transformation D : Pn (R)−→Pn (R) by D(a0 + a1 x + a2 x2 + · · · + an xn ) = a1 + 2a2 x + · · · + nan xn−1 . Find the matrix of the linear transformation D. However, note that the image of the linear transformation is contained in Pn−1 (R). Remark 4.2.5 1. Observe that T [B1 , B2 ] = [[T (v1 )]B2 , [T (v2 )]B2 , . . . , [T (vn )]B2 ]. 2. It is important to note that [T (x)]B2 = T [B1 , B2 ] [x]B1 . That is, we multiply the matrix of the linear transformation with the coordinates [x]B1 , of the vector x ∈ V to obtain the coordinates of the vector T (x) ∈ W. 3. If A is an m × n matrix, then A induces a linear transformation TA : Rn −→Rm , defined by TA (x) = Ax. We sometimes write A for TA . Suppose that the standard bases for Rn and Rm are the ordered bases B1 and B2 , respectively. Then observe that T [B1 , B2 ] = A. 4.3 Rank-Nullity Theorem Definition 4.3.1 (Range and Null Space) Let V, W be finite dimensional vector spaces over the same set of scalars and T : V −→W be a linear transformation. We define 1. R(T ) = {T (x) : x ∈ V }, and 2. N (T ) = {x ∈ V : T (x) = 0}. Proposition 4.3.2 Let V and W be finite dimensional vector spaces and let T : V −→W be a linear transformation. Suppose that (v1 , v2 , . . . , vn ) is an ordered basis of V. Then 1. (a) R(T ) is a subspace of W. (b) R(T ) = L(T (v1 ), T (v2 ), . . . , T (vn )). (c) dim(R(T )) ≤ dim(W ). 2. (a) N (T ) is a subspace of V. (b) dim(N (T )) ≤ dim(V ). 78 3. T is one-one ⇐⇒ R(T ). CHAPTER 4. LINEAR TRANSFORMATIONS N (T ) = {0} is the zero subspace of V ⇐⇒ {T (ui ) : 1 ≤ i ≤ n} is a basis of 4. dim(R(T )) = dim(V ) if and only if N (T ) = {0}. Proof. The results about R(T ) and N (T ) can be easily proved. We thus leave the proof for the readers. We now assume that T is one-one. We need to show that N (T ) = {0}. Let u ∈ N (T ). Then by definition, T (u) = 0. Also for any linear transformation (see Proposition 4.1.3), T (0) = 0. Thus T (u) = T (0). So, T is one-one implies u = 0. That is, N (T ) = {0}. Let N (T ) = {0}. We need to show that T is one-one. So, let us assume that for some u, v ∈ V, T (u) = T (v). Then, by linearity of T, T (u − v) = 0. This implies, u − v ∈ N (T ) = {0}. This in turn implies u = v. Hence, T is one-one. The other parts can be similarly proved. Remark 4.3.3 1. The space R(T ) is called the range space of T and N (T ) is called the null space of T. 2. We write ρ(T ) = dim(R(T )) and ν(T ) = dim(N (T )). 3. ρ(T ) is called the rank of the linear transformation T and ν(T ) is called the nullity of T. 
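The recipe of Theorem 4.2.1 and Remark 4.2.2, namely that the j-th column of T[B1, B2] is the coordinate vector of T(v_j) with respect to B2, can be coded directly and used to recompute the matrix of Example 4.2.3.3. A sketch (sympy assumed; matrix_of is a helper name introduced here, not in the notes):

```python
from sympy import Matrix

def matrix_of(T, B1, B2):
    """T[B1, B2]: the j-th column holds the B2-coordinates of T(v_j)."""
    C = Matrix.hstack(*[Matrix(list(w)) for w in B2])
    return Matrix.hstack(*[C.solve(Matrix(list(T(v)))) for v in B1])

# Example 4.2.3.3: T(x, y, z) = (x + y - z, x + z) with the standard bases.
T = lambda v: (v[0] + v[1] - v[2], v[0] + v[2])
B1 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
B2 = [(1, 0), (0, 1)]

print(matrix_of(T, B1, B2))   # Matrix([[1, 1, -1], [1, 0, 1]]), as in the text
```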
Example 4.3.4 Determine the range and null space of the linear transformation T : R3 −→R4 with T (x, y, z) = (x − y + z, y − z, x, 2x − 5y + 5z). Solution: By Definition R(T ) = L(T (1, 0, 0), T (0, 1, 0), T (0, 0, 1)). We therefore have R(T ) = = = = = Also, by definition N (T ) = = = = = = = {(x, y, z) ∈ R3 : T (x, y, z) = 0} L (1, 0, 1, 2), (−1, 1, 0, −5), (1, −1, 0, 5) L (1, 0, 1, 2), (1, −1, 0, 5) {α(1, 0, 1, 2) + β(1, −1, 0, 5) : α, β ∈ R} {(α + β, −β, α, 2α + 5β) : α, β ∈ R} {(x, y, z, w) ∈ R4 : x + y − z = 0, 5y − 2z + w = 0}. {(x, y, z) ∈ R3 : (x − y + z, y − z, x, 2x − 5y + 5z) = 0} {(x, y, z) ∈ R3 : x − y + z = 0, y − z = 0, 3 {(x, y, z) ∈ R {(x, y, z) ∈ R3 : y = z, x = 0} {(0, y, y) ∈ R3 : y arbitrary} : y − z = 0, x = 0} x = 0, 2x − 5y + 5z = 0} L((0, 1, 1)) Exercise 4.3.5 1. Let T : V −→W be a linear transformation and let {T (v1 ), T (v2 ), . . . , T (vn )} be linearly independent in R(T ). Prove that {v1 , v2 , . . . , vn } ⊂ V is linearly independent. 4.3. RANK-NULLITY THEOREM 2. Let T : R2 −→R3 be defined by T (1, 0) = (1, 0, 0), T (0, 1) = (1, 0, 0). 79 Then the vectors (1, 0) and (0, 1) are linearly independent whereas T (1, 0) and T (0, 1) are linearly dependent. 3. Is there a linear transformation T : R3 −→ R2 such that T (1, −1, 1) = (1, 2), 4. Recall the vector space Pn (R). Define a linear transformation D : Pn (R)−→Pn (R) by D(a0 + a1 x + a2 x2 + · · · + an xn ) = a1 + 2a2 x + · · · + nan xn−1 . Describe the null space and range space of D. Note that the range space is contained in the space Pn−1 (R). 5. Let T : R3 −→ R3 be defined by T (1, 0, 0) = (0, 0, 1), T (1, 1, 0) = (1, 1, 1) and T (1, 1, 1) = (1, 1, 0). (a) Find T (x, y, z) for x, y, z ∈ R, (b) Find R(T ) and N (T ). Also calculate ρ(T ) and ν(T ). (c) Show that T 3 = T and find the matrix of the linear transformation with respect to the standard basis. 6. Let T : R2 −→ R2 be a linear transformation with T ((3, 4)) = (0, 1), T ((−1, 1)) = (2, 3). Find the matrix representation T [B, B] of T with respect to the ordered basis B = (1, 0), (1, 1) of R2 . 7. Determine a linear transformation T : R3 −→ R3 whose range space is L{(1, 2, 0), (0, 1, 1), (1, 3, 1)}. 8. Suppose the following chain of matrices is given. A −→ B1 −→ B1 −→ B2 · · · −→ Bk−1 −→ Bk −→ B. If row space of B is in the row space of Bk and the row space of Bl is in the row space of Bl−1 for 2 ≤ l ≤ k then show that the row space of B is in the row space of A. We now state and prove the rank-nullity Theorem. This result also follows from Proposition 4.3.2. Theorem 4.3.6 (Rank Nullity Theorem) Let T : V −→W be a linear transformation and V be a finite dimensional vector space. Then dim(R(T )) + dim(N (T )) = dim(V ), or equivalently ρ(T ) + ν(T ) = dim(V ). and T (−1, 1, 2) = (1, 0)? 80 CHAPTER 4. LINEAR TRANSFORMATIONS Proof. Let dim(V ) = n and dim(N (T )) = r. Suppose {u1 , u2 , . . . , ur } is a basis of N (T ). Since {u1 , u2 , . . . , ur } is a linearly independent set in V, we can extend it to form a basis of V (see Corollary 3.3.15). So, there exist vectors {ur+1 , ur+2 , . . . , un } such that {u1 , . . . , ur , ur+1 , . . . , un } is a basis of V. Therefore, by Proposition 4.3.2 R(T ) = L(T (u1 ), T (u2 ), . . . , T (un )) = L(0, . . . , 0, T (ur+1), T (ur+2 ), . . . , T (un )) = L(T (ur+1 ), T (ur+2 ), . . . , T (un )). We now prove that the set {T (ur+1), T (ur+2 ), . . . , T (un )} is linearly independent. Suppose the set is not linearly independent. Then, there exists scalars, αr+1 , αr+2 , . . . 
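Example 4.3.4 can be reproduced by writing T as a matrix (its rows carry the coefficients of the four output coordinates) and asking for the null space and the column space. A sketch with sympy:

```python
from sympy import Matrix

# T(x,y,z) = (x - y + z, y - z, x, 2x - 5y + 5z) as a 4 x 3 matrix.
A = Matrix([[1, -1,  1],
            [0,  1, -1],
            [1,  0,  0],
            [2, -5,  5]])

print(A.nullspace())     # one basis vector (0,1,1): N(T) = L((0,1,1)), as in the text
print(A.rank())          # 2 = dim(R(T))
print(A.columnspace())   # two independent columns spanning R(T)
```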
, αn , not all zero such that αr+1 T (ur+1 ) + αr+2 T (ur+2 ) + · · · + αn T (un ) = 0. That is, T (αr+1 ur+1 + αr+2 ur+2 + · · · + αn un ) = 0. So, by definition of N (T ), αr+1 ur+1 + αr+2 ur+2 + · · · + αn un ∈ N (T ) = L(u1 , . . . , ur ). Hence, there exists scalars αi , 1 ≤ i ≤ r such that αr+1 ur+1 + αr+2 ur+2 + · · · + αn un = α1 u1 + α2 u2 + · · · + αr ur . That is, α1 u1 + + · · · + αr ur − αr+1 ur+1 − · · · − αn un = 0. But the set {u1 , u2 , . . . , un } is a basis of V and so linearly independent. Thus by definition of linear independence αi = 0 for all i, 1 ≤ i ≤ n. In other words, we have shown that {T (ur+1 ), T (ur+2), . . . , T (un )} is a basis of R(T ). Hence, dim(R(T )) + dim(N (T )) = (n − r) + r = n = dim(V ). Using the Rank-nullity theorem, we give a short proof of the following result. Corollary 4.3.7 Let T : V −→V be a linear transformation on a finite dimensional vector space V. Then T is one-one ⇐⇒ T is onto ⇐⇒ T is invertible. Proof. By Proposition 4.3.2, T is one-one if and only if N (T ) = {0}. By the rank-nullity Theorem 4.3.6 N (T ) = {0} is equivalent to the condition dim(R(T )) = dim(V ). Or equivalently T is onto. By definition, T is invertible if T is one-one and onto. But we have shown that T is one-one if and only if T is onto. Thus, we have the last equivalent condition. Remark 4.3.8 Let V be a finite dimensional vector space and let T : V −→V be a linear transformation. If either T is one-one or T is onto, then T is invertible. The following are some of the consequences of the rank-nullity theorem. The proof is left as an exercise for the reader. Let T. . Let A be an m × n matrix. bm )t such that the system Ax = b does not have any solution.5. . . . Ci2 . then the system Ax = 0 has infinitely many solutions.6 to get the required result. b2 . . . Then for part i) one can proceed as follows. Ci be the linearly independent columns of A. Let T : V −→W be a linear transformation. Prove that Row Rank (A) = Column Rank (A). (b) if n < m. b} is linearly independent.1. Use Theorem 2. . (a) If V is finite dimensional then show that the null space and the range space of T are also finite dimensional. The dimension of the null space of A = n − k. . . i) Let Ci1 . Then rank(A) < rank([A b]) implies that {Ci1 . Then (a) if n > m. There is a k × k submatrix of A with non-zero determinant and every (k + 1) × (k + 1) submatrix of A has zero determinant. 3. if dim(V ) > dim(W ) then T is not one-one. . Ci . . 6. Hence b ∈ L(Ci1 . x and b. . 5. Note that ρ(A) = column rank(A) = dim(R(T )) = (say). [Hint: Consider the linear system of equation Ax = b with the orders of A. . 81 4. respectively as m × n. This implies. Prove Theorem 2. . 1. Hence. (b) If V and W are both finite dimensional then show that i.3. Ci2 .10 1. Ax = 0 has n − r linearly independent solutions.] 5. Ci2 .9 The following are equivalent for an m × n real matrix A. 3. The dimension of the range space of A is k. There exist exactly k rows of A that are linearly independent. . RANK-NULLITY THEOREM Corollary 4. . . There exist exactly k columns of A that are linearly independent. [Hint: Define TA : Rn −→Rm by TA (v) = Av for all v ∈ Rn . . Ci ). b2 . S : V −→V be linear transformations with dim(V ) = n. . Now observe that R(TA ) is the linear span of columns of A and use the rank-nullity Theorem 4.3.3. On similar lines prove the other two parts. n × 1 and m × 1. Define a linear transformation T : Rn −→Rm by T (v) = Av. There is a subset of Rm consisting of exactly k linearly independent vectors b1 . 
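For the transformation of Example 4.3.4 the Rank Nullity Theorem reads 2 + 1 = 3 = dim(R^3), and Corollary 4.3.7 can be seen on any square matrix: trivial null space, full rank and invertibility go together. A sketch, sympy assumed:

```python
from sympy import Matrix

A = Matrix([[1, -1,  1],
            [0,  1, -1],
            [1,  0,  0],
            [2, -5,  5]])

rho = A.rank()                        # dim R(T)
nu = len(A.nullspace())               # dim N(T)
print(rho, nu, rho + nu == A.cols)    # 2 1 True: rank + nullity = dim of the domain

# Corollary 4.3.7 for a square matrix: one-one <=> onto <=> invertible.
B = Matrix([[1,  1],
            [1, -1]])
print(len(B.nullspace()) == 0, B.rank() == 2, B.det() != 0)   # True True True
```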
then there exists a non-zero vector b = (b1 .5. 7. the system doesn’t have any solution. Let Row Rank (A) = r.] 4. 2. Let A be an m × n real matrix. ν(TA ) = dim({v ∈ Rn : TA (v) = 0}) = dim({v ∈ Rn : Av = 0}) = n − r. Rank (A) = k. 2. if dim(V ) < dim(W ) then T is onto. First observe that if the solution exists then b is a linear combination of the columns of A and the linear span of the columns of A give us R(T ).1 to show.3.4. . . ii. Exercise 4. bk such that the system Ax = bi for 1 ≤ i ≤ k is consistent. B3 ] = S[B2 . every linear transformation is represented by a matrix with entries from the scalars. . . . Let B1 = (u1 . v2 . the following has been discussed in detail: Given a finite dimensional vector space V of dimension n. wp ) be ordered bases of V. Theorem 4. Hint: Let x ∈ N (TA ) ∩ R(TA ). . For any v ∈ V. . recall the definition of the vector subspace M + N. vk+1 . . That is. vm ) and B3 = (w1 . un ). . [S ◦ T (u2 )]B3 . . W and Z. Deduce that ρ(T + S) ≤ ρ(T ) + ρ(S). vk . for any linear transformation T : V −→V. the matrix of T with respect to the ordered basis B. . . . . determine the dimension of the range space of T. B2 . . x = TA (y) = (TA ◦ TA )(y) = TA TA (y) = TA (x) = 0. . . . B3 ] = [[S ◦ T (u1 )]B3 . Consider the linear transformation TA : Rn −→ Rn . . . . .3. This theorem also enables us to understand why the matrix product is defined somewhat differently. That is. [S ◦ T (un )]B3 ]. .1 (Composition of Linear Transformations) Let V. {TA (vk+1 ). B3 .6. We start with the following important theorem. This implies TA (x) = 0 and x = TA (y) for some y ∈ Rn . Then (S ◦ T ) [B1 . In this section. . Let V be the complex vector space of all complex polynomials of degree at most n. . B3 ] T [B1 . u2 . Let A be an n × n real matrix with A2 = A. . For each k ≥ 1.82 CHAPTER 4. . to obtain the coordinates of v with respect to the ordered basis B. Hint: For two subspaces M. . . W and Z be finite dimensional vector spaces with ordered bases B1 . vk } be a basis of N (TA ). .4. . . defined by TA (v) = Av for all v ∈ Rn . B2 ]. Then by Rank-nullity Theorem 4. Given k distinct complex numbers z1 . . w2 . respectively. Then the composition map S ◦ T : V −→Z is a linear transformation and (S ◦ T ) [B1 . LINEAR TRANSFORMATIONS (a) Show that R(T + S) ⊂ R(T ) + R(S). . Also. 4. So. Prove that (a) TA ◦ TA = TA (use the condition A2 = A). . . . . let T : V −→W and S : W −→Z be linear transformations. z2 . .6 to prove ν(T + S) ≥ ν(T ) + ν(S) − n.3. B]. (c) Rn = N (TA ) + R(TA ). we relate the two n × n matrices T [B1 .4 Similarity of Matrices In the last few sections. we understand the matrix representation of T in terms of different bases B1 and B2 of V. B2 = (v1 . once an ordered basis of V is fixed. Proof. vn } of Rn . 6. we fixed an ordered basis B. respectively. TA (vn )} is a basis of R(TA ). . zk . B1 ] and T [B2 . . N of a vector space V. (b) N (TA ) ∩ R(TA ) = {0}. . Hint: Let {v1 . B2 ]. P (zk ) . 7. Also. we calculated the column vector [v]B . (b) Use the above and the rank-nullity Theorem 4. we got an n × n matrix T [B. Extend it to get a basis {v1 . we define a linear transformation T : V −→ Ck by T P (z) = P (z1 ). P (z2 ). . vk of N (S). So. vk } ⊂ N (T ◦ S) as T (0) = 0. . .4. k Therefore. . u } of N (T ◦ S). B2 ])jt S(vj ) k=1 (S[B2 . B2 ])jt k=1 = = p X p m X X ( (S[B2 . . We extend it to get a basis {v1 . . . v2 . [(S ◦ T ) (ut )]B3 = ((S[B2 . to complete the proof of the second inequality. . N (S) ⊂ N (T ◦ S). u ∈ N (T ◦ S). v2 . B3 ] = [(S ◦ T ) (u1 )]B3 . 
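The situation described here, n < m together with a right hand side b for which Ax = b has no solution, is visible in the ranks: the system is inconsistent exactly when b is not a linear combination of the columns of A, that is, when rank(A) < rank([A b]). A small illustrative sketch, sympy assumed:

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [2, 4],
            [0, 1]])
b = Matrix([1, 3, 0])          # chosen outside the column space of A

# rank(A) < rank([A | b]) means b is independent of the columns of A,
# so the system Ax = b has no solution.
print(A.rank(), A.row_join(b).rank())   # 2 3: the system is inconsistent
```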
B3 ])kj wk m X j=1 (T [B1 . . . {v1 . We first prove the second inequality. B2 ])jt )wk k=1 j=1 p X (S[B2 . . c2 . Then ν(T ) + ν(S) ≥ ν(T ◦ S) ≥ max{ν(T ). . Clearly. . B3 ] T [B1 . . This is true as R(S) ⊂ V. . ν(S)}. . vk . Then T ◦ S(v) = T (S(v)) = T (0) = 0. B3 ] T [B1 . c such that c1 S(u1 ) + c2 S(u2 ) + · · · + c S(u ) = 0. .4. αk such that ci ui = i=1 i=1 αi vi . . vk } be a basis of N (S). . . . . B3 ])kj (T [B1 . . . . u2 . the set {S(u1 ). . [(S ◦ T ) (un )]B3 = S[B2 . . S : V −→V be a linear transformations. S(u2 ). So. Then there exist non-zero scalars c1 . observe that ν(T ◦ S) ≥ ν(T ) ⇐⇒ n − ν(T ◦ S) ≤ n − ν(T ) ⇐⇒ ρ(T ◦ S) ≤ ρ(T ). We now prove the first inequality. (S ◦ T ) (ut ) = S(T (ut )) = S = m X j=1 83 „X m j=1 (T [B1 . v2 . Therefore. we need to show that R(T ◦ S) ⊂ R(T ). B2 ]. α2 . . Claim: The set {S(u1 ). . So. B2 ])kt wk . Proposition 4. This completes the proof. B3 ] T [B1 . . Let if possible the given set be linearly dependent. . B2 ])jt vj « = (T [B1 . . . . . (S ◦ T ) [B1 . . B2 ])pt )t . Then using the rank-nullity theorem. SIMILARITY OF MATRICES Now for 1 ≤ t ≤ n. the vector i=1 ci ui ∈ N (S) and is a linear combination of the basis vectors v1 . S(u )} is linearly independent subset of N (T ). . Suppose dim(V ) = n. u2 . there exist scalars α1 . . Proof. ν(S) ≤ ν(T ◦ S). S(u2 ). Hence. As u1 . B3 ] T [B1 . . u1 . . B2 ])1t . . Suppose that v ∈ N (S).4. . Or equivalently k ci ui + i=1 i=1 (−αi )vi = 0. . . .2 Let V be a finite dimensional vector space and let T. (S[B2 . S(u )} is a subset of N (T ). So. Let k = ν(S) and let {v1 . v2 . . a2i . un } and B2 = (v1 . 1). The reader is required to supply the proof (use Theorem 4. B]. Suppose x ∈ V with [x]B1 = (α1 . Theorem 4. Let B = (1. . [I(vn )]B1 ]   a11 a12 · · · a1n    a21 a22 · · · a2n   . an1 an2 · · · ann Thus. find the matrix T [B. j ≤ n such that n vi = I(vi ) = j=1 aji uj for all i. . . vn }. . α2 .4. . . Then [x]B1 = I[B2 . Let V be a vector space with dim(V ) = n. . . .4. vk .8 that if T is an invertible linear Transformation. . un ) and B2 = (v1 . . . 1. 1). α2 . for 1 ≤ i ≤ n. v2 . . . . the 0 vector is a non-trivial linear combination of the basis vectors v1 . Recall from Theorem 4. 2. . β2 . Then the matrix of T and T −1 are related by T [B1 . . . . Also let T : V −→V be an invertible linear transformation. Is T an invertible linear transformation? Give reasons. −1. find T −1 [B. B].4. . ν(T ◦ S) = k + ≤ ν(S) + ν(T ). S(u2 ). B1 ]. (1. . βn )t . Also. we can find scalars aij . Theorem 4. u1 . . . −1. the set {S(u1 ). . (1.  . . x3 ) be an ordered basis of P3 (R).4. B1 ] = = [[I(v1 )]B1 . . . · · · . . v2 . 1. −1. then T −1 : V −→V is a linear transformation defined by T −1 (u) = v whenever T (v) = u. . B1 ] [x]B2 . Since vi ∈ V. vn } be two ordered bases of V. 1. We now state an important result about inverse of a linear transformation. 1) = (1.1. .1. . u2 . Hence. B2 ]−1 = T −1 [B2 . .5 (Change of Basis Theorem) Let V be a finite dimensional vector space with ordered bases B1 = (u1 .1).. . . S(u )} is a linearly independent subset of N (T ) and so ν(T ) ≥ . αn )t and [x]B2 = (β1 . x2 . 1). αn )t and [x]B2 = (β1 . A contradiction. Recall from Definition 4. T (x2 ) = (1 + x)2 . [I(vi )]B1 = [vi ]B1 = (a1i . and B1 is a basis of V. . We now express each vector in B2 as a linear combination of the vectors from B1 . −1) be an ordered basis of R3 . 1. Exercise 4. 1 ≤ i ≤ n. Define T : P3 (R)−→P3 (R) by T (1) = 1. . 1. 1. 
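With the standard bases, Theorem 4.4.1 says that the matrix of a composition is the product of the individual matrices, taken in the order in which the maps are applied; the rank inequality used in the proof of Proposition 4.4.2 is visible in the same computation. A numerical sketch, assuming numpy and illustrative matrices:

```python
import numpy as np

# T : R^3 -> R^2 and S : R^2 -> R^4, both written as matrices in the standard bases.
T = np.array([[1, 0,  2],
              [0, 1, -1]])
S = np.array([[1,  1],
              [0,  2],
              [3,  0],
              [1, -1]])

x = np.array([1.0, 2.0, 3.0])

# (S o T)(x) computed two ways: apply T then S, or multiply by the single matrix S @ T.
print(np.allclose(S @ (T @ x), (S @ T) @ x))        # True

# The rank of the composition cannot exceed the rank of either factor.
print(np.linalg.matrix_rank(S @ T)
      <= min(np.linalg.matrix_rank(S), np.linalg.matrix_rank(T)))   # True
```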
 .5 that I : V −→V is the identity linear transformation defined by I(x) = x for every x ∈ V. we have proved the following result. LINEAR TRANSFORMATIONS That is. β2 . Hence. −1) = (1. [I(v2 )]B1 . . . 1.84 CHAPTER 4.4 For the linear transformations given below. . ani )t and I[B2 . and T (1.  . βn )t . T (x) = 1 + x. . 1 ≤ i. . . 1) = (1. . Let B1 = (u1 . v2 . T (1. u of N (T ◦ S). .3 (Inverse of a Linear Transformation) Let V be a finite dimensional vector space with ordered bases B1 and B2 . . Thus. and T (x3 ) = (1 + x)3 . . Suppose x ∈ V with [x]B1 = (α1 . Define T : R3 −→R3 by T (1. 1). . . u2 . . Prove that T is an invertible linear transformation.  . −1). . . . Let B = 1. u2 . . x. . . . Another Proof: Let B = [bij ] and C = [cij ].  . B1 ]. Also. the other expression is [T (x)]B2 = = = I[B1 .4.2. . B2 ]. n n (4.1) and (4. vn ) be two ordered bases of V. Let V be a finite dimensional vector space and let B1 and B2 be two ordered bases of V. B2 ] = I[B1 . B2 ] [T (x)]B1 (4.4.4. B1 ]−1 = I −1 [B1 . B1 ] I[B2 . I[B1 . We are now in a position to relate the two matrices T [B1 . B1 ] I[B2 . B1 ] I[B2 . we get I[B1 .4. Equivalently B = ACA−1 .  . Using Theorem 4. an1 αn  a12 a22 . = . B2 ] T [B1. u2 . So.4. ···  a1n  a2n  . un ) and B2 = (v1 . let A = [aij ] = I[B2 . Theorem 4. .2) Hence. Using Theorem 4.4.1) I[B1 .  ann  β1    β2   . B2 ] T [B1 . B1 ] [x]B2 = T [B2 .6 Let V be a finite dimensional vector space and let B1 = (u1 . the first expression is [T (x)]B2 = T [B2 . B1 ] and T [B2 .   . . . B1 ] = T [B2 . . B2 ] as matrix representations of T in bases B1 and B2 . SIMILARITY OF MATRICES Equivalently. be the matrix of the identity linear transformation with respect to the bases B1 and B2 . .  . B1 ] and C = T [B2 .5. .  . we also have [x]B2 = I[B1 .   . B2 ] [x]B1 .  . Proof.4. T (vj ) = = T (I(vj )) = T ( n X n X akj uk ) = k=1 akj ( k=1 n X =1 b ku ) = n n X X ( b k akj )u =1 k=1 n X akj T (uk ) k=1 . Then BA = AC. That is.1. B1 ] [x]B1 I[B1 . . A−1 BA = C or equivalently ACA−1 = B..2). Then for 1 ≤ i ≤ n. B1 ] [x]B2 .4. Since the result is true for all x ∈ V. B2 ]. Therefore. . Let T : V −→V be a linear transformation with B = T [B1 .4. . 85   a11 α1     α2   a21  . . B2 ] T [B1 . .   . B2 ] [x]B2 . B2 ]. using (4. for each j. v2 . B2 ] [x]B2 .3) T (ui ) = j=1 bji uj and T (vi ) = j=1 cji vj .  βn  Note: Observe that the identity linear transformation I : V −→V defined by I(x) = x for every x ∈ V is invertible and I[B2 . (4. we see that for every x ∈ V. 1 ≤ j ≤ n. Let T : V −→V be a linear transformation. an2 ··· ··· . B2 ] T [B1. For any x ∈ V . we represent [T (x)]B2 in two ways. CHAPTER 4.   n anj   bnk akj  n k=1 Also.4.4.8 (Similar Matrices) Two square matrices B and C of the same order are said to be similar if there exists a non-singular matrix P such that B = P CP −1 or equivalently BP = P C. Remark 4. The above observations lead to the following remark and the definition. B]. Therefore. So.4.  . its columns are linearly independent and hence we can take its columns as an ordered basis B1 . n n n n T (vj ) = k=1 n ckj vk = k=1 n ckj I(vk ) = k=1 ckj ( =1 a ku ) = and so ( =1 k=1 a k ckj )u [T (vj )]B1 This gives us T [B2.  . then A = TA [B.9 Observe that if A = T [B. B1 ] = BA. 1 ≤ j ≤ n.3) shows how the matrix representation of a linear transformation T changes if the ordered basis used to compute the matrix representation is changed. we get an n × n matrix T [B. Then note that B = TA [B1 .    . . the matrix I[B1 . 
B2 ] is called the B1 : B2 change of basis matrix.    . Then we have seen that if the standard basis of Rn is the ordered basis B. B].  . B1 ] = AC. we know that for any vector space we have infinite number of choices for an ordered basis.  . Then for each ordered basis B of V.86 and therefore.  . Recall the linear transformation TA : Rn −→Rn defined by TA (x) = Ax for all x ∈ Rn . Theorem 4. B1 ] = BA. the matrix of the linear transformation changes. Remark 4. B1 ]. for each j.  b1k akj     k=1 a1j   n      b2k akj   a2j    =  k=1  = B  .   . as we change an ordered basis. . similar matrices are just different matrix representations of a single linear transformation. . Also.4. We thus have AC = T [B2 . Now.  a1k ckj     k=1 c1j   n      a2k ckj   c2j    =  k=1  = A . Since P is an invertible matrix. B] then {S −1 AS : S is n × n invertible matrix } is the set of all matrices that are similar to the given matrix A.   n cnj   ank ckj  n k=1 Let V be a vector space with dim(V ) = n.   .6 tells us that all these matrices are related. Definition 4. Hence.  .4. LINEAR TRANSFORMATIONS [T (vj )]B1 Hence T [B2 .7 The identity (4. let A and B be two n × n matrices such that P −1 AP = B for some invertible matrix P. and let T : V −→V be a linear transformation. . Suppose T : R3 −→R3 is a linear transformation defined by T ((x. B2 ] = −2/5 2 9/5  .. 0).4. . . B2 ] = I[B1 . B1 ]. 42 −1 1 1 Find the matrices T [B1. Let V be an n-dimensional vector space and let T : V −→V be a linear transformation. 2. and [2 + x + x2 ]B1 = 1 · 1 + 0 · (1 + x) + 1 · (1 + x + x2 ) = (1. I[B2 . Then prove  0 1   T [B. 1  0 0   0 . (a) Then prove that there exists a vector u ∈ V such that the set {u. B2 ] T [B1 . SIMILARITY OF MATRICES Example 4. 0 that 0 0 1 0 0 0 0 .4. 1 + x. . B] = 0  . 1). T (u). . = = = 87 and B2 = 1 + x − x2 . B1 ] = I −1 [B2 . Check that. 0  2 −2 −2   T [B1 .11 1. z)) = (x + y. 0  −4/5 1 8/5   and T [B2 . B1 ] I[B2 . Consider P2 (R). 2. [1 + 2x + x2 ]B1 . B2 ] T [B1 . 1)t .  . Consider two bases B1 = (1. B2 ] = −2 4 5 . Therefore. . 1. B1 ] = 1 1 0 1   −2  4 . . .10 1. Then 0 0  T [B1 . B1 ] [[I(1 + x − x2 )]B1 . 1 + x + x2 Then [1 + x − x2 ]B1 = 0 · 1 + 2 · (1 + x) + (−1) · (1 + x + x2 ) = (0. B1 ] = I[B2 . y. T n−1 (u)). B2 ] and verify. . −1)t .4. [I(1 + 2x + x2 )]B1 . B2 ]. [I(2 + x + x2 )]B1 ] [[1 + x − x2 ]B1 . 2 1 0 . B2 ]. 0. . 1 + 2x + x2 . Also verify that T [B2 . B1 ] and T [B2 . B1 ] T [B1 . 0). (1. 0. B1 ] T [B2 . ··· . (1. B1 ] = T [B2 . Suppose T has the property that T n−1 = 0 but T n = 0. 2 + x + x2 . B1 ] I[B2 . ··· ··· ··· . 1. x + y + 2z. 8/5 0 −1/5  Find I[B1 . (b) Let B = (u. (1. I[B1 . . [1 + 2x + x2 ]B1 = (−1) · 1 + 1 · (1 + x) + 1 · (1 + x + x2 ) = (−1. [2 + x + x2 ]B1 ] 3 2 0 −1 1 7 6 1 05 . −1). 1) and B2 = (1. . 1) of R3 . (2.. y − z).4. 1)t . B1 ] I[B2 . 1. T n−1 (u)} is a basis of V. T (u). B1 ] I[B2 . . with ordered bases B1 = 1.  Exercise 4. 1. 2. 1. (a) Find the matrices T [B. (b) Find the matrix P such that P −1 T [B. 2). LINEAR TRANSFORMATIONS (c) Let A be an n × n matrix with the property that An−1 = 0 but An = 0. 3. 1. (0. 1). 2. (1. 2). x + y. 1. x − y − 3z. B] and T [B1 . 0. B1 ]. 1). z)) = (x + y + 2z. B] P = T [B1 . Let B be the standard basis and B1 = (1. Let T : R3 −→R3 be a linear transformation given by T ((x. Let B1 = (1. (1. (d) Find the change of basis matrix from the standard basis of R3 to B1 . x + y + z). 4. 0). 2. (c) Verify that P Q = I = QP. 
Let T : R3 −→R3 be a linear transformation given by T ((x. 0). 1. (a) Find the matrices T [B. −1. B1 ]. z)) = (x. (1. Then prove that A is similar to the matrix given above. (a) Find the change of basis matrix P from B1 to B2 . 1) be another ordered basis.88 CHAPTER 4. B] and T [B1 . 3. Let B be the standard basis and B1 = (1. 0). (0. (1. 1. 1). 2x + 3y + z). 1. What do you notice? . B] P = T [B1 . B1 ]. B1 ]. y. 2. 1. 6) be two ordered bases of R3 . (1. y. (b) Find the change of basis matrix Q from B2 to B1 . (b) Find the matrix P such that P −1 T [B. 2) be another ordered basis. (1. 3) and B2 = (1. 4. . respectively. y. w .3 The first two examples given below are called the standard inner product or the dot product on Rn and Cn . . and x · x ≥ 0 and x · x = 0 if and only if x = 0. v2 . the complex conjugate of u. . u ≥ 0 for all u ∈ V and equality holds if and only if u = 0. . un ) and v = (v1 .1. ) is called an inner product space. x · y = y · x. u.1. In this section. given two vectors x = (x1 . au + bv. . y = (y1 . . . u.1 (Inner Product) Let V (F) be a vector space over F. we know the inner product x · y = x1 y1 + x2 y2 . . Note that for any x. is a map.2 (Inner Product Space) Let V be a vector space with an inner product (V. y2 ). 2. x2 ). we define u. Definition 5. we start by defining a notion of inner product (dot product) in a vector space. denoted by . : V × V −→ F such that for u. any vector in the plane is a linear combination of the vectors i and j. An inner product over V (F). Definition 5. This helps us in finding out whether two vectors are at 90◦ or not. Then Example 5. u . w + b v. we are motivated to define an inner product on an arbitrary vector space. v . Thus. v = v. To do this. 5. . v = u1 v1 + u2 v2 + · · · + un vn = uvt . b ∈ F 1. vn ) of V. . Let V = Rn be the real vector space of dimension n. v.Chapter 5 Inner Product Spaces We had learned that given vectors i and j (which are at an angle of 90◦ ) in a plane. is an inner product. w = a u. Verify . and 3. w ∈ V and a. 1. u2 . z ∈ R2 and α ∈ R. we investigate a method by which any basis of a finite dimensional vector can be transferred to another basis in such a way that the vectors in the new basis are at an angle of 90◦ to each other. 89 .1. . this inner product satisfies the conditions x · (y + αz) = x · y + αx · z.1 Definition and Basic Properties In R2 .. Given two vectors u = (u1 . . in short denoted by ips. . . Note that λu + v. u2 . λu + v ≥ 0 for all λ ∈ F. 3. u |2 v 2− . Then it is easy to verify that the third condition is not valid whereas the first two conditions are valid. (y1 .4 Note that in parts 1 and 2 of Example 5. for λ = − u 2 0 ≤ = = = λu + v. Then it is easy to verify that the first 1 2 condition is not valid whereas the second and third conditions are valid. Remark 5. u + v 2 2 v. y = (x1 . . the inner products are uvt and uv∗ . v. Let V = Cn be a complex vector space of dimension n. we get In particular. we define three products that satisfy two conditions out of the three conditions for an inner product.1. Theorem 5. Then for any u. 3 3 (c) Define x.3. if u = 0. the positive square root. denoted u . In this example. Definition 5. are indeed inner products. u u Proof. u + v 2 2 2 u u u u 2 | v. y = (x1 . u v. y2 ) = x1 y1 .3. The equality holds if and only if the vectors u and v are linearly dependent.1. respectively. y = (x1 . (y1 . Let V = R2 and let A = 4 −1 . u v. y = 10x1 y1 + 3x1 y2 + 3x2 y1 + 2x2 y2 + x2 y3 + x3 y2 + x3 y3 is an inner product in R3 (R). x2 ). 
The next theorem gives the statement and a proof of this inequality. u 2 . u v. u. u and v are taken as column vectors and hence one uses the notation ut v or u∗ v. Exercise 5. λu + v λλ u 2 + λ u. Then it is easy to verify that the second condition is not valid whereas the first and third conditions are valid. INNER PRODUCT SPACES 2. Hence the three products are not inner products. Further. v = u1 v1 + u2 v2 + · · · + un vn = uv∗ is an inner product. . . x2 . let x = (x1 . v2 . then u u v = v. . 2 2 (b) Define x. Check that −1 2 Hint: Note that xAyt = 4x1 y1 − x1 y2 − x2 y1 + 2x2 y2 . v | ≤ u v . . 5. v + λ v. . vn ) in V. u u 2− u. Show that x. (y1 . y3 ) ∈ R3 . is an inner product. Consider the real vector space R2 . . by u = A very useful and a fundamental inequality concerning the inner product is due to Cauchy and Schwartz. In general. y2 ) = x2 + y1 + x2 + y2 . y = xAyt . Then for u = (u1 . v − v. . v ∈ V | u.. u .1. we define the length (norm) of u.1. x3 ).1.5 Verify that inner products defined in parts 3 and 4 of Example 5. If u = 0. x2 ). (a) Define x. . Define x. 4. This occurs because the vectors u and v are row vectors. y2 ) = x1 y1 + x2 y2 . un ) and v = (v1 .90 CHAPTER 5.7 (Cauchy-Schwartz inequality) Let V (F) be an inner product space. x2 ). u . then the inequality holds.6 (Length/Norm of a Vector) For u ∈ V.1. Let u = 0. check that u. y = (y1 . y2 . such that x = y = 1 and x. x. u v u. DEFINITION AND BASIC PROPERTIES Or. (b) Let u = (1. ej = 0 for 1 ≤ i = j ≤ n. . 3. Then prove that with respect to the standard inner product on Rn . . un } is called mutually orthogonal if ui .1. u . 0)t and e2 = (0. −1) = 0. v . for every real number u. y = 4x1 y1 − x1 y2 − x2 y1 + 2x2 y2 . e2 . y2 )t . 2. 2. v ≤ 1. v . Exercise 5. . y = yt Ax and solve a system of 3 c . in other words | v. Recall the following inner product on R2 : for x = (x1 . A set of vectors {u1 . . v ∈ V. b. 2). u |2 ≤ u and the proof of the inequality is over. such that u v cos θ = u. v is called the angle between the two u v 1. Define x. v = 0. Then for every u. u u 2 Definition 5. . 2) = (2. π] −→ [−1. (2. the vectors ei satisfy the following: (a) ei = 1 for 1 ≤ i ≤ n. y = 0.] a b and (1. we have −1 ≤ u. y ∈ R2 .8 (Angle between two vectors) Let V be a real vector space. 1)t . . We leave it for the reader to prove v = v. [Hint: Consider a symmetric matrix A = equations for the unknowns a. −1) = 1. u u u . Therefore.1. Find v ∈ R2 such that v. 1] is an one-one and onto function. by the Cauchy-Schwartz inequality. (b) ei . (a) Find the angle between the vectors e1 = (1. . 3. u 2 91 v 2 v. u = 0. there exists a unique θ. en } be the standard basis of Rn .9 1. Observe that if u = 0 then the equality holds if and only of λu + v = 0 for λ = − and v are linearly dependent. . b . The vectors u and v in V are said to be orthogonal if u. Find an inner product in R2 such that the following conditions hold: (1. c. u v We know that cos : [0. Let {e1 . uj = 0 for all 1 ≤ i = j ≤ n. 0)t . That is. (c) Find two vectors x. u2 . 0 ≤ θ ≤ π. The real number θ with 0 ≤ θ ≤ π and satisfying cos θ = vectors u and v in V.1.5. x2 )t and y = (y1 . Let x. A > 0 for all non-zero matrices A. 5. Define a map n . Let z1 . A = tr(AAt ) = i=1 (AAt )ii = i=1 j=1 aij aij = i=1 j=1 a2 ij and therefore. . . 2π]. y = (y1 . B = tr(AB t ). For different values of m and n. A . z2 . 2π]. y3 ) ∈ R3 . . (This is called the Parallelogram Law). . u2 . Then n n n n n A. V = 1 C[−2π. A. Then. 8. B = tr(AB t ) = tr( (AB t )t ) = tr(BAt ) = B. 6. Let x = (x1 . 
y2 . With respect to this inner product. B ∈ Mn×n (R) we define A. For A. . zn ∈ C. x2 . (This is called the polarisation identity). So. . (x and y form adjacent sides of a rhombus as the diagonals x + y and x − y are orthogonal). b2 . v ∈ V. . Observe that x. C = tr (A + B)C t = tr(AC t ) + tr(BC t ) = A. Then show that V is an inner product space with inner product −1 f (x)g(x)dx. Let A = (aij ). bn )t . a2 . . That is. C . . 7. Let V be the real vector space of all continuous functions with domain [−2π. y = 0 ⇐⇒ x − y (b) (c) 2 for every u. A. Show that x. x+y 2 + x−y 2 =2 x 2 + 2 y 2 . . −5. find the angle between the vectors (1. y ∈ Rn .10 i. B is an inner product on Mn×n (R). Show that the above defined map is indeed an inner product. n(|z1 |2 + |z2 |2 + · · · + |zn |2 ). (d) 4 x. 9. un ). Suppose the norm of a vector is given. Let V be a complex vector space with dim(V ) = n. Prove that u+v ≤ u + v This inequality is called the triangle inequality. C + B. the polarisation identity can be used to define an inner product. Use the Cauchy-Schwartz inequality to prove that |z1 + z2 + · · · + zn | ≤ When does the equality hold? 10. an )t and [v]B = (b1 . find the angle between the functions cos(mx) and sin(nx). . : V × V −→ C by u. = x 2
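For readers who want a numerical sanity check of Theorem 4.4.6 and Definition 4.4.8, here is a small sketch in Python/numpy. The map T is the one from Example 4.4.10 part 2; the basis B2 below is an arbitrary illustrative choice, not one taken from the text.

```python
import numpy as np

# T(x, y, z) = (x + y, x + y + 2z, y - z) written in the standard basis B1
B = np.array([[1, 1, 0],
              [1, 1, 2],
              [0, 1, -1]], dtype=float)

# Columns of A are the B2-vectors expressed in the standard basis, so A = I[B2, B1].
# (These three vectors are an arbitrary linearly independent choice for illustration.)
A = np.column_stack([(1, 1, 1), (1, -1, 1), (1, 1, 2)]).astype(float)

C = np.linalg.inv(A) @ B @ A          # C = T[B2, B2] = A^{-1} B A
print(np.round(C, 4))
print(np.allclose(B @ A, A @ C))      # BA = AC, so B and C are similar (Definition 4.4.8)
```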
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9718217253684998, "perplexity": 1331.660371828229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446218.95/warc/CC-MAIN-20151124205406-00182-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/spectral-brightness.111174/
# Spectral brightness

1. Feb 18, 2006

### sachi

We have radiation contained within a spectral band of width delta lambda such that (delta lambda)/lambda = 10^-4. The laser beam has a diameter of 50 micrometers and it has a divergence in both the horizontal and vertical directions of 10 milliradians. We need to calculate the spectral brightness of the beam - i.e. the power per unit area per unit solid angle per 0.01 percent bandwidth. I can calculate the power, and the area (this is the original area of the beam before it diverges) okay, but I'm a bit confused about the total solid angle. How do you convert a divergence in two perpendicular directions into a total solid angle? (I have a feeling it has something to do with multiplying them together.) Thanks very much for your help. Sachi

2. Feb 18, 2006

### Gokul43201 Staff Emeritus

Because the divergences are equal in both directions, the shape of the window formed on a distant plane will be a circle. If the total divergence angle is $\theta$ and this plane is at a distance R from the source, which is large compared to the width of the beam, then the area of the circle formed on the plane is $\pi R^2 \tan^2(\theta/2) \approx \pi R^2 \theta^2/4$. Dividing by $R^2$ gives you the total solid angle.
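For what it's worth, here is a quick Python check of that small-angle estimate, assuming the quoted 10 mrad is the full divergence angle; the thread does not give the power, so only the geometric factors of the brightness are computed.

```python
import math

theta = 10e-3                               # assumed full divergence angle, 10 mrad
d = 50e-6                                   # beam diameter, 50 micrometres

solid_angle = math.pi * theta**2 / 4        # pi R^2 tan^2(theta/2) / R^2  ~  pi theta^2 / 4
area = math.pi * (d / 2)**2                 # original beam cross-section

print(f"solid angle ~ {solid_angle:.2e} sr")    # ~ 7.9e-5 sr
print(f"beam area   ~ {area:.2e} m^2")          # ~ 2.0e-9 m^2
```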
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9330835342407227, "perplexity": 290.64257105740194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948520042.35/warc/CC-MAIN-20171212231544-20171213011544-00526.warc.gz"}
http://mathandmultimedia.com/category/college-mathematics/page/29/
## Introduction to Combinations

In my Introduction to Permutations post, we have learned that the number of permutations (or arrangements) of $n$ objects taken $n$ at a time, written as $P(n,n)$, is equal to $n! = n(n-1)(n-2) \cdots (3)(2)(1)$, and we have also learned that the number of permutations of $n$ objects taken $k$ at a time, written as $P(n,k)$, is equal to $\displaystyle\frac{n!}{(n - k)!}$.

In Figure 1, shown are the permutations of $4$ letters, A, B, C and D taken $4$ at a time. From the figure, we can see that there are indeed $4!= (4)(3)(2)(1) = 24$ such arrangements. In Figure 2, shown are the permutations of $4$ letters taken $3$ at a time, and we have shown that the number of permutations is equal to $\displaystyle\frac{4!}{ (4 - 3)!} = \frac{4!}{1!} = (4)(3)(2)(1) = 24$. In Figure 3, we have again listed the permutations of $4$ letters taken $2$ at a time, and have shown that the number of permutations is equal to $\displaystyle\frac{4!}{(4-2)!} =12$.

Figure 1 – Permutations of ABCD, taken 4 at a time.

Figure 2 – Permutations of ABCD, taken 3 at a time.

Figure 3 – Permutations of ABCD, taken 2 at a time.

If we talk about combinations, however, the arrangement of objects does not matter. For example, if we want to buy a milk shake and we are allowed to choose to combine any $3$ flavors from Apple, Banana, Cherry and Durian*, then the combination of Apple, Banana and Cherry is the same as the combination Cherry, Apple, Banana.

Try to list all the possible combinations of $3$ flavors taken from $4$ before proceeding.

If we choose to shorten the names of the fruits by selecting the first letter of their names, we only have $4$ possible combinations for the question above: ABC, ABD, ACD, and BCD. Notice that these are the only possible combinations. Also, observe that if we list the permutations of ABC, we have ABC, ACB, BAC, BCA, CAB and CBA. This means that in permutations, we have counted each combination of $3$ flavors from $4$ flavors $6$ times (or $3!$ times) instead of once. In other words, a combination is just like a subgroup of a group. For instance, if we want to find the number of subgroups containing $3$ objects taken from $4$ objects (or the combination of $4$ objects taken $3$ at a time), it is the same as asking "how many possible groups of $3$ objects can be taken from $4$ objects?" In Figure 4, all the possible subgroups of $3$ letters taken from $4$ letters are displayed by the orange border. You also would have realized that the number of permutations is an overcounting of the number of combinations.

Figure 4 – The combinations of 4 objects taken 3 at a time is the same as the number of subgroups of 3 objects taken from 4 objects.

In Figure 2, ABC, ACB, BAC, BCA, CAB and CBA are permutations of Apple, Banana and Cherry. For each subgroup of $3$, we realized that we counted it $3! = 6$ times. So, to get the number of combinations, we divide our number of permutations $P(4,3)$ by the number of permutations of our subgroup $P(3,3) = 3!$. Therefore, we can say that the number of combinations of $4$ objects taken $3$ at a time is equal to

$\displaystyle\frac{P(4,3)}{P(3,3)} = \frac{\frac{4!}{(4-3)!}}{3!} = \frac {4!}{(4-3)! 3!}$

In general, to get the number of combinations of $n$ objects taken $k$ at a time, we have to divide the number of permutations $P(n,k)$ by the number of permutations of the subgroup $P(k,k)$:

$\displaystyle\frac{P(n,k)}{P(k,k)} = \frac{\frac{n!}{(n-k)!}}{k!} = \frac{n!}{(n-k)!\,k!}$
The combination of $n$ objects taken $k$ at a time is usually denoted by $C(n,k)$ or $\displaystyle n \choose k$.

_____________________________________________________________

*Durian is a fruit which can be found in the Philippines. It looks like a jackfruit.

## Proof Tutorial 1: Introduction to Mathematical Proofs

Introduction

Routine problems in mathematics usually require one or many answers. If we are asked to find the smallest of the three consecutive integers whose sum is 18, then our answer would be 5. If we are asked to find the equation of a line passing through (2,3), we can have many answers.

Proofs, however, are different. They require us to think more and to reason with valid arguments. They require us to be explicit and logical. They require us to convince our readers and, most of all, ourselves. Unless a proof problem is already given, finding mathematical statements to prove requires us to see patterns, generalize, and make conjectures about them. The problem stated above about consecutive integers does not require us to reason much or generalize at all.

The success of proof writing requires intuition, mathematical maturity, and experience. Contrary to mathematical proofs written in books, the ideas behind arriving at a proof are not "cut and dried" and elegant. Mathematicians do not reveal the process they go through, or the ideas behind their proofs. This is also a skill that mathematicians and persons who are good in mathematics possess: they are able to read proofs. The skill of reading proofs may be achieved by learning how to write them.

Proving in higher mathematics, on the other hand, requires formal training. For instance, we have to know how to use logical connectives like and, or, not, and must understand how conditional and biconditional connectives work. Basic set theory concepts are also important. Moreover, we also have to learn proof strategies like direct proof and proof by contradiction, to name some. For now, we will not be discussing these things. Most of the proofs in basic mathematics only require a little intuition and good reasoning.

In the tutorial below, I tried to recreate (amateurishly) the process of how mathematicians see patterns, arrive at a conjecture, and how they prove their conjectures. Of course, in reality, the problems mathematicians encounter are a lot harder. In fact, some of the hardest problems take hundreds of years to be solved. For example, nobody managed to prove Fermat's Last Theorem for more than 300 years, and the mathematician who finally proved it worked on it for eight years. The proof that we are about to do below is very elementary. For now, we will highlight the process and not the difficulty. The titles of the processes below are not necessarily in order.

Recognizing Patterns and Making Conjectures

Before mathematicians prove theorems, they usually first see patterns. This happens when they read books, solve problems, or prove other theorems. For example, what do we see when we add two even integers? Let's add some: 2 + 8 = 10, -24 + 6 = -18, and -4 + -8 = -12. We can easily see that if we add two even integers, then their sum is always even. From here, we might be tempted to say that if we add any two even integers, then their sum would always be even. In mathematics, this kind of statement or hypothesis is called a conjecture: an educated and reasonable guess based on patterns observed.

Rewriting our guess, we have

Conjecture: The sum of two even integers is always even.
If we want to disprove a conjecture, we only need one counterexample - an example that can make the conjecture false. (Can you think of one?) Note that we only need one counterexample to disprove a conjecture. If we want to prove it, however, we might be tempted to pair a few more integers and say that "oh, their sum is even, so it must be true". No matter how many integers we pair, if we can't exhaust all the pairs, then it cannot be considered as a proof. When we say the sum of two even integers above, we mean ALL even integers. Of course, there is no way that we can list all pairs of even integers since there are infinitely many of them.

Generalizing Patterns

Since it is impossible to enumerate all pairs of even integers, we need a representation, an algebraic expression in particular, that will represent any even integer. If we can find this expression, then all the even integers would be represented. This process is called generalizing. Like what we have done above, we generalize by representing all members of the set by a single expression. In our case, the members of our set are all even integers.

From the definition, we know that all even integers are divisible by 2. That means that if m is an even integer, then, when we divide m by 2, we can find a quotient which is also an integer. For instance, since 18 is an even integer, we are sure that there exists an integer such that 18 divided by 2 is equal to that integer. In general, if the quotient of m/2 is q, then it follows that m/2 = q for any even integer m. Multiplying both sides of the equation by 2, we have m = 2q. That means that if m is an even integer, then there exists an integer q such that m = 2q. Hence, we may represent any even integer m with 2q for some integer q*. Note that q here is a generalized number, which means that an even integer can also be represented by 2x, 2y, 2z or any variable, with the condition that they are integers.

Connecting the ideas

Proving is making logical and relevant statements from definitions, facts, assumptions and other theorems to come to a desired conclusion. Before coming up with an elegant proof, mathematicians usually have scratch work, connecting their ideas to arrive at what they want to prove.

Scratch work

From the statement above, we have shown that for any even integer m, there exists an integer q such that m = 2q. That means that if we can show that the sum of two even integers is of the form 2q (or that the sum is divisible by 2), then we can be sure that it is always an even integer. Since we need two integers, we let m and n be the two integers that we will add. Since both of them are even integers, we can represent them as 2q and 2r respectively, for some integers q and r. Adding both of them, we have m + n = 2q + 2r = 2(q + r). Now, q + r is an integer since q is an integer and r is an integer from our definition above. This means that 2(q + r) is of the form 2x for some integer x. This means that m + n is of the form 2x for some integer x. Therefore, m + n is even.

Writing (elegantly) the final proof

Here, we write our proof in a shorter and more elegant way. Conjectures that are proven are called theorems. So let us write the proof of our first theorem.

Theorem 1: The sum of two even integers is always even.

Proof. Let m, n be even integers. Then m = 2q for some integer q and n = 2r for some integer r. Now, m + n = 2q + 2r = 2(q + r). Since q + r is an integer, clearly, 2(q + r) = m + n is divisible by 2. Therefore, the sum of two even integers is even.
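As a quick aside, the point that finitely many checks build confidence but never constitute a proof can be illustrated with a short, hypothetical Python check of Theorem 1 over a limited range (the algebraic proof above is what actually covers all cases):

```python
# Brute-force check of Theorem 1 on a small range of even integers.
# Passing this test is NOT a proof; it only fails to find a counterexample.
evens = range(-20, 21, 2)
assert all((m + n) % 2 == 0 for m in evens for n in evens)
print("Sum of two even integers was even for every pair tested.")
```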
Most proofs are written in a concise way, leaving some details for the reader to fill in. For example, the statement "Since q + r is an integer" did not really state the reason why this is so. This is stated in our scratch work, but not in the proof.

Going Further

If m is an even integer, then m - 1 and m + 1 are odd integers. Since m = 2r, then 2r - 1 and 2r + 1 are also odd integers. In our example below, we will use 2r + 1 to prove that the sum of two odd integers is always even. As an exercise, use 2r - 1 in your proof.

Theorem 2: The sum of two odd integers is always even.

Proof. Let p, q be odd integers. Then p = 2r + 1 and q = 2s + 1 for some integers r and s. Now, adding, we have p + q = 2r + 1 + 2s + 1 = 2r + 2s + 2 = 2(r + s + 1). Since r + s + 1 is an integer, 2(r + s + 1) is divisible by 2. Hence, p + q is divisible by 2. Therefore, the sum of two odd integers is even.

Math and Hard Work

Being good in math requires hard work. Andrew Wiles worked on Fermat's Last Theorem for seven years, having given up several times thinking that it was impossible. In 1995, he finally thought he had proved it, and presented it at a conference. A month later, his reviewer thought that there was a part of the proof which was vague (or wrong), so he had to review his work, and he found out that there was a part which was actually wrong. He almost gave up. He worked for more than a year to correct the error. Now, he has carved his place in history.

*In technical language, the phrase "for some" is equivalent to the phrase "there exists".

Exercises:

1. Prove that the sum of an even number and an odd number is always odd.
2. Prove that the difference of two odd integers is always even.
3. Prove that the product of two even integers is always even.
4. Prove that the product of two odd integers is always odd.
5. Prove that the product of an even number and an odd number is always even.

## An Intuitive Introduction to Limits

The limit is one of the most fundamental concepts of calculus. The foundation of calculus was not entirely solid during the time of Leibniz and Newton, but later developments on the concept, particularly the $\epsilon-\delta$ definition by Cauchy, Weierstrass and other mathematicians, established its firm foundation. In the discussion below, I shall introduce the concept of limits intuitively as it appears in common problems. For a more rigorous discussion, you can read the article titled "An extensive explanation about the $\epsilon-\delta$ definition of limits".

Circumference and Limits

If we are going to approximate the circumference of a circle using the perimeter of an inscribed polygon, even without computation, we can observe that as the number of sides of the polygon increases, the approximation gets better. In fact, we can make the perimeter of the polygon as close as we please to the circumference of the circle by choosing a sufficiently large number of sides. Notice that no matter how large the number of sides our polygon has, its perimeter will never exceed or equal the circumference of the circle.

Figure 1 – As the number of sides of the polygon increases, its perimeter gets closer to the circumference of the circle.

In more technical terms, we say that the limit of the perimeter of the inscribed polygon as the number of its sides increases without bound (or as the number of sides of the inscribed polygon approaches infinity) is equal to the circumference of the circle.
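The inscribed-polygon idea can also be made concrete with a short computation. The sketch below (my own illustration, not from the original post) uses the perimeter of a regular n-gon inscribed in a circle of radius 1, P_n = n · 2 sin(π/n):

```python
import math

r = 1.0
circumference = 2 * math.pi * r
for n in (6, 12, 96, 1000, 100000):
    perimeter = n * 2 * r * math.sin(math.pi / n)   # regular n-gon inscribed in the circle
    print(f"n = {n:>6}:  P_n = {perimeter:.6f}   (C = {circumference:.6f})")
# P_n grows toward C but never reaches it, illustrating that the limit of P_n is C.
```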
In symbols, if we let $n$ be the number of sides of the inscribed polygon, $P_n$ be the perimeter of a polygon with $n$ sides, and $C$ be the circumference of the circle, we can say that the limit of $P_n$ as $n \to \infty$ is equal to $C$. Compactly, we can write $\lim_{n \to \infty} P_n = C$.

Functions and Limits

Consider the function $f(x) = \frac{1}{x}$ where $x$ is a natural number. Calculating the values of the function using the first 20 natural numbers and plotting the points in the $xy$-plane, we arrive at the table and the graph in Figure 2.

Figure 2 – As x increases, f(x) gets closer and closer to 0.

First, we see that as the value of $x$ increases, the value of $f(x)$ decreases and approaches $0$. Furthermore, we can make the value of $f(x)$ as close to $0$ as we please by choosing a sufficiently large $x$. We also notice that no matter how large the value of $x$ is, the value of $f(x)$ will never reach $0$. Hence, we say that the limit of $f(x) = \frac{1}{x}$ as the value of $x$ increases without bound is equal to $0$, or equivalently the limit of $f(x) = \frac{1}{x}$ as $x$ approaches infinity is equal to $0$. In symbols, we write $f(x) \to 0$ as $x \to \infty$, or more compactly, $\lim_{x \to \infty} \frac{1}{x} = 0$.

Tangent line and Limits

Recall that the slope of a line is its "rise" over its "run". The formula for the slope $m$ of a line is $m = \displaystyle\frac{y_2 - y_1}{x_2 - x_1}$, given two points with coordinates $(x_1,y_1)$ and $(x_2,y_2)$. One of the famous ancient problems in mathematics was the tangent problem, which is getting the slope of a line tangent to a function at a point. In Figure 3, line $n$ is tangent to the function $f$ at point $P$.

Figure 3 – Line n is tangent to the function f at point P.

If we are going to compute the slope of the tangent line, we have a big problem, because we only have one point, and the slope formula requires two points. To deal with this problem, we select a point $Q$ on the graph of $f$, draw the secant line $PQ$ and move $Q$ along the graph of $f$ towards $P$. Notice that as $Q$ approaches $P$ (shown as $Q'$ and $Q''$), the secant line gets closer and closer to the tangent line. This is the same as saying that the slope of the secant line is getting closer and closer to the slope of the tangent line. Similarly, we can say that as the distance between the x-coordinates of $P$ and $Q$ gets closer and closer to $0$, the slope of the secant line gets closer and closer to the slope of the tangent line.

Figure 4 – As point Q approaches P, the slope of the secant line is getting closer and closer to the slope of the tangent line.

If we let $h$ be the distance between the x-coordinates of $P$ and $Q$, $m_s$ be the slope of the secant line $PQ$ and $m_t$ be the slope of the tangent line, we can say that the limit of the slope of the secant line as $h$ approaches $0$ is equal to the slope of the tangent line. Concisely, we can write $\lim_{h \to 0}m_s = m_t$.

Area and Limits

Another ancient problem is about finding the area under a curve as shown in the leftmost graph in Figure 5. In ancient times, finding the area of a curved plane was considered impossible.

Figure 5 – As the number of rectangles increases, the sum of the areas of the rectangles is getting closer and closer to the area of the bounded plane under the curve.
We can approximate the area in the first graph in Figure 5 by constructing rectangles under the curve such that one of the corners of each rectangle touches the graph, as shown in the second and third graphs in Figure 5. We can see that as we increase the number of rectangles, the better our approximation of the area under the curve becomes. We can also see that no matter how large the number of rectangles is, the sum of their areas will never exceed (or equal) the area of the plane under the curve. Hence, we say that as the number of rectangles increases without bound, the sum of the areas of the rectangles approaches the area under the curve; or, the limit of the sum of the areas of the rectangles as the number of rectangles approaches infinity is equal to the area of the plane under the curve.

If we let $A$ be the area under the curve and $S_n$ be the sum of the areas of $n$ rectangles, then we can say that the limit of $S_n$ as n approaches infinity is equal to $A$. Concisely, we can write $\lim_{n \to\infty} S_n = A$.

Numbers and Limits

We end with a more familiar example usually found in books. What if we want to find the limit of $2x + 1$ as $x$ approaches $3$? To answer the question, we must find the value of $2x + 1$ where $x$ is very close to $3$. Those values of $x$ would be numbers that are very close to $3$ - some slightly greater than $3$ and some slightly less than $3$. Placing the values in a table, we have:

Figure 6 – As x approaches 3, 2x + 1 approaches 7.

From the table, we can clearly see that as the value of $x$ approaches $3$, the value of $2x + 1$ approaches $7$. Concisely, we can write $\lim_{x \to 3} 2x + 1 =7$.

Mr. Jayson Dyer, author of The Number Warrior, has another excellent explanation of the concept of limits in his blog post Five intuitive approaches to teaching the infinitely small.

******

The area under the curve problem and the tangent problem are the ancient problems which gave birth to calculus. Calculus was independently invented by Gottfried Leibniz and Isaac Newton in the 17th century.
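The area example can also be checked numerically. The sketch below (my own illustration; the function f(x) = x^2 on [0, 1] and its exact area 1/3 are not taken from the post) sums rectangle areas and watches them approach the exact area:

```python
def riemann_sum(n):
    """Sum of the areas of n left-endpoint rectangles under f(x) = x**2 on [0, 1]."""
    width = 1.0 / n
    return sum((i * width) ** 2 * width for i in range(n))

for n in (10, 100, 1000, 100000):
    print(n, riemann_sum(n))   # approaches 1/3 as n grows, i.e. the limit of S_n is A
```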
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 129, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8762174248695374, "perplexity": 171.86642583952514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304515.74/warc/CC-MAIN-20220124054039-20220124084039-00517.warc.gz"}
http://physics.stackexchange.com/questions/9846/determine-the-point-at-which-the-electric-field-is-equal-to-zero/9854
# Determine the point at which the electric field is equal to zero?

Two point charges, -2.5 micro coulombs and 6 micro coulombs, are separated by a distance of 1m (with the -2.5 charge on the left and 6 on the right). What is the point where the electric field is zero?

This seems exceptionally easy, but I can't figure it out. I can calculate the answer if both charges are negative/positive easily, but the fact that they're different is confusing me. Thanks!

- Hint: What would be the direction of the electric field, say $100m$ away? –  Qmechanic May 14 '11 at 11:18
- As a more general problem solving technique, have you drawn a diagram? –  Nic May 17 '11 at 9:51

Because of symmetry, the point at which the forces cancel must be on the same line as the two charges -- it's effectively a 1-dimensional problem. If a small positive test charge were between the two charges, it would be "pushed" or "pulled" in the same direction by both charges, so it cannot be there. If a small positive test charge were to the right of the two charges, it would always be "pushed" harder to the right by the bigger charge than it would be "pulled" to the left by the smaller charge, so it cannot be there. A small positive charge that is placed to the left will be pulled to the right by the nearby negative charge and pushed to the left by the more distant, but larger, positive charge, so the balance point must be somewhere here.

So, take a positive test charge $q$ at a distance $x$ to the left of the negative charge, and take forces acting to the right to be positive. Then there is a force proportional to $2.5q/x^2$ due to the nearby negative charge and a force proportional to $-6q/(1+x)^2$ due to the more distant positive charge. Adding the two together and asking the sum to be zero gives you an equation that you can convert into a quadratic equation. You only want the positive solution, right? Why?

What's the key to getting this kind of thing right? Decide what direction you're going to call positive and stick to it ruthlessly. Introducing a test charge and choosing it to be positive helps here by making the question less abstract. Draw a rough diagram to help you keep which is which straight in your head. I had to be careful to make sure that choosing a positive charge instead of a negative charge doesn't make a difference (check why not, why it helped me draw a diagram, and what the diagram would look like if I were to use a negative charge).

This homework question is too easy for Physics SE. Ask a different kind of Question next time. It would have been better if you had set out carefully and in more detail in your Question why the approach you took didn't work out. There's a good chance that if you had set it out carefully you would have seen why you had a problem with the calculation and the answers you were getting.

-

Following Peter Morgan's eqn, I worked out the answer to be ABOUT (I did not do this on a spreadsheet) (13/7) meters on a straight line away from the smaller charge and opposite the larger charge. That is, the smaller charge lies on a straight line between the zero point and the larger charge.

One way to visualize what is happening: Think of the earth as having a gravity field of -2.5/(x^2), where x is your height above the earth, measured in AU's (the Earth-Sun distance). So, the earth is going to repel you, and you are floating out in space. Now assume that the Sun is pulling you toward it with a force of 6/(z^2), where z is your distance from the Sun.
If you fall off the Earth in any direction other than straight away from the Sun, you will be pulled into the Sun. But if you are lucky enough to fall off the Earth directly away from the Sun, you will reach a point where the Earth's repulsion of you is exactly countered by the Sun's pull. It would be a Lagrangian point of sorts (the unstable sort, actually). That point would be a little less than (13/7) AUs away from the Earth, and a little less than (20/7) AUs away from the Sun.

-
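A quick numerical check of both answers (a sketch using the zero-field condition from Peter Morgan's setup; the Coulomb constant and the test charge cancel out of the ratio):

```python
from math import sqrt

q1, q2 = 2.5, 6.0   # charge magnitudes in microcoulombs
# With x measured to the left of the -2.5 uC charge (in metres), the field is zero when
#   q1 / x**2 == q2 / (1 + x)**2
# Taking square roots of the positive quantities:  (1 + x) / x == sqrt(q2 / q1)
x = 1.0 / (sqrt(q2 / q1) - 1.0)
print(f"x ~ {x:.3f} m to the left of the -2.5 uC charge")   # ~ 1.82 m, close to the 13/7 ~ 1.86 estimate
```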
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.886162281036377, "perplexity": 249.1742249568998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990611.52/warc/CC-MAIN-20150728002310-00132-ip-10-236-191-2.ec2.internal.warc.gz"}
http://quant.stackexchange.com/questions/1182/how-do-i-get-the-average-transition-matrix-for-three-consecutive-years
# How do I get the average transition matrix for three consecutive years? I have a one year transition matrix for three consecutive years. Multiplying these three matrices together yields the three year transition matrix. I want to obtain the average transition matrix for the three years (average^3 = 3yrtransition) What is the procedure to be used? Is this possible at all? (I kind of realize that there might be multiple solutions to this problem due to possible multiple paths to achieve the end state). - If the transition matrix has distinct eigenvalues, you can diagonalize it and then take the cube root of the diagonal. E.g., you can compute the SVD, verify that the eigenvalues are distinct, take the cube root of the diagonal matrix, then re-multiply it together. - Johann, is your procedure possible if the transition matrix has complex eigenvalues? –  morsecode May 20 '11 at 0:02 For most, yes, it should. It will work for any normal matrix. –  Johann Hibschman May 23 '11 at 19:42
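A sketch of that diagonalisation recipe in Python/numpy (the 3x3 matrix below is invented purely for illustration; an eigendecomposition is used here rather than an SVD, since it is the eigenvalues and eigenvectors that are needed):

```python
import numpy as np

# P3 is a hypothetical three-year transition matrix; we want A with A @ A @ A ~ P3.
P3 = np.array([[0.70, 0.20, 0.10],
               [0.15, 0.70, 0.15],
               [0.05, 0.25, 0.70]])

w, V = np.linalg.eig(P3)     # P3 = V diag(w) V^-1; the eigenvalues happen to be distinct here
A = (V @ np.diag(w.astype(complex) ** (1 / 3)) @ np.linalg.inv(V)).real

print(np.round(A @ A @ A - P3, 12))   # ~ 0, so A is one cube root of P3
print(A.sum(axis=1))                  # rows still sum to ~1, but entries of a matrix cube
                                      # root need not be non-negative in general
```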
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9544572234153748, "perplexity": 434.5700964396452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802771164.85/warc/CC-MAIN-20141217075251-00118-ip-10-231-17-201.ec2.internal.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/18049/grovers-algorithm-outputting-random-incorrect-results
# Grover's algorithm outputting random/incorrect results I am sorry if this question is trivial, I am relatively new to QC. Here is my grover's circuit, as you can see it is displaying that it has a 100% probability of measuring 100 however when placed into the quantum simulator (IBMQ statevector) here is the result: I am stumped as to why this is the case. it clearly displays 100% output probability which means the algorithm is working right? Or am I missing something fundamental? Edit: OKAY? I fixed it somehow? by getting rid of the bottom H gate? I am even more confused than I was before. Why is this working, how does this work? • This link might be helpful, I think. Jun 20 at 12:00 As @narip already mentioned in the comments, the statevector simulator of the IQX (your top picture) shows that one state has 100% measure probability since you added measurements and thus the state collapses. You should only add measurements for shot-based readouts, not if you do statevector simulations. Regarding your question about the Hadamard gate: I think there are actually some Hadamards missing! Based on your circuit I assume the oracle/boolean function you want to implement is $$f(x_1, x_2) = x_1 \text{ and } x_2$$. The Toffoli gate with surrounding X gates you implemented indeed flips a target qubit if both qubit 1 and 2 are 0. But keep in mind that Grover's oracle must do phaseflip and not a bitflip! To convert your oracle you should add two Hadamards around the target qubit, to be ┌───┐ ┌───┐ x_1: ┤ X ├──■──┤ X ├ ├───┤ │ ├───┤ x_2: ┤ X ├──■──┤ X ├ ├───┤┌─┴─┐├───┤ target: ┤ H ├┤ X ├┤ H ├ └───┘└───┘└───┘ And on top of that, you should have an initial layer of Hadamards, to initialize in an equal superposition. In total your circuit would be something like ┌───┐┌───┐ ┌───┐┌───┐┌───┐ ┌───┐┌───┐ x_1: ┤ H ├┤ X ├──■──┤ X ├┤ H ├┤ X ├──■──┤ X ├┤ H ├──── ├───┤├───┤ │ ├───┤├───┤├───┤ │ ├───┤├───┤ x_2: ┤ H ├┤ X ├──■──┤ X ├┤ H ├┤ X ├──■──┤ X ├┤ H ├──── ├───┤├───┤┌─┴─┐├───┤└───┘└───┘ └───┘└───┘ target: ┤ H ├┤ H ├┤ X ├┤ H ├───────────────────────────── └───┘└───┘└───┘└───┘
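For reference, here is a minimal Qiskit sketch of the phase-flip oracle described above (qubit indices and the register layout are my own choices, not taken from the screenshots):

```python
from qiskit import QuantumCircuit

# Oracle marking |00> on qubits 0 and 1, built from the bit-flip Toffoli by sandwiching
# the target qubit (qubit 2) between Hadamards: H X H = Z, so the three qubits together
# act as a controlled-controlled-Z on the X-conjugated controls.
oracle = QuantumCircuit(3, name="oracle")
oracle.x([0, 1])       # map the marked state |00> on the controls to |11>
oracle.h(2)
oracle.ccx(0, 1, 2)    # Toffoli fires only for the marked state
oracle.h(2)
oracle.x([0, 1])       # undo the X gates on the controls

print(oracle.draw())   # with the target prepared in |1>, the marked state picks up a -1 phase
```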
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8666545748710632, "perplexity": 3738.1737309793198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362230.18/warc/CC-MAIN-20211202145130-20211202175130-00498.warc.gz"}
http://openstudy.com/updates/4f23776ee4b0a2a9c2662f39
## anonymous 4 years ago

A 32.4 kg wagon is towed up a hill inclined at 18.4◦ with respect to the horizontal. The tow rope is parallel to the incline and has a tension of 112 N in it. Assume that the wagon starts from rest at the bottom of the hill, and neglect friction. The acceleration of gravity is 9.8 m/s^2. How fast is the wagon going after moving 43.8 m up the hill? Answer in units of m/s.

1. TuringTest

[diagram]

2. TuringTest

[diagram] By paying attention to the geometry of our situation we see that the force of gravity acting against the tension of the rope is $|F_g|\sin\theta$. The total force acting on the object along the direction of the incline will be proportional to its acceleration, which we can find by summing the forces along the side of the hill: $F=T-|F_g|\sin\theta$, so $ma=T-mg\sin\theta$. Assuming 'up the hill' is the positive direction along the incline, we can calculate the final velocity from the kinematic equation $v_f^2=v_o^2+2ad$. In our case $v_o=0$ because the wagon starts from rest.
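Plugging the numbers from the question into those equations (a quick sketch of my own, not part of the original thread):

```python
import math

m, T, g = 32.4, 112.0, 9.8          # mass (kg), rope tension (N), gravity (m/s^2)
theta = math.radians(18.4)          # incline angle
d = 43.8                            # distance travelled up the hill (m)

a = (T - m * g * math.sin(theta)) / m    # from  m a = T - m g sin(theta)
v = math.sqrt(2 * a * d)                 # from  v_f^2 = v_o^2 + 2 a d  with v_o = 0
print(f"a = {a:.3f} m/s^2,  v = {v:.2f} m/s")   # roughly 0.36 m/s^2 and 5.6 m/s
```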
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9559946060180664, "perplexity": 2084.976523804216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00510-ip-10-171-10-70.ec2.internal.warc.gz"}
https://en.wikibooks.org/wiki/Linear_Algebra/Jordan_Canonical_Form
# Linear Algebra/Jordan Canonical Form Linear Algebra ← Polynomials of Maps and Matrices Jordan Canonical Form Topic: Geometry of Eigenvalues → This subsection moves from the canonical form for nilpotent matrices to the one for all matrices. We have shown that if a map is nilpotent then all of its eigenvalues are zero. We can now prove the converse. Lemma 2.1 A linear transformation whose only eigenvalue is zero is nilpotent. Proof If a transformation ${\displaystyle t}$ on an ${\displaystyle n}$-dimensional space has only the single eigenvalue of zero then its characteristic polynomial is ${\displaystyle x^{n}}$. The Cayley-Hamilton Theorem says that a map satisfies its characteristic polynimial so ${\displaystyle t^{n}}$ is the zero map. Thus ${\displaystyle t}$ is nilpotent. We have a canonical form for nilpotent matrices, that is, for each matrix whose single eigenvalue is zero: each such matrix is similar to one that is all zeroes except for blocks of subdiagonal ones. (To make this representation unique we can fix some arrangement of the blocks, say, from longest to shortest.) We next extend this to all single-eigenvalue matrices. Observe that if ${\displaystyle t}$'s only eigenvalue is ${\displaystyle \lambda }$ then ${\displaystyle t-\lambda }$'s only eigenvalue is ${\displaystyle 0}$ because ${\displaystyle t({\vec {v}})=\lambda {\vec {v}}}$ if and only if ${\displaystyle (t-\lambda )\,({\vec {v}})=0\cdot {\vec {v}}}$. The natural way to extend the results for nilpotent matrices is to represent ${\displaystyle t-\lambda }$ in the canonical form ${\displaystyle N}$, and try to use that to get a simple representation ${\displaystyle T}$ for ${\displaystyle t}$. The next result says that this try works. Lemma 2.2 If the matrices ${\displaystyle T-\lambda I}$ and ${\displaystyle N}$ are similar then ${\displaystyle T}$ and ${\displaystyle N+\lambda I}$ are also similar, via the same change of basis matrices. Proof With ${\displaystyle N=P(T-\lambda I)P^{-1}=PTP^{-1}-P(\lambda I)P^{-1}}$ we have ${\displaystyle N=PTP^{-1}-PP^{-1}(\lambda I)}$ since the diagonal matrix ${\displaystyle \lambda I}$ commutes with anything, and so ${\displaystyle N=PTP^{-1}-\lambda I}$. Therefore ${\displaystyle N+\lambda I=PTP^{-1}}$, as required. Example 2.3 The characteristic polynomial of ${\displaystyle T={\begin{pmatrix}2&-1\\1&4\end{pmatrix}}}$ is ${\displaystyle (x-3)^{2}}$ and so ${\displaystyle T}$ has only the single eigenvalue ${\displaystyle 3}$. Thus for ${\displaystyle T-3I={\begin{pmatrix}-1&-1\\1&1\end{pmatrix}}}$ the only eigenvalue is ${\displaystyle 0}$, and ${\displaystyle T-3I}$ is nilpotent. The null spaces are routine to find; to ease this computation we take ${\displaystyle T}$ to represent the transformation ${\displaystyle t:\mathbb {C} ^{2}\to \mathbb {C} ^{2}}$ with respect to the standard basis (we shall maintain this convention for the rest of the chapter). ${\displaystyle {\mathcal {N}}(t-3)=\{{\begin{pmatrix}-y\\y\end{pmatrix}}\,{\big |}\,y\in \mathbb {C} \}\qquad {\mathcal {N}}((t-3)^{2})=\mathbb {C} ^{2}}$ The dimensions of these null spaces show that the action of an associated map ${\displaystyle t-3}$ on a string basis is ${\displaystyle {\vec {\beta }}_{1}\mapsto {\vec {\beta }}_{2}\mapsto {\vec {0}}}$. 
Thus, the canonical form for ${\displaystyle t-3}$ with one choice for a string basis is ${\displaystyle {\rm {Rep}}_{B,B}(t-3)=N={\begin{pmatrix}0&0\\1&0\end{pmatrix}}\qquad B=\langle {\begin{pmatrix}1\\1\end{pmatrix}},{\begin{pmatrix}-2\\2\end{pmatrix}}\rangle }$ and by Lemma 2.2, ${\displaystyle T}$ is similar to this matrix. ${\displaystyle {\rm {Rep}}_{t}(B,B)=N+3I={\begin{pmatrix}3&0\\1&3\end{pmatrix}}}$ We can produce the similarity computation. Recall from the Nilpotence section how to find the change of basis matrices ${\displaystyle P}$ and ${\displaystyle P^{-1}}$ to express ${\displaystyle N}$ as ${\displaystyle P(T-3I)P^{-1}}$. The similarity diagram describes that to move from the lower left to the upper left we multiply by ${\displaystyle P^{-1}={\bigl (}{\rm {Rep}}_{{\mathcal {E}}_{2},B}({\mbox{id}}){\bigr )}^{-1}={\rm {Rep}}_{B,{\mathcal {E}}_{2}}({\mbox{id}})={\begin{pmatrix}1&-2\\1&2\end{pmatrix}}}$ and to move from the upper right to the lower right we multiply by this matrix. ${\displaystyle P={\begin{pmatrix}1&-2\\1&2\end{pmatrix}}^{-1}={\begin{pmatrix}1/2&1/2\\-1/4&1/4\end{pmatrix}}}$ So the similarity is expressed by ${\displaystyle {\begin{pmatrix}3&0\\1&3\end{pmatrix}}={\begin{pmatrix}1/2&1/2\\-1/4&1/4\end{pmatrix}}{\begin{pmatrix}2&-1\\1&4\end{pmatrix}}{\begin{pmatrix}1&-2\\1&2\end{pmatrix}}}$ which is easily checked. Example 2.4 This matrix has characteristic polynomial ${\displaystyle (x-4)^{4}}$ ${\displaystyle T={\begin{pmatrix}4&1&0&-1\\0&3&0&1\\0&0&4&0\\1&0&0&5\end{pmatrix}}}$ and so has the single eigenvalue ${\displaystyle 4}$. The nullities of ${\displaystyle t-4}$ are: the null space of ${\displaystyle t-4}$ has dimension two, the null space of ${\displaystyle (t-4)^{2}}$ has dimension three, and the null space of ${\displaystyle (t-4)^{3}}$ has dimension four. Thus, ${\displaystyle t-4}$ has the action on a string basis of ${\displaystyle {\vec {\beta }}_{1}\mapsto {\vec {\beta }}_{2}\mapsto {\vec {\beta }}_{3}\mapsto {\vec {0}}}$ and ${\displaystyle {\vec {\beta }}_{4}\mapsto {\vec {0}}}$. This gives the canonical form ${\displaystyle N}$ for ${\displaystyle t-4}$, which in turn gives the form for ${\displaystyle t}$. ${\displaystyle N+4I={\begin{pmatrix}4&0&0&0\\1&4&0&0\\0&1&4&0\\0&0&0&4\end{pmatrix}}}$ An array that is all zeroes, except for some number ${\displaystyle \lambda }$ down the diagonal and blocks of subdiagonal ones, is a Jordan block. We have shown that Jordan block matrices are canonical representatives of the similarity classes of single-eigenvalue matrices. Example 2.5 The ${\displaystyle 3\!\times \!3}$ matrices whose only eigenvalue is ${\displaystyle 1/2}$ separate into three similarity classes. The three classes have these canonical representatives. ${\displaystyle {\begin{pmatrix}1/2&0&0\\0&1/2&0\\0&0&1/2\end{pmatrix}}\qquad {\begin{pmatrix}1/2&0&0\\1&1/2&0\\0&0&1/2\end{pmatrix}}\qquad {\begin{pmatrix}1/2&0&0\\1&1/2&0\\0&1&1/2\end{pmatrix}}}$ In particular, this matrix ${\displaystyle {\begin{pmatrix}1/2&0&0\\0&1/2&0\\0&1&1/2\end{pmatrix}}}$ belongs to the similarity class represented by the middle one, because we have adopted the convention of ordering the blocks of subdiagonal ones from the longest block to the shortest. We will now finish the program of this chapter by extending this work to cover maps and matrices with multiple eigenvalues. 
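Before moving on, the similarity computation from Example 2.3 (described there as "easily checked") can indeed be verified numerically; the following numpy sketch is an added illustration, not part of the original text:

```python
import numpy as np

T = np.array([[2., -1.],
              [1.,  4.]])
P     = np.array([[ 0.5 , 0.5 ],
                  [-0.25, 0.25]])
P_inv = np.array([[1., -2.],
                  [1.,  2.]])

print(P @ T @ P_inv)                      # [[3, 0], [1, 3]], the block found in Example 2.3
print(np.allclose(P @ P_inv, np.eye(2)))  # confirms the two change of basis matrices are inverses
```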
The best possibility for general maps and matrices would be if we could break them into a part involving their first eigenvalue ${\displaystyle \lambda _{1}}$ (which we represent using its Jordan block), a part with ${\displaystyle \lambda _{2}}$, etc. This ideal is in fact what happens. For any transformation ${\displaystyle t:V\to V}$, we shall break the space ${\displaystyle V}$ into the direct sum of a part on which ${\displaystyle t-\lambda _{1}}$ is nilpotent, plus a part on which ${\displaystyle t-\lambda _{2}}$ is nilpotent, etc. More precisely, we shall take three steps to get to this section's major theorem and the third step shows that ${\displaystyle V={\mathcal {N}}_{\infty }(t-\lambda _{1})\oplus \cdots \oplus {\mathcal {N}}_{\infty }(t-\lambda _{\ell })}$ where ${\displaystyle \lambda _{1},\ldots ,\lambda _{\ell }}$ are ${\displaystyle t}$'s eigenvalues. Suppose that ${\displaystyle t:V\to V}$ is a linear transformation. Note that the restriction[1] of ${\displaystyle t}$ to a subspace ${\displaystyle M}$ need not be a linear transformation on ${\displaystyle M}$ because there may be an ${\displaystyle {\vec {m}}\in M}$ with ${\displaystyle t({\vec {m}})\not \in M}$. To ensure that the restriction of a transformation to a "part" of a space is a transformation on the part, we need the next condition. Definition 2.6 Let ${\displaystyle t:V\to V}$ be a transformation. A subspace ${\displaystyle M}$ is ${\displaystyle t}$ invariant if whenever ${\displaystyle {\vec {m}}\in M}$ then ${\displaystyle t({\vec {m}})\in M}$ (shorter: ${\displaystyle t(M)\subseteq M}$). Two examples are that the generalized null space ${\displaystyle {\mathcal {N}}_{\infty }(t)}$ and the generalized range space ${\displaystyle {\mathcal {R}}_{\infty }(t)}$ of any transformation ${\displaystyle t}$ are invariant. For the generalized null space, if ${\displaystyle {\vec {v}}\in {\mathcal {N}}_{\infty }(t)}$ then ${\displaystyle t^{n}({\vec {v}})={\vec {0}}}$ where ${\displaystyle n}$ is the dimension of the underlying space and so ${\displaystyle t({\vec {v}})\in {\mathcal {N}}_{\infty }(t)}$ because ${\displaystyle t^{n}(\,t({\vec {v}})\,)}$ is zero also. For the generalized range space, if ${\displaystyle {\vec {v}}\in {\mathcal {R}}_{\infty }(t)}$ then ${\displaystyle {\vec {v}}=t^{n}({\vec {w}})}$ for some ${\displaystyle {\vec {w}}}$ and then ${\displaystyle t({\vec {v}})=t^{n+1}({\vec {w}})=t^{n}(\,t({\vec {w}})\,)}$ shows that ${\displaystyle t({\vec {v}})}$ is also a member of ${\displaystyle {\mathcal {R}}_{\infty }(t)}$. Thus the spaces ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})}$ and ${\displaystyle {\mathcal {R}}_{\infty }(t-\lambda _{i})}$ are ${\displaystyle t-\lambda _{i}}$ invariant. Observe also that ${\displaystyle t-\lambda _{i}}$ is nilpotent on ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})}$ because, simply, if ${\displaystyle {\vec {v}}}$ has the property that some power of ${\displaystyle t-\lambda _{i}}$ maps it to zero— that is, if it is in the generalized null space— then some power of ${\displaystyle t-\lambda _{i}}$ maps it to zero. The generalized null space ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})}$ is a "part" of the space on which the action of ${\displaystyle t-\lambda _{i}}$ is easy to understand. The next result is the first of our three steps. It establishes that ${\displaystyle t-\lambda _{j}}$ leaves ${\displaystyle t-\lambda _{i}}$'s part unchanged.
Lemma 2.7 A subspace is ${\displaystyle t}$ invariant if and only if it is ${\displaystyle t-\lambda }$ invariant for any scalar ${\displaystyle \lambda }$. In particular, where ${\displaystyle \lambda _{i}}$ is an eigenvalue of a linear transformation ${\displaystyle t}$, then for any other eigenvalue ${\displaystyle \lambda _{j}}$, the spaces ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})}$ and ${\displaystyle {\mathcal {R}}_{\infty }(t-\lambda _{i})}$ are ${\displaystyle t-\lambda _{j}}$ invariant. Proof For the first sentence we check the two implications of the "if and only if" separately. One of them is easy: if the subspace is ${\displaystyle t-\lambda }$ invariant for any ${\displaystyle \lambda }$ then taking ${\displaystyle \lambda =0}$ shows that it is ${\displaystyle t}$ invariant. For the other implication suppose that the subspace is ${\displaystyle t}$ invariant, so that if ${\displaystyle {\vec {m}}\in M}$ then ${\displaystyle t({\vec {m}})\in M}$, and let ${\displaystyle \lambda }$ be any scalar. The subspace ${\displaystyle M}$ is closed under linear combinations and so if ${\displaystyle t({\vec {m}})\in M}$ then ${\displaystyle t({\vec {m}})-\lambda {\vec {m}}\in M}$. Thus if ${\displaystyle {\vec {m}}\in M}$ then ${\displaystyle (t-\lambda )\,({\vec {m}})\in M}$, as required. The second sentence follows straight from the first. Because the two spaces are ${\displaystyle t-\lambda _{i}}$ invariant, they are therefore ${\displaystyle t}$ invariant. From this, applying the first sentence again, we conclude that they are also ${\displaystyle t-\lambda _{j}}$ invariant. The second step of the three that we will take to prove this section's major result makes use of an additional property of ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})}$ and ${\displaystyle {\mathcal {R}}_{\infty }(t-\lambda _{i})}$, that they are complementary. Recall that if a space is the direct sum of two others ${\displaystyle V={\mathcal {N}}\oplus {\mathcal {R}}}$ then any vector ${\displaystyle {\vec {v}}}$ in the space breaks into two parts ${\displaystyle {\vec {v}}={\vec {n}}+{\vec {r}}}$ where ${\displaystyle {\vec {n}}\in {\mathcal {N}}}$ and ${\displaystyle {\vec {r}}\in {\mathcal {R}}}$, and recall also that if ${\displaystyle B_{\mathcal {N}}}$ and ${\displaystyle B_{\mathcal {R}}}$ are bases for ${\displaystyle {\mathcal {N}}}$ and ${\displaystyle {\mathcal {R}}}$ then the concatenation ${\displaystyle B_{\mathcal {N}}\!{\mathbin {{}^{\frown }}}\!B_{\mathcal {R}}}$ is linearly independent (and so the two parts of ${\displaystyle {\vec {v}}}$ do not "overlap"). The next result says that for any subspaces ${\displaystyle {\mathcal {N}}}$ and ${\displaystyle {\mathcal {R}}}$ that are complementary as well as ${\displaystyle t}$ invariant, the action of ${\displaystyle t}$ on ${\displaystyle {\vec {v}}}$ breaks into the "non-overlapping" actions of ${\displaystyle t}$ on ${\displaystyle {\vec {n}}}$ and on ${\displaystyle {\vec {r}}}$. Lemma 2.8 Let ${\displaystyle t:V\to V}$ be a transformation and let ${\displaystyle {\mathcal {N}}}$ and ${\displaystyle {\mathcal {R}}}$ be ${\displaystyle t}$ invariant complementary subspaces of ${\displaystyle V}$. 
Then ${\displaystyle t}$ can be represented by a matrix with blocks of square submatrices ${\displaystyle T_{1}}$ and ${\displaystyle T_{2}}$ ${\displaystyle \left({\begin{array}{c|c}T_{1}&Z_{2}\\\hline Z_{1}&T_{2}\end{array}}\right){\begin{array}{ll}\}\dim({\mathcal {N}}){\text{-many rows}}\\\}\dim({\mathcal {R}}){\text{-many rows}}\end{array}}}$ where ${\displaystyle Z_{1}}$ and ${\displaystyle Z_{2}}$ are blocks of zeroes. Proof Since the two subspaces are complementary, the concatenation of a basis for ${\displaystyle {\mathcal {N}}}$ and a basis for ${\displaystyle {\mathcal {R}}}$ makes a basis ${\displaystyle B=\langle {\vec {\nu }}_{1},\dots ,{\vec {\nu }}_{p},{\vec {\mu }}_{1},\ldots ,{\vec {\mu }}_{q}\rangle }$ for ${\displaystyle V}$. We shall show that the matrix ${\displaystyle {\rm {Rep}}_{B,B}(t)=\left({\begin{array}{c|c|c}\vdots &&\vdots \\{\rm {Rep}}_{B}(t({\vec {\nu }}_{1}))&\cdots &{\rm {Rep}}_{B}(t({\vec {\mu }}_{q}))\\\vdots &&\vdots \\\end{array}}\right)}$ has the desired form. Any vector ${\displaystyle {\vec {v}}\in V}$ is in ${\displaystyle {\mathcal {N}}}$ if and only if its final ${\displaystyle q}$ components are zeroes when it is represented with respect to ${\displaystyle B}$. As ${\displaystyle {\mathcal {N}}}$ is ${\displaystyle t}$ invariant, each of the vectors ${\displaystyle {\rm {Rep}}_{B}(t({\vec {\nu }}_{1}))}$, ..., ${\displaystyle {\rm {Rep}}_{B}(t({\vec {\nu }}_{p}))}$ has that form. Hence the lower left of ${\displaystyle {\rm {Rep}}_{B,B}(t)}$ is all zeroes. The argument for the upper right is similar. To see that ${\displaystyle t}$ has been decomposed into its action on the parts, observe that the restrictions of ${\displaystyle t}$ to the subspaces ${\displaystyle {\mathcal {N}}}$ and ${\displaystyle {\mathcal {R}}}$ are represented, with respect to the obvious bases, by the matrices ${\displaystyle T_{1}}$ and ${\displaystyle T_{2}}$. So, with subspaces that are invariant and complementary, we can split the problem of examining a linear transformation into two lower-dimensional subproblems. The next result illustrates this decomposition into blocks. Lemma 2.9 If ${\displaystyle T}$ is a matrix with square submatrices ${\displaystyle T_{1}}$ and ${\displaystyle T_{2}}$ ${\displaystyle T=\left({\begin{array}{c|c}T_{1}&Z_{2}\\\hline Z_{1}&T_{2}\end{array}}\right)}$ where the ${\displaystyle Z}$'s are blocks of zeroes, then ${\displaystyle \left|T\right|=\left|T_{1}\right|\cdot \left|T_{2}\right|}$. Proof Suppose that ${\displaystyle T}$ is ${\displaystyle n\!\times \!n}$, that ${\displaystyle T_{1}}$ is ${\displaystyle p\!\times \!p}$, and that ${\displaystyle T_{2}}$ is ${\displaystyle q\!\times \!q}$. In the permutation formula for the determinant ${\displaystyle \left|T\right|=\sum _{{\text{permutations }}\phi }t_{1,\phi (1)}t_{2,\phi (2)}\cdots t_{n,\phi (n)}\operatorname {sgn}(\phi )}$ each term comes from a rearrangement of the column numbers ${\displaystyle 1,\dots ,n}$ into a new order ${\displaystyle \phi (1),\dots ,\phi (n)}$. The upper right block ${\displaystyle Z_{2}}$ is all zeroes, so if a ${\displaystyle \phi }$ has at least one of ${\displaystyle p+1,\dots ,n}$ among its first ${\displaystyle p}$ column numbers ${\displaystyle \phi (1),\dots ,\phi (p)}$ then the term arising from ${\displaystyle \phi }$ is zero, e.g., if ${\displaystyle \phi (1)=n}$ then ${\displaystyle t_{1,\phi (1)}t_{2,\phi (2)}\dots t_{n,\phi (n)}=0\cdot t_{2,\phi (2)}\dots t_{n,\phi (n)}=0}$.
So the above formula reduces to a sum over all permutations with two halves: any significant ${\displaystyle \phi }$ is the composition of a ${\displaystyle \phi _{1}}$ that rearranges only ${\displaystyle 1,\dots ,p}$ and a ${\displaystyle \phi _{2}}$ that rearranges only ${\displaystyle p+1,\dots ,p+q}$. Now, the distributive law (and the fact that the signum of a composition is the product of the signums) gives that this ${\displaystyle \left|T_{1}\right|\cdot \left|T_{2}\right|={\bigg (}\sum _{\begin{array}{c}\\[-19pt]\scriptstyle {\text{perms }}\phi _{1}\\[-5pt]\scriptstyle {\text{of }}1,\dots ,p\end{array}}\!\!\!t_{1,\phi _{1}(1)}\cdots t_{p,\phi _{1}(p)}\operatorname {sgn}(\phi _{1}){\bigg )}}$ ${\displaystyle \cdot {\bigg (}\sum _{\begin{array}{c}\\[-19pt]\scriptstyle {\text{perms }}\phi _{2}\\[-5pt]\scriptstyle {\text{of }}p+1,\dots ,p+q\end{array}}\!\!\!t_{p+1,\phi _{2}(p+1)}\cdots t_{p+q,\phi _{2}(p+q)}\operatorname {sgn}(\phi _{2}){\bigg )}}$ equals ${\displaystyle \left|T\right|=\sum _{{\text{significant }}\phi }t_{1,\phi (1)}t_{2,\phi (2)}\cdots t_{n,\phi (n)}\operatorname {sgn}(\phi )}$. Example 2.10 ${\displaystyle {\begin{vmatrix}2&0&0&0\\1&2&0&0\\0&0&3&0\\0&0&0&3\end{vmatrix}}={\begin{vmatrix}2&0\\1&2\end{vmatrix}}\cdot {\begin{vmatrix}3&0\\0&3\end{vmatrix}}=36}$ From Lemma 2.9 we conclude that if two subspaces are complementary and ${\displaystyle t}$ invariant then ${\displaystyle t}$ is nonsingular if and only if its restrictions to both subspaces are nonsingular. Now for the promised third, final, step to the main result. Lemma 2.11 If a linear transformation ${\displaystyle t:V\to V}$ has the characteristic polynomial ${\displaystyle (x-\lambda _{1})^{p_{1}}\dots (x-\lambda _{\ell })^{p_{\ell }}}$ then (1) ${\displaystyle V={\mathcal {N}}_{\infty }(t-\lambda _{1})\oplus \cdots \oplus {\mathcal {N}}_{\infty }(t-\lambda _{\ell })}$ and (2) ${\displaystyle \dim({\mathcal {N}}_{\infty }(t-\lambda _{i}))=p_{i}}$. Proof Because ${\displaystyle \dim(V)}$ is the degree ${\displaystyle p_{1}+\cdots +p_{\ell }}$ of the characteristic polynomial, to establish statement (1) we need only show that statement (2) holds and that ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})\cap {\mathcal {N}}_{\infty }(t-\lambda _{j})}$ is trivial whenever ${\displaystyle i\neq j}$. For the latter, by Lemma 2.7, both ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})}$ and ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{j})}$ are ${\displaystyle t}$ invariant. Notice that an intersection of ${\displaystyle t}$ invariant subspaces is ${\displaystyle t}$ invariant and so the restriction of ${\displaystyle t}$ to ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})\cap {\mathcal {N}}_{\infty }(t-\lambda _{j})}$ is a linear transformation. But both ${\displaystyle t-\lambda _{i}}$ and ${\displaystyle t-\lambda _{j}}$ are nilpotent on this subspace and so if ${\displaystyle t}$ has any eigenvalues on the intersection then its "only" eigenvalue is both ${\displaystyle \lambda _{i}}$ and ${\displaystyle \lambda _{j}}$. That cannot be, so this restriction has no eigenvalues: ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})\cap {\mathcal {N}}_{\infty }(t-\lambda _{j})}$ is trivial (Lemma V.II.3.10 shows that the only transformation without any eigenvalues is on the trivial space). To prove statement (2), fix the index ${\displaystyle i}$. 
Decompose ${\displaystyle V}$ as ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})\oplus {\mathcal {R}}_{\infty }(t-\lambda _{i})}$ and apply Lemma 2.8. ${\displaystyle T=\left({\begin{array}{c|c}T_{1}&Z_{2}\\\hline Z_{1}&T_{2}\end{array}}\right){\begin{array}{ll}\}\dim(\,{\mathcal {N}}_{\infty }(t-\lambda _{i})\,){\text{-many rows}}\\\}\dim(\,{\mathcal {R}}_{\infty }(t-\lambda _{i})\,){\text{-many rows}}\end{array}}}$ By Lemma 2.9, ${\displaystyle \left|T-xI\right|=\left|T_{1}-xI\right|\cdot \left|T_{2}-xI\right|}$. By the uniqueness clause of the Fundamental Theorem of Arithmetic, the determinants of the blocks have the same factors as the characteristic polynomial ${\displaystyle \left|T_{1}-xI\right|=(x-\lambda _{1})^{q_{1}}\dots (x-\lambda _{\ell })^{q_{\ell }}}$ and ${\displaystyle \left|T_{2}-xI\right|=(x-\lambda _{1})^{r_{1}}\dots (x-\lambda _{\ell })^{r_{\ell }}}$, and the sum of the powers of these factors is the power of the factor in the characteristic polynomial: ${\displaystyle q_{1}+r_{1}=p_{1}}$, ..., ${\displaystyle q_{\ell }+r_{\ell }=p_{\ell }}$. Statement (2) will be proved if we show that ${\displaystyle q_{i}=p_{i}}$ and that ${\displaystyle q_{j}=0}$ for all ${\displaystyle j\neq i}$, because then the degree of the polynomial ${\displaystyle \left|T_{1}-xI\right|}$— which equals the dimension of the generalized null space— is as required. For that, first, as the restriction of ${\displaystyle t-\lambda _{i}}$ to ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})}$ is nilpotent on that space, the only eigenvalue of ${\displaystyle t}$ on it is ${\displaystyle \lambda _{i}}$. Thus the characteristic polynomial of ${\displaystyle t}$ on ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})}$ is ${\displaystyle \left|T_{1}-xI\right|=(x-\lambda _{i})^{q_{i}}}$. And thus ${\displaystyle q_{j}=0}$ for all ${\displaystyle j\neq i}$. Now consider the restriction of ${\displaystyle t}$ to ${\displaystyle {\mathcal {R}}_{\infty }(t-\lambda _{i})}$. By Note V.III.2.2, the map ${\displaystyle t-\lambda _{i}}$ is nonsingular on ${\displaystyle {\mathcal {R}}_{\infty }(t-\lambda _{i})}$ and so ${\displaystyle \lambda _{i}}$ is not an eigenvalue of ${\displaystyle t}$ on that subspace. Therefore, ${\displaystyle x-\lambda _{i}}$ is not a factor of ${\displaystyle \left|T_{2}-xI\right|}$, and so ${\displaystyle q_{i}=p_{i}}$. Our major result just translates those steps into matrix terms. Theorem 2.12 Any square matrix is similar to one in Jordan form ${\displaystyle {\begin{pmatrix}J_{\lambda _{1}}&&{\textit {--zeroes--}}\\&J_{\lambda _{2}}\\&&\ddots \\&&&J_{\lambda _{\ell -1}}\\&&{\textit {--zeroes--}}&&J_{\lambda _{\ell }}\end{pmatrix}}}$ where each ${\displaystyle J_{\lambda }}$ is the Jordan block associated with the eigenvalue ${\displaystyle \lambda }$ of the original matrix (that is, is all zeroes except for ${\displaystyle \lambda }$'s down the diagonal and some subdiagonal ones). Proof Given an ${\displaystyle n\!\times \!n}$ matrix ${\displaystyle T}$, consider the linear map ${\displaystyle t:\mathbb {C} ^{n}\to \mathbb {C} ^{n}}$ that it represents with respect to the standard bases. Use the prior lemma to write ${\displaystyle \mathbb {C} ^{n}={\mathcal {N}}_{\infty }(t-\lambda _{1})\oplus \cdots \oplus {\mathcal {N}}_{\infty }(t-\lambda _{\ell })}$ where ${\displaystyle \lambda _{1},\ldots ,\lambda _{\ell }}$ are the eigenvalues of ${\displaystyle t}$.
Because each ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})}$ is ${\displaystyle t}$ invariant, Lemma 2.8 and the prior lemma show that ${\displaystyle t}$ is represented by a matrix that is all zeroes except for square blocks along the diagonal. To make those blocks into Jordan blocks, pick each ${\displaystyle B_{\lambda _{i}}}$ to be a string basis for the action of ${\displaystyle t-\lambda _{i}}$ on ${\displaystyle {\mathcal {N}}_{\infty }(t-\lambda _{i})}$. Jordan form is a canonical form for similarity classes of square matrices, provided that we make it unique by arranging the Jordan blocks from least eigenvalue to greatest and then arranging the subdiagonal ${\displaystyle 1}$ blocks inside each Jordan block from longest to shortest. Example 2.13 This matrix has the characteristic polynomial ${\displaystyle (x-2)^{2}(x-6)}$. ${\displaystyle T={\begin{pmatrix}2&0&1\\0&6&2\\0&0&2\end{pmatrix}}}$ We will handle the eigenvalues ${\displaystyle 2}$ and ${\displaystyle 6}$ separately. Computation of the powers, and the null spaces and nullities, of ${\displaystyle T-2I}$ is routine. (Recall from Example 2.3 the convention of taking ${\displaystyle T}$ to represent a transformation, here ${\displaystyle t:\mathbb {C} ^{3}\to \mathbb {C} ^{3}}$, with respect to the standard basis.) ${\displaystyle {\begin{array}{r|ccc}{\textit {power }}p&(T-2I)^{p}&{\mathcal {N}}((t-2)^{p})&{\textit {nullity}}\\\hline 1&{\begin{pmatrix}0&0&1\\0&4&2\\0&0&0\end{pmatrix}}&\{{\begin{pmatrix}x\\0\\0\end{pmatrix}}\,{\big |}\,x\in \mathbb {C} \}&1\\2&{\begin{pmatrix}0&0&0\\0&16&8\\0&0&0\end{pmatrix}}&\{{\begin{pmatrix}x\\-z/2\\z\end{pmatrix}}\,{\big |}\,x,z\in \mathbb {C} \}&2\\3&{\begin{pmatrix}0&0&0\\0&64&32\\0&0&0\end{pmatrix}}&{\textit {--same--}}&{\textit {---}}\end{array}}}$ So the generalized null space ${\displaystyle {\mathcal {N}}_{\infty }(t-2)}$ has dimension two. We've noted that the restriction of ${\displaystyle t-2}$ is nilpotent on this subspace. From the way that the nullities grow we know that the action of ${\displaystyle t-2}$ on a string basis is ${\displaystyle {\vec {\beta }}_{1}\mapsto {\vec {\beta }}_{2}\mapsto {\vec {0}}}$. Thus the restriction can be represented in the canonical form ${\displaystyle N_{2}={\begin{pmatrix}0&0\\1&0\end{pmatrix}}={\rm {Rep}}_{B_{2},B_{2}}(t-2)\qquad B_{2}=\langle {\begin{pmatrix}1\\1\\-2\end{pmatrix}},{\begin{pmatrix}-2\\0\\0\end{pmatrix}}\rangle }$ where many choices of basis are possible. Consequently, the action of the restriction of ${\displaystyle t}$ to ${\displaystyle {\mathcal {N}}_{\infty }(t-2)}$ is represented by this matrix. ${\displaystyle J_{2}=N_{2}+2I={\rm {Rep}}_{B_{2},B_{2}}(t)={\begin{pmatrix}2&0\\1&2\end{pmatrix}}}$ The second eigenvalue's computations are easier. Because the power of ${\displaystyle x-6}$ in the characteristic polynomial is one, the restriction of ${\displaystyle t-6}$ to ${\displaystyle {\mathcal {N}}_{\infty }(t-6)}$ must be nilpotent of index one. Its action on a string basis must be ${\displaystyle {\vec {\beta }}_{3}\mapsto {\vec {0}}}$ and since it is the zero map, its canonical form ${\displaystyle N_{6}}$ is the ${\displaystyle 1\!\times \!1}$ zero matrix. Consequently, the canonical form ${\displaystyle J_{6}}$ for the action of ${\displaystyle t}$ on ${\displaystyle {\mathcal {N}}_{\infty }(t-6)}$ is the ${\displaystyle 1\!\times \!1}$ matrix with the single entry ${\displaystyle 6}$. For the basis we can use any nonzero vector from the generalized null space.
${\displaystyle B_{6}=\langle {\begin{pmatrix}0\\1\\0\end{pmatrix}}\rangle }$ Taken together, these two give that the Jordan form of ${\displaystyle T}$ is ${\displaystyle {\rm {Rep}}_{B,B}(t)={\begin{pmatrix}2&0&0\\1&2&0\\0&0&6\end{pmatrix}}}$ where ${\displaystyle B}$ is the concatenation of ${\displaystyle B_{2}}$ and ${\displaystyle B_{6}}$. Example 2.14 Contrast the prior example with ${\displaystyle T={\begin{pmatrix}2&2&1\\0&6&2\\0&0&2\end{pmatrix}}}$ which has the same characteristic polynomial ${\displaystyle (x-2)^{2}(x-6)}$. While the characteristic polynomial is the same, ${\displaystyle {\begin{array}{r|ccc}{\textit {power }}p&(T-2I)^{p}&{\mathcal {N}}((t-2)^{p})&{\textit {nullity}}\\\hline 1&{\begin{pmatrix}0&2&1\\0&4&2\\0&0&0\end{pmatrix}}&\{{\begin{pmatrix}x\\-z/2\\z\end{pmatrix}}\,{\big |}\,x,z\in \mathbb {C} \}&2\\2&{\begin{pmatrix}0&8&4\\0&16&8\\0&0&0\end{pmatrix}}&{\textit {--same--}}&{\textit {---}}\end{array}}}$ here the action of ${\displaystyle t-2}$ is stable after only one application— the restriction of ${\displaystyle t-2}$ to ${\displaystyle {\mathcal {N}}_{\infty }(t-2)}$ is nilpotent of index only one. (So the contrast with the prior example is that while the characteristic polynomial tells us to look at the action of ${\displaystyle t-2}$ on its generalized null space, the characteristic polynomial does not describe completely its action and we must do some computations to find, in this example, that the minimal polynomial is ${\displaystyle (x-2)(x-6)}$.) The restriction of ${\displaystyle t-2}$ to the generalized null space acts on a string basis as ${\displaystyle {\vec {\beta }}_{1}\mapsto {\vec {0}}}$ and ${\displaystyle {\vec {\beta }}_{2}\mapsto {\vec {0}}}$, and we get this Jordan block associated with the eigenvalue ${\displaystyle 2}$. ${\displaystyle J_{2}={\begin{pmatrix}2&0\\0&2\end{pmatrix}}}$ For the other eigenvalue, the arguments for the second eigenvalue of the prior example apply again. The restriction of ${\displaystyle t-6}$ to ${\displaystyle {\mathcal {N}}_{\infty }(t-6)}$ is nilpotent of index one (it can't be of index less than one, and since ${\displaystyle x-6}$ is a factor of the characteristic polynomial to the power one it can't be of index more than one either). Thus ${\displaystyle t-6}$'s canonical form ${\displaystyle N_{6}}$ is the ${\displaystyle 1\!\times \!1}$ zero matrix, and the associated Jordan block ${\displaystyle J_{6}}$ is the ${\displaystyle 1\!\times \!1}$ matrix with entry ${\displaystyle 6}$. Therefore, ${\displaystyle T}$ is diagonalizable. ${\displaystyle {\rm {Rep}}_{B,B}(t)={\begin{pmatrix}2&0&0\\0&2&0\\0&0&6\end{pmatrix}}\qquad B=B_{2}\!{\mathbin {{}^{\frown }}}\!B_{6}=\langle {\begin{pmatrix}1\\0\\0\end{pmatrix}},{\begin{pmatrix}0\\1\\-2\end{pmatrix}},{\begin{pmatrix}3\\6\\0\end{pmatrix}}\rangle }$ (Checking that the third vector in ${\displaystyle B}$ is in the nullspace of ${\displaystyle t-6}$ is routine.) Example 2.15 A bit of computing with ${\displaystyle T={\begin{pmatrix}-1&4&0&0&0\\0&3&0&0&0\\0&-4&-1&0&0\\3&-9&-4&2&-1\\1&5&4&1&4\end{pmatrix}}}$ shows that its characteristic polynomial is ${\displaystyle (x-3)^{3}(x+1)^{2}}$.
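The change-of-basis claim in Example 2.13 and the characteristic polynomial just claimed for Example 2.15 are easy to confirm by machine. The following sketch assumes sympy (my choice, not the book's) and simply repeats the computations stated above.

```python
import sympy as sp

# Example 2.13: Rep_{B,B}(t) for B = B_2 followed by B_6.
T13 = sp.Matrix([[2, 0, 1], [0, 6, 2], [0, 0, 2]])
Q = sp.Matrix([[1, -2, 0], [1, 0, 1], [-2, 0, 0]])  # columns: (1,1,-2), (-2,0,0), (0,1,0)
print(Q.inv() * T13 * Q)                 # Matrix([[2, 0, 0], [1, 2, 0], [0, 0, 6]])

# Example 2.15: the characteristic polynomial.
x = sp.symbols('x')
T15 = sp.Matrix([
    [-1,  4,  0, 0,  0],
    [ 0,  3,  0, 0,  0],
    [ 0, -4, -1, 0,  0],
    [ 3, -9, -4, 2, -1],
    [ 1,  5,  4, 1,  4]])
print(sp.factor(T15.charpoly(x).as_expr()))   # (x - 3)**3 * (x + 1)**2
```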
This table ${\displaystyle {\begin{array}{r|ccc}{\textit {power }}p&(T-3I)^{p}&{\mathcal {N}}((t-3)^{p})&{\textit {nullity}}\\\hline 1&{\begin{pmatrix}-4&4&0&0&0\\0&0&0&0&0\\0&-4&-4&0&0\\3&-9&-4&-1&-1\\1&5&4&1&1\end{pmatrix}}&\{{\begin{pmatrix}-(u+v)/2\\-(u+v)/2\\(u+v)/2\\u\\v\end{pmatrix}}\,{\big |}\,u,v\in \mathbb {C} \}&2\\2&{\begin{pmatrix}16&-16&0&0&0\\0&0&0&0&0\\0&16&16&0&0\\-16&32&16&0&0\\0&-16&-16&0&0\end{pmatrix}}&\{{\begin{pmatrix}-z\\-z\\z\\u\\v\end{pmatrix}}\,{\big |}\,z,u,v\in \mathbb {C} \}&3\\3&{\begin{pmatrix}-64&64&0&0&0\\0&0&0&0&0\\0&-64&-64&0&0\\64&-128&-64&0&0\\0&64&64&0&0\end{pmatrix}}&{\textit {--same--}}&{\textit {---}}\end{array}}}$ shows that the restriction of ${\displaystyle t-3}$ to ${\displaystyle {\mathcal {N}}_{\infty }(t-3)}$ acts on a string basis via the two strings ${\displaystyle {\vec {\beta }}_{1}\mapsto {\vec {\beta }}_{2}\mapsto {\vec {0}}}$ and ${\displaystyle {\vec {\beta }}_{3}\mapsto {\vec {0}}}$. A similar calculation for the other eigenvalue ${\displaystyle {\begin{array}{r|ccc}{\textit {power }}p&(T+1I)^{p}&{\mathcal {N}}((t+1)^{p})&{\textit {nullity}}\\\hline 1&{\begin{pmatrix}0&4&0&0&0\\0&4&0&0&0\\0&-4&0&0&0\\3&-9&-4&3&-1\\1&5&4&1&5\end{pmatrix}}&\{{\begin{pmatrix}-(u+v)\\0\\-v\\u\\v\end{pmatrix}}\,{\big |}\,u,v\in \mathbb {C} \}&2\\2&{\begin{pmatrix}0&16&0&0&0\\0&16&0&0&0\\0&-16&0&0&0\\8&-40&-16&8&-8\\8&24&16&8&24\end{pmatrix}}&{\textit {--same--}}&{\textit {---}}\end{array}}}$ shows that the restriction of ${\displaystyle t+1}$ to its generalized null space acts on a string basis via the two separate strings ${\displaystyle {\vec {\beta }}_{4}\mapsto {\vec {0}}}$ and ${\displaystyle {\vec {\beta }}_{5}\mapsto {\vec {0}}}$. Therefore ${\displaystyle T}$ is similar to this Jordan form matrix. ${\displaystyle {\begin{pmatrix}-1&0&0&0&0\\0&-1&0&0&0\\0&0&3&0&0\\0&0&1&3&0\\0&0&0&0&3\end{pmatrix}}}$ We close with the statement that the subjects considered earlier in this Chapter are indeed, in this sense, exhaustive. Corollary 2.16 Every square matrix is similar to the sum of a diagonal matrix and a nilpotent matrix. ## Exercises Problem 1 Do the check for Example 2.3. Problem 2 Each matrix is in Jordan form. State its characteristic polynomial and its minimal polynomial. 1. ${\displaystyle {\begin{pmatrix}3&0\\1&3\end{pmatrix}}}$ 2. ${\displaystyle {\begin{pmatrix}-1&0\\0&-1\end{pmatrix}}}$ 3. ${\displaystyle {\begin{pmatrix}2&0&0\\1&2&0\\0&0&-1/2\end{pmatrix}}}$ 4. ${\displaystyle {\begin{pmatrix}3&0&0\\1&3&0\\0&1&3\\\end{pmatrix}}}$ 5. ${\displaystyle {\begin{pmatrix}3&0&0&0\\1&3&0&0\\0&0&3&0\\0&0&1&3\end{pmatrix}}}$ 6. ${\displaystyle {\begin{pmatrix}4&0&0&0\\1&4&0&0\\0&0&-4&0\\0&0&1&-4\end{pmatrix}}}$ 7. ${\displaystyle {\begin{pmatrix}5&0&0\\0&2&0\\0&0&3\end{pmatrix}}}$ 8. ${\displaystyle {\begin{pmatrix}5&0&0&0\\0&2&0&0\\0&0&2&0\\0&0&0&3\end{pmatrix}}}$ 9. ${\displaystyle {\begin{pmatrix}5&0&0&0\\0&2&0&0\\0&1&2&0\\0&0&0&3\end{pmatrix}}}$ This exercise is recommended for all readers. Problem 3 Find the Jordan form from the given data. 1. The matrix ${\displaystyle T}$ is ${\displaystyle 5\!\times \!5}$ with the single eigenvalue ${\displaystyle 3}$. The nullities of the powers are: ${\displaystyle T-3I}$ has nullity two, ${\displaystyle (T-3I)^{2}}$ has nullity three, ${\displaystyle (T-3I)^{3}}$ has nullity four, and ${\displaystyle (T-3I)^{4}}$ has nullity five. 2. The matrix ${\displaystyle S}$ is ${\displaystyle 5\!\times \!5}$ with two eigenvalues.
For the eigenvalue ${\displaystyle 2}$ the nullities are: ${\displaystyle S-2I}$ has nullity two, and ${\displaystyle (S-2I)^{2}}$ has nullity four. For the eigenvalue ${\displaystyle -1}$ the nullities are: ${\displaystyle S+1I}$ has nullity one. Problem 4 Find the change of basis matrices for each example. This exercise is recommended for all readers. Problem 5 Find the Jordan form and a Jordan basis for each matrix. 1. ${\displaystyle {\begin{pmatrix}-10&4\\-25&10\end{pmatrix}}}$ 2. ${\displaystyle {\begin{pmatrix}5&-4\\9&-7\end{pmatrix}}}$ 3. ${\displaystyle {\begin{pmatrix}4&0&0\\2&1&3\\5&0&4\end{pmatrix}}}$ 4. ${\displaystyle {\begin{pmatrix}5&4&3\\-1&0&-3\\1&-2&1\end{pmatrix}}}$ 5. ${\displaystyle {\begin{pmatrix}9&7&3\\-9&-7&-4\\4&4&4\end{pmatrix}}}$ 6. ${\displaystyle {\begin{pmatrix}2&2&-1\\-1&-1&1\\-1&-2&2\end{pmatrix}}}$ 7. ${\displaystyle {\begin{pmatrix}7&1&2&2\\1&4&-1&-1\\-2&1&5&-1\\1&1&2&8\end{pmatrix}}}$ This exercise is recommended for all readers. Problem 6 Find all possible Jordan forms of a transformation with characteristic polynomial ${\displaystyle (x-1)^{2}(x+2)^{2}}$. Problem 7 Find all possible Jordan forms of a transformation with characteristic polynomial ${\displaystyle (x-1)^{3}(x+2)}$. This exercise is recommended for all readers. Problem 8 Find all possible Jordan forms of a transformation with characteristic polynomial ${\displaystyle (x-2)^{3}(x+1)}$ and minimal polynomial ${\displaystyle (x-2)^{2}(x+1)}$. Problem 9 Find all possible Jordan forms of a transformation with characteristic polynomial ${\displaystyle (x-2)^{4}(x+1)}$ and minimal polynomial ${\displaystyle (x-2)^{2}(x+1)}$. This exercise is recommended for all readers. Problem 10 Diagonalize these. 1. ${\displaystyle {\begin{pmatrix}1&1\\0&0\end{pmatrix}}}$ 2. ${\displaystyle {\begin{pmatrix}0&1\\1&0\end{pmatrix}}}$ This exercise is recommended for all readers. Problem 11 Find the Jordan matrix representing the differentiation operator on ${\displaystyle {\mathcal {P}}_{3}}$. This exercise is recommended for all readers. Problem 12 Decide if these two are similar. ${\displaystyle {\begin{pmatrix}1&-1\\4&-3\\\end{pmatrix}}\qquad {\begin{pmatrix}-1&0\\1&-1\\\end{pmatrix}}}$ Problem 13 Find the Jordan form of this matrix. ${\displaystyle {\begin{pmatrix}0&-1\\1&0\end{pmatrix}}}$ Also give a Jordan basis. Problem 14 How many similarity classes are there for ${\displaystyle 3\!\times \!3}$ matrices whose only eigenvalues are ${\displaystyle -3}$ and ${\displaystyle 4}$? This exercise is recommended for all readers. Problem 15 Prove that a matrix is diagonalizable if and only if its minimal polynomial has only linear factors. Problem 16 Give an example of a linear transformation on a vector space that has no non-trivial invariant subspaces. Problem 17 Show that a subspace is ${\displaystyle t-\lambda _{1}}$ invariant if and only if it is ${\displaystyle t-\lambda _{2}}$ invariant. Problem 18 Prove or disprove: two ${\displaystyle n\!\times \!n}$ matrices are similar if and only if they have the same characteristic and minimal polynomials. Problem 19 The trace of a square matrix is the sum of its diagonal entries. 1. Find the formula for the characteristic polynomial of a ${\displaystyle 2\!\times \!2}$ matrix. 2. Show that trace is invariant under similarity, and so we can sensibly speak of the "trace of a map". (Hint: see the prior item.) 3. Is trace invariant under matrix equivalence? 4. Show that the trace of a map is the sum of its eigenvalues (counting multiplicities). 5. 
Show that the trace of a nilpotent map is zero. Does the converse hold? Problem 20 To use Definition 2.6 to check whether a subspace is ${\displaystyle t}$ invariant, we seemingly have to check all of the infinitely many vectors in a (nontrivial) subspace to see if they satisfy the condition. Prove that a subspace is ${\displaystyle t}$ invariant if and only if its subbasis has the property that for all of its elements, ${\displaystyle t({\vec {\beta }})}$ is in the subspace. This exercise is recommended for all readers. Problem 21 Is ${\displaystyle t}$ invariance preserved under intersection? Under union? Complementation? Sums of subspaces? Problem 22 Give a way to order the Jordan blocks if some of the eigenvalues are complex numbers. That is, suggest a reasonable ordering for the complex numbers. Problem 23 Let ${\displaystyle {\mathcal {P}}_{j}(\mathbb {R} )}$ be the vector space over the reals of degree ${\displaystyle j}$ polynomials. Show that if ${\displaystyle j\leq k}$ then ${\displaystyle {\mathcal {P}}_{j}(\mathbb {R} )}$ is an invariant subspace of ${\displaystyle {\mathcal {P}}_{k}(\mathbb {R} )}$ under the differentiation operator. In ${\displaystyle {\mathcal {P}}_{7}(\mathbb {R} )}$, does any of ${\displaystyle {\mathcal {P}}_{0}(\mathbb {R} )}$, ..., ${\displaystyle {\mathcal {P}}_{6}(\mathbb {R} )}$ have an invariant complement? Problem 24 In ${\displaystyle {\mathcal {P}}_{n}(\mathbb {R} )}$, the vector space (over the reals) of degree ${\displaystyle n}$ polynomials, ${\displaystyle {\mathcal {E}}=\{p(x)\in {\mathcal {P}}_{n}(\mathbb {R} )\,{\big |}\,p(-x)=p(x){\text{ for all }}x\}}$ and ${\displaystyle {\mathcal {O}}=\{p(x)\in {\mathcal {P}}_{n}(\mathbb {R} )\,{\big |}\,p(-x)=-p(x){\text{ for all }}x\}}$ are the even and the odd polynomials; ${\displaystyle p(x)=x^{2}}$ is even while ${\displaystyle p(x)=x^{3}}$ is odd. Show that they are subspaces. Are they complementary? Are they invariant under the differentiation transformation? Problem 25 Lemma 2.8 says that if ${\displaystyle M}$ and ${\displaystyle N}$ are invariant complements then ${\displaystyle t}$ has a representation in the given block form (with respect to the same ending as starting basis, of course). Does the implication reverse? Problem 26 A matrix ${\displaystyle S}$ is the square root of another ${\displaystyle T}$ if ${\displaystyle S^{2}=T}$. Show that any nonsingular matrix has a square root. Solutions
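For the computational exercises above (Problem 5, Problem 11, Problem 13, and the like), a computer algebra system gives a quick way to check answers. The sketch below assumes sympy, which is not part of this book. Note that sympy places the off-diagonal 1s on the superdiagonal, the transpose of this book's subdiagonal convention, and it does not order the blocks by the convention adopted in this section, so results may need rearranging before comparing.

```python
import sympy as sp

T = sp.Matrix([[-10, 4], [-25, 10]])   # the first matrix of Problem 5
P, J = T.jordan_form()                 # T = P * J * P**-1
print(J)                               # the Jordan matrix, superdiagonal convention
print(P)                               # its columns form a Jordan basis
print(sp.simplify(P.inv() * T * P))    # reproduces J
```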
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 437, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9736523032188416, "perplexity": 168.688023313532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738659496.36/warc/CC-MAIN-20160924173739-00140-ip-10-143-35-109.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/355460/a-generalization-of-discrete-hiberts-transform-montgomerys-inequality
# A generalization of discrete Hilbert's transform (Montgomery's inequality) In the paper "Hilbert's inequality", Montgomery and Vaughan proved that a generalization of the discrete Hilbert transform is bounded in $$\ell^2$$. The inequality reads as follows $$\Big| \sum_{k\neq n}\frac{a_k \overline{b_n}}{\lambda_k-\lambda_n} \Big| \leq \frac{\pi}{\delta} \Big(\sum_{k=1}^{\infty} |a_k |^2 \Big)^{1/2}\Big( \sum_{n=1}^{\infty} |b_n |^2 \Big)^{1/2},$$ where $$\{a_k\}, \{ b_n \}\in \ell^2$$, $$\lambda_n$$ is an increasing sequence of real numbers, and $$\delta:= \inf_{k}| \lambda_k-\lambda_{k+1}|.$$ Of course $$\delta$$ is assumed to be strictly positive. Also the constant appearing in the inequality $$\pi/\delta$$ is optimal. Quite surprisingly all proofs I managed to find strongly use the Hilbert space structure of $$\ell^2$$. Therefore I would like to ask if anything is known for this inequality when considered on $$\ell^p, p\neq 2$$. Namely, is it true $$\Big| \sum_{k\neq n}\frac{a_k \overline{b_n}}{\lambda_k-\lambda_n} \Big| \leq C(p,\delta) \Big(\sum_{k=1}^{\infty} |a_k |^p \Big)^{1/p}\Big( \sum_{n=1}^{\infty} |b_n |^q \Big)^{1/q},$$ where $$1<p<\infty$$, $$q$$ is the conjugate exponent of $$p$$, and $$C(p,\delta)>0$$? (I wouldn't venture so far as to ask for an optimal constant in this case, given the difficulty of the problem for the classical discrete Hilbert transform.) • What kind of inequality are you envisioning? If it involves $\ell^p$ and $\ell^q$ norms, you might as well use $\ell^2$. And I don't know how to imagine an inequality without the dual $\ell^q$ norm. – Lucia Mar 22 at 17:37 • @Lucia The inequality involves $p$ and $q$ norms as you said (I edited the question so it's more clear) but I don't really understand what you mean by saying you might as well use the $\ell^2$ norm. – an_ordinary_mathematician Mar 22 at 17:52 One can transfer the continuous $$L^p$$ theory to this discrete setting without difficulty. Let's normalise $$\sum_k |a_k|^p = \sum_n |b_n|^q = 1$$. Consider the two quantities $$X_1 := \sum_{k \neq n} \frac{a_k \overline{b_n}}{\lambda_k - \lambda_n}$$ $$X_2 := \sum_{k, n} p.v. \int_{{\bf R}^2} \varphi(s) \varphi(t) \frac{a_k \overline{b_n}}{(\lambda_k+s) - (\lambda_n+t)}\ ds dt$$ where $$\varphi$$ is a bump function of total mass 1. It is not difficult to show that $$p.v. \int_{\bf R} p.v. \int_{\bf R} \varphi(s) \varphi(t) \frac{1}{(\lambda_k+s) - (\lambda_n+t)}\ dt\, ds$$ is equal to $$\frac{1}{\lambda_k - \lambda_n} + O_\delta( |k-n|^{-2} )$$ when $$k \neq n$$ and $$O_\delta(1)$$ when $$k=n$$, so we have $$X_1-X_2 = O_{p,\delta}(1)$$ by Schur's test. One can also write $$X_2$$ as $$p.v. \int_{\bf R} \int_{\bf R} \frac{f(x) g(y)}{x-y}\ dx dy$$ where $$f(x) := \sum_k a_k \varphi(x-\lambda_k)$$ and $$g(y) := \sum_n b_n \varphi(y-\lambda_n)$$ so from the $$L^p$$ boundedness of the continuous Hilbert transform we have $$X_2 = O_{p,\delta}(1)$$, and the claim follows. Let me deal with a continuous situation. Let $$\lambda:\mathbb R\rightarrow\mathbb R$$ be an increasing $$C^1$$ diffeomorphism and let $$u,v$$ be in $$L^2(\mathbb R)$$. We have with $$\phi=\lambda^{-1}$$, $$A=\iint \frac{u(y)\overline{u(x)}}{iπ(\lambda (x)-\lambda(y))} dx dy= \iint \frac{u(\phi(t))\overline{u(\phi(s))}}{iπ(s-t)}\phi'(t)\phi'(s) ds dt,$$ so that with $$U(t)=u(\phi(t))\phi'(t)^{1/2}$$, we find $$A=\iint \frac{U(t)\phi'(t)^{1/2}\overline{U(s)\phi'(s)^{1/2}}}{iπ(s-t)}ds dt,$$ and thus assuming $$0<c\leq \phi'\leq C<\infty$$ we get the $$L^p$$ boundedness properties from those of the Hilbert transform.
• Thank you very much for the answer. I think a similar reasoning can also be found here: link – an_ordinary_mathematician Mar 23 at 13:20 Along the same lines, there is the paper "SHARP NORM INEQUALITIES FOR THE TRUNCATED HILBERT TRANSFORM" by Enrico Laeng. • Thank you for your contribution, but which part of the paper do you think is related to the Montgomery inequality? – an_ordinary_mathematician Apr 2 at 16:24 • I read this paper a long time ago, but maybe you can generalize – Emmanuel Preissmann Apr 5 at 12:28
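As a closing aside, the original $$\ell^2$$ inequality is easy to sanity-check numerically. The sketch below is my own addition, not part of the thread; it assumes numpy and randomly generated well-spaced $$\lambda$$, and it merely illustrates the bound rather than proving anything.

```python
import numpy as np

rng = np.random.default_rng(0)
N, delta = 200, 1.0
lam = np.cumsum(delta + rng.random(N))        # increasing, consecutive gaps >= delta
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)

num = a[:, None] * np.conj(b)[None, :]        # (k, n) entry a_k * conj(b_n)
den = lam[:, None] - lam[None, :]             # lambda_k - lambda_n
np.fill_diagonal(num, 0.0)                    # exclude the k = n terms
np.fill_diagonal(den, 1.0)                    # dummy value; the numerators there are zero
lhs = abs((num / den).sum())
rhs = (np.pi / delta) * np.linalg.norm(a) * np.linalg.norm(b)
print(f"bilinear form = {lhs:.2f}  <=  (pi/delta)*||a||*||b|| = {rhs:.2f}")
```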
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 41, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9705737829208374, "perplexity": 1613.701577442702}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737152.0/warc/CC-MAIN-20200807025719-20200807055719-00040.warc.gz"}
http://mathhelpforum.com/trigonometry/79802-trig-equation.html
# Math Help - trig equation 1. ## trig equation Hi! I know this is a fairly simple trig equation but I just keep getting it wrong! equation: tan(40-2x)=3^1/2 (root 3) Thank u!!! =) 2. Originally Posted by Nancy Hi! I know this is a fairly simple trig equation but I just keep getting it wrong! equation: tan(40-2x)=3^1/2 (root 3) You should provide your work so that we can point out any mistakes. Begin by noting that $\sqrt3=\frac{2\sqrt3}2=\frac{\sqrt3/2}{1/2}.$ The tangent is sine divided by cosine, so for what angles does sine equal $\frac{\sqrt3}2$ and cosine equal $\frac12?$ 3. You need to find when the tangent of the argument is equal to $\sqrt{3}$. Remember that $\tan{x} = \sqrt{3}$ when $x=\frac{\pi}{3}+k\pi$, as Reckoner also said. Then you can write: $40-2x= \frac{\pi}{3}+k\pi$ $2x=-\frac{\pi}{3}-k\pi+40$ $x=-\frac{\pi}{6}-k\frac{\pi}{2}+20$ simplifying: $x=\frac{120-\pi}{6}-k\frac{\pi}{2}$ 4. Thank u for ur help.... I figured it out....
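A quick symbolic cross-check of post 3's algebra, done in sympy (my addition, not part of the thread). It treats 40 as a radian measure, exactly as the reply does; if the 40 in the original problem is meant in degrees, the same steps apply with $\frac{\pi}{3}+k\pi$ replaced by $60^\circ + k\cdot 180^\circ$.

```python
import sympy as sp

x, n = sp.symbols('x n')
# tan(y) = sqrt(3) exactly when y = pi/3 + n*pi, so solve 40 - 2x = pi/3 + n*pi for x.
sol = sp.solve(sp.Eq(40 - 2*x, sp.pi/3 + n*sp.pi), x)[0]
print(sp.simplify(sol))              # 20 - pi/6 - pi*n/2, matching the reply

for k in range(4):                   # spot-check a few members of the solution family
    xk = sol.subs(n, k)
    print(float(sp.tan(40 - 2*xk)))  # ~1.7320508 = sqrt(3) each time
```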
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9870131611824036, "perplexity": 2092.3144949944212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463287.91/warc/CC-MAIN-20150226074103-00006-ip-10-28-5-156.ec2.internal.warc.gz"}
http://www.global-sci.org/intro/article_detail/cicp/11134.html
Volume 19, Issue 5 Novel Symplectic Discrete Singular Convolution Method for Hamiltonian PDEs 10.4208/cicp.scpde14.32s Commun. Comput. Phys., 19 (2016), pp. 1375-1396. • Abstract This paper explores the discrete singular convolution method for Hamiltonian PDEs. The differential matrices corresponding to two delta type kernels of the discrete singular convolution are presented analytically, which have the properties of high-order accuracy, bandlimited structure and thus can be excellent candidates for the spatial discretizations for Hamiltonian PDEs. Taking the nonlinear Schrödinger equation and the coupled Schrödinger equations for example, we construct two symplectic integrators combining these differential matrices and appropriate symplectic time integrations, which both have been proved to satisfy the square conservation laws. Comprehensive numerical experiments, including comparisons with the central finite difference method, the Fourier pseudospectral method, and the wavelet collocation method, are given to show the advantages of the new type of symplectic integrators. • History Published online: 2018-04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8593596816062927, "perplexity": 1366.1361320607577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778168.77/warc/CC-MAIN-20200128091916-20200128121916-00083.warc.gz"}
https://mathoverflow.net/questions/326669/guessing-each-others-coins/326940
# Guessing each other's coins I recently thought about the following game (has it been considered before?). Alice and Bob collaborate. Alice observes a sequence of independent unbiased random bits $$(A_n)$$, and then chooses an integer $$a$$. Similarly, Bob observes a sequence of independent unbiased random bits $$(B_n)$$, independent from $$(A_n)$$, and then chooses an integer $$b$$. Alice and Bob are not allowed to communicate. They win the game if $$A_b=B_a=1$$. What is the optimal winning probability $$p_{opt}$$? A strategy for each player is a (Borel) function $$f : \{0,1\}^{\mathbf{N}} \to \mathbf{N}$$, and we want to maximize the winning probability over pairs of strategies $$(f_A,f_B)$$. Constant strategies win with probability $$1/4$$, and it is perhaps counterintuitive that you can do better. Choosing $$f$$ to be the index of the first $$1$$ wins with probability $$1/3$$. This is not optimal though: by running a little program trying randomly modified strategies on a finite window I could find that $$p_{opt} \geq 358/1023 \approx 0.3499$$, with some pair (with $$f_A=f_B$$) lacking any apparent pattern. But a more interesting question is: can you prove any upper bound on $$p_{opt}$$, besides the trivial $$p_{opt} \leq 1/2$$? Edit. As has been pointed out by Édouard Maurel-Segala, the problem has been studied in this paper, where it is proved (as is also proved in the present thread) that $$0.35 \leq p_{opt} \leq 0.375$$, stated without proof that $$p_{opt} \leq \frac{81}{224} \approx 0.3616$$, and conjectured that $$p_{opt} = 0.35$$. Edit (clarifying what I said in the comments). You can ask the same question for the finite version of the game, with strings $$(A_1,\dots,A_N)$$ and $$(B_1,\dots,B_N)$$, giving optimal winning probability $$p_N$$. It can be checked that $$(p_N)$$ is non-decreasing with limit $$p_{opt}$$. Moreover the inequality $$p_{opt} \geq \frac{4^N}{4^N-1} p_N$$ holds, because in the infinite game, when a player sees a string of $$N$$ $$0$$s, he may discard them and apply the strategy to the next $$N$$ bits. We have $$p_1=1/4$$, $$p_2=5/16$$, $$p_3=22/64 > p_2$$. It seems that $$p_4=89/256$$ (therefore $$p_4 > p_3$$, but $$\frac{256}{255} p_4 < \frac{64}{63} p_3$$, so $$4$$-bit strategies are worse than $$3$$-bit for the infinite game), and I know that $$p_5 \geq 358/1024$$ and $$p_6 \geq 1433/4096$$. For $$p_3$$ and $$p_4$$ one strategy achieving the value is: when the observed string contains a single block of $$1$$s, Alice (resp. Bob) picks the index of the $$0$$ immediately after (resp. before) that block; what they do in the remaining cases is irrelevant. • I used a "genetic" (?) algorithm, i.e. start from arbitrary functions $\{0,1\}^N \to \{1,\cdots,N\}$ and apply random mutations which you keep when beneficial. The value $358/1023$ corresponds to $N=5$ and the function which maps the elements of $\{0,1\}^5$ listed in lexicographic order to (5,5,1,3,3,3,3,3,1,1,1,1,3,3,3,3,2,2,2,2,2,2,2,2,1,1,1,1,4,5,4,5). – Guillaume Aubrun Mar 29 at 14:25 • I didn't understand $p = 1/3$ at first, so here goes: If $a < b$, then $B_a = 0$. If $a > b$, then $A_b = 0$. So the players win iff $a = b$, with probability $1/3$. The better strategies allow the players to get $a = b$ slightly wrong and still win. – student Mar 29 at 22:13 • @student nice, here's another way: consider these cases for the first bits $(A_0,B_0)$ of the two sequences: (0,1), (1,0), (1,1). The players win $1/3$ of these equally-likely cases. In the fourth case (0,0), they recurse on the next bit.
– usul Mar 30 at 13:40 • By the way, if we just require the players' bits to match to win (so both finding a zero is also a win), is this the exact same problem with all probabilities doubled? Or is there a difference? – usul Mar 30 at 13:48 • @usul that seems correct and it's a great observation. The reason for that is that, whatever the strategies, $\mathbf{P}(A_b=B_a=0)=\mathbf{P}(A_b=B_a=1)$, since each event $\{A_b=0\}$, $\{A_b=1\}$, $\{B_a=0\}$, $\{B_a=1\}$ has probability $1/2$. – Guillaume Aubrun Mar 30 at 14:33 I discussed this with Arvind Singh a while ago and I think we can show the non-trivial inequality $$p_{opt}\leqslant\frac{3}{8}$$ with simple arguments. The proof relies on the symmetry of the problem and the intuition is that one cannot find a strategy which is good simultaneously for a configuration and its inverse. It will be simpler to work with the sets of indices such that the coin is on $$1$$: $$A=\{1\leqslant i\leqslant n : A_i=1\},\quad B=\{1\leqslant i\leqslant n : B_i=1\}.$$ We want to bound $$G=\mathbb P (f_a(B)\in A, f_b(A) \in B).$$ Introducing the function $$g(A,B)=\frac{1}{4}\left(1_{f_a(B)\in A, f_b(A) \in B}+1_{f_a(B^c)\in A^c, f_b(A^c) \in B^c}+1_{f_a(B)\in A^c, f_b(A^c) \in B}+1_{f_a(B^c)\in A, f_b(A) \in B^c}\right),$$ we get by symmetry (since for example $$A^c,B$$ has the same law as $$A,B$$): $$G=\mathbb E [g(A,B)].$$ But there are some incompatibilities in $$g$$: the first term and the third term cannot both be equal to $$1$$ since one contains $$f_a(B)\in A$$ and the other $$f_a(B)\in A^c$$. The same thing applies for the second and the fourth. Thus $$g(A,B)\in\{0;\frac{1}{4};\frac{1}{2}\}$$ almost surely. On the event $$E_1=\{f_b(A)\in B, f_b(A^c)\in B\}$$, only the first and the third term can be non-vanishing and since they are incompatible $$g(A,B)$$ is at most $$1/4$$ (in fact it is equal to $$1/4$$). Besides, by first conditioning on $$A$$ we see that $$E_1$$ is of probability at least $$1/4$$ (the probability that $$B$$ contains (one or) two elements). The same applies to $$E_2=\{f_b(A)\in B^c, f_b(A^c)\in B^c\}$$. If we consider the event $$E=\{f_b(A)\in B, f_b(A^c)\in B\}\cup\{f_b(A)\in B^c, f_b(A^c)\in B^c\}$$ we have built an event such that $$g1_E\leqslant \frac{1}{4}$$ and $$\mathbb P(E)\geqslant \frac{1}{2}$$ (the union is disjoint). Thus since $$g\leqslant \frac{1}{2}$$, $$G=\mathbb E[g(A,B)]\leqslant \mathbb E[g1_E]+\mathbb E[g1_{E^c}]\leqslant \frac{1}{4}\mathbb P(E)+\frac{1}{2}(1-\mathbb P(E))\leqslant \frac{1}{4}\frac{1}{2}+\frac{1}{2}(1-\frac{1}{2})=\frac{3}{8}.$$ • This is really great, Édouard! Let me rephrase your argument. If we switch from $\{0,1\}$ to $\{-1,1\}$, the inequality is equivalent to $\mathbf{E}[A_b B_a] \leq 1/2$. Now denote by $a$ and $a'$ Alice's output when seeing $(A_n)$ and $(-A_n)$, and same for $b$, $b'$. Your observation is that $$\mathbf{E}[A_b B_a] = \mathbf{E}[- A_{b'} B_a] = \mathbf{E}[- A_{b} B_{a'}] = \mathbf{E}[A_{b'} B_{a'}],$$ and therefore $$4 \mathbf{E}[A_bB_a] = \mathbf{E}[(A_b-A_{b'})(B_a-B_{a'})] \leq 2 \mathbf{E}[|B_a-B_{a'}|] \leq 2.$$ – Guillaume Aubrun Apr 2 at 9:13 • Very nice and much more direct than my formulation! – Édouard Maurel-Segala Apr 2 at 11:13 • In fact there is a paper by Kariv, van Alten and Dmytro Yeroshkin which generalizes this problem to the case of a parameter p for the coin and they get some upper bounds, which also give 3/8 for p=1/2. Besides, they claim that someone proved a better bound: 81/224 (>0.361...).
Source : front.math.ucdavis.edu/1407.4711 – Édouard Maurel-Segala Apr 2 at 12:31 • Has the 81/224-result been written down somewhere? – Johan Wästlund Apr 4 at 21:12 • I have no idea how the 81/224 was proved. I tried (but failed) to improve on the bound along the following lines: instead of bounding $\mathbf{E} [(A_b-A_{b'})(B_a-B_{a'})]$ by Cauchy-Schwarz which also gives the bound $\sqrt{2}^2 = 2$, first project $A_b-A_{b'}$ onto the subspace (in $L^2$) spanned by variables of the form $B_{f(A_n)}$, which amounts to killing higher-order Walsh-Fourier coefficients on Bob's side. If the $L^2$ norm doesn't decrease in the process, this means that Bob used a strategy encoded by low-order Fourier coefficients, for which we should be able to say more. – Guillaume Aubrun Apr 5 at 17:35 First, Alice chooses minimal $$n_a$$ divisible by 3 such that her bits at positions $$n_a, n_a + 1, n_a + 2$$ are not all the same, and Bob similarly chooses $$n_b$$. Looking at the triplet $$A_{n_a}, A_{n_a + 1}, A_{n_a + 2}$$, Alice chooses $$m_a$$ according to the following rule: {010: 2, 011: 2, 001: 1, 110: 0, 100: 0, 101: 1}. Bob chooses $$m_b$$ in the same way. Now Alice says $$n_a + m_a$$, and Bob says $$n_b + m_b$$. It's easy to check that the probability of $$n_a = n_b$$ is $$\frac{3}{5}$$, and the probability of winning in this case is $$\frac{5}{12}$$. If $$n_a \neq n_b$$, then the probability of winning is the default $$\frac{1}{4}$$. So winning probability for this strategy is $$\frac{3}{5} \cdot \frac{5}{12} + \frac{2}{5} \cdot \frac{1}{4} = 0.35$$. • In general, your method gives $p_\text{opt} \ge (4^N p_N-1)/(4^N-4)$. Using Guillaume's $p_5$ in this bound also gets you to $7/20$. – Yoav Kallus Mar 31 at 1:18 • For what $N$ does $7/20$ work in the reverse direction, i.e. $p_N\geq \frac{7}{20} - \frac{2}{5\cdot 4^N}$? – Max Alekseyev Mar 31 at 15:04 • @MaxAlekseyev, I found strategies for N=3,5,7,9 satisfying your inequality (with equality, I should note) using a very simple algorithm (fix random f_A and optimize f_B; fix f_B and optimize f_A; and so on until fixed point, repeat with different random initial condition). – Yoav Kallus Mar 31 at 15:21 • A formula that matches the known estimates for even $N=2,4,6$ is $p_N = \frac{7}{20} - \frac{3}{5 \cdot 4^N}$ – Guillaume Aubrun Mar 31 at 21:03 • For what it's worth, I have just used CPLEX to solve the cases $N\in\{2,3,4\}$ and verified that your proposed solution is optimal for these cases. That is, the optimal ratios are $0.3125$, $0.34375$, and $0.3477$. In fact, it remains optimal even if you allow a mixed strategy (i.e. for a given sequence, your selection of the digit is random) – John Gunnar Carlsson Apr 1 at 23:02
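To make the numbers above concrete, here is a small script (my own addition, not from the thread) that estimates the winning probability of the two explicit strategies discussed here by Monte Carlo, and brute-forces the exact optimum $$p_2=5/16$$ for the two-bit game. The helper names are mine.

```python
import random
from itertools import product

def first_one(bits):
    # The question's baseline strategy: index of the first 1 (wins with probability 1/3).
    return bits.index(1) if 1 in bits else 0

def block_strategy(bits):
    # The 0.35 strategy above: first aligned non-constant triple, offset from the table.
    offset = {(0,1,0): 2, (0,1,1): 2, (0,0,1): 1, (1,1,0): 0, (1,0,0): 0, (1,0,1): 1}
    for start in range(0, len(bits) - 2, 3):
        t = tuple(bits[start:start+3])
        if t in offset:
            return start + offset[t]
    return 0  # essentially never reached for long strings

def estimate(strategy, trials=50_000, length=90):
    wins = 0
    for _ in range(trials):
        A = [random.randint(0, 1) for _ in range(length)]
        B = [random.randint(0, 1) for _ in range(length)]
        wins += A[strategy(B)] & B[strategy(A)]   # win iff A_b = B_a = 1
    return wins / trials

def exact_p2():
    # Exhaustive search over all pairs of 2-bit strategies; should return 5/16 = 0.3125.
    strings = list(product((0, 1), repeat=2))
    best = 0
    for fA in product(range(2), repeat=4):        # fA[i] = Alice's index on strings[i]
        for fB in product(range(2), repeat=4):
            wins = sum(A[fB[j]] & B[fA[i]]
                       for i, A in enumerate(strings)
                       for j, B in enumerate(strings))
            best = max(best, wins)
    return best / 16                              # 16 equally likely (A, B) pairs

print("first-1 strategy     ~", estimate(first_one))       # about 1/3
print("3-bit block strategy ~", estimate(block_strategy))  # about 0.35
print("exact p_2            =", exact_p2())                # 0.3125
```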
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 84, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9503876566886902, "perplexity": 308.2281972701525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232261326.78/warc/CC-MAIN-20190527045622-20190527071622-00214.warc.gz"}
https://www.clutchprep.com/organic-chemistry/practice-problems/16211/suggest-a-sequence-of-reactions-suitable-for-preparing-each-of-the-following-com
Problem Suggest a sequence of reactions suitable for preparing each of the following compounds from the indicated starting material. You may use any necessary organic or inorganic reagents. (a) 1-Propanol from 2-propanol
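One standard route for part (a), sketched here as my own suggestion (the page's video solution is not reproduced): convert the secondary alcohol to the primary one by going through the alkene. Dehydrate 2-propanol (conc. H2SO4, heat) to propene, then perform hydroboration-oxidation (1. BH3·THF; 2. H2O2, NaOH) to place the OH on the less substituted carbon, giving 1-propanol.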
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8500878214836121, "perplexity": 3495.763033910814}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812405.3/warc/CC-MAIN-20180219052241-20180219072241-00033.warc.gz"}
https://mathoverflow.net/questions/298045/can-infiniteness-of-finitely-generated-groups-be-read-by-a-paradoxical-decompo
# Can infiniteness of finitely generated groups be read by a “paradoxical” decomposition?

(Edit) Let $G$ be a group. Two subsets $A,B$ of $G$ are said to be equidecomposable if there exist a finite partition $A=\bigsqcup_{i=1}^nA_i$ and $a_i\in G$ such that $B=\bigsqcup_{i=1}^na_iA_i$. Say that a group has Property (X) if it has a subset equidecomposable to a proper subset of itself. Clearly this implies being infinite. The negation of (X) passes to subgroups. The group $\mathbb{Z}$ has (X) (for $n=1$), and hence all non-torsion (= non-periodic) groups have (X). Using a paradoxical decomposition, all non-amenable groups have (X). Does every infinite finitely generated amenable group have (X)? The only remaining cases are periodic and amenable.

• It is not hard to check that a group has (X) if and only if some finitely generated subgroup has (X). In particular, locally finite groups do not have (X), and a positive answer to the question is equivalent to saying that groups with (X) are exactly the non-locally-finite groups. – YCor Apr 17 '18 at 9:24

• @YCor Yes, you are exactly right! Thanks for the edit! – Meisam Soleimani Malekan Apr 17 '18 at 13:20

• Additional remark: say that $G$ has (X)$_n$ if it has (X) with this given $n$ (this is weaker when $n$ grows). For instance, $G$ has (X)$_1$ iff $G$ is non-torsion. More generally, it is easy to show that if every $n$-generated subgroup of $G$ is finite, then $G$ does not have (X)$_n$. I don't know if the converse holds, but it is enough to show that there does not exist any uniform $n$ working for all non-locally-finite groups. – YCor Apr 17 '18 at 22:33

• PS: now the converse is obtained in Robin's answer! – YCor Apr 18 '18 at 19:21

• Note (slightly in regard to the initial, less precise formulation of the question): I do not know any example of a group for which such a decomposition is used to prove that it's infinite (and for which it's not completely trivial for any other reason). – YCor Apr 18 '18 at 19:24

This follows from a theorem of Brandon Seward: every finitely generated infinite group $G$ admits a translation-like action by the group $\mathbb{Z}$ of integers. https://arxiv.org/abs/1104.1231

What this means is that there exists a free action of $\mathbb{Z}$ on $G$ such that each element of $\mathbb{Z}$ acts as an element of the wobbling group of $G$. That is, there exists a bijection $f:G\rightarrow G$ having no finite orbits, along with a finite partition $G=\bigsqcup_{i=1}^nX_i$ and finitely many group elements $a_1,\dots , a_n\in G$ such that $f(x)=a_ix$ for all $x\in X_i$. If we take $T$ to be a subset of $G$ containing exactly one point from each orbit, and define $A$ to be the set $A = \bigsqcup _{n\geq 0} f^n(T)$, then $f(A)$ is a proper subset of $A$, and the partition $A=\bigsqcup _{i=1}^n A_i$, where $A_i := A\cap X_i$, along with the group elements $a_1,\dots , a_n$, witnesses that $A$ is equidecomposable with $f(A)$.

Edit: I realized there is also a simple direct argument which also shows that $n$ is bounded above by the minimum size of a finite subset $S$ of $G$ which generates an infinite subsemigroup of $G$. Consider the (left) Cayley graph of this subsemigroup $H$ with respect to its generating set $S$, i.e., with vertex set $H$ and with a directed edge from $h$ to $sh$ for each $s\in S$ and $h\in H$. This graph is infinite and locally finite, so by Kőnig's lemma it contains an infinite geodesic ray, say with vertices $h_0,h_1,\dots$. Let $A := \{ h_0,h_1,h_2,\dots \}$ and let $B=A \setminus \{ h_0 \}$.
For each $n\geq 0$ there is some $s_n\in S$ such that $h_{n+1}=s_nh_n$, so $A$ is partitioned into finitely many sets $A_s$, $s\in S$, where $A_s$ consists of those $h_n$ for which $s_n=s$. Then $B$ is partitioned by $sA_s$, $s\in S$, so $A$ and $B$ are equidecomposable. • Nice! Indeed, you're right (as regards your edit). Indeed what you really need is a bounded-displacement permutation of $G$ with an infinite orbit, which indeed follows from the existence of a geodesic. The more difficult thing achieved by Seward is to get a bounded-displacement permutation with only infinite orbits. – YCor Apr 18 '18 at 19:21
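To make the bookkeeping in the direct argument concrete, here is a small sketch (an editorial illustration, not part of the answer) for the toy case $G=\mathbb{Z}^2$ with $S=\{(1,0),(0,1)\}$: it builds an explicit geodesic ray by alternating the two generators, partitions a finite truncation of $A$ according to which generator carries $h_n$ to $h_{n+1}$, and checks that the translated pieces tile $B=A\setminus\{h_0\}$.

```python
# Toy illustration of the geodesic-ray argument for G = Z^2, S = {(1,0), (0,1)}.
L = 200                                   # finite truncation of the infinite ray
S = [(1, 0), (0, 1)]

ray = [(0, 0)]
for n in range(L):
    s = S[n % 2]                          # alternate generators: a geodesic ray
    ray.append((ray[-1][0] + s[0], ray[-1][1] + s[1]))

A = set(ray)                              # {h_0, ..., h_L}
B = A - {ray[0]}                          # A \ {h_0}

# A_s = { h_n : s_n = s }, for n < L (the last ray point has no successor here)
parts = {s: {ray[n] for n in range(L) if S[n % 2] == s} for s in S}
# left-translate each piece by its generator
shifted = {s: {(x + s[0], y + s[1]) for (x, y) in parts[s]} for s in S}

assert shifted[S[0]].isdisjoint(shifted[S[1]])
assert shifted[S[0]] | shifted[S[1]] == B
print("piece sizes:", {s: len(parts[s]) for s in S})
```

In the infinite setting the same pieces $A_s$ partition all of $A$, which is exactly the equidecomposition of $A$ with $B$ described above.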
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9708529114723206, "perplexity": 117.30444267268516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670731.88/warc/CC-MAIN-20191121050543-20191121074543-00369.warc.gz"}
http://math.stackexchange.com/questions/117601/irreducible-algebraic-sets
# Irreducible algebraic sets

Let $K := (uw - v^2, w^3 - u^5)$. Show that $V(K)$ consists of two irreducible components, one of which is $V(uw - v^2, w^3 - u^5, u^3-vw) = V(J)$.

I don't know how to start this. I see that $V(K)$ is symmetric under the interchange $v \to -v$ but that $V(J)$ isn't. Does this mean that the other component should be symmetric under this exchange?

• This is copy-and-pasted homework for the Algebraic Geometry course at the University of Edinburgh, coincidentally due tomorrow. Mary, I suggest you do your own work or at least tag this post as homework. – user26482 Mar 7 '12 at 20:17

• Not the most productive thing to write, A.non. – Stephen Mar 7 '12 at 20:25

First, we need to see why $K$ is not prime. More precisely, we need $ab\in K$ with neither $a\in K$ nor $b\in K$. The definition of $J$ helps a bit – notice that $$(u^3-vw)(u^3+vw) = u^6 - v^2w^2 = uw^3 - w^2v^2 - uw^3 + u^6 = w^2(uw-v^2) - u(w^3-u^5) \in K$$ So we're looking for a prime ideal $I$ with $I\cap J=K$, since $V(I\cap J)=V(I)\cup V(J)$. In this case, that'd be $I=(uw-v^2,w^3-u^5,u^3+vw)$. What is left to do is check that both $I$ and $J$ are actually prime.
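In case it is useful, the displayed identity (and the fact that neither factor already lies in $K$) can be checked mechanically. Here is a small sympy sketch; it is an editorial addition, not part of the original answer.

```python
# Symbolic sanity check of the identity used above.
from sympy import symbols, expand, groebner

u, v, w = symbols('u v w')
K = [u*w - v**2, w**3 - u**5]

# (u^3 - vw)(u^3 + vw) = w^2(uw - v^2) - u(w^3 - u^5), hence the product lies in K
lhs = (u**3 - v*w) * (u**3 + v*w)
rhs = w**2 * K[0] - u * K[1]
assert expand(lhs - rhs) == 0

# Neither factor reduces to zero modulo a Groebner basis of K,
# so neither factor lies in K itself.
G = groebner(K, u, v, w, order='lex')
print(G.reduce(u**3 - v*w)[1])   # nonzero remainder
print(G.reduce(u**3 + v*w)[1])   # nonzero remainder
```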
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8775330781936646, "perplexity": 161.81725299714353}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122123549.67/warc/CC-MAIN-20150124175523-00108-ip-10-180-212-252.ec2.internal.warc.gz"}
http://gradestack.com/Class-11th-Commerce/Measures-of-Dispersion/NCERT-Questions/17645-3574-31324-revise-wtw
# Question-1

Calculate the Range and Q.D. of the following observations: 20, 25, 29, 30, 35, 39, 41, 48, 51, 60 and 70.

Solution: Range is clearly 70 – 20 = 50.

For Q.D., we need to calculate the values of Q3 and Q1. Q1 is the size of the (n + 1)/4 th value. Here, n being 11, Q1 is the size of the 3rd value. As the values are already arranged in ascending order, it can be seen that Q1, the 3rd value, is 29. [What will you do if these values are not in order?] Similarly, Q3 is the size of the 3(n + 1)/4 th value, i.e. the 9th value, which is 51. Hence Q3 = 51.

Q.D. = (Q3 – Q1)/2 = (51 – 29)/2 = 11

Do you notice that Q.D. is the average difference of the Quartiles from the median?

# Question-2

For the following distribution of marks scored by a class of 40 students, calculate the Range and Q.D.

Class intervals    No. of students
0–10               5
10–20              8
20–40              16
40–60              7
60–90              4
Total              40

Solution: Range is just the difference between the upper limit of the highest class and the lower limit of the lowest class. So Range is 90 – 0 = 90.

For Q.D., first calculate cumulative frequencies as follows:

C.I.     Frequency   Cumulative Frequency
0–10     5           05
10–20    8           13
20–40    16          29
40–60    7           36
60–90    4           40
         n = 40

Q1 is the size of the n/4 th value in a continuous series. Thus it is the size of the 10th value. The class containing the 10th value is 10–20. Hence Q1 lies in class 10–20. Now, to calculate the exact value of Q1, the following formula is used:

Q1 = L + ((n/4 – c.f.)/f) × i

where L = 10 (lower limit of the relevant Quartile class), c.f. = 5 (value of c.f. for the class preceding the Quartile class), i = 10 (interval of the Quartile class), and f = 8 (frequency of the Quartile class). Thus,

Q1 = 10 + ((10 – 5)/8) × 10 = 16.25

Similarly, Q3 is the size of the 3n/4 th value, i.e. the 30th value, which lies in class 40–60. Now using the formula for Q3, its value can be calculated as follows:

Q3 = 40 + ((30 – 29)/7) × 20 = 42.86 (approx.)

Hence Q.D. = (Q3 – Q1)/2 = (42.86 – 16.25)/2 ≈ 13.31.

# Question-3

Calculate the Mean Deviation of the following values: 2, 4, 7, 8 and 9.

Solution: The A.M. = (2 + 4 + 7 + 8 + 9)/5 = 6.

X    |d|
2    4
4    2
7    1
8    2
9    3
     Σ|d| = 12

M.D. = 12/5 = 2.4

# Question-4

Calculate the Mean Deviation of the same values (2, 4, 7, 8 and 9) using an assumed mean, here A = 7.

X    |d| (from A = 7)
2    5
4    3
7    0
8    1
9    2
     Σ|d| = 11

Solution: M.D. = [Σ|d| + (x̄ – A)(n_b – n_a)]/n, where Σ|d| is the sum of absolute deviations taken from the assumed mean, x̄ is the actual mean, A is the assumed mean used to calculate deviations, n_b is the number of values below the actual mean (including the actual mean), and n_a is the number of values above the actual mean.

Substituting the values in the above formula: M.D. = [11 + (6 – 7)(2 – 3)]/5 = 12/5 = 2.4

# Question-5

What is Range?

Solution: Range (R) is the difference between the largest (L) and the smallest (S) value in a distribution. Thus, R = L – S.

# Question-6

What is Quartile Deviation?

Solution: The presence of even one extremely high or low value in a distribution can reduce the utility of range as a measure of dispersion. Thus, you need a measure which is not unduly affected by outliers. In such a situation, if the entire data is divided into four equal parts, each containing 25% of the values, we get the values of the Quartiles and the Median. The upper and lower quartiles (Q3 and Q1, respectively) are used to calculate the Inter-Quartile Range, which is Q3 – Q1. The Inter-Quartile Range is based upon the middle 50% of the values in a distribution and is, therefore, not affected by extreme values. Half of the Inter-Quartile Range is called the Quartile Deviation. Thus:

Q.D. = (Q3 – Q1)/2

Q.D. is therefore also called the Semi-Inter-Quartile Range.

# Question-7

Explain measures of dispersion from the average.

Solution: Recall that dispersion was defined as the extent to which values differ from their average. Range and Quartile Deviation do not attempt to calculate how far the values are from their average. Yet, by calculating the spread of values, they do give a good idea about the dispersion.
Two measures which are based upon deviation of the values from their average are the Mean Deviation and the Standard Deviation. Since the average is a central value, some deviations are positive and some are negative. If these are added as they are, the sum will not reveal anything. In fact, the sum of deviations from the Arithmetic Mean is always zero.

# Question-8

Explain the calculation of the Mean Deviation from the Arithmetic Mean for ungrouped data.

Solution: Steps:
(i) The A.M. of the values is calculated.
(ii) The difference between each value and the A.M. is calculated. All differences are considered positive. These are denoted as |d|.
(iii) The A.M. of these differences (called deviations) is the Mean Deviation, i.e. M.D. = Σ|d|/n.

# Question-9

What is Standard Deviation?

Solution: Standard Deviation is the positive square root of the mean of squared deviations from the mean. So if there are five values x1, x2, x3, x4 and x5, first their mean is calculated. Then the deviations of the values from the mean are calculated. These deviations are then squared. The mean of these squared deviations is the variance. The positive square root of the variance is the standard deviation. (Note that the Standard Deviation is calculated on the basis of the mean only.)

# Question-10

How is the Standard Deviation calculated for ungrouped data?

Solution: Four alternative methods are available for the calculation of the standard deviation of individual values. All these methods result in the same value of the standard deviation. These are:
(i) Actual Mean Method
(ii) Assumed Mean Method
(iii) Direct Method
(iv) Step-Deviation Method

By the actual mean method, the standard deviation is σ = √(Σ(x – x̄)²/n); the other methods are computational shortcuts that give the same result.

# Question-11

Explain the steps of the Step-Deviation Method.

Solution:
1. Calculate class mid-points (Col. 3) and deviations from an arbitrarily chosen value, just like in the assumed mean method. In this example, deviations have been taken from the value 40 (Col. 4).
2. Divide the deviations by a common factor denoted as 'C'. C = 5 in the above example. The values so obtained are the d' values (Col. 5).
3. Multiply the d' values by the corresponding f values (Col. 2) to obtain the fd' values (Col. 6).
4. Multiply the fd' values by the d' values to get the fd'² values (Col. 7).
5. Sum up the values in Col. 6 and Col. 7 to get the Σfd' and Σfd'² values.
The standard deviation is then σ = C × √(Σfd'²/n – (Σfd'/n)²).

# Question-12

Explain the Lorenz Curve.

Solution: The measures of dispersion discussed so far give a numerical value of dispersion. A graphical measure called the Lorenz Curve is available for estimating dispersion. You may have heard of statements like 'the top 10% of the people of a country earn 50% of the national income while the top 20% account for 80%'. An idea about income disparities is given by such figures. The Lorenz Curve uses the information expressed in a cumulative manner to indicate the degree of variability. It is especially useful in comparing the variability of two or more distributions.

# Question-13

What are the steps required for the construction of the Lorenz Curve?

Solution:
1. Calculate class mid-points and find cumulative totals as in Col. 3 in Example 16, given above.
2. Calculate cumulative frequencies as in Col. 6.
3. Express the grand totals of Col. 3 and 6 as 100, and convert the cumulative totals in these columns into percentages, as in Col. 4 and 7.
4. Now, on the graph paper, take the cumulative percentages of the variable (incomes) on the Y-axis and the cumulative percentages of frequencies (number of employees) on the X-axis, as in Figure 6.1. Thus each axis will have values from 0 to 100.
5. Draw a line joining the coordinates (0, 0) and (100, 100). This is called the line of equal distribution, shown as line 'OC' in Figure 6.1.
6. Plot the cumulative percentages of the variable against the corresponding cumulative percentages of frequency. Join these points to get the curve OAC.
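To tie the worked examples above together, here is a short Python sketch (an editorial addition, not part of the lesson) that reproduces the numbers from Questions 1–3 and the interpolated quartiles of Question 2.

```python
# Question 1: range and quartile deviation of raw observations
values = [20, 25, 29, 30, 35, 39, 41, 48, 51, 60, 70]
n = len(values)
ordered = sorted(values)

rng = max(values) - min(values)              # Range = 50
q1 = ordered[(n + 1) // 4 - 1]               # 3rd value = 29
q3 = ordered[3 * (n + 1) // 4 - 1]           # 9th value = 51
qd = (q3 - q1) / 2                           # Q.D. = 11.0

# Question 3: mean deviation (and, for comparison, the standard deviation)
x = [2, 4, 7, 8, 9]
mean = sum(x) / len(x)                                    # 6.0
md = sum(abs(v - mean) for v in x) / len(x)               # 2.4
sd = (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5    # ~2.61

# Question 2: quartiles of a continuous frequency distribution by interpolation
classes = [(0, 10, 5), (10, 20, 8), (20, 40, 16), (40, 60, 7), (60, 90, 4)]

def quartile(classes, k):
    """k-th quartile (k = 1 or 3): Q = L + ((k*n/4 - c.f.)/f) * i."""
    total = sum(f for _, _, f in classes)
    target = k * total / 4
    cum = 0
    for lo, hi, f in classes:
        if cum + f >= target:
            return lo + (target - cum) / f * (hi - lo)
        cum += f

print(rng, q1, q3, qd)                      # 50 29 51 11.0
print(mean, md, round(sd, 2))               # 6.0 2.4 2.61
print(quartile(classes, 1))                 # 16.25
print(round(quartile(classes, 3), 2))       # 42.86
```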
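Since the lesson's Example 16 data is not reproduced above, here is a separate sketch of the Lorenz-curve construction of Question 13 using made-up class mid-points and frequencies (both are assumed values, purely for illustration).

```python
# Hypothetical data standing in for the lesson's "Example 16":
mid_points = [10, 30, 50, 70, 90]      # assumed class mid-points (income)
employees  = [15, 10, 8, 5, 2]         # assumed number of employees per class

income_totals = [m * f for m, f in zip(mid_points, employees)]

def cumulative_percent(xs):
    """Running totals of xs expressed as percentages of the grand total."""
    out, running, total = [], 0, sum(xs)
    for v in xs:
        running += v
        out.append(100 * running / total)
    return out

x = cumulative_percent(employees)       # cumulative % of employees (X-axis)
y = cumulative_percent(income_totals)   # cumulative % of income    (Y-axis)

# The Lorenz curve joins (0, 0) to the points (x_i, y_i); the line of equal
# distribution joins (0, 0) to (100, 100).  The further the curve sags below
# that line, the greater the dispersion (inequality) in the distribution.
for point in zip([0.0] + x, [0.0] + y):
    print(point)
```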
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9211463332176208, "perplexity": 1181.540141640709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720380.80/warc/CC-MAIN-20161020183840-00043-ip-10-171-6-4.ec2.internal.warc.gz"}
https://infoscience.epfl.ch/record/126240
Infoscience – Conference paper

# Enhanced viscoplastic modelling of soft soils

Geotechnical experimental observations of the influence of strain rate on the mechanical behaviour of soft soils reflect fundamental microscopic processes. The main macroscopic manifestations are creep, relaxation and the effect of strain rate on strength. With these considerations in mind, an innovative constitutive formulation accounting for the time dependency of the mechanical behaviour of soft soils is developed, based on the unique effective stress–strain–strain rate concept. The performance of the proposed model is presented with numerical examples and comparisons with experimental data.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.828962504863739, "perplexity": 2057.5285752013274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00092-ip-10-171-10-70.ec2.internal.warc.gz"}
http://math.stackexchange.com/users/754/pupil?tab=summary
Pupil Reputation Next privilege 500 Rep. Access review queues 4 12 Impact ~17k people reached • 0 posts edited 1 Solution of Second order ODE: theoretical question 1 Bounded Matrix-Vector Multiplication 1 Integrate $\ln x \cos(\ln x) \,dx$ 1 Can decimal numbers be considered “even” or “odd”? 0 Physical applications of Chebyshev's equation. ### Reputation (441) This user has no recent positive reputation changes ### Questions (6) 17 How to deduce the CDF of $W=I^2R$ from the PDFs of $I$ and $R$ independent 12 Proof of upper-tail inequality for standard normal distribution 7 Black Scholes PDE and its many solutions 1 How is this series in denominator converted to a series in numerator? 1 What is the CDF for the following PDF of a cut-off log-normal distribution (in Matlab)? ### Tags (21) 1 differential-equations × 2 1 parity 1 elementary-number-theory 1 self-learning 1 abstract-algebra 1 calculus 1 integration 0 probability-distributions × 3 1 matrices 0 statistics × 2 ### Accounts (48) Stack Overflow 2,517 rep 74288 Quantitative Finance 505 rep 411 Mathematics 441 rep 412 Cross Validated 352 rep 1512 Web Applications 296 rep 212
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8770917654037476, "perplexity": 3593.256171213986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00117-ip-10-164-35-72.ec2.internal.warc.gz"}