### Data Can Help Schools Confront ‘Chronic Absence’
By Dian Schaffhauser 09/22/16
https://thejournal.com/articles/2016/09/22/data-can-help-schools-confront-chronic-absence.aspx
In June, the Office for Civil Rights shared data compiled from a 2013-2014 survey completed by nearly every school district and school in the United States. New is a report from Attendance Works and the Everyone Graduates Center that encourages schools and districts to use their own data to pinpoint ways to take on the challenge of chronic absenteeism.
There are two reasons to lean on the data. The first is research that shows that missing that much school is correlated with “lower academic performance and dropping out.” The second is that data helps in identifying students earlier in the semester in order to get a jump on possible interventions.
The report offers a six-step process for using data tied to chronic absence in order to reduce the problem.
The first step is investing in “consistent and accurate data.” That’s where the definition comes in — to make sure people have a “clear understanding” and so that it can be used “across states and districts” with school years that vary in length. The same step also requires “clarifying what counts as a day of attendance or absence.”
The second step is to use the data to understand what the need is and who needs support in getting to school. This phase could involve defining multiple tiers of chronic absenteeism (at-risk, moderate or severe), and then analyzing the data to see if there are differences by student sub-population — grade, ethnicity, special education, gender, free and reduced price lunch, neighborhood or other criteria that require special kinds of intervention.
Step three asks schools and districts to use the data to identify places getting good results. By comparing chronic absence rates across the district or against schools with similar demographics, the “positive outliers” may surface, showing people that the problem isn’t unstoppable but something that can be addressed for the better.
Steps five and six call on schools and districts to help people understand why the absences are happening and to develop ways to address the problem.
The report links to free data tools on the Attendance Works website, including a calculator for tallying chronic absences and guidance on how to protect student privacy when sharing data.
The full report is freely available on the Attendance Works website.
++++++++++++++
more on big data in education in this IMS blog
http://blog.stcloudstate.edu/ims?s=data
We know that many of you have been interested in exploring Turnitin in the past, so we are excited to bring you an exclusive standardized price and more information on the roll out of Feedback Studio, replacing the Turnitin you have previously seen. We would like to share some exciting accessibility updates, how Feedback Studio can help faculty deliver formative feedback to students and help students become writers. Starting today thru December 31st non-integrated Feedback Studio will be $2.50 and integrated Feedback Studio will be $3 for new customers! Confused by the name? Don’t be! Turnitin is new and improved! Check out this video to learn about Feedback Studio!
Ariel Ream – Account Executive, Indianapolis [email protected] – 317.650.2795
Juliessa Rivera – Relationship Manager, Oakland [email protected] – 510.764.7698
Juan Valladares – Account Representative, Oakland
Turnitin Webinar
Wednesday, September 21, 2016
11:00 am | Central Daylight Time (Chicago) | 1 hr
Meeting number (access code): 632 474 162
https://mnscu.webex.com/mnscu/j.php?MTID=mebaec2ae9d1d25e6774d16717719008d
+++++++++++++++++++
my notes from the webinar
I am prejudiced against TI and I am not hiding it; that does not mean that I am wrong.
For me, TurnitIn (TI) is an anti-pedagogical “surfer,” using the hype of “technology” to ride the wave of overworked faculty, who hope to streamline an increasing workload with technology instead of working on pedagogical resolutions of issues that are not that new.
Lo and behold, Juan, the TI presenter, is trying to dazzle me with stuff that has not dazzled me in a long time.
WCAG 2.0 AA standards of the W3C and section 508 of the rehabilitation act.
the sales pitch: 79% of students believe in feedback, but only 50%+ receive it. His source is TurnitIn surveys from 2012 to 2016 (in a very, very small font size (ashamed of it?))
It seems to me very much like “massaged” data.
Testimonials: one professor and one students. Ha. the apex of qualitative research…
next sales pitch: TurnitIn feedback studio. Not any more the old Classic. It assesses the originality. Drag and drop macro-style notes. Pushing rubrics. but we still fight for rubrics in D2L. If we have a large amount of adjuncts. Ha. another gem. “I know that you are, guys, IT folks.” So the IT folks are the Trojan horse to get the faculty on board. put comments on
This presentation is structured dangerously askew: IT people but no faculty. If faculty were present, they would object that they ARE capable of doing the very thing that is proposed to be automated.
Moreover, why do I have to pay for another expensive piece of software if we have already paid Microsoft? MS Word can do everything that has been presented so far. Between MS Word and D2L, it becomes redundant.
Why the heck am I interested in middle school and high school?
TI was sued for illegal collection of papers; papers are stored in its database without the consent of the students who wrote them. TI goes to “great lengths to protect the identity of the students,” but still collects their work [illegally?].
November 10 – 30 day free trial
otherwise, $3 per student, which prompts back: between Google, MS Word and D2L (which we already pay heftily for), why pay another exorbitant price?
D2L integration: version, which does not work. LTI.
“small price to pay for such a beauty” – it does not matter how quick and easy the integration is; it is a redundancy that can already be resolved with existing tools, part of which we are paying a hefty price for
Play recording (1 hr 4 min 19 sec) https://mnscu.webex.com/mnscu/ldr.php?RCID=a9b182b4ca8c4d74060f0fd29d6a5b5c
### 10 Big Hurdles to Identifying and Educating the Nation’s Smartest Kids
1. Just 8.8 percent of U.S. students are classified as “high achievers” in mathematics, according to the most recent international assessments. That’s well below the average of 12.6 percent for affluent nations.
2. No Child Left Behind, the 2001 federal law, incentivizes “just getting kids over a bar,” Finn says. “In the public policies affecting our schools — state and federal — there’s almost no incentive to boost a smart kid up the scale or take someone who’s ‘proficient’ and push them to ‘advanced.’ ” [We’ve written before about proficiency and the tendency, under high-stakes testing, for schools to focus resources on kids who are “on the bubble.”]
7. One promising practice from overseas is screening all kids at third or fourth grade — after they’ve had a few years of school — and directing special resources to the top scorers. Here in the U.S., all third-graders are tested, but the high scorers don’t get anything. Meanwhile, screening for gifted programs usually happens in kindergarten, which creates a heavy bias toward those who come from more affluent homes.
+++++++++++++++++
more on gifted education in this IMS blog
### Virtual Reality to Drive Rapid Adoption of 360 Degree Cameras
VR’s applications for education have been much lauded, and tech heavyweights have begun investing in the technology, in part to both enable and capitalize on educational opportunities. Google, for example, has been offering its low-cost Google Cardboard kits, which, coupled with the Google Expeditions service, provides VR-based educational experiences and learning activities.
According to market research firm ABI Research, some 6 million consumer and prosumer cameras are expected to ship by 2021. (That’s out of a total of 70 million VR devices that are forecast to ship by then.)
### Utilizing Augmented Reality For Special Needs Learning
Augmented reality is a variation of virtual environments, but has a few added advantages for special needs learning. With virtual environments the user is completely immersed in a virtual world and cannot see the real environment around him or her. This may cause some confusion for special needs learners and can hinder learning. In contrast, augmented reality allows the user to see the real world with virtual objects superimposed upon or composited with the real world. This provides the greatest benefit as learners remain part of the world around them and learn easily.
++++++++++++++++++++++++++
more on the topic
Muñoz, Silvia Baldiris Navarro and Ramón, “Gremlings in My Mirror: An Inclusive AR-Enriched Videogame for Logical Math Skills Learning”, Advanced Learning Technologies (ICALT) 2014 IEEE 14th International Conference on, pp. 576-578, 2014.
### Age-Based, Grade-Level System Ignores Huge Numbers of Over-Achieving Students
By Dian Schaffhauser 08/23/16
“How Can So Many Students Be Invisible? Large Percentages of American Students Perform Above Grade Level,” produced by the Institute of Education Policy at Johns Hopkins University, examined data sets from five sources: the Common Core-based Smarter Balanced assessments in Wisconsin and California, Florida’s standards assessments, the Northwest Evaluation Association’s (NWEA) Measures of Academic Progress (MAP) and the National Assessment of Educational Progress (NAEP).
Between 15 percent and 45 percent of students enter elementary classrooms each fall learning above grade level. The result is that they’re not challenged enough in school, and teacher time and school resources are wasted in trying to teach them stuff they already know.
The entire report is available on the institute’s website. http://education.jhu.edu/edpolicy/commentary/PerformAboveGradeLevel
+++++++++++++++++
more on gifted students in this IMS blog
|
I’d like to use the next couple of posts to compute the first three stable stems, using the Adams spectral sequence. Recall from the linked post that, for a connective spectrum ${X}$ with appropriate finiteness hypotheses, we have a first quadrant spectral sequence
$\displaystyle \mathrm{Ext}^{s,t}_{\mathcal{A}_2^{\vee}}(\mathbb{Z}/2, H_*( X; \mathbb{Z}/2)) \implies \widehat{\pi_{t-s} X} ,$
where the ${\mathrm{Ext}}$ groups are computed in the category of comodules over ${\mathcal{A}_2^{\vee}}$ (the dual of the Steenrod algebra), and the convergence is to the ${2}$-adic completion of the homotopy groups of ${X}$. In the case of ${X}$ the sphere spectrum, we thus get a spectral sequence
$\displaystyle \mathrm{Ext}^{s,t}_{\mathcal{A}_2^{\vee}}(\mathbb{Z}/2, \mathbb{Z}/2) \implies \widehat{\pi_{t-s} S^0},$
converging to the 2-torsion in the stable stems. In this post and the next, we’ll compute the first couple of ${\mathrm{Ext}}$ groups of ${\mathcal{A}_2^{\vee}}$, or equivalently of ${\mathcal{A}_2}$ (this is usually called the cohomology of the Steenrod algebra), and thus show:
1. ${\pi_1 S^0 = \mathbb{Z}/2}$, generated by the Hopf map ${\eta}$ (coming from the Hopf fibration ${S^3 \rightarrow S^2}$).
2. ${\pi_2 S^0 = \mathbb{Z}/2}$, generated by the square ${\eta^2}$ of the Hopf map.
3. ${\pi_3 S^0 = \mathbb{Z}/8}$, generated by the Hopf map ${\nu}$ (coming from the Hopf fibration ${S^7 \rightarrow S^4}$). We have ${\eta^3 = 4 \nu}$. (This is actually true only mod odd torsion; there is also a ${\mathbb{Z}/3}$, so the full thing is a ${\mathbb{Z}/24}$.)
In fact, we’ll be able to write down the first four columns of the Adams spectral sequence by direct computation. There are numerous fancier tools which let one go further.
1. The cobar complex
Let ${M}$ be a comodule over a coalgebra ${A}$ over some field (for instance, ${\mathcal{A}_2^{\vee}}$). There is a useful cofree resolution which one can use for computing ${\mathrm{Ext}}$ groups. Namely, consider the cosimplicial object
$\displaystyle A \otimes M \rightrightarrows A \otimes A \otimes M \dots.$
The various cosimplicial arrows come from the comultiplications on various factors and the counit maps. We can extract from this a normalized chain complex ${C_u^*(M; A)}$, which we can describe as follows:
1. In degree ${s \geq 0}$, ${C_u^s(M, A) = A^{\otimes (s+1)} \otimes M}$. Elements are written as ${a[a_1 | \dots | a_s ] m}$.
2. The coboundary ${\delta: C_u^s(M, A) \rightarrow C_u^{s+1}(M, A)}$ is described via
$\displaystyle \delta( a[a_1 | \dots | a_s] m) = a'[a'' | a_1 | \dots | a_s] m + \sum_{i=1}^s a[a_1 | \dots | a_i' | a_i'' | \dots | a_s] m + a[a_1 | \dots | a_s | a_m'] m'$
(up to signs, which will not matter below since we work mod 2).
3. Here the notation is such that the comultiplication ${\Delta: A \rightarrow A \otimes A}$ satisfies ${\Delta(a) = a' \otimes a''}$, ${\Delta(a_i) = a_i ' \otimes a_i''}$, and the map ${ M \rightarrow A \otimes M}$ sends ${m \mapsto a_m' \otimes m'}$. This is an abuse of notation—really ${\Delta(a_i)}$ may be a sum of pure tensors, etc. Note that this is a chain complex of cofree ${A}$-modules (with ${A}$ coacting on the first factor), and (by a formal argument which works in a fair bit of generality), it is a resolution of ${M}$.
4. If ${A}$ is an augmented coalgebra and ${\overline{A}}$ is the cokernel of the augmentation, then we can use a smaller version of the cobar complex: instead take
$\displaystyle C^s(M, A) = A \otimes \overline{A}^{\otimes s} \otimes M.$
This will still be a resolution of ${M}$ by cofree ${A}$-modules, and we will denote this by the ${C^s(M, A)}$. This will be the cobar complex that we use below. The formula for the differential is the same.
As an example (the purpose of this post), let’s write down what the complex to compute ${\mathrm{Ext}^{s,t}_{\mathcal{A}_2^{\vee}}(\mathbb{Z}/2, \mathbb{Z}/2)}$ (the ${E_2}$ page of the ASS for the sphere) looks like. Namely, we have to take the complex
$\displaystyle \mathcal{A}_2^{\vee} \rightarrow \mathcal{A}_2^{\vee} \otimes \overline{\mathcal{A}_2^{\vee}} \rightarrow \dots,$
which is a cofree resolution of ${\mathbb{Z}/2}$, and then take comodule maps of ${\mathbb{Z}/2}$ into this. Taking comodule maps of ${\mathbb{Z}/2}$ into this peels off the first (cofree) factor in each case, so we are left with a complex
$\displaystyle \mathbb{Z}/2 \rightarrow \overline{\mathcal{A}_2^{\vee}} \rightarrow \overline{\mathcal{A}_2^{\vee}} \otimes \overline{\mathcal{A}_2^{\vee}} \rightarrow \dots.$
This cobar complex ${\hom(\mathbb{Z}/2, C^*(\mathbb{Z}/2, \mathcal{A}_2^{\vee}))}$ has cohomology which is the ${E_2}$ page of the ASS. So, in degree ${s}$, we have the free vector space on elements
$\displaystyle [x_1| \dots |x_s], \quad x_i \in \overline{\mathcal{A}_2^{\vee}} ,$
and the coboundary is given by
$\displaystyle \delta( [x_1 | \dots | x_s]) = \sum_{i=1}^s [x_1 | \dots | x_{i-1} | x_i' | x_i''| \dots | x_s ],$
where ${\Delta(x_i) = x_i' \otimes x_i''}$ after quotienting out by the image of ${1}$ (so as to get into the reduced thing ${\overline{\mathcal{A}_2^{\vee}}}$). The nice thing about working mod 2 is that we don’t have to worry about signs.
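For instance, in the lowest case ${s = 1}$ the formula reduces to
$\displaystyle \delta([x]) = [x' | x''], \quad \overline{\Delta}(x) = x' \otimes x'' ,$
where ${\overline{\Delta}}$ denotes the reduced coproduct (that is, ${\Delta}$ followed by quotienting out the image of ${1}$ in each factor). So ${[x]}$ is a cocycle exactly when ${x}$ is primitive; this special case gets used repeatedly below.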
Note that this is a bigraded complex, which means it lets us see the bigrading of ${\mathrm{Ext}}$ (which we need for the ASS). Note also that the cobar complex has a multiplicative structure given by juxtaposing two “bars”: that is, one multiplies ${[ x_1 | \dots |x_s]}$ by ${[y_1 | \dots | y_t]}$ to get ${[x_1 |\dots | x_s | y_1 | \dots | y_t]}$. It turns out that this coincides with the Yoneda product in ${\mathrm{Ext}}$, which corresponds to the ring structure in the stable stems.
2. First steps
We now have a recipe for working out the ${E_2}$ page of the ASS for the sphere. This is a fairly large complex of which we have to compute the cohomology, though, and so we do it only in small dimensions.
For ${s = 0}$, there is only one element ${[]}$, whose differential is trivial: this gives a cycle in ${\mathrm{Ext}^{0, 0}}$.
Let’s start filling in the ${E_2}$ page of the ASS. The topologists like to typeset it with the ${s}$ vertically and the ${t-s}$ horizontally, so right now what we have computed looks like:
The dot in the position ${(0, 0)}$ indicates that there’s a ${\mathbb{Z}/2}$ there. The empty space here means that we don’t know what is there, not that it is zero.
Anyway, the point of drawing the spectral sequence this way is that the vertical lines are parametrized by ${t-s}$, so that any given homotopy group is obtained by going up one of them. The horizontal level is the “Adams filtration.”
For the rest of the post, I’ll be using the notation for the dual Steenrod algebra as in this post: that is, $\mathcal{A}_2^\vee$ is a polynomial algebra on generators $\zeta_i, i = 1, 2, \dots$ of degrees $2^i - 1$. The formula for the coproduct is described there.
Admittedly that wasn’t very interesting. Let’s now get the remaining elements on the zeroth vertical line. These are cocycles in the cobar complex of degree ${t - s =0}$. Any such cocycle is of the form ${[a_1 | \dots |a_s]}$ where the ${a_i}$ are in the augmentation ideal of ${\mathcal{A}_2^{\vee}}$, i.e. have positive degree. So the only such cocycles are
$\displaystyle [ \zeta_1 | \dots | \zeta_1 ] \quad \text{(s times)}$
These can’t be coboundaries, because there are no terms in the cobar complex with ${t-s < 0}$ to annihilate them. So, if ${h_0 \in \mathrm{Ext}^{1,1}_{\mathcal{A}_2^{\vee}}(\mathbb{Z}/2, \mathbb{Z}/2)}$ is represented by ${[\zeta_1]}$, we get a chain of dots:
The zero stem is ${\mathbb{Z}}$ and its completion is the 2-adic integers ${\mathbb{Z}_2}$, so ${h_0}$ must represent multiplication by ${2}$. It isn’t surprising at all that we’ve gotten the above associated graded for the 0-stem. It also isn’t surprising that the multiples of 2 should live in Adams filtration at least one.
3. The 1-stem
Now let’s do the 1-stem. We need to look for cocycles in the cobar complex with ${t - s =1}$. That is, we need to look for elements of the form
$\displaystyle [x_1 | \dots | x_s]$
where the total degree of the ${x_i}$ is ${s + 1}$. This means that all the ${x_i}$ but one must have degree ${1}$ (i.e., all but one of them has to be ${\zeta_1}$) and the other must have degree two: that is, it is ${\zeta_1^2}$. So the only possible candidates for cycles in the cobar complex with ${t-s = 1}$ are the elements (and their permutations)
$\displaystyle [\zeta_1^2 | \zeta_1 | \zeta_1 |\dots | \zeta_1].$
Now, if we let ${h_1 = [\zeta_1^2] \in \mathrm{Ext}^{1, 2}(\mathbb{Z}/2, \mathbb{Z}/2)}$, then we easily check that ${h_1}$ is a cycle: this corresponds to the fact that ${\zeta_1^2}$ is a primitive element of the Hopf algebra ${\mathcal{A}_2^{\vee}}$. (The dual indecomposable element of the Steenrod algebra is $\mathrm{Sq}^2$.)
So, the elements ${[\zeta_1^2 | \zeta_1 | \zeta_1 |\dots | \zeta_1]}$ are cycles and they are the only cycles with ${t-s = 1}$ in the cobar complex. These represent the classes ${h_0^i h_1}$ in the Adams spectral sequence.
Claim: ${h_1}$ is not zero (i.e., ${[\zeta_1^2]}$ is not a coboundary) but ${h_0 h_1 = 0}$.
In fact, ${[\zeta_1^2]}$ cannot be a coboundary because there is nothing with ${s =0 }$ to cobound it. So ${h_1 \neq 0}$. However,
$\displaystyle \delta( [\zeta_2]) = [\zeta_1^2 | \zeta_1]$
as one sees by going back to the formulas in ${\mathcal{A}_2^{\vee}}$. This exhibits ${h_0 h_1}$ as a coboundary (and corresponds to the fact that twice the Hopf map is stably zero).
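To spell this out (using the coproduct formula ${\Delta(\zeta_2) = \zeta_2 \otimes 1 + \zeta_1^2 \otimes \zeta_1 + 1 \otimes \zeta_2}$, as recalled in the post linked above): the reduced coproduct is
$\displaystyle \overline{\Delta}(\zeta_2) = \zeta_1^2 \otimes \zeta_1, \quad \text{so} \quad \delta([\zeta_2]) = [\zeta_1^2 | \zeta_1],$
which is exactly the cocycle representing ${h_0 h_1}$.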
So we can extend the spectral sequence:
This means that we’ve computed the first two columns of the ASS, and we find that ${\pi_1 S^0 = \mathbb{Z}/2}$. Using the Freudenthal suspension theorem, we find that the image of the Hopf map ${S^3 \rightarrow S^2}$ must be a generator.
4. The second stem
Now let’s move on to the second stem. We need to look for cocycles in the cobar complex with ${t-s = 2}$. This means that we have terms of the form
$\displaystyle [x_1 | \dots | x_s]$
where the total degree of the ${x_i}$ amounts to ${s + 2}$. This means that any such term is a permutation of
$\displaystyle [\zeta_1^2 | \zeta_1^2 | \zeta_1 | \zeta_1 | \dots | \zeta_1 ] , \quad [ \zeta_1^3 | \zeta_1 | \zeta_1 | \dots | \zeta_1].$
The second term is not a cocycle. The first term is, and represents a power of ${h_0}$ times ${h_1^2}$. Since we saw ${h_0 h_1 =0 }$ earlier, the only possibility for a nontrivial cohomology class is ${h_1^2 = [\zeta_1^2 | \zeta_1^2]}$.
Claim: ${h_1^2 \neq 0}$.
This corresponds to the fact that the square of the Hopf map is stably essential. We can prove it in the cobar complex, though: we have to show that ${[\zeta_1^2 | \zeta_1^2]}$ is not a coboundary. In fact, a cobounding element would have to be something like ${[x]}$ where ${x}$ has degree four in ${\mathcal{A}_2^{\vee}}$: that is, either ${x = \zeta_1^4}$ or ${x = \zeta_2 \zeta_1}$. It’s easy to check that no combination of those possibilities works. So the spectral sequence now looks like:
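To make the “easy to check” step explicit, here is one way to run that verification: the candidate cobounding elements have reduced coproducts
$\displaystyle \overline{\Delta}(\zeta_1^4) = 0, \qquad \overline{\Delta}(\zeta_1 \zeta_2) = \zeta_2 \otimes \zeta_1 + \zeta_1^3 \otimes \zeta_1 + \zeta_1^2 \otimes \zeta_1^2 + \zeta_1 \otimes \zeta_2,$
so ${\delta([\zeta_1^4]) = 0}$, while ${\delta([\zeta_1\zeta_2])}$ contains the extra terms ${[\zeta_2|\zeta_1] + [\zeta_1^3|\zeta_1] + [\zeta_1|\zeta_2]}$ in addition to ${[\zeta_1^2|\zeta_1^2]}$. Hence no ${\mathbb{Z}/2}$-combination of ${[\zeta_1^4]}$ and ${[\zeta_1\zeta_2]}$ has coboundary equal to ${[\zeta_1^2|\zeta_1^2]}$.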
Now this takes care of the first two stems (it’s easy to see that the spectral sequence has to degenerate here). I think we can treat the third stem in the same way, but it’ll require a little more effort, so it’ll be the subject of the next post.
|
# Updates on Plagiarism Scandal, Journal of K-theory
Plagiarism Scandal
Today’s Nature has an article by Geoff Brumfiel with more details on the plagiarism scandal described here. At last count it involves 15 authors, 67 papers on the arXiv, of which about 35 were refereed and published, in 18 different journals. The arXiv has set up a special page with information about this. As far as I can tell from checking a few examples, most of the published papers are still available online at the journals, with no indication of their plagiarized nature. One exception is the plagiarized paper at JHEP, which has now been removed, with the notation
This paper has been removed because of plagiarism. We regret that the paper was published.
As far as I know, neither JHEP nor any of the journals has given any indication of an intent to change their refereeing procedures because of this scandal.
Journal of K-theory
The editors of the new Journal of K-theory have issued a public statement, explaining in detail their plans for how to handle papers submitted to the older journal, K-theory, where they had resigned as editors.
This entry was posted in Uncategorized. Bookmark the permalink.
### 28 Responses to Updates on Plagiarism Scandal, Journal of K-theory
1. Thomas Love says:
It does make me wonder about the quality of the refereeing.
2. Jon Lester says:
The way the review process is applied in physics is a scandal.
Jon
3. chris says:
Then do you have any suggestions on how to improve it? remember: referees are not paid, they take time off from pursuing their own work and, this is the most important point i never see properly expressed, they are there to check the scientific soundness and impact of the work. they are not there to check spelling, grammar, fraud or plagiarism. when i get a paper to referee, i assume that the authors were honest. if i don’t, i wouldn’t even know where to start. should i redo the whole work or what to see if it is reproducible? sure, the key argumentation and drawing of conclusions are tractable, but if someone says, this and that is the outcome of a particular experimental setup or a one year calculation on a supercomputer reveals this number as a result – how can you as a referee challenge that? you have to rely on the honesty of the authors.
it is similar with respect to plagiarism. ideally, of course, you should know all the literature in the field of the paper that you are refereeing. but honestly, there are certain topics that are pursued by only one group of people worldwide and unless you want them to self-referee, someone a bit outside has to be chosen. you can’t seriously expect them to dig thru all the literature in search of plagiarism. this is not what referees are supposed to do!
and on a final note: why is the revelation of plagiarism such a scandal for the refereeing system? it just shows that basically it works. the culprits have been identified and you can be quite sure that their career is over. and given the fact that in order to be of any relevance, a paper has first to be noticed, it just absolutely floors me how these people think they could ever get away with it and have some advantage from plagiarizing. as soon as the work really becomes known, the plagiarism is soon uncovered.
the same by the way is true for fraud. remember the guy claiming to have produced element 117 (or was it 118?)? he stated an experimentally falsifiable wrong statement. just by common sense there are only 2 future prospects for such a statement: either nobody cares in which case you have to ask yourself what the motivation was for cooking up the fraud in the first place. or people are interested and will inevitably uncover the fraud.
so i would say that what we witness here is good proof that science is healthy and has self-correcting mechanisms that work.
4. prague_phys says:
Look at the webpage of ALİ HAVARE,
http://www.mersin.edu.tr/apbs.php?id=335
one of the persons involved. Most of his (former) students, such as Yetkin, Aydogdu, Salti and Korunur were involved as well. Wonder who was the head of the gang.
5. Elisha Feger says:
The K-Theory journal situation still makes my head hurt, no matter how much I read up on it.
6. matt says:
Elisha, can you check the reference number 25, page 343 in K-theory Volume 36 (2005) in the printed version (not online version)
to get an example of the ‘seriousness’ with which Springer has been treating the manuscripts of K-theory!!
7. Peter Woit says:
chris,
I don’t see how this is evidence of a healthy system at work. If the authors had bothered to change the wording and notation when they plagiarized, no one would be the wiser.
One of the main roles of the referee is to determine whether the result claimed in a paper is not just correct, but original. If a referee knows nothing about whether the main result claimed in a paper is something that has been done before, and they don’t want to take the time to look into this, they shouldn’t be refereeing the paper. What this scandal shows is that getting unoriginal (and uninteresting) work published in most theoretical physics journals is almost trivially easy. One reason for this is that the results are of such little interest that no one other than the referee is likely to actually read the paper and notice that it isn’t original.
The journals should either find a way to do the kind of peer review that they are claiming to do (and charging lots of money for), or admit that it just can’t be done any more and give up. In the meantime, it might be a good idea to deal with the current situation by putting warning labels on these journals saying something like “the editors haven’t been able to determine if these papers contain original research or not”.
8. chris says:
hi peter,
actually i agree with you. and for all practical purposes, these warning labels are already there i would say. because from personal experience i conclude that what counts (in a positive sense) is the recognition of a paper and not journal reference (on the negative, having a preprint without journal reference is a very big warning sign). i think there is consensus that a lot of published work is not worth the paper it was printed on. but i think there is equal consensus, that it is more worthwhile to push ones own research than debuking others unless they make very strong claims or do something that affects you personally.
i am not saying that this state of affairs is ideal. but i see it as the price to pay for the ease of information exchange. what it ultimately boils down to is that in order to judge a paper’s merits you actually have to read and understand it yourself.
so in all honesty, classical journals in my opinion are outdated already. they do have some merit, but the speed of todays research just makes a close to 100% identification rate of ‘good’ papers illusionary. the problem in my opinion comes in when what they do publish is taken as gospel. when selection committees count the number of papers rather than reading the 3 topcited ones. but i see this as a problem on the recipient side. a judgment based on metadata of a persons publication record is just not foolproof. today probably less than decades ago.
for plagiarism specifically i can only repeat my claim, that the only way it can stay undiscovered is when nobody cares about the research and nobody has suffered negative consequences. it is kind of sad that people exist who wish to populate this corner. but i doubt that much more resources of serious scientists should be spent to explicitly search for them in cases where their plagiarism has next to no effect.
but to be constructive, let me ask what you think journals or referees should do?
since refereeing can only be sensibly done by active researchers, this implies that more thorough refereeing will leave less time for research.
9. Jon Lester says:
Peter,
What makes me feel sad about the peer review system in physics is that good ideas may fail to go through while rubbish, just because is well recognized fashion, can easily get published on the most important journals. What is worst are the reasons for rejecting a paper that are generally unsound, but now, besides the reviewers, publishers tend to hide also the editors. No person takes on responsibility for the large number of errors the system is badly doing in this historical period.
A paper of mine was rejected by a DAE of PRL because “His work has had no response by the community. I do not understand what he is doing”. Other “reasons” like these can be found as I have a large file of published papers and a lot of unpublished ones and it would be really fun to get the reports of such reviewers known, after so long time, to see how badly they turn out to be wrong. I think there is a lot of people out there in similar situations but we are all silent because we fear to have our other papers no more published.
I worked in almost all field of physics but the absolutely worst situation is in particle physics. A referee claimed that “QCD lattice computations do not reflect reality” to reject a paper of mine. So, there is a lot of people out there wasting time, money and resources!
There is again a lot to say. For the moment I stop here. But I would like to discuss what could make a paper publishable and what should mean “important”, a criteria largely questionable and deemed to the taste of the single person. In this way is truly easy to suggest rejection on questionable personal feelings.
Jon
10. chris says:
hi jon,
did you try to submit said paper to another journal than prl and did it make it?
honestly, i think that every paper which is not totally meritless will certainly find a journal these days.
and even if that should happen, if the paper reaches 50 or 100 citations it doesn’t matter that much anymore. people will get curious as to why it didn’t make it. and if it doesn’t reach that, well, chances are nobody would have cared anyways.
11. hard gluon says:
when i get a paper to referee, i assume that the authors were honest. if i don’t, i wouldn’t even know where to start.
Then you shouldn’t referee. If you don’t know the field well enough to know whether a result is original or not, accepting to referee is dishonest on your part.
12. Jon Lester says:
Hi chris,
Yes, I did it and I get it accepted in a few days by the editor. I think (my personal judgement) that the paper does worth that.
Jon
13. hard gluon said:
“Then you shouldn’t referee. If you don’t know the field well enough to know whether a result is original or not, accepting to referee is dishonest on your part.”
I think your judgment is too harsh. In our age of narrow specialization, there could be only 2 or 3 people in the world (including the author) who know exactly the background of each particular manuscript. Often these people are either collaborators or rivals, in which cases it doesn’t make sense to ask them to review the paper.
For the rest of us it could be almost impossible to read all the references in the reviewed work, to repeat all calculations, or redo the experiment. So, it is impractical to ask the referee to give a 100% “seal of approval”. If that would be possible and only “correct” papers appeared in print, then scientific journals would be 100 times thinner than they are now.
This is not only unrealistic, but also dangerous, because increasing the barrier for publication may prevent novel non-mainstream ideas from being published. My attitude is that “s**t happens”, and that one or two plagiarized papers will not have any effect, except for the embarrassement to their authors. There are things much more dangerous for the health of science. String theory and anthropic groupthinks are high in this list. We should be thankful to Peter for keeping focus on these areas.
Eugene.
14. hard gluon says:
I think your judgment is too harsh.
Maybe it is, hence my nickname :). Besides, nobody is infallible and plagiarism may be hard to detect. Yet, take for example the paper “Brane-world black holes and energy-momentum vector”. I’m sure there are many people out there who are involved sufficiently in the field of black holes in brane worlds to realize whether that paper is lifting its results from some other recent papers. After all, twenty years back nobody was studying brane worlds.
to repeat all calculations, or redo the experiment.
That is obviously something a referee should never do. If there are obvious (to an expert) mistakes, those should be pointed out. But if the results reported in a paper look consistent and plausible enough to an expert eye, no further checking should be necessary. It’s the author’s business, I think, to take care that their results are correct. It is their professional reputations that is at stake.
Having said this, I must say that once a referee found a factor 1/2 wrong in a complicated equation for a cross section in a manuscript I submitted. He/she thought it was a typo. Well, it wasn’t, it was a calculational mistake, and to this very date I have no idea how that referee caught it… This happened some time ago, I’m not sure that kind of quality refereeing happens very often these days.
15. ali says:
Hi Peter,
I happen to be from turkey originally so I know the environment there quite well. These folks in question hardly know any english at all. If you asked them to give a 10 minute presentation in english about anything, they wouldn’t be able to. The system gives promotion based on bean counting in turkey these days so the strategy is “publish as many as you can” without any regard to quality. That is why most of the papers in question are published in obscure journals. Nobody cares where you publish. Turkey as a country is yet to publish one single paper in science or nature.
16. Ian says:
when i get a paper to referee, i assume that the authors were honest. if i don’t, i wouldn’t even know where to start. should i redo the whole work or what to see if it is reproducible?
One thing I’ve started doing with every paper I review is pick a handful of phrases, from the intro, the discussion, the results, and just run them through Google. It takes less than five minutes but has at least a chance of picking up plagiarism. (So far, thank God, I haven’t found any, and I hate to think of what I’d have to do if I did.)
You’re right, you can’t repeat experiments. If you’re lucky, you have experience with a close-enough system to notice if something is plausible or not. You can also take a few minutes to mentally work through a protocol; there’s a well-known instance where a reviewer caught a case of fraud because he realized the experiment as described would have taken thousands of tissue-culture flasks.
Editors are supposedly looking at figures more closely nowadays, including using some automated techniques that will pick up simple fakes. (I’m in biology, and I don’t know if other fields have similar approaches.) I do look with a skeptical eye at figures, but I’m not sure that I’d pick up any but the crudest forgeries.
17. Highly Cited Researcher says:
Ali,
It’s the same all over the Mediterranean (Spain, Italy, Greece, etc.). Promotion depends on the count of papers weighted by impact factor. The more self-citations the better. Governments and universities looking for ‘objective’ criteria for evaluation impose such requirements. The commercial journals depend on this to survive; the journal is outrageously expensive, but can count on a large number of mediocre scientists looking for ‘impact’ to submit their articles. That the publisher produces three journals in the same area is no surprise either; the author and his friends rotate journals, citing each other into the university administration. Once there they see to it that more ‘objective’ criteria for promotion are imposed, favoring ambitious mediocrities like themselves, because such people are incapable of challenging them scientifically, and are easily made dependent.
18. chris says:
hi ian,
“One thing I’ve started doing with every paper I review is pick a handful of phrases, from the intro, the discussion, the results, and just run them through Google.”
that actually is a really good suggestion!
19. prague_phys says:
Highly Cited Researcher, we have similar system here in eastern Europe. My opinion is that if bureaucrats wants numbers, then total number of citations and determination in which decile the author is in terms of citations in his subfield would be much much better than what we have, though still far from ideal.
20. la dernier fois says:
Just like that googling idea, isn’t it natural to suggest a similar service from the arxiv? A referee would get a candidate-to-be-paper, run a quick search there (arxiv), if too many similarities come up, well… then she/he’s done already!
but, wait… there’s really nothing similar already?
because, if there isn’t, well… I guess plagiarism is just being asked for…right?
21. ali says:
Highly cited researcher,
I am aware of it. It would be more appropriate if people paid attention to the journal the article was published as well. For instance, in china, government pays 1000 dollars per impact factor of the journal the article was published. If you publish one article in science, you receive 30K from the government so there is an incentive to publish in high reputation journals. In countries like greece and turkey, there is no such thing. People can barely speak english, let alone write articles about black holes.
22. Anonymous Referee says:
Coincidentally, I was just asked to referee a paper (not found on the arXiv) by different Turkish authors (although from one of the institutions involved), which was also plagiarized. However, in this case, the stealing was obvious merely from following up on the paper’s references. It looked like the authors had no idea they were doing anything wrong! I’m not sure what to make of this, except that all this plagiarizing must be emblematic of a serious misapprehension of scientific ethics in some corners of the Turkish physics community.
23. Ingwer Angström says:
Ali i take it that you have some idea about the status of the physics community in turkey but please don’t extrapolate it to greece. There are many good greek researchers and people actually do speak english. On the other hand it’s true that the greek (i guess the turkish too) government prefers to buy weapons (20000 million € in the next 10 years) rather than to seriously invest in research. That’s sad but it will not change soon.
24. Anon. says:
“For instance, in china, government pays 1000 dollars per impact factor of the journal the article was published. ”
Is this true?! Does anybody know?
25. Paul says:
I agree totally with you about Berlinski. His ‘tour of the calculus’ might be the worst book I’ve ever read. One more reason for me to buy yours…
26. Ilja says:
Peter Woit Says “What this scandal shows is that getting unoriginal (and uninteresting) work published in most theoretical physics journals is almost trivially easy. One reason for this is that the results are of such little interest that no one other than the referee is likely to actually read the paper and notice that it isn’t original.”
What do you expect in a world of “publish or perish”? If people are forced to publish, they will publish. Even if they don’t have anything interesting to say.
27. JDR says:
Trust me, it’s not just physics that publishes rubbish. It’s a huge problem that needs to be solved. As Ilja implied, publish or perish needs to be changed and it needs to be changed by us. If you don’t have anything worthwhile to say, don’t say it. If we all exercise restraint and only publish what is really worth something (and we all know when that happens, and it doesn’t happen that often) we can start to change things.
Yes, it may mean not landing the coveted “position”, and it may mean teaching in community college or non-tenure positions. But isn’t it worth the price to be able to say ” I may have only published X number of papers, but they really meant something”. I can’t help but feel this is a personal responsibility issue (as most are).
|
Pair elements of two lists by condition without reusing elements
I have two lists that I want to join on a condition. Unlike in a relational algebra join, if more than one element of each list can be matched, only one should be selected, and shouldn't then be reused. Also, if any of the elements of the first list don't match any of the second list, the process should fail.
Example:
# inputs
list1 = [{'amount': 124, 'name': 'john'},
{'amount': 456, 'name': 'jack'},
{'amount': 456, 'name': 'jill'},
{'amount': 666, 'name': 'manuel'}]
list2 = [{'amount': 124, 'color': 'red'},
{'amount': 456, 'color': 'yellow'},
{'amount': 456, 'color': 'on fire'},
{'amount': 666, 'color': 'purple'}]
keyfunc = lambda e: e['amount']
# expected result
[({'amount': 124, 'name': 'john'}, {'amount': 124, 'color': 'red'}),
({'amount': 456, 'name': 'jack'}, {'amount': 456, 'color': 'yellow'}),
({'amount': 456, 'name': 'jill'}, {'amount': 456, 'color': 'on fire'}),
({'amount': 666, 'name': 'manuel'}, {'amount': 666, 'color': 'purple'})]
I've written a working implementation in Python, but it seems clunky, unclear and inefficient:
def match(al, bl, key):
bl = list(bl)
for a in al:
found = False
for i, b in enumerate(bl):
if key(a) == key(b):
found = True
yield (a, b)
del bl[i]
break
result = list(match(list1, list2, key=keyfunc))
• Is there a specific reason to keep the dictionaries separated? I mean, could the return value be like [{'amount': 124, 'name': 'john', 'color': 'red'}, {'amount': 456, 'name': 'jack', 'color': 'yellow'}, …]? – 409_Conflict Feb 13 '18 at 10:05
• @MathiasEttinger I wanted to keep the match() function generic; the entries may not necessarily be dictionaries, or the caller may want to merge them differently (e.g. only copying a subset of the keys from one to the other). – André Paramés Feb 13 '18 at 10:18
• Is list2 allowed to have unused elements? – Eric Duminil Feb 13 '18 at 21:59
• @EricDuminil yes – André Paramés Feb 14 '18 at 9:46
I think you could simplify your code a bit by considering the second list to be a dictionary mapping each key to the list of values with that key. This would avoid doing a linear search over the second list for each element of the first list, and also lends itself to less code:
from collections import defaultdict
def match(al, bl, key):
table = defaultdict(list)
for b in bl:
table[key(b)].append(b)
return [(a, table[key(a)].pop(0)) for a in al]
The function will raise an IndexError exception in case the key does not reference a matching element in bl.
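As a quick sanity check (assuming the list1, list2 and keyfunc definitions from the question), this behaves as hoped:
result = match(list1, list2, keyfunc)
# result[0] == ({'amount': 124, 'name': 'john'}, {'amount': 124, 'color': 'red'})
# result[1] == ({'amount': 456, 'name': 'jack'}, {'amount': 456, 'color': 'yellow'})
# result[2] == ({'amount': 456, 'name': 'jill'}, {'amount': 456, 'color': 'on fire'})
# result[3] == ({'amount': 666, 'name': 'manuel'}, {'amount': 666, 'color': 'purple'})
# match(list1, list2[:3], keyfunc) raises IndexError, because no 666 entry is left for 'manuel'.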
• What's slightly "meh" with this proposal in my opinion is that the list comprehension relies on a side effect of pop, namely that it removes the selected element from the list. – Frerich Raabe Feb 13 '18 at 20:12
• That's nice and concise. I'd say that pop's main job is to remove an element, it really isn't surprising at all IMHO. Your method doesn't detect if b has unused elements. I'm not sure if that's a bug or a feature. I'll ask OP. – Eric Duminil Feb 13 '18 at 21:58
• @EricDuminil To me, a list comprehension is a declarative construct (as opposed to an imperative for loop) much like in Haskell (from which Python derived list comprehensions). In this mathematical sense, the value to which each element in the input sequence is mapped does not depend on other values (resp. the order in which they are processed). This independence among the elements is broken by the pop call (due to which you depend on a very specific order in which the list comprehension processes the elements in al). – Frerich Raabe Feb 14 '18 at 7:28
• @EricDuminil As for detecting unused elements in bl, this could be tested after the list comprehension by checking if any(not x for x in table.values()) holds. – Frerich Raabe Feb 14 '18 at 7:33
• No need to, according to OP, your behaviour is a feature and not a bug ;) Well done! – Eric Duminil Feb 14 '18 at 9:53
Theory
It might not be clear at first, but you're basically describing a bipartite graph.
You're interested in finding the maximum matching and if it's perfect.
NetworkX is a great Python library for graphs, and the maximum_matching function is already implemented. It uses the Hopcroft-Karp algorithm and runs in $O(n^{2.5})$ where $n$ is the number of nodes.
You only have to preprocess your lists into a graph and let networkx do its job.
Code
Here's a slightly modified version of a previous answer on Stack Overflow:
import networkx as nx
import matplotlib.pyplot as plt
def has_a_perfect_match(list1, list2):
if len(list1) != len(list2):
return False
g = nx.Graph()
l = [('l', d['name'], d['amount']) for d in list1]
r = [('r', d['color'], d['amount']) for d in list2]
edges = [(a,b) for a in l for b in r if a[2] == b[2]]
# The nodes and edges still need to be added to the graph before matching and drawing;
# these two calls appear to have been dropped when the snippet was copied.
g.add_nodes_from(l + r)
g.add_edges_from(edges)
pos = {}
pos.update((node, (1, index)) for index, node in enumerate(l))
pos.update((node, (2, index)) for index, node in enumerate(r))
m = nx.bipartite.maximum_matching(g, l)
colors = ['blue' if m.get(a) == b else 'gray' for a,b in edges]
nx.draw_networkx(g,
pos=pos,
arrows=False,
labels = {n:"%s\n%d" % (n[1], n[2]) for n in g.nodes()},
edge_color=colors)
plt.axis('off')
plt.show()
return len(m) // 2 == len(list1)
As a bonus, it displays a diagram with the graph and maximum matching:
list1 = [{'amount': 124, 'name': 'john'},
{'amount': 456, 'name': 'jack'},
{'amount': 456, 'name': 'jill'},
{'amount': 666, 'name': 'manuel'}]
list2 = [{'amount': 124, 'color': 'red'},
{'amount': 456, 'color': 'yellow'},
{'amount': 456, 'color': 'on fire'},
{'amount': 666, 'color': 'purple'}]
print(has_a_perfect_match(list1, list2))
# True
list1 = [{'amount': 124, 'name': 'john'},
{'amount': 456, 'name': 'jack'},
{'amount': 457, 'name': 'jill'},
{'amount': 666, 'name': 'manuel'}]
list2 = [{'amount': 124, 'color': 'red'},
{'amount': 458, 'color': 'yellow'},
{'amount': 456, 'color': 'on fire'},
{'amount': 666, 'color': 'purple'}]
print(has_a_perfect_match(list1, list2))
# False
Notes
The desired matching is in m and has a slightly different format than what you mentioned:
{('l', 'jack', 456): ('r', 'yellow', 456), ('l', 'jill', 456): ('r', 'on fire', 456), ('l', 'john', 124): ('r', 'red', 124), ('l', 'manuel', 666): ('r', 'purple', 666), ('r', 'red', 124): ('l', 'john', 124), ('r', 'yellow', 456): ('l', 'jack', 456), ('r', 'purple', 666): ('l', 'manuel', 666), ('r', 'on fire', 456): ('l', 'jill', 456)}
It does have enough information, though.
Note that the edge generation isn't optimal (it's $O(n^{2})$ and could be $O(n)$ with dicts) but it's concise and still faster than the matching algorithm. Feel free to modify it!
Optimization
@Peilonrayz' answer has better performance because your problem is easier than the general matching problem: there are no connections between nodes with distinct ids, so a greedy algorithm works fine.
Actually, it's possible to check in 2 lines if the lists match. With a Counter, you just need to check if the distribution (e.g. Counter({124: 1, 456: 2, 666: 1})) is the same for both lists:
from collections import Counter
Counter(map(keyfunc, list1)) == Counter(map(keyfunc, list2))
# True
• Thanks! While my case is in fact simple and easier, I appreciate knowing the theory of the problem; it'll help me more when I come across similar problems in the future. – André Paramés Feb 13 '18 at 15:36
I think you should split your code into two functions.
1. You should add some code to group your object lists together. I'd call this group_by.
2. Take the first from each match, and error in the match function.
To perform the first function, I'd use collections.defaultdict, so that you only generate the keys that you need. This also has the benefit of having $O(kn)$ performance, rather than $O(n^k)$, where $n$ is the number of objects in each list and $k$ is the number of lists.
After this you can check if there are enough items to yield correctly. This is two checks: first, that there are items in the first group; then, that there are the same number of items in both groups. After this, you can use zip to group together items in pairs.
import collections
def group_by(lists, key):
amount = range(len(lists))
d = collections.defaultdict(lambda:tuple([[] for _ in amount]))
for i, objects in enumerate(lists):
for obj in objects:
d[key(obj)][i].append(obj)
return d
def match(al, bl, key):
for key, groups in group_by((al, bl), key).items():
if groups[0] and len(groups[0]) != len(groups[1]):
raise ValueError("Missing value for {!r}".format(key))
yield from zip(*groups)
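For reference, a quick usage sketch (assuming the list1, list2 and keyfunc definitions from the question); because match is a generator, it has to be consumed, e.g. with list(), before the ValueError for a missing key can surface:
pairs = list(match(list1, list2, keyfunc))
# pairs == [(john, red), (jack, yellow), (jill, on fire), (manuel, purple)],
# written here with just the names/colors for brevity; it reproduces the
# pairing shown as the expected result in the question.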
The type of exception that should be raised is debatable, however it shouldn't be Exception.
• If you think the error is predominantly due to missing values, then using ValueError would probably be best.
• If you think the error is because it requires there to be a common key, then KeyError or LookupError may be better.
• Alternately if you're planning on making an entire rational algebra Python library, create your own exceptions.
• Darn it, too fast, I was thinking about something alike. I wonder, though, how missing entries are handled when considering PEP 479 and adding a from __future__ import generator_stop? – 409_Conflict Feb 13 '18 at 10:30
• I don't want to take only the first item from each group; I just don't want to reuse items in different matches. I can understand the group_by, but I don't see how match here accomplishes the expected result. – André Paramés Feb 13 '18 at 10:40
• @MathiasEttinger That's a good question, there was a bug I failed to notice without that enabled. Generators stop when they encounter StopIteration, I thought it passed through. With the future it fails on the iter rather than the raise now. Thanks, :) – Peilonrayz Feb 13 '18 at 10:41
• @Peilonrayz only one for each match, not in total. Please look at the expected result in the example. Feel free to suggest any improvements on the text of the question as well :) – André Paramés Feb 13 '18 at 10:43
• @Peilonrayz Yes, that's what I thought. I took advantage of this behaviour in the past… Now I feel a bit sad, somehow, as you won't be able to rely on a generator to "swallow" a StopIteration. – 409_Conflict Feb 13 '18 at 10:44
|
# 2005 Indonesia MO Problems/Problem 7
## Problem
Let $ABCD$ be a convex quadrilateral. Square $AB_1A_2B$ is constructed such that the two vertices $A_2,B_1$ is located outside $ABCD$. Similarly, we construct squares $BC_1B_2C$, $CD_1C_2D$, $DA_1D_2A$. Let $K$ be the intersection of $AA_2$ and $BB_1$, $L$ be the intersection of $BB_2$ and $CC_1$, $M$ be the intersection of $CC_2$ and $DD_1$, and $N$ be the intersection of $DD_2$ and $AA_1$. Prove that $KM$ is perpendicular to $LN$.
## Solution
Let the coordinates of $A$ be $(2a,0)$, the coordinates of $B$ be $(2b_1, 2b_2)$, the coordinates of $C$ be $(-2c,0)$, and the coordinates of $D$ be $(2d_1, -2d_2)$, where all variables are rational and $a, b_2, c, d_2 \ge 0$.
$[asy] import graph; size(9.22 cm); real lsf=0.5; pen dps=linewidth(0.7)+fontsize(10); defaultpen(dps); pen ds=black; real xmin=-10.2,xmax=10.2,ymin=-10.2,ymax=10.2; pen cqcqcq=rgb(0.75,0.75,0.75), evevff=rgb(0.9,0.9,1), zzttqq=rgb(0.6,0.2,0); /*grid*/ pen gs=linewidth(0.7)+cqcqcq+linetype("2 2"); real gx=2,gy=2; for(real i=ceil(xmin/gx)*gx;i<=floor(xmax/gx)*gx;i+=gx) draw((i,ymin)--(i,ymax),gs); for(real i=ceil(ymin/gy)*gy;i<=floor(ymax/gy)*gy;i+=gy) draw((xmin,i)--(xmax,i),gs); Label laxis; laxis.p=fontsize(10); draw((-10,0)--(10,0),Arrows); draw((0,10)--(0,-10),Arrows); clip((xmin,ymin)--(xmin,ymax)--(xmax,ymax)--(xmax,ymin)--cycle); pair A=(8,0), B=(2,8), C=(-6,0), D=(-4,-4); draw(A--B--C--D--A); dot(A); label("(2a,0)",A,NE); dot(B); label("(2b_1,2b_2)",B,N); dot(C); label("(-2c,0)",C,NW); dot(D); label("(2d_1,-2d_2)",D,S); [/asy]$
Let $X$ be the midpoint of $AB$, which is point $(a+b_1,b_2)$. Additionally, mark points $Y = (2b_1,b_2)$, $K = (x,y)$, and $Z = (x,b_2)$.
$[asy] import graph; size(9.22 cm); real lsf=0.5; pen dps=linewidth(0.7)+fontsize(10); defaultpen(dps); pen ds=black; real xmin=-0.2,xmax=10.2,ymin=-0.2,ymax=10.2; pen cqcqcq=rgb(0.75,0.75,0.75), evevff=rgb(0.9,0.9,1), zzttqq=rgb(0.6,0.2,0); /*grid*/ pen gs=linewidth(0.7)+cqcqcq+linetype("2 2"); real gx=2,gy=2; for(real i=ceil(xmin/gx)*gx;i<=floor(xmax/gx)*gx;i+=gx) draw((i,ymin)--(i,ymax),gs); for(real i=ceil(ymin/gy)*gy;i<=floor(ymax/gy)*gy;i+=gy) draw((xmin,i)--(xmax,i),gs); Label laxis; laxis.p=fontsize(10); clip((xmin,ymin)--(xmin,ymax)--(xmax,ymax)--(xmax,ymin)--cycle); pair A=(8,0), B=(2,8); draw(A--B); dot(A); label("(2a,0)",A,NE); dot(B); label("(2b_1,2b_2)",B,N); dot((5,4)); label("(a+b_1,b_2)",(5,4),NE); dot((9,7)); label("(x,y)",(9,7),NE); dot((2,4)); label("(2b_1,b_2)",(2,4),SW); dot((9,4)); label("(x,b_2)",(9,4),SE); [/asy]$
Note that since $K$ is the center of square $AB_1A_2B$, $BX \perp KX$ and $BX = KX$. Additionally, $BY \parallel KZ$ and $BY \perp YZ$.
$YZ$ is a line, so $\angle BXY + \angle KXZ + \angle BXK = 180^\circ$. Since $BX \perp KX$, $\angle BXK = 90^\circ$, so $\angle BXY + \angle KXZ = 90^\circ$. Additionally, because $BXY$ is a right triangle, $\angle YBX + \angle YXB = 90^\circ$. Rearranging and substituting results in $\angle KXZ = \angle YBX$.
Since both $BYX$ and $XZK$ are right triangles, by AAS Congruency, $\triangle BYX \cong \triangle XZK$. Therefore $BY = XZ = b_2$ and $YX = ZK = a - b_1$. From this information, the coordinates of $K$ are $(a+b_1+b_2, b_2 + a - b_1)$.
$[asy] import graph; size(9.22 cm); real lsf=0.5; pen dps=linewidth(0.7)+fontsize(10); defaultpen(dps); pen ds=black; real xmin=-10.2,xmax=10.2,ymin=-10.2,ymax=10.2; pen cqcqcq=rgb(0.75,0.75,0.75), evevff=rgb(0.9,0.9,1), zzttqq=rgb(0.6,0.2,0); /*grid*/ pen gs=linewidth(0.7)+cqcqcq+linetype("2 2"); real gx=2,gy=2; for(real i=ceil(xmin/gx)*gx;i<=floor(xmax/gx)*gx;i+=gx) draw((i,ymin)--(i,ymax),gs); for(real i=ceil(ymin/gy)*gy;i<=floor(ymax/gy)*gy;i+=gy) draw((xmin,i)--(xmax,i),gs); Label laxis; laxis.p=fontsize(10); draw((-10,0)--(10,0),Arrows); draw((0,10)--(0,-10),Arrows); clip((xmin,ymin)--(xmin,ymax)--(xmax,ymax)--(xmax,ymin)--cycle); pair A=(8,0), B=(2,8), C=(-6,0), D=(-4,-4), K=(9,7), L=(-6,8), M=(-7,-3), N=(4,-8); draw(A--B--C--D--A); dot(A); label("(2a,0)",A,NE); dot(B); label("(2b_1,2b_2)",B,NE); dot(C); label("(-2c,0)",C,NW); dot(D); label("(2d_1,-2d_2)",D,S); dot(K); label("K",K,NE); dot(L); label("L",L,NW); dot(M); label("M",M,SW); dot(N); label("N",N,SE); [/asy]$
By using similar reasoning, the coordinates of $L$ are $(b_1-b_2-c,b_2+b_1+c)$, the coordinates of $M$ are $(-c+d_1-d_2,-d_2-c-d_1)$, and the coordinates of $N$ are $(a+d_1+d_2,d_1-d_2-a)$.
The slope of $KM$ is $\frac{a+b_2-b_1+c+d_2+d_1}{a+b_1+b_2+c+d_2-d_1}$. The slope of $LN$ is $\frac{b_1+b_2+c+a+d_2-d_1}{b_1-b_2-c-a-d_1-d_2}$. The product of the two slopes is $-1$, so $KM \perp LN$.
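To see why the product is $-1$ (spelling out the arithmetic): the numerator of the second slope equals the denominator of the first, and the denominator of the second is the negative of the numerator of the first, so, assuming neither $KM$ nor $LN$ is vertical, $\frac{a+b_2-b_1+c+d_2+d_1}{a+b_1+b_2+c+d_2-d_1}\cdot\frac{b_1+b_2+c+a+d_2-d_1}{b_1-b_2-c-a-d_1-d_2} = \frac{a+b_2-b_1+c+d_2+d_1}{-(a+b_2-b_1+c+d_1+d_2)} = -1.$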
|
In this activity, we will explore the composition of functions.
You already know about inputs and outputs of a function. Function composition is a way to use the output of one function as the input for another function.
In the first exercise, you will use what you already know in order to apply function composition in a story about an oil spill.
Now continue the previous exercise in the next problem, where you will use function composition to make one function out of two.
In chapter 5, you studied transformations of a function. There, you took a function $f(x)$ and shifted its graph left or right by adding a number to the input of the function. For instance, $f(x – 3)$ represented shifting the graph $y = f(x)$ to the right by $3$ units.
However, this may also be thought of as the composition of two functions: $f(x)$ and $g(x) = x - 3$
As a composition, this transformation occurs by substituting the function $g(x) = x - 3$ into the function $f(x)\text{:}$ \begin{equation*} f(x - 3) = f(g(x)) \end{equation*} So, we have already been using function composition, though we have waited until now to give it that name.
When you compose functions together, the output from one function becomes the input for the other. In the example above, we would first use a value of $x$ in the inside function $g(x) = x - 3$ to get an output. Then, we would take that output and use it as the input for the function $f(x)\text{.}$
When we write $f(g(x))\text{,}$ we read it as “$f$ of $g$ of $x$”.
In the next exercise, you will see an animation of composing two functions.
Now, practice function composition in the next problem, remembering to work inside the parentheses first.
With function composition, the key is to remember to evaluate the inside function first. Evaluating functions is just like doing regular arithmetic — work inside the parentheses before doing anything else.
In the next exercise, you will get practice evaluating a composite function from a table of values. Remember to evaluate a composite function from the inside to the outside.
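Here is a minimal sketch of that inside-to-outside process, using made-up table values rather than the ones in the exercise:

```python
# Tables of values for two functions (illustrative numbers only).
f = {1: 4, 2: 7, 3: 1}
g = {1: 3, 2: 1, 3: 2}

def compose_from_tables(outer, inner, x):
    # Evaluate the inside function first, then feed its output to the outside function.
    return outer[inner[x]]

print(compose_from_tables(f, g, 2))   # g(2) = 1, then f(1) = 4
```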
In Exercise 7.1.1 and Exercise 7.1.2 of this activity, you used composition to find a numerical answer for the area of the oil spill and a formula which found the area as a function of $t\text{.}$
To find a numerical answer, you evaluated the radius function to get a number, and then used that number to evaluate the area function.
To find a formula, you just used the radius formula as the input for the area function. This gave a new formula, but not a particular numerical answer.
If we compose functions together, we think of the resulting formula as a new, single function, written in the form: \begin{equation*} W(J(x)) \end{equation*}
We call this new formula a composite function, because it is composed of two different functions.
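As a sketch of the idea (with assumed formulas, since the exercise's actual radius and area functions are not reproduced here), composing an area function $W$ with a radius function $J$ might look like this:

```python
import math

def J(t):
    """Radius of the spill after t minutes (an assumed model: 2 meters per minute)."""
    return 2 * t

def W(r):
    """Area of a circle of radius r."""
    return math.pi * r ** 2

def area_at_time(t):
    """The composite function W(J(t)): substitute the radius formula into the area formula."""
    return W(J(t))

print(area_at_time(5))   # evaluate the inside function first: J(5) = 10, then W(10)
```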
In the next exercise, you will be composing two functions to make a new composite function. Pay careful attention to which function is being used as the input for the other function.
Next, use the graphs of two functions to evaluate different compositions.
|
# zinc nitrite molar mass
Zinc nitrate is an inorganic compound with the chemical formula Zn(NO3)2 (CAS No. 7779-88-6), usually encountered as the hexahydrate Zn(NO3)2·6H2O. Its molar mass is 189.36 g/mol in the anhydrous state and 297.49 g/mol in the hexahydrate form. The closely related salt zinc nitrite has the formula Zn(NO2)2; in the nitrite group there is a double bond between an oxygen atom and the nitrogen atom.

To calculate the molar mass of a compound, add the molar masses of all the elements in it. Take NaOH, for example, which contains sodium (22.99 g/mol), oxygen (15.999 g/mol) and hydrogen (1.008 g/mol). For zinc nitrate: (1 atom × 65 g/mol zinc) + (2 atoms × 14 g/mol nitrogen) + (6 atoms × 16 g/mol oxygen) ≈ 189 g/mol. For comparison, the molar mass of zinc itself is 65.38 g/mol (zinc has three major and two minor isotopes; zinc-64 has a mass of 63.9291 Da), the molar mass of copper(II) nitrate is 187.5 g/mol, and the molar mass of zinc citrate (C12H10O14Zn3), the zinc salt of citric acid, is 574.3 g/mol.

Zinc nitrate forms colourless, highly deliquescent crystals. It is prepared by dissolving zinc metal, zinc oxide, zinc hydroxide or zinc carbonate in nitric acid, and it is sold by chemical suppliers, most often as the hexahydrate; it is also available as a Zn+N fertilizer. It is very soluble in water (184.3 g/100 mL at 20 °C for the hexahydrate; 327 g/100 mL at 40 °C for the trihydrate) and soluble in alcohol. It is an oxidizing solid that can explode when exposed to prolonged fire or heat, gives off irritating or toxic fumes in a fire, and should be kept away from reducing agents. A typical precipitation reaction is Zn(NO3)2 + Na2CO3 → ZnCO3 + 2 NaNO3. Zinc nitrate is used as a precursor and catalyst in many organic syntheses. (The traditional German name Zinkweiß, "zinc white", also called Chinesischweiß, Ewigweiß or Schneeweiß, refers to the use of zinc oxide, not zinc nitrate, as a white pigment in paint.)

To prepare solutions, use molarity = (mass ÷ molar mass) ÷ volume in litres. For example, 10 mL of a 0.1 M zinc nitrate solution contains 1 × 10^-3 mol, so dissolve 0.18936 g of zinc nitrate in 10 mL of distilled water in a conical flask. The same relation is used for stock solutions of heavy-metal ions (for instance zinc from zinc nitrate, copper from copper nitrate, or lead from lead nitrate at known concentrations such as 10-40 mg/L for adsorption studies) and for synthesis mixtures such as 20 mM zinc nitrate hexahydrate with an equimolar amount of HMT dissolved in 80 mL of deionized water.
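A short script can check the molar-mass arithmetic above. The atomic weights are standard rounded values and the helper names are made up for illustration:

```python
# Rounded standard atomic weights (g/mol); assumed values, not from a specific supplier.
atomic_mass = {'Zn': 65.38, 'N': 14.01, 'O': 16.00, 'H': 1.01}

def molar_mass(composition):
    """Sum of atomic masses weighted by the number of atoms of each element."""
    return sum(atomic_mass[element] * count for element, count in composition.items())

zn_nitrate = {'Zn': 1, 'N': 2, 'O': 6}                        # Zn(NO3)2
zn_nitrate_hexahydrate = {'Zn': 1, 'N': 2, 'O': 12, 'H': 12}  # Zn(NO3)2·6H2O

M = molar_mass(zn_nitrate)
print(M, molar_mass(zn_nitrate_hexahydrate))   # ~189.4 and ~297.5 g/mol

# Mass needed for 10 mL of a 0.1 M solution: m = M * c * V ~ 0.189 g
print(M * 0.1 * 0.010)
```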
|
## Stream: new members
### Topic: preferred definition of subobject?
#### Alex Mathers (Mar 20 2020 at 06:54):
Is there a currently preferred way to define a "subobject" in Lean? I noticed that for groups we have the definition
class is_subgroup (s : set G) extends is_submonoid s : Prop :=
(inv_mem {a} : a ∈ s → a⁻¹ ∈ s)
and similarly for subrings, but for submodules we define it as a structure, more specifically it's defined as
structure submodule (α : Type u) (β : Type v) [ring α]
[add_comm_group β] [module α β] : Type v :=
(carrier : set β),
...
Which is better?
#### Johan Commelin (Mar 20 2020 at 06:59):
We're slowly moving the library over to bundled subobjects (i.e., structures)
#### Johan Commelin (Mar 20 2020 at 07:03):
On the other hand, we might need to support both to some extent...
#### Alex Mathers (Mar 20 2020 at 07:16):
Thanks for the response. This leads me to the follow-up question: what should the definition of the trivial subgroup look like if I'm using the bundled definition? Let's say I want to write
def trivial_subgrp (G : Type*) [grp G] : subgrp G :=
{ carrier := {1},
one_mem' := sorry,
mul_mem' := sorry,
inv_mem' := sorry,
}
Then I want to give the "proof" of one_mem' : (1:G) \in {1}, but it's not obvious to me how to write this. It seems so simple and it's probably just an artifact of me not understanding the way sets are implemented in Lean.
#### Kevin Buzzard (Mar 20 2020 at 07:20):
sets are implemented in a slightly irritating way. If you defined carrier := {x | x = 1} then the proof of one_mem' would be rfl (i.e. "true by definition"). However x \in {1} unfolds to something like x = 1 \or false so it's not true by definition.
#### Kevin Buzzard (Mar 20 2020 at 07:20):
To fix this up, there will be a lemma in the library which says x \in {x}, and you can search for it in one of several ways.
#### Kevin Buzzard (Mar 20 2020 at 07:21):
import data.set.basic
example {α : Type} (a : α) : a ∈ {a} :=
begin
library_search
end
will tell you a way to do it, although in this case it doesn't find the canonical way to do it.
#### Kevin Buzzard (Mar 20 2020 at 07:25):
The best way to do it is to guess what this lemma might be called. There's a knack to this, and once you have it it really makes writing Lean code a whole lot easier. Basically you have to know the "code" for a lot of the symbols and ideas used in Lean. For example the code for ∈ is mem and the code for {x} is singleton, so the lemma you want will be called something like mem_singleton.... The problem is there will be several lemmas about being an element of a singleton set, e.g. x \in {y} \iff x = y, so we will still need to look more. If you try this:
example {α : Type} (a : α) : a ∈ {a} := mem_singleton
in VS Code and then with your cursor just after the n in singleton type ctrl-space, you will get a list of suggested completions. You can scroll up and down them with the arrow keys, until you spot the one you want, and then press tab to get it. It's set.mem_singleton a.
#### Johan Commelin (Mar 20 2020 at 07:28):
@Gabriel Ebner @Mario Carneiro What is the status of the idea of redefining {x} to mean {y | y = x}?
#### Kevin Buzzard (Mar 20 2020 at 07:30):
Finally, here's how to see that singleton sets are implemented in an annoying way. first switch notation off, and then just keep unfolding the definition to see what the heck is going on.
import data.set.basic
set_option pp.notation false
example {α : Type} (a : α) : a ∈ ({a} : set α) :=
begin
unfold singleton,
unfold has_insert.insert,
unfold set.insert,
unfold set_of,
unfold has_emptyc.emptyc,
unfold has_mem.mem,
unfold set.mem,
-- all notation now unfolded
end
If you watch the goal you will see how I have managed to generate this script. I just keep taking apart the types in the goal. Now if you comment out the set_option pp.notation false line you will see the goal has become (a = a) ∨ false which you can prove with left, refl
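(For reference, a minimal snippet combining the two routes described above: set.mem_singleton is the lemma named earlier, and the second proof is the term-mode form of the left, refl unfolding; lemma names are from the mathlib of that time and may have changed since.)

```lean
import data.set.basic

-- via the library lemma
example {α : Type} (a : α) : a ∈ ({a} : set α) := set.mem_singleton a

-- via the definitional unfolding: the goal reduces to a = a ∨ false
example {α : Type} (a : α) : a ∈ ({a} : set α) := or.inl rfl
```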
#### Mario Carneiro (Mar 20 2020 at 07:32):
@Johan Commelin When we last had this conversation (a week ago?) it got redirected to fixing the parse order to {a, b, c} = insert a (insert b (singleton c)), then adding a has_singleton typeclass in core rather than defining singleton a := insert a empty unconditionally
#### Johan Commelin (Mar 20 2020 at 07:33):
So this needs work on the parser. Does that mean C++?
#### Johan Commelin (Mar 20 2020 at 07:33):
Or is this something that can be done in Lean?
#### Mario Carneiro (Mar 20 2020 at 07:33):
unfortunately yes. It wouldn't normally, notations are almost always defined in lean, but {a, ..., z} is magic
#### Kevin Buzzard (Mar 20 2020 at 07:34):
If the has_singleton typeclass got changed then this would just require changes to core Lean (and a whole bunch of library fixes). Would this at least mean x \in {a,b} expands to x = b \or x = a?
that's right
#### Mario Carneiro (Mar 20 2020 at 07:36):
We could also fix the order by defining set.insert a s to be {x | x \in s \/ x = a}, but this would cause the or's to be left associated when you iterate it
#### Johan Commelin (Mar 20 2020 at 07:39):
I would prefer fixing the parser
#### Johan Commelin (Mar 20 2020 at 07:40):
It shouldn't be more work in the long run, and the end result is more pleasant, I think.
#### Johan Commelin (Mar 20 2020 at 07:42):
@Alex Mathers Sorry for hijacking your thread. I hope you are now able to solve your problem.
We can take this discussion elsewhere.
No problem.
#### Kevin Buzzard (Mar 20 2020 at 07:48):
@Alex Mathers the other thing to say about subobjects is that there is no one right answer here. There is a PR to "bundle subgroups" #2140 by the way. But here is an issue. A normal subgroup is a subgroup plus some extra condition. So do we now bundle normal subgroups as well? If so, how do we prove that a normal subgroup is a subgroup? With the class is_subgroup approach we can use the type class system to do this. With the fully bundled approach, we will perhaps use the coercion system. But then if we want to prove that finitely-generated central subgroups are normal, and hence subgroups, we might end up with coercions of coercions of coercions etc. However my understanding is the community has decided that in the cases we understand best, the fully bundled approach is better than the is_ approach (for reasons I perhaps do not fully understand, although the fully bundled approach does give you access to the powerful "dot notation", where you can write H.has_one for the proof that the subgroup H has a 1, and perhaps this was what swung it).
Another approach is to fully bundle everything, and instead of having (G : Type) [group G] just have G : Group where Group is the category of groups. Again there are advantages and disadvantages to this. The community is basically trying to figure out the best way to do everything. When Lean 4 comes, the best way might change. Currently the approach seems to be to have more than one way to do things, which comes with its own set of advantages and disadvantages :-)
#### Johan Commelin (Mar 20 2020 at 07:53):
I think what also swung it was that simp doesn't (at least didn't) play well with type class inference.
#### Johan Commelin (Mar 20 2020 at 07:54):
With [is_group_hom f] you couldn't simp something like f (x * y) into f x * f y. But with bundled homs you can.
#### Johan Commelin (Mar 20 2020 at 07:54):
Similar things may show up with bundled subgroups.
#### Kevin Buzzard (Mar 20 2020 at 07:59):
And with bundling morphisms (f : group_hom G H instead of (f : G -> H) [is_group_hom f]) there were also pros and cons. One con I ran into recently with bundling morphisms was that if we define group_hom to be the obvious thing, then group_hom G H and monoid_hom G H are two ways of saying the same thing. Because of this, there is some definite agreement amongst the CS people that group_hom G H simply should not exist. But then there are issues with pushing forward and pulling back, e.g. f.map is the construction which sends a subgroup of G to a subgroup of H, except that it isn't: it's the construction which sends a submonoid of G to a submonoid of H, and one has to define e.g. f.gmap to push subgroups forward. This is a very minor issue though, compared to simp breaking.
#### Kevin Buzzard (Mar 20 2020 at 08:01):
The alternative is to define group_hom to be equal to monoid_hom when the inputs are groups, and then duplicate a bunch of code, which the CS people get really shirty about.
#### Johan Commelin (Mar 20 2020 at 08:02):
Kevin Buzzard said:
The alternative is to define group_hom to be equal to monoid_hom when the inputs are groups, and then duplicate a bunch of code, which the CS people get really shirty about.
Hmmmm, not only CS people
#### Johan Commelin (Mar 20 2020 at 08:02):
I think mathematicians are even more hung up about writing code that "has been written before"
#### Kevin Buzzard (Mar 20 2020 at 08:05):
I have always thought that defining group_hom to be a multiplication preserving map and then just proving the finitely many lemmas about group homs again in the correct namespace somehow felt like the right thing to do
#### Kevin Buzzard (Mar 20 2020 at 08:05):
In Shenyang Wu's group cohomology repo this is what we do
#### Kevin Buzzard (Mar 20 2020 at 08:06):
It's great supervising MSc projects, you're not controlled by the mathlib militia :-)
#### Alex Mathers (Mar 20 2020 at 08:17):
@Kevin Buzzard Thanks for the detailed response, this (and the ensuing conversation) is certainly giving me a better idea of the pros and cons of each. I see your point, I think as mathematicians we do implicitly do a lot of repeated "coercions" which might be inconvenient in the long run when getting to more complex objects. I guess we'll see.
#### Kevin Buzzard (Mar 20 2020 at 08:20):
Indeed we'll see. I have a half-written blog post about how mathematicians effortlessly cheat in this way, because it is manifestly legal mathematically to just observe that eg a subring is an additive subgroup -- there is nothing to prove. But when you're making subrings and subgroups this triviality becomes a map and there has to be a system for dealing with these maps in a completely painless way
#### Mario Carneiro (Mar 20 2020 at 08:38):
I just wrote a fix to the C++ parser to reverse the order, although I don't feel like putting in the legwork to fix everything that this breaks
#### Johan Commelin (Mar 20 2020 at 08:39):
I'm fine with helping fix mathlib. But fixing C++ is not really my cup of tea (-;
#### Mario Carneiro (Mar 20 2020 at 08:40):
there is probably test breakage too
#### Mario Carneiro (Mar 20 2020 at 08:41):
the C++ change is pretty minimal
#### Kevin Buzzard (Mar 20 2020 at 09:13):
This came up when Shenyang and I were duplicating a bunch of code ;-) e.g. defining group_hom.ker and group_hom.range to be group_hom.comap bot (pullback of trivial subgroup) and group_hom.map top (pushforward of entire group). For bot it turned out to go much more smoothly to make it {x | x = 1} because then the kernel being the stuff which mapped to 1 was true by definition.
#### Mario Carneiro (Mar 20 2020 at 10:27):
https://github.com/leanprover-community/lean/pull/153
Last updated: May 06 2021 at 21:09 UTC
|
Journal article Open Access
# Smart Home Automation using Hand Gesture Recognition System
Vignesh Selvaraj Nadar; Vaishnavi Shubhra Sinha; Sushila Umesh Ratre
### DataCite XML Export
<?xml version='1.0' encoding='utf-8'?>
<resource>
<identifier identifierType="URL">https://zenodo.org/record/5596243</identifier>
<creators>
<creator>
<creatorName>Vignesh Selvaraj Nadar</creatorName>
<affiliation>Student, Department of Computer Science, Amity University Mumbai, India.</affiliation>
</creator>
<creator>
<creatorName>Vaishnavi Shubhra Sinha</creatorName>
<affiliation>Student, Department of Computer Science, Amity University Mumbai, India.</affiliation>
</creator>
<creator>
<creatorName>Sushila Umesh Ratre</creatorName>
<affiliation>Professor, Department of Computer Science, Amity University Mumbai, India.</affiliation>
</creator>
</creators>
<titles>
<title>Smart Home Automation using Hand Gesture Recognition System</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2019</publicationYear>
<subjects>
<subject>Arduino, Gesture Recognition, Home Automation, Human Computer Interaction, Machine Learning</subject>
<subject subjectScheme="issn">2249-8958</subject>
</subjects>
<contributors>
<contributor>
<contributorName>Blue Eyes Intelligence Engineering &amp; Sciences Publication (BEIESP)</contributorName>
<affiliation>Publisher</affiliation>
</contributor>
</contributors>
<dates>
<date dateType="Issued">2019-12-30</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="JournalArticle"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5596243</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="ISSN" relationType="IsCitedBy" resourceTypeGeneral="JournalArticle">2249-8958</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.35940/ijeat.B3055.129219</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract"><p>Visual interpretation of hand gestures is a natural method of achieving Human-Computer Interaction (HCI). In this paper, we present an approach to setting up of a smart home where the appliances can be controlled by an implementation of a Hand Gesture Recognition System. More specifically, this recognition system uses Transfer learning, which is a technique of Machine Learning, to successfully distinguish between pre-trained gestures and identify them properly to control the appliances. The gestures are sequentially identified as commands which are used to actuate the appliances. The proof of concept is demonstrated by controlling a set of LEDs that represent the appliances, which are connected to an Arduino Uno Microcontroller, which in turn is connected to the personal computer where the actual gesture recognition is implemented.</p></description>
</descriptions>
</resource>
|
# How do you solve abs(x-a)>b?
Jan 25, 2017
$x > a + b$ or $x < a - b$, when $b \ge 0.$
All real $x$, when $b < 0$
#### Explanation:
Although $| x - a | \ge 0$, b could be negative.
So, I allow negative b also.
If $b < 0$, every real $x$ satisfies the inequality, because an absolute value is never negative.
If $b \ge 0$, the inequality breaks up into
$x - a > b$, giving $x > a + b$, or
$- \left(x - a\right) > b$, giving $x < a - b$.
For example, $| x - 3 | > 2$ gives $x > 5$ or $x < 1$.
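A quick numeric spot-check with assumed values $a = 3$, $b = 2$, where the solution set should be $x > 5$ or $x < 1$:

```python
# Sample x on a grid and confirm that |x - a| > b exactly matches x > a + b or x < a - b.
a, b = 3, 2
xs = [i / 10 for i in range(-100, 101)]
for x in xs:
    assert (abs(x - a) > b) == (x > a + b or x < a - b)
```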
|
# A right circular cylinder with radius 6 units and height 10 units is
Intern
Joined: 30 Aug 2017
Posts: 15
A right circular cylinder with radius 6 units and height 10 units is [#permalink]
### Show Tags
07 Oct 2017, 23:09
Difficulty: 65% (hard). Question Stats: 54% (02:29) correct, 46% (03:01) wrong, based on 55 sessions.
A right circular cylinder with radius 6 units and height 10 units is placed upright and is filled up to 2/5th of its volume with water. A solid cone of radius 3 units and height 4 units is then submerged completely in the cylinder. What is the increase in the height of the water level?
Intern
Joined: 28 Dec 2010
Posts: 49
Re: A right circular cylinder with radius 6 units and height 10 units is [#permalink]
### Show Tags
08 Oct 2017, 02:37
In this question, you just need to calculate the volume of the solid cone. There is no need to calculate the volume of the cylinder to find the actual increase.
Volume of cone = $$\frac{1}{3}* π * r^2 * H$$
= $$\frac{1}{3}* π * 3^2 * 4$$
= $$12 π$$
Now, how much height does that displaced volume add in the cylinder? Let the rise in the water level be x.
Volume displaced = $$π * r^2 * H$$
$$12 π = π * 6^2 * x$$
$$1 = 3 * x$$
$$x = 1/3$$
A.
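A quick numeric check of the displacement argument (not part of the original post):

```python
import math

cone_volume = (1 / 3) * math.pi * 3 ** 2 * 4   # 12*pi cubic units
cylinder_base_area = math.pi * 6 ** 2          # 36*pi square units

rise = cone_volume / cylinder_base_area        # water-level increase once fully submerged
print(rise)                                    # 0.333... = 1/3 unit

# Sanity check: the cylinder starts 2/5 full (4 units of water), so the 4-unit-tall
# cone is covered after the rise and the 10-unit cylinder does not overflow.
assert 4 <= 4 + rise <= 10
```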
Intern
Joined: 04 Dec 2015
Posts: 44
WE: Operations (Commercial Banking)
Re: A right circular cylinder with radius 6 units and height 10 units is [#permalink]
### Show Tags
05 Jan 2019, 07:04
Fair enough. But could someone please explain why we don't really need to calculate the volume of the cylinder?
Moreover, I still don't understand the logic behind the given solution.
Posted from my mobile device
Intern
Joined: 09 Jul 2018
Posts: 3
Re: A right circular cylinder with radius 6 units and height 10 units is [#permalink]
### Show Tags
07 Jan 2019, 18:35
The maximum volume that the cone can displace is its total volume.
Hence we don't need to calculate the volume of the cylinder!
Posted from my mobile device
|
# Induced maps in Morse Homology
Let $M,N$ be closed manifolds. Given a differentiable map $f:M\rightarrow N$, I am interested in computing $f_k:H_k(M)\rightarrow H_k(N)$ in Morse Homology. This problem seems difficult, and the only reference I have found is Schwarz' Morse Homology. His strategy is to factor $f$ as follows: $$M\rightarrow M\times N\rightarrow \mathbb{R}^n\times N\rightarrow N,$$ where the first map is the graph of $f$, the second map is an embedding of $M$ into some large $\mathbb{R}^n$, and the third map is a projection to $N$. The first two maps are embeddings of submanifolds, and it is not hard to see what the induced maps must be. Something similar happens with the projection map.
This seems difficult to compute, because we need to construct an embedding $M\rightarrow\mathbb{R}^n$. I believe that the second and third steps can be simplified a bit, by choosing a suitable function $a$ on $M$ (with one minimum) and a function $b$ on $N$, and constructing an explicit map $C^k(M\times N,a\oplus b)\rightarrow C^k(N,b)$, which descends to homology.
Has this approach been studied somewhere? Is there any literature on these functorial properties that I missed? Is anything known (and written down) for manifolds with boundary?
-
I want to mention an approach described in Kronheimer and Mrowka's book Monopoles and Three-Manifolds. In section 2, they give an outline of Morse theory, including Morse homology for manifolds with boundary and functoriality in Morse theory. The nice thing is we can recover the induced map $f_* : H_* (M) \rightarrow H_* (N)$ from a chain map between Morse complexes by counting something.
I'll try to give some idea. Suppose we have a smooth map $r : Z \rightarrow M_1 \times M_2$. The compositions $\pi_i \circ r$ give an induced map $H_* (M_1) \rightarrow H_* (M_2)$, where $\pi_i$ is the projection to $M_i$ and we use Poincare duality to get a map from $H_* (M_1)$ to $H_* (Z)$. In the special case when $Z$ is a graph of $f : M_1 \rightarrow M_2$, this gives back $f_*$. This construction is called "pull-up and push-down".
Now suppose that we have Morse complexes for $M_1$ and $M_2$. Let $a \in M_1$ and $b \in M_2$ be critical points and denote $U_a, S_b$ by unstable and stable manifolds. Consider a subspace $$Z(a,b) = \lbrace w \in Z \ | \ r(w) \in U_a \times S_b \rbrace .$$ Assuming transversality, this is a submanifold of $Z$ and we can count $|Z(a,b)|$ when its dimension is 0. Define a chain map by $$m(a) = \Sigma |Z(a,b)|b,$$ where $b$ ranges over all critical points of $M_2$. It is claimed that this induces a map on homology described earlier. In their book, this idea is treated in the context of Floer homology.
I'm not sure if this construction has appeared elsewhere. I haven't tried doing an explicit calculation from this construction either, but I hope this will be helpful.
-
This pull-back push-forward trick is also used in Lawson's paper. – Liviu Nicolaescu Nov 19 '12 at 0:33
Thank you. I will order a copy of the book. – Thomas Rot Nov 19 '12 at 14:51
This is a hard problem closely related to the following. Suppose that $M, N$ are $CW$ complexes and $f: M\to N$ is a continuous map, not necessarily compatible with the cellular structures on $M$ and $N$. How do we compute the induced map $f_k: H_k(M)\to H_k(N)$? The difficulty lies in the fact that $f$ does not induce maps between the cellular cell complexes so you have to scramble.
There is a way, though quite impractical for Morse cohomology. Use the results of Blaine Lawson, Finite volume flows and Morse theory to exhibit a cochain homotopy equivalence between the Morse complex and the DeRham complex. Then if you are lucky, you can write down explicit closed forms that span the DeRham cohomologies of $M$ and $N$. Finally use Lawson's isomorphism to interpret this in the language of Morse cohomology. This is where it gets tricky.
Let me give you another simple example suggesting that you need to formulate the problem more carefully. Suppose that $M$ is a smooth manifold, and $F:M\to M$ is a diffeomorphism. If $f: M\to \mathbb{R}$ is a Morse function and $g$ is a metric on $M$ such that the gradient flow of $f$ is Morse-Smale, then $F^* f=f\circ F$ is another Morse function and the pullback $F^* g$ is a metric so that the gradient flow of $F^* f$ is Morse-Smale. We obtain two Morse complexes
$$C_* (M, f, g),\;\;C_*(M,F^*f, F^*g).$$
These are equipped with natural bases, and with respect to these natural bases the induced map
$$F_* :C_* (M,F^*f, F^* g) \to C_* (M, f, g)$$
is the identity map. However, $F$ may not induce the identity map in homology.
-
Thank you for your answer. However, I am not sure I understand your example... What is the problem exactly? I can assume without loss of generality that the metrics chosen make $F$ into a isometry right? This will make sure that isolated trajectories of the gradient flow on the left are mapped to isolated trajectories to the right. Or is the problem in the orientations? I believe you can fix this by fixing the orientations of the complexes. Or am I missing something obvious? – Thomas Rot Nov 19 '12 at 14:48
The map $F$ may not induce the identity morphism in homology although, with respect to the canonical bases, it induces the identity map between the Morse chain complexes. This can happen if $(M,g)$ is the torus $T^2$ equipped with the flat product metric and $F$ is a nontrivial element in $SL(2,\mathbb{Z})$. – Liviu Nicolaescu Nov 19 '12 at 15:44
Thank you, that clarifies a lot! I misinterpreted the example the first time. – Thomas Rot Nov 20 '12 at 9:55
A direct description of functoriality in Morse homology is given by Abbondandolo and Schwarz in Appendix A.2 of "Floer homology of cotangent bundles and the loop product".
If $\varphi: M_1 \to M_2$ is a differentiable map between manifolds, $x$ is a critical point of the Morse function on $M_1$ and $y$ is a critical point of the Morse function on $M_2$, the chain map induced by $\varphi$ is then roughly given by making the intersections $$\varphi(W^u(x))\cap W^s(y)$$ transverse and counting elements of zero-dimensional intersections. Transversality can easily be achieved by perturbing the Riemannian metrics on $M_1$ and $M_2$.
-
This construction is a special case of the one described in Tirasan Khandhawit's answer. – Tim Perutz Dec 9 '12 at 14:21
In case anybody is interested we give more detailed proofs of the functoriality described in the other answers by direct analysis of appropriate moduli spaces here.
-
|
# Monod equation
The Monod equation is a mathematical model for the growth of microorganisms. It is named for Jacques Monod (1910–1976), a French biochemist who won the Nobel Prize in Physiology or Medicine in 1965 and who proposed using an equation of this form to relate microbial growth rates in an aqueous environment to the concentration of a limiting nutrient.[1][2][3] The Monod equation has the same form as the Michaelis–Menten equation, but differs in that it is empirical while the latter is based on theoretical considerations.
The Monod equation is commonly used in environmental engineering. For example, it is used in the activated sludge model for sewage treatment.
## Equation
The growth rate μ of a considered microorganism as a function of the limiting substrate concentration [S].
The empirical Monod equation is:[4]
${\displaystyle \mu =\mu _{\max }{[S] \over K_{s}+[S]}}$
where:
• μ is the growth rate of the considered microorganism
• μmax is the maximum growth rate of this microorganism
• [S] is the concentration of the limiting substrate S for growth
• Ks is the "half-velocity constant": the value of [S] when μ/μmax = 0.5
μmax and Ks are empirical (experimental) coefficients of the Monod equation. They will differ between microorganism species and will also depend on the ambient environmental conditions, e.g., on the temperature, the pH of the solution, and the composition of the culture medium.[5]
## Application notes
The rate of substrate utilization is related to the specific growth rate as follows:[6]
${\displaystyle r_{s}=\mu X/Y}$
where:
• X is the total biomass (since the specific growth rate, μ, is normalized to the total biomass)
• Y is the yield coefficient
rs is negative by convention.
In some applications, several terms of the form [S] / (Ks + [S]) are multiplied together where more than one nutrient or growth factor has the potential to be limiting (e.g. organic matter and oxygen are both necessary to heterotrophic bacteria). When the yield coefficient, being the ratio of mass of microorganisms to mass of substrate utilized, becomes very large, this signifies that there is a deficiency of substrate available for utilization.
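A small code sketch of these two formulas follows; the parameter values are arbitrary placeholders, not taken from any cited source.

```python
def monod_growth_rate(S, mu_max, Ks):
    """Specific growth rate mu as a function of the limiting substrate concentration S."""
    return mu_max * S / (Ks + S)

def substrate_utilization_rate(S, mu_max, Ks, X, Y):
    """r_s = -mu * X / Y (negative by convention)."""
    return -monod_growth_rate(S, mu_max, Ks) * X / Y

mu = monod_growth_rate(S=5.0, mu_max=0.5, Ks=2.0)
rs = substrate_utilization_rate(S=5.0, mu_max=0.5, Ks=2.0, X=1.2, Y=0.4)
print(mu, rs)
```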
## Graphical determination of constants
As with the Michaelis–Menten equation, graphical methods may be used to fit the coefficients of the Monod equation:[4]
|
Fiber product with diagonal morphism [duplicate]
Stacks tag 01KR states that the diagram of schemes
is "by general category theory" "a fibre product diagram". I tried to show this using the universal property, but didn't obtain anything useful. How do you prove $X\times_TY$ is a fiber product of $X\times_SY$ and $T$ with respect to $T\times_ST$?
marked as duplicate by Najib Idrissi, Community♦Dec 7 '15 at 22:06
• If people don't want to spend time converting the notations in the duplicate: $X \leadsto X_1$, $Y \leadsto X_2$, $T \leadsto Y$, $S \leadsto Z$. – Najib Idrissi Dec 7 '15 at 21:25
You have to show that if an arbitrary scheme $P$ (weird name but out of letters!) maps to $X \times_S Y$ and to $T$, commuting with the given maps to $T \times_S T$, then this map factors through a map $X \times_T Y$.
To get a map $P \to X \times_T Y$, you'd better make a map $P \to X$ and a map $P \to Y$, commuting with the given map to $T$. There's only one reasonable guess for the maps $P \to X$ and $P \to Y$; namely the factors of the given map $P \to X \times_S Y$, so you're forced to check that these commute with the given map $P \to T$.
To check this, build a square diagonally to the bottom right of your picture, giving the definition of $T \times_S T$, and note the definition of the map $\Delta$ implies the two compositions $T \to T$ are both the identity. Now add a copy of $X$ and $Y$ to your picture, mapping to the two different $T$s. I claim these copies receive maps from the $X \times_S Y$ in your picture making everything commute. This is because $Y$ and $X$ both receive their $S$-scheme structure via a given map $T \to S$ (as can be confirmed by reading the link in your question).
|
Free Version
Moderate
# Ethanol Production
APES-WUTLKD
Which of the following statements is NOT true in regards to ethanol production in the American Midwest?
A
It is mixed into conventional gasoline to produce E-85.
B
It is a domestic source of energy.
C
Its production and use can result in lower $CO_2$ emissions.
D
Its production uses land that could be used to grow food.
E
Switchgrass is an alternative plant that might be used to produce ethanol.
|
# zbMATH — the first resource for mathematics
Implementation of data types by algebraic methods. (English) Zbl 0537.68026
This paper presents a precise definition of implementation of data types, which are taken to be many-sorted algebras of a given signature (sorts and operations). No particular presentation (e.g. equational) of the data types is assumed, but each element of the type is required to be reachable from the given operations. For each operation $$w_B$$ of the $$\Omega$$-algebra B to be implemented, this notion of implementation calls for the construction of an operation $$f_w$$ on the implementing algebra A and the explicit definition of an $$\Omega$$-homomorphism T from F(A) (the $$\Omega$$-subalgebra of A generated by the simulators $$f_w$$) onto B. The surjectivity of T ensures that every element of B can be represented by elements of A. This representation can be multiple as T is not required to be an isomorphism, and this implies, in the case of equational presentation of the algebras A and B, that F(A) need not satisfy the axioms defining B. Unlike other approaches, the operations of the two algebras A and B are kept separate and only the operations of A are allowed in the generalized Gödel-Herbrand-Kleene schemes used to define the simulators. Sufficient conditions are given for such a scheme to define a (possibly partial) function using substitution and replacement rules. The notions are illustrated with two examples of implementations of stacks and symbol-tables, dealing with the cases of polynomial and recursively-defined simulators, respectively. Similarities with the approach by H. Ehrig, H.-J. Kreowski and P. Padawitz are analyzed and comparisons with other approaches are mentioned.
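To make the notion of simulators concrete, here is a toy sketch, far simpler than the paper's setting and with all names invented for illustration: the abstract type B is stacks, the implementing algebra A is Python lists, the simulators realize each stack operation on A, and the map T sends a representation to the abstract value it denotes.

```python
def f_empty():            # simulator for the constant 'empty stack'
    return []

def f_push(s, x):         # simulator for 'push'
    return s + [x]

def f_pop(s):             # simulator for 'pop' (a possibly partial operation)
    if not s:
        raise ValueError("pop of empty stack is undefined")
    return s[:-1]

def T(representation):    # the homomorphism from representations onto abstract stacks
    return tuple(representation)

# Correctness requires T to commute with the operations, e.g.:
assert T(f_pop(f_push(f_empty(), 7))) == T(f_empty())
```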
##### MSC:
68P05 Data structures
68N01 General topics in the theory of software
08A99 Algebraic structures
03C05 Equational classes, universal algebra in model theory
##### Keywords:
simulators; polynomial terms; recursive schemes
Full Text:
##### References:
[1] Guttag, J.V.; Horning, J.J., The algebraic specification of abstract data types, Acta inform., 10, 27-52, (1978) · Zbl 0369.68010
[2] Guttag, J.V.; Horowitz, E.; Musser, D., Abstract data types and software validation, Comm. ACM, 21, 1048-1064, (1978) · Zbl 0387.68012
[3] Zilles, S., An introduction to data algebras, (), 248-272
[4] Goguen, J.A.; Thatcher, J.W.; Wagner, E.G.; Wright, J.B., Initial algebra semantics and continuous algebras, J. assoc. comput. Mach., 24, 68-95, (1977) · Zbl 0359.68018
[5] Goguen, J.A.; Thatcher, J.W.; Wagner, E.G., Abstract data types as initial algebras and the correctness of data representation, (), 80-149
[6] Blum, E.K.; Lynch, N., Relative complexity of algebras, Math. systems theory, 14, 193-214, (1981) · Zbl 0473.68031
[7] Blum, E.K.; Estes, D.R., A generalization of the homomorphism concept, Algebra universalis, 7, 143-161, (1977) · Zbl 0386.08003
[8] Blum, E.K., Machines, algebras and axioms, (), 1-29
[9] Majster, M., Limits on the algebraic specification of abstract data types, SIGPLAN notices, Vol. 12, 37-42, (1977)
[10] Bergstra, J.A.; Tucker, J., On the adequacy of finite equational methods for data types specification, SIGPLAN notices, Vol. 14, 13-18, (1979)
[11] Enrich, H.D., Extensions and implementations of abstract data type specifications, (), 155-163
[12] Enrich, H.D., On the theory of specification, implementation and parametrization of abstract data types, J. assoc. comput. Mach., 29, 1, 206-227, (1982) · Zbl 0478.68020
[13] Ehrig, H.; Kreowski, H.-J.; Padawitz, P., Stepwise specification and implementation of abstract data types, (), 205-226
[14] Ehrig, H.; Mahr, B., Complexity of algebraic implementations for abstract data types, J. comput. system sci., 23, (1981) · Zbl 0474.68021
[15] Blum, E.K., Towards a theory of semantics and compilers for programming languages, J. comput. system sci., 3, 248-275, (1969) · Zbl 0174.28903
[16] Gratzer, G., Universal algebra, (1968), Von Nostrand Princeton, N.J · Zbl 0182.34201
[17] Cohn, P.M., Universal algebra, (1965), Harper & Row New York · Zbl 0141.01002
[18] Parisi-Presicce, F., On the faithful regular extensions of iterative algebras, () · Zbl 0884.68084
[19] Courcelle, B.; Nivat, M., The algebraic semantics of recursive program schemes, (), 16-30 · Zbl 0384.68016
[20] Ganzinger, H., Parameterized specifications: parameter passing and optimizing implementation, Tech. U. Munich-18110, (August 1981)
[21] Ehrig, H.; Kreowski, H.-J.; Padawitz, P., Algebraic implementation of abstract data types: concept, syntax, semantics, and correctness, (), 142-156 · Zbl 0412.68018
[22] Theoret. Comput. Sci., in press.
[23] Ehrig, H.; Kreowski, H.-J.; Thatcher, J.W.; Wagner, E.G.; Wright, J.B., Parameter passing in algebraic specification languages, (), 157-168
[24] Hupbach, U.L., Abstract implementation of abstract data types, (), 291-304 · Zbl 0499.68011
[25] Elgot, C.C., Monadic computation and iterative algebraic theories, (), 175-230 · Zbl 0327.02040
[26] Tiuryn, J.; Tiuryn, J., Fixed-points and algebras with infinitely long expressions. II. regular algebras, Fund. inform., Fund. inform., 2, 317-335, (1979) · Zbl 0436.68015
[27] Tiuryn, J., Unique fixed points vs least fixed points, Theoret. comput. sci., 12, 3, 229-254, (1980) · Zbl 0439.68026
[28] Gallier, J.H., Recursion-closed algebraic theories, J. comput. system sci., 23, 69-105, (1981) · Zbl 0472.68006
[29] Garland, S.G.; Luckham, D.C., Program schemes, recursion schemes and formal languages, J. comput. system sci., 7, 119-160, (1973) · Zbl 0277.68010
[30] Thatcher, J.W.; Wagner, E.G.; Wright, J.B., Programming languages as mathematical objects, () · Zbl 0394.68008
[31] Courcelle, B.; Guessarian, I., On some classes of interpretations, J. comput. system sci., 17, 388-413, (1978) · Zbl 0392.68009
[32] Scott, D.; Strachey, C., Towards a mathematical semantics for computer languages, () · Zbl 0268.68004
[33] Kfoury, D., Comparing algebraic structures up to algorithmic equivalence, (), 253-263 · Zbl 0281.68007
[34] Englefriet, J.; Schmidt, E.; Englefriet, J.; Schmidt, E., IO and IO. I. II, J. comput. system sci., J. comput. system sci., 16, (1978)
[35] Mailbaum, T.S.E., A generalized approach to formal languages, J. comput. system sci., 8, 409-439, (1974) · Zbl 0361.68113
[36] Bloom, S.L.; Elgot, C.C., The existence and construction of free iterative theories, J. comput. system sci., 12, 305-318, (1976) · Zbl 0333.68017
[37] Ginalt, S., Regular trees and the free iterative theory, J. comput. system sci., 18, 228-242, (1979)
[38] Guessarian, I., Algebraic semantics, () · Zbl 0602.68017
[39] Manna, Z.; Ness, S.; Vullemin, J., Inductive methods for proving properties of programs, Comm. ACM, 16, 8, (1973)
[40] Goguen, J.A.; Thatcher, J.W.; Wagner, E.G.; Wright, J.B., An introduction to categories, algebraic theories and algebras, IBM, RC 5369, (1975)
[41] Kleene, S.C.; Kleene, S.C., Introduction to metamathematics, (), 347 ff · Zbl 0047.00703
[42] Rogers, H., Theory of recursive functions and effective computability, (), 194
[43] Thatcher, J.W., Characterizing derivation trees of context-free grammars through a generalization of finite automata theory, J. comput. system sci., 1, 317-322, (1967) · Zbl 0155.01802
[44] Levy, M.R.; Mailbaum, T.S.E., Continuous data types, SIAM J. comput., 11, 2, 201-216, (1982) · Zbl 0479.68016
[45] Sannella, D.; Wirsing, M., Implementation of parametrized specifications, ()
[46] M. Broy, B. Moller, P. Pepper, and M. Wirsing, A model-independent approach to implementation of abstract data types, in “Proceedings, Symp. on Algorithmic Logic and the Programming Language LOGLAN, Poznam, Poland”, Lecture Notes in Computer Science, to appear.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
# ML Aggarwal Class 7 Solutions for ICSE Maths Chapter 4 Exponents and Powers Objective Type Questions
Mental Maths
Question 1.
Fill in the blanks:
(i) In the expression $$(-5)^9$$, exponent = ……… and base = ………..
(ii) If the base is $$\frac { -3 }{ 4 }$$ and exponent is 5, then exponential form is ………
(iii) The expression $$(x^2y^5)^3$$ in the simplest form is ………
(iv) If $$(100)^0 = 10^n$$, then the value of n is ……..
(ix) $$35070000 = 3.507 \times 10^{……}$$
(x) If $$(-2)^n = -128$$, then n = ……..
Solution:
Question 2.
State whether the following statements are true (T) or false (F):
(i) If a is a rational number then $$a^m \times a^n = a^{m \times n}$$
(ii) $$2^3 \times 3^2 = 6^5$$.
(iii) The value of $$(-2)^{-3}$$ is $$\frac { -1 }{ 8 }$$.
(iv) The value of the expression $$2^9 \times 2^{91} - 2^{19} \times 2^{81}$$ is 1.
(v) $$3^0 = (1000)^0$$.
(vi) $$5^6 \div (-2)^6 = \frac { -5 }{ 2 }$$
(vii) $$5^0 \times 3^0 = 8^0$$
(viii) $$\frac { { 2 }^{ 3 } }{ 7 } <\left( \frac { 2 }{ 7 } \right) ^{ 3 }$$
(ix) $$(10 + 10)^4 = 10^4 + 10^4$$.
(x) $$x^0 \times x^0 = x^0 \div x^0$$, where x is a non-zero rational number.
(xi) $$4^9$$ is greater than $$16^3$$.
(xii) $$x^m + x^m = x^{2m}$$, where x is a non-zero rational number and m is a positive integer.
(xiii) $$\left( \frac { 4 }{ 3 } \right) ^{ 5 }\times \left( \frac { 5 }{ 7 } \right) ^{ 5 }=\left( \frac { 4 }{ 3 } +\frac { 5 }{ 7 } \right) ^{ 5 }$$
Solution:
Multiple Choice Questions
Choose the correct answer from the given four options (3 to 18):
Question 3.
$$a \times a \times a \times b \times b \times b$$ is equal to
(a) $$a^3b^2$$
(b) $$a^2b^3$$
(c) $$(ab)^3$$
(d) $$a^6b^6$$
Solution:
Question 4.
$$(-2)^3 \times (-3)^2$$ is equal to
(a) $$6^5$$
(b) $$(-6)^6$$
(c) 72
(d) -72
Solution:
Question 5.
The expression $$(pqr)^3$$ is equal to
(a) $$p^3qr$$
(b) $$pq^3r$$
(c) $$pqr^3$$
(d) $$p^3q^3r^3$$
Solution:
Question 6.
Solution:
Question 7.
Solution:
Question 8.
The value of $$(5^{30} \times 5^{20}) \div (5^5)^9$$ in the exponential form is
(a) $$5^{-5}$$
(b) $$5^5$$
(c) $$5^{50}$$
(d) $$5^{95}$$
Solution:
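One way to evaluate this, using the laws of exponents: $$(5^{30} \times 5^{20}) \div (5^5)^9 = 5^{50} \div 5^{45} = 5^5$$, which points to option (b).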
Question 9.
The law $$\left( \frac { a }{ b } \right) ^{ n }=\frac { { a }^{ n } }{ b^{ n } }$$ does not hold when
(a) a = 3, b = 2
(b) a = -2, b = 3
(c) n = 0
(d) b = 0
Solution:
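One way to see this: when b = 0 the right-hand side $$\frac { { a }^{ n } }{ b^{ n } }$$ involves division by zero and is undefined, so the law fails when b = 0, i.e. option (d).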
Question 10
Solution:
Question 11.
Solution:
Question 12.
The value of $$5^{-1} - 6^{-1}$$ is
(a) $$\frac { 1 }{ 30 }$$
(b) $$\frac { -1 }{ 30 }$$
(c) 30
(d) -30
Solution:
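One way to evaluate this: $$5^{-1} - 6^{-1} = \frac { 1 }{ 5 } - \frac { 1 }{ 6 } = \frac { 6-5 }{ 30 } = \frac { 1 }{ 30 }$$, i.e. option (a).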
Question 13.
The value of $$(6^{-1} - 8^{-1})^{-1}$$ is
(a) $$\frac { -1 }{ 2 }$$
(b) -2
(c) $$\frac { 1 }{ 24 }$$
(d) 24
Solution:
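One way to evaluate this: $$(6^{-1} - 8^{-1})^{-1} = \left( \frac { 1 }{ 6 } - \frac { 1 }{ 8 } \right) ^{ -1 } = \left( \frac { 4-3 }{ 24 } \right) ^{ -1 } = 24$$, i.e. option (d).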
Question 14.
Solution:
Question 15.
If $$2^3 + 1^3 = 3^x$$, then the value of x is
(a) 0
(b) 1
(c) 2
(d) 3
Solution:
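A quick check: $$2^3 + 1^3 = 8 + 1 = 9 = 3^2$$, so x = 2, i.e. option (c).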
Question 16.
The standard form of 751.65 is
(a) $$7.5165 \times 10^2$$
(b) $$75.165 \times 10^1$$
(c) $$7.5165 \times 10^4$$
(d) $$7.51 \times 10^2$$
Solution:
Question 17.
The usual form of $$5.658 \times 10^5$$ is
(a) 5658
(b) 56580
(c) 565800
(d) 5658000
Solution:
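A quick check: $$5.658 \times 10^5 = 565800$$, i.e. option (c).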
Question 18.
Which of the following numbers is in the standard form?
(a) $$26.57 \times 10^4$$
(b) $$2.657 \times 10^4$$
(c) $$265.7 \times 10^3$$
(d) $$0.2657 \times 10^6$$
Solution:
Value Based Questions
Question 1.
Typhoid is caused by bacteria Salmonella typhi. The size of Salmonella typhi is about 0.0000000005 mm. Express it in standard form. Vinay is suffering from typhoid, his doctor advised him to take healthy food and avoid eating food or drinking beverages from street vendors.
Why should we eat healthy food and why should we not eat food from street vendors?
Solution:
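In standard form, 0.0000000005 mm = $$5 \times 10^{-10}$$ mm. As for the value-based part: since typhoid is caused by the bacteria Salmonella typhi, food and drinks prepared without proper hygiene (as can be the case with street vendors) can carry the infection, whereas healthy, hygienically prepared food helps the body stay well and recover.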
Higher Order Thinking Skills (HOTS)
Question 1.
Solution:
Question 2.
Solution:
|
# New bounds on proximity and remoteness in graphs
Document Type : Original paper
Author
University of Johannesburg
Abstract
The average distance of a vertex $v$ of a connected graph $G$ is the arithmetic mean of the distances from $v$ to all other vertices of $G$. The proximity $\pi(G)$ and the remoteness $\rho(G)$ of $G$ are defined as the minimum and maximum, respectively, average distance of the vertices of $G$. In this paper we investigate the difference between proximity or remoteness and the classical distance parameters diameter and radius. Among other results we show that in a graph of order $n$ and minimum degree $\delta$ the difference between diameter and proximity and the difference between radius and proximity cannot exceed $\frac{9n}{4(\delta+1)}+c_1$ and $\frac{3n}{4(\delta+1)}+c_2$, respectively, for constants $c_1$ and $c_2$ which depend on $\delta$ but not on $n$. These bounds improve bounds by Aouchiche and Hansen \cite{AouHan2011} in terms of order alone by about a factor of $\frac{3}{\delta+1}$. We further give lower bounds on the remoteness in terms of diameter or radius. Finally we show that the average distance of a graph, i.e., the average of the distances between all pairs of vertices, cannot exceed twice the proximity.
Keywords
Main Subjects
#### References
[1] M. Aouchiche, G. Caporossi, and P. Hansen. Variable Neighbourhood Search for Extremal Graphs, 20 Automated Comparison of Graph Invariants. MATCH Commun. Math. Comput. Chem. 58 (2007), 365-384.
[2] M. Aouchiche and P. Hansen. Nordhaus-Gaddum relations for proximity and remoteness in graphs. Comput. Math. Appl. 59(no. 8) (2010), 2827-2835.
[3] M. Aouchiche, P. Hansen. Proximity and remoteness in graphs: results and conjectures. Networks 58 (no. 2) (2011), 95-102.
[4] C.A. Barefoot, R.C. Entringer, and L.A. Székely. Extremal values for ratios of distances in trees. Discrete Appl. Math. 80 (1997), 37-56.
[5] P. Dankelmann, G. Dlamini, and H.C. Swart. Upper bounds on distance measures in $K_{2,l}$-free graphs. (manuscript)
[6] P. Dankelmann. Proximity, remoteness, and minimum degree. Discrete Appl. Math. 184 (2015), 223-228.
[7] G. Dlamini. Aspects of distances in graphs, Ph.D. Thesis, University of Natal, Durban, 2003.
[8] R.C. Entringer, D.E. Jackson, and D.A. Snyder. Distance in graphs. Czechoslovak Math. J. 26 (101) no. 2 (1976), 283-296.
[9] P. Erdős, J. Pach, R. Pollack, and Z. Tuza. Radius, diameter, and minimum degree. J. Combin. Theory Ser. B 47 (1989), 73-79.
[10] B. Ma, B. Wu, and W. Zhang. Proximity and average eccentricity of a graph. Inform. Process. Lett. 112(no. 10) (2012), 392-395.
[11] B. Wu and W. Zhang. Average distance, radius and remoteness of a graph. Ars Math. Contemp. 7 (2014), 441-452.
[12] B. Zelinka. Medians and peripherians of trees. Arch. Math. (Brno) 4 (1968), 87-95.
|
# Finding whether two events are independent or not
A bag contains 30 balls: 20 red and 10 blue. Two balls are drawn from the bag. Let A be the event that the first ball is red, and B the event that the second ball is red. Are A and B independent?
I have no idea how to solve this problem.
for A we want red for first position (20/30) and for second position it can be any color but for B if the first position comes out to be blue then we will have (20/30) but if we have red in first position ...
-
One question to ask is whether the outcome of the first event affects the probability of the second event? – jimmywho Dec 4 '12 at 6:41
Yes. Also, if two events are independent then $P(A\cap B)=P(A)P(B)$. So if the two sides are not equal then the events cannot be independent. – jimmywho Dec 4 '12 at 6:46
Can you please explain more? – Hooman Dec 4 '12 at 6:52
Presumably we are sampling without replacement.
Let $A$ be the event the first ball is red, and $B$ the event the second ball is red. Intuitively the answer is clear. Take an extreme example of $1$ red ball and $10$ blue. If we know the first ball was red, then we know for sure that the second will be blue. But we will do some formal calculations, since all too often in probability the intuition is not entirely reliable.
The probability that the first ball is red is $\dfrac{20}{30}$. Given that the first ball is red, the probability the second is red is $\dfrac{19}{29}$, for one red ball is drawn. So, in symbols, $\Pr(B|A)=\dfrac{19}{29}$.
Now what is the plain $\Pr(B)$? Imagine that the balls all have unique ID numbers, and that we keep drawing them out one at a time until all the balls have been drawn. All sequences of ID numbers are equally likely. Since there are $20$ red balls, the probability a red ball occupies the second position is simply $\dfrac{20}{30}$. This is $\Pr(B)$.
Since $\Pr(B|A)\ne \Pr(B)$, the events $A$ and $B$ are not independent.
You probably had another definition of independence, namely "the events $A$ and $B$ are independent if and only if $\Pr(A\cap B)=\Pr(A)\Pr(B)$."
We can check the non-independence by using the definition. By the discussion above, $\Pr(A\cap B)=\dfrac{20}{30}\cdot\dfrac{19}{29}$.
However $\Pr(A)\Pr(B)=\dfrac{20}{30}\cdot\dfrac{20}{30}$.
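Numerically, $\Pr(A\cap B)=\dfrac{20}{30}\cdot\dfrac{19}{29}=\dfrac{38}{87}\approx 0.437$, while $\Pr(A)\Pr(B)=\dfrac{20}{30}\cdot\dfrac{20}{30}=\dfrac{4}{9}\approx 0.444$, so the two quantities indeed differ.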
Remark: After a while, it will be obvious that given no information on the colour of the first ball, the probability that the second ball is red is $\dfrac{20}{30}$. But let's show this an uglier way.
The second ball can be red in two ways: (i) First is red and second is red or (ii) First is blue and second is red. By earlier discussion, the probability of (i) is $\dfrac{20}{30}\cdot\dfrac{19}{29}$.
By a similar argument, the probability of (ii) is $\dfrac{10}{30}\cdot\dfrac{20}{29}$.
Add up and simplify. After a while we get $\dfrac{2}{3}$, or equivalently $\dfrac{20}{30}$.
-
As Always, very clear answer and really easy to understand, Thanks Man, Thanks – Hooman Dec 4 '12 at 7:24
There's a little typo : "Since $\Pr(B|A)\ne \Pr(B)$,..." – Thibaut Dumont Dec 4 '12 at 8:34
@ThibautDumont: Thank you, fixed I hope. Unfortunately probably not the only one. – André Nicolas Dec 4 '12 at 8:45
|
NAG Toolbox: nag_tsa_uni_garch_asym1_estim (g13fa)
Purpose
nag_tsa_uni_garch_asym1_estim (g13fa) estimates the parameters of either a standard univariate regression GARCH process, or a univariate regression-type I $\text{AGARCH}\left(p,q\right)$ process (see Engle and Ng (1993)).
Syntax
[theta, se, sc, covr, hp, et, ht, lgf, ifail] = g13fa(dist, yt, x, ip, iq, nreg, mn, isym, theta, hp, copts, maxit, tol, 'num', num, 'npar', npar)
[theta, se, sc, covr, hp, et, ht, lgf, ifail] = nag_tsa_uni_garch_asym1_estim(dist, yt, x, ip, iq, nreg, mn, isym, theta, hp, copts, maxit, tol, 'num', num, 'npar', npar)
Note: the interface to this routine has changed since earlier releases of the toolbox:
At Mark 25: nreg was made optional
Description
A univariate regression-type I $\text{AGARCH}\left(p,q\right)$ process, with $q$ coefficients ${\alpha }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,q$, $p$ coefficients ${\beta }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,p$, and $k$ linear regression coefficients ${b}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,k$, can be represented by:
${y}_{t}={b}_{o}+{x}_{t}^{\mathrm{T}}b+{\epsilon }_{t}$ (1)
${h}_{t}={\alpha }_{0}+\sum _{i=1}^{q}{\alpha }_{i}{\left({\epsilon }_{t-i}+\gamma \right)}^{2}+\sum _{i=1}^{p}{\beta }_{i}{h}_{t-i},\quad t=1,2,\dots ,T$ (2)
where ${\epsilon }_{t}\mid {\psi }_{t-1}=N\left(0,{h}_{t}\right)$ or ${\epsilon }_{t}\mid {\psi }_{t-1}={S}_{t}\left(\mathit{df},{h}_{t}\right)$. Here ${S}_{t}$ is a standardized Student's $t$-distribution with $\mathit{df}$ degrees of freedom and variance ${h}_{t}$, $T$ is the number of terms in the sequence, ${y}_{t}$ denotes the endogenous variables, ${x}_{t}$ the exogenous variables, ${b}_{o}$ the regression mean, $b$ the regression coefficients, ${\epsilon }_{t}$ the residuals, ${h}_{t}$ the conditional variance, $\mathit{df}$ the number of degrees of freedom of the Student's $t$-distribution, and ${\psi }_{t}$ the set of all information up to time $t$.
nag_tsa_uni_garch_asym1_estim (g13fa) provides an estimate for $\stackrel{^}{\theta }$, the parameter vector $\theta =\left({b}_{o},{b}^{\mathrm{T}},{\omega }^{\mathrm{T}}\right)$ where ${b}^{\mathrm{T}}=\left({b}_{1},\dots ,{b}_{k}\right)$, ${\omega }^{\mathrm{T}}=\left({\alpha }_{0},{\alpha }_{1},\dots ,{\alpha }_{q},{\beta }_{1},\dots ,{\beta }_{p},\gamma \right)$ when ${\mathbf{dist}}=\text{'N'}$ and ${\omega }^{\mathrm{T}}=\left({\alpha }_{0},{\alpha }_{1},\dots ,{\alpha }_{q},{\beta }_{1},\dots ,{\beta }_{p},\gamma ,\mathit{df}\right)$ when ${\mathbf{dist}}=\text{'T'}$.
isym, mn and nreg can be used to simplify the $\text{GARCH}\left(p,q\right)$ expression in (1) as follows:
No Regression and No Mean
• ${y}_{t}={\epsilon }_{t}$,
• ${\mathbf{isym}}=0$,
• ${\mathbf{mn}}=0$,
• ${\mathbf{nreg}}=0$ and
• $\theta$ is a $\left(p+q+1\right)$ vector when ${\mathbf{dist}}=\text{'N'}$ and a $\left(p+q+2\right)$ vector when ${\mathbf{dist}}=\text{'T'}$.
No Regression
• ${y}_{t}={b}_{o}+{\epsilon }_{t}$,
• ${\mathbf{isym}}=0$,
• ${\mathbf{mn}}=1$,
• ${\mathbf{nreg}}=0$ and
• $\theta$ is a $\left(p+q+2\right)$ vector when ${\mathbf{dist}}=\text{'N'}$ and a $\left(p+q+3\right)$ vector when ${\mathbf{dist}}=\text{'T'}$.
Note: if the ${y}_{t}=\mu +{\epsilon }_{t}$, where $\mu$ is known (not to be estimated by nag_tsa_uni_garch_asym1_estim (g13fa)) then (1) can be written as ${y}_{t}^{\mu }={\epsilon }_{t}$, where ${y}_{t}^{\mu }={y}_{t}-\mu$. This corresponds to the case No Regression and No Mean, with ${y}_{t}$ replaced by ${y}_{t}-\mu$.
No Mean
• ${y}_{t}={x}_{t}^{\mathrm{T}}b+{\epsilon }_{t}$,
• ${\mathbf{isym}}=0$,
• ${\mathbf{mn}}=0$,
• ${\mathbf{nreg}}=k$ and
• $\theta$ is a $\left(p+q+k+1\right)$ vector when ${\mathbf{dist}}=\text{'N'}$ and a $\left(p+q+k+2\right)$ vector when ${\mathbf{dist}}=\text{'T'}$.
References
Bollerslev T (1986) Generalised autoregressive conditional heteroskedasticity Journal of Econometrics 31 307–327
Engle R (1982) Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation Econometrica 50 987–1008
Engle R and Ng V (1993) Measuring and testing the impact of news on volatility Journal of Finance 48 1749–1777
Hamilton J (1994) Time Series Analysis Princeton University Press
Parameters
Compulsory Input Parameters
1: $\mathrm{dist}$ – string (length ≥ 1)
The type of distribution to use for ${e}_{t}$.
${\mathbf{dist}}=\text{'N'}$
A Normal distribution is used.
${\mathbf{dist}}=\text{'T'}$
A Student's $t$-distribution is used.
Constraint: ${\mathbf{dist}}=\text{'N'}$ or $\text{'T'}$.
2: $\mathrm{yt}\left({\mathbf{num}}\right)$ – double array
The sequence of observations, ${y}_{\mathit{t}}$, for $\mathit{t}=1,2,\dots ,T$.
3: $\mathrm{x}\left(\mathit{ldx},:\right)$ – double array
The first dimension of the array x must be at least ${\mathbf{num}}$.
The second dimension of the array x must be at least ${\mathbf{nreg}}$.
Row $\mathit{t}$ of x must contain the time dependent exogenous vector ${x}_{\mathit{t}}$, where ${x}_{\mathit{t}}^{\mathrm{T}}=\left({x}_{\mathit{t}}^{1},\dots ,{x}_{\mathit{t}}^{k}\right)$, for $\mathit{t}=1,2,\dots ,T$.
4: $\mathrm{ip}$int64int32nag_int scalar
The number of coefficients, ${\beta }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,p$.
Constraint: ${\mathbf{ip}}\ge 0$ (see also npar).
5: $\mathrm{iq}$int64int32nag_int scalar
The number of coefficients, ${\alpha }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,q$.
Constraint: ${\mathbf{iq}}\ge 1$ (see also npar).
6: $\mathrm{nreg}$int64int32nag_int scalar
$k$, the number of regression coefficients.
Constraint: ${\mathbf{nreg}}\ge 0$ (see also npar).
7: $\mathrm{mn}$int64int32nag_int scalar
If ${\mathbf{mn}}=1$, the mean term ${b}_{0}$ will be included in the model.
Constraint: ${\mathbf{mn}}=0$ or $1$.
8: $\mathrm{isym}$int64int32nag_int scalar
If ${\mathbf{isym}}=1$, the asymmetry term $\gamma$ will be included in the model.
Constraint: ${\mathbf{isym}}=0$ or $1$.
9: $\mathrm{theta}\left({\mathbf{npar}}\right)$ – double array
The initial parameter estimates for the vector $\theta$.
The first element must contain the coefficient ${\alpha }_{o}$ and the next iq elements must contain the coefficients ${\alpha }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,q$.
The next ip elements must contain the coefficients ${\beta }_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,p$.
If ${\mathbf{isym}}=1$, the next element must contain the asymmetry parameter $\gamma$.
If ${\mathbf{dist}}=\text{'T'}$, the next element must contain $\mathit{df}$, the number of degrees of freedom of the Student's $t$-distribution.
If ${\mathbf{mn}}=1$, the next element must contain the mean term ${b}_{o}$.
If ${\mathbf{copts}}\left(2\right)=\mathit{false}$, the remaining nreg elements are taken as initial estimates of the linear regression coefficients ${b}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,k$.
10: $\mathrm{hp}$ – double scalar
If ${\mathbf{copts}}\left(2\right)=\mathit{false}$, hp is the value to be used for the pre-observed conditional variance; otherwise hp is not referenced.
11: $\mathrm{copts}\left(2\right)$ – logical array
The options to be used by nag_tsa_uni_garch_asym1_estim (g13fa).
${\mathbf{copts}}\left(1\right)=\mathit{true}$
Stationary conditions are enforced, otherwise they are not.
${\mathbf{copts}}\left(2\right)=\mathit{true}$
The function provides initial parameter estimates of the regression terms, otherwise these are to be provided by you.
12: $\mathrm{maxit}$int64int32nag_int scalar
The maximum number of iterations to be used by the optimization function when estimating the $\text{GARCH}\left(p,q\right)$ parameters. If maxit is set to $0$, the standard errors, score vector and variance-covariance are calculated for the input value of $\theta$ in theta when ${\mathbf{dist}}=\text{'N'}$; however the value of $\theta$ is not updated.
Constraint: ${\mathbf{maxit}}\ge 0$.
13: $\mathrm{tol}$ – double scalar
The tolerance to be used by the optimization function when estimating the $\text{GARCH}\left(p,q\right)$ parameters.
Optional Input Parameters
1: $\mathrm{num}$int64int32nag_int scalar
Default: the dimension of the array yt and the first dimension of the array x. (An error is raised if these dimensions are not equal.)
$T$, the number of terms in the sequence.
Constraints:
• ${\mathbf{num}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{ip}},{\mathbf{iq}}\right)$;
• ${\mathbf{num}}\ge {\mathbf{nreg}}+{\mathbf{mn}}$.
2: $\mathrm{npar}$int64int32nag_int scalar
Default: the dimension of the array theta.
The number of parameters to be included in the model. ${\mathbf{npar}}=1+{\mathbf{iq}}+{\mathbf{ip}}+{\mathbf{isym}}+{\mathbf{mn}}+{\mathbf{nreg}}$ when ${\mathbf{dist}}=\text{'N'}$, and ${\mathbf{npar}}=2+{\mathbf{iq}}+{\mathbf{ip}}+{\mathbf{isym}}+{\mathbf{mn}}+{\mathbf{nreg}}$ when ${\mathbf{dist}}=\text{'T'}$.
Constraint: ${\mathbf{npar}}<20$.
Output Parameters
1: $\mathrm{theta}\left({\mathbf{npar}}\right)$ – double array
The estimated values $\stackrel{^}{\theta }$ for the vector $\theta$.
The first element contains the coefficient ${\alpha }_{o}$, the next iq elements contain the coefficients ${\alpha }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,q$.
The next ip elements are the coefficients ${\beta }_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,p$.
If ${\mathbf{isym}}=1$, the next element contains the estimate for the asymmetry parameter $\gamma$.
If ${\mathbf{dist}}=\text{'T'}$, the next element contains an estimate for $\mathit{df}$, the number of degrees of freedom of the Student's $t$-distribution.
If ${\mathbf{mn}}=1$, the next element contains an estimate for the mean term ${b}_{o}$.
The final nreg elements are the estimated linear regression coefficients ${b}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,k$.
2: $\mathrm{se}\left({\mathbf{npar}}\right)$ – double array
The standard errors for $\stackrel{^}{\theta }$.
The first element contains the standard error for ${\alpha }_{o}$. The next iq elements contain the standard errors for ${\alpha }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,q$. The next ip elements are the standard errors for ${\beta }_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,p$.
If ${\mathbf{isym}}=1$, the next element contains the standard error for $\gamma$.
If ${\mathbf{dist}}=\text{'T'}$, the next element contains the standard error for $\mathit{df}$, the number of degrees of freedom of the Student's $t$-distribution.
If ${\mathbf{mn}}=1$, the next element contains the standard error for ${b}_{o}$.
The final nreg elements are the standard errors for ${b}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,k$.
3: $\mathrm{sc}\left({\mathbf{npar}}\right)$ – double array
The scores for $\stackrel{^}{\theta }$.
The first element contains the score for ${\alpha }_{o}$.
The next iq elements contain the score for ${\alpha }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,q$.
The next ip elements are the scores for ${\beta }_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,p$.
If ${\mathbf{isym}}=1$, the next element contains the score for $\gamma$.
If ${\mathbf{dist}}=\text{'T'}$, the next element contains the score for $\mathit{df}$, the number of degrees of freedom of the Student's $t$-distribution.
If ${\mathbf{mn}}=1$, the next element contains the score for ${b}_{o}$.
The final nreg elements are the scores for ${b}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,k$.
4: $\mathrm{covr}\left(\mathit{ldcovr},{\mathbf{npar}}\right)$ – double array
The covariance matrix of the parameter estimates $\stackrel{^}{\theta }$, that is the inverse of the Fisher Information Matrix.
5: $\mathrm{hp}$ – double scalar
If ${\mathbf{copts}}\left(2\right)=\mathit{true}$, hp is the estimated value of the pre-observed conditional variance.
6: $\mathrm{et}\left({\mathbf{num}}\right)$ – double array
The estimated residuals, ${\epsilon }_{\mathit{t}}$, for $\mathit{t}=1,2,\dots ,T$.
7: $\mathrm{ht}\left({\mathbf{num}}\right)$ – double array
The estimated conditional variances, ${h}_{\mathit{t}}$, for $\mathit{t}=1,2,\dots ,T$.
8: $\mathrm{lgf}$ – double scalar
The value of the log-likelihood function at $\stackrel{^}{\theta }$.
9: $\mathrm{ifail}$int64int32nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see Error Indicators and Warnings).
Error Indicators and Warnings
Note: nag_tsa_uni_garch_asym1_estim (g13fa) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.
${\mathbf{ifail}}=1$
On entry, ${\mathbf{nreg}}<0$, or ${\mathbf{mn}}>1$, or ${\mathbf{mn}}<0$, or ${\mathbf{isym}}>1$, or ${\mathbf{isym}}<0$, or ${\mathbf{iq}}<1$, or ${\mathbf{ip}}<0$, or ${\mathbf{npar}}\ge 20$, or npar has an invalid value, or $\mathit{ldcovr}<{\mathbf{npar}}$, or $\mathit{ldx}<{\mathbf{num}}$, or ${\mathbf{dist}}\ne \text{'N'}$, or ${\mathbf{dist}}\ne \text{'T'}$, or ${\mathbf{maxit}}<0$, or ${\mathbf{num}}<\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{ip}},{\mathbf{iq}}\right)$, or ${\mathbf{num}}<{\mathbf{nreg}}+{\mathbf{mn}}$.
${\mathbf{ifail}}=2$
On entry, $\mathit{lwork}<\left({\mathbf{nreg}}+3\right)×{\mathbf{num}}+{\mathbf{npar}}+403$.
${\mathbf{ifail}}=3$
The matrix $X$ is not full rank.
${\mathbf{ifail}}=4$
The information matrix is not positive definite.
${\mathbf{ifail}}=5$
The maximum number of iterations has been reached.
W ${\mathbf{ifail}}=6$
The log-likelihood cannot be optimized any further.
${\mathbf{ifail}}=7$
No feasible model parameters could be found.
${\mathbf{ifail}}=-99$
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
Not applicable.
None.
Example
This example fits a $\text{GARCH}\left(1,1\right)$ model with Student's $t$-distributed residuals to some simulated data.
The process parameter estimates, $\stackrel{^}{\theta }$, are obtained using nag_tsa_uni_garch_asym1_estim (g13fa), and a four step ahead volatility estimate is computed using nag_tsa_uni_garch_asym1_forecast (g13fb).
The data was simulated using nag_rand_times_garch_asym1 (g05pd).
```function g13fa_example
fprintf('g13fa example results\n\n');
num = 100;
% Series
yt = [ 9.04; 9.49; 9.12; 9.23; 9.35;
9.09; 9.75; 9.23; 8.76; 9.17;
9.20; 9.64; 8.74; 9.23; 9.42;
9.70; 9.55; 10.00; 9.18; 9.77;
9.80; 9.56; 9.28; 9.68; 9.51;
9.51; 8.97; 9.30; 9.52; 9.41;
9.53; 9.75; 9.72; 9.38; 9.28;
9.42; 9.74; 9.75; 9.60; 9.90;
9.06; 9.92; 9.21; 9.57; 9.42;
8.65; 8.85; 9.61; 10.77; 10.19;
10.47; 10.10; 10.21; 9.96; 9.66;
9.79; 10.30; 9.68; 10.08; 10.38;
9.69; 9.02; 9.89; 10.46; 10.47;
9.99; 9.76; 9.78; 9.62; 10.43;
10.42; 9.95; 9.95; 9.70; 10.24;
9.78; 9.98; 8.73; 10.23; 9.10;
10.27; 9.85; 10.44; 10.30; 10.08;
10.20; 10.14; 9.89; 9.90; 11.33;
9.71; 9.40; 9.97; 10.92; 9.76;
10.16; 10.43; 9.60; 10.29; 10.03];
% Exogenous variables
x = [0.12, 2.40; 0.12, 2.40; 0.13, 2.40; 0.14, 2.40; 0.14, 2.40;
0.15, 2.40; 0.16, 2.40; 0.16, 2.40; 0.17, 2.40; 0.18, 2.41;
0.19, 2.41; 0.19, 2.41; 0.20, 2.41; 0.21, 2.41; 0.21, 2.41;
0.22, 2.41; 0.23, 2.41; 0.23, 2.41; 0.24, 2.41; 0.25, 2.42;
0.25, 2.42; 0.26, 2.42; 0.26, 2.42; 0.27, 2.42; 0.28, 2.42;
0.28, 2.42; 0.29, 2.42; 0.30, 2.42; 0.30, 2.42; 0.31, 2.43;
0.32, 2.43; 0.32, 2.43; 0.33, 2.43; 0.33, 2.43; 0.34, 2.43;
0.35, 2.43; 0.35, 2.43; 0.36, 2.43; 0.37, 2.43; 0.37, 2.44;
0.38, 2.44; 0.38, 2.44; 0.39, 2.44; 0.39, 2.44; 0.40, 2.44;
0.41, 2.44; 0.41, 2.44; 0.42, 2.44; 0.42, 2.44; 0.43, 2.45;
0.43, 2.45; 0.44, 2.45; 0.45, 2.45; 0.45, 2.45; 0.46, 2.45;
0.46, 2.45; 0.47, 2.45; 0.47, 2.45; 0.48, 2.45; 0.48, 2.46;
0.49, 2.46; 0.49, 2.46; 0.50, 2.46; 0.50, 2.46; 0.51, 2.46;
0.51, 2.46; 0.52, 2.46; 0.52, 2.46; 0.53, 2.46; 0.53, 2.47;
0.54, 2.47; 0.54, 2.47; 0.54, 2.47; 0.55, 2.47; 0.55, 2.47;
0.56, 2.47; 0.56, 2.47; 0.57, 2.47; 0.57, 2.47; 0.57, 2.48;
0.58, 2.48; 0.58, 2.48; 0.59, 2.48; 0.59, 2.48; 0.59, 2.48;
0.60, 2.48; 0.60, 2.48; 0.61, 2.48; 0.61, 2.48; 0.61, 2.49;
0.62, 2.49; 0.62, 2.49; 0.62, 2.49; 0.63, 2.49; 0.63, 2.49;
0.63, 2.49; 0.64, 2.49; 0.64, 2.49; 0.64, 2.49; 0.64, 2.50];
% Details of model to fit
dist = 't';
n1 = int64(1);
ip = n1;
iq = n1;
isym = n1;
mn = n1;
nreg = 2*n1;
% Control parameters
copts = [true; true];
maxit = int64(200);
tol = 0.00001;
% Initial values
gammaval = -0.1;
theta = [0.05; 0.1; 0.15; gammaval; 2.6; 1.5; 0; 0];
% Forecast horizon
nt = 4*n1;
% Fit the GARCH model
[theta, se, sc, covar, hp, et, ht, lgf, ifail] = ...
g13fa( ...
dist, yt, x, ip, iq, mn, isym, theta, 0, copts, maxit, tol);
% Calculate the volatility forecast
[fht, ifail] = g13fb( ...
nt, ip, iq, theta, gammaval, ht, et);
% Output the results
fprintf('\n Parameter Standard\n');
fprintf(' estimates errors\n');
% Output the coefficient alpha_0
fprintf('Alpha0 %16.2f%16.2f\n', theta(1), se(1));
l = 2;
% Output the coefficients alpha_i
for i = l:l+iq-1
fprintf('Alpha%d %16.2f%16.2f\n', i-1, theta(i), se(i));
end
l = l+iq;
% Output the coefficients beta_j
fprintf('\n');
for i = l:l+ip-1
fprintf(' Beta%d %16.2f%16.2f\n', i-l+1, theta(i), se(i));
end
l = l+ip;
% Output the estimated asymmetry parameter, gamma
if (isym == 1)
fprintf('\n Gamma %16.2f%16.2f\n', theta(l), se(l));
l = l+1;
end
% Output the estimated degrees of freedom, df
if (dist == 't')
fprintf('\n DF %16.2f%16.2f\n', theta(l), se(l));
l = l + 1;
end
% Output the estimated mean term, b_0
if (mn == 1)
fprintf('\n B0 %16.2f%16.2f\n', theta(l), se(l));
l = l + 1;
end
% Output the estimated linear regression coefficients, b_i
for i = l:l+nreg-1
fprintf(' B%d %16.2f%16.2f\n', i-l+1, theta(i), se(i));
end
% Display the volatility forecast
fprintf('\nVolatility forecast = %12.4f\n', fht(nt));
```
```g13fa example results
Parameter Standard
estimates errors
Alpha0 0.00 0.06
Alpha1 0.11 0.13
Beta1 0.66 0.23
Gamma -0.62 0.62
DF 6.25 4.70
B0 3.85 24.11
B1 1.48 1.82
B2 2.15 10.16
Volatility forecast = 0.0626
```
|
# Calling all Kellogg Fall 2010 Applicants!
VP
Joined: 09 Dec 2008
Posts: 1221
Schools: Kellogg Class of 2011
Followers: 21
Kudos [?]: 242 [0], given: 17
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
18 Jun 2009, 15:18
ACNguy wrote:
Is anyone else having a terrible time trying to fit Essay #1 into 600 words? I feel as though they're trying to get two traditional essays into one with both a rundown of your work experience AND future goals AND why an MBA AND why Kellogg.
What breakdown are you guys/gals going with in terms of percentage of each topic. It says "briefly" describe your work experience, and then the other stuff. It seems silly to only devote like a paragraph to skim your work experience but it sounds like that's what they're looking for...am I crazy?
I don't think you're crazy. Even though last year's limit was based on pages rather than word count, my essay ended up being about 600 words. The breakdown was roughly 140 words on career progress, 200 on goals and 200 on why MBA/why Kellogg (with 60 for a conclusion paragraph to tie it all together nicely). They're saying to be brief, so be brief. You also get the opportunity to highlight your work experience on your resume (although again that's brief) and you'll get to talk about it during the interview.
_________________
SVP
Joined: 28 Dec 2005
Posts: 1575
Followers: 3
Kudos [?]: 118 [0], given: 2
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
18 Jun 2009, 15:37
Im dying with the 600 word limit. How do you take 5 years of work experience, with significant accomplishments, and squeeze it in 140 words ?
I guess I need to really nail down what 'assess' means. Im afraid that if I skim over my roles, then I lose out on the opportunity to really sell what Ive done to the adcom.
SVP
Status: Burning mid-night oil....daily
Joined: 07 Nov 2008
Posts: 2400
Schools: Yale SOM 2011 Alum, Kellogg, Booth, Tuck
WE 1: IB - Restructuring & Distressed M&A
Followers: 78
Kudos [?]: 724 [0], given: 548
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
18 Jun 2009, 16:32
pmenon wrote:
Im dying with the 600 word limit. How do you take 5 years of work experience, with significant accomplishments, and squeeze it in 140 words ?
I guess I need to really nail down what 'assess' means. Im afraid that if I skim over my roles, then I lose out on the opportunity to really sell what Ive done to the adcom.
They can see what's on your resume. Repeating what's on your resume is not the purpose of your essay. Instead, give them an overview of what you've done, how you've progressed and other details that they can use as supplement when they take a look at your resume and other parts of your application package.
Try not to repeat what's already said in one part of your entire application package. Think of every part of your application package as 30 second Super Bowl commercial. You only have 30 seconds. Why show something that you have shown earlier? Instead, show them your complete profile as a person and as an ideal applicant by maximizing every part of your app package.
_________________
Director
Joined: 26 Mar 2008
Posts: 652
Schools: Duke 2012
Followers: 15
Kudos [?]: 124 [0], given: 16
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
18 Jun 2009, 16:57
ACNguy wrote:
Is anyone else having a terrible time trying to fit Essay #1 into 600 words? I feel as though they're trying to get two traditional essays into one with both a rundown of your work experience AND future goals AND why an MBA AND why Kellogg.
What breakdown are you guys/gals going with in terms of percentage of each topic. It says "briefly" describe your work experience, and then the other stuff. It seems silly to only devote like a paragraph to skim your work experience but it sounds like that's what they're looking for...am I crazy?
Yes, these word limits are not agreeing with me. I had a draft of my careers essay, which was based on page numbers. I used a lot of it in my Ross essay too, which was 500 words I think, so I can trim it, but don't look forward to it. I drafted essay three this week too, which currently sits at 900 words. Hopefully as I cut down the words I can make is suck less as well .
One tip - use active voice. I am noticing a lot of passive voice that is helping me cut relatively easy.
_________________
"Egotism is the anesthetic that dulls the pain of stupidity." - Frank Leahy
GMAT Club Premium Membership - big benefits and savings
Senior Manager
Affiliations: ACA, CPA
Joined: 26 Apr 2009
Posts: 441
Location: Vagabond
Schools: BC
WE 1: Big4, Audit
WE 2: Banking
Followers: 5
Kudos [?]: 62 [0], given: 41
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
18 Jun 2009, 18:20
I m no way an expert in essays..but given the word limit, i would say the workex related stuff shd take min space...as the adcom already has ur CV!
ACNguy wrote:
Is anyone else having a terrible time trying to fit Essay #1 into 600 words? I feel as though they're trying to get two traditional essays into one with both a rundown of your work experience AND future goals AND why an MBA AND why Kellogg.
What breakdown are you guys/gals going with in terms of percentage of each topic. It says "briefly" describe your work experience, and then the other stuff. It seems silly to only devote like a paragraph to skim your work experience but it sounds like that's what they're looking for...am I crazy?
_________________
If you have made mistakes, there is always another chance for you. You may have a fresh start any moment you choose, for this thing we call "failure" is not the falling down, but the staying down.
Director
Joined: 26 Mar 2008
Posts: 652
Schools: Duke 2012
Followers: 15
Kudos [?]: 124 [0], given: 16
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
18 Jun 2009, 18:48
I cut down my goals essay and I think I'm at about the same proportions as Jerz had earlier. I'm currently at 100-175-150, plus intro/conclusion.
I cut more out than I had to, so I'm planning to add back some of my why Kellogg info, so I think I'll end up around 100-175-200, with my intro having some overlap with my future goals.
_________________
"Egotism is the anesthetic that dulls the pain of stupidity." - Frank Leahy
GMAT Club Premium Membership - big benefits and savings
VP
Joined: 09 Dec 2008
Posts: 1221
Schools: Kellogg Class of 2011
Followers: 21
Kudos [?]: 242 [0], given: 17
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
18 Jun 2009, 18:51
pmenon wrote:
Im dying with the 600 word limit. How do you take 5 years of work experience, with significant accomplishments, and squeeze it in 140 words ?
The same way I captured 7 years of work experience in 140 words: be concise. And as nink suggested, think big picture. You don't have to fit every accomplishment in your career into this essay. You also have a CV, 3 other essays and a 45 minute interview to talk about your accomplishments.
pmenon wrote:
I guess I need to really nail down what 'assess' means. Im afraid that if I skim over my roles, then I lose out on the opportunity to really sell what Ive done to the adcom.
I wouldn't try to talk too much about accomplishments in this essay. Instead I'd focus on summarizing what you've learned or skills you've gained from your experience and how that's prepared you for MBA and your career goals. Remember, you're assessing your career progress (in the context of your goals and why you need an MBA), not summarizing or explaining your career progression.
_________________
Current Student
Joined: 02 Jun 2009
Posts: 332
Schools: Wharton Class of 2012 w/ fellowship
Followers: 10
Kudos [?]: 56 [0], given: 12
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
21 Jun 2009, 14:31
wow, this is 5 pages already? count me in for kellogg RD 2
_________________
My story of an average chick who stumbled into the 700+ club
My 2009-2010 Application Decisions
Senior Manager
Joined: 15 Jan 2008
Posts: 347
Location: Evanston, IL
Followers: 2
Kudos [?]: 40 [3] , given: 1
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
21 Jun 2009, 18:07
Jerz wrote:
pmenon wrote:
Im dying with the 600 word limit. How do you take 5 years of work experience, with significant accomplishments, and squeeze it in 140 words ?
The same way I captured 7 years of work experience in 140 words: be concise. And as nink suggested, think big picture. You don't have to fit every accomplishment in your career into this essay. You also have a CV, 3 other essays and a 45 minute interview to talk about your accomplishments.
pmenon wrote:
I guess I need to really nail down what 'assess' means. Im afraid that if I skim over my roles, then I lose out on the opportunity to really sell what Ive done to the adcom.
I wouldn't try to talk too much about accomplishments in this essay. Instead I'd focus on summarizing what you've learned or skills you've gained from your experience and how that's prepared you for MBA and your career goals. Remember, you're assessing your career progress (in the context of your goals and why you need an MBA), not summarizing or explaining your career progression.
I think you should step back a bit before talking about word limits and how to fit the template set out by the fine predecessors on this board. While I agree with the advice given in general, the proportion with which you dedicate to a particular part of this essay really needs to be considered with respect to your goals for this essay within the context of the rest of the essays and vis a vis how you want to position yourself with Kellogg.
Particularly, do you see your work experience, your vision, or your need to convince Kellogg that you're a good fit with the school to be your primary goal of the essay? Which piece of this puzzle won't you be able to address elsewhere?
I for one, as younger applicant at the time, used this essay to delve into the quality of my work experience and talked for over half the essay about it. Obviously, this is not as important a bar to convince with the more work experience you have (or the level of detail might be different).
My point is craft this balance based on the entirety of your application (the same way a reader will read your application), not on this essay in its own. Answer the question in its entirety - but focus based on the totality of your app.
Senior Manager
Affiliations: ACA, CPA
Joined: 26 Apr 2009
Posts: 441
Location: Vagabond
Schools: BC
WE 1: Big4, Audit
WE 2: Banking
Followers: 5
Kudos [?]: 62 [0], given: 41
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
21 Jun 2009, 18:10
Great post steel! +1
Steel wrote:
Jerz wrote:
pmenon wrote:
Im dying with the 600 word limit. How do you take 5 years of work experience, with significant accomplishments, and squeeze it in 140 words ?
The same way I captured 7 years of work experience in 140 words: be concise. And as nink suggested, think big picture. You don't have to fit every accomplishment in your career into this essay. You also have a CV, 3 other essays and a 45 minute interview to talk about your accomplishments.
pmenon wrote:
I guess I need to really nail down what 'assess' means. Im afraid that if I skim over my roles, then I lose out on the opportunity to really sell what Ive done to the adcom.
I wouldn't try to talk too much about accomplishments in this essay. Instead I'd focus on summarizing what you've learned or skills you've gained from your experience and how that's prepared you for MBA and your career goals. Remember, you're assessing your career progress (in the context of your goals and why you need an MBA), not summarizing or explaining your career progression.
I think you should step back a bit before talking about word limits and how to fit the template set out by the fine predecessors on this board. While I agree with the advice given in general, the proportion with which you dedicate to a particular part of this essay really needs to be considered with respect to your goals for this essay within the context of the rest of the essays and vis a vis how you want to position yourself with Kellogg.
Particularly, do you see your work experience, your vision, or your need to convince Kellogg that you're a good fit with the school to be your primary goal of the essay? Which piece of this puzzle won't you be able to address elsewhere?
I for one, as younger applicant at the time, used this essay to delve into the quality of my work experience and talked for over half the essay about it. Obviously, this is not as important a bar to convince with the more work experience you have (or the level of detail might be different).
My point is craft this balance based on the entirety of your application (the same way a reader will read your application), not on this essay in its own. Answer the question in its entirety - but focus based on the totality of your app.
_________________
If you have made mistakes, there is always another chance for you. You may have a fresh start any moment you choose, for this thing we call "failure" is not the falling down, but the staying down.
Current Student
Joined: 02 Jun 2009
Posts: 332
Schools: Wharton Class of 2012 w/ fellowship
Followers: 10
Kudos [?]: 56 [0], given: 12
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
22 Jun 2009, 13:30
For Kellogg, how much "fun" are we allowed to have with the essays, specifically regarding essay 4b or 4c? I find that these are the perfect prompts for me to really differentiate myself by talking about some of my quirky passions (and the outrageous 'fantasies' attached to them), but I don't know if that is appropriate. Should I convey my personality and quirks in a more serious and somber manner? I feel like we're asked to really show who we are in bschool essays and these prompts have "exhibition" written all over them.
Essay #4 - Complete one of the following three questions or statements. (400 word limit)
Re-applicants have the option to answer a question from this grouping, but this is not required.
a) Describe a time when you had to make an unpopular decision.
b) People may be surprised to learn that I….
_________________
My story of an average chick who stumbled into the 700+ club
My 2009-2010 Application Decisions
Senior Manager
Joined: 15 Jan 2008
Posts: 347
Location: Evanston, IL
Followers: 2
Kudos [?]: 40 [0], given: 1
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
22 Jun 2009, 13:39
2012dreams wrote:
For Kellogg, how much "fun" are we allowed to have with the essays, specifically regarding essay 4b or 4c? I find that these are the perfect prompts for me to really differentiate myself by talking about some of my quirky passions (and the outrageous 'fantasies' attached to them), but I don't know if that is appropriate. Should I convey my personality and quirks in a more serious and somber manner? I feel like we're asked to really show who we are in bschool essays and these prompts have "exhibition" written all over them.
Essay #4 - Complete one of the following three questions or statements. (400 word limit)
Re-applicants have the option to answer a question from this grouping, but this is not required.
a) Describe a time when you had to make an unpopular decision.
b) People may be surprised to learn that I….
Depends on the quirky passion People definitely write some out there essays successfully (any current/admitted student knows this by the Beth Flye "One of you" speech). But that being said - it needs to have purpose within the context of your application. If you feel like you might need to convince Kellogg on fit, then showing a passion in something that you can relate to Kellogg could be perfect.
Basically - it needs to fit within the context of your application and within the context of applying to Kellogg (not just an interesting fact that might catch an eye - that can go on a resume). It needs to show something about the way you think, act, are dedicated, show leadership, work in teams, etc. that would bolster your app.
Current Student
Joined: 10 Dec 2008
Posts: 217
Schools: Kellogg Class of 2011
Followers: 1
Kudos [?]: 15 [0], given: 18
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
22 Jun 2009, 13:43
2012dreams wrote:
For Kellogg, how much "fun" are we allowed to have with the essays, specifically regarding essay 4b or 4c? I find that these are the perfect prompts for me to really differentiate myself by talking about some of my quirky passions (and the outrageous 'fantasies' attached to them), but I don't know if that is appropriate. Should I convey my personality and quirks in a more serious and somber manner? I feel like we're asked to really show who we are in bschool essays and these prompts have "exhibition" written all over them.
Essay #4 - Complete one of the following three questions or statements. (400 word limit)
Re-applicants have the option to answer a question from this grouping, but this is not required.
a) Describe a time when you had to make an unpopular decision.
b) People may be surprised to learn that I….
I had a lot of fun with (c) - tried to show personality/creativity. Still, I was careful to tie everything back to how I would add value to Kellogg.
SVP
Joined: 04 Dec 2007
Posts: 1689
Schools: Kellogg '11
Followers: 14
Kudos [?]: 194 [0], given: 31
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
22 Jun 2009, 16:21
2012dreams wrote:
Essay #4 - Complete one of the following three questions or statements. (400 word limit)
Re-applicants have the option to answer a question from this grouping, but this is not required.
a) Describe a time when you had to make an unpopular decision.
b) People may be surprised to learn that I….
For 4c, I actually turned it into a failure essay. A couple of reasons:
- it allowed me to speak more about my ECs
- it was a chance to have the readers connect a bit more personally with my app
- I was able to reuse some of the material from Booth/Tuck apps
SVP
Joined: 28 Dec 2005
Posts: 1575
Followers: 3
Kudos [?]: 118 [0], given: 2
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
### Show Tags
22 Jun 2009, 17:36
Any tips for the leadership essay (question 2) ? Specifically, what key messages should we be trying to get across, and what are some examples of 'leadership areas' ?
Current Student
Joined: 05 Jan 2009
Posts: 84
Location: Singapore
Schools: Cornell AMBA(), Kellogg MBA(w/d), Ross(w/d), Ivey MBA(w/d), Haas
Followers: 1
Kudos [?]: 4 [0], given: 2
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
22 Jun 2009, 23:27
Wow, hadnt noticed that the Kellogg essays were now word limits instead of page limits. That does agree with me more, since i cant seem to find myself filling whole 2 pages without boring the reader.
Regarding the question
Essay #3 – Assume you are evaluating your application from the perspective of a student member of the Kellogg Admissions Committee. Why would your peers select you to become a member of the Kellogg community? (600 word limit)
As an international applicant, i am not in a position to visit Kellogg and experience the student run culture first hand. Any tips as to how you would approach this question?
_________________
VP
Joined: 09 Dec 2008
Posts: 1221
Schools: Kellogg Class of 2011
Followers: 21
Kudos [?]: 242 [0], given: 17
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
23 Jun 2009, 03:34
bpr81 wrote:
Wow, hadnt noticed that the Kellogg essays were now word limits instead of page limits. That does agree with me more, since i cant seem to find myself filling whole 2 pages without boring the reader.
Regarding the question
Essay #3 – Assume you are evaluating your application from the perspective of a student member of the Kellogg Admissions Committee. Why would your peers select you to become a member of the Kellogg community? (600 word limit)
As an international applicant, i am not in a position to visit Kellogg and experience the student run culture first hand. Any tips as to how you would approach this question?
Does Kellogg do any MBA fairs or presentations in your city or a nearby city? They often will have alumni there so you can have a chance to talk to the alumni about questions you have about the culture. Or you can try finding local alumni through your own personal network, or contacting current students through a club that you're interested in. Kellogg provides contact info for club officers on the website.
Overall though, I think for this essay you need to think more about what you bring to the table that's unique and exciting. The adcom and student readers will be very familiar with Kellogg's culture and don't need to read an essay explaining it to them. They will want to know how admitting you will benefit your classmates and the school. So while it's important to tie it back to Kellogg (e.g. "my work experience as CEO of Goldman Sachs will allow me to provide interesting perspective during the Intro to Banking course", or "summiting Mt. Everest has prepared me for leading KWEST Everest next year"), this essay should be more about you than about Kellogg.
_________________
Director
Joined: 26 Mar 2008
Posts: 652
Schools: Duke 2012
Followers: 15
Kudos [?]: 124 [0], given: 16
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
23 Jun 2009, 04:44
bpr81 wrote:
Wow, hadnt noticed that the Kellogg essays were now word limits instead of page limits. That does agree with me more, since i cant seem to find myself filling whole 2 pages without boring the reader.
Regarding the question
Essay #3 – Assume you are evaluating your application from the perspective of a student member of the Kellogg Admissions Committee. Why would your peers select you to become a member of the Kellogg community? (600 word limit)
As an international applicant, i am not in a position to visit Kellogg and experience the student run culture first hand. Any tips as to how you would approach this question?
One good thing about Kellogg is their website is super comprehensive, so you should be able to gleam a lot. Contact students in things you are interested as others have said.
I also found some of the student diaries helpful to see how students at Kellogg gave examples of their culture. I don't usually find these too helpful, but the girl listed there has a few good posts I thought. I just read the topic list for each poster and read the things I thought would help me learn more about K, I didn't read everything.
_________________
"Egotism is the anesthetic that dulls the pain of stupidity." - Frank Leahy
GMAT Club Premium Membership - big benefits and savings
Director
Joined: 26 Mar 2008
Posts: 652
Schools: Duke 2012
Followers: 15
Kudos [?]: 124 [0], given: 16
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
23 Jun 2009, 05:05
pmenon wrote:
Any tips for the leadership essay (question 2) ? Specifically, what key messages should we be trying to get across, and what are some examples of 'leadership areas' ?
I don't know if I have any tips, but I can tell you my approach. I plan to use two professional examples - one from a difficult customer situation and another from leading an internal team. The first was a situation that wasn't quite a failure, but something I was maybe a bit naive about - I'm planning to show what I learned and how I was able to handle a similar situation in the future. The second is a softer more teamwork-esque story. The second part of the question, I had a pretty good idea of things I could use. I looked through the Management concentration website for classes that could help me address those areas. I already mentioned club leadership in another essay I've drafted, but I could see you using that in your essay (maybe you sign up to be VP of finance for a club if you're lacking in that area, for example).
Here's the class list from MORS - maybe it will trigger something.
http://www1.kellogg.northwestern.edu/dpco/catinfo.asp?dept_seqno=6&level=MBA
Obviously I'm in the same situation as you, but hope it helps get you started.
_________________
"Egotism is the anesthetic that dulls the pain of stupidity." - Frank Leahy
GMAT Club Premium Membership - big benefits and savings
SVP
Joined: 28 Dec 2005
Posts: 1575
Followers: 3
Kudos [?]: 118 [0], given: 2
Re: Calling all Kellogg Fall 2010 Applicants! [#permalink]
23 Jun 2009, 06:21
Thanks, highopes.
Ive got two professional experiences as well. I suppose what I wasnt exactly clear about was what qualifies as a 'leadership area'. For example, can I say that I want to improve my negotiation skills, or that I want to increase my effectiveness in motivating people ?
|
Dealing with complex operation in Quantum mechanics
1. Mar 2, 2014
M. next
In proving that for the norm to be preserved, U must be unitary. I ran across this:
Re(λ<ø|ψ>)=Re(λ<Uø|Uψ>)
if λ=i, it says that then it follows that Im(<ø|ψ>)=Im(<Uø|Uψ>), how's this? I know that Re(iz)=-Im(z) in complex, but here the inner product <ø|ψ> is not z, or is it?
If you could point out how this took place, I would be thankful!
2. Mar 2, 2014
dauto
Yes, <ø|ψ> is a complex number, if that's what you're asking - It's not entirely clear to me.
3. Mar 2, 2014
M. next
Yes, thanks. I noticed that after I posted this!!
|
# How do you find the period and amplitude of y=-2sinx?
For the $\sin x$ function the period is $2 \pi$ and the amplitude is $1$, because it takes values from $- 1$ to $1$.
Multiplying the function by $-2$ does not change the period; however, the amplitude changes to $|-2| = 2$.
|
# SeExpr Language Reference#
This section provides an in-depth view of the SeExpr programming language, from its syntax to a reference guide of its built-in functions. It is largely inspired by the SeExpr User Documentation page (SeExpr: User Documentation).
It may look quite theoretical, but it is essential for mastering the language and debugging expressions.
## Language Structure#
The SeExpr language is largely inspired by C and, as in many programming languages, a SeExpr program consists of a list of statements that are executed sequentially and can be separated by empty lines.
Each statement is followed by a semicolon ; to mark its end. One of the main differences from C or Python is that a SeExpr program is meant to return a value.
Unlike C, SeExpr does not have any return statement, so it uses a syntax trick to indicate which statement defines the return value: the last statement of a program defines the return value of the expression when it is not followed by a semicolon.
Thereby, a SeExpr program is structured like this:
statement1;
statement2;
statement3;
...
statementN;
return_statement
Danger
Adding other statements after a return statement will result in a "failed to compile" error.
Warning
When ending the last statement with a semicolon, the expression does not return any value.
Like in Python, you can add comments to your program using the # character. Any characters from # to the end of the line are ignored when compiling the program.
SeExpr does not have a syntax for multi-line comments, so if you wish to comment several lines, each line should start with #.
statement1; # commenting statement 1
#statement2; line is ignored
statement3;
# since there is no way
# to comment a block
# of code, you must instead
# comment each line
statement4;
...
statementN;
return_statement
### Statements and Expressions#
In this section, expression means a sequence of operators and their operands that specifies a computation; it has nothing to do with the Clarisse expression system.
Expression evaluation may produce a result (ex: evaluation of 2+2 produces the result 4), may generate side-effects (ex: evaluation of printf("Hello") sends the characters 'H', 'e', 'l', 'l', 'o' to the log system), and may designate objects or functions.
Most statements in a typical SeExpr program are expression statements, such as variable assignments or function calls.
Except for the return value statement, any expression followed by a semicolon is a statement.
Identifier[index] = value; # Unsupported by the language!
variable = expression; # Assignment expression statement
function(arg1, … , argN); # Function call expression statement
Each SeExpr program must end with a return value statement. Any expression that returns a value can be used as return statement. In this case, the expression is not followed by a semicolon.
Primary expressions, such as constant values (number, vector or character string) and declared identifiers (variable names) can be used as return value statements.
x = 3.14; # Declared identifier used as return value statement
x
x = 0.14; # Operator expression used as return value statement
y = 3;
x + y
3.14 # Constant number used as return value statement
[0, 1, 2] # Constant vector used as return value statement
"Hello world" # Constant string used as return value statement
snoise(T) # Function call used as return value statement
Because a variable assignment does not return a value, an assignment expression cannot be used as the return value statement. The same goes for function calls that do not return a value, like printf, for instance.
SeExpr expressions are formed using either the operator syntax or the function call syntax. Both syntaxes use operands as input parameters, and the operands can be expressions themselves (in that case, the expressions must return a value):
operator op1 # Expression using a unary operator syntax
# Ex: !x
op1 operator op2 # Expression using a binary operator syntax
# Ex: x + y
op1 operator op2 operator op3 # Expression using a ternary operator syntax
# Ex: z ? x : y
function( op1 , op2 , … , opN ) # Expression using a function call syntax
# Ex: snoise(T)
# Ex: get_name()
# Ex: dot(u, v)
When using expressions as operands, it is recommended to put parentheses () around the expression to isolate the operand and make sure expressions are evaluated in the desired order:
op1 operator ( op2 operator op3 ) # Ex: x + (y * z)
op1 operator function( … ) # Ex: x + snoise(T)
function( op1 , op2 operator op3 , … ) # Ex: dot( [0,1,2], u + v )
Note
Each function defines its own number of operands (commonly referred to as arguments) and some functions take no argument as input parameter.
Finally, SeExpr defines another kind of statement, the conditional statement, which is used to choose between one of several lists of statements depending on the value of an expression (a concrete example follows the template below):
if ( expression ) { # Statements enclosed within the braces following the if
statement # keyword are executed only if expression is true.
statement
# more code
statement
} else { # If expression is false, only the statements enclosed within
statement # the braces after the else keyword are executed.
statement
# more code
statement
}
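As a concrete illustration (a minimal sketch with made-up variable names; snoise and T are the same built-ins used in the earlier examples on this page):
t = snoise(T);     # value used in the test expression
if (t > 0.5) {
    c = [1, 0, 0]; # executed only when t > 0.5
} else {
    c = [0, 0, 1]; # executed otherwise
}
c                  # last statement, no semicolon: the return value of the expression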
## Types and variables#
SeExpr supports 2 basic value types:
| Type | Description |
|---|---|
| FP | Floating-point number either in decimal or integer form. Ex: 3.14 or 7 |
| STRING | String of characters. Ex: "Hello world" |
In addition, numbers can be packed into vectors (up to 16 numbers) to encode complex objects like 3D positions, 3D vectors, colors, quaternions and 4x4 matrices.
Since SeExpr v3, a standardized notation has been introduced to the language, FP[n], to describe vector types of dimension n.
Types are never written in a SeExpr program. They are used here to document language features, but variable declarations do not require explicit typing.
Instead, the type of a variable is implicitly deduced from the initialization expression, which is always required when declaring a new variable. Therefore, the assignment operator is always used to declare a new variable:
variable_name = expression; # Declares a new variable variable_name
# and initialize its value with the result value of expression.
# Ex: x = 3.14;
# Ex: z = x + y;
# Ex: x = snoise(T);
Variable names follow C-like identifier rules: an arbitrarily long sequence of digits, underscores, and lowercase or uppercase Latin letters.
A valid identifier must begin with a non-digit character, and identifiers are case-sensitive.
A variable name can be any valid identifier as long as it does not conflict with the reserved names used as keywords of the language.
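For instance (hypothetical names, following the rules above):
my_var2 = 1;       # valid: letters, digits and underscores
_offset = 0.5;     # valid: an identifier may start with an underscore
MY_VAR2 = 2;       # valid, and distinct from my_var2 because identifiers are case-sensitive
# 2nd_value = 3;   # invalid: an identifier cannot start with a digit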
### Numbers and Vectors#
Vectors (points, colors, or 3D vectors) are collections of N floating-point numbers (N being the dimension of the vector, a 3D vector is made of 3 numbers called components).
SeExpr allows you to directly create and manipulate vectors.
#### Examples#
[1, 2, 3] # Returns the 3D vector (color, position, …) whose components are
# 1 in the 1st axis, 2 in the 2nd axis and 3 in the 3rd axis.
v = [1, 0, 0]; # Assigns the 3D vector [1,0,0] to variable v
x = 1;
y = 2;
v = [x, y, 3]; # Floating-point variables can be used when assigning a vector
To use the value from a vector component, you must use the [ ] operator right after the vector identifier:
identifier[index] # Reads the index-th component of identifier
Warning, unlike C, SeExpr does not support component assignment using the [ ] operator:
Identifier[index] = value; # Unsupported by the language!
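If a component needs to be changed, a new vector has to be built from the existing components instead (a small workaround sketch, not taken from the original page):
v = [1, 2, 3];
v = [9, v[1], v[2]];   # "replaces" component 0 by rebuilding the whole vector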
Vectors may be intermixed with scalars (simple floating point numbers). If a scalar is used in a vector context, it is replicated into the three components (ex: 0.5 becomes [0.5, 0.5, 0.5]):
dot(1, [2,3,4]) # Same as dot([1,1,1], [2,3,4]) and returns 9
If a vector is used in a scalar context, only the first component is used. One of the benefits of this is that all the functions defined to work with scalars are automatically extended to vectors:
v = [1,2,3];
w = [0,1,2];
v[w] # Vector w is used in a scalar context so only w[0] is used.
# Same as v[w[0]].
### Strings#
SeExpr supports character strings natively. Strings are defined, as in C, using double-quote characters to enclose the string's characters.
str = "hello World"; # Assigns the string Hello World to variable str
str = ""; # Assigns an empty string to variable str
Unlike C, strings do not support the [ ] operator. Strings can be concatenated using the + operator:
h = "Hello";
w = "World";
h + " " + w # Returns the string "Hello World"
### More Examples#
my_var = 3; # Declares the variable my_var as a
# floating-point number and assigns
# the number 3 to it.
_my_var = 3.14; # Assigns the value 3.14 to _my_var
my_var = "Hello world"; # Assigns "Hello World" to my_var
my_var = x + 3.14; # Adds 3.14 to the value of the variable x and
# assigns the result to my_var
my_var = snoise(T); # Assigns the result of snoise(T)
## Alias Types and Constants#
To ease the readability of the constant and function descriptions, we introduce alias types that simply rename the basic types.
For instance, instead of writing FP[3] for a 3-component floating-point vector representing a color, we use the alias type COLOR.
| Alias Types | Description |
|---|---|
| INT | Defined as FP, represents a number in integer format. |
| COLOR | Defined as FP[3], represents a color either in RGB or HSL format. |
| VECTOR | Defined as FP[3], represents a 3D vector of components X, Y and Z. |
SeExpr defines some constants for math expressions (like e and π) as well as for controlling the behavior of built-in functions:
| Math Constants | Description |
|---|---|
| FP E = 2.71828... | Exponential constant |
| FP PI = 3.14159... | Trigonometric constant |

| Interpolation Constants | Description |
|---|---|
| INT linear = 0 | Linear falloff shape used to control remap and midhsi functions |
| INT smooth = 1 | Smooth falloff shape used to control remap and midhsi functions |
| INT gaussian = 2 | Gaussian falloff shape used to control remap and midhsi functions |
## Keywords Reference#
This section covers the reserved keywords used by the language, other than operators and function names. As you can see, SeExpr reserves very few keywords:
| Keyword | Syntax | Usage |
|---|---|---|
| if | if ( expr ) { … } | Conditional statement that executes the following brace-enclosed sequence of statements { … } when the expression expr is true. |
| else | if ( expr ) { … } else { … } | Optional conditional statement, always used after an if statement, that executes the following brace-enclosed sequence of statements { … } when the expression expr is false. |
| local |  | Reserved, do not use |
| global |  | Reserved, do not use |
| def |  | Reserved, do not use |
|
# Catalogue Entry: NATP00022
## Mr. Isaac Newton's Considerations on the former Reply [of Francis Linus]
Source: Philosophical Transactions of the Royal Society, No. 121 (24 January 1675/6), pp. 501-502.
|
Network topology describes how the elements of a network are mapped to one another. A topology may be described physically (the actual layout of cables and devices) or logically (how data flows), and the physical and logical topologies of the same network need not be identical. The common types are:

• Point-to-point: the simplest topology, a dedicated link between exactly two endpoints (sometimes abbreviated P2P, or described as host to host); a child's tin-can telephone is the classic example. The link can be an actual length of wire or cable, or a satellite or microwave link, and it appears to the user as permanently associated with the two endpoints. Each point-to-point circuit requires its own dedicated hardware interface, so connecting many sites this way calls for multiple routers with multiple WAN interface cards. In the point-to-multipoint (P2M, or hub-and-spoke) variant, a single hub interface serves several systems, addressing one or more of them over virtual circuits.

• Bus: every device attaches to a single main cable that acts as the spine of the network, and both ends of the shared channel must have line terminators. It is inexpensive and easy to install, but collisions occur when several hosts transmit at once (traditional shared Ethernet behaves this way), and a cable break or missing terminator can bring the whole segment down.

• Star: all hosts are connected to a central device (hub or switch) by point-to-point links; signals from the sending computer pass through the central device to the other computers. Adding one more host requires only one more cable and faults are easy to isolate, but the central device is a single point of failure. A wireless access point plays the role of the hub in a wireless star.

• Ring: every device has exactly two neighbours, forming a closed loop in which messages travel in one direction. Adding or removing a device touches only two connections, but a single break can stop the ring, which is why a dual-ring layout keeps a backup ring that takes over if the primary ring fails.

• Mesh: hosts are connected point-to-point to other hosts. In a full mesh every host connects to every other, which gives dedicated links (no traffic contention) and high redundancy, since data has alternative paths if one link fails; the cost is that n hosts require n(n-1)/2 links, making cabling and interfaces expensive. In a partial mesh only some hosts have direct links, and intermediate hosts relay traffic for the rest.

• Tree (hierarchical): a central "root" node is connected by point-to-point links to nodes one level lower, which in turn connect to lower levels, dividing the network into multiple layers. It behaves like an extended star and inherits properties of the bus topology; as with the bus, if the root goes down the levels beneath it are cut off, even though the root is not a shared medium.

• Daisy chain: computers are connected in series, each to the next, so every link is a single point of failure for the hosts beyond it.

• Hybrid: any combination of two or more basic topologies (star, ring, bus, daisy chain, ...) in a single network.

In fibre-to-the-home deployments, one dedicated fibre per subscriber is terminated at the subscriber's premises and at active equipment in the provider's central office (CO). The decentralized network concept introduced by Paul Baran in the 1960s has evolved into these modern layouts, and gateways placed at branching points can restrict data flow so that specific groups of nodes communicate privately.

The word "topology" is also used in mathematics. Given a set $X$, a family of subsets $\tau$ of $X$ is said to be a topology on $X$ if three conditions hold: $X$ and $\varnothing$ are both elements of $\tau$; any union of elements of $\tau$ is an element of $\tau$; and any finite intersection of elements of $\tau$ is an element of $\tau$. General (point-set) topology developed out of geometry and set theory through the analysis of concepts such as space, dimension and transformation. Relatedly, a critical point (also known as a stationary point) of a vector field $v$ is a location where $v = 0$.
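As a small worked instance of the full-mesh link count above (my own example):
$$\text{links} = \frac{n(n-1)}{2}, \qquad n = 10 \;\Rightarrow\; \frac{10 \cdot 9}{2} = 45 \text{ dedicated links, with } n - 1 = 9 \text{ interfaces per host}.$$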
|
# Surveying 4d SCFTs twisted on Riemann surfaces
Journal of High Energy Physics, Jun 2017
Within the framework of four dimensional conformal supergravity we consider $\mathcal{N}=1,\;2,\;3,\;4$ supersymmetric theories generally twisted along the abelian subgroups of the R-symmetry and possibly other global symmetry groups. Upon compactification on constant curvature Riemann surfaces with arbitrary genus we provide an extensive classification of the resulting two dimensional theories according to the amount of supersymmetry that is preserved. Exploiting the c-extremization prescription introduced in arXiv:1211.4030 we develop a general procedure to obtain the central charge for 2d $\mathcal{N}=\left(0,2\right)$ theories and the expression of the corresponding R-current in terms of the original 4d one and its mixing with the other abelian global currents.
This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2FJHEP06%282017%29056.pdf
Antonio Amariti, Luca Cassia, Silvia Penati. Surveying 4d SCFTs twisted on Riemann surfaces, Journal of High Energy Physics, 2017, 1-28, DOI: 10.1007/JHEP06(2017)056
|
# The Photoelectric Effect
• Created by: zac
• Created on: 14-05-13 12:36
Photoelectric effect
When light of at least a minimum threshold frequency is shone on a metal surface, photoelectrons are emitted. The number of electrons emitted depends on the intensity of the light, and the kinetic energy of the electrons depends on the frequency of the light.
If light behaves as a particle (photon), then one photon will instantaneously release one electron, provided its energy is above the work function of the metal. The energy of the photon can be calculated using E = hf. The kinetic energy of the electron will be the photon energy minus the work function.
If light were a wave, then the energy would be supplied continuously, so over enough time the electrons would gain enough energy to be released; this is not observed, so light must be
behaving as a particle.
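In equation form (standard notation, added for clarity; $\phi$ is the work function and $f_0$ the threshold frequency):
$$E = hf, \qquad E_{k,\max} = hf - \phi, \qquad f_0 = \frac{\phi}{h}.$$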
|
# Heisenberg's equation of motion
The equation of motion for an observable A is given by $\dot{A} = \frac{1}{i \hbar} [A,H]$.
If we change representation via some unitary transformation, $A \mapsto \widetilde{A} = U^\dag A U$, is the corresponding equation of motion now
$\dot{\widetilde{A}} = \frac{1}{i \hbar} [\widetilde{A},U^\dag H U]$
or
$\dot{\widetilde{A}} = \frac{1}{i \hbar} [\widetilde{A},H]$?
I'm asking because I want to write the time derivative of the Dirac representation of the position operator in the Foldy-Wouthuysen representation.
Last edited:
## Answers and Replies
malawi_glenn
Homework Helper
If you know how to derive Heisenberg eq of Motion, then you should have no problem to find the answer.
StatusX
Homework Helper
They're the same, the first equation of motion for the operator UAUt gives the second EOM for A.
Are you saying that the transformed operator satisfies the first equation but not the second?
reilly
If the generator of the unitary transform U depends on t -- like going from Schrodinger picture to the Interaction Picture -- then noospace, you have left out a term. Standard stuff, can be found in most QM or QFT texts.
Regards,
Reilly Atkinson
olgranpappy
Homework Helper
I'm asking because I want to write the time derivative of the Dirac representation of the position operator in the Foldy-Wouthuysen representation.
see Messiah QM vol 2.
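For reference, a sketch of the point being made (a standard manipulation, assuming $\widetilde{A} = U^\dagger A U$ as above): if $U$ is time-independent, then
$$\dot{\widetilde{A}} = U^\dagger \dot{A}\, U = \frac{1}{i\hbar}\,U^\dagger [A,H]\, U = \frac{1}{i\hbar}\,[\widetilde{A},\, U^\dagger H U],$$
so the commutator involves the transformed Hamiltonian $U^\dagger H U$, and the two forms quoted in the question coincide only when $U$ commutes with $H$. If $U = U(t)$, as in the Foldy-Wouthuysen case, the product rule adds the extra term mentioned above:
$$\dot{\widetilde{A}} = \frac{1}{i\hbar}\,[\widetilde{A},\, U^\dagger H U] + \dot{U}^\dagger A\, U + U^\dagger A\, \dot{U}.$$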
|
Planar embeddability of the vertices of a graph using a fixed point set is NP-hard Vol. 10, no. 2, pp. 353-363, 2006. Regular paper. Abstract Let G=(V,E) be a graph with n vertices and let P be a set of n points in the plane. We show that deciding whether there is a planar straight-line embedding of G such that the vertices V are embedded onto the points P is NP-complete, even when G is 2-connected and 2-outerplanar. This settles an open problem posed in [,,]. Submitted: July 2005. Revised: June 2006. Communicated by Michael T. Goodrich article (PDF) BibTeX
|
# Base: field decoration (colour, border, …) in reports? [closed]
LO 5.3.7.2/Fedora 26 and LO 5.4.3.2/Fedora 27
Layout for fields (or labels) in the report generator only offers background colour. Under certain circumstances, such as printing on a pure B/W printer, colour filling is inappropriate (risk of hiding the relevant information, even with dithering). In such a case, a border is enough to draw attention without obscuring data.
If I insert a rectangular shape (same size, transparent content to keep only the outline, its "border") and send it behind the field, a nasty interaction between the field and the shape occurs: field vertical alignment is no longer honoured, it reverts to Top.
I then tried to tweak the generated Writer document, but it looks like the cell in the "translated" table can't be individually selected and background or borders are applied to the table (i.e. header or detail area) as a whole.
How can I request a border around a field (and, if possible, vary its attributes like thickness and colour)?
### Closed for the following reason the question is answered, right answer was accepted by ajlittoz close date 2017-12-03 17:10:15.492317
Hello,
As you have seen Report Builder has a great deal of limitations. There have been almost no improvements in it for many, many years. This is why I typically use Jaspersoft studio for reports. However, as mentioned in a previous answer, it is Java based and does require a JDBC connection to Base for Use. This, I believe, is not acceptable to you.
I have no answer for you regarding border width and color as it just doesn't exist. Shapes (or worse - horizontal & vertical lines) can be used for a field border:
For color background on fields, the selection becomes available when the property Background Transparent is changed to No:
And finally, as you may already know, you can set the background colors using Conditional Formatting:
Edit:
Saw your update just after posting. You are correct about the Top situation. Probably only alternative is using horizontal/vertical lines. Only did a quick mock up and even a sloppy result is a pain:
I hope this helps you.
more
Yep, it's really a pain to get something looking good. The key point is the lack of improvement. I naively thought it would inherit from Draw. There are definitely at least 2 modules lagging behind: Report Generator (I cannot judge Base as a whole) and Math.
I close the question as no better clue can be expected. Thanks a lot for your help.
( 2017-12-03 17:09:29 +0100 )edit
Note: I don't mark your answer as "accepted" since it is rather a hopeless situation.
If this is not compliant with AskLO etiquette, don't hesitate to scorn at me and I'll check it. Also, there is no adequate reason in the closing menu (this is not the right answer).
( 2017-12-03 17:14:11 +0100 )edit
@ajlittoz There will be no scorn or complaint from me. In this particular case it is not a matter of a right or wrong answer but the unfortunate truth. While Report Generator lags, Base is not far behind.
What I really appreciate is that you at least responded to my answer. So many get an answer and never state that their problem was solved or not. Sorry I could not bring you better news.
( 2017-12-03 19:07:18 +0100 )edit
You're very kind. It was rather selfish of me: the only way to keep AskLO useful is to acknowledge the efforts made by altruistic people who try to help perfect strangers, and not to discourage them. Once again, thank you for the time devoted to my questions during the past few days.
( 2017-12-03 19:34:02 +0100 )edit
|
# Validate (Possible) File/Directory Path
I am working on a "comprehensive" library for use in my internal applications and I have created a working method (as far as all of my testing has shown thus far) to ensure that a file/directory path is - or, at least, could be - legitimate and should be accessible to any user of the same application. NOTE: These are all internal systems not intended for public use or consumption.
I've tried to pull together bits of information/code I've found that address certain aspects of the issue into a "single" method, part of which involves converting an individual user's mapped drives to full UNC paths (U:\PublicFolder\SomeFile.txt becomes \\SERVERNAME\Share\PublicFolder\SomeFile.txt). On the other hand, if the drive is a local, physical drive on the user's machine, I don't want to convert that to UNC (\\COMPUTERNAME\C\$\SomeFolder\SomeFile.txt), but instead retain the absolute path to the local drive (C:\SomeFolder\SomeFile.txt) to prevent issues with access privileges. This is what I've come up with, but I'm wondering if this code is a bit too ambitious or overly contrived.
Public Enum PathType
File
Directory
End Enum
Public Shared Function GetRealPath(ByVal file As IO.FileInfo) As String
Return GetRealPath(file.FullName, PathType.File)
End Function
Public Shared Function GetRealPath(ByVal folder As IO.DirectoryInfo) As String
Return GetRealPath(folder.FullName, PathType.Directory)
End Function
Public Shared Function GetRealPath(ByVal filePath As String, ByVal pathType As PathType) As String
Dim FullPath As String = String.Empty
If filePath Is Nothing OrElse String.IsNullOrEmpty(filePath) Then
Throw New ArgumentNullException("No path specified")
Else
If filePath.IndexOfAny(IO.Path.GetInvalidPathChars) >= 0 Then
Throw New ArgumentException("The specified path '" & filePath & "' is invalid")
Else
If pathType = PathType.File Then
Try
Dim TempFile As New IO.FileInfo(filePath)
If TempFile.Name.IndexOfAny(Path.GetInvalidFileNameChars) >= 0 Then
Throw New ArgumentException("The specified file name '" & filePath & "' is invalid")
End If
TempFile = Nothing
Catch ex As Exception
Throw New ArgumentException("The specified file name '" & filePath & "' is invalid", ex)
End Try
End If
' The path should not contain any invalid characters. Start trying to populate the FullPath variable.
If IO.Path.IsPathRooted(filePath) Then
FullPath = filePath
Else
Try
FullPath = IO.Path.GetFullPath(filePath)
Catch ex As Exception
Throw New ArgumentException("The specified path '" & filePath & "' is invalid", ex)
End Try
End If
If Not FullPath.StartsWith("\\") Then
Dim PathRoot As String = IO.Path.GetPathRoot(FullPath)
If PathRoot Is Nothing OrElse String.IsNullOrEmpty(PathRoot) Then
FullPath = String.Empty
Throw New ArgumentException("The specified path '" & filePath & "' is invalid")
Else
If Not IO.Directory.GetLogicalDrives.Contains(PathRoot) Then
FullPath = String.Empty
Throw New ArgumentException("The specified path '" & filePath & "' is invalid. Drive '" & PathRoot & "' does not exist.")
Else
Dim CurrentDrive As New System.IO.DriveInfo(PathRoot)
If CurrentDrive.DriveType = DriveType.Network Then
Using HKCU As Microsoft.Win32.RegistryKey = Microsoft.Win32.Registry.CurrentUser.OpenSubKey("Network\" & FullPath(0))
If Not HKCU Is Nothing Then
FullPath = HKCU.GetValue("RemotePath").ToString() & FullPath.Remove(0, 2).ToString()
End If
End Using
ElseIf Not CurrentDrive.DriveType = DriveType.NoRootDirectory AndAlso Not CurrentDrive.DriveType = DriveType.Unknown Then
Dim SubstPath As String = String.Empty
If IsSubstPath(FullPath, SubstPath) Then
FullPath = SubstPath
End If
Else
FullPath = String.Empty
Throw New ArgumentException("The specified path '" & filePath & "' is invalid. Drive '" & CurrentDrive.Name & "' does not exist.")
End If
End If
End If
End If
End If
End If
Return FullPath
End Function
<DllImport("kernel32.dll", SetLastError:=True)>
Private Shared Function QueryDosDevice(ByVal lpDeviceName As String, ByVal lpTargetPath As System.Text.StringBuilder, ByVal ucchMax As Integer) As UInteger
End Function
Private Shared Function IsSubstPath(ByVal pathToTest As String, <Out> ByRef realPath As String) As Boolean
Dim PathInformation As System.Text.StringBuilder = New System.Text.StringBuilder(250)
Dim DriveLetter As String = Nothing
Dim WinApiResult As UInteger = 0
realPath = Nothing
Try
' Get the drive letter of the path
DriveLetter = IO.Path.GetPathRoot(pathToTest).Replace("\", "")
Catch ex As ArgumentException
Return False
End Try
WinApiResult = QueryDosDevice(DriveLetter, PathInformation, 250)
If WinApiResult = 0 Then
' For debugging
Dim LastWinError As Integer = Marshal.GetLastWin32Error()
Return False
End If
' If drive is SUBST'ed, the result will be in the format of "\??\C:\RealPath\".
If PathInformation.ToString().StartsWith("\??\") Then
Dim RealRoot As String = PathInformation.ToString().Remove(0, 4)
RealRoot += If(PathInformation.ToString().EndsWith("\"), "", "\")
realPath = IO.Path.Combine(RealRoot, pathToTest.Replace(IO.Path.GetPathRoot(pathToTest), ""))
Return True
End If
realPath = pathToTest
Return False
End Function
## TESTING DONE
I've run this through a few different tests, although I'm certain I've not been exhaustive in coming up with ways to make it break. Here are the details I can remember:
On my computer, drive S: is mapped to \\SERVERNAME\Accounts\
I've declared the following variables for use during my testing.
Dim TestFile As IO.FileInfo
Dim TestFolder As IO.DirectoryInfo
Dim Path As String
### INDIVIDUAL TESTS/RESULTS
' Existing Directory
TestFolder = New IO.DirectoryInfo("S:\EXE\0984\")
Path = Common.Utility.GetRealPath(TestFolder)
Correctly returns \\SERVERNAME\Accounts\EXE\0984\
' Existing File
TestFile = New IO.FileInfo("S:\EXE\0984\CPI.txt")
Path = Common.Utility.GetRealPath(TestFile)
Correctly returns \\SERVERNAME\Accounts\EXE\0984\CPI.txt
' Not actually a file, but it should return the UNC path
TestFile = New IO.FileInfo("S:\EXE\0984")
Path = Common.Utility.GetRealPath(TestFile)
Correctly returns \\SERVERNAME\Accounts\EXE\0984
' Directory does not exist, but it should return the absolute path
TestFolder = New IO.DirectoryInfo("C:\EXE\0984\")
Path = Common.Utility.GetRealPath(TestFolder)
Correctly returns C:\EXE\0984\
' Random String
TestFile = New IO.FileInfo("Can I make it break?")
Throws an immediate exception before getting to the GetRealPath() method due to illegal characters in the path (?)
' Random String
Path = Common.Utility.GetRealPath("Can I make it break?", Common.Utility.PathType.File)
Throws exception from inside the GetRealPath() method when attempting to convert the String value to an IO.FileInfo object (line 29 in the method's code posted above) due to illegal characters in the path (?)
' Random String
Path = Common.Utility.GetRealPath("Can I make it break?", Common.Utility.PathType.Directory)
Throws exception from inside the GetRealPath() method when attempting to call IO.Path.GetFullPath() on the String value (line 46 in the method's code posted above) due to illegal characters in the path (?)
' Random String
Path = Common.Utility.GetRealPath("Can I make it break", Common.Utility.PathType.Directory)
' AND
Path = Common.Utility.GetRealPath("Can I make it break", Common.Utility.PathType.File)
"Correctly" returns the path to a subfolder of the Debug folder of my project: D:\Programming\TestApp\bin\Debug\Can I make it break
I'm not 100% certain that's the behavior I want, but it's technically correct, and it makes sense for situations where relative paths can come into play.
Heck, the act of posting these examples has already started answering a few questions in my own head and helped me to think through this a bit better.
Admittedly, I've thus far been unable to fully test the SUBST conditions because I don't have any drives that have been SUBSTed and I've been unable thus far to successfully SUBST a path that shows up as a valid drive on my Windows 10 machine.
## EDIT
I've successfully tested the SUBST condition on my local machine (see how my ignorance and "over-confidence" caused me some grief in my question on SO). It looks like this is all working correctly, even though, in the end, I may choose to make a few minor modifications, including:
• I may have to add a parameter to define whether or not I want to allow relative paths to be expanded, and/or possibly check for an appropriate character sequence (./, /, .., etc.) at the start of the string before "approving" the return value. Otherwise, pretty much any string value passed in could potentially result in a "legitimate" path.
• I've been strongly considering making the "workhorse" overload (GetRealPath(String, PathType)) a Private method (along with the PathType Enum) to allow the validation intrinsic to the IO.FileInfo and IO.DirectoryInfo objects help prevent some of the "unexpected" or "unintended" results from allowing any random String input, such as in the last example.
• If you wouldn't mind, please explain the downvote. If this question is not appropriate for this site in some way, I will gladly delete it. I'm honestly just looking for some insight from those more experienced than myself. Oct 9 '19 at 19:37
• You tell us I have created a working method (as far as all of my testing has shown thus far): do you mind sharing these tests with us? Also, since you have a lot of edge cases, it's imperative you document the function to show consumers the specification. Oct 9 '19 at 19:52
• I'll happily edit in some test/result information as far as I'm able to remember it. I haven't explicitly kept those tests, so I may have to "fudge" (and, of course, obfuscate) a little. Oct 9 '19 at 19:55
• @dfhwze - Thank you for asking for the examples of testing. I've edited some into the question and, in so doing, I've already found some places where I can do better, as well as "remembered" some issues I had forgotten I wanted to address. Oct 9 '19 at 21:03
Focusing only on GetRealPath
• You can save some level of indentation by returning early. The code would become easier to read.
• The check If TempFile.Name.IndexOfAny(Path.GetInvalidFileNameChars) >= 0 Then is superfluous because the constructor of FileInfo throws an ArgumentException if there are any invalid chars in the filename.
• FileInfo doesn't hold unmanaged resources, hence you don't need to set it to Nothing.
• It is always better to catch specific exceptions.
• Throwing an Exception inside an If block makes the Else redundant.
• Checking if a string Is Nothing OrElse IsNullOrEmpty can be replaced by just the call to IsNullOrEmpty.
• You don't need to set FullPath = String.Empty if at the next line of code you are throwing an exception.
• Although VB.NET is case insensitive, you should name your local variables using camelCase casing.
Summing up the mentioned changes (except for the specific-exception part) looks like this:
Public Shared Function GetRealPath(ByVal filePath As String, ByVal pathType As PathType) As String
Dim fullPath As String = String.Empty
If String.IsNullOrEmpty(filePath) Then
Throw New ArgumentNullException("No path specified")
End If
If filePath.IndexOfAny(IO.Path.GetInvalidPathChars) >= 0 Then
Throw New ArgumentException("The specified path '" & filePath & "' is invalid")
End If
If pathType = PathType.File Then
Try
Dim tempFile As New IO.FileInfo(filePath)
Catch ex As Exception
Throw New ArgumentException("The specified file name '" & filePath & "' is invalid", ex)
End Try
End If
' The path should not contain any invalid characters. Start trying to populate the FullPath variable.
If IO.Path.IsPathRooted(filePath) Then
fullPath = filePath
Else
Try
fullPath = IO.Path.GetFullPath(filePath)
Catch ex As Exception
Throw New ArgumentException("The specified path '" & filePath & "' is invalid", ex)
End Try
End If
If fullPath.StartsWith("\\") Then
Return fullPath
End If
Dim pathRoot As String = IO.Path.GetPathRoot(fullPath)
If String.IsNullOrEmpty(pathRoot) Then
Throw New ArgumentException("The specified path '" & filePath & "' is invalid")
End If
If Not IO.Directory.GetLogicalDrives.Contains(pathRoot) Then
Throw New ArgumentException("The specified path '" & filePath & "' is invalid. Drive '" & pathRoot & "' does not exist.")
End If
Dim currentDrive As New System.IO.DriveInfo(pathRoot)
If currentDrive.DriveType = DriveType.Network Then
Using HKCU As Microsoft.Win32.RegistryKey = Microsoft.Win32.Registry.CurrentUser.OpenSubKey("Network\" & fullPath(0))
If Not HKCU Is Nothing Then
fullPath = HKCU.GetValue("RemotePath").ToString() & fullPath.Remove(0, 2).ToString()
End If
End Using
ElseIf Not currentDrive.DriveType = DriveType.NoRootDirectory AndAlso Not currentDrive.DriveType = DriveType.Unknown Then
Dim SubstPath As String = String.Empty
If IsSubstPath(fullPath, SubstPath) Then
fullPath = SubstPath
End If
Else
Throw New ArgumentException("The specified path '" & filePath & "' is invalid. Drive '" & currentDrive.Name & "' does not exist.")
End If
Return fullPath
End Function
|
# How do you differentiate f(x)= sin(pix-2)^3 using the chain rule?
$f ' \left(x\right) = \cos {\left(\pi x - 2\right)}^{3} \cdot 3 {\left(\pi x - 2\right)}^{2} \cdot \pi$
$f \left(x\right) = \sin {\left(\pi x - 2\right)}^{3}$
$f ' \left(x\right) = \cos {\left(\pi x - 2\right)}^{3} \cdot 3 {\left(\pi x - 2\right)}^{2} \cdot \pi$
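Reading the function as $f(x) = \sin\big((\pi x - 2)^3\big)$, the chain rule steps are (a brief breakdown added for clarity):
$$u = (\pi x - 2)^3, \qquad \frac{du}{dx} = 3(\pi x - 2)^2 \cdot \pi, \qquad f'(x) = \cos u \cdot \frac{du}{dx} = 3\pi\,(\pi x - 2)^2 \cos\big((\pi x - 2)^3\big).$$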
|
# 19.1: Overview of Classical Thermodynamics
One of the pioneers in the field of modern thermodynamics was James P. Joule (1818 - 1889). Among the experiments Joule carried out was an attempt to measure the effect on the temperature of a sample of water that was caused by doing work on the water. Using a clever apparatus to perform work on water, with a falling weight turning paddles within an insulated canister filled with water, Joule was able to measure a temperature increase in the water.
Thus, Joule was able to show that work and heat can have the same effect on matter – a change in temperature! It would then be reasonable to conclude that heating, as well as doing work on a system, will increase its energy content, and thus its ability to perform work in the surroundings. This leads to an important construct of the First Law of Thermodynamics:
The capacity of a system to do work is increased by heating the system or doing work on it.
The internal energy (U) of a system is a measure of its capacity to supply energy that can do work within the surroundings, making U the ideal variable to keep track of the flow of heat and work energy into and out of a system. Changes in the internal energy of a system ($$\Delta U$$) can be calculated by
$\Delta U = U_f - U_i \label{FirstLaw}$
where the subscripts $$i$$ and $$f$$ indicate the initial and final states of the system. $$U$$, as it turns out, is a state variable. In other words, the amount of energy available in a system to be supplied to the surroundings is independent of how that energy came to be available. That's important because the manner in which energy is transferred is path dependent.
There are two main methods energy can be transferred to or from a system. These are suggested in the previous statement of the first law of thermodynamics. Mathematically, we can restate the first law as
$\Delta U = q + w$
or
$dU = dq + dw$
where q is the amount of energy that flows into a system in the form of heat and w is the amount of energy transferred to the system in the form of work; with this sign convention, w is negative when the system does work on the surroundings.
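A quick numerical illustration of the sign convention (numbers chosen here for illustration only): if 100 J of heat flows into a system while the system does 40 J of work on the surroundings, then
$$q = +100\ \mathrm{J}, \qquad w = -40\ \mathrm{J}, \qquad \Delta U = q + w = +60\ \mathrm{J}.$$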
## Heat
Heat is the kind of energy that, in the absence of other changes, would have the effect of changing the temperature of the system. A process in which heat flows into a system is endothermic from the standpoint of the system ($q_{system} > 0$, $q_{surroundings} < 0$). Likewise, a process in which heat flows out of the system (into the surroundings) is called exothermic ($q_{system} < 0$, $q_{surroundings} > 0$). In the absence of any energy flow in the form of work, the flow of heat into or out of a system can be measured by a change in temperature. In cases where it is difficult to measure temperature changes of the system directly, the amount of heat energy transferred in a process can be measured using a change in temperature of the surroundings. (This concept will be used later in the discussion of calorimetry.)
An infinitesimal amount of heat flow into or out of a system can be related to a change in temperature by
$dq = C dT$
where C is the heat capacity and has the definition
$C = \dfrac{dq}{dT}$
Heat capacities generally have units of $\mathrm{J\ mol^{-1}\ K^{-1}}$ and magnitudes equal to the number of J needed to raise the temperature of 1 mol of substance by 1 K. Similar to a heat capacity is a specific heat which is defined per unit mass rather than per mol. The specific heat of water, for example, has a value of $4.184\ \mathrm{J\ g^{-1}\ K^{-1}}$ (at constant pressure – a pathway distinction that will be discussed later.)
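To connect the differential form to the calculation in Example 1 below (a small worked step, assuming the heat capacity is constant over the temperature range):
$$q = \int_{T_1}^{T_2} C\, dT = C\,(T_2 - T_1) = C\,\Delta T,$$
or, per unit mass, $q = m\,C_s\,\Delta T$ with $C_s$ the specific heat (the symbol $C_s$ is used here only for this note; the example simply writes $C$).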
Example $$\PageIndex{1}$$: Heat required to Raise Temperature
How much energy is needed to raise the temperature of 5.0 g of water from 21.0 °C to 25.0 °C?
Solution:
$q=mC \Delta T$
$q = (5.0 \,\cancel{g}) (4.184 \dfrac{J}{\cancel{g} \, \cancel{°C}}) (25.0 \cancel{°C} - 21.0 \cancel{°C})$
$q= 84\, J$
What is a partial derivative?
A partial derivative, like a total derivative, is a slope. It gives a magnitude for how quickly a function changes value when one of its independent variables changes. Mathematically, a partial derivative is defined for a function $$f(x_1,x_2, \dots x_n)$$ by
$\left( \dfrac{ \partial f}{\partial x_i} \right)_{x_j \neq i} = \lim_{\Delta x_i \rightarrow 0} \left( \dfrac{f(x_1, \dots, x_i + \Delta x_i, \dots, x_n) - f(x_1, \dots, x_i, \dots, x_n) }{\Delta x_i} \right)$
Because it measures how much a function changes for a change in a given independent variable, infinitesimal changes in the function can be described by
$df = \sum_i \left( \dfrac{\partial f}{\partial x_i} \right)_{x_j \neq i} dx_i$
So that each contribution to the total change in the function $$f$$ can be considered separately.
For simplicity, consider an ideal gas. The pressure can be calculated for the gas using the ideal gas law. In this expression, pressure is a function of temperature and molar volume.
$p(V,T) = \dfrac{RT}{V}$
The partial derivatives of p can be expressed in terms of T and V as well.
$\left( \dfrac{\partial p}{ \partial V} \right)_{T} = - \dfrac{RT}{V^2} \label{max1}$
and
$\left( \dfrac{\partial p}{ \partial T} \right)_{V} = \dfrac{R}{V} \label{max2}$
So that the change in pressure can be expressed
$dp = \left( \dfrac{\partial p}{ \partial V} \right)_{T} dV + \left( \dfrac{\partial p}{ \partial T} \right)_{V} dT \label{eq3}$
or by substituting Equations \ref{max1} and \ref{max2}
$dp = \left( - \dfrac{RT}{V^2} \right ) dV + \left( \dfrac{R}{V} \right) dT$
Macroscopic changes can be expressed by integrating the individual pieces of Equation \ref{eq3} over appropriate intervals.
$\Delta p = \int_{V_1}^{V_2} \left( \dfrac{\partial p}{ \partial V} \right)_{T} dV + \int_{T_1}^{T_2} \left( \dfrac{\partial p}{ \partial T} \right)_{V} dT$
This can be thought of as two consecutive changes. The first is an isothermal (constant temperature) expansion from $$V_1$$ to $$V_2$$ at $$T_1$$ and the second is an isochoric (constant volume) temperature change from $$T_1$$ to $$T_2$$ at $$V_2$$. For example, suppose one needs to calculate the change in pressure for an ideal gas expanding from 1.0 L/mol at 200 K to 3.0 L/mol at 400 K. The set up might look as follows.
$\Delta p = \underbrace{ \int_{V_1}^{V_2} \left( - \dfrac{RT}{V^2} \right ) dV}_{\text{isothermal expansion}} + \underbrace{ \int_{T_1}^{T_2}\left( \dfrac{R}{V} \right) dT}_{\text{isochoric heating}}$
or
$\Delta p = \int_{1.0 \,L/mol}^{3.0 \,L/mol} \left( - \dfrac{R( 200\,K)}{V^2} \right ) dV + \int_{200 \,K}^{400 \,K }\left( \dfrac{R}{3.0 \, L/mol} \right) dT$
$\Delta p = \left[ \dfrac{R(200\,K)}{V} \right]_{ 1.0\, L/mol}^{3.0\, L/mol} + \left[ \dfrac{RT}{3.0 \, L/mol} \right]_{ 200\,K}^{400\,K}$
$\Delta p = R \left[ \left( \dfrac{200\,K}{3.0\, L/mol} - \dfrac{200\,K}{1.0\, L/mol}\right) + \left( \dfrac{400\,K}{3.0\, L/mol} - \dfrac{200\,K}{3.0\, L/mol}\right) \right]$
$= -5.47 \, atm$
Alternatively, one could calculate the change as an isochoric temperature change from T1 to T2 at V1 followed by an isothermal expansion from V1 to V2 at T2:
$\Delta p = \int_{T_1}^{T_2}\left( \dfrac{R}{V} \right) dT + \int_{V_1}^{V_2} \left( - \dfrac{RT}{V^2} \right ) dV$
or
$\Delta p = \int_{200 \,K}^{400 \,K }\left( \dfrac{R}{1.0 \, L/mol} \right) dT + \int_{1.0 \,L/mol}^{3.0 \,L/mol} \left( - \dfrac{R( 400\,K)}{V^2} \right ) dV$
$\Delta p = \left[ \dfrac{RT}{1.0 \, L/mol} \right]_{ 200\,K}^{400\,K} + \left[ \dfrac{R(400\,K)}{V} \right]_{ 1.0\, L/mol}^{3.0\, L/mol}$
$\Delta p = R \left[ \left( \dfrac{400\,K}{1.0\, L/mol} - \dfrac{200\,K}{1.0\, L/mol}\right) + \left( \dfrac{400\,K}{3.0\, L/mol} - \dfrac{400\,K}{1.0\, L/mol}\right) \right]$
$= -5.47 \, atm$
This result demonstrates an important property of pressure: pressure is a state variable, so the calculation of changes in pressure does not depend on the pathway!
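One way to see why any path between the same endpoints must give the same $\Delta p$ (a standard check, added here for clarity): $dp$ is an exact differential, because the mixed second derivatives of its two coefficients agree,
$$\left(\frac{\partial}{\partial T}\left(-\frac{RT}{V^2}\right)\right)_V = -\frac{R}{V^2} = \left(\frac{\partial}{\partial V}\left(\frac{R}{V}\right)\right)_T.$$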
## Work
Work can take several forms, such as expansion against a resisting pressure, extending length against a resisting tension (like stretching a rubber band), stretching a surface against a surface tension (like stretching a balloon as it inflates) or pushing electrons through a circuit against a resistance. The key to defining the work that flows in a process is to start with an infinitesimal amount of work defined by what is changing in the system.
Table 3.1.1: Changes to the System

| Type of work | Displacement | Resistance | dw |
| --- | --- | --- | --- |
| Expansion | dV (volume) | -p_ext (pressure) | -p_ext dV |
| Electrical | dQ (charge) | W (resistance) | -W dQ |
| Extension | dL (length) | -t (tension) | t dL |
| Stretching | dA (area) | -s (surf. tens.) | s dA |
The pattern followed is always an infinitesimal displacement multiplied by a resisting force. The total work can then be determined by integrating along the pathway the change follows.
Example $$\PageIndex{2}$$: Work from a Gas Expansion
What is the work done by 1.00 mol of an ideal gas expanding from a volume of 22.4 L to a volume of 44.8 L against a constant external pressure of 0.500 atm?
Solution
$dw = -p_{ext} dV \nonumber$
Since the pressure is constant, we can integrate easily to get the total work:
$w = -p_{ext} \int_{V_1}^{V_2} dV \nonumber$
$w = -p_{ext} ( V_2-V_1) \nonumber$
$w = -(0.500 \,atm)(44.8 \,L - 22.4 \,L) \left(\dfrac{8.314 \,J}{0.08206 \,atm\,L}\right) \nonumber$
$= -1130 \,J = -1.13 \;kJ \nonumber$
Note: The ratio of gas law constants can be used to convert between atm∙L and J quite conveniently!
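The arithmetic in the example can be checked numerically with exactly that conversion; a minimal sketch using the same numbers:

# Expansion work against a constant external pressure (values from Example 2)
p_ext = 0.500               # atm
V1, V2 = 22.4, 44.8         # L
w_atmL = -p_ext * (V2 - V1)           # work in atm L
w_J = w_atmL * (8.314 / 0.08206)      # convert atm L to J with the ratio of gas constants
print(w_J)                  # approximately -1.13e3 J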
|
## parameters values...
I want to find the parameters that satisfy the inequalities below simultaneously:
beta*(v+u)>phi*(alpha+lamda+u+delta)
phi>beta
beta> (alpha+lamda+u+delta)
phi* (alpha+lamda+u+delta)>beta*(u+v+alpha),
Question: find values of u, v, delta, lamda, phi, beta, alpha such that all these parameters lie in the interval ]0,1].
Can anyone help me?
## solving a PDE system with constant parameters ...
Hi guys!
I have a PDE system. The majority of the equations are equal to zero, but two of them are:
where a, b, c, d are CONSTANT parameters. I know that if a=b=d=c=1 the system is inconsistent. But I also know that if a=-1, b=d=0 and c=1 the system is consistent and a solution exists. I want to know if there is a way to ask Maple to find other selections of my parameters that make my PDE system consistent, and what the solution is for that selection of a, b, c, d.
Here's my PDE system (sys2).
test.mw
thank you so much for your time!
## use both slider and textarea to control parameters...
Hello,
With the Explore function, the use of sliders is very convenient to test the sensibility of a result with regard to a parameter.
However, it is also very convenient to specify an accurate value for a parameter.
Consequently, I would like to combine the use of sliders (usually defined in the default mode) with the use of the option "controller=textarea".
Do you have ideas on how to combine the use of sliders and the use of textarea for the definition of the parameters in the Explore function?
Here you can find an example of the theta4 function depending on 8 parameters (xp3, xp4, zp3, zp4, phi3, phi4, gamma3, gamma4).
TestExplore_4.mw
I managed to use either the sliders or the textarea option, but not both.
Hello, I am using PDEtools to evaluate an equation but got "system inconsistent" with respect to a parameter after the command map(pdsolve). I am afraid the result subsequently given may not be correct; did I do something wrong?
Thanks.
test.mw
## How to manipulate numeric DE solutions?...
Also, whenever I try to solve that strange equation, Maple gives me an answer in terms of Z with an integral in it, and I don't get anything useful. I evaluate it at a point with right-click > Evaluate at a point, choosing the variable values and the time at which I want to evaluate the equation. It then gives me the same strange solution, but with the variables replaced by the values I gave the program. Then I choose right-click > Approximate, and no matter whether I select 5, 10 or more digits, it returns the same strange equation, not the number I'm waiting for.
So what I want to know now is how I can manipulate a numerical solution to a differential equation.
For example, if I have this system of differential equations
dx/dt = (-k/m) (x sqrt(x^2 + y^2))
dy/dt = (-k/m) (y sqrt(x^2 + y^2)) - g
which is a projectile trajectory taking air drag into account, and I want to know at what initial speed I have to fire the body for it to reach 300 m, if it is fired from the initial point (0,0) and its initial velocity in x is 3 times its initial velocity in y (initial launch angle condition).
How can I input those conditions to make Maple give me a solution for something like that?
## Plot of parametric surface ...
Is there an elegant way to plot a surface with three given parameters, such as
x=(5+w\cos v)\cos u, y=(5+w\cos v)\sin u, z=w\sin v
with u,v between 0 and 2Pi and w between 0 and 3?
## Parametric linear system ...
Hi,
I am trying to solve a simple system of the form AX=0, where A is a N*N matrix, X is an N*1 vector (and the right-hand side of the equation is an N*1 vector of zeros, I apologize for the inexact notation). The difficulty comes from the fact that the values of A are parameterized by 2*N parameters (that I will write as the 2*N vector P), and I would like to get a solution in the form X=f(P).
One solution is to try to use LinearAlgebra[LinearSolve], but it only returns the trivial solution X=0, which I am not interested in.
Another solution is to compute analytically the Moore-Penrose pseudoinverse Ag of A, as the general solution is of the form
(I - Ag A)f ;
where f is a vector of free parameters. However, even for a small matrix size (N=4), Maple is still computing after 3 hours on my (fairly powerful) machine, and it is taking more and more memory over time. As the results are polynomial/rational equations in the parameters P, I was actually expecting Maple to be more powerful than other softwares, but for this particular problem, Matlab's symbolic toolbox (muPAD) gives quick solutions until N=6. I need, in the end, to solve additional polynomial/rational equations that are derived from the solutions X=f(P), where Matlab fails. This is why I would really like to be able to solve the above-mentioned problem AX=0 with Maple in order to try to solve the subsequent step of the problem (polynomial system) with Maple.
Any suggestions on how to do this would be highly appreciated! Thank you very much for your time and help.
Laureline
## Maximum likelihood estimation...
Hi All
Assume that we have a stochastic model with following density function
and our goal is to estimate unknown parameters namely, alpha, beta, landa, mu and sigma by any available method especially maximum likelihood estimation method.
How can we do it with maple software?
Does the "MaximumLikelihoodEstimate" command can help?
or should i define Maximum Likelihood function first and then differentiate it according to unknown parameters?
Ph.D Candidate
Applied Mathematics Department
## Minimum of maximum of parametric polynomial...
Let us consider the maximum value of the polynomial
x^4+c*x^2+x^3+d*x-c-1
on the interval x=-1..1 as a function g of the parameters c and d. General considerations suggest its continuity. However, a plot3d of g does not confirm it. Also the question arises "Is the function g(c,d) bounded from below?". Here is my try with the DirectSearch and NLPSolve:
DirectSearch:-GlobalOptima( (a, b) -> g(a, b), variables = [a, b])
## Bug in keyword parameters?...
This might be a mis-understanding on my part, so I figured I would ask a question first. Narrowing down my code to something minimalistic, suppose I have 2 functions, each of which take a 'context' (abbreviated ctx) as a keyword parameter. Now, suppose that the first one calls the second, like so:
foo := proc({ctx :: list := []}) bar(ctx = ctx) end proc:
bar := proc({ctx :: list := []}) nops(ctx) end proc:
and then a call "foo(ctx = [a,b,c])" returns the (completely unexpected) value 0. See if you can puzzle out why. If I change my code to use different names, like
foo1 := proc({ctx :: list := []}) bar1(_ctx = ctx) end proc:
bar1 := proc({_ctx :: list := []}) nops(_ctx) end proc:
Then the call "foo1(ctx = [a,b,c])" returns the expected 3.
I have tried a number of variants, like changing the call to bar('ctx' = ctx) in foo, but that does not work. The completely un-intuitive :-ctx does work.
Is this documented anywhere? Is this really the intended design? Not being able to re-use a name for a keyword parameter without going through some contortions seems, shall I say, a little odd?
## Type check of parameters, take II...
The issue Type check of parameters was resolved using the depends modifier. As far as I can tell, this modifier is not allowed for expected or keyword parameters, though. Thus the issue seems to reemerge for these types of parameters. Consider the following test example:
createModule := proc(V::Vector)
  local dim := LinearAlgebra:-Dimension(V);
  module()
    export f, g, h;
    f := proc( x::depends('Vector'(dim)) ) x end proc;
    g := proc( x::expects('Vector'(dim)) := something ) x end proc;
    h := proc( {x :: 'Vector'(dim) := something} ) x end proc;
  end module
end proc:
createModule(Vector(4)):-f( Vector(4) );
createModule(Vector(4)):-g( Vector(4) );
createModule(Vector(4)):-h( x = Vector(4) );
The function f is just a restatement of the already resolved issue, compare the above link, while the functions g and h are for the expected and keyword parameter cases, respectively. The problem remains the same: the variable dim is not evaluated for g and h. What to do? Does there exist a solution equally satisfactory as the one for f?
## Type check of parameters...
Consider the following two test procedures for creation of the same module:
createModule1 := proc(dim::posint)
  module()
    export det;
    det := (x::Matrix(1..dim, 1..dim)) -> Determinant(x);
  end module
end proc:
and
createModule2 := proc(A::Matrix(square))
  local dim;
  dim := RowDimension(A);
  module()
    export det;
    det := (x::Matrix(1..dim, 1..dim)) -> Determinant(x);
  end module
end proc:
as well as the following code lines calling these:
createModule1( 2 ):-det(IdentityMatrix(2));
createModule2(Matrix(2)):-det(IdentityMatrix(2));
The first line executes unproblematically, while for the second line an error results concerning the dimensionality check 1..dim,1..dim of the matrix. Why is dim available/initialized in the first version, while not in the second?
## Plot with three parameters...
Hi,
Need help in plotting the function in the attached file. m, h and k are parameters.
Thanks
Bessel_Function.mw
## Fitting vector functions...
Dear Maple users
I know how to fit a function with some parameters to some data, but how can it be done if the data are 2-dimensional? I mean: I have a time array T, an X array and a Y array. How do I fit a function with certain parameters, (x(t), y(t)), to the data (X(T), Y(T))?
Erik
## Set of polynomials with certain property? ...
How can I compute F from G according to the following text? (I implemented this but I need a more efficient implementation.)
Given a set G of polynomials which is a subset of k[U, X] and a monomial order with U << X, we want to compute a set F from G s.t.
1. F is subset of G and for any two distinct f1, f2 in F , neither lpp (f1) is a multiple of lpp (f2) nor lpp (f2) is a multiple of lpp (f1).
2. for every polynomial g in G, there is some polynomial f in F such that lpp (g) is a multiple of
lpp (f ), i.e. ⟨lpp (F )⟩ = ⟨lpp (G)⟩,
--------------------------------------------------------------------------------------
It is worth noting that F is not unique.
Example: Let us consider G = {ax^2 − y, ay^2 − 1, ax − 1, (a + 1)x − y, (a + 1)y − a} ⊂ Q[a, x, y], with the lexicographic order on terms with a < y < x.
Then F = {ax − 1, (a + 1)y − a} and F′ = {(a + 1)x − y, (a + 1)y − a} are both valid choices for such a set.
Please note that k[U, X] is a parametric polynomial ring (U is a sequence of parameters and X is a sequence of variables).
lpp(f) is the leading monomial of f w.r.t. the variables X. For example, lpp(a*x^2 + b*y) = x^2.
|
# Obtaining JAXA PALSAR Data and Forest / Non-Forest Maps
JAXA have recently released their global forest / non-forest map at 50 m resolution and the Advanced Land Orbiting Satellite (ALOS) Phased Array L-Band SAR (PALSAR) data from which they were derived. This is really exciting because SAR data provides a different view of the world than optical data, which we’re more used to viewing. A particularly interesting feature of L-band SAR for mapping vegetation is the ability to ‘see’ through clouds and the canopy layer of vegetation. A good introduction to SAR data, in the context of vegetation mapping, is provided in the following paper:
Rosenqvist, A., Finlayson, C. M., Lowry, J and Taylor, D., 2007. The potential of long- wavelength satellite-borne radar to support implementation of the Ramsar Wetlands Convention. Aquatic Conservation: Marine and Freshwater Ecosystems. 17:229–244.
## Downloading data
You can download data from:
http://www.eorc.jaxa.jp/ALOS/en/palsar_fnf/fnf_index.htm
You need to sign up for an account but this is a quick and straightforward process.
You can download data in 1 x 1 degree tiles or batches of 5 x 5 degrees. Data are in ENVI format, and can be read with GDAL, or programs that use GDAL (e.g., QGIS). If you don’t already have a viewer, you can download TuiView to open them with. ArcMap can read them (as it uses GDAL) but it won’t recognise them if you go through the ‘Add Data’ dialogue. However, you can just drag the files (the larger files without the ‘.hdr’ extension) from Windows Explorer to the ‘Table of Contents’.
## Mosaic data
To mosaic all files in a 5 x 5 degree batch (or any number of files), you can use a combination of GNU Parallel to untar and gdalbuildvrt. Assuming we want to mosaic the HH- and HV-polarisation PALSAR data the following commands can be used:
# Untar file
tar -xf N60W105_07_MOS.tar.gz
# Change into directory
cd N60W105
# Untar all files, in parallel using GNU Parallel
ls *.gz | parallel tar xf
# Create a list of HH and HV files
ls *_HH > hhfilelist.txt
ls *_HV > hvfilelist.txt
# Build VRT
gdalbuildvrt -input_file_list hhfilelist.txt N60W105_HH.vrt
gdalbuildvrt -input_file_list hvfilelist.txt N60W105_HV.vrt
This will create virtual rasters, which are text files containing references to the data. You can convert to real rasters (KEA, geotiff etc.,) using gdal_translate:
gdal_translate -of GTiff N60W105_HH.vrt N60W105_HH.tif
gdal_translate -of GTiff N60W105_HV.vrt N60W105_HV.tif
## Calibrate data
The data are supplied as digital numbers (DN). To calibrate and convert to dB, the following equation is used [1]:
$10 \times \log_{10} (DN^2) - 83.0$
You can run this calibration in RSGISLib using the following Python script:
# Import RSGISLib modules
import rsgislib
from rsgislib import imagecalc
from rsgislib import imageutils
# Set input and output image
inimage = 'N60W105_HH.vrt'
outimage = 'N60W105_HH_db.kea'
# Run image maths (the first band is referenced as b1 in the expression)
imagecalc.imageMath(inimage, outimage, '10*log10(b1^2) - 83.0', 'KEA', rsgislib.TYPE_32FLOAT)
# Calculate stats and pyramids (for fast display)
imageutils.popImageStats(outimage, True, 0., True)
Alternatively you could grab the calPALSAR_dB.py script to perform the same calculation using RIOS (which will run under windows, see earlier post for more details)
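If RSGISLib is not available, the same dB conversion can be sketched with GDAL's Python bindings and NumPy (an illustrative alternative, not from the original post; for simplicity it reads the whole band into memory and assumes DN = 0 marks no data):

# Convert PALSAR DN values to dB with GDAL + NumPy (sketch)
import numpy as np
from osgeo import gdal

ds = gdal.Open('N60W105_HH.vrt')
dn = ds.GetRasterBand(1).ReadAsArray().astype('float32')

db = np.full(dn.shape, np.nan, dtype='float32')
valid = dn > 0                                   # assume DN = 0 is no data
db[valid] = 10 * np.log10(dn[valid]**2) - 83.0   # calibration equation [1]

driver = gdal.GetDriverByName('GTiff')
out = driver.Create('N60W105_HH_db.tif', ds.RasterXSize, ds.RasterYSize, 1, gdal.GDT_Float32)
out.SetGeoTransform(ds.GetGeoTransform())
out.SetProjection(ds.GetProjection())
out.GetRasterBand(1).WriteArray(db)
out.FlushCache()
out = None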
SAR data takes a while to get your head into but once you do it provides a wealth of information.
Update – data are now available at 25 m resolution from the same place
[1] Shimada, M. and Ohtaki, T. 2010. Generating large-scale high-quality SAR mosaic datasets: Application to PALSAR data for global monitoring. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 3(4):637–656.
|
# Modeling Match Results in La Liga Using a Hierarchical Bayesian Poisson Model: Part one.
Update: This series of posts are now also available as a technical report:
Bååth, R. (2015), Modeling Match Results in Soccer using a Hierarchical Bayesian Poisson Model. LUCS Minor, 18. (ISSN 1104-1609) (pdf)
This is a slightly modified version of my submission to the UseR 2013 Data Analysis Contest which I had the fortune of winning :) The purpose of the contest was to do something interesting with a dataset consisting of the match results from the last five seasons of La Liga, the premium Spanish football (aka soccer) league. In total there were 1900 rows in the dataset each with information regarding which was the home and away team, what these teams scored and what season it was. I decided to develop a Bayesian model of the distribution of the end scores. Here we go…
Ok, first I should come clean and admit that I know nothing about football. Sure, I’ve watched Sweden lose to Germany in the World Cup a couple of times, but that’s it. Nevertheless, here I will show an attempt to model the goal outcomes in the La Liga data set provided as part of the UseR 2013 data analysis contest. My goal is not only to model the outcomes of matches in the data set but also to be able to (a) calculate the odds for possible goal outcomes of future matches and (b) to produce a credible ranking of the teams. The model I will be developing is a Bayesian hierarchical model where the goal outcomes will be assumed to be distributed according to a Poisson distribution. I will focus more on showing all the cool things you can easily calculate in R when you have a fully specified Bayesian Model and focus less on model comparison and trying to find the model with highest predictive accuracy (even though I believe my model is pretty good). I really would like to see somebody try to do a similar analysis in SPSS (which most people use in my field, psychology). It would be a pain!
This post assumes some familiarity with Bayesian modeling and Markov chain Monte Carlo. If you’re not into Bayesian statistics you’re missing out on something really great and a good way to get started is by reading the excellent Doing Bayesian Data Analysis by John Kruschke. The tools I will be using are R (of course) with the model implemented in JAGS called from R using the rjags package. For plotting the result of the MCMC samples generated by JAGS I’ll use the coda package, the mcmcplots package, and the plotPost function courtesy of John Kruschke. For data manipulation I used the plyr and stringr packages and for general plotting I used ggplot2. This report was written in RStudio using knitr and xtable.
I start by loading libraries, reading in the data and preprocessing it for JAGS. The last 50 matches have unknown outcomes and I create a new data frame d holding only matches with known outcomes. I will come back to the unknown outcomes later when it is time to use my model for prediction.
## Modeling Match Results: Iteration 1
How are the number of goals for each team in a football match distributed? Well, let’s start by assuming that all football matches are roughly equally long, that both teams have many chances at making a goal and that each team has the same probability of making a goal at each goal chance. Given these assumptions the distribution of the number of goals for each team should be well captured by a Poisson distribution. A quick and dirty comparison between the actual distribution of the number of scored goals and a Poisson distribution having the same mean number of scored goals supports this notion.
All teams aren’t equally good (otherwise Sweden would actually win the World Cup now and then) and it will be assumed that all teams have a latent skill variable and the skill of a team minus the skill of the opposing team defines the predicted outcome of a game. As the number of goals is assumed to be Poisson distributed it is natural that the skills of the teams are on the log scale of the mean of the distribution. The distribution of the number of goals for team $i$ when facing team $j$ is then
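In symbols, the model described above is roughly the following (a sketch; the exact notation is assumed rather than quoted):

$\text{Goals}_{i,j} \sim \text{Poisson}(\lambda_{i,j}), \quad \log(\lambda_{i,j}) = \text{baseline} + \text{skill}_i - \text{skill}_j$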
where baseline is the log average number of goals when both teams are equally good. The goal outcome of a match between home team $i$ and away team $j$ is modeled as:
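Roughly, that is (again a sketch of the described model rather than the original notation):

$\text{HomeGoals}_{i,j} \sim \text{Poisson}(\lambda_{\text{home},i,j}), \quad \log(\lambda_{\text{home},i,j}) = \text{baseline} + \text{skill}_i - \text{skill}_j$

$\text{AwayGoals}_{i,j} \sim \text{Poisson}(\lambda_{\text{away},i,j}), \quad \log(\lambda_{\text{away},i,j}) = \text{baseline} + \text{skill}_j - \text{skill}_i$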
Add some priors to that and you’ve got a Bayesian model going! I set the prior distributions over the baseline and the skill of all $n$ teams to:
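Something along these lines (the SD of 4 for the baseline is stated below; the group-level hyperpriors on the skills are an assumption):

$\text{baseline} \sim \text{Normal}(0, 4^2), \quad \text{skill}_t \sim \text{Normal}(\mu_{\text{teams}}, \sigma_{\text{teams}}^2)$

$\mu_{\text{teams}} \sim \text{Normal}(0, 4^2), \quad \sigma_{\text{teams}} \sim \text{Uniform}(0, 3)$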
Since I know nothing about football these priors are made very vague. For example, the prior on the baseline has an SD of 4 but since this is on the log scale of the mean number of goals it corresponds to one SD from the mean $0$ covering the range of $[0.02, 54.6]$ goals. A very wide prior indeed.
Turning this into a JAGS model requires some minor adjustments. The model has to loop over all the match results, which adds some for-loops. JAGS parameterizes the normal distribution with precision (the reciprocal of the variance) instead of variance so the hyper priors have to be converted. Finally I have to “anchor” the skill of one team to a constant otherwise the mean skill can drift away freely. Doing these adjustments results in the following model description:
I can then run this model directly from R using rjags and the handy textConnection function. This takes a couple of minutes on my computer, roughly enough for a coffee break.
Using the generated MCMC samples I can now look at the credible skill values of any team. Let’s look at the trace plot and the distribution of the skill parameters for FC Sevilla and FC Valencia.
Seems like Sevilla and Valencia have similar skill with Valencia being slightly better. Using the MCMC samples it is not only possible to look at the distribution of parameter values but it is also straightforward to simulate matches between teams and look at the credible distribution of the number of goals scored and the probability of a win for the home team, a win for the away team or a draw. The following functions simulate matches with one team as home team and one team as away team and plot the predicted result together with the actual outcomes of any matches in the laliga data set.
Let’s look at Valencia (home team) vs. Sevilla (away team). The graph below shows the simulation on the first row and the historical data on the second row.
The simulated data fits the historical data reasonably well and both the historical data and the simulation show that Valencia would win with a slightly higher probability than Sevilla. Let’s swap places and let Sevilla be the home team and Valencia be the away team.
Here we discover a problem with the current model. While the simulated data looks the same, except that the home team and the away team swapped places, the historical data now shows that Sevilla often wins against Valencia when being the home team. Our model doesn’t predict this because it doesn’t consider the advantage of being the home team. This is fortunately easy to fix as I will show in part two of Modeling Match Results in La Liga Using a Hierarchical Bayesian Poisson Model.
|
# Is this conjugate relation an equivalence relation?
A subgroup $H$ is conjugate to a subgroup $K$ of a group $G$ if there exists an inner automorphism $i_g$ of $G$ such that $i_g[H]=K$. Show that conjugacy is an equivalence relation on the collection of subgroups of $G$.
My main question here is about the different approach I've taken compared to the textbook. I've taken quite a similar approach, but the textbook has avoided it. I'm asking this question here because I believe that if my simple approach were acceptable, the textbook would have used it instead.
Symmetry : If $H$ is conjugate to K then $\exists g \epsilon G. : gHg^{-1}=K \implies H =g Kg^{-1}$ So indeed $K$ is conjugate to $H$
However, the textbook adopted a different approach:
Suppose that $i_g[H]=H$ so that for each $k \epsilon K$ we have $k=ghg^{-1}$ for exactly one $h \epsilon H$. Then $h=g^{-1}kg = g^{-1}k(g^{-1})^{-1},$ and we see that $i_g[K]=H$ so $K$ is alo conjugate to $H$
My question is: why couldn't the textbook just do these steps between sets, as I did, instead of feeling the need to do it between elements of the sets? Is my approach wrong in this case? Can't we work with an equality relation between sets?
• Your implication is incorrect. You need to invert $g$ to make it correct. – Tobias Kildetoft Apr 22 '17 at 10:46
• @TobiasKildetoft "invert"? I see, but my MAIN question is not about that. – Xenidia Apr 22 '17 at 10:51
• I think you've lost lots of $g^{-1}$'s in both your proof and the book's version. And muddled at least one $K$ into an $H$. But anyway it's not at all important, maybe the author thought students would find her way easier? – ancientmathematician Apr 22 '17 at 12:50
|
Choose the type of report you wish to view using the dropdown menu. The Model reports show the recent forecasts and performance of the ensemble or individual models, whereas the Country reports show model-by-model performance, either overall or by country. The reports get updated every Tuesday.
# European COVID-19 Forecast Hub Evaluation Report for epiMOX-SUIHTER
## Latest forecasts
Forecasts of cases/deaths per week per 100,000. Click the Forecast tab above to view all past forecasts.
## Predictive performance
Skill is shown as weighted interval score.
### Cases
##### Overall predictive performance
The table shows the skill of the epiMOX-SUIHTER model relative to the baseline (vs. baseline) and the ensemble model (vs. ensemble) for the past 10 weeks in terms of the Weighted Interval Score (WIS) and Absolute Error (AE). Values less than 1 indicate the model performs better than the model it is being compared with, and values greater than 1 that it performs worse. WIS values are only shown if all 23 quantiles are provided.
##### Performance over time
The plots below show the data and forecasts with 50% and 90% predictive intervals and, in the panel to the right of the data plots, forecast skill as measured by the Weighted Interval Score. Lower values indicate better predictive performance.
##### Overall predictive performance
Overall scores are only created for models that were used for forecasts in each of the last 4 weeks, excluding periods during which there were anomalies in the data. At the moment the model does not fulfill that criterion.
##### Performance over time
The plots below show the data and forecasts with 50% and 90% predictive intervals and, in the panel to the right of the data plots, forecast skill as measured by the Weighted Interval Score. Lower values indicate better predictive performance.
No recent forecasts available targeting the last 10 weeks.
### Deaths
##### Overall predictive performance
The table shows the skill of the epiMOX-SUIHTER model relative to the baseline (vs. baseline) and the ensemble model (vs. ensemble) for the past 10 weeks in terms of the Weighted Interval Score (WIS) and Absolute Error (AE). Values less than 1 indicate the model performs better than the model it is being compared with, and values greater than 1 that it performs worse. WIS values are only shown if all 23 quantiles are provided.
##### Performance over time
The plots below show the data and forecasts with 50% and 90% predictive intervals and, in the panel to the right of the data plots, forecast skill as measured by the Weighted Interval Score. Lower values indicate better predictive performance.
##### Overall predictive performance
Overall scores are only created for models that were used for forecasts in each of the last 4 weeks, excluding periods during which there were anomalies in the data. At the moment the model does not fulfill that criterion.
##### Performance over time
The plots below show the data and forecasts with 50% and 90% predictive intervals and, in the panel to the right of the data plots, forecast skill as measured by the Weighted Interval Score. Lower values indicate better predictive performance.
No recent forecasts available targeting the last 10 weeks.
### Hospitalisations
##### Overall predictive performance
Overall scores are only created for models that were used for forecasts in each of the last 4 weeks, excluding periods during which there were anomalies in the data. At the moment the model does not fulfill that criterion.
##### Performance over time
The plots below show the data and forecasts with 50% and 90% predictive intervals and, in the panel to the right of the data plots, forecast skill as measured by the Weighted Interval Score. Lower values indicate better predictive performance.
No recent forecasts available targeting the last 10 weeks.
## Forecast calibration
The plots below describe the calibration of the model, that is its ability to correctly quantify its uncertainty, across all predicted countries.
#### Overall coverage
Coverage is the proportion of observations that fall within a given prediction interval. Ideally, a forecast model would achieve 50% coverage of 0.50 (i.e., 50% of observations fall within the 50% prediction interval) and 95% coverage of 0.95 (i.e., 95% of observations fall within the 95% prediction interval), indicated by the dashed horizontal lines below. Values of coverage greater than these nominal values indicate that the forecasts are underconfident, i.e. prediction intervals tend to be too wide, whereas values of coverage smaller than these nominal values indicate that the ensemble forecasts are overconfident, i.e. prediction intervals tend to be too narrow.
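As a concrete illustration, empirical coverage can be computed from a table of forecasts and observations along these lines (a Python sketch; the column names are assumptions and not the hub's actual schema):

# Empirical coverage of a central prediction interval (sketch)
import pandas as pd

def empirical_coverage(df, level):
    # Proportion of observations that fall inside the central <level>% interval
    inside = (df["observed"] >= df[f"lower_{level}"]) & (df["observed"] <= df[f"upper_{level}"])
    return inside.mean()

# Tiny illustrative table (hypothetical numbers)
forecasts = pd.DataFrame({
    "observed": [120, 95, 210, 180],
    "lower_50": [100, 90, 190, 185],
    "upper_50": [140, 110, 230, 220],
})
print(empirical_coverage(forecasts, 50))   # ideally close to 0.50 over many forecasts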
#### PIT histograms
The figures below are PIT histograms for all past forecasts. These show the proportion of true values within each predictive quantile (width: 0.1). If the forecasts were perfectly calibrated, observations would fall evenly across these equally-spaced quantiles, i.e. the histograms would be flat.
|
Chinese Journal of Chemical Physics 2018, Vol. 31 Issue (4): 568-574
#### The article information
Qiang Zhang, Yang Du, Chen Chen, Wei Zhuang
Rotational Mechanism of Ammonium Ion in Water and Methanol
Chinese Journal of Chemical Physics, 2018, 31(4): 568-574
http://dx.doi.org/10.1063/1674-0068/31/cjcp1806144
### Article history
Accepted on: July 25, 2018
Rotational Mechanism of Ammonium Ion in Water and Methanol
Qiang Zhanga, Yang Dua, Chen Chena, Wei Zhuangb
Dated: Received on June 18, 2018; Accepted on July 25, 2018
a. Department of Chemistry, Bohai University, Jinzhou 121013, China;
b. State Key Laboratory of Structural Chemistry, Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, Fuzhou 350002, China
*Author to whom correspondence should be addressed. Qiang Zhang, E-mail:[email protected]; Wei Zhuang, E-mail:[email protected]
Abstract: Dynamics of ammonium and ammonia in solutions is closely related to the metabolism of ammoniac compounds, therefore plays an important role in various biological processes. NMR measurements indicated that the reorientation dynamics of NH4+ is faster in its aqueous solution than in methanol, which deviates from the Stokes-Einstein-Debye rule since water has higher viscosity than methanol. To address this intriguing issue, we herein study the reorientation dynamics of ammonium ion in both solutions using numerical simulation and an extended cyclic Markov chain model. An evident decoupling between translation and rotation of methanol is observed in simulation, which results in the deviation of reorientation from the Stokes-Einstein-Debye rule. Slower hydrogen bond (HB) switchings of ammonium with methanol comparing to that with water, due to the steric effect of the methyl group, remarkably retards the jump rotation of ammonium. The observations herein provide useful insights into the dynamic behavior of ammonium in the heterogeneous environments including the protein surface or protein channels.
Key words: Ammonium, Jump rotation, Hydrogen bond switching, Methanol, Molecular dynamics simulation
Ⅰ. INTRODUCTION
Ammonium and ammonia are important media of nitrogen exchange between life and the external environment [1-5]. Many studies were carried out to explore how proteins take up ammonium produced by glutamine metabolism [5]. It is therefore interesting to study how ammonium [5, 6] moves differently in different dielectric environments (such as in water and in amphiphilic solvents) [7-9]. An intriguing observation from nuclear magnetic resonance (NMR) measurements is that ammonium rotates faster in water (viscosity: 8.95$\times$10$^{-4}$ Pa$\cdot$s) than in a less viscous solvent, such as methanol (viscosity: 5.41$\times$10$^{-4}$ Pa$\cdot$s) [12-15], which deviates from the seminal Stokes-Einstein-Debye (SED) rule of hydration dynamics predicting that the rotational diffusion constant $D_{\rm{R}}$ of a solute molecule should scale inversely with the solvent viscosity [9-11].
Previous theoretical simulations using classical as well as ab initio interaction potentials speculated that in the aqueous solution ammonium cation moves through a consecutive series of discontinuous "jumps", which possibly leads to its fast reorientation (with respect to water reorientation in the aqueous environment) [16-23]. However, this doesn't explain why the rotation of ammonium is faster in water than that in methanol, where multiple hydrogen bonds can form between ammonium hydrogen and methanol oxygen as well.
In the past decade, an extended "molecular jump" picture has been adopted to rationalize the reorientation mobility of water. Water reorientation is considered to be contributed by two components in the aqueous solutions: a large-amplitude angular jump during the hydrogen bond switching and a diffusive "frame diffusion" between two consecutive switchings [24]. An extended cyclic Markov chain model is then used to calculate this dynamics in a quantitative way [24]. It has been successfully applied to explain the rotational behaviors of water in various environments, such as the ion effects on the water rotation in the electrolyte solutions [25-30].
In this work, the rotational diffusion constants and the reorientation correlation time of NH$_4^+$ in water and methanol at low concentration were calculated using molecular dynamics simulation and the extended cyclic Markov chain model. The results suggest that the rotational behavior of ammonium in both solvents is similar to that of water in aqueous solutions. Furthermore, the large-amplitude angular jump during the hydrogen bond switching plays the decisive role in determining its overall rotational speed. Due to the lower accessibility of potential hydrogen bond acceptors in methanol than in water, the rate of the hydrogen bond switching, and consequently the jump reorientation, is much lower in methanol than in water, which leads to the slower overall ammonium reorientation measured by NMR.
Ⅱ. COMPUTATIONAL METHODS
A. Potential models and simulation details
The classical molecular simulations were carried out to explore the properties of NH$_4$Cl water and methanol solutions with both additive and non-additive force fields. SPC/E (the extended simple point charge model) water [31] and OPLS (Optimized Potentials for Liquid Simulations) ammonium [32] force fields were employed to simulate the interactions. The viscosities of pure water and methanol with the SPC/E and OPLS force fields are well consistent with the experiments at ambient conditions [14, 15, 33]. The parameters of the additive force fields are listed in Table Ⅰ. The molecular dynamics simulations were performed with the GROMACS 4.5 software [34].
Table Ⅰ The force field parameters.
For the initial simulation systems, the solvent molecules (water or methanol), NH$_4^+$ and Cl$^-$ were randomly inserted into cubic boxes according to molar ratios of NH$_4$Cl:solvent (1:110 and 1:55) with a fixed number of solvent molecules (1100). The corresponding concentrations of NH$_4$Cl in both solvents were about 0.5 and 1 mol/L. After initial geometry optimizations and a 5 ns pre-equilibration simulation for each system, extended 4 ns NPT simulations were carried out to obtain the final properties. The equations of motion were integrated by the velocity Verlet scheme with a time step of 2 fs [9]. Temperature and pressure were maintained at 298 K and 1 atm using the Berendsen scheme with weak coupling constants of 0.1 and 1 ps, respectively [35]. The cutoff distances were 14 Å for the electrostatic and Lennard-Jones interactions. Long-range electrostatic interactions were treated using the particle-mesh Ewald (PME) summation method [36]. Minimum image conditions were used [9].
B. Rotation and reorientation
The rotational dynamics of a single molecule can be described with the rotational diffusion constant and the reorientation time of the molecule. The rotational diffusion constant $D_{\rm{R}}$ can be obtained from the mean square angular displacement (MSD) of one specified vector of the molecule as in Eq.(1), assuming that the diffusion of the molecule is a stochastic and random motion [37, 38].
$\begin{eqnarray} {D_\textrm{R}} = \mathop {\lim }\limits_{t \to \infty } \frac{1}{{4t}}\left( {\frac{1}{N}\sum\limits_i {{{\left| {{\varphi _i}\left( t \right) - {\varphi _i}\left( 0 \right)} \right|}^2}} } \right) \end{eqnarray}$ (1)
$\begin{eqnarray} {\varphi _i}\left( t \right) = \int_0^t {\delta {\varphi _i}\left( {t'} \right)} {\rm{d}}t' \end{eqnarray}$ (2)
where $\delta {\varphi _i}\left( {t'} \right)$ is the vector angular displacement of N$-$H bond vector $\mathbf{p}_i(t)$ of ammonium $i$ within the interval of [$t'$, $t'$+$\delta t$]. The vector $\delta\varphi_i(t')$ has the direction of $\mathbf{p}_i(t)$$\times$$\mathbf{p}_i(0)$ with the magnitude $\delta\theta$, which is the scanning angle of $\mathbf{p}$ during the time interval. The cumulated angular displacement $\varphi_i(t)$ used here can avoid the limitation of the direct angular difference [$\mathbf{p}_i(t)$$-$$\mathbf{p}_i(0)$] due to the bounded quantity of rotation. The rotational diffusive constants of ammonium ion along N$-$H bond vector and the normal vector of plane defined by two N$-$H bonds were calculated.
The reorientational time of a molecule can also be used to represent its rotational mobility. It can be derived from the reorientation correlation function $C_l(t)$ of the molecule along any specified vector $\textbf{u}$: the O$-$H bond vector for water and the N$-$H bond vector for NH$_4^+$ in this work [24-30, 37, 38].
$\begin{eqnarray} {C_l}\left( t \right) = \left\langle {{P_l}\left[ {{\mathbf{u}}\left( 0 \right){\mathbf{u}}\left( t \right)} \right]} \right\rangle \end{eqnarray}$ (3)
where $P_l$ is the $l$th Legendre polynomial. In this work, the 2nd rank Legendre polynomial $P_2(x) = \frac{1}{2}(3x^2 - 1)$ is used to obtain the rotational correlation time $\tau_2$, which corresponds to the observed quantities of NMR and ultrafast IR spectroscopy measurements [36, 37]. The reorientational time measured by NMR is the integration value of $C_2(t)$. The reorientational time can be derived from the slope of the linear function $\ln[C_2(t)]$ $\propto$ $t$, if $C_2(t)$ follows an exponential decay after a short-time libration, as in previous reports [24-30].
According to the ideal Debye diffusion model, the reorientational time $\tau_l$ can be converted to the rotational diffusion constant $D_\textrm{R}$ through the relationship $C_l(t)$=$\exp[-l(l$+$1)D_\textrm{R}t]$=$\exp(-t/\tau_l)$ in the long-time window [37, 38].
C. The translational diffusion constant
The translational diffusion constant can also be obtained from the time-dependent mean square displacements of the labeled molecule center of mass according to the Einstein relation,
$\begin{eqnarray} {D_t} = \langle | {\mathbf{r}}( t ) - {\mathbf{r}}( 0 )|^2\rangle/6t \end{eqnarray}$ (4)
where $\mathbf{r}(t)$ is the position vector of the molecular center of mass at time $t$, and the averaging is performed over all molecules of the same species [9-11, 37, 38].
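As an illustration of Eq.(4), the translational diffusion constant can be extracted from the long-time slope of an MSD curve; a minimal Python sketch (units of nm$^2$ and ps are assumed, and the data here are synthetic):

# Translational diffusion constant from the MSD slope (Eq. (4): D = slope/6)
import numpy as np

def diffusion_constant(time_ps, msd_nm2, fit_from=0.5):
    # Least-squares fit of the last (1 - fit_from) fraction of the MSD curve
    start = int(len(time_ps) * fit_from)
    slope, _ = np.polyfit(time_ps[start:], msd_nm2[start:], 1)
    return slope / 6.0          # in nm^2/ps; 1 nm^2/ps = 1e-2 cm^2/s

t = np.linspace(0.0, 100.0, 1001)     # ps
msd = 6 * 2.4e-3 * t                  # synthetic MSD with D = 2.4e-3 nm^2/ps (2.4e-5 cm^2/s)
print(diffusion_constant(t, msd))     # ~2.4e-3 nm^2/ps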
D. The extended jump model
Following the Ivanov jump picture, the second-rank reorientational correlation function $C_2(t)$ along the hydrogen-bond-constrained vector of a molecule can be expressed, after a fast libration part, as two independent contributions: the jump correlation function and the frame rotational contribution. The frame rotation is a cooperative rotation mode of the molecule with its hydrogen bond pair intact. If each independent correlation function can be described with a single exponential function, the reorientational correlation time can be obtained from these contributions and follows the relationship of Eq.(5) [24-30]:
$\begin{eqnarray} \frac{1}{\tau _2} = \frac{1}{\tau _{\rm{J}}} + \frac{1}{\tau _{\rm{F}}} \end{eqnarray}$ (5)
where $\tau _{\rm{J}}$ is the jump rotational time and $\tau _{\rm{F}}$ is the frame rotational time. The overall reorientation correlation function is therefore calculated as (Eq.(1)) [24-30],
\begin{align} & {{C}_{2}}\left( t \right)=\left\langle {{P}_{2}}[{{{\vec{u}}}_{\text{OH}}}(0){{{\vec{u}}}_{\text{OH}}}(t)] \right\rangle \\ & =\left\langle {{P}_{2}}[{{{\tilde{u}}}_{\text{OH}}}(0){{{\tilde{u}}}_{\text{OH}}}(t)] \right\rangle \times \left\langle {{P}_{2}}[{{{\vec{u}}}_{\text{OX}}}(0){{{\vec{u}}}_{\text{OX}}}(t)] \right\rangle \\ \end{align} (6)
with $P_2$ as the second order Legendre polynomial, $\vec u_{\rm{OH}}$ being the unit vector of ammonium O$-$H bond within the local hydrogen bond OX frame and $\vec u_{\rm{OX}}$ being the orientation of OX frame. X is an acceptor atom.
The jump rotational relaxation time $\tau_{\rm{J}}$ is the function of the average jump angle $\varphi$ and the hydrogen bond lifetime $\tau_{\rm{HB}}$ (1/$\tau_{\rm{HB}}$ is the frequency of hydrogen bond switching).
$\begin{eqnarray} {\tau _{\rm{J}}} = {\tau _{{\rm{HB}}}}{\left\{ {1 - \frac{1}{{2n + 1}}\frac{{\sin [\left( {n + 1/2} \right)\varphi ]}}{{\sin \left( {\varphi /2} \right)}}} \right\}^{ - 1}} \end{eqnarray}$ (7)
The hydrogen bond lifetime $\tau_{\rm{HB}}$ is obtained by fitting the hydrogen bond correlation function $C_{\rm{HB}}(t)$=$\langle n(0)n(t)\rangle$ with a single exponential function [24]. Here $n(t)$ is an indicator function of the hydrogen bond state: its value is 1 if the ammonium hydrogen atom still uniquely forms a hydrogen bond with the initial acceptor, and it becomes zero once a new hydrogen bond forms.
Additionally, the frame rotational time $\tau_{\rm{F}}$, like the total reorientational time, is treated as a fitting parameter and is derived by fitting the corresponding correlation functions with exponential functions as in previous studies [24, 27]. The hydrogen bond lifetime $\tau_{\rm{HB}}$ is obtained by fitting the hydrogen bond correlation function with a single exponential function within its linear range according to the stable states picture [24]. A strict hydrogen bond criterion is used to ensure that the hydrogen bond switching indeed happens along the molecular dynamics trajectory [24, 29]. For a donor and acceptor pair of water molecules, the hydrogen bond is considered formed if $R_{\rm{O^*-O^a}}$ ($R_{\rm{O^*-O^b}}$) $<$ 3.1 Å, $R_{\rm{H^*-O^a}}$ ($R_{\rm{H^*-O^b}}$) $<$ 2.0 Å and $\theta_{\rm{H^*-O^a-O^b}}$ $<$ 20$^\circ$ (O$^{\rm{a}}$ and O$^{\rm{b}}$: the initial and final hydrogen bond acceptors; H$^*$: the hydrogen bond donating hydrogen). For the hydrogen bond between NH$_4^+$ and water or methanol, the hydrogen bond is considered formed if the conditions $R_{\rm{N^*-O^a}}$ ($R_{\rm{N^*-O^b}}$) $<$ 3.0 Å, $R_{\rm{H^*-O^a}}$ ($R_{\rm{H^*-O^b}}$) $<$ 2.0 Å, and $\theta_{\rm{H^*-N^*-O^a}}$ ($\theta_{\rm{H^*-N^*-O^b}}$) $<$ 20$^\circ$ are satisfied [29].
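As a numerical illustration of Eqs. (5) and (7) (a sketch only, not part of the original analysis; the lifetimes and jump angles are the values reported for the 0.5 mol/L solutions in the next section):

# Jump time from Eq. (7) and overall reorientation time from Eq. (5)
import numpy as np

def tau_jump(tau_hb, phi_deg, n=2):
    # Eq. (7): tau_J = tau_HB / [1 - sin((n+1/2)phi) / ((2n+1) sin(phi/2))]
    phi = np.radians(phi_deg)
    return tau_hb / (1.0 - np.sin((n + 0.5) * phi) / ((2 * n + 1) * np.sin(phi / 2)))

def tau_total(tau_j, tau_f):
    # Eq. (5): 1/tau_2 = 1/tau_J + 1/tau_F
    return 1.0 / (1.0 / tau_j + 1.0 / tau_f)

print(tau_jump(2.70, 73.0))                      # ~2.7 ps (water)
print(tau_jump(9.67, 77.0))                      # ~9.0 ps (methanol)
print(tau_total(tau_jump(2.70, 73.0), 10.35))    # ~2.1 ps overall in water
print(tau_total(tau_jump(9.67, 77.0), 30.64))    # ~7.0 ps overall in methanol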
Ⅲ. RESULTS AND DISCUSSION
The simulated translational diffusion constants of water and methanol in NH$_4$Cl water and methanol solutions at 0.5 mol/L are 2.38$\times$10$^{-5}$ and 2.79 $\times$10$^{-5}$ cm$^2$/s at 298 K and 1 atm with SPC/E water and OPLS methanol non-additive force field. The corresponding experimental values are around 2.3$\times$10$^{-5}$ and 2.4$\times$10$^{-5}$ cm$^2$/s for pure water and methanol. Interaction models used in this work are therefore reasonable in describing the dynamic properties of these systems.
The viscosities of SPC/E water and OPLS methanol are 8.95$\times$10$^{-4}$ and 5.41$\times$10$^{-4}$ Pa$\cdot$s [14, 15]. According to the Stokes-Einstein-Debye relationship, the rotational diffusion constant of ammonium should be smaller in water than in methanol due to the higher viscosity of water. However, the reorientational time of ammonium in water from the nuclear magnetic resonance (NMR) measurements is 0.93 ps, which is much smaller than the value of 3.4 ps in methanol [12, 13]. The reorientational time of ammonium calculated herein is 2.36 ps in 0.5 mol/L (2.43 ps in 1.0 mol/L) aqueous solution and 8.20 ps in 0.5 mol/L (9.88 ps in 1 mol/L) methanol solution, respectively. The calculated rotational diffusion constants of ammonium are 0.14$\times$10$^{12}$ and 0.07$\times$10$^{12}$ rad$^2$/s in water and methanol, which are qualitatively consistent with the experimental values of 0.18$\times$10$^{12}$ and 0.049$\times$10$^{12}$ rad$^2$/s (derived from the relationship $\tau_n = 1/[n(n+1)D_{\rm{R}}]$. Note that the overestimation of the reorientational time of ammonium is probably due to the difference of measurements between the simulation and the NMR technique: the integration value of the reorientational correlation is derived in the latter case, which includes the short-time libration contribution.) [12, 13]. The 1st and 2nd order reorientational correlation functions $C_l(t)$ ($l$=1, 2) of NH$_4^+$ along the N$-$H bond vector and the short-time decays are shown in FIG. 1 (a) and (b). A faster decay is observed for the reorientational correlation function of ammonium in water than in methanol at any order in Eq.(3).
FIG. 1 The 1st and 2nd order reorientational correlation functions $C_l(t)$ ($l$=1, 2) of NH$_4^+$ along the N$-$H bond vector in panel (a), as well as the short-time decays in panel (b).
The rotation of ammonium thus clearly deviates from the Stokes-Einstein-Debye rule in both water and methanol. At the same time, the translational dynamics of dilute ammonium in both solvents is also decoupled from the macroscopic viscosity. The translational diffusion constants of ammonium are 1.62$\times$10$^{-5}$ and 0.99$\times$10$^{-5}$ cm$^2$s$^{-1}$(mol/L)$^{-1}$ respectively in water and in methanol at 0.5 mol/L. In this work, we focus on the rotational mechanism of ammonium in water and methanol.
A decomposition procedure based on the extended cyclic Markov chain model is then carried out to decompose the overall reorientational time of ammonium in water as well as methanol, and explain the aforementioned seemingly abnormal deviation of rotational mobility from the Stokes-Einstein-Debye rule [9-11]. The motions of ammonium in water and in methanol environments are retarded due to the multiple hydrogen bonds formed between the solute and solvent molecules. Similar to the reorientation of water molecules [24-30], ammonium reorientations should mostly happen as a series of thermally activated "jumps" over the potential barriers. We can consider the consecutive jumps as a cyclic Markov chain, with the characteristics of each jump depending only on the momentary local environment. During the time intervals between the jumps, a slow diffusive (so called "frame") reorientation occurs. In this picture, jump and frame components of the ammonium reorientation practically do not depend on each other due to their different timescales.
The trajectory of the ammonium molecule under investigation is dissected into a group of successive hydrogen bond switching events, and a procedure based on the cyclic Markov chain model was used to quantitatively decompose the difference between the overall reorientations in the two solutions into the jump and frame reorientation contributions during each of the switchings. The overall reorientation correlation function can be calculated using Eq.(7) [24-30]. The overall reorientation time $\tau_{\rm{EMJ}}$ is derived from $\tau_{\rm{J}}$ (the jump reorientation time) and $\tau_{\rm{F}}$ (the frame reorientation time) in Eq.(5). The overall reorientation correlation time, according to Eq.(1), is 2.11 and 6.98 ps in water and 0.5 mol/L methanol, respectively. The Markov chain model calculation qualitatively reproduces the values (2.36 and 8.20 ps) calculated directly from fitting the reorientation correlation function of the N$-$H bond along the trajectories of the molecular dynamics simulations. The decomposition analysis of the reorientation correlation function shows that the jump rotation and the frame rotation of ammonium are both retarded in methanol with respect to those in water, while the difference of the overall reorientation is mainly caused by the difference in the jump components. In the following, we analyze in detail the mechanisms of the jump and frame components, respectively.
Jump reorientation is controlled by two main factors (Eq.(6)): the jump angle and the hydrogen bond lifetime. The jump rotation of ammonium happens when ammonium switches its donating hydrogen bond from the initial hydrogen bond acceptor (water O$^{\rm{a}}$ or methanol O$^{\rm{a}}$) to the final hydrogen bond acceptor (water O$^{\rm{b}}$ or methanol O$^{\rm{b}}$) in water or in methanol, respectively. The hydrogen bond switching is a cooperative process, which involves the departure of the initial acceptor O$^{\rm{a}}$ and the approach of the new acceptor O$^{\rm{b}}$. The switching event can therefore be described with the reaction coordinates: the distance between ammonium N$^*$ and O$^{\rm{a}}$ ($R_{\rm{N^*-O^a}}$), the distance between N$^*$ and O$^{\rm{b}}$ ($R_{\rm{N^*-O^b}}$), and the angle ($\theta$) between the projection of the N$^*$$-$H$^*$ bond vector onto the plane of O$^{\rm{a}}$$-$N$^*$$-$O$^{\rm{b}}$ and the bisector of O$^{\rm{a}}$$-$N$^*$$-$O$^{\rm{b}}$ [24, 27, 29]. The evolutions of these values along the hydrogen bond switching time window are shown in FIG. 2. The time origin is set to the bifurcated hydrogen bond state of N$^*$$-$H$^*$ with O$^{\rm{a}}$ and O$^{\rm{b}}$. FIG. 2 indicates that, during the hydrogen bond switching, the angle of O$^{\rm{a}}$$-$N$^*$$-$O$^{\rm{b}}$ nearly does not change, while a sudden, large-amplitude angular rotation of N$^*$$-$H$^*$ is observed around $t$=0 (73$^\circ$ and 77$^\circ$ in water and methanol, respectively). The jump angle therefore has a minor influence on the difference of the jump rotational time (Eq.(6)) between these two solvents.
FIG. 2 The reaction coordinates $R_{\rm{N^*-O^a}}$, $R_{\rm{N^*-O^b}}$, $\theta$ and $\varphi$ along the hydrogen bond exchange path of the ammonium ion in water and methanol.
The populations of the jump angle in water and methanol at 0.5 mol/L are shown in FIG. 3. The population curve of the jump angle for ammonium (P$_{\rm{MM}}^{\rm{A}}$ in FIG. 3(a)) presents near symmetry around the average value, so the average value is used for the calculation of the ammonium jump rotational time in Eq.(5). Note that the jump angle distribution is related to the structure of the transition state during hydrogen bond switching. For instance, for the hydrogen bond switching of a water hydrogen from an initial acceptor water to a new acceptor water in the current solution, the jump angle distribution for water strongly deviates from the symmetry seen in pure water [24]. For the hydrogen bond switching of water with one hydrogen bond between the initial and final HB acceptors at the transition state, the jump angle is about 50$^\circ$ (P$_{\rm{WW}}^{\rm{W1}}$ in FIG. 3(b)) [29]. On the other hand, the jump angle is about 70$^\circ$ for the hydrogen bond switching of water without a hydrogen bond between the initial and final HB acceptors at the transition state (P$_{\rm{WW}}^{\rm{W2}}$). The corresponding population curves P$_{\rm{MM}}^{\rm{M}}$, P$_{\rm{WW}}^{\rm{M1}}$, and P$_{\rm{WW}}^{\rm{M2}}$ for methanol switching its donating hydrogen bond partners from methanol to methanol are also shown in FIG. 3(b) and are similar to water. However, one hydrogen atom of ammonium usually switches its hydrogen bonds between two acceptors without a hydrogen bond between the two hydrogen bond acceptor molecules at the transition state of the switching: P$_{\rm{WW}}^{\rm{A2}}$ is the dominant contribution to P$_{\rm{MM}}^{\rm{A}}$ in FIG. 3(a). Hydrogen bond switching with a jump angle of 50$^\circ$ (P$_{\rm{MM}}^{\rm{A1}}$), as in P$_{\rm{WW}}^{\rm{M1}}$ and P$_{\rm{WW}}^{\rm{W1}}$, is a rare event.
FIG. 3 (a) Jump angle populations of ammonium in water (P$_{\rm{WW}}^{\rm{A}}$) during the hydrogen bond switching from water to water and in methanol from methanol to methanol (P$_{\rm{MM}}^{\rm{A}}$), and the contributions from the populations (P$_{\rm{WW}}^{\rm{A1}}$) with a hydrogen bond between the initial and final HB acceptors at the transition state and from the remaining part without a hydrogen bond between the initial and final HB acceptors at the transition state (P$_{\rm{WW}}^{\rm{A2}}$). (b) Jump angle populations of water during the hydrogen bond switching from water to water (P$_{\rm{MM}}^{\rm{W}}$) and jump angle populations of methanol during the hydrogen bond switching from methanol to methanol (P$_{\rm{MM}}^{\rm{M}}$), as well as the contributions from the populations (P$_{\rm{WW}}^{\rm{W1}}$ and P$_{\rm{WW}}^{\rm{M1}}$) with a hydrogen bond between the initial and final HB acceptors at the transition state and from the remaining part without a hydrogen bond between the initial and final HB acceptors at the transition state (P$_{\rm{WW}}^{\rm{W2}}$ and P$_{\rm{WW}}^{\rm{M2}}$).
The hydrogen bond lifetime is another factor determining the jump rotational time of ammonium (Eqs. (6) and (7)). The multiple hydrogen bonds between the ammonium and water were observed by X-ray and vibrational spectroscopy [39]. Our simulations show that 90% and 94% of ammonium forms four donating hydrogen bonds in water and in methanol, respectively. This means that the tetrahedral hydrogen bonding structure of ammonium is the most popular in both water and methanol. The hydrogen bond lifetimes of ammonium-water and ammonium-methanol are obtained by fitting the hydrogen bond correlation functions shown in FIG. 4. A good linear behavior is presented for lg[$C_{\rm{HB}}(t)$] as a function of $t$ (FIG. 4). Their hydrogen bond lifetimes are 2.70 and 9.67 ps. As a result, the jump rotational time of ammonium in water and in 0.5 mol/L methanol is 2.66 and 9.04 ps, respectively. The hydrogen bond lifetime in methanol is more than three times that in water, and this difference drives the difference in the jump rotational time. The jump rotation of ammonium is heavily retarded by the methanol molecules relative to solvent water.
FIG. 4 The hydrogen bond correlation functions of NH$_4^+$-water and NH$_4^+$-methanol, $C_{\rm{HB}}$, and their frame rotational correlation functions.
The dramatic difference of the hydrogen bond lifetime in water and in methanol is induced by the excluded volume effect of the methyl group, which dilutes the density of hydrogen bond acceptors in methanol. The number density of methanol oxygen (the number of oxygen atoms divided by the volume of the simulation box) is only 14.3 nm$^{-3}$, which is much lower than that of water, 32.6 nm$^{-3}$. The decrement of the available hydrogen bond density reduces the rate of the hydrogen bond switching, so that the jump rotation of ammonium slows down. The frame rotational time of ammonium is derived by fitting the frame rotational correlation function $C_{\rm{f}}(t)$ with a single exponential function at long time scales. The frame rotational correlation functions $C_{\rm{f}}(t)$ are shown in FIG. 4. The frame rotational time of ammonium is 10.35 and 30.64 ps in water and in methanol, respectively. The frame rotation is coupled with the global structure relaxation in electrolyte solutions and water-organic solvent mixtures, and it has the same tendency with concentration as the viscosity. However, the frame rotation in less viscous methanol is slower than in water.
According to the Stokes-Einstein-Debye relationship, the rotational mobility is inversely related to the viscosity of the solvent and to the effective radius of the rotating species. The viscosity of methanol is smaller than that of water, so a possible reason for the slow frame diffusive reorientation of ammonium in methanol is that the effective radius of ammonium is larger in methanol than in water. Ammonium forms the tetrahedral hydrogen bond structure with both methanol and water in its first solvation shell, and since a methanol molecule is larger than a water molecule, the effective radius of the solvated ammonium in methanol is larger than that in water.
Ⅳ. CONCLUSION
The rotational mechanism of the ammonium ion is explored by molecular dynamics simulations and the extended jump model. The reorientational correlation time can be well described with the EJM decomposition protocol, as for water in aqueous solutions. The faster rotation of NH$_4$$^+$ in water than in methanol observed in previous experiments is mainly due to the retardation of the jump rotation of ammonium in methanol: the dilution effect of the methanol methyl group reduces the density of available hydrogen bond acceptors and thereby retards the jump rotation of NH$_4$$^+$ driven by hydrogen bond switchings. The slow frame rotation of ammonium in methanol is due to the larger size of the solvated ammonium in methanol than in water, a solvation size that is closely related to the formation of the tetrahedral hydrogen bond structure in both solvents. For future work, it would also be interesting to explore the relationship between these microscopic processes and the macroscopic shear viscosity, to shed further insight on the dynamics of ammonium in various solvents.
Ⅴ. ACKNOWLEDGEMENTS
This work was supported by the National Key Research and Development Program of China (2017YFA0206801), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB20000000 and XDB10040304), and the National Natural Science Foundation of China (No.21373201 and No.21433014).
References
[1] O. Ninnemann, J. C. Jauniaux, and W. B. Frommer, EMBO J. 13, 3464 (1994).
[2] U. Ludewig, N. von Wirén, and W. B. Frommer, J. Biol. Chem. 277, 13548 (2002). DOI:10.1074/jbc.M200739200
[3] P. E. Mason, J. Heyda, H. E. Fischer, and P. Jungwirth, J. Phys. Chem. B 114, 13853 (2010). DOI:10.1021/jp104840g
[4] S. Wang, E. A. Orabi, S. Baday, S. Bernèche, and G. Lamoureux, J. Am. Chem. Soc. 134, 10419 (2012). DOI:10.1021/ja300129x
[5] I. Mouro-Chanteloup, S. Cochet, M. Chami, S. Genetet, N. Zidi-Yahiaoui, A. Engel, Y. Colin, O. Bertrand, and P. Ripoche, PLoS One 5, e8921 (2010). DOI:10.1371/journal.pone.0008921
[6] I. D. Weiner, and J. W. Verlander, Am. J. Physiol. Renal Physiol. 306, F1107 (2014).
[7] R. Moberg, F. Bokman, O. Bohman, and H. O. G. Siegbahn, J. Am. Chem. Soc. 113, 3663 (1991). DOI:10.1021/ja00010a005
[8] T. L. Anderson, A. J. Charlson, S. E. Schwartz, R. Knutti, O. Boucher, H. Rodhe, and J. Heintzenberg, Science 300, 1103 (2003). DOI:10.1126/science.1084777
[9] M. P. Allen, and D. J. Tildesley, Computer Simulation of Liquids. New York: Oxford University Press (1987).
[10] P. Debye, Polar Molecules. New York: Dover (1945).
[11] J. P. Hansen, and I. R. McDonald, Theory of Simple Liquids. London: Academic (1986).
[12] C. L. Perrin, and R. K. Gipe, Science 238, 1393 (1987). DOI:10.1126/science.238.4832.1393
[13] Y. Masuda, J. Phys. Chem. A 105, 2989 (2001). DOI:10.1021/jp003300b
[14] Y. Masuda, J. G. Guevara-Carrion, J. Vrabec, and H. Hasse, J. Chem. Phys. 134, 074508 (2011). DOI:10.1063/1.3515262
[15] S. Pařez, and M. Předota, Phys. Chem. Chem. Phys. 14, 3640 (2012). DOI:10.1039/c2cp22136e
[16] O. A. Karim, and A. D. J. Haymet, J. Chem. Phys. 93, 5961 (1990). DOI:10.1063/1.459479
[17] (a) T. Chang and L. X. Dang, J. Chem. Phys. 118, 8813 (2003). (b) L. X. Dang, Chem. Phys. Lett. 213, 541 (1993).
[18] G. Szasz, W. O. Riede, and K. Heinzinger, Z. Naturforsch. A 34, 1083 (1979).
[19] W. L. Jorgensen, and J. Gao, J. Phys. Chem. 90, 2174 (1986). DOI:10.1021/j100401a037
[20] K. P. Jensen, and W. L. Jorgensen, J. Chem. Theory Comput. 2, 1499 (2006). DOI:10.1021/ct600252r
[21] (a) F. Bruge, M. Bernasconi, and M. Parrinello, J. Am. Chem. Soc. 121, 10883 (1999). (b) F. Bruge, M. Bernasconi, and M. Parrinello, J. Chem. Phys. 110, 4734 (1999).
[22] E. Kassab, E. M. Evleth, and Z. D. Hamou-Tahra, J. Am. Chem. Soc. 112, 103 (1990). DOI:10.1021/ja00157a016
[23] W. I. Babiaczyk, S. Bonella, L. Guidoni, and G. Ciccotti, J. Phys. Chem. B 114, 15018 (2010). DOI:10.1021/jp106282w
[24] (a) D. Laage and J. T. Hynes, Science 311, 832 (2006). (b) D. Laage and J. T. Hynes, J. Phys. Chem. B 112, 14230 (2008).
[25] D. Laage, G. Stirnemann, F. Sterpone, R. Rey, and J. T. Hynes, Annu. Rev. Phys. Chem. 62, 395 (2011). DOI:10.1146/annurev.physchem.012809.103503
[26] D. Laage, T. Elsaesser, and J. T. Hynes, Chem. Rev. 117, 10694 (2017). DOI:10.1021/acs.chemrev.6b00765
[27] (a) Q. Zhang, T. Wu, C. Chen, S. Mukamel, and W. Zhuang, Proc. Natl. Acad. Sci. USA 114, 10023 (2017). (b) Q. Zhang, H. Chen, T. Wu, T. Jin, R. Zhang, Z. Pan, J. Zheng, Y. Gao, and W. Zhuang, Chem. Sci. 8, 1429 (2017).
[28] Q. Zhang, Z. Pan, L. Zhang, R. Zhang, Z. Chen, T. Jin, T. Wu, X. Chen, and W. Zhuang, WIREs Comput. Mol. Sci. (2018). DOI:10.1002/wcms.1373
[29] (a) Q. Zhang, C. Cheng, X. Zhang, and D. Zhao, Acta Phys. Chim. Sin. 31, 1461 (2015). (b) X. Zhang, Q. Zhang, and D. Zhao, Acta Phys. Chim. Sin. 27, 2547 (2011).
[30] G. Stirnemann, E. Wernersson, P. Jungwirth, and D. Laage, J. Am. Chem. Soc. 135, 11824 (2013). DOI:10.1021/ja405201s
[31] H. J. C. Berendsen, J. R. Grigera, and T. P. Straatsma, J. Phys. Chem. 91, 6269 (1987). DOI:10.1021/j100308a038
[32] W. L. Jorgensen, and J. Gao, J. Phys. Chem. 90, 2174 (1986). DOI:10.1021/j100401a037
[33] S. K. Pattanayak, and S. Chowdhuri, J. Mol. Liq. 186, 98 (2013). DOI:10.1016/j.molliq.2013.05.010
[34] B. Hess, C. Kutzner, D. van der Spoel, and E. Lindahl, J. Chem. Theory Comput. 4, 435 (2008). DOI:10.1021/ct700301q
[35] H. J. C. Berendsen, J. P. M. Postma, W. F. van Gunsteren, A. DiNola, and J. R. Haak, J. Chem. Phys. 81, 3684 (1984).
[36] T. Darden, D. York, and L. Pedersen, J. Chem. Phys. 98, 10089 (1993). DOI:10.1063/1.464397
[37] M. G. Mazza, N. Giovambattista, F. W. Starr, and H. E. Stanley, Phys. Rev. Lett. 96, 057803 (2006). DOI:10.1103/PhysRevLett.96.057803
[38] S. Kammerer, W. Kob, and R. Schilling, Phys. Rev. E 56, 5450 (1997). DOI:10.1103/PhysRevE.56.5450
[39] M. Ekimova, W. Quevedo, L. Szyc, M. Iannuzzi, P. Wernet, M. Odelius, and E. T. J. Nibbering, J. Am. Chem. Soc. 139, 12773 (2017). DOI:10.1021/jacs.7b07207
a. College of Chemistry and Chemical Engineering, Bohai University, Jinzhou 121013, China;
b. State Key Laboratory of Structural Chemistry, Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, Fuzhou 350002, China
|
# 6.1E: Exercises
## Identify Polynomials, Monomials, Binomials, and Trinomials
In the following exercises, determine if each of the following polynomials is a monomial, binomial, trinomial, or other polynomial.
##### Exercise 1
1. $$81b^5−24b^3+1$$
2. $$5c^3+11c^2−c−8$$
3. $$\frac{14}{15}y+\frac{1}{7}$$
4. $$5$$
5. $$4y+17$$
1. trinomial
2. polynomial
3. binomial
4. monomial
5. binomial
##### Exercise 2
1. $$x^2−y^2$$
2. $$−13c^4$$
3. $$x^2+5x−7$$
4. $$x^{2}y^2−2xy+8$$
5. $$19$$
##### Exercise 3
1. $$8−3x$$
2. $$z^2−5z−6$$
3. $$y^3−8y^2+2y−16$$
4. $$81b^5−24b^3+1$$
5. $$−18$$
1. binomial
2. trinomial
3. polynomial
4. trinomial
5. monomial
##### Exercise 4
1. $$11y^2$$
2. $$−73$$
3. $$6x^2−3xy+4x−2y+y^2$$
4. $$4y+17$$
5. $$5c^3+11c^2−c−8$$
## Determine the Degree of Polynomials
In the following exercises, determine the degree of each polynomial.
##### Exercise 5
1. $$6a^2+12a+14$$
2. $$18xy^{2}z$$
3. $$5x+2$$
4. $$y^3−8y^2+2y−16$$
5. $$−24$$
1. 2
2. 4
3. 1
4. 3
5. 0
##### Exercise 6
1. $$9y^3−10y^2+2y−6$$
2. $$−12p^4$$
3. $$a^2+9a+18$$
4. $$20x^{2}y^2−10a^{2}b^2+30$$
5. $$17$$
##### Exercise 7
1. $$14−29x$$
2. $$z^2−5z−6$$
3. $$y^3−8y^2+2y−16$$
4. $$23ab^2−14$$
5. $$−3$$
1. 1
2. 2
3. 3
4. 3
5. 0
##### Exercise 8
1. $$62y^2$$
2. $$15$$
3. $$6x^2−3xy+4x−2y+y^2$$
4. $$10−9x$$
5. $$m^4+4m^3+6m^2+4m+1$$
In the following exercises, add or subtract the monomials.
##### Exercise 9
$$7x^2+5x^2$$
$$12x^2$$
##### Exercise 10
$$4y^3+6y^3$$
##### Exercise 11
$$−12w+18w$$
$$6w$$
##### Exercise 12
$$−3m+9m$$
##### Exercise 13
$$4a−9a$$
$$−5a$$
##### Exercise 14
$$−y−5y$$
##### Exercise 15
$$28x−(−12x)$$
$$40x$$
##### Exercise 16
$$13z−(−4z)$$
##### Exercise 17
$$−5b−17b$$
$$−22b$$
##### Exercise 18
$$−10x−35x$$
##### Exercise 19
$$12a+5b−22a$$
$$−10a+5b$$
##### Exercise 20
$$14x−3y−13x$$
##### Exercise 21
$$2a^2+b^2−6a^2$$
$$−4a^2+b^2$$
##### Exercise 22
$$5u^2+4v^2−6u^2$$
##### Exercise 23
$$xy^2−5x−5y^2$$
$$xy^2−5x−5y^2$$
##### Exercise 24
$$pq^2−4p−3q^2$$
##### Exercise 25
$$a^{2}b−4a−5ab^2$$
$$a^{2}b−4a−5ab^2$$
##### Exercise 26
$$x^{2}y−3x+7xy^2$$
##### Exercise 27
$$12a+8b$$
$$12a+8b$$
##### Exercise 28
$$19y+5z$$
##### Exercise 29
Add: $$4a,\,−3b,\,−8a$$
$$−4a−3b$$
##### Exercise 30
Add: $$4x,\,3y,\,−3x$$
##### Exercise 31
Subtract $$5x^6$$ from $$−12x^6$$
$$−17x^6$$
##### Exercise 32
Subtract $$2p^4$$ from $$−7p^4$$
In the following exercises, add or subtract the polynomials.
##### Exercise 33
$$(5y^2+12y+4)+(6y^2−8y+7)$$
$$11y^2+4y+11$$
##### Exercise 34
$$(4y^2+10y+3)+(8y^2−6y+5)$$
##### Exercise 35
$$(x^2+6x+8)+(−4x^2+11x−9)$$
$$−3x^2+17x−1$$
##### Exercise 36
$$(y^2+9y+4)+(−2y^2−5y−1)$$
##### Exercise 37
$$(8x^2−5x+2)+(3x^2+3)$$
$$11x^2−5x+5$$
##### Exercise 38
$$(7x^2−9x+2)+(6x^2−4)$$
##### Exercise 39
$$(5a^2+8)+(a^2−4a−9)$$
$$6a^2−4a−1$$
##### Exercise 40
$$(p^2−6p−18)+(2p^2+11)$$
##### Exercise 41
$$(4m^2−6m−3)−(2m^2+m−7)$$
$$2m^2−7m+4$$
##### Exercise 42
$$(3b^2−4b+1)−(5b^2−b−2)$$
##### Exercise 43
$$(a^2+8a+5)−(a^2−3a+2)$$
$$11a+3$$
##### Exercise 44
$$(b^2−7b+5)−(b^2−2b+9)$$
##### Exercise 45
$$(12s^2−15s)−(s−9)$$
$$12s^2−16s+9$$
##### Exercise 46
$$(10r^2−20r)−(r−8)$$
##### Exercise 47
Subtract $$(9x^2+2)$$ from $$(12x^2−x+6)$$
$$3x^2−x+4$$
##### Exercise 48
Subtract $$(5y^2−y+12)$$ from $$(10y^2−8y−20)$$
##### Exercise 49
Subtract $$(7w^2−4w+2)$$ from $$(8w^2−w+6)$$
$$w^2+3w+4$$
##### Exercise 50
Subtract $$(5x^2−x+12)$$ from $$(9x^2−6x−20)$$
##### Exercise 51
Find the sum of $$(2p^3−8)$$ and $$(p^2+9p+18)$$
$$2p^3+p^2+9p+10$$
##### Exercise 52
Find the sum of
$$(q^2+4q+13)$$ and $$(7q^3−3)$$
##### Exercise 53
Find the sum of $$(8a^3−8a)$$ and $$(a^2+6a+12)$$
$$8a^3+a^2−2a+12$$
##### Exercise 54
Find the sum of
$$(b^2+5b+13)$$ and $$(4b^3−6)$$
##### Exercise 55
Find the difference of
$$(w^2+w−42)$$ and
$$(w^2−10w+24)$$.
$$11w−66$$
##### Exercise 56
Find the difference of
$$(z^2−3z−18)$$ and
$$(z^2+5z−20)$$
##### Exercise 57
Find the difference of
$$(c^2+4c−33)$$ and
$$(c^2−8c+12)$$
$$12c−45$$
##### Exercise 58
Find the difference of
$$(t^2−5t−15)$$ and
$$(t^2+4t−17)$$
##### Exercise 59
$$(7x^2−2xy+6y^2)+(3x^2−5xy)$$
$$10x^2−7xy+6y^2$$
##### Exercise 60
$$(−5x^2−4xy−3y^2)+(2x^2−7xy)$$
##### Exercise 61
$$(7m^2+mn−8n^2)+(3m^2+2mn)$$
$$10m^2+3mn−8n^2$$
##### Exercise 62
$$(2r^2−3rs−2s^2)+(5r^2−3rs)$$
##### Exercise 63
$$(a^2−b^2)−(a^2+3ab−4b^2)$$
$$−3ab+3b^2$$
##### Exercise 64
$$(m^2+2n^2)−(m^2−8mn−n^2)$$
##### Exercise 65
$$(u^2−v^2)−(u^2−4uv−3v^2)$$
$$4uv+2v^2$$
##### Exercise 66
$$(j^2−k^2)−(j^2−8jk−5k^2)$$
##### Exercise 67
$$(p^3−3p^{2}q)+(2pq^2+4q^3) −(3p^{2}q+pq^2)$$
$$p^3−6p^{2}q+pq^2+4q^3$$
##### Exercise 68
$$(a^3−2a^{2}b)+(ab^2+b^3)−(3a^{2}b+4ab^2)$$
##### Exercise 69
$$(x^3−x^{2}y)−(4xy^2−y^3)+(3x^{2}y−xy^2)$$
$$x^3+2x^{2}y−5xy^2+y^3$$
##### Exercise 70
$$(x^3−2x^{2}y)−(xy^2−3y^3)−(x^{2}y−4xy^2)$$
## Evaluate a Polynomial for a Given Value
In the following exercises, evaluate each polynomial for the given value.
##### Exercise 71
Evaluate $$8y^2−3y+2$$ when:
1. $$y=5$$
2. $$y=−2$$
3. $$y=0$$
1. $$187$$
2. $$40$$
3. $$2$$
##### Exercise 72
Evaluate $$5y^2−y−7$$ when:
1. $$y=−4$$
2. $$y=1$$
3. $$y=0$$
##### Exercise 73
Evaluate $$4−36x$$ when:
1. $$x=3$$
2. $$x=0$$
3. $$x=−1$$
1. $$−104$$
2. $$4$$
3. $$40$$
##### Exercise 74
Evaluate $$16−36x^2$$ when:
1. $$x=−1$$
2. $$x=0$$
3. $$x=2$$
##### Exercise 75
A painter drops a brush from a platform $$75$$ feet high. The polynomial $$−16t^2+75$$ gives the height of the brush $$t$$ seconds after it was dropped. Find the height after $$t=2$$ seconds.
$$11$$
##### Exercise 76
A girl drops a ball off a cliff into the ocean. The polynomial $$−16t^2+250$$ gives the height of a ball $$t$$ seconds after it is dropped from a 250-foot tall cliff. Find the height after $$t=2$$ seconds.
##### Exercise 77
A manufacturer of stereo sound speakers has found that the revenue received from selling the speakers at a cost of $$p$$ dollars each is given by the polynomial $$−4p^2+420p$$. Find the revenue received when $$p=60$$ dollars.
$$10,800$$
##### Exercise 78
A manufacturer of the latest basketball shoes has found that the revenue received from selling the shoes at a cost of $$p$$ dollars each is given by the polynomial $$−4p^2+420p$$. Find the revenue received when $$p=90$$ dollars.
## Everyday Math
##### Exercise 79
Fuel Efficiency The fuel efficiency (in miles per gallon) of a car going at a speed of $$x$$ miles per hour is given by the polynomial $$−\frac{1}{150}x^2+\frac{1}{3}x$$. Find the fuel efficiency when $$x=30$$ mph.
$$4$$
##### Exercise 80
Stopping Distance The number of feet it takes for a car traveling at $$x$$ miles per hour to stop on dry, level concrete is given by the polynomial $$0.06x^2+1.1x$$. Find the stopping distance when $$x=40$$ mph.
##### Exercise 81
Rental Cost The cost to rent a rug cleaner for $$d$$ days is given by the polynomial $$5.50d+25$$. Find the cost to rent the cleaner for $$6$$ days.
$$58$$
##### Exercise 82
Height of Projectile The height (in feet) of an object projected upward is given by the polynomial $$−16t^2+60t+90$$ where $$t$$ represents time in seconds. Find the height after $$t=2.5$$ seconds.
##### Exercise 83
Temperature Conversion The temperature in degrees Fahrenheit is given by the polynomial $$\frac{9}{5}c+32$$ where $$c$$ represents the temperature in degrees Celsius. Find the temperature in degrees Fahrenheit when $$c=65°$$.
$$149°$$ F
## Writing Exercises
##### Exercise 84
Using your own words, explain the difference between a monomial, a binomial, and a trinomial.
##### Exercise 85
Using your own words, explain the difference between a polynomial with five terms and a polynomial with a degree of 5.
##### Exercise 86
Ariana thinks the sum $$6y^2+5y^4$$ is $$11y^6$$. What is wrong with her reasoning?
##### Exercise 87
Jonathan thinks that $$\frac{1}{3}$$ and $$\frac{1}{x}$$ are both monomials. What is wrong with his reasoning?
## Self Check
a. After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section.
b. If most of your checks were:
…confidently. Congratulations! You have achieved the objectives in this section. Reflect on the study skills you used so that you can continue to use them. What did you do to become confident of your ability to do these things? Be specific.
…with some help. This must be addressed quickly because topics you do not master become potholes in your road to success. In math every topic builds upon previous work. It is important to make sure you have a strong foundation before you move on. Who can you ask for help? Your fellow classmates and instructor are good resources. Is there a place on campus where math tutors are available? Can your study skills be improved?
…no - I don’t get it! This is a warning sign and you must not ignore it. You should get help right away or you will quickly be overwhelmed. See your instructor as soon as you can to discuss your situation. Together you can come up with a plan to get you the help you need.
This page titled 6.1E: Exercises is shared under a CC BY license and was authored, remixed, and/or curated by OpenStax.
|
# How to interpret Box-Cox Transformation [duplicate]
I have used a Box-Behnken experimental design. I have a full quadratic model.
However, I had to transform the response, $Y$ for the model to fit; I did this using a Box-Cox transformation with $\lambda=0.5$.
For example, one of the regressions is like this:
$$Y = 1.28 - 0.008X_1-0.025X_2-0.05X_3-0.13X_1^2-0.01X_2^2-0.006X_3^2\\ +0.02X_1X_2-0.05X_1X_3+0.09X_2X_3$$
How do I interpret the terms?
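For reference, the interpretation depends on which form of the $\lambda=0.5$ transform the software fit (this is an assumption about the software, so check its documentation): either the scaled Box-Cox form $Y^{(\lambda)}=\frac{Y^{\lambda}-1}{\lambda}=2(\sqrt{Y}-1)$ or simply $Y^{(\lambda)}=\sqrt{Y}$. In both cases the coefficients describe linear effects on the transformed scale, not on $Y$ itself, and a fitted value is returned to the original scale by inverting the transform, $\hat{Y}=\left(\tfrac{\hat{Y}^{(\lambda)}}{2}+1\right)^{2}$ or $\hat{Y}=\bigl(\hat{Y}^{(\lambda)}\bigr)^{2}$, respectively.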
|
Why doesn't FindRoot work correctly?
I'm trying to find the roots of the following equation:
I need to find λs for different values of ξ. I know that for all values of ξ, I must have n<λ<n+1/2. I use FindRoot to do this. When I put ξ^2=2, I can find all the correct roots. For example, for the first 5 roots, I have
FindRoot[x - Pi*Cot[Pi*x] == 0, {x, 0.5}]
(* Out[173]= {x -> 0.454288} *)
FindRoot[x - Pi*Cot[Pi*x] == 0, {x, 1.5}]
(* Out[174]= {x -> 1.36917} *)
FindRoot[x - Pi*Cot[Pi*x] == 0, {x, 2.5}]
(* Out[175]= {x -> 2.29891} *)
FindRoot[x - Pi*Cot[Pi*x] == 0, {x, 3.5}]
(* Out[176]= {x -> 3.24485} *)
FindRoot[x - Pi*Cot[Pi*x] == 0, {x, 4.5}]
(* Out[177]= {x -> 4.20427} *)
Comparing with the following plot, these roots are correct (The roots are where the red line is coinciding with the green curve):
But when I put some different value for ξ^2, for example ξ^2=0.5 (which is a value which I need to use in my calculations), FindRoot doesn't give me the correct roots anymore:
FindRoot[x - 0.25*Pi*Cot[Pi*x] == 0, {x, 0.5}]
(* Out[191]= {x -> 0.362393} *)
FindRoot[x - 0.25*Pi*Cot[Pi*x] == 0, {x, 1.5}]
(* Out[192]= {x -> 1.18617} *)
FindRoot[x - 0.25*Pi*Cot[Pi*x] == 0, {x, 2.5}]
(* Out[193]= {x -> 2.11326} *)
FindRoot[x - 0.25*Pi*Cot[Pi*x] == 0, {x, 3.5}]
(* Out[194]= {x -> 2.11326} *)
FindRoot[x - 0.25*Pi*Cot[Pi*x] == 0, {x, 4.5}]
(* Out[195]= {x -> 3.07949} *)
FindRoot[x - 0.25*Pi*Cot[Pi*x] == 0, {x, 5.5}]
(* Out[196]= {x -> 5.04912} *)
Which are not the correct roots (specially the 4th and 5th ones) compared to this plot:
Why doesn't FindRoot work correctly? What am I doing wrong?
• It's finding a root, which is all it is asked to do. Starting a bit closer might give the root that you want. Try changing your .5 to x.1. – wxffles Jun 26 '13 at 22:01
• Indeed, put x - 0.25*Pi*Cot[Pi*x] /. in front of each FindRoot and you'll see it has found a correct root. – Sjoerd C. de Vries Jun 26 '13 at 22:02
• @Sjoerd C. de Vries, Thanks a lot for your answer. But I didn't understand what I should do. Where should I put x - 0.25*Pi*Cot[Pi*x]? – ZKT Jun 26 '13 at 22:06
• x - 0.25*Pi*Cot[Pi*x] is the expression that you want to set to zero for a certain x. FindRoot finds this x. If you now write x - 0.25*Pi*Cot[Pi*x] /. FindRoot[...] the found value of x is filled in in the expression and you see that it is indeed (close to) zero. – Sjoerd C. de Vries Jun 27 '13 at 13:30
You can use NSolve with a condition, instead. See:
eq = x - (Pi/2)*0.5*Cot[Pi*x] == 0
NSolve[{eq, 3.5 < x < 4.5}, x][[1]]
The output being
{x -> 4.06081}
which correctly falls between 3.5 and 4.5.
• This looks only good at the first glance because it doesn't find the root at 17.0059 when you use 16.5<x<17.5 and xi is 0.1. On the other hand I have to admit, that NSolve finds most of the roots. I wonder whether one can instruct FindRoot to use the same method, because besically NSolve only knows the interval boundaries. The same is possible with FindRoot but it doesn't seem to work reliable with this. – halirutan Jun 27 '13 at 20:31
• But if you change the lower bound a bit, it gives the corretc result: NSolve[{x - (Pi)*0.1*Cot[Pi*x] == 0, 16.4 < x < 17.5}, x], the output being {x -> 17.0059}. – fcpenha Jun 27 '13 at 21:27
• The problem is, that you maybe don't want to fix all those things when the OP wants to extract all roots in the Range[0.001,1.,0.001] as stated in one of his/her comments. – halirutan Jun 27 '13 at 21:29
• Indeed. In this case I would subtract a small number of the lower bound for each computation. But probably this fix is specific to this function. I wouldn't know what to do in a general situation. – fcpenha Jun 27 '13 at 21:35
• @fcpenha Thanks a lot for your answer, it indeed solved my problem :). – ZKT Jun 27 '13 at 22:33
I see two potential pitfalls in your approach. First, you need to understand that FindRoot is a numerical procedure which starts at a certain point and tries to find a root by approximating the gradient and moving towards it$^{1}$. Therefore, different starting points might lead to different roots, although you probably would expect they give the same root.
Therefore, it can happen if you choose the start point badly, you end up with a completely unexpected root since maybe the gradient was too large and introduced a very big first step:
FindRoot[x - 0.25 Pi*Cot[Pi*x] == 0, {x, 4.99}]
(* {x -> 2.11326} *)
Second, when you are completely out of luck, then it might happen you choose a starting point which is a singularity. Although you get a warning, you might over-read this and use this wrong root:
FindRoot[x - 0.25 Pi*Cot[Pi*x] == 0, {x, 5.0}]
(* {x -> 5.} *)
This is why I suggest for your example a different approach. You could extract all roots in a certain interval and delete those, which are singularities. This is in your case easy, because you used Cot and the singularities are the roots where Sin[Pi x] is zero. After you collected all roots, you can use Nearest to create a function which indeed gives always the nearest root to your input:
With[{roots =
Quiet[Select[
Union[x /. (FindRoot[x - 0.25 Pi*Cot[Pi*x] == 0, {x, #1}] & /@
Range[0.1, 10, .1])], Abs[Sin[Pi #]] > 10.0^-5 &]]},
giveRootAt[x_?NumericQ] := First[Nearest[roots, x]]
]
A quick check reveals that you get the correct roots now:
{#,giveRootAt[#]}&/@Range[.5,5.5]
(*
{{0.5,0.362393},{1.5,1.18617},{2.5,2.11326},
{3.5,3.07949},{4.5,4.06081},{5.5,5.04912}}
*)
Footnote 1: This is not entirely correct. FindRoot has several modes and options where you can influence the behavior. See the reference page for more information.
Beware that for very small $\xi$ it's getting harder to extract the roots reliably. Therefore, I will work with a minimal $\xi$ of 0.1. What you can do is to use the above method and create several giveRootAt functions. I will give for illustration purposes a slightly different version which defines several giveAllRoots function but the definition of giveRootAt is equivalent. The main difference in the code below is, that the whole definition is now a Function which takes $\xi$ as parameter. With this, we can map it over the list of $\xi$ values you want:
Function[xi,
With[{roots =
Quiet[x /. (FindRoot[x - xi*Pi*Cot[Pi*x] == 0, {x, #}] & /@ Range[0.1, 20, xi/2])]},
giveAllRoots[xi] = Select[Union[roots], Chop[Sin[Pi #]] != 0 &]
]
] /@ Range[0.1, 1.0, 0.001];
Now you have a function for which gives you the roots for different values of $\xi$ ranging from 0.1 to 1.0 in 0.001 steps.
To test this you can use for instance a Manipulate showing you the roots
Manipulate[
Plot[x - \[Xi]*Pi*Cot[Pi*x], {x, 0, xmax},
Epilog -> {Red, Point[{#, 0}] & /@ giveAllRoots[\[Xi]]},
PlotRange -> {Automatic, {-3, 3}}],
{xmax, 2, 20},
{\[Xi], 0.1, 1.0, 0.001}
]
• Thanks a lot for your answer. It solves the problem for only one value of ξ. What I did before was that I made a table with respect to ξ, and I used Map to put FindRoot in each element of the table: table=Table[{ξ,x-ξPiCot[Pi*x]},{ξ,0.001,1.,0.001}]; final =table /. {{a_,c_} :> {a, FindRoot[c, {x,0.5}]}; This gives me all λ_0s for different values of ξ. Then I used a For loop to find other λs, putting FindRoot[c, {x,0.5+i}]}, i changing form 0 to 20. How can I do all these things using your method? – ZKT Jun 26 '13 at 23:12
• @ZKT Please see my update in the answer. – halirutan Jun 27 '13 at 1:37
• Dear @halirutan , thanks a lot for the time you put to answer my question. Your method was useful, but since I need to consider very small values of xi in my code (as they are physical parameters and I can't just ignore them), so NSolve works better in my case. Thanks again by the way :). – ZKT Jun 27 '13 at 22:39
• @ZKT Please, then don't accept my answer as final solution. fcpenha's solution is nice and small and if it solves your problem, you really should keep this as accepted answer. If you like my answer, upvote it, but accept the one the solves your problem. – halirutan Jun 27 '13 at 22:43
• Did as you said. I learnt things from your answer by the way :)... – ZKT Jun 27 '13 at 23:06
|
# Laplace's equation
1. May 23, 2010
### squenshl
1. The problem statement, all variables and given/known data
Consider Laplace's equation uxx + uyy = 0 on the region -inf <= x <= inf, 0 <= y <= 1 subject to the boundary conditions u(x,0) = 0, u(x,1) = f(x), limit as x tends to inf of u(x,y) = 0.
Show that the solution is given by u(x,y) = F-1(sinh(wy)f(hat)(w)/sinh(w))
2. Relevant equations
3. The attempt at a solution
I used Fourier transforms in x.
I got u(hat)(w,y) = Ae^{ky} + Be^{-ky}
In Fourier space:
u(hat)(w,0) = F(0) = 0
u(hat)(w,1) = f(hat)(w)
But u(hat) is a function of y. My question is how do I apply the 3rd boundary condition (as this is the limit as x(not y) tends to inf) to u(hat)
2. May 26, 2010
### squenshl
Is it meant to be limit as y tends to inf not x.
3. May 26, 2010
### gabbagabbahey
You mean $\hat{u}(\omega,y)=Ae^{\omega y}+Be^{-\omega y}$, right?
You already used it. If $\lim_{x\to\pm\infty} u(x,y)\neq 0$, its Fourier transform (from $x$ to $\omega$) might not exist (the integral could diverge).
4. May 27, 2010
### squenshl
Very true. That is what I meant.
But when did I use this boundary condition?
5. May 27, 2010
### gabbagabbahey
When you took the FT of $u(x,y)$, and hence assumed that it existed.
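To sketch the remaining algebra with the notation above: with $\hat{u}(\omega,y)=Ae^{\omega y}+Be^{-\omega y}$, the condition $\hat{u}(\omega,0)=0$ forces $B=-A$, so $\hat{u}(\omega,y)=2A\sinh(\omega y)$. The condition $\hat{u}(\omega,1)=\hat{f}(\omega)$ then gives $2A=\hat{f}(\omega)/\sinh\omega$, hence
$$\hat{u}(\omega,y)=\frac{\sinh(\omega y)}{\sinh\omega}\,\hat{f}(\omega), \qquad u(x,y)=\mathcal{F}^{-1}\!\left[\frac{\sinh(\omega y)}{\sinh\omega}\,\hat{f}(\omega)\right],$$
which is the stated result (with $\sinh\omega$, not $\sinh(\omega y)$, in the denominator).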
|
# Last digit of $3^{459}$. [duplicate]
I am supposed to find the last digit of the number $3^{459}$. Wolfram|Alpha gives me $9969099171305981944912884263593843734515811805621702621829350243852275145577745\\3002132202129141323227530694911974823395497057366360402382950449104721755086093\\572099218479513977932448616356300654729978057481366551670706\color{red}{\mathbf{7}}$
Surely there's some sort of numerical trick to doing this. I thought maybe modular arithmetic was involved? Any ideas on how to approach this problem?
## marked as duplicate by Daniel Fischer, Jun 23 '16 at 14:48
Try out the first few powers of $3$: we have that $$3^1=3, 3^2=9, 3^3=27, 3^4=81, 3^5=243, 3^6=729,\ldots$$ It seems like the final digit cycles in a pattern, namely $$3\to9\to7\to1\to3\to9\to7\to1,$$ of length $4$. Since $$459=4\cdot114+3,$$ the final digit is the third in the cycle, namely $\color{red}{\mathbf{7}}$, and, sure enough, your Wolfram|Alpha computation confirms this.
$$3^{4}\equiv 1\pmod{10}\implies (3^{4})^{114}\equiv 1 \pmod{10}$$ So $$3^{459}=3^{4 \times 114}\cdot 3^{3}\equiv 3^{3}\equiv 7\pmod {10}$$
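If you just want to check the answer numerically, modular exponentiation does it directly; a one-line sketch in Python:
print(pow(3, 459, 10))   # prints 7, the last digit of 3^459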
|
# Algebra Examples
Find Pivot Positions and Pivot Columns
Perform row operations on each row, replacing each row with the result of the operation and simplifying, until the matrix is in reduced row-echelon form.
Pivot columns are the columns that contain pivot positions.
A pivot position in a matrix is a position that, after row reduction, contains a leading 1. Thus, the leading 1s in the pivot columns are the pivot positions.
The leading 1s in the pivot columns are the pivot positions.
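The same pivot information can be obtained programmatically. Below is a minimal sketch using SymPy; the matrix shown is only a hypothetical example, since the matrix from the original problem is not reproduced above.
from sympy import Matrix

# Hypothetical example matrix; replace it with the matrix from your own problem.
A = Matrix([[1, 2, 3],
            [2, 4, 7],
            [3, 6, 10]])

rref_form, pivot_cols = A.rref()   # rref() returns the RREF matrix and the tuple of pivot column indices
print(rref_form)                   # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivot_cols)                  # (0, 2): the first and third columns contain the pivot positions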
|
## Physics (10th Edition)
$v=11.54m/s$
1) Find the centripetal acceleration $a_c$ of the satellite. Since the centripetal force on a satellite is provided by its gravitational force, the centripetal acceleration equals the gravitational acceleration: $a_c=g$. To find $g$, we use the formula $$g=G\frac{M_E}{r^2}$$ With $G=6.67\times10^{-11}Nm^2/kg^2$, $M_E=5.98\times10^{24}kg$, and the given radius $r=6.7\times10^6m$, this gives $$a_c=g=8.885m/s^2$$
2) The plane has the same $a_c=8.885m/s^2$ and its flying path has radius $r=15m$. Its speed must therefore satisfy $$v^2=a_cr=133.275m^2/s^2$$ $$v=11.54m/s$$
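A quick numerical check of the two steps above (a sketch using the constants quoted in the solution):
G = 6.67e-11            # N m^2 / kg^2
M_E = 5.98e24           # kg
r_sat = 6.7e6           # m, orbital radius of the satellite

a_c = G * M_E / r_sat**2        # ~8.885 m/s^2
r_plane = 15.0                  # m, radius of the plane's circular path
v = (a_c * r_plane) ** 0.5      # ~11.5 m/s
print(a_c, v)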
|
# [MD] Lennard Jones
by JorisL
Tags: divergence, jones, lennard, lennard-jones, molecular dynamics
P: 35 Hi For a simulation regarding Lennard Jones fluids I'm getting divergences. I have particles in a fixed volume. I calculate distances between these particles to find the force. However in the first iteration I already get divergences (NaN values in matlab). I use $$F_x^{ij} = 24\varepsilon \left( 2r_{ij}^{-14}-r_{ij}^{-8}\right)\cdot \Delta x_{ij}$$ Where the i,j have to do with the particles I'm viewing. For each direction I get such a force. But some of the particles get so close that this force and hence the acceleration effectively become infinity. My simulation obviously breaks down at this point. I tried changing the timestep, this doesn't do anything. It worked at some point but I don't recall changing anything after that. Except adding the thermostat code which I can turn off. The divergence remains. You can find my (messy) code in this pastebin http://pastebin.com/62v1yTCY I've been looking at it for hours already yet I can't find any solution. JorisL Edit; I had divergences before. Those were caused by an error in applying the periodic boundary conditions. I forgot using the nearest image convention at that time.
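For reference, here is a minimal sketch of the pair-force evaluation with the minimum-image convention mentioned above, written in Python/NumPy rather than MATLAB; eps and the box length L are assumed to be in the same reduced units as the force expression (sigma = 1), and pos is an (n, 3) array of particle positions.
import numpy as np

def lj_forces(pos, L, eps=1.0):
    """Pairwise Lennard-Jones forces, F = 24*eps*(2*r^-14 - r^-8)*d, with minimum image."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= L * np.round(d / L)                       # nearest (minimum) image convention
            r2 = np.dot(d, d)
            f = 24.0 * eps * (2.0 * r2**-7 - r2**-4) * d   # 24*eps*(2*r^-14 - r^-8) * (dx, dy, dz)
            forces[i] += f
            forces[j] -= f
    return forces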
P: 35 I found the mistake. In my initialization of the cubic lattice, I had a break off variable g. I initialized this one as g = 1. Which caused the last point to to coincide with my first point and so I got immediate divergences. At least 4 hours to waste :S J
P: 2,491 You may have wasted 4 hours on this, but it took you 20+ years to get to this point...
|
# Best way to create an system of equations environment?
I am trying to write a few systems of equations, and I want the terms to be nicely spaced as below
2x + y + 3z = 10 \\
x + y + z = 6 \\
x + 3y + 2z = 13
Now, using some very ugly code, I was able to produce the results above.
\documentclass[10pt,a4paper]{article}
\usepackage{mathtools}
\usepackage{amsmath}
\begin{document}
\begin{align*}
\begin{bmatrix}
\begin{tabular}{r c r c r c r }
$2x$ & $+$ & $y$ & $+$ & $3z$ & $=$ & $10$ \\
$x$ & $+$ & $y$ & $+$ & $z$ & $=$ & $6$ \\
$x$ & $+$ & $3y$ & $+$ & $2z$ & $=$ & $13$
\end{tabular}
\end{bmatrix}
\end{align*}
\section{Another system of equations, now without the brackets}
\begin{table}[!htpb]
\centering
\begin{tabular}{r c r c r c r }
$2x$ & $+$ & $y$ & $+$ & $3z$ & $=$ & $10$ \\
$x$ & $+$ & $y$ & $+$ & $z$ & $=$ & $6$ \\
$x$ & $+$ & $3y$ & $+$ & $2z$ & $=$ & $13$
\end{tabular}
\end{table}
\end{document}
I would very much want a more automatic way to do this, and a simple method for controlling the spacing between elements. I have looked at earlier posts like
Multicol layout for systems of (linear) equations
It seems that I am looking for a simple version of this one. I have no need to use side-by-side equations, nor have numbers in front of them.
Using ideas from the post above, I guess the result is done by redefining commands such as - and + inside of the table? I tried to do something like this, but the code looked rather complex =(
To sum it up: Is there a way to define a simple System of Equations enviroment, with proper alignment and a optional command for defining the spacing?
You can try the package systeme. Its documentation is in French, but there are many examples to play with.
Your example would be input as
\systeme{
2x + y + 3z = 10,
x + y + z = 6,
x + 3y + 2z = 13}
To get right alignment in the column of right hand sides, one has to manually modify the package code:
\makeatletter
\def\SYS@makesyspreamble@i#1{%
\ifnum#1<\SYS@preamblenum
\SYS@addtotok\SYS@systempreamble{\hfil$##$&\hfil$##$&}%
\expandafter\SYS@makesyspreamble@i\expandafter{\number\numexpr#1+\@ne\expandafter}%
\else
\SYS@addtotok\SYS@systempreamble{\hfil$##$&$##$&\hfil$##$\null}%
\ifSYS@extracol
\fi
\fi
}
\makeatother
The patch is simply changing $##$\hfil into \hfil$##$ but, since this involves # it's not possible to use etoolbox's \patchcmd.
One can modify the distance between the lines by saying something like
\syslineskipcoeff{1.2}
and act on the column spacing with the parameter \tabskip; so, for example,
$\syslineskipcoeff{1.2}\setlength{\tabskip}{3pt} \systeme{ 2x + y + 3z = 10, x + y + z = 6, x + 3y + 2z = 13}$
will spread out the equations both vertically and horizontally. The \syslineskipcoeff can also be issued globally, in the preamble of the document; not the horizontal spacing, though, as \tabskip influences all TeX tables, tabular environments included.
• Are there options in this package to (i) get rid of the (presumably unwanted) curly brace on the left-hand side and (ii) also align the "6" correctly in row 2?
– Mico
Nov 17 '11 at 16:28
• \sysdelim.. before the system will avoid the brace; I guess that this declaration remains valid until countermanded, but it will respect grouping. For right alignment of the right hand side I think that the code must be modified. Nov 17 '11 at 16:37
• This seems as almost the perfect solution. Are there any way to add a default spacing length, and perhaps a optional length argument? The optional length argument should simply increase the size between all elements horizontally. Perhaps I am asking too much here. Nov 17 '11 at 17:23
• @N3buchadnezzar See edit Nov 18 '11 at 0:41
• @DamienWalters I don't think it's possible. Mar 3 '12 at 23:44
You could use the alignat* environment of amsmath, it's for multi-line equations with alignment at several places. To correct spacing of operators, you could write them as {}+{}. For example:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{alignat*}{4}
2x & {}+{} & y & {}+{} & 3z & {}={} & 10 \\
x & {}+{} & y & {}+{} & z & {}={} & 6 \\
x & {}+{} & 3y & {}+{} & 2z & {}={} & 13
\end{alignat*}
\end{document}
• Is there an automatic way to find the number of equations? Then one could make this into an own enviroment by redefinding + as & {}+{} & and so on. The one could fairly easy turn this into a more automatic solution =) Nov 17 '11 at 19:57
spalign provides an elementary interface for setting systems of equations:
\documentclass{article}
\usepackage{spalign}
\begin{document}
\spalignsys{ 2x + y + 3z = 10 ; x + y + z = 6 ; x + 3y + 2z = 13 }
\spalignsysdelims{.}{\}}% {<left>}{<right>}
\spalignsys{ 2x \+ \. - 3z = 1 ; \. \+ 4y + z = -4 }
\end{document}
## Background:
This is inspired by an earlier solution provided by Gonzalo Medina (now deleted), which had an issue if some of the entries were negative.
What I really liked about that solution was its attempt to only require the specification of the basic information. In that solution, + signs were assumed, but adapting that to include - signs it would look like:
\begin{MySystem}{2}{R}
2x & y & z & 10 \\
x & y & -z & 6 \\
-x & -3y & 2z & 13 \\
\end{MySystem}
Note that no plus signs need be entered, just the raw matrix style data.
## Solution:
Here, I follow a similar approach but use the collcell package to analyze the data and handle the case of a leading - or + sign and properly align them. Although, I am not really sure why one would want a left aligned system of equations, since Gonzalo had provided such an environment, I have included that here as well. The parameters to the MySystem environment here are:
\begin{MySystem}[<size>]{<n>}{<align>} ... \end{MySystem}
• <size> is an optional parameter specifying the amount of space to typeset the terms in (necessitated as this solution separates the +/- sign and aligns them). This defaults I used provides a good value for the basic system of equations.
• <n> the number of variables (using solution from Defining repeated tabular/array columns using a computed value to specify the number).
• <align>, the alignment to use: either R or L. This corresponds to the new column types defined here.
## Further Enhancements:
• DONE: I attempted to automate this to accept the actual number of variables (as opposed to one less) since that would be a more natural interface, but ran into trouble using a counter's value to specify the number of repeated columns to use. This issue has been posted at: Defining repeated tabular/array columns using a computed value to specify the number.
• I have used the xstring package to handle the string processing. There are probably ways to achieve the same functionality without requiring this extra package.
• Currently any empty cells are properly processed in that the + sign is suppressed via the \@PlusIfNonZero macro (see the 5 variable example). Perhaps add an option to provide the same behavior for an entry with a zero coefficient
• Provide a starred version of this same environment that outputs a matrix instead. This would require extraction of the variables form the columns, or a change to the user interface where the variables are explicitly provided.
• Note that the signs are not aligned in the first column and last column. Although I think it looks better this way, perhaps one would want an option to also align the sign of those columns as well. This should be as simple as changing the lower case alignment r/l for these columns to the upper case R/L alignment (although I have not tested it).
• Perhaps use a space as the alignment character and eliminate the need for the &. Then it would be even easier to read.
## Code:
\documentclass[border=2pt]{standalone}
\usepackage{collcell}% This includes the array package
\usepackage{xcolor}% Only used for error message in case an unknown column is specified
\usepackage{xstring}% For string processing
\makeatletter
\newcommand{\@RemovePositiveSignIfAny}[1]{\IfBeginWith{#1}{+}{\StrGobbleLeft{#1}{1}}{#1}}%
\newcommand{\@PlusIfNonZero}[1]{\IfStrEq{#1}{}{}{+}}%
\newlength{\@MySystemWidth}%
\newcommand*{\@ChooseSignWithAlignment}[2]{%
\IfBeginWith{#2}{-}%
{{}-\makebox[\@MySystemWidth][#1]{\ensuremath{\StrGobbleLeft{#2}{1}}}}%
{{}\@PlusIfNonZero{#2}\makebox[\@MySystemWidth][#1]{\ensuremath{\@RemovePositiveSignIfAny{#2}}}}%
}%
\newcommand*{\@ChooseSignWithRightAlignment}[1]{\@ChooseSignWithAlignment{r}{#1}}%
\newcommand*{\@ChooseSignWithLeftAlignment}[1]{\@ChooseSignWithAlignment{l}{#1}}%
\newenvironment{MySystem}[3][1.2em]{% [<width>] {num of variables} {alignment}
\newcolumntype{R}{>{\collectcell\@ChooseSignWithRightAlignment}{r}<{\endcollectcell}}
\newcolumntype{L}{>{\collectcell\@ChooseSignWithLeftAlignment}{l}<{\endcollectcell}}
\setlength{\@MySystemWidth}{#1}% Store the width to use to typeset the terms
\IfStrEqCase{#3}{%
{R}{\begin{array}{r@{} *{\numexpr#2-1\relax}{#3@{}} @{{}={}} r}}%
{L}{\begin{array}{l@{} *{\numexpr#2-1\relax}{#3@{}} @{{}={}} l}}%
}[\parbox{\linewidth}{\color{red}MySystem Error: Only L and R alignement types are supported. (using R)}\begin{array}{r *{\numexpr#2-1\relax}{#3} @{{}={}} r}]%
}{%
\end{array}%
}
\makeatother
\begin{document}
Using the \textbf{R} column type with 3 and 5 variables:
$\begin{MySystem}{3}{R} 2x & y & z & 10 \\ x & y & -z & 6 \\ -x &-3y & 2z & 13 \\ \end{MySystem}$
$\begin{MySystem}{5}{R} 3u &-4w & 2x & y & z & 10 \\ -u & 3w & x & y & -z & 6 \\ & &-x & & 2z & 13 \\ \end{MySystem}$
Using the \textbf{R} column type specifying a size:
$\begin{MySystem}[3em]{3}{R} 2x & y & z & 10 \\ x & y & -z & 6 \\ -x &-3y & 2z & 13 \\ \end{MySystem}$
\hrule\medskip
Using the \textbf{L} column type (not sure why one would want this):
$\begin{MySystem}{3}{L} 2x & y & z & 10 \\ x & y & -z & 6 \\ -x &-3y & 2z & 13 \\ \end{MySystem}$
Using the \textbf{L} column type specifying a size:
$\begin{MySystem}[3em]{3}{L} 2x & y & z & 10 \\ x & y & -z & 6 \\ -x &-3y & 2z & 13 \\ \end{MySystem}$
\end{document}
The following code, which takes your code as its starting point,
\documentclass[letterpaper]{standalone}
\usepackage{array} %% provides the command \newcolumntype
%% "o": column type for "operators", e.g., +, -, and =
\newcolumntype{o}{@{}>{{}}c<{{}}@{}}
\begin{document}
$\begin{array}{rororor} %% or, more succinctly, \begin{array}{*{3}{ro}r} 2x & + & y & + & 3z & = & 10 \\ x & + & y & + & z & = & 6 \\ x & + & 3y & + & 2z & = & 13 \end{array}$
\end{document}
produces this output:
column{odd} = {r},
column{even} = {c},
colsep = #1,
}
\BODY
\end{tblr}$} } \begin{document}$A=\left[\begin{myeqn}
2x & + & y & + & 3z & = & 10 \\
x & + & y & + & z & = & 6 \\
x & + & 3y & + & 2z & = & 13 \\
\end{myeqn}\right]\$
%
|
Classical Optimization: Unconstrained Optimization
In this article, we will review and explain three motivating problems. As we explained in the Optimization introduction article, there are two main types of optimization, based on their conditions: constrained and unconstrained. Unconstrained optimization considers the problem of minimizing or maximizing an objective function that depends on real variables with no restrictions on their values.
#### Three Motivating Problems
• Profit Maximization – iWidget
Given cost and demand functions, find the price for the iWidget that produces maximum profit
• Inventory Replenishment Policy- Gears Unlimited
Given annual demand and costs for ordering and holding, calculate the re-order quantity that minimizes total cost
• Package Optimization – boxy.com
Calculate the package dimensions that maximize total usable volume given a specific cardboard sheet
• Each of these problems …
• Requires the use of a Prescriptive Model,
• Utilizes a math function to make the decision,
• Looks for an “extreme point” solution, and
Are unconstrained in that there is not a resource constraint
• What is an extreme point of a function?
• The point, or points, where the function takes on an extreme value, typically either a minimum or a maximum.
• The point(s) where the slope or “rate of change” of the function is equal to zero.
Extreme Points
• Types of Extreme points
• Minimum, Maximum, or Inflection Points
• The minimum and maximum points are either global or local.
Classical Optimization
• Use differential calculus to find extreme solutions
• Look for where the rate of change, the slope, goes to zero.
• Check for sufficiency conditions.
• Continuity and convexity come into play.
• We are manufacturing a product where we know:
• The cost function = f(# made) = 500,000 + 75x
• The demand function = f(price) = 20,000 – 80p
And therefore the profit function = f(p) = -80p^2 + 26,000p – 2,000,000
• We want to find the price, p, that maximizes profits.
Finding the Instantaneous Slope: The First Derivative
iWidget Solution
Find the price, p, that maximizes the profit function: $y=-80p^{2}+26000p-2000000$
1. Take the first derivative: $y^{'}=\frac{\mathrm{d} y}{\mathrm{d} p}=-80(2)p^{2-1}+26000(1)p^{1-1}=-160p+26000$
2. Set the first derivative equal to zero: $-160p+26000=0$
3. Solve for p*:
$-160p=-26000 \Rightarrow p^{*}=\frac{26000}{160}\Rightarrow p^{*}=162.5$
This means: set the price at 162.5 dollars and profits will be maximized; the expected profit is 112,500 dollars.
But:
1. How do I know this is a maximum and not a minimum?
2. How do I know whether this is global or local?
Necessary and Sufficient Conditions
In order to determine x* at the max/min of an unconstrained function:
• Necessary Condition – the slope has to be zero, that is, f'(x*)=0
• Sufficient Conditions – determines whether extreme point is min or max by taking the Second Derivative, f”(x).
• If f”(x) > 0 then the extreme point is a local minimum
• If f”(x) < 0 then the extreme point is a local maximum
• If f”(x) = 0 then it is inconclusive
• Special Cases:
• If f(x) is convex then f(x*) is a global minimum
• If f(x) is concave then f(x*) is global maximum
By observation, we know that p*=162.50 is a global optimum, but let's verify it formally.
• Checking Second Order Conditions:
$y=f(p)=-80p^{2}+26000p-2000000$
${y}'={f}'(p)=-160p+26000$
${y}''={f}''(p)=(-160)(1)p^{(1-1)}=-160$
Since the second derivative is negative, this is a local maximum. And since f(p) is a concave function, it is also the global maximum.
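The same first- and second-order checks can be carried out symbolically. A small sketch with SymPy (SymPy is not part of the article; it is used here only to verify the algebra):
import sympy as sp

p = sp.symbols('p', positive=True)
profit = -80*p**2 + 26000*p - 2000000

p_star = sp.solve(sp.diff(profit, p), p)[0]   # first-order condition: -160p + 26000 = 0
print(p_star)                                  # 325/2 = 162.5
print(sp.diff(profit, p, 2))                   # -160 < 0, so p* is a maximum
print(profit.subs(p, p_star))                  # 112500, the maximum profit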
Inventory Replenishment Policy
Gears Unlimited distributes specialty gears, derailleurs, and brakes for high-end mountain and BMX bikes. One of their steadiest selling items is the PK35 derailleur. They sell about 1500 of the PK35’s a year. They cost $75 each to procure from a supplier, and Gears Unlimited assumes that the cost of capital is 20% a year. It costs about $350 to place and receive an order of the PK35s, regardless of the quantity of the order.
How many PK35s should Gears Unlimited order at a time to minimize the average annual cost in terms of purchase cost, ordering costs, and holding costs?
What do we know?
• D = Demand = 1500 items/year
• c = Unit Cost = 75 $/item • A = Ordering Cost = 350$/order
• r = Cost of Capital = 0.2 $/$/year
What do we want to find?
• Q = Order Quantity (item/order). Find Q* that minimizes Total Cost.
What is my Objective Function?
Total Cost = Purchase Cost + Order Cost + Holding Cost
• Purchasing Cost = cD = (75)(1500) = 112,500 [$/year]
• Order Cost = A(D/Q) = (350)(1500/Q) = 525,000/Q [$/year]
• Holding Cost = rc(Q/2) = (0.2)(75)(Q/2) = 7.5Q [$/year]
Gears Unlimited Solution steps:
1- Determine the Objective Function:
$TC(Q)=cD+A(\frac{D}{Q})+rc(\frac{Q}{2})$
$TC(Q)=112500+525000/Q+7.5Q$
2- Take first derivative:
${f}'(Q)=0+(525000)(-1)Q^{(-1-1)}+(7.5)(1)Q^{(1-1)}=-525000/Q^{2}+7.5$
3- Set first derivative equal to zero and solve for Q:
${f}'(Q^{*})=-525000/Q^{*2}+7.5=0 \Rightarrow Q^{*}=264.6\cong 265 \;[\text{items/order}]$
4- Check second order conditions:
${f}''(Q^{*})=(-525000)(-2)/Q^{*3}=1050000/Q^{*3}> 0$
Because Q* is always greater than zero, ${f}''(Q^{*})>0$ and this Q* is a local minimum (and, since TC(Q) is convex for Q > 0, it is also the global minimum).
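Solving f'(Q)=0 in closed form gives the familiar economic order quantity, Q* = sqrt(2AD/(rc)). A quick numerical check of the numbers above:
from math import sqrt

D, c, A, r = 1500, 75.0, 350.0, 0.2
Q_star = sqrt(2 * A * D / (r * c))        # EOQ formula from f'(Q) = 0; ~264.6 items/order
TC = c*D + A*D/Q_star + r*c*Q_star/2      # total annual cost at Q*; ~116,469 $/year
print(Q_star, TC)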
Optimal Design
You are consulting with boxy.com, the premier online corrugated packaging company. They just received a large quantity of heavy duty cardboard from a third party at an extremely low cost. All of the sheets are 1 meter by 1.5 meters in dimension. You have been asked to come up with the design that maximizes the total volume of a box made from this sheet. The only cutting that can be made, however, are equal-sized squares from each of the four corners. The edges then fold up to form the box.
How big should the square cut-outs be to maximize the box’s volume?
What do we know?
• W= Width= 1 m
• L =Length= 1.5 m
• x = Height of box (also the amount cut)
What do we want to find?
Find x* that maximizes the volume V = (Width)(Length)(Height) = (W-2x)(L-2x)(x)
What is my objective function?
max V = (W-2x)(L-2x)(x) = (WL - 2Lx - 2Wx + 4x^2)x = 4x^3 - 2Wx^2 - 2Lx^2 + WLx = 4x^3 - 2x^2 - 3x^2 + 1.5x = 4x^3 - 5x^2 + 1.5x
boxy.com Solution
1. Determine the Objective Function
2. Take first derivative
3. Set 1st derivative equal to zero and solve for x*
4. Check 2nd order conditions
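The four steps can be carried out numerically for the 1 m x 1.5 m sheet. A minimal sketch with SymPy (again used only for checking; the objective function is the one derived above):
import sympy as sp

x = sp.symbols('x', positive=True)
W, L = 1, sp.Rational(3, 2)                     # sheet dimensions in meters
V = (W - 2*x) * (L - 2*x) * x                   # expands to 4x^3 - 5x^2 + 1.5x

crit = sp.solve(sp.diff(V, x), x)               # first-order condition: 12x^2 - 10x + 1.5 = 0
print([float(c) for c in crit])                 # ~0.196 and ~0.637; only x < W/2 = 0.5 is feasible
x_star = min(crit)                              # ~0.196 m
print(float(sp.diff(V, x, 2).subs(x, x_star)))  # negative, so x* is a maximum
print(float(V.subs(x, x_star)))                 # maximum volume, ~0.132 m^3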
### Constrained Optimization
As we explained in the last article (Introduction to Optimization), constrained optimization is similar to unconstrained optimization in several respects; for example, it also requires a prescriptive model.
|
gi-gtk-3.0.26: Gtk bindings
Copyright: Will Thompson, Iñaki García Etxebarria and Jonas Platte; License: LGPL-2.1; Maintainer: Iñaki García Etxebarria ([email protected]); Safe Haskell: None; Language: Haskell2010
GI.Gtk.Objects.Image
Description
The Image widget displays an image. Various kinds of object can be displayed as an image; most typically, you would load a Pixbuf ("pixel buffer") from a file, and then display that. There’s a convenience function to do this, imageNewFromFile, used as follows:
### C code
GtkWidget *image;
image = gtk_image_new_from_file ("myfile.png");
If the file isn’t loaded successfully, the image will contain a “broken image” icon similar to that used in many web browsers. If you want to handle errors in loading the file yourself, for example by displaying an error message, then load the image with pixbufNewFromFile, then create the Image with imageNewFromPixbuf.
The image file may contain an animation, if so the Image will display an animation (PixbufAnimation) instead of a static image.
Image is a subclass of Misc, which implies that you can align it (center, left, right) and add padding to it, using Misc methods.
Image is a “no window” widget (has no Window of its own), so by default does not receive events. If you want to receive events on the image, such as button clicks, place the image inside a EventBox, then connect to the event signals on the event box.
## Handling button press events on a Image.
### C code
static gboolean
button_press_callback (GtkWidget *event_box,
GdkEventButton *event,
gpointer data)
{
g_print ("Event box clicked at coordinates %f,%f\n",
event->x, event->y);
// Returning TRUE means we handled the event, so the signal
// emission should be stopped (don’t call any further callbacks
// that may be connected). Return FALSE to continue invoking callbacks.
return TRUE;
}
static GtkWidget*
create_image (void)
{
GtkWidget *image;
GtkWidget *event_box;
image = gtk_image_new_from_file ("myfile.png");
event_box = gtk_event_box_new ();
g_signal_connect (G_OBJECT (event_box),
"button_press_event",
G_CALLBACK (button_press_callback),
image);
return image;
}
When handling events on the event box, keep in mind that coordinates in the image may be different from event box coordinates due to the alignment and padding settings on the image (see Misc). The simplest way to solve this is to set the alignment to 0.0 (left/top), and set the padding to zero. Then the origin of the image will be the same as the origin of the event box.
Sometimes an application will want to avoid depending on external data files, such as image files. GTK+ comes with a program to avoid this, called “gdk-pixbuf-csource”. This library allows you to convert an image into a C variable declaration, which can then be loaded into a Pixbuf using pixbufNewFromInline.
# CSS nodes
GtkImage has a single CSS node with the name image.
Synopsis
# Exported types
newtype Image Source #
Memory-managed wrapper type.
Constructors
Image (ManagedPtr Image)
Instances
class GObject o => IsImage o Source #
Type class for types which can be safely cast to Image, for instance with toImage.
Instances
(GObject a, (UnknownAncestorError Image a :: Constraint)) => IsImage a (defined in GI.Gtk.Objects.Image)
toImage :: (MonadIO m, IsImage o) => o -> m Image Source #
Cast to Image, for types for which this is known to be safe. For general casts, use castTo.
A convenience alias for Nothing :: Maybe Image.
# Methods
## clear
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> m ()
Resets the image to be empty.
Since: 2.8
## getAnimation
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> m (Maybe PixbufAnimation) Returns: the displayed animation, or Nothing if the image is empty
Gets the PixbufAnimation being displayed by the Image. The storage type of the image must be ImageTypeEmpty or ImageTypeAnimation (see imageGetStorageType). The caller of this function does not own a reference to the returned animation.
## getGicon
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> m (Icon, Int32)
Gets the Icon and size being displayed by the Image. The storage type of the image must be ImageTypeEmpty or ImageTypeGicon (see imageGetStorageType). The caller of this function does not own a reference to the returned Icon.
Since: 2.14
## getIconName
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> m (Text, Int32)
Gets the icon name and size being displayed by the Image. The storage type of the image must be ImageTypeEmpty or ImageTypeIconName (see imageGetStorageType). The returned string is owned by the Image and should not be freed.
Since: 2.6
## getIconSet
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> m (IconSet, Int32)
Deprecated: (Since version 3.10)Use imageGetIconName instead.
Gets the icon set and size being displayed by the Image. The storage type of the image must be ImageTypeEmpty or ImageTypeIconSet (see imageGetStorageType).
## getPixbuf
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> m (Maybe Pixbuf) Returns: the displayed pixbuf, or Nothing if the image is empty
Gets the Pixbuf being displayed by the Image. The storage type of the image must be ImageTypeEmpty or ImageTypePixbuf (see imageGetStorageType). The caller of this function does not own a reference to the returned pixbuf.
## getPixelSize
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> m Int32 Returns: the pixel size used for named icons.
Gets the pixel size used for named icons.
Since: 2.6
## getStock
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> m (Text, Int32)
Deprecated: (Since version 3.10)Use imageGetIconName instead.
Gets the stock icon name and size being displayed by the Image. The storage type of the image must be ImageTypeEmpty or ImageTypeStock (see imageGetStorageType). The returned string is owned by the Image and should not be freed.
## getStorageType
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> m ImageType Returns: image representation being used
Gets the type of representation being used by the Image to store image data. If the Image has no image data, the return value will be ImageTypeEmpty.
## new
Arguments
:: (HasCallStack, MonadIO m) => m Image Returns: a newly created Image widget.
Creates a new empty Image widget.
## newFromAnimation
Arguments
:: (HasCallStack, MonadIO m, IsPixbufAnimation a) => a animation: an animation -> m Image Returns: a new Image widget
Creates a Image displaying the given animation. The Image does not assume a reference to the animation; you still need to unref it if you own references. Image will add its own reference rather than adopting yours.
Note that the animation frames are shown using a timeout with PRIORITY_DEFAULT. When using animations to indicate busyness, keep in mind that the animation will only be shown if the main loop is not busy with something that has a higher priority.
## newFromFile
Arguments
:: (HasCallStack, MonadIO m) => [Char] filename: a filename -> m Image Returns: a new Image
Creates a new Image displaying the file filename. If the file isn’t found or can’t be loaded, the resulting Image will display a “broken image” icon. This function never returns Nothing, it always returns a valid Image widget.
If the file contains an animation, the image will contain an animation.
If you need to detect failures to load the file, use pixbufNewFromFile to load the file yourself, then create the Image from the pixbuf. (Or for animations, use pixbufAnimationNewFromFile).
The storage type (imageGetStorageType) of the returned image is not defined, it will be whatever is appropriate for displaying the file.
## newFromGicon
Arguments
:: (HasCallStack, MonadIO m, IsIcon a) => a icon: an icon -> Int32 size: a stock icon size (IconSize) -> m Image Returns: a new Image displaying the themed icon
Creates an Image displaying an icon from the current icon theme. If the icon name isn’t known, a “broken image” icon will be displayed instead. If the current icon theme is changed, the icon will be updated appropriately.
Since: 2.14
## newFromIconName
Arguments
:: (HasCallStack, MonadIO m) => Maybe Text iconName: an icon name or Nothing -> Int32 size: a stock icon size (IconSize) -> m Image Returns: a new Image displaying the themed icon
Creates an Image displaying an icon from the current icon theme. If the icon name isn’t known, a “broken image” icon will be displayed instead. If the current icon theme is changed, the icon will be updated appropriately.
Since: 2.6
## newFromIconSet
Arguments
:: (HasCallStack, MonadIO m) => IconSet iconSet: a IconSet -> Int32 size: a stock icon size (IconSize) -> m Image Returns: a new Image
Deprecated: (Since version 3.10) Use imageNewFromIconName instead.
Creates an Image displaying an icon set. Sample stock sizes are GTK_ICON_SIZE_MENU, GTK_ICON_SIZE_SMALL_TOOLBAR. Instead of using this function, usually it’s better to create an IconFactory, put your icon sets in the icon factory, add the icon factory to the list of default factories with iconFactoryAddDefault, and then use imageNewFromStock. This will allow themes to override the icon you ship with your application.
The Image does not assume a reference to the icon set; you still need to unref it if you own references. Image will add its own reference rather than adopting yours.
## newFromPixbuf
Arguments
:: (HasCallStack, MonadIO m, IsPixbuf a) => Maybe a pixbuf: a Pixbuf, or Nothing -> m Image Returns: a new Image
Creates a new Image displaying pixbuf. The Image does not assume a reference to the pixbuf; you still need to unref it if you own references. Image will add its own reference rather than adopting yours.
Note that this function just creates an Image from the pixbuf. The Image created will not react to state changes. Should you want that, you should use imageNewFromIconName.
## newFromResource
Arguments
:: (HasCallStack, MonadIO m) => Text resourcePath: a resource path -> m Image Returns: a new Image
Creates a new Image displaying the resource file resourcePath. If the file isn’t found or can’t be loaded, the resulting Image will display a “broken image” icon. This function never returns Nothing; it always returns a valid Image widget.
If the file contains an animation, the image will contain an animation.
If you need to detect failures to load the file, use pixbufNewFromFile to load the file yourself, then create the Image from the pixbuf. (Or for animations, use pixbufAnimationNewFromFile).
The storage type (imageGetStorageType) of the returned image is not defined, it will be whatever is appropriate for displaying the file.
Since: 3.4
## newFromStock
Arguments
:: (HasCallStack, MonadIO m) => Text stockId: a stock icon name -> Int32 size: a stock icon size (IconSize) -> m Image Returns: a new Image displaying the stock icon
Deprecated: (Since version 3.10) Use imageNewFromIconName instead.
Creates an Image displaying a stock icon. Sample stock icon names are STOCK_OPEN, STOCK_QUIT. Sample stock sizes are GTK_ICON_SIZE_MENU, GTK_ICON_SIZE_SMALL_TOOLBAR. If the stock icon name isn’t known, the image will be empty. You can register your own stock icon names; see iconFactoryAddDefault and iconFactoryAdd.
## newFromSurface
Arguments
:: (HasCallStack, MonadIO m) => Maybe Surface surface: a Surface, or Nothing -> m Image Returns: a new Image
Creates a new Image displaying surface. The Image does not assume a reference to the surface; you still need to unref it if you own references. Image will add its own reference rather than adopting yours.
Since: 3.10
## setFromAnimation
Arguments
:: (HasCallStack, MonadIO m, IsImage a, IsPixbufAnimation b) => a image: a Image -> b animation: the PixbufAnimation -> m ()
Causes the Image to display the given animation (or display nothing, if you set the animation to Nothing).
## setFromFile
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> Maybe [Char] filename: a filename or Nothing -> m ()
See imageNewFromFile for details.
## setFromGicon
Arguments
:: (HasCallStack, MonadIO m, IsImage a, IsIcon b) => a image: a Image -> b icon: an icon -> Int32 size: an icon size (IconSize) -> m ()
See imageNewFromGicon for details.
Since: 2.14
## setFromIconName
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> Maybe Text iconName: an icon name or Nothing -> Int32 size: an icon size (IconSize) -> m ()
See imageNewFromIconName for details.
Since: 2.6
## setFromIconSet
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> IconSet iconSet: a IconSet -> Int32 size: a stock icon size (IconSize) -> m ()
Deprecated: (Since version 3.10) Use imageSetFromIconName instead.
See imageNewFromIconSet for details.
## setFromPixbuf
Arguments
:: (HasCallStack, MonadIO m, IsImage a, IsPixbuf b) => a image: a Image -> Maybe b pixbuf: a Pixbuf or Nothing -> m ()
See imageNewFromPixbuf for details.
## setFromResource
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> Maybe Text resourcePath: a resource path or Nothing -> m ()
See imageNewFromResource for details.
## setFromStock
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> Text stockId: a stock icon name -> Int32 size: a stock icon size (IconSize) -> m ()
Deprecated: (Since version 3.10) Use imageSetFromIconName instead.
See imageNewFromStock for details.
## setFromSurface
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> Maybe Surface surface: a cairo_surface_t or Nothing -> m ()
See imageNewFromSurface for details.
Since: 3.10
## setPixelSize
Arguments
:: (HasCallStack, MonadIO m, IsImage a) => a image: a Image -> Int32 pixelSize: the new pixel size -> m ()
Sets the pixel size to use for named icons. If the pixel size is set to a value != -1, it is used instead of the icon size set by imageSetFromIconName.
Since: 2.6
# Properties
## file
No description available in the introspection data.
clearImageFile :: (MonadIO m, IsImage o) => o -> m () Source #
Set the value of the “file” property to Nothing. When overloading is enabled, this is equivalent to
clear #file
Construct a GValueConstruct with valid value for the “file” property. This is rarely needed directly, but it is used by new.
getImageFile :: (MonadIO m, IsImage o) => o -> m (Maybe Text) Source #
Get the value of the “file” property. When overloading is enabled, this is equivalent to
get image #file
setImageFile :: (MonadIO m, IsImage o) => o -> Text -> m () Source #
Set the value of the “file” property. When overloading is enabled, this is equivalent to
set image [ #file := value ]
## gicon
The GIcon displayed in the GtkImage. For themed icons, if the icon theme is changed, the image will be updated automatically.
Since: 2.14
clearImageGicon :: (MonadIO m, IsImage o) => o -> m () Source #
Set the value of the “gicon” property to Nothing. When overloading is enabled, this is equivalent to
clear #gicon
constructImageGicon :: (IsImage o, IsIcon a) => a -> IO (GValueConstruct o) Source #
Construct a GValueConstruct with valid value for the “gicon” property. This is rarely needed directly, but it is used by new.
getImageGicon :: (MonadIO m, IsImage o) => o -> m (Maybe Icon) Source #
Get the value of the “gicon” property. When overloading is enabled, this is equivalent to
get image #gicon
setImageGicon :: (MonadIO m, IsImage o, IsIcon a) => o -> a -> m () Source #
Set the value of the “gicon” property. When overloading is enabled, this is equivalent to
set image [ #gicon := value ]
## iconName
The name of the icon in the icon theme. If the icon theme is changed, the image will be updated automatically.
Since: 2.6
clearImageIconName :: (MonadIO m, IsImage o) => o -> m () Source #
Set the value of the “icon-name” property to Nothing. When overloading is enabled, this is equivalent to
clear #iconName
Construct a GValueConstruct with valid value for the “icon-name” property. This is rarely needed directly, but it is used by new.
getImageIconName :: (MonadIO m, IsImage o) => o -> m (Maybe Text) Source #
Get the value of the “icon-name” property. When overloading is enabled, this is equivalent to
get image #iconName
setImageIconName :: (MonadIO m, IsImage o) => o -> Text -> m () Source #
Set the value of the “icon-name” property. When overloading is enabled, this is equivalent to
set image [ #iconName := value ]
## iconSet
No description available in the introspection data.
clearImageIconSet :: (MonadIO m, IsImage o) => o -> m () Source #
Set the value of the “icon-set” property to Nothing. When overloading is enabled, this is equivalent to
clear #iconSet
Construct a GValueConstruct with valid value for the “icon-set” property. This is rarely needed directly, but it is used by new.
getImageIconSet :: (MonadIO m, IsImage o) => o -> m (Maybe IconSet) Source #
Get the value of the “icon-set” property. When overloading is enabled, this is equivalent to
get image #iconSet
setImageIconSet :: (MonadIO m, IsImage o) => o -> IconSet -> m () Source #
Set the value of the “icon-set” property. When overloading is enabled, this is equivalent to
set image [ #iconSet := value ]
## iconSize
No description available in the introspection data.
Construct a GValueConstruct with valid value for the “icon-size” property. This is rarely needed directly, but it is used by new.
getImageIconSize :: (MonadIO m, IsImage o) => o -> m Int32 Source #
Get the value of the “icon-size” property. When overloading is enabled, this is equivalent to
get image #iconSize
setImageIconSize :: (MonadIO m, IsImage o) => o -> Int32 -> m () Source #
Set the value of the “icon-size” property. When overloading is enabled, this is equivalent to
set image [ #iconSize := value ]
## pixbuf
No description available in the introspection data.
clearImagePixbuf :: (MonadIO m, IsImage o) => o -> m () Source #
Set the value of the “pixbuf” property to Nothing. When overloading is enabled, this is equivalent to
clear #pixbuf
constructImagePixbuf :: (IsImage o, IsPixbuf a) => a -> IO (GValueConstruct o) Source #
Construct a GValueConstruct with valid value for the “pixbuf” property. This is rarely needed directly, but it is used by new.
getImagePixbuf :: (MonadIO m, IsImage o) => o -> m (Maybe Pixbuf) Source #
Get the value of the “pixbuf” property. When overloading is enabled, this is equivalent to
get image #pixbuf
setImagePixbuf :: (MonadIO m, IsImage o, IsPixbuf a) => o -> a -> m () Source #
Set the value of the “pixbuf” property. When overloading is enabled, this is equivalent to
set image [ #pixbuf := value ]
## pixbufAnimation
No description available in the introspection data.
clearImagePixbufAnimation :: (MonadIO m, IsImage o) => o -> m () Source #
Set the value of the “pixbuf-animation” property to Nothing. When overloading is enabled, this is equivalent to
clear #pixbufAnimation
Construct a GValueConstruct with valid value for the “pixbuf-animation” property. This is rarely needed directly, but it is used by new.
getImagePixbufAnimation :: (MonadIO m, IsImage o) => o -> m (Maybe PixbufAnimation) Source #
Get the value of the “pixbuf-animation” property. When overloading is enabled, this is equivalent to
get image #pixbufAnimation
setImagePixbufAnimation :: (MonadIO m, IsImage o, IsPixbufAnimation a) => o -> a -> m () Source #
Set the value of the “pixbuf-animation” property. When overloading is enabled, this is equivalent to
set image [ #pixbufAnimation := value ]
## pixelSize
The "pixel-size" property can be used to specify a fixed size overriding the Image:icon-size property for images of type ImageTypeIconName.
Since: 2.6
Construct a GValueConstruct with valid value for the “pixel-size” property. This is rarely needed directly, but it is used by new.
getImagePixelSize :: (MonadIO m, IsImage o) => o -> m Int32 Source #
Get the value of the “pixel-size” property. When overloading is enabled, this is equivalent to
get image #pixelSize
setImagePixelSize :: (MonadIO m, IsImage o) => o -> Int32 -> m () Source #
Set the value of the “pixel-size” property. When overloading is enabled, this is equivalent to
set image [ #pixelSize := value ]
## resource
A path to a resource file to display.
Since: 3.8
clearImageResource :: (MonadIO m, IsImage o) => o -> m () Source #
Set the value of the “resource” property to Nothing. When overloading is enabled, this is equivalent to
clear #resource
Construct a GValueConstruct with valid value for the “resource” property. This is rarely needed directly, but it is used by new.
getImageResource :: (MonadIO m, IsImage o) => o -> m (Maybe Text) Source #
Get the value of the “resource” property. When overloading is enabled, this is equivalent to
get image #resource
setImageResource :: (MonadIO m, IsImage o) => o -> Text -> m () Source #
Set the value of the “resource” property. When overloading is enabled, this is equivalent to
set image [ #resource := value ]
## stock
No description available in the introspection data.
clearImageStock :: (MonadIO m, IsImage o) => o -> m () Source #
Set the value of the “stock” property to Nothing. When overloading is enabled, this is equivalent to
clear #stock
Construct a GValueConstruct with valid value for the “stock” property. This is rarely needed directly, but it is used by new.
getImageStock :: (MonadIO m, IsImage o) => o -> m (Maybe Text) Source #
Get the value of the “stock” property. When overloading is enabled, this is equivalent to
get image #stock
setImageStock :: (MonadIO m, IsImage o) => o -> Text -> m () Source #
Set the value of the “stock” property. When overloading is enabled, this is equivalent to
set image [ #stock := value ]
## storageType
No description available in the introspection data.
getImageStorageType :: (MonadIO m, IsImage o) => o -> m ImageType Source #
Get the value of the “storage-type” property. When overloading is enabled, this is equivalent to
get image #storageType
## surface
No description available in the introspection data.
clearImageSurface :: (MonadIO m, IsImage o) => o -> m () Source #
Set the value of the “surface” property to Nothing. When overloading is enabled, this is equivalent to
clear #surface
Construct a GValueConstruct with valid value for the “surface” property. This is rarely needed directly, but it is used by new.
getImageSurface :: (MonadIO m, IsImage o) => o -> m (Maybe Surface) Source #
Get the value of the “surface” property. When overloading is enabled, this is equivalent to
get image #surface
setImageSurface :: (MonadIO m, IsImage o) => o -> Surface -> m () Source #
Set the value of the “surface” property. When overloading is enabled, this is equivalent to
set image [ #surface := value ]
## useFallback
Whether the icon displayed in the GtkImage will use standard icon names fallback. The value of this property is only relevant for images of type ImageTypeIconName and ImageTypeGicon.
Since: 3.0
Construct a GValueConstruct with valid value for the “use-fallback” property. This is rarely needed directly, but it is used by new.
getImageUseFallback :: (MonadIO m, IsImage o) => o -> m Bool Source #
Get the value of the “use-fallback” property. When overloading is enabled, this is equivalent to
get image #useFallback
setImageUseFallback :: (MonadIO m, IsImage o) => o -> Bool -> m () Source #
Set the value of the “use-fallback” property. When overloading is enabled, this is equivalent to
set image [ #useFallback := value ]
|
# NAG Toolbox: nag_rand_quasi_uniform (g05ym)
## Purpose
nag_rand_quasi_uniform (g05ym) generates a uniformly distributed low-discrepancy sequence as proposed by Sobol, Faure or Niederreiter. It must be preceded by a call to one of the initialization functions nag_rand_quasi_init (g05yl) or nag_rand_quasi_init_scrambled (g05yn).
## Syntax
[quas, iref, ifail] = g05ym(n, iref, 'rcord', rcord)
[quas, iref, ifail] = nag_rand_quasi_uniform(n, iref, 'rcord', rcord)
Note: the interface to this routine has changed since earlier releases of the toolbox:
At Mark 24: rcord was made optional
## Description
Low discrepancy (quasi-random) sequences are used in numerical integration, simulation and optimization. Like pseudorandom numbers they are uniformly distributed but they are not statistically independent, rather they are designed to give more even distribution in multidimensional space (uniformity). Therefore they are often more efficient than pseudorandom numbers in multidimensional Monte–Carlo methods.
nag_rand_quasi_uniform (g05ym) generates a set of points ${x}^{1},{x}^{2},\dots ,{x}^{N}$ with high uniformity in the $S$-dimensional unit cube ${I}^{S}={\left[0,1\right]}^{S}$.
Let $G$ be a subset of ${I}^{S}$ and define the counting function ${S}_{N}\left(G\right)$ as the number of points ${x}^{i}\in G$. For each $x=\left({x}_{1},{x}_{2},\dots ,{x}_{S}\right)\in {I}^{S}$, let ${G}_{x}$ be the rectangular $S$-dimensional region
$$G_x = \left[0,x_1\right) \times \left[0,x_2\right) \times \cdots \times \left[0,x_S\right)$$
with volume $x_1 x_2 \cdots x_S$. Then one measure of the uniformity of the points $x^1,x^2,\dots,x^N$ is the discrepancy:
$$D_N^*\left(x^1,x^2,\dots,x^N\right) = \sup_{x \in I^S} \left| S_N\left(G_x\right) - N\,x_1 x_2 \cdots x_S \right|,$$
which satisfies a bound of the form
$$D_N^*\left(x^1,x^2,\dots,x^N\right) \le C_S \left(\log N\right)^S + O\!\left(\left(\log N\right)^{S-1}\right) \quad \text{for all } N \ge 2.$$
The principal aim in the construction of low-discrepancy sequences is to find sequences of points in ${I}^{S}$ with a bound of this form where the constant ${C}_{S}$ is as small as possible.
The type of low-discrepancy sequence generated by nag_rand_quasi_uniform (g05ym) depends on the initialization function called and can include those proposed by Sobol, Faure or Niederreiter. If the initialization function nag_rand_quasi_init_scrambled (g05yn) was used then the sequence will be scrambled (see Description in nag_rand_quasi_init_scrambled (g05yn) for details).
## References
Bratley P and Fox B L (1988) Algorithm 659: implementing Sobol's quasirandom sequence generator ACM Trans. Math. Software 14(1) 88–100
Fox B L (1986) Algorithm 647: implementation and relative efficiency of quasirandom sequence generators ACM Trans. Math. Software 12(4) 362–376
## Parameters
Note: the following variables are used in the parameter descriptions:
idim – the number of dimensions, as specified in the preceding call to nag_rand_quasi_init (g05yl) or nag_rand_quasi_init_scrambled (g05yn)
liref – the dimension of the array iref, as used in that same initialization call
### Compulsory Input Parameters
1: $\mathrm{n}$ – int64/int32/nag_int scalar
The number of quasi-random numbers required.
Constraint: ${\mathbf{n}}\ge 0$ and ${\mathbf{n}}+\text{previous number of generated values}\le {2}^{31}-1$.
2: $\mathrm{iref}\left(\mathit{liref}\right)$ – int64/int32/nag_int array
Contains information on the current state of the sequence.
### Optional Input Parameters
1: $\mathrm{rcord}$ – int64/int32/nag_int scalar
Default: $1$
The order in which the generated values are returned.
Constraint: ${\mathbf{rcord}}=1$ or $2$.
### Output Parameters
1: $\mathrm{quas}\left(\mathit{ldquas},\mathit{tdquas}\right)$ – double array
Contains the n quasi-random numbers of dimension idim.
If ${\mathbf{rcord}}=1$, ${\mathbf{quas}}\left(i,j\right)$ holds the $j$th value for the $i$th dimension.
If ${\mathbf{rcord}}=2$, ${\mathbf{quas}}\left(i,j\right)$ holds the $i$th value for the $j$th dimension.
2: $\mathrm{iref}\left(\mathit{liref}\right)$ – int64/int32/nag_int array
Contains updated information on the state of the sequence.
3: $\mathrm{ifail}$ – int64/int32/nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see Error Indicators and Warnings).
## Error Indicators and Warnings
Errors or warnings detected by the function:
${\mathbf{ifail}}=1$
Constraint: ${\mathbf{n}}\ge 0$.
On entry, value of n would result in too many calls to the generator.
${\mathbf{ifail}}=2$
Constraint: ${\mathbf{rcord}}=1$ or $2$.
${\mathbf{ifail}}=4$
Constraint: if ${\mathbf{rcord}}=1$, $\mathit{ldquas}\ge \mathit{idim}$.
Constraint: if ${\mathbf{rcord}}=2$, $\mathit{ldquas}\ge {\mathbf{n}}$.
${\mathbf{ifail}}=5$
On entry, iref has either not been initialized or has been corrupted.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this function. Please contact NAG.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
## Accuracy
Not applicable.
## Further Comments
None.
## Example
This example calls nag_rand_quasi_init (g05yl) and nag_rand_quasi_uniform (g05ym) to estimate the value of the integral
$$\int_0^1 \cdots \int_0^1 \prod_{i=1}^{S} \left|4x_i - 2\right| \, dx_1 \, dx_2 \cdots dx_S = 1.$$
In this example the number of dimensions $S$ is set to $8$.
```function g05ym_example
fprintf('g05ym example results\n\n');
% Initialize the Sobol generator, skipping some variates
iskip = int64(1000);
idim = int64(8);
genid = int64(1);
% Initialize the Sobol generator
[iref, ifail] = g05yl( ...
genid,idim,iskip);
% Number of variates
n = int64(200);
% Generate N quasi-random variates
[quas, iref, ifail] = g05ym( ...
n, iref);
% Evaluate the function, and sum
p(1:n) = prod(abs(4*quas(1:idim,:)-2));
fsum = sum(p);
% Convert sum to mean value
vsbl = fsum/double(n);
fprintf('Value of integral = %8.4f\n\n', vsbl);
fprintf('First 10 variates\n');
for i = 1:10
fprintf(' %3d', i);
fprintf(' %7.4f', quas(1:idim,i));
fprintf('\n');
end
```
```g05ym example results
Value of integral = 1.0410
First 10 variates
1 0.7197 0.5967 0.0186 0.1768 0.7803 0.4072 0.5459 0.3994
2 0.9697 0.3467 0.7686 0.9268 0.5303 0.1572 0.2959 0.1494
3 0.4697 0.8467 0.2686 0.4268 0.0303 0.6572 0.7959 0.6494
4 0.3447 0.4717 0.1436 0.3018 0.1553 0.7822 0.4209 0.0244
5 0.8447 0.9717 0.6436 0.8018 0.6553 0.2822 0.9209 0.5244
6 0.5947 0.2217 0.3936 0.0518 0.9053 0.0322 0.1709 0.7744
7 0.0947 0.7217 0.8936 0.5518 0.4053 0.5322 0.6709 0.2744
8 0.0635 0.1904 0.0498 0.4580 0.6240 0.2510 0.9521 0.8057
9 0.5635 0.6904 0.5498 0.9580 0.1240 0.7510 0.4521 0.3057
10 0.8135 0.4404 0.2998 0.2080 0.3740 0.5010 0.7021 0.0557
```
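For readers without the NAG Toolbox, a rough cross-check of the same integral can be made with SciPy's Sobol generator (an assumption: SciPy 1.7+ with scipy.stats.qmc; this is not the NAG routine, and the skip and direction-number conventions differ, so the individual variates above will not be reproduced):
```python
# Estimate the same 8-dimensional integral with SciPy's (unscrambled) Sobol points.
import numpy as np
from scipy.stats import qmc

dim, n = 8, 200
sampler = qmc.Sobol(d=dim, scramble=False)
sampler.fast_forward(1000)                 # analogous to iskip = 1000 above
quas = sampler.random(n)                   # n x dim array of points in [0, 1)^dim

estimate = np.mean(np.prod(np.abs(4 * quas - 2), axis=1))
print(f"Value of integral = {estimate:8.4f}")   # should come out close to 1
```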
|
### Basic Combinatorial Ideas Part 1
Hiiiiiiiii everyone! Today I will be showing you a few combinatorics problems which are not very hard. Each one has a different idea. My aim is to introduce you to various ideas you can think of while trying combo problems and realize that combo is very fun!!!!
Oki without wasting much time, let's begin :P
For those of you who want to try the problems before seeing the solution/ideas presented, here's a list of problems I have discussed below (note that I have modified a few questions, and some of them have answers to the original questions, so you can try reading the problems from AoPS instead if you're too worried on getting "spoiled" xD)
1) ISL 2009 C1 (a)
2) USAMO 1997 P1
3) IMO 2011 P4
4) USAMO 2002 P1
5) Canada 2018 P3
Rather than just giving out the solution, I will try to motivate how you come up with such a step, so that it helps to develop intuition :D
Problem 1 [ISL 2009 C1 (a)] : Consider $2009$ cards, each having one gold side and one black side, lying in parallel on a long table. Initially all cards show their gold sides. Two players, standing by the same long side of the table, play a game with alternating moves. Each move consists of choosing a block of $50$ consecutive cards, the leftmost of which is showing gold, and turning them all over, so those which showed gold now show black and vice versa. The last player who can make a legal move wins. Does the game necessarily end?
Ideas : So the first thing you see is that you are talking about how cards are placed, like whether a specific card is showing the gold side or the black side. Well, it's kinda hard to think of so many cards with different colors altogether, so we look for some other way to represent it. Why do we really need cards? They don't serve a specific purpose here, so the answer is: we don't. Instead, we represent a card that is showing its gold side by $1$ and a card that is showing its black side by $0$.
So, initially, you have a sequence of $2009$ $1$'s, and in each move, you change $50$ consecutive numbers from $1$ to $0$ or $0$ to $1$ such that the first number was initially $1$. Moreover, if we read the whole sequence as a single binary number, then since its digits are only $0$'s and $1$'s, it can never be negative.
Now if you read the question carefully(or the rephrased version), you notice a line that says
The first number among the $50$ consecutive numbers should initially be $1$
But how does this help us? So basically in every move, you have some initial sequence of $1$'s and $0$'s starting with $1$, e.g. $$1011\ldots$$
and you flip all the numbers, so the above sequence becomes $$0100\ldots$$
So what you do is $$1011\ldots \mapsto 0100\ldots$$
Do you see something interesting? The second number is less than the first number! and this always holds true, because initially there was a $1$ in beginning and after we do a move, it's $0$. Now suppose this sequence was surrounded by some other numbers too, does this still hold true? The answer is YES! As a quick exercise, you can try to prove it yourself :P
Okay so now let's accumulate all the information we have now:-
1) The initial sequence is $\underbrace{1 \ 1 \ \ldots \ 1}_{2009\text{ ones}}$
2) After every move, the sequence decreases.
3) The sequence is always non-negative
Hmm, this looks interesting. Ok so from points $2$ and $3$ we see a non-negative integer that strictly decreases with every move. It cannot keep decreasing forever, because an infinite strictly decreasing sequence of non-negative integers would eventually become negative, which can't happen. And hence, the game must always end.
Remarks : The main idea of this problem was to note that a lower bounded sequence of integers can never decrease infinitely and that the gold and black-sided cards are just flavor text and you can instead use binary numbers to denote them. Rewriting the problem in different ways is a very important step in many questions.
You can generalize this problem to $n$ cards, where you flip $k$ cards in a given move; the idea is still the same.
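If you want to see the monovariant in action, here's a tiny simulation (a sketch with made-up parameters: $n = 20$ cards and blocks of $k = 5$, chosen small so it runs instantly). Reading the row of cards as a binary number, every legal move strictly decreases it, so the game ends.
```python
# Simulate random legal moves on a small instance and check the monovariant:
# reading the cards as a binary number, each move strictly decreases it.
import random

n, k = 20, 5
cards = [1] * n                                   # 1 = gold side up

def value(c):                                     # the row read as a binary number
    return int("".join(map(str, c)), 2)

while True:
    legal = [i for i in range(n - k + 1) if cards[i] == 1]
    if not legal:                                 # no legal move: the game is over
        break
    i = random.choice(legal)
    before = value(cards)
    for j in range(i, i + k):                     # flip the chosen block
        cards[j] ^= 1
    assert value(cards) < before                  # the binary value strictly drops
print("game ended after finitely many moves")
```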
Problem 2 [USAMO 1997 P1] : Let $p_1, p_2, p_3, \ldots$ be the prime numbers listed in increasing order, and let $0 < x_0 < 1$ be a real number between 0 and 1.
$x_k = \begin{cases} 0 & \mbox{if} \; x_{k-1} = 0, \\[.1in] {\displaystyle \left\{ \frac{p_k}{x_{k-1}} \right\}} & \mbox{if} \; x_{k-1} \neq 0, \end{cases}$
where $\{x\}$ denotes the fractional part of $x$. Find, with proof, all $x_0$ satisfying $0 < x_0 < 1$
for which the sequence $x_0, x_1, x_2, \ldots$ eventually becomes $0$.
Ideas : In this problem, if $x_{i - 1} \ne 0$ and $x_i = 0$, it means that $$\left\{ \frac{p_i}{x_{i-1}}\right\} = 0$$ which is possible only if $\frac{p_i}{x_{i-1}}$ is an integer, i.e., $x_{i - 1} = \frac{p_i}{m}$ for some positive integer $m$ (and since $0 < x_{i-1} < 1$, in fact $m > p_i$).
Now basically you try a few seemingly nice values of $x_0$ and make some conclusions. Well, if you have enough patience and try out a few values, you notice that most of them work out... This is because when we think of picking a "nice" value, we almost always pick a rational value; only very rarely would we pick an irrational value of $x_0$. Well, if you picked only rational values, then you might guess that all possible $x_0$ work, but then you should be careful! You didn't try irrational values.
So on the basis of our observations, we claim that all rational values for $x_0$ work, and irrational values don't. Let's prove that rational values work. We know that $x_0 \in (0,1)$, so it's very natural to consider $x_0 = \frac{a}{b}$, where $a,b \in \mathbb{Z}$ and $a < b$.
$$\implies x_1 = \left\{ \frac{2b}{a} \right\}$$
Now when we simplify the above expression for $x_1$ and write it in fractional form, we see that it will be of form $x_1 = \frac{t}{a}$, where $t < a$. Now we find $x_2$
$$x_2 = \left\{\frac{3a}{t}\right\}$$
So basically what happened was: the numerator of $x_i$ becomes the denominator of $x_{i+1}$, and the numerator of $x_i$ is always less than its denominator, since $x_i < 1$ by definition. Hence either the denominators keep strictly decreasing, or at some point the denominator divides the numerator. The first case can be solved using an idea that we used in Problem 1 !! Can you figure it out and finish it? For the second case, it's almost similar as well, sooo you can finish it as an exercise :P
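To see the shrinking denominators concretely, here's a quick check with exact arithmetic (a sketch; the starting value $x_0 = 5/7$ is arbitrary and the first ten primes are hardcoded):
```python
# Follow the sequence x_k = {p_k / x_{k-1}} exactly, starting from a rational x_0.
from fractions import Fraction

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
x = Fraction(5, 7)
seq = [x]
for p in primes:
    if x == 0:
        break
    q = Fraction(p) / x
    x = q - (q.numerator // q.denominator)   # fractional part of p / x_{k-1}
    seq.append(x)
print(seq)   # denominators 7, 5, 4, 3, 2 shrink until the sequence hits 0
```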
Now we also have to prove that irrationals don't work, hmm. For that, we use the above fact: if, at any point, a rational number occurs in the sequence, the sequence will just end up at $0$, as proved. So for an irrational $x_0$ we want to show that $\{x_n\}_{n \ge 0}$ consists of irrational numbers only, and hence never hits $0$. This is kind of easy to prove, so I recommend trying it on your own! But for sanity purposes, here's the idea:-
If $x$ is irrational, then so is $\frac{1}{x}$ and now you should be able to finish by induction by using the fact that how $x_i$ is defined :P
Remarks : The main thing is to guess the answer correctly, and realize that putting $x_0 = \frac{a}{b}$ is nice. Tho at first glance it looks like you're adding an extra variable in some sense, well sometimes you gotta be strong and just work it out!
Problem 3 [IMO 2011 P4] : Let $n > 0$ be an integer. We are given a balance and $n$ weights of weight $2^0, 2^1, \cdots, 2^{n-1}$. We are to place each of the $n$ weights on the balance, one after another, in such a way that the right pan is never heavier than the left pan. At each step we choose one of the weights that has not yet been placed on the balance, and place it on either the left pan or the right pan, until all of the weights have been placed.
Determine the number of ways in which this can be done.
Ideas: At first glance, the most natural thing to try is to try for $n = 1,2,3$ and maybe other small values too. Well counting the number of ways directly for $n$ seems a bit vague right? So it gives vibes to find the number of ways for $n$ in terms of $n-1$ and maybe some other small values. To formalize this, we let $a_n$ be the number of required ways, and we are basically trying to find a recurrence in $a$.
Now, we try to extract some information from the question. Note that the weights are in a geometric progression, there should be a reason behind it, right? It can't be just random, this is not the same as having the weights to be $1, 2, 3, 4, \ldots$. But what's special about a geometric series and total weight on each pan after placing $i$ blocks?
One interesting thing to note is that $$2^0 + 2^1 + \ldots + 2^k < 2^{k +1}$$ But now how is this useful? The thing is, once you place the heaviest weight $2^{n-1}$ (it must always go on the left pan), all the weights placed after it can go anywhere, as the right pan will always remain lighter than the left pan.
Suppose we place the weight $2^{n - 1}$ after placing $i$ blocks. Then we first choose which $i$ blocks come before it, from the other $n - 1$ blocks, in $\binom{n - 1}{i}$ ways, and those $i$ weights can be arranged on the pans in $a_i$ ways (this again uses the property of the geometric progression I mentioned above). So there are $$\binom{n - 1}{i} \cdot a_i$$
ways to place $i$ blocks before placing the $2^{n -1 }$ block. Clearly, this block can only go to the left pan, and there's only $1$ way to do so. Now the remaining $n - i - 1$ blocks can be placed on either pan in any order, so we first count the number of orders, which is just $(n - i - 1)!$, and since each block has $2$ pans to choose from, we get $$2^{n - i - 1}\cdot (n - i - 1)!$$ ways to put the remaining blocks.
Combining, we get that if the block $2^{n - 1}$ is placed after $i$ blocks, the number of ways will be $$2^{n - i - 1} \cdot (n - i - 1)! \binom{n - 1}{i} \cdot a_i = 2^{n - i - 1} \cdot \frac{(n - 1)!}{i!}\cdot a_i$$
Now, notice that $i$ can be anything from $0$ to $n - 1$
$$\implies a_n = \sum_{i = 0}^{n - 1} \left(2^{n - i - 1} \cdot \frac{(n - 1)!}{i!}\cdot a_i\right)$$
Now this sum can be solved easily, but for the sake of completeness, here's the remaining solution :-
$$\implies a_n = \sum_{i=0}^{n-2} \left(2^{n-i-1}\cdot \frac{(n-1)!}{i!}\cdot a_i\right) + a_{n-1}$$
$$\implies a_n = 2(n-1) \cdot\sum_{i=0}^{n-2} \left(2^{n-i-2}\cdot \frac{(n-2)!}{i!}\cdot a_i\right) + a_{n-1}$$
Note that we can write $$a_{n - 1} = \sum_{i = 0}^{n - 2} \left(2^{n - i - 2}\cdot \frac{(n - 2)!}{i!} \cdot a_i\right)$$
$$\implies a_n = 2(n-1)\cdot a_{n-1} + a_{n-1} = (2n-1)a_{n-1}$$
Since $a_1 = 1$, we get
$$a_n = (2n - 1)\cdot (2n - 3) \cdot \ldots \cdot 3 \cdot 1$$ $$\implies \boxed{a_n = (2n - 1)!!}$$
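If you'd like to double-check the closed form, a small brute force agrees with $(2n-1)!!$ for the first few $n$ (a sketch: it enumerates every placement order and every pan choice, which is only feasible for tiny $n$):
```python
# Count valid placements directly and compare with (2n-1)!! for n = 1..5.
from itertools import permutations, product

def valid_placements(n):
    """Orderings + pan choices where the right pan never outweighs the left."""
    weights = [2**k for k in range(n)]
    count = 0
    for order in permutations(weights):
        for pans in product((+1, -1), repeat=n):    # +1 = left pan, -1 = right pan
            balance, ok = 0, True
            for w, side in zip(order, pans):
                balance += side * w
                if balance < 0:                     # right pan became heavier
                    ok = False
                    break
            count += ok
    return count

def double_factorial(m):
    return 1 if m <= 0 else m * double_factorial(m - 2)

for n in range(1, 6):
    assert valid_placements(n) == double_factorial(2 * n - 1)
print("a_n = (2n-1)!! confirmed for n = 1..5")
```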
Remarks: The main idea is to realize that you can write $a_n$ in terms of smaller $n$'s, and basically to realize how to do that, which is a bit non-trivial. Then solving the recurrence is pretty standard if you have enough practice with it.
Problem 4 [USAMO 2002 P1] : Let $S$ be a set with $2002$ elements, and let $N$ be an integer with $0 \leq N \leq 2^{2002}$. Prove that it is possible to color every subset of $S$ either black or white so that the following conditions hold:
• the union of any two white subsets is white;
• the union of any two black subsets is black;
• there are exactly $N$ white subsets.
Ideas: Clearly at first glance it's obvious that $2002$ is just a bluff, and the statement should hopefully hold for all $k$, or maybe only for even $k$, or only for $k$ with some other specific property (here $k = |S|$). Moreover, for simplicity we can assume that if $|S| = t$ then $S = \{1, 2, \ldots, t\}$.
The most natural thing now is to try for $k = 1, 2, 3\ldots$. After you try it with enough patience, you realize it's true for all small values of $k$, so you make a guess that it might be true for all $k$!
Now the next thing you try to see is if there's some sort of "pattern" on how to color, because c'mon, $2^{2002}$ is like very HUGE. So if you're observant enough, you can notice that we can color the subsets inductively. For $k = 1, 2$ it's easy.
Assume that we can color all the subsets for $\mid S \mid = k - 1$, we need to prove we can also color the subsets of $S$ when $\mid S \mid = k$ with the given conditions.
If $N \le 2^{k - 1}$, color white exactly the subsets that were white in the $k - 1$ case, and color every other subset (in particular, every subset containing the element $k$) black. Note that in the $k - 1$ case, none of the white subsets has the element $k$ in it. Hence there are exactly $N$ white subsets, and the other $2$ properties still hold (if this doesn't seem obvious to you, think it over by considering some small cases, say $k - 1 = 4$, and try to prove it yourself!)
Now when $N > 2^{k - 1}$ what do we do?? Notice that if there are exactly $N$ white subsets, then there are exactly $2^k - N$ black subsets!
$$2^k - N < 2^k - 2^{k - 1} = 2^{k - 1}$$
This means there are fewer than $2^{k - 1}$ black subsets. So take the colouring that has exactly $2^k - N$ white subsets (it exists by the previous case, since $2^k - N < 2^{k-1}$) and swap the two colours: now there are exactly $2^k - N$ black subsets, i.e. exactly $N$ white subsets, and the two union conditions are symmetric in the colours, so they still hold. SO YAYYY we proved the problem for all $\mid S \mid \ge 1$.
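As a tiny sanity check, here's a brute force for $k = 3$ (a sketch; it tries all $2^{2^k}$ colourings, so it only works for very small $k$) confirming that every $N$ from $0$ to $2^k$ is achievable:
```python
# Check, for k = 3, that every N from 0 to 2^k is achievable by some colouring
# in which both the white family and the black family are closed under unions.
from itertools import combinations, product

k = 3
subsets = [frozenset(s) for r in range(k + 1) for s in combinations(range(k), r)]

def union_closed(family):
    return all((a | b) in family for a in family for b in family)

achievable = set()
for colouring in product((0, 1), repeat=len(subsets)):      # 1 = white, 0 = black
    white = {s for s, col in zip(subsets, colouring) if col == 1}
    black = {s for s, col in zip(subsets, colouring) if col == 0}
    if union_closed(white) and union_closed(black):
        achievable.add(len(white))

assert achievable == set(range(2**k + 1))
print("every N from 0 to", 2**k, "is achievable when k =", k)
```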
Remarks : The main thing was to notice that $2002$ is not important. Moreover realizing that we can do this inductively is a key idea to solve this. I personally like this problem a lot because you don't really need paper to solve this, you can try this problem in your mind when just hovering around and finish it off :P
Problem 5 [Canada 2018 P3] : Two positive integers $a$ and $b$ are prime-related if either $a/b$ or $b/a$ is prime. Prove that if $n \ne 1$ is a perfect square then it's not possible to arrange all the divisors of $n$ in a circle, so that any two adjacent divisors are prime-related.
Ideas : Since we are talking about prime thingies and factors of $n$, it's natural to consider $n = p_1^{a_1}p_2^{a_2} \ldots p_k^{a_k}$, where $a_i$ is even because $n$ is a perfect square. Now working directly with $k$ seems a bit hard, so we try to work for smaller values of $k$ first. For $k = 1$ it's trivial, so let's take $k = 2$ first. Now I don't really know how to motivate the next step, but maybe it's just practice and observations.
We basically try to make it look a bit organized. So what we do is, we arrange all the factors of $n$ in a $(a_1 + 1) \times (a_2 + 1)$ size grid, with the top-left element as $1$. And whenever we move to the right, we multiply with $p_1$ and when we move down we multiply by $p_2$. So basically the grid looks something like this:-
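For $k = 2$, with columns going right (multiply by $p_1$) and rows going down (multiply by $p_2$), the grid is:
$$\begin{array}{ccccc} 1 & p_1 & p_1^2 & \cdots & p_1^{a_1} \\ p_2 & p_1 p_2 & p_1^2 p_2 & \cdots & p_1^{a_1} p_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ p_2^{a_2} & p_1 p_2^{a_2} & p_1^2 p_2^{a_2} & \cdots & p_1^{a_1} p_2^{a_2} \end{array}$$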
Now, what we try to do is consider a cycle starting from $1$, such that at any given time we can move only $1$ step right/left/up/down; basically, no diagonal moves are permitted. This ensures that consecutive elements are prime-related. We also want the cycle to end at $1$ and cover all the numbers in the grid exactly once, so that we can arrange the numbers in the circle. So if the circular arrangement were possible, we could find such a cycle; we have to show this can't happen. Note that from $1$, you can only go to $p_1$ or $p_2$, say you go to $p_1$. Now, to end at $1$, the second-to-last element must be $p_2$ (think why?).
This next step is a bit hard to think of but is a fairly standard idea. We now do chessboard coloring in this grid, and since $p_1$ and $p_2$ are diagonal to each other, they have the same color. Since each move changes the color, a path from $p_1$ to $p_2$ must use an even number of moves, i.e. it must pass through an odd number of cells. But the number of cells it passes through (every cell other than $1$) is $$(a_1 + 1)(a_2 + 1) - 1$$ Since $a_1, a_2$ are even, this number is even, which is a contradiction.
Okay so this works! Now we try to generalize this idea for $k$ prime factors too :P Soooo get ready for the nicest idea hehe.....
We consider a $k-$dimensional grid, and basically whenever we move along a particular direction, we multiply by a particular prime number. If it's hard to imagine this way, we can also consider a $k-$length tuple denoting indices of the grid, so for example
$$(x_1, \ x_2, \ \ldots, x_i, \ldots, x_k) \mapsto (x_1, x_2, \ \ldots, x_{i} + 1, \ldots, x_k)$$
corresponds to multiplying by $p_i$.
Now we do the same chessboard coloring: color each cell by the parity of the sum of its coordinates, so that two cells at distance $\sqrt{2}$ (differing by one step in each of two coordinates) get the same color. We again want a cycle with the same conditions, which amounts to a path from one prime to another, wlog say from $p_1$ to $p_2$. Since $p_1$ and $p_2$ have the same color, such a path must pass through an odd number of cells; but it has to pass through $(a_1+1)(a_2+1)\cdots(a_k+1) - 1$ cells, which is even because every $a_i$ is even. And hence a contradiction.
This means that if $n \ne 1$ is a perfect square, then it is not possible to arrange all the divisors of $n$ in a circle so that any two adjacent divisors are prime-related.
Remarks : The actual problem is a bit different, I found the other part of problem boring, but this idea to consider a $k-$dimensional chessboard was pretty cool. Moreover, this just comes from basic intuition from the $k = 2$ case. Like at first glance you might try $k$ colors instead of just black/white for $k$ dimensional grid and that might not work, then you try some other things until you realize the $2$ color idea just works.
Moreover, you could also rephrase this thing graphically, but I think that's not as easy as the idea presented above.
Phew! we are done for today yayy, I hope you liked the problems and learned some new ideas from them.
You can try the following questions as an exercise which are based on similar ideas :-
Exercise 1 [Kazakhstan 2016] : Let $n\geq 2$ be a positive integer. Prove that all divisors of $n$ can be written as a sequence $d_1,\dots,d_k$ such that for any $1\leq i<k$ one of $\frac{d_i}{d_{i+1}}$
and $\frac{d_{i+1}}{d_i}$ is a prime number.
Exercise 2 [USAMO 2019 P4] :Let $n$ be a nonnegative integer. Determine the number of ways to choose sets $S_{ij} \subseteq \{1, 2, \dots, 2n\}$, for all $0 \le i \le n$ and $0 \le j \le n$ (not necessarily distinct), such that
• $|S_{ij}| = i+j$, and
• $S_{ij} \subseteq S_{kl}$ if $0 \le i \le k \le n$ and $0 \le j \le l \le n$.
I will bring part $2$ of it soon where we will work on $5$ more problems hehe. Do tell in the comments which problem you liked the most, and if the overall difficulty was fair enough or if I should modify (increase/decrease) it a bit in part 2.
Byeeeeeee :DDD
Pranav
1. Very nice blog, Pranav. I really enjoyed solving them and I'm looking forward to part 2. But I think it may be better if the difficulty level of the problems is increased in part 2.
|
# Zero-knowledge transfer of value protocol inspired by EC El Gamal
This is a follow up on the question I asked here. I designed a scheme that allows the following:
• Alice has a value $a$ which she wants to keep secret
• Bob has a value $b$ which he wants to keep secret
• Alice can "transfer" a part of her value to Bob such that whatever she transfers must be subtracted from her value (the sum of their value must remain the same). For example, if she has $10$ and Bob has $5$, after transferring $2$, she should have $8$ and Bob should have $7$
• Victor is an independent observer and must be able to verify that the sum of the values does not change as a result of the transfer. But he should do it without learning any of the numbers involved
The scheme below is inspired by EC El Gamal. If anyone can see holes in it, would really appreciate feedback.
Setup
• Alice and Bob hold pairs of EC keys generated using an Elliptic Curve with generator $G$ (an example of such curve could be secp256k1)
• Their private keys are $x_a, x_b$, and $X_a, X_b$ are corresponding public keys
• Alice has committed to her number by making $A = a*G + X_a$ public
• Bob has committed to his number by making $B = b*G + X_b$ public
Transfer
Alice wants to transfer value $c$ to Bob such that $a' = a - c$ and $b' = b + c$. To do this, she does the following:
1. Calculates commitment to the new number $A' = (a - c)*G + X_a$
2. Calculates commitment to the transferred number $C_1 = c*G$
3. Calculates shared secret with Bob and uses it to build a shared key $s = H(x_a * X_b)$, where $H$ is a hashing function
4. Encrypts $c$ with the shared key $C_2 = E(c, s)$, where $E$ is a symmetric encryption function
5. Makes the following info public $(A', C_1, C_2)$
Bob receives the info and does the following:
1. Calculates shared secret with Alice and uses it to build a shared key $s = H(x_b * X_a)$
2. Uses the shared key to decrypt the value of $c = D(C_2, s)$, where $D$ is a symmetric decryption function
3. Verifies that $C_1 = c * G$
4. Calculates commitment to the new number $B' = (b + c) * G + X_b$
5. Makes $B'$ public
Verification
An independent observer (Victor) can verify that the total value in the system didn't change by doing the following:
1. Verify that $A = A' + C_1$
2. Verify that $B' = B + C_1$
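To illustrate the verification step, here is a toy sketch (not secure: it replaces the elliptic-curve group with plain addition modulo a prime-sized number, and all keys and values are made up) showing that the two checks pass exactly when the transferred amount is moved from Alice's commitment to Bob's:
```python
# Toy illustration of Victor's balance check. The "group" is addition mod p;
# a real deployment would use an elliptic curve such as secp256k1 instead.
p = 2**61 - 1                      # stand-in group order
G = 7                              # stand-in generator

def commit(value, pub):
    """Commitment used above: value*G + X (all arithmetic mod p)."""
    return (value * G + pub) % p

x_a, x_b = 123456789, 987654321    # made-up private keys
X_a, X_b = (x_a * G) % p, (x_b * G) % p

a, b, c = 10, 5, 2                 # Alice's value, Bob's value, amount moved
A, B = commit(a, X_a), commit(b, X_b)          # published before the transfer
A_new = commit(a - c, X_a)                     # Alice's updated commitment
C1 = (c * G) % p                               # commitment to the transfer
B_new = commit(b + c, X_b)                     # Bob's updated commitment

assert A == (A_new + C1) % p       # Victor's first check
assert B_new == (B + C1) % p       # Victor's second check
print("total value is unchanged")
```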
The scheme above should be secure because all numbers are padded. The numbers are 256-bit integers with:
• 64 most significant bits containing the actual number
• The remaining 192 bits set randomly
So, effectively, instead of transferring something like $2$, Alice would be transferring something like $2.00034094035343043$
As is, there is nothing to prevent Alice from spending more than she holds by letting her $a$ underflow modulo the order $n=2^{256}-\mathtt{14551231950b75fc4402da1732fc9bebf_h}$ of $G$.
Update: as pointed out by VincBreaker in that answer, as is there is nothing to prevent Bob from computing Alice's commitment to a transfer of $\bar c$ from Alice to Bob: the only use of Alice's private key is to compute the $s$ that she shares with Bob, and with that $s$ Bob can do anything Alice can. A fix is considered in said answer.
Whether the above properties are a functional issue or not depends on the application, but they are something to watch out for in electronic money.
As proposed, the padding scheme for 64-bit value/amount $\bar x$ (with restriction to $0\le\bar x\le2^{64}-2$) is $x=P(\bar x,r)=\bar x\,2^{192}+r$ for $k=192$-bit uniform $r$, and $\bar x=Q(x)=\displaystyle\left\lfloor\frac x{2^{192}}\right\rfloor$ used to extract value/amount $\bar x$ from $x$, with $Q(P(\bar x,r))=\bar x$. It follows that if $a=b+c$ then either $\bar a=\bar b+\bar c$ or $\bar a=\bar b+\bar c+1$ holds (depending on whether the random low-order bits carry), with next to equal odds. Whether this is a functional issue or not depends on the application (but see last section for a workaround).
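For concreteness, a quick sketch of those maps (the names follow the $P$ and $Q$ above; the amounts 5 and 2 are made up) shows the occasional off-by-one after extraction:
```python
# P pads an amount with 192 random low bits; Q extracts the amount again.
import secrets

K = 192
def P(amount):
    return (amount << K) | secrets.randbits(K)

def Q(x):
    return x >> K

b_bar, c_bar = 5, 2
a = P(b_bar) + P(c_bar)        # the sum that the commitments enforce
print(Q(a))                    # 7 if the low bits don't carry, 8 if they do
```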
Another security issue is that for $k$-bit random padding $r$ there is at best $k/2$-bit security against someone trying to confirm a guess, for example of a transaction amount $\bar c$ from $C_1=c*G=\bar c\,2^{192}*G+r*G$, by solving for $k$-bit $r$ the equation $r*G=C_1-\bar c\,2^{192}*G$ using e.g. the Baby-Step/Giant-Step algorithm. Same for values held $\bar a$ and $\bar b$.
Thus I'd say that 256-bit ECC is too little, especially if we want exact balance as follows.
If the uncertainty of one unit is an issue, we can improve on that with a sacrifice of some bits of $r$. That could be $k=176$-bit uniform $r$ with $P(\bar x,r)=\bar x\,2^{192}+r-2^{175}$ and $Q(x)=\displaystyle\left\lfloor\frac{x+2^{191}}{2^{192}}\right\rfloor$. In this way most transactions are exact, including the first $2^{16}$ with certainty, and in practice likely the first $2^{22}$.
• Thank you! I am wondering if there is a way to make sure that the values are never greater than $2^{256}$ without revealing the values. I'll probably post another question about it. Regarding the level of security: I could just make the values larger - say 320 bits - and it would work fine, correct? (public and private keys would still remain 256 bits). Jun 17 '18 at 14:31
• @Irakliy: with 320-bit ECC, at least the public key will grow in size to 320-bit. Usually the private key also does, but this one can be reduced to the security level in bit (by hashing the private key to change it from its storage form to its usage form).
– fgrieu
Jun 17 '18 at 15:10
• I updated the protocol to address the issues stated above. The description of the new protocol is here. Jul 9 '18 at 4:59
Beyond the potential flaws fgrieu stated, it is actually possible for Bob to forge a commit from Alice to Bob by first calculating $C_1 = c * G$ and then $A' = A - C_1$. Having done that, Bob can compute the shared key $s = H(x_b * X_a)$ and determine $C_2 = E(c, s)$, which allows him to publish $(A', C_1, C_2)$, a "valid" commit from Alice.
He can then go on and publish his commitment $B' = B + C_1$, which will result in a commitment Victor will accept.
However, this potential insecurity may actually become a feature, since a transfer no longer requires Bob to explicitly accept the commit, so Alice could send $c$ coins to Bob at any time. Keep in mind that this scheme does not provide any overflow protection, as fgrieu stated.
Also, Alice is able to calculate a secret key $k = a + x_a - c$ which leads to $k * G = A'$, so she can sign the commitment with $A'$ to ensure it's actually her committing it. Bob can do the same by calculating $k = b + x_b + c$. Since calculating $k_a$ or $k_b$ requires knowledge of $x_a$ or $x_b$, finding the private key for $A'$ or $B'$ should be as hard as finding the keys for $X_a$ or $X_b$.
• Thank you! Originally, I was thinking that Alice could sign the info she shares with her private key. So effectively, she would share $(A', C_1, C_2, S(A' || C_1 || C_2, x_a))$ where $S$ is a signature function. But I think what you are suggesting is much more elegant. Jun 17 '18 at 14:23
• @Irakliy Also, there is no need to decode $X_a$ to verify the signature, so you save a few milliseconds / CPU cycles when verifying, since $A'$ has to be decoded anyway. Jun 17 '18 at 14:37
What you did in the above can be achieved by the following much simpler protocol:
1. Alice and Bob publish a commitment of their values, $A=a\cdot G+ X_a$ and $B=b\cdot G+ X_b$, and their public keys.
2. Alice sends $c$ through a secure channel to Bob, and publishes $c\cdot G$.
3. Bob verifies that the $c\cdot G$ published by Alice is the same as he computes locally. If not, abort the protocol.
4. If no one aborts (or upon receiving acknowledgements from both Alice and Bob), Victor updates the commitments $A'=A-c\cdot G$ and $B'=B+c\cdot G$.
|
## Elementary Algebra
Published by Cengage Learning
# Chapter 7 - Algebraic Fractions - 7.2 - Multiplying and Dividing Algebraic Fractions - Concept Quiz 7.2: 5
True
#### Work Step by Step
You never need to divide by a fraction: dividing by a fraction is the same as multiplying by its reciprocal. While a calculator will perform the division directly, it is simplest to flip the numerator and denominator of the divisor and then multiply.
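For example, (2/3) ÷ (4/5) = (2/3) × (5/4) = 10/12 = 5/6, which is the same result the calculator gives for the division.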
|
# OpenGL problem with simple screen capture to ppm
## Recommended Posts
hello, For some time I have implemented screen captures in my OpenGL programs using the following function (which a kind soul gave me in another forum)
/* headers needed for open/write/close, malloc, sprintf and the GLUT calls below */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <GL/glut.h>

int capture_to_ppm(void)
{
int width, height, colorDepth, maxColorValue,y;
unsigned char *pixels;
int fd;
char sbuf[256]; /* for sprintf() */
/* open output file: you can name it as you like */
fd = open(picfile,O_CREAT|O_TRUNC|O_WRONLY,
S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH);
if(fd == -1) return -1;
/* width & height of the window */
width = glutGet(GLUT_WINDOW_WIDTH);
height = glutGet(GLUT_WINDOW_HEIGHT);
/* maxColorValue is 255 in most cases... */
colorDepth = glutGet(GLUT_WINDOW_RED_SIZE);
maxColorValue = (1 << colorDepth) - 1;
/* allocate pixels[]: 3 is for RGB */
pixels = malloc(3*width*height);
if( !pixels ) return -2;
/* get RGB values from the frame buffer into pixels[] */
glReadBuffer(GL_FRONT); /* if you are using "double buffer" */
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
/* write ppm file header */
sprintf(sbuf,"P6 %d %d %d\n",width,height,maxColorValue);
write(fd,sbuf,strlen(sbuf));
/* write ppm RGB data: we must invert upside down */
for(y = height-1; y >= 0; --y) {
write(fd, pixels+3*width*y, 3*width);
}
close(fd);
free(pixels);
return 0;
}
It has always worked very nicely, provided that my OpenGL program does not open subwindows. I can open any number of windows, and get a perfect window grab of one of them using a glutSetWindow() call.
Now this same code is failing when I use it to grab a subwindow. I have a program that opens a main window with 3 subwindows inside. I need to grab one of the subwindows. I call the capture function this way:
glutSetWindow(modelwin);
capture_to_ppm();
where modelwin is the subwindow to be grabbed. The problem is I get a skewed image (the pixels' x coordinate is displaced from its expected position), and without colors. I have been trying to read about glReadPixels(), where I think the problem resides, but I still do not know how to correct it (I read something about setting GL_UNPACK_ALIGNMENT differently than default, but whatever I did had no effect). Could anyone give me an idea what I need to do? Thanks a million, mc61
GL_UNPACK_ALIGNMENT is for transferring pixel data to OpenGL. Notice the word, "unpack".
For reading data from OpenGL to your application, GL_PACK_ALIGNMENT should be used. Try setting this state to 1 and see if it fixes your problem.
Thank you, it worked!!!
Naively I thought I was "unpacking" the pixels, while I was actually packing them it seems [smile].
Since you have been so kind, I have one more related question, just to show how little I really know about OpenGL. I set this by placing glPixelStoref(GL_PACK_ALIGNMENT,1) inside my capture_to_ppm function. However, it seems to me that this should be set once, and not every time the screen capture is invoked. Where should it go?
mc61
|
MoMath sandbox for program audiences
Project: Math 367-17S
# Modeling dropping an object from a height
g = 2 #acceleration of "gravity"
y = 0 #starting y value. As we move downward, y will INCREASE. The variable y is POSITION
v = 0 #the velocity starts at zero, since we are just "letting go" of the object
t = 0 #starting time
dt = 0.1 # this is our "delta t" which should be SMALL as we approximate the object's motion and take "snapshots" every dt
# we are going to store the points (t, y) in a list to plot later
data = [(t,y)] #so the data holds just [(0,0)] to start.
#Now we make a table of t, y, v.
nsteps = 200 #the number of snapshots. So we will go for nsteps*dt time units.
print("t y v") # the top row
for k in range(nsteps): #this means that k goes from 0 until nsteps-1 and we loop
    #print n(t, digits =3), n(y, digits =3), n(v, digits =3) #self explanatory (digits tells it how many decimal places to display)
    # now we iterate for the next snapshot
    t = t + dt # time advances
    y = y + v*dt # the "delta y" is equal to v*dt, since distance = rate times time!
    v = v + g*dt # same idea
    data.append((t,y)) # add the new point to the list of points
list_plot(data) # when the table is done, plot the t vs. y graph
t y v
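As a quick cross-check (a sketch reusing the same parameter values as above), the Euler points can be compared with the exact solution $y = \frac{1}{2} g t^2$; the largest deviation shrinks as dt is made smaller:
```python
# Compare the Euler steps above with the exact solution y = (1/2) g t^2.
g, dt, nsteps = 2, 0.1, 200
y, v, max_err = 0, 0, 0
for k in range(nsteps):
    t = (k + 1) * dt            # time after this snapshot
    y = y + v * dt              # same update order as the worksheet cell
    v = v + g * dt
    max_err = max(max_err, abs(y - 0.5 * g * t**2))
print("largest deviation from (1/2) g t^2:", max_err)
```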
# Modeling population growth
#
# we will assume that P'(t) = aP(t), where a is a smallish constant
a = 0.2 #constant
P = 100 #starting P value.
v = P*a #the starting velocity in people per time units (say, years)
t = 0 #starting time
dt = 0.1 # this is our "delta t" which should be SMALL as we approximate the object's motion and take "snapshots" every dt
# we are going to store the points (t, y) in a list to plot later
data = [(t,P)] #so the data holds just [(0,100)] to start.
#Now we make a table of t, P, v.
nsteps = 200 #the number of snapshots. So we will go for nsteps*dt time units.
print("t P v") # the top row
for k in range(nsteps): #this means that k goes from 0 until nsteps-1 and we loop
#print n(t, digits =3), n(P, digits =3), n(v, digits =3) #self explanatory (digits tells it how many decimal places to display)
# now we iterate for the next snapshot
t = t + dt # time advances
P = P + v*dt # the "delta P" is equal to v*dt, since distance = rate times time!
v = P*a # same idea
data.append((t,P)) # add the new point to the list of points
list_plot(data) # when the table is done, plot the t vs. P graph
t P v
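The same kind of check works here: the exact solution of P'(t) = a*P(t) with P(0) = 100 is P(t) = 100*e^(a*t). A minimal sketch, again assuming the cell above has just been run so that data, nsteps and dt are still defined (a = 0.2 above):
%var t
approx = list_plot(data)
exact = plot(100*e^(0.2*t), (t, 0, nsteps*dt), color='red') # P(t) = P(0)*e^(a*t)
show(approx + exact)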
#catenary simulation
# we will try to create an approximate solution to the differential equation
# tan(t) = s, where t is theta and s is "ell" (this is a standard letter, and "ell" is hard to read in this font.)
t=0 #start at the bottom of the chain
x=0
y=0
ds =0.01 # this is what we increment length by; a smaller value yields a better approx
s = 0 #the starting arc length from (0,0). It will increase as we move to the right.
nsteps = 300 #self explanatory
data=[(x,y)] # So we start with (0,0) and then we will append and finally plot
for k in range(nsteps): #the expression "range(nsteps)" just means, "the numbers from 0 to nsteps-1, inclusive"
#we will move by a length of ds
# in the direction with slope = tan(t)
# and tan(t)=s, the total distance so far
x += ds*cos(t) #another programming expression: "x += u" means "increase x by the value u;" i.e. x = x+u
y += ds*sin(t)
#include the printout for debugging
#print k, x, y,t
#append the point
data.append((x,y))
#now increment
s += ds #computing the total length so far
t = atan(s) # the angle t is such that tan(t) = s; i.e., the SLOPE equals s at this point
#loop is over, now plot
list_plot(data,aspect_ratio=1)
# compare this with a "real" catenary
#you need to state that x is a variable used in the plot command
%var x
approx = list_plot(data)
real = plot((exp(x)+exp(-x))/2-1, (x,0,1.8), color='red') #exp(x) means e^x
show(approx+real,aspect_ratio=1)
#square-wheeled trike simulation
# we will try to create an approximate solution to the differential equation
# 1- tan(t) = s, where t is theta and s is "ell" (this is a standard letter, and "ell" is hard to read in this font.)
t=pi/4 #start at the left of the surface, with an angle of 45 degrees (square is on its corner)
x=-.8814 #this is the x-coordinate where the actual curve starts. We could start with other values if we wanted.
y=0
ds =0.1 # this is what we increment length by; a smaller value yields a better approx
s = 0 #the starting arc length from (0,0). It will increase as we move to the right.
nsteps = 30 #self explanatory
data=[(x,y)] # So we start with (0,0) and then we will append and finally plot
for k in range(nsteps): #the expression "range(nsteps)" just means, "the numbers from 0 to nsteps-1, inclusive"
#we will move by a length of ds
# in the direction with slope = tan(t)
# and tan(t)=s, the total distance so far
x += ds*cos(t) #another programming expression: "x += u" means "increase x by the value u;" i.e. x = x+u
y += ds*sin(t)
#include the printout for debugging
#print k, x, y,t
#append the point
data.append((x,y))
#now increment
s += ds #computing the total length so far
t = atan(1-s) # the angle t is such that 1-tan(t) = s; i.e., the SLOPE equals s at this point
#loop is over, now plot
list_plot(data,aspect_ratio=1)
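As with the catenary cell above, the simulated profile can be checked against a closed form. For this starting point and angle (slope 1 at x ≈ -0.8814), the road between bumps should be the inverted catenary y = sqrt(2) - cosh(x); the last few simulated points run past the next corner of the square (s > 2), so they extend beyond that segment. A minimal sketch of the overlay, assuming the cell above has just been run so that data is still defined:
%var x
approx = list_plot(data)
real = plot(sqrt(2) - cosh(x), (x, -0.8814, 0.8814), color='red') # inverted catenary between bumps
show(approx + real, aspect_ratio=1)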
def spiralOfcirc(phase, twist, ratio, rmin, ncircles):
r=rmin
g=1+ratio
theta=phase #starting direction of the spiral
xc=0 #assumed: the first circle is centred at the origin
yc=0
sc=Graphics()
for _ in range(ncircles):
sc += circle((xc,yc),r,fill=True, facecolor='white')
theta +=twist
xc += g*r*cos(theta); yc +=g*r*sin(theta)
r *= ratio
return sc
show(spiralOfcirc(0,pi/6,1.1,1,12))
r=1.01
g=(1+sqrt(5))/2
ph=2*pi/g
flower=[spiralOfcirc(t*ph,pi/12,r^(t+1),1,12) for t in range(12)]
spirals=Graphics()
for s in flower:
spirals += s
spirals.show(axes=false,xmin=-30,xmax=30,ymin=-30,ymax=30)
save(spirals,"spirals.png",axes=false,xmin=-30,xmax=30,ymin=-30,ymax=30)
sp=spirals(axes=false)
Error in lines 1-1
Traceback (most recent call last):
  File "/cocalc/lib/python3.9/site-packages/smc_sagews/sage_server.py", line 1234, in execute
    flags=compile_flags), namespace, locals)
  File "", line 1, in <module>
TypeError: 'Graphics' object is not callable
spirals=Graphics()
n=2
mult=1.1
rmin=.5
ncircs=12
phase=0
turn = 2*pi/12
for k in range(n):
spirals += spiralOfcirc(phase,turn,mult,rmin,ncircs )
phase += 2*turn
#rmin *= mult
spirals.show()
spirs=Graphics()
for f in frames2:
spirs +=f
show(spirs)
nSpirs=3
nTwists=6
ratio=1.05
multiplier=1/ratio
spiralAn=[]
size=5
r = 1 #assumed starting radius for spiralOfcirc below (r is not set anywhere else in this cell; adjust as needed)
for tw in range(nTwists):
spiral=Graphics()
#spiral += polygon([(-size,-size), (size,-size), (size,size),(-size,size)])
spiral += circle((0,0),size, fill=False)
for t in range(nSpirs-1):
spiral += spiralOfcirc(tw*2*pi/nTwists+t*2*pi/nSpirs ,pi/12,ratio,r,nSpirs)
r *= multiplier
spiralAn.append(spiral)
frames2=[spiralOfcirc(t*pi/6 ,pi/12,1.2,1,10) for t in range(12)]
spirs=Graphics()
for f in frames2:
spirs +=f
show(spirs)
frames2=[spiralOfcirc(t*2*pi*0.388 ,pi/12,1.01^t,1,10) for t in range(15)]
spirs=Graphics()
for f in frames2:
spirs +=f
show(spirs)
frames2=[spiralOfcirc(t*2*pi*0.388 ,pi/12,1.1,1.05^t,10) for t in range(18)]
spirs=Graphics()
for f in frames2:
spirs +=f
show(spirs)
ph=pi*(1+sqrt(5))
n= 100
mult = 1.00
pic=Graphics()
for t in range(n):
r=sqrt(t)
a = t*ph
#pic += point((r*cos(a),r*sin(a)))
max=20
show(pic,aspect_ratio=1,axes=false,xmin=-max, xmax=max, ymin=-max,ymax=max)
save(pic,"spirals-g.png",aspect_ratio=1,axes=false,xmin=-max, xmax=max, ymin=-max,ymax=max)
show(pic,aspect_ratio=1)
%var r, t
n=13
f=2*pi/13
sp1=Graphics()
for k in range(n):
u=f*k
sp1 += polar_plot(e^(t+u),(t,-u,pi-u),aspect_ratio=1)
n=8
f=2*pi/8
sp2=Graphics()
for k in range(n):
u=f*k
sp2 += polar_plot(e^(pi-t+u),(t,u,pi+u),aspect_ratio=1,color='red')
show(sp2+sp1)
show(sp2+sp1)
save(sp1+sp2,"spir-fib.png",aspect_ratio=1, axes=false)
#transformations needed to build Edmark's tiles
def rot(t,vec):
#rotate vector by angle t counterclockwise
c = cos(t)
s = sin(t)
r= matrix([[c,-s],[s,c]])
return r*vec
def trans(U, vec):
#trans vec by vector U
return vec+U
def cloneDown(A,B,C,D):
#starting with quad ABCD with AB on "bottom", create quad A'B'C'D' below where D'=A, C'=B
h=vector((1,0))
DC = C-D
AB = B-A
r = AB.norm()/DC.norm() #dilation ratio
U=r*(A-D)
#dilate, centered at A
A1= A
B1= r*(B-A) +A
C1 = r*(C-A)+A
D1=r*(D-A)+A
# now translate so D1 moves to A
A2=trans(U,A1)
B2=trans(U,B1)
C2=trans(U,C1)
D2=trans(U,D1)
#now rotate by t = angle to move CD to be parallel with AB
#we will need a sign of a determinant to know direction
m=Matrix([DC,AB])
t=sgn(m.det())*acos(DC.dot_product(AB)/(DC.norm()*AB.norm()))
A3=rot(t, A2-A)+A
B3=rot(t, B2-A)+A
C3=rot(t, C2-A)+A
D3=rot(t, D2-A)+A
return A3,B3,C3,D3
def cloneLeft(A,B,C,D):
#starting with quad ABCD with AB on "bottom", create quad A'B'C'D' on left where B'=A, C'=D, A'D' on left
X, Y, Z, W = cloneDown(D, A, B, C)
return Y, Z, W, X
def cloneRight(A,B,C,D):
#starting with quad ABCD with AB on "bottom", create quad A'B'C'D' on right where A'=B, D'=C, B'C' on right
X, Y, Z, W = cloneDown(B,C,D,A)
return W,X,Y,Z
#test run, using the coord of the "mother tile"
A=vector((0,0))
B=vector((1,0))
C= vector((1.1056158046757, 1.3611084974398))
D =vector((-0.294552318493650,1.0533128073132))
pic = Graphics()
pic += line([A,B,C,D,A])
X,Y,Z,W = cloneDown(A,B,C,D)
pic += line([X,Y,Z,W,X])
X,Y,Z,W = cloneLeft(A,B,C,D)
pic += line([X,Y,Z,W,X])
X,Y,Z,W = cloneRight(A,B,C,D)
pic += line([X,Y,Z,W,X])
show(pic,aspect_ratio=1)
#draw full tiling
A=vector((0,0))
B=vector((1,0))
C= vector((1.1056158046757, 1.3611084974398))
D =vector((-0.294552318493650,1.0533128073132))
#the four coords below "almost work"
#A=vector((0,0))
#B=vector((1,0))
#C= vector((1, 1.5))
#D =vector((-.5,1))
nsteps =21 #how many levels
pic = Graphics()
pic += line([A,B,C,D,A])
pic += polygon([A,B,C,D],color=hue(1)) #mother quad (so filled in)
for row in range(nsteps):
X,Y,Z,W = cloneDown(X,Y,Z,W)
#pic += line([X,Y,Z,W,X])
pic += polygon([X,Y,Z,W], color=hue(.5*int(mod(row+col+1,2))))
for row in range(nsteps):
for col in range(1, row+3):
X,Y,Z,W = cloneLeft(X,Y,Z,W)
#pic += line([X,Y,Z,W,X])
pic += polygon([X,Y,Z,W], color=hue(.5*int(mod(row+col,2))))
for row in range(nsteps):
for col in range(1, 4):
X,Y,Z,W = cloneRight(X,Y,Z,W)
#pic += line([X,Y,Z,W,X])
pic += polygon([X,Y,Z,W], color=hue(.5*int(mod(row+col,2))))
show(pic,aspect_ratio=1,axes=false)
save(pic,"edmark.pdf",aspect_ratio=1,axes=false)
row
0
.5*int(mod(5,2))
0.500000000000000
a=5
n=12
data =[]
for k in range(n):
print(k, a)
data.append([k,a])
a = a^2
latex(table(data))
0 5 1 25 2 625 3 390625 4 152587890625 5 23283064365386962890625 6 542101086242752217003726400434970855712890625 7 293873587705571876992184134305561419454666389193021880377187926569604314863681793212890625 8 86361685550944446253863518628003995711160003644362813850237034701685918031624270579715075034722882265605472939461496635969950989468319466936530037770580747746862471103668212890625 9 7458340731200206743290965315462933837376471534600406894271518333206278385070118304936174890400427803361511603255836101453412728095225302660486164829592084691481260792318781377495204074266435262941446554365063914765414217260588507120031686823003222742297563699265350215337206058336516628646003612927433551846968657326499008153319891789578832685947418212890625 10 55626846462680034577255817933310101605480399511558295763833185422180110870347954896357078975312775514101683493275895275128810854038836502721400309634442970528269449838300058261990253686064590901798039126173562593355209381270166265416453973718012279499214790991212515897719252957621869994522193843748736289511290126272884996414561770466127838448395124802899527144151299810833802858809753719892490239782222290074816037776586657834841586939662825734294051183140794537141608771803070715941051121170285190347786926570042246331102750604036185540464179153763503857127117918822547579033069472418242684328083352174724579376695971173152319349449321466491373527284227385153411689217559966957882267024615430273115634918212890625 11 3094346047382578275480183369971197853892556303884969045954098458217021364691229981426352946556259795253241437925401811752196686587974858300172368368138733125186061074284643736990470985187297054554299280845687415532065869107175268273614914079919451498396758201719634752768716320786430383849047583971326289816201205757042613947819180980672005101042630551776963848492580515763228422194205105528219980245495047005616012864622912201600352471713015158045534728307404176895018366960267524059270145304825352506681132363099774878756679292686131556110845011043463378706055597131761908315431198704846311568948881773397779068614881830957489568601480084153293693894869108316525162755529109279622020185143950303962042574196734629855975332530865630162585454286462276973637232439143739157810226810193065511912215461090477312372972236985158496357200031595045778223229181777984057608080727232899920794990550253812318348125460488862923975738912839914768937870513971019200450747608740083773989106089243768942044904738561199032074943292242626646859106725039882814425691037346493790407231123914098041407526301841706149525214943424226821613989004317509345843872795604700727869464468991967905694380830535928512027512496833244483273618867296842941691082751697737211017236389627905793492664437797515223884794021082891431511614939687909024149023453904144326123741074082132970518886413995393818295837875447436245008301682057894055333235883153975009918212890625 \begin{tabular}{ll} $0$ & $5$ \\ $1$ & $25$ \\ $2$ & $625$ \\ $3$ & $390625$ \\ $4$ & $152587890625$ \\ $5$ & $23283064365386962890625$ \\ $6$ & $542101086242752217003726400434970855712890625$ \\ $7$ & $293873587705571876992184134305561419454666389193021880377187926569604314863681793212890625$ \\ $8$ & $86361685550944446253863518628003995711160003644362813850237034701685918031624270579715075034722882265605472939461496635969950989468319466936530037770580747746862471103668212890625$ \\ $9$ & 
$7458340731200206743290965315462933837376471534600406894271518333206278385070118304936174890400427803361511603255836101453412728095225302660486164829592084691481260792318781377495204074266435262941446554365063914765414217260588507120031686823003222742297563699265350215337206058336516628646003612927433551846968657326499008153319891789578832685947418212890625$ \\ $10$ & $55626846462680034577255817933310101605480399511558295763833185422180110870347954896357078975312775514101683493275895275128810854038836502721400309634442970528269449838300058261990253686064590901798039126173562593355209381270166265416453973718012279499214790991212515897719252957621869994522193843748736289511290126272884996414561770466127838448395124802899527144151299810833802858809753719892490239782222290074816037776586657834841586939662825734294051183140794537141608771803070715941051121170285190347786926570042246331102750604036185540464179153763503857127117918822547579033069472418242684328083352174724579376695971173152319349449321466491373527284227385153411689217559966957882267024615430273115634918212890625$ \\ $11$ & $3094346047382578275480183369971197853892556303884969045954098458217021364691229981426352946556259795253241437925401811752196686587974858300172368368138733125186061074284643736990470985187297054554299280845687415532065869107175268273614914079919451498396758201719634752768716320786430383849047583971326289816201205757042613947819180980672005101042630551776963848492580515763228422194205105528219980245495047005616012864622912201600352471713015158045534728307404176895018366960267524059270145304825352506681132363099774878756679292686131556110845011043463378706055597131761908315431198704846311568948881773397779068614881830957489568601480084153293693894869108316525162755529109279622020185143950303962042574196734629855975332530865630162585454286462276973637232439143739157810226810193065511912215461090477312372972236985158496357200031595045778223229181777984057608080727232899920794990550253812318348125460488862923975738912839914768937870513971019200450747608740083773989106089243768942044904738561199032074943292242626646859106725039882814425691037346493790407231123914098041407526301841706149525214943424226821613989004317509345843872795604700727869464468991967905694380830535928512027512496833244483273618867296842941691082751697737211017236389627905793492664437797515223884794021082891431511614939687909024149023453904144326123741074082132970518886413995393818295837875447436245008301682057894055333235883153975009918212890625$ \\ \end{tabular}
a=5
n=8
data =[]
for k in range(n):
print(k, a^2-a)
data.append([k,a^2-a])
a = a^2
latex(table(data))
0 20 1 600 2 390000 3 152587500000 4 23283064365234375000000 5 542101086242752217003703117370605468750000000 6 293873587705571876992184134305561419454666388650920794134435709565877914428710937500000000 7 86361685550944446253863518628003995711160003644362813850237034701685918031624270579715074740849294560033595947277362330408531534801930273914649660582654178142547607421875000000000 \begin{tabular}{ll} $0$ & $20$ \\ $1$ & $600$ \\ $2$ & $390000$ \\ $3$ & $152587500000$ \\ $4$ & $23283064365234375000000$ \\ $5$ & $542101086242752217003703117370605468750000000$ \\ $6$ & $293873587705571876992184134305561419454666388650920794134435709565877914428710937500000000$ \\ $7$ & $86361685550944446253863518628003995711160003644362813850237034701685918031624270579715074740849294560033595947277362330408531534801930273914649660582654178142547607421875000000000$ \\ \end{tabular}
# find x,y such that x^2+y^2=p for primes p
def sumOfSquares(n):
# find x,y s.t. x^2+y^2=n; if none exist, return "no"
output = "no"
u=floor(sqrt(n))
for x in range(u+1):
y= n-x^2
if y.is_square():
output = (x,sqrt(y))
return output
#start value
p=2
#end
biggest = 1000
while p<biggest:
print(p, sumOfSquares(p))
#iterate to next prime
p = p.next_prime()
2 (1, 1) 3 no 5 (2, 1) 7 no 11 no 13 (3, 2) 17 (4, 1) 19 no 23 no 29 (5, 2) 31 no 37 (6, 1) 41 (5, 4) 43 no 47 no 53 (7, 2) 59 no 61 (6, 5) 67 no 71 no 73 (8, 3) 79 no 83 no 89 (8, 5) 97 (9, 4) 101 (10, 1) 103 no 107 no 109 (10, 3) 113 (8, 7) 127 no 131 no 137 (11, 4) 139 no 149 (10, 7) 151 no 157 (11, 6) 163 no 167 no 173 (13, 2) 179 no 181 (10, 9) 191 no 193 (12, 7) 197 (14, 1) 199 no 211 no 223 no 227 no 229 (15, 2) 233 (13, 8) 239 no 241 (15, 4) 251 no 257 (16, 1) 263 no 269 (13, 10) 271 no 277 (14, 9) 281 (16, 5) 283 no 293 (17, 2) 307 no 311 no 313 (13, 12) 317 (14, 11) 331 no 337 (16, 9) 347 no 349 (18, 5) 353 (17, 8) 359 no 367 no 373 (18, 7) 379 no 383 no 389 (17, 10) 397 (19, 6) 401 (20, 1) 409 (20, 3) 419 no 421 (15, 14) 431 no 433 (17, 12) 439 no 443 no 449 (20, 7) 457 (21, 4) 461 (19, 10) 463 no 467 no 479 no 487 no 491 no 499 no 503 no 509 (22, 5) 521 (20, 11) 523 no 541 (21, 10) 547 no 557 (19, 14) 563 no 569 (20, 13) 571 no 577 (24, 1) 587 no 593 (23, 8) 599 no 601 (24, 5) 607 no 613 (18, 17) 617 (19, 16) 619 no 631 no 641 (25, 4) 643 no 647 no 653 (22, 13) 659 no 661 (25, 6) 673 (23, 12) 677 (26, 1) 683 no 691 no 701 (26, 5) 709 (22, 15) 719 no 727 no 733 (27, 2) 739 no 743 no 751 no 757 (26, 9) 761 (20, 19) 769 (25, 12) 773 (22, 17) 787 no 797 (26, 11) 809 (28, 5) 811 no 821 (25, 14) 823 no 827 no 829 (27, 10) 839 no 853 (23, 18) 857 (29, 4) 859 no 863 no 877 (29, 6) 881 (25, 16) 883 no 887 no 907 no 911 no 919 no 929 (23, 20) 937 (24, 19) 941 (29, 10) 947 no 953 (28, 13) 967 no 971 no 977 (31, 4) 983 no 991 no 997 (31, 6)
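The pattern in this output is Fermat's two-square theorem: an odd prime is a sum of two squares exactly when it is congruent to 1 mod 4 (and 2 = 1^2 + 1^2). A minimal sketch that re-checks the search above against that criterion, reusing the sumOfSquares function defined earlier:
# compare the search with Fermat's two-square criterion
p = 2
while p < 1000:
    found = (sumOfSquares(p) != "no")
    expected = (p == 2 or p % 4 == 1)
    if found != expected:
        print("mismatch at", p)
    p = p.next_prime()
print("check finished")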
def sumOfSquares3(n):
# find x,y,z such that x^2+y^2+z^2 = n; if none exist, return "no"
output = "no"
u=floor(sqrt(n))
for x in range(u+1):
d = n-x^2
if sumOfSquares(d) != "no":
y,z = sumOfSquares(d)
output = x,y,z
return output
for k in [4..100]:
print(k, sumOfSquares3(k))
4 (2, 0, 0) 5 (2, 1, 0) 6 (2, 1, 1) 7 no 8 (2, 2, 0) 9 (3, 0, 0) 10 (3, 1, 0) 11 (3, 1, 1) 12 (2, 2, 2) 13 (3, 2, 0) 14 (3, 2, 1) 15 no 16 (4, 0, 0) 17 (4, 1, 0) 18 (4, 1, 1) 19 (3, 3, 1) 20 (4, 2, 0) 21 (4, 2, 1) 22 (3, 3, 2) 23 no 24 (4, 2, 2) 25 (5, 0, 0) 26 (5, 1, 0) 27 (5, 1, 1) 28 no 29 (5, 2, 0) 30 (5, 2, 1) 31 no 32 (4, 4, 0) 33 (5, 2, 2) 34 (5, 3, 0) 35 (5, 3, 1) 36 (6, 0, 0) 37 (6, 1, 0) 38 (6, 1, 1) 39 no 40 (6, 2, 0) 41 (6, 2, 1) 42 (5, 4, 1) 43 (5, 3, 3) 44 (6, 2, 2) 45 (6, 3, 0) 46 (6, 3, 1) 47 no 48 (4, 4, 4) 49 (7, 0, 0) 50 (7, 1, 0) 51 (7, 1, 1) 52 (6, 4, 0) 53 (7, 2, 0) 54 (7, 2, 1) 55 no 56 (6, 4, 2) 57 (7, 2, 2) 58 (7, 3, 0) 59 (7, 3, 1) 60 no 61 (6, 5, 0) 62 (7, 3, 2) 63 no 64 (8, 0, 0) 65 (8, 1, 0) 66 (8, 1, 1) 67 (7, 3, 3) 68 (8, 2, 0) 69 (8, 2, 1) 70 (6, 5, 3) 71 no 72 (8, 2, 2) 73 (8, 3, 0) 74 (8, 3, 1) 75 (7, 5, 1) 76 (6, 6, 2) 77 (8, 3, 2) 78 (7, 5, 2) 79 no 80 (8, 4, 0) 81 (9, 0, 0) 82 (9, 1, 0) 83 (9, 1, 1) 84 (8, 4, 2) 85 (9, 2, 0) 86 (9, 2, 1) 87 no 88 (6, 6, 4) 89 (9, 2, 2) 90 (9, 3, 0) 91 (9, 3, 1) 92 no 93 (8, 5, 2) 94 (9, 3, 2) 95 no 96 (8, 4, 4) 97 (9, 4, 0) 98 (9, 4, 1) 99 (9, 3, 3) 100 (10, 0, 0)
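The "no" entries follow Legendre's three-square theorem: n is a sum of three squares unless it has the form 4^a*(8b+7). A minimal sketch checking the same range against that criterion, reusing sumOfSquares3:
# compare the search with Legendre's three-square criterion
def excluded_by_legendre(n):
    while n % 4 == 0:
        n = n // 4
    return n % 8 == 7

for k in [4..100]:
    found = (sumOfSquares3(k) != "no")
    if found == excluded_by_legendre(k):
        print("mismatch at", k)
print("check finished")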
%var x,y
plot3d((x-y)^2+(sqrt(2-x^2)-9/y)^2,(x,0,sqrt(2)),(y,2,3))
3D rendering not yet implemented
|
# Problems with converting ieeetran to lncs
I have to convert my paper from the IEEEtran format to LNCS, and it is my first experience with this format. For this reason, I downloaded the llncs2e.zip package, used llncs instead of ieeetran in the .tex file, and changed the title section.
The first problem is that the margins I get with this class are different from the margins in the instruction file (they are much larger). Also, it leaves the first page blank and just puts the emails at the top. In addition, there are other deviations from the instructions; for example, the captions do not follow the instructions. Should I correct them manually?
\begin{document}
\title{Congestion Control for Vehicular Environments by Adjusting IEEE 802.11 Contention Window Size}
\author{Ali Balador \inst{1}, Carlos T. Calafate \inst{1}, Juan-Carlos Cano \inst{1} \and Pietro Manzoni\inst{1}}
\institute{Universitat Politecnica de Valencia\\Camino de Vera, s/n, 46022 Valencia, Spain}
\email{[email protected], {calafate, jucano, pmanzoni}@disca.upv.es}
\maketitle
• Please provide an MWE of what you have done. The first page blank means that there are some problems with the title section of your paper. If you print on an A4 or lettersize you have "large" but correct margin. For the caption llncs style should produce them with the right formatting (you should not change them manually) – Guido Oct 14 '13 at 8:23
• You are right it should produce them but the captions are not similar to the instructions. – user32422 Oct 14 '13 at 8:27
I used the below code and it works.
\usepackage{url}
\urldef{\mailsa}\path|[email protected],{calafate, jucano, pmanzoni}@disca.upv.es|
|
# Chapter34.pdf - The Photoelectric Effect Chapter 34 Lecture...
Chapter 34 Lecture: Quantum Physics (Slide 34-1)
The Photoelectric Effect
The photoelectric effect is the ejection of electrons from a metal surface when light shines on the surface.
Classical physics predicts that:
- it should take some time before electrons absorb enough energy to be ejected.
- the electrons' kinetic energy should depend on the light intensity.
- the wavelength of the light shouldn't matter.
Experiment shows that:
- electrons are ejected immediately.
- electron energy is independent of light intensity.
- electron kinetic energy depends on wavelength, with no electrons ejected for wavelengths longer than some maximum value that depends on the metal. (Slide 34-2)
Einstein's Resolution
In 1905, the same year he developed special relativity, Einstein offered an explanation of the photoelectric effect. Einstein suggested that light energy comes in particle-like "bundles" called quanta or photons. The energy of a single photon is given by E = hf, with f the frequency of the light and h = 6.626 x 10^-34 J·s Planck's constant. The more intense the light, the more photons; but the energy of each photon is unrelated to the light intensity. Each material has a minimum energy, called the work function Φ, required to eject an electron. Each photon can give all its energy to an electron in the metal. The electrons emerge with maximum kinetic energy given by K_max = hf - Φ. (Slide 34-3)
Einstein's Interpretation
The maximum kinetic energy K_max = hf - Φ depends only on the frequency and the work function, not on the light intensity. The maximum kinetic energy increases with increasing frequency. The effect is instantaneous since there is a one-to-one interaction between the photon and the electron. The effect is not observed below a certain cutoff frequency f_c, since the photon energy must be greater than or equal to the work function: f_c = Φ/h. The corresponding cutoff wavelength is given by λ_c = hc/Φ, with hc = 1,240 eV·nm. The figure shows the linear relationship between K_max and f; the slope of each line is h. (Slide 34-4)
Clicker Question
What is the approximate energy of a photon of red light (λ = 635 nm)?
A. 0.5 eV
B. 1.0 eV
C. 2.0 eV
D. 3.0 eV (Slide 34-5)
Clicker Question
A red and green laser both produce light at a power level of 5 mW. Which one produces more photons/second?
A. Red
B. Green
C. Same (Slide 34-6)
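Worked check for the first question: using E = hc/λ with hc = 1,240 eV·nm, a 635 nm photon carries E ≈ 1240/635 ≈ 1.95 eV, i.e. about 2.0 eV (choice C). For the second question, at equal power the red laser emits more photons per second, because each red photon carries less energy than a green photon (choice A).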
|
Translation results for "sum" (query time: 0.158 s)
sum: 和 (7,216 example sentences), 求和 (386), 总和 (454)
和 ("sum")
THE CENTRAL LIMIT THEOREM FOR THE SUM OF A RANDOM NUMBER OF INDEPENDENT RANDOM VARIABLES AND ITS APPLICATIONS IN MARKOV CHAINS 随机个数独立随机变量之和的中心极限定理及其在马尔可夫链上的应用 短句来源 ON THE RANGE OF SUM OF MONOTONE MAPPINGS AND NONLINEAR INTEGRAL EQUATIONS OF URYSOHN TYPE 单调映射之和的值域与Urysohn型非线性积分方程 短句来源 The Expectation of the Sum of Consecutive AL Variables and the Double“Zero-One”Linear Estimates for the Standard Deviation of Normal Population 相邻AL变数和的期望值与正态总体标准差的双“零—壹”线性估计量 短句来源 ON THE CENTRAL LIMIT THEOREM FOR THE SUM OF THE RANDOM NUMBERS OF INDEPENDENT RANDOM VARIABLES 关于随机个数独立随机变量之和的中心极限定理 短句来源 Some results on the estimaticn of moments of the sum of independent random variables with symmetric distributions 关于对称型分布的随机变量独立和的矩的估计的几点结果 短句来源 更多
求和 ("summation")
Computer Method on sum from i=1 to n(i~m) sum from i=1 to n(i~m)求和的计算机算法 短句来源 In this paper, we probe into the method of Summation to the series sum from n=1 to ∞1/[n(n+1)…(n+k)]~m[k≥0 is integar, m = 1, 2, 3; k≠0(m = 1 )]. 本文探讨了级数sum from n=1 to ∞(1/[n(n+1)…(n+k)]~m)(其中k≥0为整数,m=1,2,3且m=1时k≠0)的求和方法. 短句来源 A Elementary Method Summating the Series sum form n=1 to ∝(1/n~(2m)) 级数sum form n=1 to ∝(1/n~(2m))的一种初等求和法 短句来源 A Formula for Calculating the Sum of sum from n=1 to ∞(f(n)x~(n-1)) 幂级数sum from n=1 to ∞(f(n)x~(n-1))的一个求和公式 短句来源 Q~2-Dependence of the Gerasimov-Drell-Hearn Sum Rule Gerasimov-Drell-Hearn求和规则的Q~2依赖性质 短句来源 更多
总和 ("sum total")
The sum of positive cell area was(7.86±0.49)×102 μm2, and the sum of integral absorbance was (13.66±70.00)×105 in the cmy-c experimental group. Compared with the cmy-c control group, there was significant difference (P < 0.01). cmy-c实验组阳性细胞面积总和(7.86±0.49)×102μm2,积分吸光度总和为(13.66±70.00)×105,与cmy-c对照组比较,差异有显著性意义(P<0.01)。 短句来源 The results showed that the sum of positive cell area was (16.11±3.01)×102 μm2, and the sum of integral absorbance was (19.9±2.42)×105 in the bcl-2 experimental group. Compared with the bcl-2 control group, there was significant difference (P < 0.01). 实验结果bcl-2实验组阳性细胞面积总和为(16.11±3.01)×102μm2,积分吸光度总和为(19.9±2.42)×105,与bcl-2对照组比较,差异有显著性意义(P<0.01)。 短句来源 Teachers' quality is the sum of ideological quality,professionalism,competence and mental status,and it is the key to medical education. 教师素质是教师在教育和管理工作中所具备的思想品质、业务、能力、心理诸多要素的总和,它是医学教育的关键; 短句来源 Result: Normal blood flow volume of vertebral artery,right side of it stadys at 130.47±56.10ml/min while left side 164.21±63.54ml/min,the sum of both side is 293.70±80.74ml/min. 结果:正常人椎动脉血流量右侧130.47±56.10ml/min,左侧164.21±63.54ml/min,双侧血流量总和293.70±80.74ml/min; 短句来源 Cladosporium,Alternaria and Phoma are the dominant species of the two counties and the sum of them account for 81.25 % and 85.94 % in spore-producing fungus,respectively. 2县青稞籽粒的真菌优势种都是C ladosporium、Alternaria和Phoma,优势种总和分别占尼木、林周2县产孢真菌的81.25%、85.94%。 短句来源 更多
总数 ("total number")
This assembly has definite assembling technological level with size of the assembly 95mm×67mm×1mm , it’s wiring density 12. 5wires/cm2,pore density 7.8pores/cm2 and sum of components and devices 122. 该组装件尺寸为95mm×67mm×1mm,布线密度为12.5根/cm,介质通孔密度7.8个/cm2,元器件总数为122只,显示了该组装件的组装工艺水平。 短句来源 The sum of heterotrophic bacteria was( 2.37±1.83)×10~7 cfu/L and Vibrio were (11.77±13.86)×10~5 cfu/L in cultural water, but in sediment surface the heterotrophic bacteria were (7.90±29.08)×10~8 cfu/L, the Vibrio (1.18±3.27)×10~7 cfu/L. 健康虾池水体异养菌群数量(2 37±1 83)×107cfu/L,弧菌数量(11 77±13 86)×105cfu/L,底泥异养细菌总数(7 90±29 08)×108cfu/L,弧菌总数(1 18±3 27)×107cfu/L。 短句来源 the sum of ischemia ST segments was 58 at baseline,(47 after) ISDN was infused. 缺血ST段片段总数,基线为58段,ISDN静脉滴注后47段。 短句来源 the sum of ischemia ST segments was 60 at baseline,51 after ISDN was infused. 缺血ST段片段总数,基线为60段,ISDN静脉滴注后51段。 短句来源 the sum of ischemia ST segments was 64 at baseline,41 after ISDN was infused. 缺血ST段片段总数,基线为64段,ISDN静脉滴注后41段。 短句来源 更多
To illustrate how the query word and its translations are actually used in idiomatic English, the site also lists example sentences drawn from English-language originals:
sum
We show that ${\mathcal M}(G,R)$ is a symmetric tensor category, i.e., the motive of the product of two projective homogeneous G-varieties is a direct sum of twisted motives of projective homogeneous G-varieties. We also study the problem of uniqueness of a direct sum decomposition of objects in ${\mathcal M}(G,R).$ We prove that the Krull--Schmidt theorem holds in many cases. For a split reductive algebraic group, this paper observes a homological interpretation for Weyl module multiplicities in Jantzen's sum formula. The new interpretation makes transparent for GLn (and conceivable for other classical groups) a certain invariance of Jantzen's sum formula under "Howe duality" in the sense of Adamovich and Rybnikov. From elements of the invariant algebra C[V]G we obtain, by polarization, elements of C[kV]G, where k ≥ 1 and kV denotes the direct sum of k copies of V. 更多
This paper is a supplement to the author's previous paper "The Constants and Analysis of Rigid Frames", published in the first issue of the Journal. Its purpose is to amplify as well as to improve the method of propagating joint rotations developed, separately and independently, by Dr. Klouěek and Prof. Meng, so that the formulas are applicable to rigid frames with non-prismatic bars and of closed type. The method employs joint propagation factor between two adjacent joints as the basic frame constant and the... This paper is a supplement to the author's previous paper "The Constants and Analysis of Rigid Frames", published in the first issue of the Journal. Its purpose is to amplify as well as to improve the method of propagating joint rotations developed, separately and independently, by Dr. Klouěek and Prof. Meng, so that the formulas are applicable to rigid frames with non-prismatic bars and of closed type. The method employs joint propagation factor between two adjacent joints as the basic frame constant and the sum of modified stiffness of all the bar-ends at a joint as the auxiliary frame constant. The basic frame constants at the left of right ends of all the bars are computed by the consecutive applications of a single formula in a chain manner. The auxiliary frame constant at any joint where it is needed is computed from the basic frame constants at the two ends of any bar connected to the joint, so that its value may be easily checked by computing it from two or more bars connected to the same joint.Although the principle of this method was developed by Dr. Klouěek and Prof. Meng, the formulas presented in this paper for computing the basic and auxiliary frame constants, besides being believed to be original and by no means the mere amplification of those presented by the two predecessors, are of much improved form and more convenient to apply.By the author's formula, the basic frame constants in closed frames of comparatively simple form may be computed in a straight-forward manner without much difficulties, and this is not the case with any other similar methods except Dr. Klouěek's.The case of sidesway is treated as usual by balancing the shears at the tops of all the columns, but special formulas are deduced for comput- ing those column shears directly from joint rotations and sidesway angle without pre-computing the moments at the two ends of all the columns.In the method of propagating unbalanced moments proposed by Mr. Koo I-Ying and improved by the author, the unbalanced moments at all the bar-ends of each joint are first propagated to the bar-ends of all the other joints to obtain the total unbalanced moments at all the bar-ends, and then are distributed at each joint only once to arrive at the balanced moments at all the bar-ends of that joint. Thus the principle of propagating joint rotations with indirect computation of the bar-end moments is ingeneously applied to propagate unbalanced moments with direct computation of the bar-end moments, and, at the same time, without the inconvenient use of two different moment distribution factors as necessary in all the onecycle methods of moment distribution. The basic frame constant employed in this method is the same as that in the method of propagating joint rotations, so that its nearest approximate value at any bar end may be computed at once by the formula deduced by the author. Evidently, this method combines all the main advantages of the methods proposed by Profs.T. Y. Lin and Meng Chao-Li and Dr. 
Klouěek, and is undoubtedly the most superior one-cycle method of moment distribution yet proposed as far as the author knows.Typical numerical examples are worked out in details to illustrate the applications of the two methods. 本文為著者前文“剛構常數與剛構分析”之補充,其目的在將角變傳播法及不均衡力矩傳播法加以改善,以便實用。此二法均只需一個公式以計算剛構中所有各桿端之基本剛構常數(即任何二相鄰結點间之角變傳播係數),將此項公式與柯勞塞克之公式相比較,藉以指出前者較後者為便於應用,並亦可用之以直接分析較簡單之閉合式剛構,此外補充說明此法中之剛構常數與定點法之關係,剛構有側移時計算各結點角變所需之各項公式亦行求出。不均衡力矩傳播法係顧翼鹰同志最近研究所得者,既係直接以桿端力矩為計算之對象,而且只須採用不均衡力矩分配比將各結點作用於各桿端不均衡力矩之總和,一次分配,即得所求各桿端分配力矩之總值,實係力矩一次分配法之一大改進,著者將顧氏之法加以推廣与改善,使其原則簡明而計算便捷,著者認為此法係將林、柯、孟三氏法之所有優點熔冶於一爐,實可稱為现下最優之力矩一次分配法。最後列舉算例,以說明此二法在實際工作中之應用。 (1) Sodium salt of reduced codehydrogenase I has been obtained in good yield as a dry powder from codehydrogenase I by reduction with alcohol and alcohol dehydrogenase. This preparation was stable for at least 5 months when kept dry at -15℃. (2) The properties of the particle-bound codehydrogenase I cytochrome reductase system in heart muscle preparation were found to differ considerably from those of the soluble enzyme as obtained by Mahler et al. Among other things, the affinity for cytochrome c of the particle-bound... (1) Sodium salt of reduced codehydrogenase I has been obtained in good yield as a dry powder from codehydrogenase I by reduction with alcohol and alcohol dehydrogenase. This preparation was stable for at least 5 months when kept dry at -15℃. (2) The properties of the particle-bound codehydrogenase I cytochrome reductase system in heart muscle preparation were found to differ considerably from those of the soluble enzyme as obtained by Mahler et al. Among other things, the affinity for cytochrome c of the particle-bound enzyme is much greater than the soluble enzyme. The Michaelis constant for cytochrome c of the former is only one twelfth of that of the latter.(Fig. 2A). (3) With either oxygen or excess cytochrome c as electron acceptor, it was found that the overall activity, in terms of rate of oxygen consumption or cytochrome c reduction, when both succinate and reduced codehydrogenase I were oxidized simultanously, did not represent the sum of the rates of oxidation when these two substrates were separately oxidized but equalled only the faster of the two separate oxidation rates(Fig. 5, Tables 1, 2). If 2,6-dichlorophenol indophenol was used as the electron acceptor, the overall rate of simultaneous oxidation of these two substrates was found to equal exactly the sum of the rates of separate oxidation(Table 3). (4) When either oxygen or excess cytochrome c was used as the electron acceptor, reduced codehydrogenase I and succinate each inhibited the rate of oxidation of the other(Figs 4, 6 & 7). Evidence has been presented to show that the inhibition of succinate oxidation by reduced codehydrogenase I is not due to the accumulation of oxaloacetate. (5) When malonate was also added to the reaction mixture, succinate no longer produced any inhibition of the oxidation of reduced codehydrogenase I(Fig. 8). (6) It is therefore concluded that in heart muscle preparation both succinate and reduced codehydrogenase I are oxidized by cytochrome c through a common, velocity limiting factor. This is in accordance with the view previously reached by some workers from studies on the action of certain inhibitors. However, it should be noted that in our experiments no agents which might produce any conceivable change in the colloidal structure of the enzyme system has been employed. 
(7) It should be emphasized that our results clearly show that great caution must be exercised in drawing conslusion on the role an enzyme might play in a complex enzyme system from studies of the properties of a solubilized enzyme. (8) It is believed that the competition of two enzyme systems for a common linking factor as demonstrated in this report has provided a new method for studies on the mutual relations of two or more enzyme systems. (一)本報告提供了一個從輔酶Ⅰ,用酶還原法製備還原輔酶Ⅰ的方法。我們所製得的還原輔酶Ⅰ鈉鹽乾粉,可以在低温保存數月而不被氧化。 (二)與心肌製劑中顆粒相結合的輔酶Ⅰ細胞色素還原酶系,和用乙醇抽出的水溶性的輔酶Ⅰ細胞色素還原酶的性質頗不相同。其中比較重要的不同點是對於細胞色素c的親力,前者遠大於後者,其米氏常數僅約為後者的十二分之一。 (三)用一心肌顆粒製劑作為材料,無論用氧或過量之細胞色素c作為氫受體,還原輔酶Ⅰ與琥珀酸同時氧化時的總速度,不等於二者分別氧化時速度之和,而僅等於其中氧化較快者單獨氧化時之速度。但如用[2,6]二氯靛酚作為氫受體時,二者共同氧化時之總速度完全等於二者分別氧化時速度的和。 (四)當用氧或過量之細胞色素c作為氫受體時,琥珀酸與還原輔酶Ⅰ能彼此互相抑制對方氧化的速度。有足夠的實驗材料說明,還原輔酶Ⅰ對於琥珀酸氧化的抑制,不是由於草醯乙酸聚集的緣故。 (五)如果在反應混合物中同時含有琥珀酸脫氫酶的專一抑制劑,丙二酸,則琥珀酸對於還原輔酶Ⅰ氧化作用的抑制即被解除。 (六)根據以上的實驗結果,可以認為,還原輔酶Ⅰ及琥珀酸先通過一個共同的因子與細胞色素c作用。這個共同的因子在一般情形之下,也是...(一)本報告提供了一個從輔酶Ⅰ,用酶還原法製備還原輔酶Ⅰ的方法。我們所製得的還原輔酶Ⅰ鈉鹽乾粉,可以在低温保存數月而不被氧化。 (二)與心肌製劑中顆粒相結合的輔酶Ⅰ細胞色素還原酶系,和用乙醇抽出的水溶性的輔酶Ⅰ細胞色素還原酶的性質頗不相同。其中比較重要的不同點是對於細胞色素c的親力,前者遠大於後者,其米氏常數僅約為後者的十二分之一。 (三)用一心肌顆粒製劑作為材料,無論用氧或過量之細胞色素c作為氫受體,還原輔酶Ⅰ與琥珀酸同時氧化時的總速度,不等於二者分別氧化時速度之和,而僅等於其中氧化較快者單獨氧化時之速度。但如用[2,6]二氯靛酚作為氫受體時,二者共同氧化時之總速度完全等於二者分別氧化時速度的和。 (四)當用氧或過量之細胞色素c作為氫受體時,琥珀酸與還原輔酶Ⅰ能彼此互相抑制對方氧化的速度。有足夠的實驗材料說明,還原輔酶Ⅰ對於琥珀酸氧化的抑制,不是由於草醯乙酸聚集的緣故。 (五)如果在反應混合物中同時含有琥珀酸脫氫酶的專一抑制劑,丙二酸,則琥珀酸對於還原輔酶Ⅰ氧化作用的抑制即被解除。 (六)根據以上的實驗結果,可以認為,還原輔酶Ⅰ及琥珀酸先通過一個共同的因子與細胞色素c作用。這個共同的因子在一般情形之下,也是這兩個酶系統的速度限制因子。應該指出在我們的實驗中,並未使用任何可能影響酶系統結構的條件,因此我們的結果是在一個比較接近於生理狀態的情形之下獲得的。 (七)應該着重指出,從本報告的結果可以看到,一個用人為的方法從複雜酶系上溶解下來的酶的性質,有時並不能代表這個酶在有組織的酶系統中的真實情况。 (八)我們相信,本報告所說明的兩酶系競爭一個共同因子的一些現象,將为研究複雜酶系之間的相互關係,提供一個新的方法。 A new approximation method is proposed in this article for the discussion of molecular structures,and this new method includes the two well-known theories,molecular orbital theory and electron-pair bond theory as two special cases.Let a molecule have n bonds and let the ith bond be described by the anti-symmetrical two-electron bond function ψ_i(v_(2i-1),v_(2i)).(If there exist one- electron,three-electron or many-electron bonds,they can be similarly described by the corresponding one-electron,three-electron... A new approximation method is proposed in this article for the discussion of molecular structures,and this new method includes the two well-known theories,molecular orbital theory and electron-pair bond theory as two special cases.Let a molecule have n bonds and let the ith bond be described by the anti-symmetrical two-electron bond function ψ_i(v_(2i-1),v_(2i)).(If there exist one- electron,three-electron or many-electron bonds,they can be similarly described by the corresponding one-electron,three-electron or many-electron bond func- tions.) Then the stationary state of the molecule is represented by the follow- ing wave function Ψ, where the summation is over all permutations of 1,2,……,2n except those within the interior of the functions,since each ψ_i is already anti-symmetrical.Obviously (2~n/((2n)/!))~(1/2) is the normalization factor. 
By quantum mechanics the energy of the molecule equals (1) here H_i,T_(ij) and S_(11)' are respectively the following three kinds of operators, (2) (3) (4) The third term of equation (1) is the exchange integral of electrons 1 and 1', while (1,2') is that of electrons 1 and 2'.According to the definition of bond functions,ψ_i may be written as (5) Substituting equation (5) into equation (1) and carrying out the integration over spin coordinates,we obtain (6) It can be easily seen from equation (6) that the combining energy of a mole- cule consists of two parts,one being the binding energy of the bonds represent- ed by the first term of equation (6),and the other being the interaction energy of the bonds denoted by the second term of that equation. If we choose certain functions φ_i~('s) involving several parameters and substi- tute them into equation (6),we may determine the values of those parameters by means of the variation principle. For the discussion of bond interaction energies,we develop a new method for the evaluation of certain types of three-center and four-center integrals.The interaction energy of a unit positive charge and an electron cloud of cylindrical- symmetry distribution may be written as (7) where (8) and R_0~2=a~2+b~2+c~2 The interaction energy of two electron clouds both of cylindrical-symmetry distributions with respect to their own respective axes is evaluated to be (9) (10) where is to sum over j from zero to the lesser value of n-2i and m, is to sum over i from zero to the integral one of n/2 and (n-1)/2,and is to sum over all cases satisfying the relation =m-j,while b_(n,n-2i) represents the coefficient of x~(n-2i) in the n th Legendre polynomial. 本文在分子结构理论方面,作了下列两点贡献:首先建议了用双电子或多电子键函数作为近似基础,来计算分子的近似能量和近似电子云分布。这样计算得来的结果,一定会比用分子轨道理论或电子配对理论好,因为它更真实的反映了分子的化学性质,同时它也包括了后两者,而以它们为特例。我们得到了分子结合能的表示式,用表示式证明了分子结合能由两部分组成:一部分是键的结合能,另一部分是键与键间的作用能。其次是建议了一种新方法,把在计算化学键相互间的作用能中遇到的一些三中心和四中心积分,还原为容易计算的二中心积分。这方法比以往所用的好,因为它计算比较简单,同时限制性也小。 << 更多相关文摘
|
Pat Blythe: Yesiree! CMW 2015 Continues….
Posted in Opinion on May 6, 2015 by segarini
Yessirreee….it’s that time once again. My dawgs are barking, my body’s aching and wondering what the hell I’m doing to it. Ahhhh….CMW. It’s been such a busy four days and the party hasn’t even started yet.
|
# How to Latex-print matrices as a bold uppercase letter?
Hello Sage users!
I was reading the documentation and trying to find a solution for a long-standing LaTeX wish of mine: if I define $\mathbf A$ to be a matrix at some scope (a document, chapter or section, for example), why do I have to write \mathbf explicitly every time?
So my question for Sage is this: say I have a simple 2x2 matrix
$$\mathbf A = \left[ \begin{array}{rr} a & b \\ c & d \end{array} \right]$$
I want to, at times, do symbolic manipulations with the letter representation of this matrix, then use the latex(...) command to print it as a bold upcase letter, and to use the full matrix form at other times (with its corresponding LaTeX form).
Example of what I want:
sage: A = matrix(2,2,var('a', 'b', 'c', 'd'))
sage: latex(A)
\left(\begin{array}{rr}
a & b \\
c & d
\end{array}\right)
sage: latex(A.as_symbol())
\Bold{A}
Similarly, I would like it very much to perform symbolic calculations with A in sage defined as an "abstract matrix", i.e., a symbol of type matrix, that will render as a bold uppercase A (because is of type matrix) but will be used as a symbol in computations. Example.
sage: var('A', 'B', 'c')
sage: A,B :: AbstractMatrix
sage: latex(A)
\Bold{A}
sage: A*B
A*B
sage: (A+B)*c
A*c + B*c
sage: latex(A*B)
\Bold{A}\Bold{B}
sage: latex((A+B)*c)
\Bold{A}*c + \Bold{B}*c
Is it possible to do this in Sage? And how?
Added examples of what I meant.
Regarding the first question, you should understand that when you write:
sage: A = matrix(2,2,var('a', 'b', 'c', 'd'))
A is a Python name, that is a kind of pointer to the matrix object, but the matrix object doesn't know anything about which names point to it. For example, you can then do:
sage: B = A
Here B is just another Python name pointing to the very same object in memory as you can check with:
sage: B is A
True
So, if you want to type A.as_symbol(), there is nothing in the matrix object that records the fact that the matrix has "A" as a string representation.
We can define a function that finds one of the Python names of the matrix object within the globals() dictionary and pick the first one that appears (and that is not ugly like _42), but it is kind of artificial, like:
sage: def sym(a):
....: for i in globals():
....: if not i.startswith('_') and globals()[i] is a:
....: return SR.var(i, latex_name="\Bold{{{}}}".format(i))
You have:
sage: latex(sym(A))
{\Bold{A}}
But of course:
sage: latex(sym(B))
{\Bold{A}}
Regarding the second question, it is a bit different, since the symbolic variable knows a string representation of itself, and you can even tune a LaTeX one as follows:
sage: A = SR.var('A', latex_name="\Bold{A}")
sage: B = SR.var('B', latex_name="\Bold{B}")
sage: c = SR.var('c')
sage: A*B
A*B
sage: latex(A*B)
{\Bold{A}} {\Bold{B}}
sage: e = (A+B)*c
sage: e
(A + B)*c
sage: e.expand()
A*c + B*c
sage: latex(e.expand())
{\Bold{A}} c + {\Bold{B}} c
|
Provides a better estimate of the common-mode ("COM") signal by excluding samples that fall within fixed regions on the sky specified by an external mask.
If com.zero_mask is set to one of "REF", "MASK2" or "MASK3", then an NDF will be obtained using the specified ADAM parameter (REF, MASK2 or MASK3) and used as a user-defined mask. Setting com.zero_mask to an integer value larger than zero has the same effect as setting it to "REF". Setting it to an integer less than or equal to zero results in no external mask being used with the COM model. Note, using "REF" ensures that the mask and the output image of MAKEMAP are on the same pixel grid; using "MASK2" or "MASK3" does not provide this guarantee (it is then the user's responsibility to ensure that the supplied masks are aligned with the output image in pixel coordinates). The pixels in the map that are to be included in the common-mode estimation should be set to the bad value in the mask. All other pixels will be excluded from the COM estimation. [0]
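For example, since any positive integer value behaves like "REF", a minimal line in a MAKEMAP configuration file could read as follows (a hypothetical excerpt; the rest of the configuration is omitted):
com.zero_mask = 1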
|
# How many households have both treadmill and exercise bike? (Venn Diagram problem)
Full question: Suppose that among 1000 households surveyed, 30 have neither an exercise bicycle nor a treadmill, 50 have only an exercise bicycle, and 60 have only a treadmill. How many households have both?
I am trying to figure out a formula for this kind of problem...
I know that the answer must have come from adding 30 + 50 + 60 and then subtracting this from 1000. Is this because 110 only have one or the other and 30 have neither, so the difference between the universe and those cases must be the case of having both?
A good formula is this: $$|A\cup B|=|A\setminus B|+|B\setminus A|+|A\cap B|$$
In this case, you know that $|A\cup B|=1000-30=970$, and $|A\setminus B|=50$, and $|B\setminus A|=60$. That leaves only one term unknown.
This works, because $A\cup B$ is the disjoint union of the other three sets in the formula.
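Plugging the numbers from the question into this formula as a quick check: $$|A\cap B| = |A\cup B| - |A\setminus B| - |B\setminus A| = 970 - 50 - 60 = 860,$$ so 860 households have both an exercise bicycle and a treadmill.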
Another, related formula, equivalent to the above one, that also comes up in such problems, is this one:
$$|A\cup B| = |A| + |B|- |A\cap B|$$
Whether you use one or the other just depends on which information you start out with.
• Thank you for taking the time to give me a proper formula! – numericalorange Dec 12 '17 at 20:42
• @numericalorange You're welcome. I've added another formula to the answer, which you may also find useful some time. – G Tony Jacobs Dec 13 '17 at 15:53
Your reasoning is correct. I suspect the instructor wanted you to draw a "pretty picture" Venn diagram, but I don't see the need in this case, given how clear your reasoning was.
|
# Chain rule while differentiating
I am trying to find the derivative of a function defined in polar coordinates with respect to $x$ and $y$. My function is defined as follows:
$v_x(r, \theta ) = v_r \cos (\theta ) - v_{\theta }\sin (\theta )$
To do this, I start by defining the relation between Cartesian and Polar coordinates:
(* Define the mapping between Cartesian and Polar coordinate systems. *)
x[r_, θ_] = r Cos[θ];
y[r_, θ_] = r Sin[θ];
Then I define the function and find its derivative with respect to $x$:
Subscript[v, r][r_, θ_] = Subscript[v, r][r, θ] Cos[θ] - Subscript[v, θ][r, θ] Sin[θ];
D[Subscript[v, r][r, θ], x]
I am getting 0 because Mathematica is not considering the relation between $r$ and $x$. Is there any way to tell Mathematica to use the chain rule to find the derivative of $v_x$ with respect to $x$?
The other problem is that Mathematica is treating the subscripts as variables (which is reasonable); is there any way to tell it that the subscripts are only notational symbols?
EDIT: The function is better defined as:
vx[r_, θ_] = vr[r, θ] Cos[θ] - vtheta[r, θ] Sin[θ];
to avoid evaluating subscripts and possibly having recursion.
In your case it might be more convenient to define the inverse transformation :
rho[x_, y_] = Sqrt[x^2 + y^2]
theta[x_, y_] = ArcTan[x, y]
vx[r_, \[Theta]_] = vr[r, \[Theta]] Cos[\[Theta]] - vtheta[r, \[Theta]] Sin[\[Theta]];
Then this will use the chain rule :
D[vx[rho[x, y], theta[x, y]], x]
One can simplify the result in terms of the polar coordinated :
Simplify[D[vx[rho[x, y], theta[x, y]], x] /. {x^2 + y^2 -> rho^2, ArcTan[x, y] -> theta, x -> rho Cos[theta], y -> rho Sin[theta]}, Assumptions -> {rho >= 0}]
This works but it produces everything in terms of x and y, which is very complicated! – Rafid Aug 4 '12 at 11:03
well if it works you could always upvote it to indicate that you appreciate his effort :) – acl Aug 4 '12 at 11:21
@Rafid Please see edit for some additional simplification. – b.gatessucks Aug 4 '12 at 11:25
@b.gatessucks, for total conversion back to polar coordinates you will need to add x -> rho Cos[theta], y -> rho Sin[theta] to the replacement rules. – Simon Woods Aug 4 '12 at 11:41
@SimonWoods Thank you. – b.gatessucks Aug 4 '12 at 12:06
You can use the total derivative Dt:
x[r_, \[Theta]_] = r Cos[\[Theta]];
y[r_, \[Theta]_] = r Sin[\[Theta]];
then for instance
Dt[a[r, \[Theta]]*Cos[\[Theta]] - b[r, \[Theta]]*Sin[\[Theta]], x]
does this
I can't test your example because Subscript[v, r][r_, \[Theta]_] = Subscript[v, r][r, \[Theta]] Cos[\[Theta]] - Subscript[v, \[Theta]][r, \[Theta]] Sin[\[Theta]] hits the recursion limit (because the way it's defined and the evaluation sequence works, it'll never finish). However, here's how to indicate that something is a constant. Suppose I try:
Dt[Sin[\[Theta]] + c, x]
but $c$ is a constant; I can indicate this like so:
Dt[Sin[\[Theta]] + c, x, Constants -> {c}]
The problem with this solution is that it is not substituting the values of dr/dx and dtheta/dx, so it is not really making use of the Cartesian-polar equations! – Rafid Aug 4 '12 at 12:46
@Rafid Right. So you expect to define x[r,Theta]:=etc and have Mathematica automatically insert the values for $\partial_x\theta$ etc? That's not going to be 2 lines of code. By the way, did you notice that the code in your question goes into infinite recursion? – acl Aug 4 '12 at 12:53
Well, honestly, I am relatively new to Mathematica, so I thought this might be possible since I am already passing in the relation. Anyway, thanks, one vote :-) And yes, I did notice the recursion, I am going to edit my question now. – Rafid Aug 4 '12 at 13:02
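For readers who want to see the chain rule spelled out in code outside Mathematica, here is a minimal SymPy sketch of the same idea as the accepted approach, i.e. defining the inverse map r(x, y), θ(x, y) and differentiating the composite (the function names v_r and v_theta are illustrative):
import sympy as sp

x, y = sp.symbols('x y')
r = sp.sqrt(x**2 + y**2)   # inverse transformation
theta = sp.atan2(y, x)

# v_r and v_theta are kept abstract as functions of (r, theta)
vr = sp.Function('v_r')(r, theta)
vtheta = sp.Function('v_theta')(r, theta)

vx = vr * sp.cos(theta) - vtheta * sp.sin(theta)

# d(v_x)/dx via the chain rule, expressed in x and y
dvx_dx = sp.simplify(sp.diff(vx, x))
print(dvx_dx)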
|
FARGO3D
CavityRatio
This parameter is specific to a set of simulations performed to evaluate the efficiency of planet trapping at a cavity edge. Unless you want to reproduce these simulations, you should simply ignore this variable. If you use it, you also have to define `CavityRadius` and `CavityWidth`. `CavityRatio` is the target surface density ratio of the outer to the inner disk. Outside of the cavity (r > `CavityRadius`), the surface density profile is whatever is given by `Sigma0` and `SigmaSlope`, while inside the cavity it corresponds to this profile divided by `CavityRatio`. Similarly, outside the cavity the viscosity is whatever is prescribed by `Viscosity` or `AlphaViscosity`, while inside the cavity it corresponds to either value multiplied by `CavityRatio`.
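For orientation, a hypothetical excerpt of the disk-related block of a parameter file using these names (the values are placeholders, and the exact layout of your own .par file may differ):
Sigma0          6.3661977237e-4
SigmaSlope      0.5
AlphaViscosity  1.0e-3
CavityRadius    1.0
CavityRatio     10.0
CavityWidth     1.0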
|
# 28th IAEA Fusion Energy Conference (FEC 2020)
May 10 – 15, 2021
Virtual Event
Europe/Vienna timezone
The Conference will be held virtually from 10-15 May 2021
## Development of megawatt radiofrequency ion source for the neutral beam injector on HL-2A tokamak
May 14, 2021, 8:30 AM
4h
Virtual Event
#### Virtual Event
Regular Poster Fusion Energy Technology
### Speaker
Dr Longwen Yan (Southwestern Institute of Physics)
### Description
Development of megawatt radiofrequency ion source for the neutral beam injector on HL-2A tokamak
L.W. Yan, G.J. Lei, M. Li, X.M. Zhang, M. Zhao, Y.N. Bu, W.M. Xie, Y.X. Zhang, G.Q. Zou, H.L. Wei, L.P. Huang, S.F. Geng, X.Z, Ma, Q. Yu, J.Y. Cao, Bo Lu, Z.B. Shi, C.P. Zhou, M. Xu and X.R. Duan
Southwestern Institute of Physics (SWIP), Chengdu, Sichuan 610225, China
E-mail: [email protected]
Neutral beam injection (NBI) is of major importance for plasma heating, current drive, fueling and profile control in large tokamaks, and its key component is the ion source. The construction of the NBI system for ITER still faces some challenges; its typical parameters are 16.5 MW / 1 MV / 3600 s. The radiofrequency (RF) negative ion source is a suitable choice for the ITER NBI $[1]$.
Recently, an RF ion source with megawatt power extraction has been developed for the neutral beam injector on the HL-2A tokamak at SWIP. A full solid-state RF generator with an output power of 80 kW at a frequency of 2 MHz is built by combining eight solid-state RF modules of $P_{RF}$ = 10 kW each. The line electric efficiency of the whole RF generator reaches 92%, and its voltage standing wave ratio (VSWR) is 1.01 after using a fully automatic matching technique. A quartz vessel with an inner diameter of 250 mm is adopted directly to withstand atmospheric pressure, which dramatically simplifies the ion source structure, as shown in figure 1.
Nowadays, the extraction parameters of RF hydrogen ion beam are 32 kV/20 A/0.1 s on a test bed using the power of $P_{RF}$ = 26 kW, as shown in figure 2, while its design parameters are 50 kV/20 A/3 s. The discharge duration is limited by the power supply of capacitor bank. The half width of 1/e power decay is 83 mm at 1.3 m downstream from the accelerator using infrared imaging diagnostics at 3.4 m downstream, which obeys Gaussian distribution, see figure 3. The beam divergence angle is smaller than $1^o$. The extractable current density increases almost linearly with the RF power. It reaches 0.24 A$\cdot$cm$^{-2}$ at $P_{RF}$ = 32 kW. The ion density in front of plasma grid is about $1.2\times 10^{18}$ m$^{-3}$ at gas pressure of 0.5 Pa. The hydrogen ion fraction of extraction beam increases with the accelerator current, which reaches 79 % at $I_{acc}$ = 12.4 A. Plasma homogeneity is over 90% at low RF power.
The RF plasma source of innovative high-pressure density gradient solves its initial ignition problem, providing an important startup scheme for RF negative ion sources and a good method for generating thermal atoms. The relevant components include two sets of RF systems (40 kW/2 MHz and 3.5 kW/13.56 MHz), main and auxiliary discharge chambers, plasma diagnosis, vacuum system and so on. The initial plasma excitation, main plasma discharge, plasma diffusion, RF attenuation along axis are investigated. Using this system, gas density measurement and control are completed. The plasma density profiles in the two discharge chambers and diffusion section are simulated to optimize their gas densities.
The total RF power of 80 kW is going to use in the next experimental campaign. The beam characteristics will be carefully investigated by using Faraday cups and thermocouples, whose results may be compared with the infrared imaging diagnostics for providing more plentiful information of ion source profiles. The extraction capability will be enhanced by improving vacuum condition, using stronger power supply such as motor generator and so on. In addition, the RF negative ion source of 200 kV/20 A/3600 s is also developed at SWIP for the CFETR (China Fusion Engineering Test Reactor), and the accelerator voltage will be extended to 500 kV in future.
In summary, a megawatt RF ion source has been developed with quartz chamber and an innovative method of high-pressure density gradient produced by two sets of RF systems solves its initial ignition problem. The line electric efficiency of whole RF generator reaches 92% after using fully solid-state RF generator and automatic match technique. The extracted ion beam presents Gaussian distribution according to infrared imaging. The RF negative ion source of 200 kV/20 A/3600 s is also developed at SWIP for the CFETR.
This work is partially supported by Natural Science Foundation of China with Grant Nos. 11875020 and 11320101005, and by National Key R&D Program of China under Grant No. 2017YFE0300100.
1 U. Fantz, C. Hopf, D. Wünderlich, et al. Nucl. Fusion 57, 116007 (2017).
Country or International Organization: China (Southwestern Institute of Physics)
### Primary author
Dr Longwen Yan (Southwestern Institute of Physics)
### Co-authors
Dr Guangjiu Lei (Southwestern Institute of Physics) Dr Ming LI (Southwestern Institute of Physics ) Mr Xianming Zhang (Southwestern Institute of Physics ) Mr Miao Zhao (Southwestern Institute of Physics ) Mr Yingnan Bu (Southwestern Institute of Physics) Mr Weiming Xie (Southwestern Institute of Physics) Mr Yuxian Zhang (Southwestern Institute of Physics) Dr Guiqing Zou (Southwestern Institute of Physics) Dr Huiling Wei (Southwestern Institute of Physics) Mrs Liping Huang (Southwestern Institute of Physics) Dr Shaofei Geng (Southwestern Institute of Physics) Ms Xuezhen Ma (Southwestern Institute of Physics) Ms Qi Yu (Southwestern Institute of Physics) Dr Jianyong Cao (Southwestern Institute of Physics) Dr Bo Lu (Southwestern Institute of Physics) Prof. Zhongbing Shi (Southwestern Institute of Physics) Dr Caiping Zhou (Southwestern Institute of Physics) Prof. Min Xu (Southwestern Institute of Physics) Prof. Xuru Duan
|
## TD or not TD: Analyzing the Role of Temporal Differencing in Deep Reinforcement Learning
Feb 15, 2018 (edited Feb 16, 2018) · ICLR 2018 Conference Blind Submission · Readers: Everyone
• Abstract: Our understanding of reinforcement learning (RL) has been shaped by theoretical and empirical results that were obtained decades ago using tabular representations and linear function approximators. These results suggest that RL methods that use temporal differencing (TD) are superior to direct Monte Carlo estimation (MC). How do these results hold up in deep RL, which deals with perceptually complex environments and deep nonlinear models? In this paper, we re-examine the role of TD in modern deep RL, using specially designed environments that control for specific factors that affect performance, such as reward sparsity, reward delay, and the perceptual complexity of the task. When comparing TD with infinite-horizon MC, we are able to reproduce classic results in modern settings. Yet we also find that finite-horizon MC is not inferior to TD, even when rewards are sparse or delayed. This makes MC a viable alternative to TD in deep RL.
• Keywords: deep learning, reinforcement learning, temporal difference
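As a rough illustration of the comparison the abstract describes (my own sketch, not code from the paper), the two estimators differ in whether they bootstrap from a learned value or simply sum observed rewards over a fixed horizon:

```python
# Minimal sketch of the two value targets contrasted above (illustrative only).
def td0_target(reward, next_state_value, gamma=0.99):
    """One-step temporal-difference target: bootstrap from V(s')."""
    return reward + gamma * next_state_value

def finite_horizon_mc_return(rewards, horizon, gamma=0.99):
    """Discounted sum of the next `horizon` observed rewards; no bootstrapping."""
    return sum(gamma ** k * r for k, r in enumerate(rewards[:horizon]))

print(td0_target(reward=1.0, next_state_value=0.5))               # 1.495
print(finite_horizon_mc_return([1.0, 0.0, 0.0, 1.0], horizon=3))  # 1.0
```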
|
# 0.8 Metathesis: to exchange or not?
## Objectives
• To give practice writing equations for metathesis reactions, including net ionic equations
• To illustrate the concept of solubility and the effect of temperature and crystallization
Your grade will be determined according to the following:
• Pre-lab (10%)
• Must attach graph
• Lab Report Form (80%)
• Must include detailed observations for each reaction
• TA Evaluation of lab procedure (10%)
## Before coming to lab…
• Complete the pre-lab exercise, including the plot (due at the beginning of lab)
## Introduction
In molecular equations for many aqueous reactions, cations and anions appear to exchange partners. These reactions conform to the following general equation:
Equation 1: $\text{AX}+\text{BY}\to \text{AY}+\text{BX}$
These reactions are known as metathesis reactions. For a metathesis reaction to lead to a net change in solution, ions must be removed from the solution. In general, three chemical processes can lead to the removal of ions from solution, concomitantly serving as a driving force for metathesis to occur:
1. The formation of a precipitate
2. The formation of a weak electrolyte or nonelectrolyte
3. The formation of a gas that escapes from solution
The reaction of barium chloride with silver nitrate is a typical example:
Equation 2: $\text{BaCl}_{2}\left(\text{aq}\right)+2\,\text{AgNO}_{3}\left(\text{aq}\right)\to \text{Ba}\left(\text{NO}_{3}\right)_{2}\left(\text{aq}\right)+2\,\text{AgCl}\left(s\right)$
This form of the equation for this reaction is referred to as the molecular equation. Since we know that the salts $\text{BaCl}_{2}$, $\text{AgNO}_{3}$, and $\text{Ba}\left(\text{NO}_{3}\right)_{2}$ are strong electrolytes and are completely dissociated in solution, we can more realistically write the equation as follows:
Equation 3: $\text{Ba}^{2+}\left(\text{aq}\right)+2\,\text{Cl}^{-}\left(\text{aq}\right)+2\,\text{Ag}^{+}\left(\text{aq}\right)+2\,\text{NO}_{3}^{-}\left(\text{aq}\right)\to \text{Ba}^{2+}\left(\text{aq}\right)+2\,\text{NO}_{3}^{-}\left(\text{aq}\right)+2\,\text{AgCl}\left(s\right)$
This form, in which all ions are shown, is known as the complete ionic equation. Reaction occurs because the insoluble substance AgCl precipitates out of solution. The other product, barium nitrate, is soluble in water and remains in solution. We see that $\text{Ba}^{2+}$ and $\text{NO}_{3}^{-}$ ions appear on both sides of the equation and thus do not enter into the reaction. Such ions are called spectator ions. If we eliminate or omit them from both sides, we obtain the net ionic equation:
Equation 4: ${\text{Ag}}^{+}\left(\text{aq}\right)+{\text{Cl}}^{-}\left(\text{aq}\right)\to \text{AgCl}\left(s\right)$
This equation focuses our attention on the salient feature of the reaction: the formation of the precipitate AgCl. It tells us that solutions of any soluble $\text{Ag}^{+}$ salt and any soluble $\text{Cl}^{-}$ salt, when mixed, will form insoluble AgCl. When writing net ionic equations, remember that only strong electrolytes are written in the ionic form. Solids, gases, nonelectrolytes, and weak electrolytes are written in the molecular form. Frequently the symbol (aq) is omitted from ionic equations. The symbols (g) for gas and (s) for solid should not be omitted. Thus, Equation 4 can be written as
Equation 5: ${\text{Ag}}^{+}+{\text{Cl}}^{-}\to \text{AgCl}\left(s\right)$
Consider mixing solutions of KCl and $\text{NaNO}_{3}$. The ionic equation for the reaction is
Equation 6: $\text{K}^{+}\left(\text{aq}\right)+\text{Cl}^{-}\left(\text{aq}\right)+\text{Na}^{+}\left(\text{aq}\right)+\text{NO}_{3}^{-}\left(\text{aq}\right)\to \text{K}^{+}\left(\text{aq}\right)+\text{NO}_{3}^{-}\left(\text{aq}\right)+\text{Na}^{+}\left(\text{aq}\right)+\text{Cl}^{-}\left(\text{aq}\right)$
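For completeness, the standard conclusion for this pairing: every ion appears unchanged on both sides of Equation 6, so all four are spectator ions. Eliminating them leaves nothing, which means there is no net ionic equation and no reaction occurs when solutions of KCl and $\text{NaNO}_{3}$ are mixed.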
|
## Stream: new members
### Topic: tidy intercalation
#### Horatiu Cheval (Apr 03 2021 at 08:10):
I have some complicated proof I want to solve by tidy. It almost does it, only after some progress it gets stuck and I use a specialize to advance the goal, after which tidy finishes the proof. So I do something like this:
tidy,
specialize h x,
tidy,
which works fine but looks like an antipattern. Can I merge them into one, or somehow remain only with a terminal tidy? (note that the specialize does not work before the first tidy).
#### Horatiu Cheval (Apr 03 2021 at 08:13):
I read tidy supports additional tactics, so I tried local attribute [tidy] tactic.interactive.specialize but nothing changed, though I'm not sure that's the right way of doing it.
#### Damiano Testa (Apr 03 2021 at 11:07):
My strategy would be to expand the first tidy with the output of tidy?, tidy up the result so that you get to the stage where specialize, tidy finishes the proof.
In my experience, a lot of the steps that tidy does are not needed for the final argument.
#### Damiano Testa (Apr 03 2021 at 11:08):
I would probably also repeat this process after the second tidy and leave a tidy-free proof, but it's not required, I think.
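A toy version of that workflow (my own sketch in Lean 3 with mathlib, not Horatiu's actual goal), where the trimmed output of tidy? would replace the first tidy and only the terminal tidy remains:

```lean
import tactic

-- Stand-in goal: the pasted-and-pruned `tidy?` steps would go where the comment
-- is, followed by the `specialize` that unblocks the proof and a final `tidy`.
example (h : ∀ n : ℕ, n + 0 = n) (m : ℕ) : m + 0 = m :=
begin
  -- (steps reported by `tidy?`, trimmed by hand, would go here)
  specialize h m,
  tidy,  -- the terminal `tidy` closes this toy goal
end
```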
|
multi_class_cm {ConfusionTableR} R Documentation
## Multiple Confusion Matrix data frame
### Description
a confusion matrix object for multiple outcome classification machine learning problems.
### Usage
multi_class_cm(train_labels, truth_labels, ...)
### Arguments
• train_labels: the classification labels from the training set
• truth_labels: the testing set ground truth labels for comparison
• ...: function forwarding for passing mode and other parameters to 'caret' confusionMatrix
### Value
A list containing the outputs highlighted hereunder:
• "confusion_matrix" a confusion matrix list item with all the associated confusion matrix statistics
• "record_level_cm" a row by row data.frame version of the above output, to allow for storage in databases and row by row for tracking ML model performance
• "cm_tbl" a confusion matrix raw table of the values in the matrix
• "last_run"datetime object storing when the function was run
### Examples
# Get the IRIS data as this is a famous multi-classification problem
library(caret)
library(ConfusionTableR)
library(randomForest)
df <- iris
df <- na.omit(df)
table(iris$Species)
# Create a training / test split
train_split_idx <- caret::createDataPartition(df$Species, p = 0.75, list = FALSE)
# Here we define a split index and we are now going to use a multiclass ML model to fit the data
train <- df[train_split_idx, ]
test <- df[-train_split_idx, ]
# Fit a random forest model on the data
rf_model <- caret::train(Species ~ .,data = df,method = "rf", metric = "Accuracy")
# Predict the values on the test hold out set
rf_class <- predict(rf_model, newdata = test, type = "raw")
predictions <- cbind(data.frame(train_preds = rf_class, test$Species))
# Use ConfusionTableR to create a row level output
cm <- ConfusionTableR::multi_class_cm(predictions$train_preds, predictions$test.Species)
# Create the row level output
cm_rl <- cm$record_level_cm
print(cm_rl)
# Expose the original confusion matrix list
cm_orig <- cm$confusion_matrix
print(cm_orig)
[Package ConfusionTableR version 1.0.4 Index]
|
# Does $PR/\mathrm{poly}$ solve the halting problem for Turing machines?
I know that $R/\mathrm{poly}$ solves the halting problem: as advice we can take the halting program of each length that runs the longest, and then check whether the given program halts before that one does. But what if we weaken the $R$ to something smaller, like $PR$?
• Can you define PR/poly and the halting problem? The exact definitions might matter. – Yuval Filmus Dec 28 '19 at 16:04
|
## characteristic length
in thin films
Synonym: characteristic scale
https://doi.org/10.1351/goldbook.C00977
The term characteristic length or scale refers, in general, to the parameter which characterizes the density profile (of a given physical quantity). The static (equilibrium) or dynamic character of a characteristic length must be specified. The terms out of plane and in plane refer to characteristic lengths normal or parallel to the interface, respectively. Since interphase 'thickness' and characteristic length correspond to various concepts, the current usage, where an out of plane characteristic length is referred to as the interphase thickness, is confusing and should be abandoned.
Source:
PAC, 1994, 66, 1667. (Thin films including layers: terminology in relation to their preparation and characterization (IUPAC Recommendations 1994)) on page 1674
|
Take an interval. Cut it into H pieces, where H is hyperfinite. This serves as the index set of a stochastic process, among many other uses. Imagine that for each of the H steps, you flip a coin to get -1 or +1. Then move an infinitesimal distance left or right based on the sign. This is Brownian motion. Each infinitesimal piece of the timeline is profitably thought of as a Planck time.
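One standard way to pin down the step size (this is Anderson's hyperfinite random walk, taking the interval to be $[0,1]$ for concreteness; the $\sqrt{\Delta t}$ scaling is the detail left implicit above):
$\displaystyle \Delta t=\frac{1}{H},\qquad B\!\left(\tfrac{k}{H}\right)=\sum_{i=1}^{k}\epsilon_i\sqrt{\Delta t},\qquad \epsilon_i\in\{-1,+1\}\text{ i.i.d. fair coin flips},$
so that $\mathrm{Var}\big(B(t)\big)\approx t$ for $t=k/H$, and taking standard parts recovers ordinary Brownian motion.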
Discrete events, such as sudden hard shocks, can be modeled on this line. They are appreciable over an infinitesimal fraction of the line.
|
# Martingale Convergence
Last quarter, I did a DRP on martingales, and at the start of this quarter, gave a short talk on what I learned. Here are the notes from that talk.
### 1. Conditional Expectation
Definition 1. Suppose ${(\Omega, \mathcal{F}_o, \mathbb{P})}$ is a probability space, ${\mathcal{F}\subset \mathcal{F}_o}$ is a sub ${\sigma}$-algebra, and ${X}$ is a random variable.
1. The conditional expectation of ${X}$ given ${\mathcal{F}}$, ${E(X\mid \mathcal{F})}$, is any ${\mathcal{F}}$-measurable random variable ${Y}$ such that
$\displaystyle \int_A X\, d\mathbb{P}=\int_A Y\, d\mathbb{P}\text{ for all } A\in \mathcal{F}.$
2. If ${Y}$ is another random variable, then ${E(X\mid Y)}$ is defined as ${E(X\mid \mathcal{F})}$ where ${\mathcal{F}}$ is the ${\sigma}$-algebra generated by ${Y}$.
Fact. The conditional expectation exists, is unique, and is integrable.
Intuitively, we can think of ${E(X\mid \mathcal{F})}$ as the best guess of the value of ${X}$ given the information available in ${\mathcal{F}}$.
Example 1.
1. If ${X}$ is ${\mathcal{F}}$-measurable, then ${E(X\mid \mathcal{F})=X}$.
2. If ${X}$ and ${Y}$ are independent, then ${E(X\mid Y)=E(X)}$.
3. Let ${X_1,\cdots}$ be independent random variables with mean ${\mu}$, and let ${S_n=X_1+\cdots+X_n}$ be the partial sums. Then, if ${m>n}$,
$\displaystyle E(S_m\mid S_n)=E(X_m+\cdots+X_{n+1}+S_n\mid S_n)=E(X_m\mid S_n)+\cdots+E(X_{n+1}\mid S_n)+E(S_n\mid S_n)=\mu(m-n)+S_n.$
The idea is that you know your position at ${S_n}$, and you take ${m-n}$ steps whose sizes are, on average, ${\mu}$, so your best guess for your position is ${S_n+\mu(m-n)}$.
### 2. Martingales
A martingale is a model of a fair game.
Definition 2. Consider a filtration (increasing sequence of ${\sigma}$-algebras) ${\mathcal{F}_n}$ and a sequence of random variables ${X_n}$, each measurable with respect to ${\mathcal{F}_n}$ and integrable. Then, if ${E(X_{n+1}\mid \mathcal{F}_n)=X_n}$ for all ${n}$, we say ${X_n}$ is a martingale.
Example 2. Let ${X_1,\cdots}$ be independent random variables that take only the values ${-1}$ and ${1}$ with probability ${1/2}$ each. Then ${S_n=X_1+\cdots+ X_n}$ is a martingale, because
$\displaystyle E(S_{n+1}\mid S_n)=E(X_{n+1}+S_n\mid S_n)=E(X_{n+1}\mid S_n)+E(S_n\mid S_n)=E(X_{n+1})+S_n=S_n.$
Theorem 3. If ${X_n}$ is a martingale with ${\sup E(X_n^+)<\infty}$, then ${X_n}$ converges a.s. to a limit ${X}$ with ${E(|X|)<\infty}$.
I’ll only sketch this proof, because even though the idea is nice, the details are a little annoying. The idea is to set up a way of betting on a martingale, show that you can’t make money off such a betting system, and then use this to draw conclusions about the martingale’s behavior.
Definition 4. Let ${\mathcal{F}_n}$ be a filtration. A sequence ${H_m}$ of random variables is said to be predictable if ${H_m}$ is ${\mathcal{F}_{m-1}}$ measurable for all ${m}$.
That is, the value of ${H_m}$ is already known at time ${m-1}$, before the outcome of step ${m}$ is revealed. Then, if we “bet” an amount ${H_m}$ on the martingale ${X_m}$, our total winnings will be
$\displaystyle (H\cdot X)_n=\sum_{m=1}^n H_m(X_m-X_{m-1}).$
If ${X_n}$ is a supermartingale and ${H_n}$ is predictable, nonnegative, and bounded, then ${(H\cdot X)_n}$ is a supermartingale as well.
Now, fix an interval ${(a,b)}$ and choose ${H_n}$ in the following way: if ${X_n<a}$, then bet 1, and continue to do so until ${X_n>b}$. Then bet zero until you fall back below ${a}$, and repeat. Every time you go from below ${a}$ to above ${b}$, you will make a profit of at least ${(b-a)}$. Let ${U_n}$ be the number of these upcrossings that occur. Then, you can use the previous fact, together with the bound on ${\sup E(X_n^+)}$, to show that the number of these upcrossings is finite with probability 1. Since this is true for arbitrary ${(a,b)}$, the ${X_n}$ must converge.
Finally, we give a brief application of the martingale convergence theorem.
Proposition 5. Let ${X_n}$ be a sequence of independent random variables such that ${P(X_n=1)=P(X_n=-1)=\frac{1}{2}.}$ Then ${{\sum_{j=1}^{\infty} \frac{X_j}{j}}}$ converges with probability 1.
Proof: Let ${{S_n=\sum_{j=1}^n\frac{X_j}{j}}.}$ It’s easy to check that ${S_n}$ is a martingale, and that ${E(S_n^2)=\sum_{j=1}^n \frac{1}{j^2}}$ is bounded, so ${\sup_n E(S_n^+)<\infty}$ and the ${S_n}$ converge almost surely. Thus the random harmonic series converges almost surely. $\Box$
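A quick numerical sanity check of Proposition 5 (an illustration I'm adding, not part of the original talk notes): the partial sums of one realization of the random harmonic series settle down as more terms are added, consistent with almost sure convergence.

```python
import random

def random_harmonic_partial_sum(n_terms, seed=0):
    """Partial sum of sum_j X_j / j with X_j = +/-1 fair coin flips (fixed seed,
    so each call extends the same realization)."""
    rng = random.Random(seed)
    total = 0.0
    for j in range(1, n_terms + 1):
        total += rng.choice((-1.0, 1.0)) / j
    return total

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, random_harmonic_partial_sum(n))
```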
|
Alphalens with a single security?
Hello all,
I'd like to use Alphalens to evaluate a factor on just one security. However, I always end up with an error: ValueError: Bin edges must be unique. I've tried to work around it using ranking instead of raw numbers (guaranteed no duplicate values), trying bins instead of quantiles, using defined bins instead of equally-sized ones, etc., but I cannot seem to get it going.
This Pull Request says that this error happens when you have a "single value for factor per group (only one value over a date, or sector)" - pretty sure this is what I'm hitting.
Big picture question: Can Alphalens be used to check out signal alpha on a single equity?
More direct question: If so, what do I need to do to get it working? (Please see attached notebook)
5 responses
You are getting that error due to bugs in pandas.qcut and pandas.cut that were fixed in version v0.20.0 (one, two and three). But don't expect to see pandas v0.20.0 on Quantopian too soon (this is my opinion, but v0.20.0 hasn't been released yet). You can use a workaround described here though.
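The error is easy to reproduce outside Quantopian; here is a minimal sketch (made-up factor values, not Peter's notebook) of why a nearly constant factor breaks the quantile bucketing, along with the `duplicates` option that newer pandas versions added to qcut:

```python
import pandas as pd

# A nearly constant factor: the quantile edges collapse onto the same value.
factor = pd.Series([0.5, 0.5, 0.5, 0.7])

try:
    pd.qcut(factor, q=3)
except ValueError as err:
    print(err)  # "Bin edges must be unique: ..."

# pandas >= 0.20 can drop the duplicated edges instead of raising.
print(pd.qcut(factor, q=3, duplicates="drop"))
```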
Hi Peter,
I don't think Alphalens will give sensible results for one security. It is set up to consider a factor as one value per day per security, which it then 'bins' into say 3 or 5 or 10 groups of securities and shows you the returns to each of those sub-portfolios.
If you pass it a timing signal for a single stock it can only create 1 'bin' on any given day and doesn't have anything to compare amongst. Alphalens is a tool designed to evaluate a cross-sectional signal which can be used to rank many securities each day versus what sounds like a timing signal that tells you what days to be long or short a single stock.
Have you tried out using just the pyfolio tearsheet analysis or a backtest report on your signal? I'd think those should work ok.
Best, Jess
While I agree with Jessica, there is still a subset of Alphalens features that can be used to quickly verify the ability of a signal to generate alpha: the returns analysis. I used Alphalens to analyse signals generated by technical patterns in here. No assumptions on the number of stocks are required.
Obviously the goal of Alphalens is not to analyse those kind of scenarios, but it works nevertheless.
Here is Peter's NB fixed.
Tried cloning and running Luca's version of Peter's NB, and when I run create_returns_tear_sheet(factor_data, long_short=False), I get:
|
## Innovation and Disruption in Everyday Education
Two nights ago I came across a tweet from Huntington Post Education;
I then modified and retweeted it;
What followed was an overwhelming number of retweets, favorites, and follows (at least for me, with a measly 600 some followers). Additionally, if you click on the link, you will see that HuffPo has since changed the title of the article to These 11 Leaders are Running Education But Have Never Taught. Interesting.
The vast majority of the RTs and interactions shared my sentiment, but one caught my eye;
And a conversation ensued;
Challenge Accepted.
As I started thinking about who and what I was going to highlight here, the tweets kept rolling in. This one really got me thinking.
The excerpt that really struck me;
Of course, even in Disrupting Class, the predictions of the ed-tech end-times were already oriented towards changing the business practices, not necessarily the pedagogy or the learning. [Emphasis mine]
I think that the ‘disruption’ really needed in education is to simply utilize methods of instruction and systems that have been demonstrated to be effective through research. In the end I don’t think we need to revolutionize the entire system, as we have pockets and individuals to serve as wonderful models. The real problem is how to scale from individuals doing great things to a great system as a whole.
As I highlight some of these innovations by everyday teachers, let’s start with the greatest disruption in my teaching, Modeling Instruction. Modeling is a highly researched, highly effective method for teaching Physics. Modeling came out of a great disruption; physics teacher David Hestenes wrote a basic concept inventory for his physics classes thinking they would rock it. Instead, they bombed it. Years of research then gave birth to Modeling. Frank Noschese, a ‘normal’ physics teacher in New York State, gave a great TEDx talk demonstrating how students “Learn Science by Doing Science” using Modeling. In fact, Frank was recently lauded by a non-educator for his work with modeling. Kelly O’Shea is closing in on 200,000 views on her blog where she posts guides to how to implement MI, her modified MI materials, and other thoughts relating to physics education. She teaches at a private school in NYC. Both (and the many other modelers ‘disrupting’ traditional physics teaching) are ‘just’ teachers.
Standards Based Grading (SBG) is a movement in education more widespread than modeling instruction. The basis of SBG is to guide students towards mastery of topics rather than pushing them through an outdated factory model of learning. Rick Wormeli and Robert Marzano are two academics leading the charge in SBG, though it has primarily succeeded as a grassroots movement of educators working in isolation. Frank and Kelly, mentioned above, are also teacher-leaders in this field. SBG has in fact even entered the higher-ed realm, with Andy Rundquist pioneering its use through non-standard assessments in his physics classes. In my district my wife was one of the first to implement SBG 5ish years ago as a result of her Masters thesis. Many others have followed suit, and, for certain in my case, the result is increased student learning.
Project Based Learning (PBL) is a movement where students learn by doing, with a flexible route to demonstrating learning in comparison to other methods of instruction. The most visible example of PBL I know of is Shawn Cornally’s BIG school, where he is attempting to scale PBL to make school more awesome, a worthy task. Project Lead the Way is an example being implemented in my district, a program where students learn engineering through PBL. Students interact regularly with engineers from Seagate, Toro, and other local firms, and produce plans and prototypes with their guidance. Two other teachers at my school pioneered the building of an Environmental Learning Center around “the idea that meaningful learning happens when students engage with the community around them, including the natural environment.”
Many teachers were Flipping the Classroom before Khan Academy popularized it, and many have similarly continued to innovate within the flipped structure. Ramsey Musallam in particular popularized a variation called Explore Flip Apply, which was developed because of research indicating that sparking students’ interest and thinking through inquiry before providing content delivery improves learning outcomes. A local colleague of mine, Andy Schwen, wrote a nice post describing his transition from a pure flip to the EFA model.
Twitter is utopia for individual educators uniting to improve learning, and perhaps the best example of this that I know of is a loose collection of math teachers known as the Math Twitter Blog-o-Sphere. They use the hashtag #MTBoS, interact regularly, and have fantastic conversations about student learning. What’s really amazing is that from this virtual community has sprouted a real one. Tweetups are a regular occurrence (I have participated in three), and for two years now they have organized a loose, edcamp-style workshop called Twitter Math Camp. Last year 100+ educators took part.
I’m fairly certain that I’ve missed numerous ‘disruptions’ and ‘innovations’ out there. So my challenge to you; fill the comments with examples. They can be specific instances (projects, lessons, whatever), or general cases. I am particularly interested in examples outside of the math and physics world in which I primarily live. Blow it up, my hope is that maybe someone important will notice and realize that educators are the voice that’s missing from the education reform table.
## When You Can’t Do Standards Based Grading
My wife first introduced me to Standards Based Grading (SBG) 5ish years ago, while writing her master's thesis on the topic. After 3 years of pushing I finally bought in, particularly because of what I perceive as a special harmony between SBG and the method of instruction I use in physics, Modeling Instruction. I helped implement SBG in our regular physics course and was happy with the results. However, the only class I teach this year is a concurrent enrollment U of MN course, which I wrote more extensively about here. The students are mostly highly motivated high school juniors and seniors. I love, LOVE teaching this class, but it has a glaring problem for SBG; it is articulated through the U of MN.
I thought there were some significant problems I was having that could be addressed with SBG.
1. There are only four exams and a final all year for the U of MN aspect of the course. This is far too little assessment; neither my students nor I really knew where they stood before taking these high stakes exams. It made my grading load nicer, but it wasn’t best for kids.
2. I stopped grading homework last year (I still checked it, but for no credit), and I found that students simply didn’t do it. I still believe that it is practice and thus shouldn’t be part of a grade, but also that they really do need to practice to succeed.
3. Four exams per year means two per semester, which meant that a significant part of a student’s HS semester grade was based on just two exams.
4. Students didn’t know what they had to be good at to succeed on the exams.
I set out to solve these problems using SBG.
Background: Before I continue there are some features of the course that are very relevant to making this work. First of all, it is important that I have a bit of flexibility with how grades are calculated. There is a 10% category as set by the U that was for homework, but I was told I could use it how I see fit. As I mentioned before, I decided last year that I was done assigning a grade to homework (but that’s a different post…), so I had 10% of the U grade that I could use for re-assessable quizzes. Furthermore, since the HS grade is split into two semester grades, whereas there is only one college grade for the whole year (it’s a one semester U course taught in a year at the HS), I have even more flexibility with the HS grade. Thus I am able to carve out 25% for a ‘SBG’ category for their HS grade. It’s not perfect, but it’s what I have.
The grading scale, as set by the U, is fairly forgiving with 15% increments instead of the standard 10%, such that the A cutoff is 85%, B is 70%, etc. This is key to using a non-standard scoring method (such as a 4 point scale) because a 2/4, at 50%, is still a D+. That said, one could always use a 4 point scale and map those scores to percents. Really the four points are meant to represent levels of mastery (exemplary, proficient, developing, basic), not percentages. In my case I can use an alternate scale and it still fits within my percentages, but some tweaking could certainly fix that in other cases.
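A quick worked check of that mapping (assuming the cutoffs continue the 15% pattern, so roughly 55% for a C and 40% for a D, which the post doesn't spell out): a 2/4 converts to 50% and lands in the upper D band, the D+ mentioned above; a 3/4 converts to 75%, just clearing the 70% B cutoff; and a 4/4 is 100%, an A.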
The Quizzes are based on standards I have written for the course, which in turn are based on the skills I deemed necessary for students to succeed on the four U of MN exams. The quizzes are scored on a 2-1-0 scale, where 2 means they nailed it, 1 means they understand something but not everything, and zero means they didn’t know where to begin. The first quiz generated an awesome amount of learning, as most students scored themselves a 1 and were very motivated to improve their learning and thus that score. After a couple of quizzes and students getting very frustrated at multiple reassessments at 1, I caved and started giving 1.5 (B/B+). I don’t mind that distinction as I give 1.5s when I can’t give a 2 (they didn’t nail it), but they have still shown proficiency (they demonstrated understanding but made some minor mistakes). I’m still seeing kids reassess to shoot for the 2.
Wait, did you say self-graded quizzes? OK, this is my favorite part of the course, I stole it from Frank. Students take a quiz. There’s a bunch of bright colored keys in the back of the room along with red pens (side note: don’t love red, but it’s what I have at the moment). When a student finishes the quiz, they walk to the back, check their work against the key, correct and annotate their quiz, score themselves a 0,1, 1.5, or 2, and hand it in. I hover in the back both to keep them honest and to answer questions when needed. This serves two purposes;
1. Students get instant feedback on exactly what they did and did not do correctly, which is vastly more important than…
2. Takes the correcting load off of me.
I almost hate mentioning #2, but the reality is that the normal student load in a public high school is something like 150 kids, so doing SBG with reassessments could get overwhelming. I want this to be something that is helpful for the students and for me. Quiz grading takes me very little time with this method, and we also don’t ‘waste’ class time going over the quiz as a group since they already corrected it themselves.
When I take a look at the quizzes, I am looking at a couple of things. First of all, I am looking to see if there are patterns in what was answered incorrectly so I can adjust instruction if necessary. Second, I am looking closely at the 2′s to make sure they really demonstrated a complete understanding of the material. This process is MUCH faster than scrutinizing each quiz to see if they get an 8.5 or a 9.
Hold on. If you don’t give partial credit on a quiz, then don’t they all get D+’s? Kind of. At first they did, with only 2,1,0. I didn’t want to be haggling over points. I want students to fully understand each and every standard so they can nail those U of MN exams. Case study; on quiz 1 (Constant Velocity problem solving), most of my students got a 1. This is because the U (like the college board for AP exams) strongly emphasizes algebraic problem solving, and students resist doing so. Last year I didn’t feel like my students had a good feel for algebraic problem solving until second semester; this year, they all took the bet and lost, and as a result, they are reassessing. And they are reassessing well.
What I love
• Forced reassessments force learning
• The feeling that students are in control of their learning and their grade
What I don’t as much
• Part of me is ok with certain timelines for demonstrating learning (the exams), but another part believes the final deadline should really be the end of the course. In a more pure SBG system students could potentially figure out physics in the last month of school and then earn a grade that reflects that understanding, an A. In my system, if they didn’t figure out kinematics by Exam 1, then they probably won’t get the A in the course even if they reassess on the quizzes, due to the high weights of the non-reassessable exams. But I can’t change this anyway.
Conclusions
The learning gains I have seen over last year so far are exceeding even my optimistic expectations; below is a box plot comparing the last two years of the U of MN Exam 1 (more dynamic link here);
So far all indicators point to success of this new system over my old one. Do you see holes? Have suggestions to make it even better? Let me know in the comments.
Frank wrote a great post about The Spirit of SBG that I think complements this post in that it emphasizes that SBG is about increasing learning, not about a system itself. I'm using a framework for SBG as best as I can to attempt to help increase learning, so I hope that the spirit of SBG is being kept in that.
## CVPM Unit Summary
I only have one standard for CVPM, as I didn’t want to get bogged down with a super granular standard list.
CVPM.1: I can represent constant velocity problems graphically and algebraically and solve problems using both numeric and algebraic methods.
I start day one of my essentially honors level, first year physics course with the Buggy Lab. (If you’re not familiar with the Buggy Lab, or even if you are, read Kelly’s post about it). This takes 2 full days, sometimes 2.5, with 45 minute periods.
From there I use Practice 1 stolen from Kelly, found in my CVPM Packet, which takes me about a day and a half (of 45 minute periods). Here’s a post about the board meeting to discuss the data.
Days 5-6 or so are the Cart Launch Lab. Here’s a picture of my notes while students discussed the data in a board meeting.
Next is Practice 2, also stolen from Kelly, though I add that we walk them with motion detectors, 1 day ish. (Update: Whiteboarding took the whole period and I decided that that was more worthwhile than actually walking them with motion detectors, we’ll do more of that in CAPM)
The last worksheet is Practice 3, which I developed to help develop more algebraic problem solving. This is because my class is actually a U of MN class taught at the HS level, and the U emphasizes algebraic problem solving. 2 days. This worksheet went very well, and here are some notes about starting the whiteboarding process with it as well as the ensuing conversation.
After Practice 3 I have two days of difficult problem solving practice. The first is the standard lab practicum where students must cause two buggies of different speeds to head-on crash at a particular location. Here’s a post describing the practicum. The second is a difficult, context rich problem that students work on in groups.
All in all the unit takes me 13-14 days, including the quiz at the end and a day to FCI pretest.
## Transitioning from Energy to Momentum
In my college level physics class we study Energy right before momentum. I really like this, particularly because we can begin our study of momentum as driven by the fact that a pattern emerges from data that is not explainable by Energy.
On the first day of my momentum unit I typically do a fun car crash activity to help students start thinking about how force and time are related in collisions. The next day we start building the momentum transfer model. (We'll come back to the force-time relationship at the end of this paradigm series.) Last year, not having experience with Modeling Instruction, I just dove right in (chronicled starting with day 1 here). This year I wanted to utilize the discover, build, break cycle that Frank Noschese talked about in his TEDx talk. One of the tenets of modeling is that models are useful for certain cases and not for others. Thus I used an inelastic collision to springboard into momentum based on the fact that an energy analysis is not particularly useful for this situation.
When students walked in I showed them a scenario where a moving cart (A) collides with a stationary cart (B) of equal mass. I asked them to use the Energy Transfer Model (ETM) to predict the final velocity of the carts. A typical analysis looks something like this;
Assuming there is no conversion of energy to thermal energy, the kinetic energy of the first cart should end up as combined kinetic energy for both carts after the collision;
$\frac{1}{2}m_Av_{Ai}^2=\frac{1}{2}m_Av_{Af}^2+\frac{1}{2}m_Bv_{Bf}^2$
Noting that for this case $m_A=m_B$ and $v_{Af}=v_{Bf}$, the whole thing simplifies to
$v_{Ai}^2=2v_f^2$
Solving for the final velocity of the two carts together in terms of the initial velocity of the first one,
$v_f=\frac{v_{Ai}}{\sqrt{2}}$
Once we got to here I simply said “Go test it,” and they got to work in the lab.
Before I go on I want to comment on the lack of thermal energy in the above derivation. Many of my students correctly tried to include E_therm in their analysis. This is great, but I pointed out that today was a lab day and thus we need to be able to measure things. Me: "Can we easily measure E_therm?" Student: "Ummmm…no." "Right, so let's ignore it and see if the data upholds that assumption." They almost always (correctly) want to include E_therm in every energy analysis, but we have done a couple of situations in the lab where stored gravitational interaction energy transfers to kinetic energy for dynamics carts and assuming no changes in E_therm yielded good data. Thus students were primed for me to suggest that we could ignore E_therm. However, this is tempered with the fact that I do a demonstration showing that kinetic energy transfers to thermal energy in collisions (a couple of weeks prior) and that they are used to me guiding towards 'wrong' answers. So I believe students went into lab cautiously optimistic that the lab evidence would support the derived equation.
It doesn’t.
It only takes students 5-10 minutes to realize that the final velocities are closer to half the initial rather than the initial divided by the square root of two. Some of them try to justify the data (well, it seems kind of close to root two…), but after conferring with their classmates they give up and go with two. At that point I pulled them back up to the front of the room.
Me: So, did our equation work?
Students: Nope
M: But was their a pattern?
S: Yep. Final velocity is half the initial.
M: Wait, you mean that energy doesn’t predict the final velocity, but something else does?
S: Um…..
We had a quick discussion about how something must be going on that is different from energy. We also talked about how it makes sense that energy wouldn’t work; we expect some of the initial kinetic energy to convert to E_therm after the collision.
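For the reader (the students intentionally have not seen this yet), the factor-of-two pattern is exactly what momentum conservation gives for a perfectly inelastic collision between equal-mass carts with one initially at rest:
$m_Av_{Ai}=(m_A+m_B)v_f\quad\Rightarrow\quad v_f=\frac{v_{Ai}}{2}\text{ when }m_A=m_B,$
in contrast to the $v_{Ai}/\sqrt{2}$ that the thermal-energy-free energy analysis above predicts.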
From here I continued day 1 in pretty much the same way as last year. I found after a 45 minute period students were just about ready to talk about a relationship, just slightly behind where day 1 ended before adding the energy piece. My students are much more used to the idea of paradigm labs this year and are getting pretty good at looking for meaning in lab data, so I am not surprised that this addition didn’t significantly change the day one timeframe. Tomorrow we start with presenting the student derived relationships.
## An Empirical Start to the Energy Transfer Model (Part 2)
At the end of the first post in this series I lamented that starting energy empirically meant that I couldn't include changes in thermal energy the way the more traditional start to this modeling unit does. I shouldn't have worried. It turns out that emphasizing that the energy of a system changes through working, heating, or radiating helps them overall with energy conservation, even though thermal energy in particular isn't addressed. But I'm getting ahead of myself.
Days 1-4 ish are outlined in the first post of this series. I’m now picking up at around Day 5.
Kinetic Energy
We started this unit by finding that the area under the force vs. position graphs for two different springs, when made equal, yielded equal velocities when launching carts. I emphasized at this time (and over and over again as we went through the unit) that the area under graphs, if it has a physical meaning, means a change in something. In this case it's a change in energy, though we hadn't gotten that far yet. I just emphasized it's a change in something. So in the first activity the change in something predicted velocities. In the second it correlated with a change in height. At that point we coined the term gravitational interaction energy, and we looked at how the final gravitational interaction energy was the same as the initial plus the change in energy (as found from the area under the F vs. x graph). The third, starting now, looks at the correlation of that change with velocity. They now know that this has something to do with kinetic energy, since we had the energy=pain talk, but not exactly how.
There are many variations of this lab, most using springs. I found that if you attach a force detector to a cart (which we did for the area vs. change in height experiment previously), you can just pull the cart with a rope and get pretty good data for area vs. v^2 even though the force isn’t constant. Which I think is extra cool. Basic setup for this experiment is below. Note the horizontal track.
I learned one pretty neat trick when I performed the lab myself. For each trial, it doesn’t really matter where the end point is, as long as you find the area for some displacement and then record the final velocity that corresponds to the end point for that displacement (assuming you start from rest, which I did). So I had students graph force vs. position to find the area (change in energy) that we were interested in, and then plot velocity vs. position so that they easily find the corresponding ending velocity. This way they can set the integral (area) section to be the same for each trial, then quickly use the examine function in logger pro to find the ending velocity at that same endpoint for each trial. Slick.
Plotting change in energy vs. v looks like this. Note that since I took this data I actually called the area work, since that is the means by which the energy is changing in this case. I did not instruct them to do that, however.
It actually looks fairly linear, especially to kids who are looking for things to be linear. However, typically data was non-linear enough, and we linearized a quadratic doing central force, so most groups linearized using v^2 on the x axis.
When the data is linearized, it looks like this.
Certainly that looks more linear! Student data actually turned out good as well. Always nice when that happens.
The board meeting for this went amazingly fast. In the first class a student commented almost right away about the units of the slope. They started trying to figure out what the units should be, and I wrote on the board. With a little prodding we finally figured this out;
$\frac{\text{units of rise}}{\text{units of run}}=\frac{N\cdot m}{\frac{m^2}{s^2}}=\frac{kg\cdot\frac{m}{s^2}\cdot m}{\frac{m^2}{s^2}}=\frac{kg\cdot\frac{m^2}{s^2}}{\frac{m^2}{s^2}}=kg$
Whoa. All that simplifies to kg? Cool.
The classes did this in different orders, but essentially within 10 minutes they had figured out that the intercept was zero (both empirically from their data as well as logically by thinking through why it should be zero), that the slope was half the mass, and that the slope relating to the mass made sense because the units of the slope simplify to kg.
Thus
$\{\text{Area under F vs. x graph}\}=\frac{1}{2}mv^2$
From here we went on to be explicit about the names of everything. The area represented a change in energy. In the first case (pulling carts up ramps), it’s a change in gravitational interaction energy. In this case, it’s a change in kinetic energy.
This is more or less where day 5 ended. No, seriously, at this point they (keep in mind this is a college level class taught at the high school, so essentially top 20% kids) took data, whiteboarded it, and figured out meaning in a 45 minute class period.
Day 6 ish: Lab wrap up and transition to Energy Bar Charts
I started the day by teaching energy bar charts (LOLs). (Need a primer on energy bar charts? Kelly comes through again). We then went through the labs drawing the LOL for each one. This did two things; first, and most importantly, it emphasized that the area under the force vs. position graph found a value that measured how energy changed from the first snapshot to the second snapshot. Secondly, it was a way to show students how to draw LOLs. After drawing the LOLs for our two experiments, we had a conversation about how energy changes. The modeling instruction teacher notes list that there are three ways energy changes; working, heating, and radiating. (Side note: I strongly prefer starting energy from a First Law of Thermodynamics perspective (strict conservation of energy) rather than from a Work-KE theorem perspective. More on that in a later post on partial truths.) They brought up convection and conduction, and I talked about how these are just two different ways for heat to transfer. We briefly talked about molecular interactions and KE transfer here, but I kept it quick. The point here was to plant the seed that what we are doing generalizes beyond work performing the energy transfers in and out of the system, but that for now we are going to focus on work (rather than heating or radiating) as a mechanism to transfer energy.
This took an entire day, as I have them draw the LOLs first, then we have a conversation about them. After today I assigned a worksheet on drawing LOLs and writing the qualitative energy conservation equations. This is a modified version of worksheet 3 in the standard modeling curriculum, modified by myself, Kelly O’Shea, and Marc Schrober (in reverse order?).
I’m hoping to write more about the development process, but overall I found, very anecdotally, that starting energy this way helped students see conservation on a system basis, and they have no problems with the idea that energy can enter or leave a system through working, heating, or radiating. It took a while to differentiate between energy stored in the system as thermal energy versus energy leaving the system through work done by friction, air resistance, or normal force (bouncing ball or other examples), but that's to be expected no matter how this is done. My regular physics students certainly had trouble with that distinction despite starting ETM 'traditionally.' Both classes saw this demonstration (video here) to show that kinetic energy certainly does, often, transfer to thermal energy. The difficulty generally is tracking that energy; is it stored as a change in E_therm in the system, or does it leave via work? It took a while to work through that (pun intended).
Concluding Thoughts
I’m going to leave you with this. When I first started learning about Modeling Instruction, I assumed it was all about the labs, such as those outlined so far in this series. I have since learned, however, that though the labs provide a foundation for the concepts being learned, working through those concepts through whiteboarding is as much as important as the paradigm labs. Whiteboarding is where students flesh out the differences between what they think and what science demonstrates as a better truth, and where they hopefully cement their beliefs as those that align with science. Don’t underestimate the full framework of Modeling Instruction as a complete system for helping students through the process of learning like scientists.
## An Empirical Start to the Energy Transfer Model (Part 1)
I’ve been thinking a lot about the Energy Transfer Model (ETM). The Modeling Instruction curriculum seems to start this model by jumping right into the concept of Energy Transfer without much empirical model building, contrary to many of the earlier models. I really like the way Kelly starts energy, showing students how previous models don’t work to predict the desired outcome. Still, I was unsatisfied in that I felt like I would just be telling students what energy is and how it transfers without letting them get a feel for it for themselves. So I set out to design my own version of the beginning of ETM. I used this version of ETM in my college physics class after starting ETM the standard way in regular physics.
Day 1: Area of Force vs. Position graphs
Day 1 started just as Kelly’s post details above, though she has modified it since posting to use Pasco’s spring cart launcher instead of regular springs. The idea is simple. How can I make the final velocity of these carts the same if they are launched by two different springs? We spent 10 minutes playing with the carts, and I showed them at maximum compression, both at about 8 cm, the carts launch at different speeds. Predictably, the spring with the highest spring constant launches fastest. So how can we make them go the same speed using their Force vs Position graphs?
We (my colleague Ben, with whom I teach the regular class, and I) tested the springs and their constants fell very close to those stated in the documentation, so we used that to make expected F vs. x graphs rather than take real data. It worked just fine.
In all classes I did this (three different sections, one regular and two college), the first guess was to make the force equal for each spring. So we did that. My regular class just looked at the graph, saw that if we wanted a force just over 4 N we could use about 5 cm for the red spring and 3 cm for the blue one. For the college classes I asked them to choose an arbitrary compression for the red spring, then find the blue compression to give the same force.
Either way, it failed miserably.
Turns out that if two different springs are compressed to the same force value, they do in fact have the same average force, and thus the same average acceleration. However, the weaker spring has to be compressed further to get that same force value, and thus the same acceleration happens over a larger distance. The weaker spring actually gives a faster speed when the force each exerts is the same!
They get this. I asked them what would happen if you had two cars that had the same acceleration, but one accelerates for 10 meters and one for 20 meters. The 20 meter one ends up at a faster speed. Yep, that happens here too. The red spring car goes faster because it has the same acceleration on average as the blue spring but for a longer distance.
So anyway, what now? I had to guide them to check area. I did not do as awesome of a job as I would like using the area under velocity vs. time graphs to find displacement, and as a result area of graphs is not a foremost thought for them. However, all classes jumped on the idea once I led them there (by referring back to kinematics graphs and the parts of those graphs that do in fact have physical meaning). Most students needed help with the idea that they should pick an arbitrary compression of the weak spring. Once there, however, we worked through the math and found the compression of the blue spring such that its area equaled that of the red spring with our arbitrary compression.
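A worked version of that last step, treating each launcher as an ideal spring and using made-up spring constants rather than the actual Pasco values: equal launch speeds require equal areas under the $F$ vs. $x$ graphs, so
$\frac{1}{2}k_{red}x_{red}^2=\frac{1}{2}k_{blue}x_{blue}^2\quad\Rightarrow\quad x_{blue}=x_{red}\sqrt{\frac{k_{red}}{k_{blue}}}$
For instance, with $k_{red}=300$ N/m, $k_{blue}=600$ N/m, and the red spring compressed $x_{red}=8$ cm, the blue spring should be compressed about $8\sqrt{300/600}\approx 5.7$ cm.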
The launch was perfect. In all 3 sections.
Kids really like it when things work, and boy, does this work. It took about one 45 minute class period to get this done, but they definitely got the idea that the area under the F vs x graph meant something. I emphasized, over and over, that area under graphs, if it has a physical meaning, means a change in something. We don’t know, however, what that something is yet.
This is where the classes diverged. The regular class went into a lecture day on types of energy and energy pie charts. But that’s not what I want to write about.
To continue empirically, I wanted them to see that the area under the F vs. x graph (a change in something, as I kept calling it) was meaningful in other situations as well. So next we looked at ramps.
Day 2: Ramps and the Area of F vs. x graphs
On day 2 I told them we were going to again look at the Area of F vs. x graphs, but this time in a different situation. We started with a cart at rest at point A, arbitrary but constant. We wanted to end with the cart at rest at point B up the ramp, also arbitrary and constant. I had them pull carts from A to B in any way they wanted and to find the area under the F vs. x graph. Here’s a sample trial.
I learned some things. First of all, most of them didn't end the cart at rest at point B at first. We did, however, use that to establish that the faster the cart was going at B, the larger the area seemed to be. We will go back and quantify this later (part 3 or 4 of this series, I believe). So we went back and got some data for starting and ending at the same points each time, starting and ending at rest, but getting from A to B in different ways. Here's some sample data.
In discussion it became evident that outliers appeared in one of two general cases; when the cart was difficult to actually stop at point B, and when the cart moved backward at some point. On the whole, it was pretty easy to convince them that the area was the same no matter how you got from A to B as long as the cart didn’t move backward and the cart was at rest again at B. Pretty awesome.
That same day I asked them what measurement would always correlate with the area. Horizontal distance up the ramp? Angle? Height? We were able to quickly show that though distance correlated with area, it didn’t work well if we kept the same distance and changed the angle (we got different areas then). Thus distance is not a universal predictor of the area. How about angle? Similar problem; for one angle you could get infinite areas. How about height? We spent the last minutes of this period showing that if we had an equal change in height, even for two different ramps (same cart of course), that the area was approximately the same. Cool.
Day 3 and 4: Finding the Correlation with Height and the Entrance of Energy
Day 3 was short classes, only 30 minutes because of a pep fest, and I think data collection and whiteboarding could probably be done in one class period. However, the conversation we had about types of energy at the end of day 4 fit really well and it was nice to have that there. But I’m getting ahead of myself.
Day 3, 30 minutes, was spent collecting area vs. change in height data. Some students changed the height just by pulling the cart further up the ramp, and some by changing the angle of the ramp, or a combination of the two. Part of the awesomeness of this lab is that it doesn’t matter; no matter how they change the height, if they collect data consistently and correctly, the results turn out well. (Students won’t, by the way, take data consistently and correctly; I had at least 2 groups in each class with non-sensical data. They don’t set the endpoints of the integral in Loggerpro correctly, or they don’t change the endpoints (thus making the change in height the same for all trials), or they measure change in distance rather than height, or they do one of I’m sure many other things that yield poor results. It’s a learning experience though, and the conversations that come from ‘bad’ data are often just as useful as those that come from ‘good.’)
In any case, the graphs were decently linear. Through a board meeting (circle sharing), groups quickly noticed that the intercept was zero, and that this made sense: if we don’t have any change in height, we haven’t gone anywhere, so the area of F vs. x should also be zero. They then noticed that some groups (conveniently with carts of different masses, *cough cough*) had different slopes. At some point someone noticed that the slope appears to be approximately 10 times the mass. Hmmm, isn’t g really close to 10? Then we looked at units. The slope must have units of Newtons, as the y-axis has units of N*m and the x-axis has units of meters. If the slope were mass times g, the units would indeed be Newtons. Hmm. Note: in all this, I try to ask questions of a couple of words max and let the conversation take its course.
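To spell out that unit check compactly (nothing new here, just the quantities already on the students’ graphs):
$\text{slope} = \frac{A}{\Delta h} \Rightarrow \left[\text{slope}\right] = \frac{\text{N} \cdot \text{m}}{\text{m}} = \text{N}, \qquad \left[mg\right] = \text{kg} \cdot \frac{\text{m}}{\text{s}^2} = \text{N}$
so a slope equal to mg is at least dimensionally sensible.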
This was convincing enough for my students that the slope should be mg. It was, pretty close, for the groups that had decent data. I then asked them to write a general equation to model our data. Most were able to get here;
$A=mg(\Delta h)$
where A is the area under the Force vs Position graph, in N*m.
I pointed out that even though this was a different situation than day 1, the area still gave us something meaningful. But seemingly unrelated to speed! We’re getting there. Let’s rearrange the above equation a bit.
$A=mg(\Delta h)=mgh_2-mgh_1$
$mgh_1+A=mgh_2$
Here is where I finally defined that the quantity mgh is called Gravitational Interaction (or Potential) Energy. I took a side trip for a bit on energy as pain, as described very well (better than I could) in Kelly’s aforementioned post on building the ETM.
Thus what we have found is that the initial gravitational interaction energy plus the area under F vs. x (which, recall, we had emphasized as a change in something) gives us the final gravitational interaction energy. So I guess the area is a change in Energy, huh?
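As a quick numerical sanity check (illustrative numbers, not our class data): a 0.50 kg cart raised 0.20 m gives
$A = mg\,\Delta h \approx (0.50\ \text{kg})(9.8\ \text{N/kg})(0.20\ \text{m}) \approx 0.98\ \text{N} \cdot \text{m}$
so a group should measure an F vs. x area of roughly 1 N*m for that rise, no matter the ramp angle or how the cart was pulled.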
Starting with Day 5 we are going to look at how the area correlates with speed, and use that to figure out Kinetic Energy. We will then use that to transition in to Energy Bar Charts and the rest of the energy unit. More on that in later posts (I think 1731 words is enough for now, huh?)
Concluding thoughts, for now.
I really like that this method strongly emphasizes that the energy is changing due to the Work done (though we haven’t used that word yet), and I plan to use it to strengthen both their methods of using graphs and multiple representations to solve problems as well as to help with the idea of Work itself, which when taught traditionally has really only served to confuse my students. I don’t like, however, that for now I am ignoring changes in thermal energy, which the typical intro to ETM in Modeling Instruction emphasizes from the get go. I used to teach energy where we would ignore friction for weeks, then finally add it in and start all over, and didn’t like that. I think, however, that the idea that the F vs x graph influences the transfer of energy will transfer (hehe) to friction as well. We’ll see, and I’ll keep you updated.
## A Physics PLC: Collaboration at a Distance
This year my school district, like many others, implemented PLCs (Professional Learning Communities) as the driving force behind how we collaborate to help students learn. The directive was that all teachers should meet in a PLC weekly for approximately 30 minutes. This sounds, and can be, great, but I had a problem.
You’re Gonna Need Some Background Info
For 7 years I had been the only physics teacher. This year I took on technology integration half-time, and in addition we have more physics sections, so there are now three of us teaching physics part time. The other two also teach math and chemistry. When the PLC directive came out I was excited to finally have someone to work with. However, it was not to be. The three of us each teach a different course (I teach a college-level course, the math teacher has regular physics, and the chem teacher has ‘applied’ physics, essentially a conceptual class). Since none of us teach the same course, and PLC work was important for the other courses those teachers were teaching, they both decided to go with their other courses. Great, I’m a singleton. Again.
Enter Twitter. I’ve been on Twitter almost two years now, and I have learned more on Twitter in these two years than in the previous six, which included a master’s degree. Among other things I have managed to build a pretty awesome PLN (Personal Learning Network) that includes a couple hundred incredible physics and math teachers from around the country. In particular, the physics Modeling Instruction community is active and extremely helpful on Twitter. So I decided to try to find out if there was anyone else in the same boat as me, or anyone who simply wanted to use student work to inform instruction. I posted a short tweet with a link to a Google doc containing this request:
My name is Casey Rutherford. I am entering teaching for the 8th year, my 7th teaching physics, and my first using Modeling Instruction. I have a relatively odd request.
My school is implementing PLCs, certainly a worthy task. The problem is that at this point there is not a logical person with whom I would form a PLC. Thus my request. I am wondering if any of you would like to form an online PLC with me, working together approximately 30 minutes/week to compare student work. My thought is that we can do a lot with formative assessments, using photos of student whiteboards to form the basis for our conversations. I am, however, open to other ideas as well.
I am very interested in Standards Based Grading as well; however, this particular class is articulated through the University of Minnesota (in fact, it is U of MN Physics 1101 and they get a college transcript upon completing the course), and thus I am not able to implement SBG for this course. It is the only class I am teaching this semester due to a new half-time gig as a technology integration specialist. Thus I think I would like to focus on the impact of modeling on student learning.
I was blown away by the response. Initially I had over 10 people who were interested (ok, so it’s not like that’s hundreds, but I didn’t know if anyone would!). We spent a couple of weeks trying to accommodate multiple, mutually exclusive schedules. I must admit I got a bit caught up in wanting to include the masses; I thought it was fun that so many people thought this was something worthwhile. However, at some point Kelly, who ended up in the core group, said that this really only made sense if it was something one could attend regularly.
Duh. PLC. Norms, relationships, student work.
The Core Group
We ended up with a core group of six of us; myself, Kelly, Fran, Meg, Leah, and Matt.
This group is both diverse and similar. All of us use Modeling as our primary mode of instruction. We are all at least open to Standards Based Grading, if not practicing it. We are all already on Twitter and thus relatively connected to the larger physics education community. We all like to learn and to work towards increasing student learning.
On the other hand, we all teach in very different settings. Fran, Matt, and I teach in very different public schools in Minnesota, Iowa, and Pennsylvania. Kelly teaches at a private boarding school in Delaware, Leah at a private Jewish girls’ high school in New York City, and Meg at a public charter school in upstate New York. That diversity of perspective has been awesome.
The Hangout
We typically meet on Thursday nights for about an hour, though that time frame is flexible depending on what people bring to look at. When we started, we thought that despite teaching different classes in different settings, we could try doing some common formative assessments. We developed a formative assessment for constant velocity motion, and a number of us assigned it to our students. We then took a week to look at the data from the first teacher, who was already ahead of the rest of us. It was pretty fascinating that the students kept using a particular reference, ‘the motion detector’, in answering the questions despite the fact that no detector was mentioned in the problem. It turned out they had done much of the development of the concept using motion detectors, so they thought of detectors as a universal reference point. Turns out looking at student work informs instruction!
In the week or two after that, we looked at other teachers’ students’ answers, but there was a problem. The sheer amount of information from the Google Form was pretty overwhelming. We spent a significant amount of time just sifting through it and trying to get the other PLC members to look at the same cell. We did some color coding, but didn’t have a very well-defined system.
A Different Way to Analyze Student Work
We fairly organically decided that it would be easier, especially because of the very different pacing of our different classes, to simply have volunteers ‘bring’ student work to look at for each meeting. Thus whenever I give a quiz I scan or take a picture of some examples that represent common or interesting mistakes students made on the quiz. Others do the same. Not only do we get the chance to see how each other’s students are responding to similar questions (it really helps here that we all use, at the core, the Modeling Instruction curriculum), but we can discuss how to best help students avoid pitfalls and misunderstandings. A typical night starts with a check-in on how things are going and, often, advice for someone who is struggling with something. Then someone posts a link to a quiz and we take a minute or two to look it over. Someone notices something, and discussion ensues. As discussion slows on one quiz, someone posts another. There is no rule or defined procedure here, but it seems to work well.
Often these quizzes lead to discussions of instructional techniques. One week Kelly was sharing her thoughts on having students use vector addition diagrams, rather than the traditional component method, for solving force problems. She then opened a shared Google Drawings window and demonstrated their usefulness. I introduced this diagram to my kids the next day and was blown away by how much they liked it. Collaboration for the win!
Building Relationships
Since the start of our gatherings I’ve thought a lot about Kelly’s statement that it would make more sense with a regular group. As we’ve now been meeting for almost half a year, I have found that I’ve become very comfortable with the other members. It’s humbling and sometimes embarrassing to share work that your students produced that is not perfect. A great PLC meets those imperfections with empathy and advice rather than with judgement. We’re all in this together, and all students make mistakes. In fact, one thing that I have become more convinced of as a result of our meetings is that the very process of making mistakes is essential to learning. Lots of research in science education, physics in particular, points to the idea that in order to learn and retain scientific reasoning, students must first wrestle with the dissonance between their own thinking and scientific explanations. (Citations needed, I know; call me out if you want and I’ll dig some up for you! Here’s a bit to tide you over.) Anyway, the point is that as teachers it is hard to open up and be vulnerable, but so far my experience is that the resulting learning about student learning has been well worth it.
One highlight for me was that when I was in the NYC area over winter break I was able to meet Leah in person for coffee. It is really fun getting a chance to meet someone in person whom you previously only knew online! I look forward to continuing to build relationships with my PLC, and I hope to meet more of them in person eventually.
Why G+ Hangouts?
G+ hangouts were a natural choice for us. We all had Google accounts already, and G+ allows us to video chat, share documents, chat on the side (which also helps in posting links to student work stored in Dropbox, Evernote, or Drive), and even to use Google Drawings or screenshare. G+ also allows for recording hangouts, but we have not done that as there was consensus that recording would take away from the ‘safe harbor’ aspect of the meeting. There are certainly other options to G+; the Global Physics Department uses an enterprise version of Blackboard Collaborate and the Global Math Department uses Big Marker. We never even considered anything else, however, as G+ hangouts has performed as well as we need it to.
At the End of the Day…
What’s better about my teaching now? So far this year my PLC meetings have resulted in changes in unit placement, improvements in teaching specific topics, additions of representations to help student visualizations, improvements in my understanding of student misconceptions, and an overall increase in the big picture view of learning physics through a cyclic treatment of the various models (rather than treating topics as isolated units). I can only imagine what further meetings will lead to!
|
# Math Help - Proof involving rational numbers
1. ## Proof involving rational numbers
a) Prove that between any two rational numbers there is another rational number; that is, if a, b are in Q and a < b, then there exists z in Q such that a < z < b.
b) Prove that between any two rational numbers there are infinitely many rational numbers.
2. ## Proof that there is a rational between any two real numbers
Since $a < b$ then
$0 < \frac{1}{b-a}$. By the Archimedean principle we can pick an integer N such that
$0 < \frac{1}{b-a} < N$. This implies that
$1 < N(b-a)$. We can now pick another integer n such that
$n \le N(a) < n+1$. Adding 1 we get
$N(a) < n+1 \le N(a)+1$. Using the fact that $1 < N(b-a)$ we get
$n+1 \le N(a)+1 < N(a)+N(b-a) = N(b)$. This implies that
$N(a) < n+1 < N(b) \implies a < \frac{n+1}{N}< b$
3. You don't have to be that complicated! If a and b are rational numbers, then $c= \frac{a+ b}{2}$ is a rational number between a and b.
Suppose there were only a finite number of rational numbers between a and b. Then there must be a largest such number: c. But (b+c)/2 is a rational number between c and b, and so between a and b, and is larger than c: contradiction.
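For completeness, the inequality used above follows directly from $a < b$ by adding $a$ (respectively $b$) to both sides and dividing by 2:
$a = \frac{a+a}{2} < \frac{a+b}{2} < \frac{b+b}{2} = b$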
|
# Local temporary file system in memory
The memory of compute nodes can also be used to store data through file-system-like access. Every Linux OS mounts (part of) the system memory under the path /dev/shm/, where any user can write, similar to the /tmp/ directory. This kind of file system can help when the same files are accessed repeatedly or when dealing with very small files, which are usually troublesome for parallel file systems.
Warning
Writing to /dev/shm reduces the memory available to the OS and may cause the compute node to go Out Of Memory (OOM), which will kill your processes, interrupt your job, and may even crash the compute node itself.
Each architecture of compute nodes at NERSC ships with a different memory layout, so the advice is to first inspect the memory each architecture reserves for /dev/shm in an interactive session using df -h /dev/shm (by default the storage space reserved for tmpfs is half the physical RAM installed). Note that /dev/shm is a file system local to each node, so no shared file access is possible across multiple nodes of a job.
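A minimal way to run that check from an interactive allocation (the QOS and time limit below are placeholders; substitute whatever your account and architecture require):
salloc --nodes=1 --qos=interactive --time=00:10:00   # placeholder request for one interactive compute node
df -h /dev/shm                                       # shows the size and current usage of the node's tmpfs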
Since data is purged after every job completes, users cannot expect their data to persist across jobs: they need to manually stage their data into /dev/shm at the start of every execution and stage it out before the job completes. If this data movement involves many small files, the best approach is to create an archive containing all the files beforehand (e.g. on a DTN node, to avoid wasting precious compute time), then, inside the job, extract the data from the archive into /dev/shm: this minimizes the number of accesses to small files on the parallel file systems and produces instead large contiguous file accesses.
## Example stage-in
For example, let’s assume several small input files are needed to bootstrap your jobs, and are stored in your scratch directory at $SCRATCH/files/. Here’s how you could produce a compressed archive input.tar.gz (note that the $SCRATCH variable is not expanded on the DTN nodes):
ssh dtn03.nersc.gov
cd /global/cscratch1/sd/$USER/
tar -czf input.tar.gz files/
Now you can unarchive it in /dev/shm when inside a job. Note that there may already be some system directories in /dev/shm which may cause your process to misbehave: for this reason you may want to create a subdirectory just for you and unarchive your files in there, when inside your job:
mkdir /dev/shm/$USER
tar -C /dev/shm/$USER -xf $SCRATCH/input.tar.gz
## Example stage-out
A similar approach to the stage-in needs to be taken before the job completes, in order to store important files created by the job. For example, if a job created files in /dev/shm/$USER/, we may want to archive and compress them into a single file with:
cd /dev/shm/$USER/
tar -czf $SCRATCH/output_collection/output.tar.gz .
## MPI example jobs
When dealing with multiple nodes using MPI, only one process per node has to create directories or create archives, to avoid collisions or data corruption. The following example creates a directory /dev/shm/$SLURM_JOBID on each node, runs a mock application that generates multiple files in /dev/shm/$SLURM_JOBID/ and finally creates a tarball archive of the data from each node, storing it in $SCRATCH/outputs/:
#!/bin/bash
#SBATCH ... # here go all the slurm configuration options of your application
set -e # Exit on first error
export OUTDIR="$SCRATCH/outputs/$SLURM_JOBID"
export LOCALDIR="/dev/shm/$SLURM_JOBID"
export CLEANDIR="$SCRATCH/cleaned_outputs/"
mkdir -p "$OUTDIR" "$CLEANDIR"
# Create the local directory in /dev/shm, using one process per node
srun --ntasks $SLURM_NNODES --ntasks-per-node 1 mkdir -p "$LOCALDIR"
# The following is just an example: it creates 1 small file per process in $LOCALDIR/$RANDOM.
# Substitute it with your application, and make it create files in $LOCALDIR
srun bash -c 'hostname > $LOCALDIR/$RANDOM'
# And finally send one "collecting" process per node to archive each local directory into a separate archive.
# We have to use 'bash -c' because 'hostname' needs to be interpreted on each node separately
srun --ntasks $SLURM_NNODES --ntasks-per-node 1 bash -c 'tar -cf "$OUTDIR/output_$(hostname).tar" -C "$LOCALDIR" .'
You may also want to concatenate these archives into a single one for easier analysis (note that only uncompressed archives can be concatenated). To do so you can add this line after the last srun above:
tar -Af $(/usr/bin/ls -1 "$OUTDIR"/*.tar) && cp -a "$(/usr/bin/ls -1 "$OUTDIR"/*.tar | head -1)" "$CLEANDIR/$SLURM_JOBID.tar"
The line above will make all the nodes of your job wait for this single process, therefore "wasting" compute hours. Alternatively you can run the concatenation in a separate job (e.g. a shared job using a single core) or manually on the data transfer nodes, so as not to waste compute resources, especially if the archives are large. Here's a separate "aggregator" script:
#!/bin/bash
#SBATCH ... # here go all the slurm configuration options of your application
set -e # Exit on first error
# Get the name of the directory containing the files to be merged
[[ $# -ne 1 ]] && echo "Error. Missing input arg: DIRECTORY" && exit 1 || cd "$1"
export CLEANDIR="$SCRATCH/cleaned_outputs/"
mkdir -p "$CLEANDIR"
# Concatenate all *.tar archives into a single one using the name of the
# current dir (the job id) as the new name.
cat *.tar > "$CLEANDIR/$(basename $PWD).tar"
This "aggregator" job should be submitted after the first script has completed, or you can use a for loop to iterate over all the output directories, like this:
for d in $SCRATCH/outputs/*; do sbatch aggregator.slurm "$d"; done
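If you prefer to chain the two steps automatically, Slurm job dependencies can do it; in the sketch below the first script is assumed to be named main.slurm (both script names are placeholders for whatever you actually use):
# Submit the main job, capture its job ID, then queue the aggregator to run only after it finishes successfully
JOBID=$(sbatch --parsable main.slurm)
sbatch --dependency=afterok:$JOBID aggregator.slurm "$SCRATCH/outputs/$JOBID"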
## Final notes
The user needs to pay attention to the memory usage on the node: storing too much data in a tmpfs in memory may force the kernel to kill running processes and/or cause the node to crash if not enough memory is left available.
Also important to note is that /dev/shm, being volatile memory, does not offer any fault tolerance solution, and a node crash will cause the data to be lost: see also our documentation on Checkpointing for solutions.
If you're creating large archives (over the GB threshold) please consider striping the scratch directory where you will create the archive.
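As a rough sketch (the stripe count of 8 is an arbitrary illustration, not a site recommendation), Lustre striping can be set on the target directory before the archive is written there:
lfs setstripe -c 8 "$SCRATCH/output_collection"   # files created in this directory afterwards are striped across 8 OSTs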
A similar solution is to use temporary XFS file systems on top of Lustre, when using shifter containers.
|
# ALGEBRAIC COMBINATORICS
Intersection Pairings for Higher Laminations
Algebraic Combinatorics, Volume 4 (2021) no. 5, pp. 823-841.
One can realize higher laminations as positive configurations of points in the affine building [7]. The duality pairings of Fock and Goncharov [1] give pairings between higher laminations for two Langlands dual groups $G$ and ${G}^{\vee }$. These pairings are a generalization of the intersection pairing between measured laminations on a topological surface.
We give a geometric interpretation of these intersection pairings in a wide variety of cases. In particular, we show that they can be computed as the minimal weighted length of a network in the building. Thus we relate the intersection pairings to the metric structure of the affine building. This proves several of the conjectures from [9]. We also suggest the next steps toward giving geometric interpretations of intersection pairings in general.
The key tools are linearized versions of well-known classical results from combinatorics, like Hall’s marriage lemma, König’s theorem, and the Kuhn–Munkres algorithm, which are interesting in themselves.
DOI: 10.5802/alco.182
Classification: 05B35, 20E42, 90C24, 13F60
Keywords: Discrete geometry, buildings, matroid, convexity, tropical geometry, cluster algebras.
Le, Ian 1
1 Perimeter Institute for Theoretical Physics Waterloo, ON N2L 2Y5, Canada
License: CC-BY 4.0
Copyrights: The authors retain unrestricted copyrights and publishing rights
Le, Ian. Intersection Pairings for Higher Laminations. Algebraic Combinatorics, Volume 4 (2021) no. 5, pp. 823-841. doi : 10.5802/alco.182. https://alco.centre-mersenne.org/articles/10.5802/alco.182/
[1] Fock, Vladimir; Goncharov, Alexander Moduli spaces of local systems and higher Teichmüller theory, Publ. Math. Inst. Hautes Études Sci. (2006) no. 103, pp. 1-211 | DOI | MR | Zbl
[2] Fomin, Sergey; Shapiro, Michael; Thurston, Dylan Cluster algebras and triangulated surfaces. I. Cluster complexes, Acta Math., Volume 201 (2008) no. 1, pp. 83-146 | DOI | MR | Zbl
[3] Goncharov, Alexander; Shen, Linhui Geometry of canonical bases and mirror symmetry, Invent. Math., Volume 202 (2015) no. 2, pp. 487-633 | DOI | MR | Zbl
[4] Goncharov, Alexander; Shen, Linhui Donaldson–Thomas transformations of moduli spaces of G-local systems, Adv. Math., Volume 327 (2018), pp. 225-348 | DOI | MR | Zbl
[5] Gross, Mark; Hacking, Paul; Keel, Sean; Kontsevich, Maxim Canonical bases for cluster algebras, J. Amer. Math. Soc., Volume 31 (2018) no. 2, pp. 497-608 | DOI | MR | Zbl
[6] Kamnitzer, Joel Hives and the fibres of the convolution morphism, Selecta Math. (N.S.), Volume 13 (2007) no. 3, pp. 483-496 | DOI | MR | Zbl
[7] Le, Ian Higher laminations and affine buildings, Geom. Topol., Volume 20 (2016) no. 3, pp. 1673-1735 | DOI | MR | Zbl
[8] Le, Ian Cluster structures on higher Teichmuller spaces for classical groups, Forum Math. Sigma, Volume 7 (2019), Paper no. e13, 165 pages | DOI | MR | Zbl
[9] Le, Ian; O’Dorney, Evan Geometry of positive configurations in affine buildings, Doc. Math., Volume 22 (2017), pp. 1519-1538 | DOI | MR | Zbl
[10] Moshonkin, Andrey G. Concerning Hall’s theorem, Mathematics in St. Petersburg (Amer. Math. Soc. Transl. Ser. 2), Volume 174, Amer. Math. Soc., Providence, RI, 1996, pp. 73-77 | DOI | MR | Zbl
[11] Murota, Kazuo Discrete convex analysis, Math. Programming, Volume 83 (1998) no. 3, Ser. A, pp. 313-371 | DOI | MR | Zbl
[12] Rado, Richard A theorem on independence relations, Quart. J. Math. Oxford Ser., Volume 13 (1942), pp. 83-89 | DOI | MR | Zbl
|
### Theory:
Humans have always searched for a magical medicine which, if drunk once, would keep every form of life healthy forever, a miraculous substance that could sustain life endlessly. But the real elixir is right in front of our eyes, the most common of liquids, humble plain water! The author recollects his experience of visiting the Libyan desert, where he stood on the line that separates the desert from the Nile Valley in Egypt.
The line that divides the desert from the river!
On one side, he saw huge waves of sand without a single green spot or any sign of life. On the other side, he saw the most fertile, most densely occupied land on earth, swarming with people and greenery. He wondered what brought about this magnificent difference. It is due to the river Nile, which flows down to the Mediterranean over a distance of $$2000$$ miles. Geologists, people who study the constituents of the earth, claim that the river itself has created the soil of the Nile valley. It has accumulated fine mineral sediments from its floodwaters while flowing from the Abyssinian highlands and from the interior of Central Africa, and has deposited them for hundreds of years where the river flows into the sea. The author says the country of Egypt itself was created by the river Nile. It has given life to the whole civilization and sustained the livelihood of the area by its regular presence every year.
Egypt's ancient civilization was created and is sustained by the life-giving waters of Nile!
The author says he has mentioned the example of the Nile, and could cite many more, only to reiterate the fact that we do not take water seriously. Just because it is the most common substance found on earth does not mean it is not essential; it is among the most powerful and glorious things on earth. It has played a huge role in shaping our history and continues to play a critical part in life on this planet.
Water, in any form, adds beauty to a place - it can be a small stream dripping down the rocks, or a small pond on the way where domestic animals satisfy their thirst during evenings. It makes the whole place beautiful.
There is nothing which adds so much to the beauty of the countryside as water!
The concept of rain-fed tanks is common in South India, where tanks are built to collect rainwater for irrigation. Such tanks look lovely when they are full, a wonderful treat for the eyes, but the system has not been given proper care and maintenance. The tanks are shallow, yet it hardly matters: the water is so full of mineral sediment and reflects the light in such a way that the tank bed cannot be seen from outside. These tanks are essential for agriculture in South India, and some of the huge ones are a beautiful sight, especially at sunrise and sunset. The author compares water in a landscape to the eyes in a human face. It reflects the mood of the hour: when the sun is bright and shining, the water sparkles; when it is cloudy, it becomes dull and dark.
A distinctive property of water is its power to carry fine particles of soil and minerals in suspension. This is how the water in rainwater tanks gets its colour, which varies with the type of earth in the catchment and with the weather: after fresh rain the colour turns to brighter tones. Fast-moving water has the force to carry bigger and heavier particles, but the finer suspended particles remain in the water far longer and travel great distances. Though each particle is tiny, they are enormous in number, and huge amounts of solid sediment are transported in this way.
Silt-laden water - filled with nutrients and minerals!
When water carrying soil and minerals mixes with seawater, there is a swift action of precipitation: here precipitation means the settling out of the suspended particles, not rain or snow. This action is very evident when we travel by steamboat down a great river towards the deep sea. The colour of the water changes continuously from muddy red or brown (when silt is mixed in it) through different shades of yellow and green until it finally becomes blue in the deep sea. Detailed studies of the silt deposited in such areas have revealed that this silt has built up vast areas of land, which are very fertile in nature.
One of the most important geological processes is the simple flow of water. It is one of the primary agents converting the rocks of the earth's crust into soil. Though this is a huge advantage, there is an equal disadvantage as well: in some circumstances, flowing water erodes the very soil that forms the basis of all farming land. If not checked in time, it can have devastating effects for the whole country. The washing away of soil is one of the gravest problems in India, especially as it is an agricultural country, so the causes of soil erosion and the measures to prevent it have to be examined carefully.
Soil erosion!
Soil erosion takes place step by step. The first stages may not be evident, but as it progresses the cutting and washing away of the soil becomes very clearly visible, and it is distressing to see. It shows itself in the formation of deep, narrow gullies that make the land unfit for farming. Sudden heavy rain is a major contributing factor: heavy rains produce a massive outflow of water, which carries the soil away with it. Other factors which worsen this problem are:
• slope in the land, which makes water rush off faster
• absence of vegetation (trees) on the land, which would naturally check erosion
• ruts or grooves made by the passage of vehicles, which allow a quicker, stronger outflow of water
• absence of any check on such outflow of water
When such erosion occurs, unbelievable quantities of soil can be washed out. It is to be noted that such huge amounts of soil are washed away frequently!
This problem of soil erosion proves to be a significant threat to agriculture in many parts of India. It requires immediate action and calls for preventive measures. Some of them are:
• Terracing of land refers to a process of cutting pieces of sloped land into a series of successively receding flat surfaces or platforms, which resemble steps. It stops water from rushing out and therefore silt sediments are also retained.
• Building walls (bunds) around water bodies to hold back water.
• Contour cultivation or ploughing is the farming practice of planting across a slope, following its elevation contour lines. These contour lines, already present along the hill slopes, create breaks for the water, reducing the formation of gullies during heavy rainfall. This gives the water extra time to settle into the soil before it is washed away.
• Planting correct types of crops in the related areas of soil. Each crop requires a different kind of soil and climate and it has to be grown accordingly.
The primary purpose of the above measures is to check the flow of water at the earliest possible stage, before it gains the speed and force to wash away the topsoil along with its nutrients. This can save us from massive destruction in the future.
Water is a basic necessity. All plants, animals, and even humans have a considerable amount of water in their bodies, and these fluids allow the parts of the body to move and function smoothly. Just as water is imperative for bodily functions in animals, it is also vital for plants: moisture in the soil is an essential factor in the growth of crops, and each species consumes water at a different rate. Hence, conservation and effective utilization of water resources is a basic requirement for the welfare of the people.
Artesian water is a specific type of underground water that comes out through springs, due to natural pressure. Except for artesian water, our only sources are rainwater or snowfall. Indian agriculture depends heavily on monsoon rains and is immediately affected when seasonal rains fail.
Artesian water, an underground source of water!
Soil erosion and insufficient rains are related to each other. If the preventive measures are adopted effectively, they check soil erosion and also help to protect the water available on and in the soil, thus serving a double purpose.
It is clear that India, being an agricultural country, depends on seasonal rainfall. When a considerable quantity of water comes down as rain in a particular season, part of it inevitably runs off the ground; we need to collect that excess water and use it effectively later. Hence the conservation of water is of great importance. Much of the rainwater flows into rivers and streams and finally reaches the ocean, and large amounts of silt-laden water, with all its nutrients, are thus lost to the sea: a huge loss for mankind. Taking control of this abundant water and using it productively, including for sustainable energy, is a matter of national importance. If we handle the issue with a well-planned and bold course of action, enormous areas of land now wasted under scrub and bush can be converted into fertile farmland.
Our agriculture depends on the seasonal rainfall!
Another aspect closely connected with the conservation of water resources is afforestation. Planting trees suited to the vegetation, climate and soil on every possible bit of land is essential. Converting every bit of waste and scrub land into productive, well-tended forest is the need of the hour in our country, and such an effort would create a good amount of wealth for the country.
• Having more trees also helps in preventing soil erosion.
• Forests help conserve rainwater by reducing wasteful run-off.
• Forests reduce the burning of farm manure as fuel, since they provide more fuel at cheaper rates.
The measures discussed above for handling the flow of water and conserving it also serve a secondary purpose: transport. Water transport by boat along canals and rivers is one of the cheapest forms of transportation, so instead of spending huge amounts on rail and road systems alone, we can develop internal waterways. Hydro-electric power may also be tapped from these water resources; the production of electric power can greatly benefit rural India and facilitates growth in all directions.
Boats and barges travel through canals and rivers!
Water is the most common of all liquids because it is available to everyone, and yet it is also the most uncommon: it has wonderful properties that enable it to sustain plant and animal life. How can the most common liquid also have such exceptional properties? The examination of the nature and properties of water remains of tremendous interest, and research into water is a never-ending process.
Water - the most common and uncommon form of liquid!
|
## Alignment: Overall Summary
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 partially meet the expectations for alignment. The instructional materials meet expectations for Gateway 1, focus and coherence, by focusing on the major work of the grade and being coherent and consistent with the Standards. The instructional materials partially meet the expectations for Gateway 2, rigor and practice-content connections. The materials partially meet the expectations for rigor by reflecting the balances in the Standards and giving appropriate attention to procedural skill and fluency. The materials partially meet expectations for practice-content connections: they identify the practices and attend to the specialized language of mathematics, but do not attend to the full intent of the practice standards.
|
## Gateway 1:
### Focus & Coherence
Score: 13/14, Meets Expectations (scoring bands: 0-7 Does Not Meet Expectations, 8-11 Partially Meets Expectations, 12-14 Meets Expectations)
## Gateway 2:
### Rigor & Mathematical Practices
Score: 11/18, Partially Meets Expectations (scoring bands: 0-10 Does Not Meet Expectations, 11-15 Partially Meets Expectations, 16-18 Meets Expectations)
|
## Gateway 3:
### Usability
Score: N/A out of 38; not rated (scoring bands: 0-22 Does Not Meet Expectations, 23-30 Partially Meets Expectations, 31-38 Meets Expectations)
## The Report
## Focus & Coherence
#### Meets Expectations
Gateway One Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 meet the expectations for Gateway 1, focus and coherence. Assessments represent grade-level work, and items that are above grade level can be modified or omitted. Students and teachers using the materials as designed would devote a majority of time to the major work of the grade. The materials are coherent and consistent with the standards.
### Criterion 1a
Materials do not assess topics before the grade level in which the topic should be introduced.
2/2
Criterion Rating Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 meet the expectations that the materials do not assess topics from future grade levels. The instructional materials do contain assessment items that assess above grade-level content, but these can be modified or omitted.
### Indicator 1a
The instructional material assesses the grade-level content and, if applicable, content from earlier grades. Content from future grades may be introduced but students should not be held accountable on assessments for future expectations.
2/2
Indicator Rating Details
The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 meet the expectations for assessing the grade-level content and if applicable, content from earlier grades.
There are no above grade level assessment items for Grade 7. Examples of assessment items which assess grade-level standards include:
• Chapter 1, Quiz 2, Item 9, students use a vertical number line that shows elevations of a submarine after certain events to determine the distance the submarine rises after diving and the original elevation of the submarine. (7.NS.1.c)
• Chapter 3, Test A, Item 13, students factor a linear expression in order to determine the length of a square patio that has a perimeter of 16x + 12 feet. (7.EE.1)
• Chapter 3, Performance Task, Item 1, students write and simplify expressions from information provided in a diagram and a table. They describe and explain what they notice about the two expressions. (7.EE.1-2)
• Chapter 5, Test A, Item 6, students find the density of a substance in grams per milliliter by examining a graph. (7.RP.2.d)
• Course Benchmark 2, Item 30, students find the actual perimeter and area of a square using information about the scale drawing of a square. (7.G.1)
• Chapter 8, Alternative Assessment, Item 1, students are given the scenario about finding out how the residents in their town feel about opening a new gas station. Students describe how to conduct a survey so that the sample is biased, and unbiased survey of 200 people. They project how many residents out of 6200 will support the gas station if 80 out of 200 supported it. (7.SP.1-2)
### Criterion 1b
Students and teachers using the materials as designed devote the large majority of class time in each grade K-8 to the major work of the grade.
4/4
Criterion Rating Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 meet the expectations for spending a majority of class time on major work of the grade when using the materials as designed. Time spent on the major work was figured using chapters, lessons, and days. Approximately 78% of the time is spent on the major work of the grade.
### Indicator 1b
Instructional material spends the majority of class time on the major cluster of each grade.
4/4
Indicator Rating Details
The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 meet expectations for spending a majority of instructional time on major work of the grade. This includes all the clusters in 7.RP.A, 7.NS.A, and 7.EE.A, B.
To determine focus on major work, three perspectives were evaluated: the number of chapters devoted to major work, the number of lessons devoted to major work, and the number of instructional days devoted to major work.
• There are 10 chapters, of which 7.4 address major work of the grade, or approximately 74%
• There are 152 lessons, of which 119 focus on the major work of the grade, or approximately 78%
• There are 152 instructional days, of which 119 focus on the major work of the grade, or approximately 78%
A day-level analysis is most representative of the instructional materials because the number of days is not consistent within chapters and lessons. As a result, approximately 78% of the instructional materials focus on the major work of the grade.
### Criterion 1c - 1f
Coherence: Each grade's instructional materials are coherent and consistent with the Standards.
7/8
Criterion Rating Details
The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 meet the expectations that the materials are coherent and consistent with the standards. The materials represent a year of viable content. Teachers using the materials would give their students extensive work in grade-level problems, and the materials describe how the lessons connect with the grade-level standards. However, above grade-level content is present and not identified.
### Indicator 1c
Supporting content enhances focus and coherence simultaneously by engaging students in the major work of the grade.
2/2
Indicator Rating Details
The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 meet expectations that supporting work enhances focus and coherence simultaneously by engaging students in the major work of the grade.
The supporting domain Statistics and Probability enhances focus and coherence to major standard/clusters of the grade, especially domains 7.NS and 7.RP. For example:
• In Chapter 5, Section 5.2, Solve Problems Involving Scale Drawings of Geometric figures (7.G.1) is connected to the major work of analyzing proportional relationships (7.RP.A). Students write and solve a proportion using the scale and ratios of the lengths of a drawing.
• In Chapter 7, Section 7.1, 7.SP.5 is connected to 7.RP.A as students work with probability as the ratio of desired outcomes to possible outcomes and examine probabilities of events ranging from 0 to 1, inclusive. Relative frequency is also defined as a ratio. For example, in Problem 4, students describe the likelihood of each event when making three-point shots or missing the shots.
• Chapter 7, Section 7.3, Compound Events, connects 7.SP.8.a with 7.RP.3 when students determine probabilities by computing with rational numbers and representing answers as fractions and percents. For example, Problem 4 expresses the probability as 1/6 or 16 2/3%.
• In Chapter 7, Section 7.3, Probability of Compound Events, 7.SP.8 is connected to the major work of solving real-world problems with rational numbers involving the four operations, 7.NS.3. Students solve simple and compound probabilities using rational numbers in various forms.
• Chapter 8, Section 8.1, Example 3 uses proportions to make projections in a real-world modeling problem. After randomly surveying 75 students, students use the results to estimate the corresponding number of students in the total population of 1200. Cluster 7.SP.A supports 7.RP.3.
• In Chapter 8, Section 8.2, Self-Assessment, Problem 4, students apply and extend previous understandings of operations with fractions (7.NS.A) to draw inferences about a population (7.SP.A). Students find the means of three samples of the number of hours music students practice each week, and use the means to make one estimate for the mean number of practice hours. The calculations result in a rational number that, when converted to a decimal, results in a repeating decimal, which they make sense of in order to answer the question about the number of hours music students practice each week (7.NS.2).
• Chapter 9, Section 9.5, Problem Solving with Angles, 7.G.5 is connected to the major work of solving word problems leading to equations, 7.EE.4.a as students write and solve equations to find the missing angle using properties of supplementary, complementary, adjacent, and vertical angles.
### Indicator 1d
The amount of content designated for one grade level is viable for one school year in order to foster coherence between grades.
2/2
Indicator Rating Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 meet expectations that the amount of content designated for one grade-level is viable for one year. As designed, the instructional materials can be completed in 152 days.
The pacing shown in the Teacher Edition includes a total of 152 days. This is comprised of:
• 122 days of lessons (62 lessons),
• 20 days for assessment (one day for review, one day for assessment), and
• 10 days for “Connecting Concepts”, which is described as lessons to help prepare for high-stakes testing by learning problem-solving strategies.
The print resources do not contain a pacing guide for individual lessons. The pacing guide allows three days for this section. Additional time may be spent utilizing additional resources not included in the pacing guide: Problem-Based Learning Investigations, Rich Math Tasks, and the Skills Review Handbook. In addition, there are two quizzes per chapter located in the Assessment Book which indicates where quizzes should be given. The Resources by Chapter materials also include reteaching, enrichment, and extensions. In the online lesson plans, it is designated that lessons take between 45-60 minutes. The day to day lesson breakdown is also noted in the teacher online resources.
### Indicator 1e
Materials are consistent with the progressions in the Standards i. Materials develop according to the grade-by-grade progressions in the Standards. If there is content from prior or future grades, that content is clearly identified and related to grade-level work ii. Materials give all students extensive work with grade-level problems iii. Materials relate grade level concepts explicitly to prior knowledge from earlier grades.
1/2
Indicator Rating Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 partially meet expectations for the materials being consistent with the progressions in the Standards.
The materials concentrate on the mathematics of the grade, and are consistent with the progressions in the Standards. The publisher recommends using four resources together for a full explanation of the progression of skill and knowledge acquisition from previous grades to current grade to future grades. These resources include: “Laurie’s Notes”, “Chapter Overview”, “Progressions”, and “Learning Targets and Success Criteria”. For example:
• Laurie’s Notes, “Preparing to Teach” describe connections between content from prior grades and lessons to the current learning. For example, in Chapter 4, Section 4, “Students should know how to graph numbers on a number line and how to solve one-variable inequalities using whole numbers. In the exploration, students will be translating inequalities from verbal statements to graphical representations and symbolic sentences.”
• Chapter Overviews describe connections between content from prior and future grades to the current learning, and the progression of learning that will occur. For example, Chapter 5, “Laurie’s Notes: Chapter Overview”, “The study of ratios and proportions in this chapter builds upon and connects to prior work with rates and ratios in the previous course.” This supports Standard 6.RP. In Sections 5.1 and 5.2, students decide whether two quantities are in a proportional relationship using ratio tables. This supports Standard 7.RP.2.a and uses unit rates involving rational numbers. During Sections 5.3, 5.4, and 5.5, students write, solve, and graph proportions. This supports Standard 7.RP.2.a-7.RP.3, “Graphing proportional relationships enables students to see the connection between the constant of proportionality and equivalent ratios”, but the term “Slope”, Standards 8.EE.5-6, is not included. In Section 5.6, students work with scale drawings, which supports Standard 7.G.1.
• Each chapter’s Progressions page contains two charts. “Through the Grades” lists the relevant portions of standards from prior and future grades (grades 6 and 8) that connect to the grade 7 standards addressed in that chapter. For example, in Chapter 4, Sections 4.1-4.2, students use algebra tiles to review the process of solving one-step equations. This is identified as revisiting work from a prior grade level in the “Chapter Exploration” and supports the grade-level work in Section 4.3 of solving equations of the form px + q = r and p(x + q) = r. This supports Standard 7.EE.4a.
Each lesson presents opportunities for students to work with grade-level problems. However, “Scaffolding Instruction” notes suggest assignments for students at different levels of proficiency (emergent, proficient, advanced). These levels are not defined, nor is there any tool used to determine which students fall into which level. In the Concepts, Skills and Problem Solving section at the end of each lesson, problems are assigned based on these proficiencies; therefore, not all students have opportunities to engage with the full intent of grade-level standards. For example:
• In the Teacher Edition, Chapter 6, Section 6.5, the assignments for proficient and advanced students includes a reasoning task in which students determine the price of a drone that is discounted 40%, and then discounted an additional 60% a month later. This reasoning task is omitted from the assignments for emerging students.
• In the Teacher Edition, Chapter 9, Section 9.2, the assignments for advanced students include a critical thinking task in which students determine how increasing the radius of a circle impacts the area of the circle. This critical thinking task is omitted from the assignments for emerging and proficient students.
• Each section within a chapter includes problems where the publisher states, “students encounter varying “Depth of Knowledge” levels, reaching higher cognitive demand and promoting student discourse”. In Chapter 8, Section 8.1, students examine a sample of a population for validity. This supports Standard 7.SP.1 and use a random sample to draw inferences about a population which supports Standard 7.SP.2.
• In “Exploration 1” students “make conclusions about the favorite extracurricular activities of students at their school” by first identifying the population and samples of the population, (DOK Level 1) and then by evaluating the differences between two samples and evaluating their conclusions for validity and explain their thinking, (DOK Level 3).
• Problem 2 students compare two samples to determine which sample is unbiased, (DOK Level 2).
• In Chapter 4, Section 4.6, students roll two different colored dice with negative and positive numbers on each cube. When the students roll a pair of dice, they write an inequality to represent them. Then they roll one die and multiply each side of the inequality to represent them. They are then asked if the original inequality is still true. Finally, they are asked to make conjectures about how to solve an inequality of the form ax <b for x when a>0 and when a<0. These conjectures will help to develop the key idea(s) for the section which is to write and solve inequalities using multiplication and division. This supports standard 7.EE.4.b.
• In Chapter 6, students use a percent model to justify their answers, instead of assessing the reasonableness of answers using mental computation and estimation strategies. Mental computation and estimation are strategies specifically called for in standard 7.EE.3.
Materials explicitly relate grade-level concepts to prior knowledge from earlier grades. At the beginning of each section in Laurie’s Notes, there is a heading marked “Preparing to Teach”, which includes a brief explanation of how work in prior courses relates to the work involved in that lesson. In some cases it outlines what happened in prior courses, but is not specific to which grade or course this happens. For example:
• In Chapter 1, Section 1.1, it states that in prior courses students were introduced to integers, absolute value, and number lines. For example, “It is important that students review these foundational skills because they are necessary for adding and subtracting rational numbers.” In Chapter 1, Section 1.1, students review the concept of absolute value (6.NS.7). This leads into Section 1.2 where students begin adding integers (7.NS.1.b).
• In Chapter 3, Section 3.3 states that students have used the distributive property in previous courses. It adds, “They will extend their understanding to include algebraic expressions involving rational numbers. This property is very important to algebraic work in future courses”. In Chapter 3, Section 3.3, Exploration 1, students build upon their experience with the distributive property to include rational numbers. In Example 1, students apply the distributive property to simplify expressions.
• In Chapter 5, Section 5.2, the Preparing to Teach notes, explain the connection between students’ prior work with ratios (describing ratio relationships, completing tables), (6.RP.A), and the content in Section 5.2, stating, “In this lesson, they will extend their work with ratios to include fractions, making connections to their recent work with fractions.” In Section 5.1, students complete ratio tables, and write and interpret ratios, but now with fractions, forming a bridge to upcoming work of finding and using unit rates involving rational numbers (7.RP.1).
• In Chapter 6, Section 6.1, Preparing to Teach, notes state students “should know how to solve simple percent problems, and how to use ratio tables, Standard 6.RP.3.” The remainder of Chapter 6, “will build upon this understanding to write and solve percent proportions.” (7.RP.3)
• In the Resources by Chapter book, each chapter has a few questions that are named as “Prerequisite Skills Practice”. The intent is for practice from prior knowledge. There is no mention of previous grade knowledge or previous lesson knowledge.
### Indicator 1f
Materials foster coherence through connections at a single grade, where appropriate and required by the Standards i. Materials include learning objectives that are visibly shaped by CCSSM cluster headings. ii. Materials include problems and activities that serve to connect two or more clusters in a domain, or two or more domains in a grade, in cases where these connections are natural and important.
2/2
Indicator Rating Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 meet expectations that materials foster coherence through connections at a single grade, where appropriate and required by the standards.
Materials include learning objectives that are visibly shaped by CCSSM cluster headings. Chapter headings indicate the learning targets for each section and are outlined at the beginning of each chapter in the Teacher Edition. Each chapter also begins with a table that identifies the standard that is taught in each section with an indication if the lesson is preparing students, if it completes the learning or if students are learning or extending learning. For example:
• In Chapter 5, Algebraic Expressions and Properties, 6.EE, “Apply and extend previous understandings of arithmetic to algebraic expressions,” is directly related to the Chapter 5 learning goals of “Evaluate algebraic expressions given values of their variables (Section 5.1), Write algebraic expressions and solve problems involving algebraic expressions (Section 5.2), Identify equivalent expressions and apply properties to generate equivalent expressions (Section 5.3), Identify equivalent expressions and apply properties to generate equivalent expressions (Section 5.4), and Factor numerical and algebraic expressions (Section 5.5).”
Materials consistently include problems and activities that connect two or more clusters in a domain or two or more domains in a grade, in cases where these connections are natural and important. Multiple examples of tasks connecting standards within and across clusters and domains are present. These connections build deeper understanding of grade-level concepts and the natural connections which exist in mathematics. For example:
• In Chapter 3, students engage simultaneously in Standards 7.NS.A and 7.EE.A, as they simplify, add, subtract, factor and expand linear expressions involving positive and negative number coefficients. For example, in Section 3.1, Try It, Problem 9, students simplify 2s - 9s + 8t - t. In Section 3.3, Try It, Problem 5, students use the distributive property to simplify the expression -3/2 (a - 4 - 2a).
• In Chapter 4, students use operations with integers, Cluster 7.NS.A, to solve problems using numerical and algebraic expressions and equations, Cluster 7.EE.B.
• In Chapter 5, Domain 7.RP, ratios and proportional relationships, connects with computations with rational numbers (7.NS) as students explore rates and unit rates. For example, in Section 5.6, students analyze proportional relationships and use them to solve real-world problems.
• In Chapter 6, the problems and activities provide connections between the skills and understandings of Cluster 7.EE.B and those of Cluster 7.RP.A as students write proportions and equations to represent and solve percent problems, and write equations to solve problems involving discounts and markups. In Section 6.3, Practice, Problem 23, students write and solve an equation to determine the percent of sales tax on a model rocket costing $24 with a sales tax of $1.92.
• In Chapter 8, Section 8.4, students use random sampling to draw inferences about a population, connecting 7.SP.A with drawing informal comparative inferences about two populations, 7.SP.B.
## Rigor & Mathematical Practices
#### Partially Meets Expectations
+
-
Gateway Two Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 partially meet the expectations for rigor and mathematical practices. The materials partially meet the expectations for rigor by reflecting the balances in the Standards and giving appropriate attention to procedural skill and fluency. The materials partially meet the expectations for practice-content connections: they identify the Standards for Mathematical Practice and attend to the specialized language of mathematics, but they do not attend to the full intent of each practice standard.
### Criterion 2a - 2d
Rigor and Balance: Each grade's instructional materials reflect the balances in the Standards and help students meet the Standards' rigorous expectations, by helping students develop conceptual understanding, procedural skill and fluency, and application.
5/8
+
-
Criterion Rating Details
The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 partially meet the expectations for rigor and balance. The instructional materials give appropriate attention to procedural skill and fluency, but only partially give appropriate attention to conceptual understanding and application, due to the lack of opportunities for students to fully engage in the work. The materials only partially address the three aspects of rigor with balance, often treating them together with an over-emphasis on procedural skill and fluency. Overall, the instructional materials partially help students meet rigorous expectations by developing conceptual understanding, procedural skill and fluency, and application.
### Indicator 2a
Attention to conceptual understanding: Materials develop conceptual understanding of key mathematical concepts, especially where called for in specific content standards or cluster headings.
1/2
+
-
Indicator Rating Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 partially meet expectations that the materials develop conceptual understanding of key mathematical concepts, especially where called for in specific standards or cluster headings. The instructional materials do not always provide students opportunities to independently demonstrate conceptual understanding throughout the grade-level.
Each lesson begins with an Exploration section where students develop conceptual understanding of key mathematical concepts through teacher-led activities. For example:
• In Chapter 1, Section 2, Exploration 1 (7.NS.1.d), students are taught to add integers using integer chips and number lines. “Write an addition expression represented by the number line. Then find the sum.” After these examples, students are asked to use conceptual strategies (number lines or chips).
• In Chapter 3, Lesson 2, Exploration 1, students use algebra tiles to model a sum of terms equal to zero and simplify expressions. In the Concepts, Skills and Problem Solving section, students have two additional problems where they use algebra tiles to simplify expressions. (7.EE.1)
• Chapter 1, Section 4, “Subtracting Integers,” Exploration 1 asks students to work with partners and use integer counters to find the differences and sums of several problems with two different representations. For example, “4 - 2” and “4 + (-2)”; “-3 - 1” and “-3 + (-1)” and “13 - 1”. Student pairs are asked to generate a rule for subtracting integers. Students who can’t generate a rule are prompted to use a number line. After working independently students share their rule with a partner and discuss any discrepancies. (7.NS.1)
• Chapter 4, Section 1 “Solving Equations Using Addition or Subtraction” Exploration 1, students are asked, “Write the four equations modeled by the algebra tiles. Explain how you can use algebra tiles to solve each equation.” (7.EE.3)
The instructional materials do not always provide students opportunities to independently demonstrate conceptual understanding throughout the grade-level. The shift from conceptual understanding, most prevalent in the Exploration Section, to procedural understanding occurs within the lesson. The Examples and “Concepts, Skills, and Problem Solving” sections have a focus that is primarily procedural with limited opportunities to demonstrate conceptual understanding. For example:
• In Chapter 3, Section 2, only Problems 8 and 9 ask students to demonstrate conceptual understanding. In contrast, Problems 10-17 ask students to “Find the Sum.” Problem 10: “(n+8) + (n-12)”; Problem 16: “(6-2.7h) + (-1.3j-4).” Problems 19-26 ask students to “Find the difference.” Problem 19: “(-2g+7) - (g+11)”; Problem 26: “(1-5q) - (2.5s+8) - (0.5q+6)”. (7.EE.1)
• In Chapter 2, Section 2, Concepts, Skills & Problem Solving, the majority of the questions require procedural knowledge and do not ask students to demonstrate conceptual understanding. For example, Problems 13-28 ask students to “Find the quotient, if possible”, such as Problem 16: “-18 ÷ (-3)"; and Problem 22: “-49 ÷ (-7)”. (7.NS.1)
### Indicator 2b
Attention to Procedural Skill and Fluency: Materials give attention throughout the year to individual standards that set an expectation of procedural skill and fluency.
2/2
+
-
Indicator Rating Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 meet expectations that they attend to those standards that set an expectation of procedural skill. The instructional materials attend to operations with rational numbers (7.NS.A), using the properties of operations to generate equivalent expressions (7.EE.1), and solving real-life and mathematical problems using numerical and algebraic expressions (7.EE.B). For example:
• In Chapter 1, Lesson 5, students subtract rational numbers. Examples 1-3 provide step-by-step explanations of the procedural skill of subtracting rational numbers. In the Concept, Skills, and Problem Solving section, students have many opportunities to demonstrate their skill of subtracting rational numbers. (7.NS.1)
• In Chapter 2, Lesson 1, students multiply rational numbers. Examples 1-3 provide step-by-step explanations of the procedural skill of multiplying rational numbers. In the Concept, Skills, and Problem Solving section, students have many opportunities to demonstrate their skill of multiplying rational numbers. (7.NS.2)
• In Chapter 3, Lesson 4, students factor expressions. Examples 1-3 provide step-by-step explanations of the procedural skill of factoring an expression. In the Concept, Skills, and Problem Solving section, students have many opportunities to demonstrate their skill of factoring an expression. (7.EE.1)
• In Chapter 4, Lesson 1, students solve equations using addition and subtraction. Examples 1-3 provide step-by-step explanations of the procedural skill of solving an equation using addition and subtraction. In the Concept, Skills, and Problem Solving section, students have many opportunities to demonstrate their skill of solving an equation. (7.EE.4.a)
In each lesson there is a “Review & Refresh” section, which provides additional practice for skills previously taught. Within these sections are further opportunities to practice the procedural skills. For example:
• In Chapter 2, Lesson 2, there are four problems requiring multiplication of rational numbers. For example: “Problem 1: 8 x 10; Problem 2: -6(9); Problem 3: 4(7); Problem 4: -9(-8)”. (7.NS.2)
• In Chapter 3, Lesson 4, there are three problems requiring simplifying expressions. For example: “Problem 1: 8(k-5); Problem 2: -4.5(-6+2d); Problem 3: -1/4(3g-6-5g)”. (7.EE.1)
• In Chapter 4, Lesson 1, there are four problems asking students to factor out the coefficient of the variable term. For example: "Problem 1: 4x-20; Problem 2: -6y-18; Problem 3: -2/5w + 4/5; Problem 4: 0.75z - 6.75”. (7.EE.4.a)
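To illustrate what these “Review & Refresh” problems ask for (an illustrative aside, not text from the program), factoring out the coefficient of the variable term in Problem 1 gives 4x - 20 = 4(x - 5), and in Problem 2 gives -6y - 18 = -6(y + 3).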
In addition to the Student Print Edition, Big Ideas Math: Modeling Real Life Grade 7 has a technology package called Dynamic Classroom. The Dynamic Student Edition includes a middle school game library where students can practice fluency and procedures. The game library is not specific to any one grade in grades 6-8, so teachers and students may select the skill they wish to address. Some of the activities are played on the computer. For example, the game “Tic Tac Toe” allows up to two players to practice solving one-step, two-step, or multi-step equations. The game “M, M & M” allows up to two players to practice mean, median, and mode. There are also non-computer games within the game library that are printed and played by students. For example, “It’s All About the Details” is a game that reinforces details about shapes and is played with geometry game cards that are included and prepared by the teacher. In addition to the game library, the Dynamic Student Edition includes videos that explain procedures and can be accessed through the bigideasmath.com website.
### Indicator 2c
Attention to Applications: Materials are designed so that teachers and students spend sufficient time working with engaging applications of the mathematics, without losing focus on the major work of each grade
1/2
+
-
Indicator Rating Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 partially meet expectations that the materials are designed so that teachers and students spend sufficient time working with engaging applications of mathematics.
The instructional materials present opportunities for students to engage in application of grade-level mathematics; however, the problems are scaffolded through teacher-led questions and procedural explanation. The last example of each lesson is titled, “Modeling Real Life,” which provides a real-life problem involving the key standards addressed for each lesson. This section provides a step-by-step solution for the problem; therefore, students do not fully engage in application. For example:
• Chapter 5, Lesson 1, Example 3, Modeling Real Life, “You mix 1/2 cup of yellow paint for every 3/4 cup of blue paint to make 15 cups of green paint. How much yellow paint do you use?” Students are given two methods for solving the question, both of which are explained and answered for them. For example, “Method 1: The ratio of yellow paint to blue paint is 1/2 to 3/4. Use a ratio table to find an equivalent ratio in which the total amount of yellow paint and blue paint is 15 cups.” [A completed ratio table with an annotated description of how it was filled out is included.] “Method 2: You can use the ratio of yellow paint to blue paint to find the fraction of the green paint that is made from yellow paint. You use 1/2 cup of yellow paint for every 3/4 cup of blue paint, so the fraction of the green paint that is made from yellow paint is 2/5 [included equation and solution]. So, you use 2/5 ⋅ 15 = 6 cups of yellow paint.” (7.RP.1)
• Chapter 1, Lesson 1, Example 3, Modeling Real Life, “A moon has an ocean underneath its icy surface. Scientists run tests above and below the surface. [Table Provided] The table shows the elevations of each test. Which test is deepest? Which test is closest to the surface?” The explanation from this point provides students with step-by-step directions on how to solve the problem. “To determine which test is deepest, find the least elevation. Graph the elevations on a vertical number line. [Vertical line provided.] The number line shows that the salinity test is deepest. The number line also shows that the atmosphere test and the ice test are closest to the surface. To determine which is closer to the surface, identify which elevation has a lesser absolute value. Atmosphere: ∣0.3∣ = 0.3 Ice: ∣−0.25∣ = 0.25 So, the salinity test is deepest and the ice test is closest to the surface.” (7.NS.1)
Throughout the series, there are examples of routine application problems that require both single and multi-step processes; however, there are limited opportunities to engage in non-routine problems. For example:
• Chapter 2, Lesson 1, Problem 17, “On a mountain, the temperature decreases by 18°F for each 5000-foot increase in elevation. At 7000 feet, the temperature is 41°F. What is the temperature at 22,000 feet? Justify your answer.” (7.NS.3, multi-step, routine; a brief worked check appears after this list)
• Chapter 3, Lesson 4, Problem 41, Dig Deeper, “A square fire pit with a side length of s feet is bordered by 1-foot square stones as shown. [Diagram provided] a. How many stones does it take to border the fire pit with two rows of stones? Use a diagram to justify your answer.” (routine) "b. You border the fire pit with n rows of stones. How many stones are in the nth row? Explain your reasoning.” (non-routine) (7.EE.3)
• Chapter 6, Lesson 3, Problem 32, Dig Deeper, “At a restaurant, the amount of your bill before taxes and tip is $19.83. A 6% sales tax is applied to your bill, and you leave a tip equal to 19% of the original amount. Use mental math to estimate the total amount of money you pay. Explain your reasoning. (Hint: Use 10% of the original amount.)” (7.RP.3, routine)
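For reference, a quick working of the two routine problems cited above (an editorial check of the arithmetic, not text quoted from the program): in the temperature problem, 22,000 − 7,000 = 15,000 feet is three 5,000-foot increases, so the temperature drops 3 × 18°F = 54°F, giving 41°F − 54°F = −13°F. In the restaurant problem, following the 10% hint, 10% of $19.83 is about $2, so the 6% tax is roughly $1.20, the 19% tip is roughly $3.80, and the estimated total is about $19.83 + $5 ≈ $25.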
### Indicator 2d
Balance: The three aspects of rigor are not always treated together and are not always treated separately. There is a balance of the 3 aspects of rigor within the grade.
1/2
+
-
Indicator Rating Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 partially meet expectations that the three aspects of rigor are not always treated together and are not always treated separately.
The instructional materials present opportunities in most lessons for students to engage in each aspect of rigor; however, these aspects are often treated together, and there is an over-emphasis on procedural skill and fluency. For example:
• In Chapter 4, Lesson 3, Solving Two-Step Equations, students begin with an Exploration example that uses algebra tiles to show the steps for solving an equation and the relationship to the properties of equality. These examples show the conceptual solving of an equation through models. The lesson then shifts to the procedural steps of solving two-step equations with Example 1: “-3x + 5 = 2” and Example 2: “x/8 - 1/2 = -7/2”. Example 3 is a procedural example of solving two-step equations by combining like terms: “3y - 8y = 25”. The lesson progresses to independent application of the skill in Concepts, Skills, and Problem Solving, where students solve equations procedurally.
• Chapter 6, Lesson 1, Fractions, Decimals and Percents, students begin the lesson with an Exploration activity where they compare numbers in different forms using a variety of strategies. Example 1 presents a conceptual model of a decimal using a hundredths grid and shows how to convert a decimal to a percent. Example 2 shows students how to procedurally build on what they have learned to convert a fraction to a decimal to a percent using division. The lesson then moves to independent practice in Concepts, Skills, and Problem Solving, where students procedurally convert between decimals, percents, and fractions.
• Chapter 7, Lesson 2, Experimental and Theoretical Probability, students’ learning begins with an Exploration activity in which students conduct two experiments to find relative frequencies (Flip a Quarter and Toss a Thumbtack) to understand the concept behind probability. The lesson moves on to Example 1, Finding an Experimental Probability, by utilizing the formula $$P(\text{event}) = \frac{\text{number of times the event occurs}}{\text{total number of trials}},$$ and Example 2, Finding a Theoretical Probability, by utilizing the formula $$P(\text{event}) = \frac{\text{number of favorable outcomes}}{\text{number of possible outcomes}}.$$ Example 3 shows the steps for applying each formula to compare probabilities: “The bar graph shows the results of rolling a number cube 300 times. How does the experimental probability of rolling an odd number compare with the theoretical probability?” The independent practice in Concepts, Skills, and Problem Solving has students find an experimental probability and a theoretical probability based on an event. (A brief worked illustration of these formulas follows this list.)
• Chapter 9, Lesson 1, Circles and Circumference, begins with Exploration 1, where students use a compass to draw circles and conceptually examine the lengths of the diameter and circumference. Exploration 2 continues to explore diameter and circumference through hands-on modeling. The lesson continues with three examples showing the steps of applying the formulas for finding radius, circumference, and perimeter. The independent work of the students is within Concepts, Skills, and Problem Solving, in which students are asked to procedurally solve for the radius, diameter, circumference, and perimeter.
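To make the two probability formulas quoted above concrete (an illustrative aside, not text from the program): for the event “roll an odd number” on a standard number cube, the favorable outcomes are 1, 3, and 5 out of six possible outcomes, so the theoretical probability is $$P(\text{odd}) = \frac{3}{6} = \frac{1}{2}.$$ If, hypothetically, 165 of the 300 recorded rolls in Example 3 were odd, the experimental probability would be $\frac{165}{300} = 0.55$, slightly above the theoretical value of 0.5.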
### Criterion 2e - 2g.iii
Practice-Content Connections: Materials meaningfully connect the Standards for Mathematical Content and the Standards for Mathematical Practice
6/10
+
-
Criterion Rating Details
The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 partially meet the expectations for practice-content connections. The materials identify the practice standards and explicitly attend to the specialized language of mathematics. However, the materials do not attend to the full meaning of each practice standard.
### Indicator 2e
The Standards for Mathematical Practice are identified and used to enrich mathematics content within and throughout each applicable grade.
2/2
+
-
Indicator Rating Details
The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 meet expectations for identifying the Mathematical Practices (MPs) and using them to enrich the mathematical content.
The Standards for Mathematical Practice (MP) are identified in the digital Teacher's Edition on page vi. The guidance for teachers includes the title of the MP, how each MP helps students, where in the materials the MP can be found, and how it correlates to the student materials using capitalized terms. For example, MP2 states, "Reason abstractly and quantitatively.
• "Visual problem-solving models help students create a coherent representation of the problem.
• Explore and Grows allow students to investigate concepts to understand the REASONING behind the rules.
• Exercises encourage students to apply NUMBER SENSE and explain and justify their REASONING."
The MPs are explicitly identified in Laurie’s Notes in each lesson, and are connected to grade-level problems within the lesson. For example:
• Chapter 1, Lesson 4, Subtracting Rational Numbers, Exploration 1 (MP2), students work with a partner in answering the following questions: “a. Choose a unit fraction to represent the space between the tick marks on each number line. What expressions involving subtraction are being modeled? What are the differences? b. Do the rules for subtracting integers apply to all rational numbers? Explain your reasoning. You have used the commutative and associative properties to add integers. Do these properties apply in expressions involving subtraction? Explain your reasoning.” MP2 is identified in the teaching notes, “The number line helps students see that the rules for subtracting rational numbers shouldn’t be different from the rules for subtracting integers.”
• Chapter 8, Lesson 1, Samples and Populations, Example 2 (MP3), students are given the scenario, “You want to know how the residents of your town feel about adding a new landfill. Determine whether each conclusion is valid.” Students are provided with information about the survey. MP3 is identified in the teaching notes, “Ask a volunteer to read part (a). Then ask whether the conclusion is valid. Students should recognize that the sample is biased because the survey was not random—you only surveyed nearby residents. Ask a volunteer to read part (b). Then ask whether the conclusion is valid. Students should recognize that the sample is random and large enough to provide accurate data, so it is an unbiased sample.”
• Chapter 5, Lesson 4, Writing and Solving Proportions, Example 3 (MP1), students are provided with two examples of solving proportions using cross products. MP1 is identified in the teaching notes, “As you work through the problems with students, share with them the wisdom of analyzing the problem first to decide what method makes the most sense.”
The MPs are identified in the digital Student Dashboard under Student Resources, Standards for Mathematical Practice. This link takes you to the same information found in the Teacher Edition. For example:
• Chapter 9, Lesson 1, Circles and Circumference, Exploration 2 - Exploring Diameter and Circumference, students work with a partner and find the circumference and diameter of a circular base. They determine whether the circumference or diameter is greater and by how much. “Math Practice - Calculate Accurately,” students are asked, “What other methods can you use to calculate the circumference of a circle? Which methods are more accurate?”
• Chapter 6, Lesson 1, Fractions, Decimals, and Percents, Concepts, Skills & Problem Solving, Problem 39, “MP Problem Solving", “The table shows the portion of students in each grade that participate in School Spirit Week. Order the grades by portion of participation from least to greatest.”
• Chapter 2, Lesson 4, Multiplying Rational Numbers, Concept Skills, & Problem Solving, Problems 10-12. “MP Reasoning”, “Without multiplying, tell whether the value of the expression is positive or negative. Explain your reasoning.”
MP7 and MP8 are under-identified in the series; both are identified in only four of the ten chapters.
### Indicator 2f
Materials carefully attend to the full meaning of each practice standard
0/2
+
-
Indicator Rating Details
The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 do not meet expectations that the instructional materials carefully attend to the full meaning of each practice standard. The materials do not attend to the full meaning of three or more Mathematical Practices.
The instructional materials do not present opportunities for students to engage in MP1: Make Sense of Problems and Persevere in Solving Them, MP4: Model with mathematics, and MP5: Use appropriate tools strategically.
MP1: The instructional materials present few opportunities for students to make sense of problems and persevere in solving them. For example:
• Chapter 2, Lesson 3, Laurie’s Notes, Example 1, “Mathematically proficient students are able to plan a solution. Choosing between methods may help students be more efficient and accurate when writing fractions as decimals. Complete part (a) as a class. The first step is to write the mixed number as an equivalent improper fraction. Then divide the numerator by the denominator. Point out that the negative sign is simply placed in the answer after the calculations are complete. Discuss the Another Method note with students. Point out that to find an equivalent fraction with a denominator that is a power of 10, you multiply the numerator and denominator by powers of 2 or 5. This is not possible for repeating decimals. Complete part (b) as a class. Remind students to always divide the numerator by the denominator, regardless of the size of the numbers!” In Example 1, the solution is provided for students and therefore they do not have to persevere in solving the problem.
MP4: The instructional materials present few opportunities for students to model with mathematics. For example:
• Chapter 5, Lesson 5, Laurie’s Notes, Example 3, “Ask students to explain why the graph represents a ratio relationship and to identify the unit rate. Plotting the ordered pairs confirms that x and y are proportional. ‘What is the constant of proportionality?’ 16. ‘What is the equation of the line?’ y = 16x. Students can use the equation to find the area cleaned for any amount of time.” Students are analyzing a given model, not using a model to solve a problem.
• Chapter 7, Lesson 3, Laurie’s Notes, Example 1, “The tree diagram helps students visualize the 8 outcomes in the sample space.” Students are provided with a worked out example, and do not create a tree diagram as a way to model a problem independently.
MP5: While the Dynamic Student Edition includes tools for students, the instructional materials present few opportunities for students to choose their own tools; therefore, the full meaning of MP5 is not attended to. For example:
• Chapter 8, Lesson 2, Laurie’s Notes, Example 2, “Students can use calculators to quickly find the mean of each sample.” Teachers direct students to use calculators.
• Chapter 7, Lesson 2, Laurie’s Notes, Exploration 1, “Combine the results for each experiment. As the data are gathered and recorded, several students with calculators can summarize the results.” Students are not selecting their own tool in this example.
### Indicator 2g
Emphasis on Mathematical Reasoning: Materials support the Standards' emphasis on mathematical reasoning by:
### Indicator 2g.i
Materials prompt students to construct viable arguments and analyze the arguments of others concerning key grade-level mathematics detailed in the content standards.
1/2
+
-
Indicator Rating Details
The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 partially meet expectations that the instructional materials prompt students to construct viable arguments and analyze the arguments of others concerning key grade-level mathematics.
“You Be the Teacher”, found in many lessons, presents opportunities for students to critique the reasoning of others and construct arguments. Examples of where students engage in the full intent of MP3 include the following:
• Chapter 4, Lesson 2, Problem 28, You Be the Teacher, “Your friend solves the equation -4.2x=21. Is your friend correct? Explain your reasoning.” The student work is provided to examine.
• Chapter 6, Lesson 1, Problem 20, You Be the Teacher, “Your friend uses the percent proportion to answer the question below. Is your friend correct? Explain your reasoning. ‘40% of what number is 34?’” The student work is provided to examine.
The Student Edition labels MP3 as “MP Construct Arguments”; however, these activities do not always require students to construct arguments. In the Student Edition, “Construct Arguments” is labeled only once for students, and “Build Arguments” is labeled once. For example:
• Chapter 2, Lesson 1, Construct Arguments, students construct viable arguments by writing general rules for multiplying (i) two integers with the same sign and (ii) two integers with different signs. Students are prompted to “Construct an argument that you can use to convince a friend of the rules you wrote in Exploration 1(c).”
• Chapter 8, Lesson 4, Exploration 1, Build Arguments is identified in the Math Practice blue box with the following question, “How does taking multiple random samples allow you to make conclusions about two populations?”
### Indicator 2g.ii
Materials assist teachers in engaging students in constructing viable arguments and analyzing the arguments of others concerning key grade-level mathematics detailed in the content standards.
1/2
+
-
Indicator Rating Details
The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 partially meet expectations that the instructional materials assist teachers in engaging students to construct viable arguments and analyze the arguments of others concerning key grade-level mathematics.
There are some missed opportunities where the materials could assist teachers in engaging students in both constructing viable arguments and analyzing the arguments of others. For example:
• In Chapter 1, Lesson 4, Subtracting Integers, students are shown an example of subtracting integers. In Laurie’s notes, teachers are prompted, “Ask students if it is possible to determine when the difference of two negative numbers will be positive and when the difference of two negative numbers will be negative.”
• In Chapter 5, Lesson 2, Example 1, students find a unit rate based on given information. In Laurie’s notes, teachers are prompted, “There are several ways in which students may explain their reasoning. Take time to hear a variety of approaches.” This is labeled as MP3, but there is no support for teachers to assist students in constructing a viable argument or critiquing the thoughts of others.
• Chapter 1, Lesson 2, Example 2, the Teacher’s Guide is annotated with MP3 and the following directions: “‘When you add two integers with different signs, how do you know if the sum is positive or negative?’ Students answered a similar question in Example 1, but now they should be using the concept of absolute value, even if they don’t use the precise language. You want to hear something about the size of the number, meaning its absolute value.” There is no reference to MP3 in the Student Edition in this lesson.
### Indicator 2g.iii
Materials explicitly attend to the specialized language of mathematics.
2/2
+
-
Indicator Rating Details
The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 meet expectations that materials use precise and accurate mathematical terminology and definitions when describing mathematics and the materials support students to use precise mathematical language.
• The materials attend to the vocabulary at the beginning of each chapter in the Getting Ready section. For example, in the Getting Ready section for Chapter 3, students read, “The following vocabulary terms (like terms, linear expression, factoring an expression) are defined in this chapter. Think about what each term might mean and record your thoughts.” In Laurie’s Notes for the chapter, teachers are provided with the following notes regarding the vocabulary: “A. These terms represent some of the vocabulary that students will encounter in Chapter 3. Discuss the terms as a class. B. Where have students heard the word like terms outside of a math classroom? In what contexts? Students may not be able to write the actual definition, but they may write phrases associated with like terms. C. Allowing students to discuss these terms now will prepare them for understanding the terms as they are presented in the chapter. D. When students encounter a new definition, encourage them to write in their Student Journals. They will revisit these definitions during the Chapter Review.”
• Key vocabulary for a section is noted in a box in the margins of the student textbook, along with a list of pages where the students will encounter the vocabulary. Vocabulary also appears in some of the Key Ideas boxes. For example, in Chapter 6, Lesson 4, the Key Idea box contains the definition for percent of change, percent of increase, and percent of decrease with an equation of how to find each.
• Each chapter has a review section that includes a list of vocabulary important to the unit and the page number the students will find the terms. For example, in Chapter 4, Review, teachers are given the prompt: “As a review of the chapter vocabulary, have students revisit the vocabulary section in their Student Journals to fill in any missing definitions and record examples of each term.” In the Student Edition, the terms and page number are provided and students are asked to “Write the definition and give an example of each vocabulary term.” Additionally, there is a Graphic Organizer Section where students need to create a “Summary Triangle” for each concept.
The materials provide explicit instruction in how to communicate mathematical thinking using words, diagrams, and symbols. For example:
• In Chapter 4, Laurie’s Notes for the Chapter Overview state, “Be sure to use precise language when discussing multiplying or dividing an inequality by a negative quantity. Use language such as, ‘The direction of the inequality symbol must be reversed.’ Simply saying, ‘switch the sign,’ is not precise.”
• In Chapter 7, Chapter Exploration includes a list of vocabulary words related to probability. Laurie’s Notes (page T-282) guides teachers to have students use contextual clues and record notes and definitions related to the mathematical terms throughout the chapter.
• In Chapter 9, Section 9.4, Laurie’s Notes, “Motivate” guides teachers to play a game that helps students remember vocabulary terms and their meanings relating to triangles.
• In Chapter 2, Lesson 1, Laurie’s Notes remind teachers that students should say, “Negative 5 times negative 6 equals 30.” Teachers are advised to respond to students who say “minus 5” by reminding them that minus represents an operation.
• In Chapter 8, Lesson 1, Laurie’s Notes, teachers are asked to discuss the following, “Define unbiased sample and biased sample. Give a few examples of each. Then ask students to write the definitions in their own words and share an example of each type of sample. The size of a sample can have a great influence on the results. A sample that is not large enough may not be unbiased and a sample that is too large may be too cumbersome to use. As a rule of thumb, a sample of 30 is usually large enough to provide accurate data for modest population sizes.”
• In Chapter 7, Lesson 1, Laurie’s Notes, teachers are asked to “Discuss the vocabulary words: experiment, outcomes, event, and favorable outcomes. You can relate the vocabulary to the exploration and to rolling two number cubes. ‘What does it mean to perform an experiment at random?’ All of the possible outcomes are equally likely. Ask students to identify the favorable outcomes for the events of choosing each color of marble. green (2), blue (1), red (1), yellow (1), purple (1) Be sure students understand that there can be more than one favorable outcome. ‘What are some other examples of experiments and events? What are the favorable outcomes for these events?’ Sample answer: An experiment is rolling a number cube with the numbers 1–6. An event is rolling a number greater than 4, with favorable outcomes of 5 and 6.”
Overall, the materials accurately use numbers, symbols, graphs, and tables. The students are encouraged throughout the materials to use accurate mathematical terminology. The teaching guide reinforces the use of precise and accurate terminology.
## Usability
#### Not Rated
+
-
Gateway Three Details
This material was not reviewed for Gateway Three because it did not meet expectations for Gateways One and Two
### Criterion 3a - 3e
Use and design facilitate student learning: Materials are well designed and take into account effective lesson structure and pacing.
### Indicator 3a
The underlying design of the materials distinguishes between problems and exercises. In essence, the difference is that in solving problems, students learn new mathematics, whereas in working exercises, students apply what they have already learned to build mastery. Each problem or exercise has a purpose.
N/A
### Indicator 3b
Design of assignments is not haphazard: exercises are given in intentional sequences.
N/A
### Indicator 3c
There is variety in what students are asked to produce. For example, students are asked to produce answers and solutions, but also, in a grade-appropriate way, arguments and explanations, diagrams, mathematical models, etc.
N/A
### Indicator 3d
Manipulatives are faithful representations of the mathematical objects they represent and when appropriate are connected to written methods.
N/A
### Indicator 3e
The visual design (whether in print or online) is not distracting or chaotic, but supports students in engaging thoughtfully with the subject.
N/A
### Criterion 3f - 3l
Teacher Planning and Learning for Success with CCSS: Materials support teacher learning and understanding of the Standards.
### Indicator 3f
Materials support teachers in planning and providing effective learning experiences by providing quality questions to help guide students' mathematical development.
N/A
### Indicator 3g
Materials contain a teacher's edition with ample and useful annotations and suggestions on how to present the content in the student edition and in the ancillary materials. Where applicable, materials include teacher guidance for the use of embedded technology to support and enhance student learning.
N/A
### Indicator 3h
Materials contain a teacher's edition (in print or clearly distinguished/accessible as a teacher's edition in digital materials) that contains full, adult-level explanations and examples of the more advanced mathematics concepts in the lessons so that teachers can improve their own knowledge of the subject, as necessary.
N/A
### Indicator 3i
Materials contain a teacher's edition (in print or clearly distinguished/accessible as a teacher's edition in digital materials) that explains the role of the specific grade-level mathematics in the context of the overall mathematics curriculum for kindergarten through grade twelve.
N/A
### Indicator 3j
Materials provide a list of lessons in the teacher's edition (in print or clearly distinguished/accessible as a teacher's edition in digital materials), cross-referencing the standards covered and providing an estimated instructional time for each lesson, chapter and unit (i.e., pacing guide).
N/A
### Indicator 3k
Materials contain strategies for informing parents or caregivers about the mathematics program and suggestions for how they can help support student progress and achievement.
N/A
### Indicator 3l
Materials contain explanations of the instructional approaches of the program and identification of the research-based strategies.
N/A
### Criterion 3m - 3q
Assessment: Materials offer teachers resources and tools to collect ongoing data about student progress on the Standards.
### Indicator 3m
Materials provide strategies for gathering information about students' prior knowledge within and across grade levels.
N/A
### Indicator 3n
Materials provide strategies for teachers to identify and address common student errors and misconceptions.
N/A
### Indicator 3o
Materials provide opportunities for ongoing review and practice, with feedback, for students in learning both concepts and skills.
N/A
### Indicator 3p
Materials offer ongoing formative and summative assessments:
N/A
### Indicator 3p.i
Assessments clearly denote which standards are being emphasized.
N/A
### Indicator 3p.ii
Assessments include aligned rubrics and scoring guidelines that provide sufficient guidance to teachers for interpreting student performance and suggestions for follow-up.
N/A
### Indicator 3q
Materials encourage students to monitor their own progress.
N/A
### Criterion 3r - 3y
Differentiated instruction: Materials support teachers in differentiating instruction for diverse learners within and across grades.
### Indicator 3r
Materials provide strategies to help teachers sequence or scaffold lessons so that the content is accessible to all learners.
N/A
### Indicator 3s
Materials provide teachers with strategies for meeting the needs of a range of learners.
N/A
### Indicator 3t
Materials embed tasks with multiple entry-points that can be solved using a variety of solution strategies or representations.
N/A
### Indicator 3u
Materials suggest support, accommodations, and modifications for English Language Learners and other special populations that will support their regular and active participation in learning mathematics (e.g., modifying vocabulary words within word problems).
N/A
### Indicator 3v
Materials provide opportunities for advanced students to investigate mathematics content at greater depth.
N/A
### Indicator 3w
Materials provide a balanced portrayal of various demographic and personal characteristics.
N/A
### Indicator 3x
Materials provide opportunities for teachers to use a variety of grouping strategies.
N/A
### Indicator 3y
Materials encourage teachers to draw upon home language and culture to facilitate learning.
N/A
### Criterion 3aa - 3z
Effective technology use: Materials support effective use of technology to enhance student learning. Digital materials are accessible and available in multiple platforms.
### Indicator 3aa
Digital materials (either included as supplementary to a textbook or as part of a digital curriculum) are web-based and compatible with multiple internet browsers (e.g., Internet Explorer, Firefox, Google Chrome, etc.). In addition, materials are "platform neutral" (i.e., are compatible with multiple operating systems such as Windows and Apple and are not proprietary to any single platform) and allow the use of tablets and mobile devices.
N/A
### Indicator 3ab
Materials include opportunities to assess student mathematical understandings and knowledge of procedural skills using technology.
N/A
### Indicator 3ac
Materials can be easily customized for individual learners. i. Digital materials include opportunities for teachers to personalize learning for all students, using adaptive or other technological innovations. ii. Materials can be easily customized for local use. For example, materials may provide a range of lessons to draw from on a topic.
N/A
### Indicator 3ad
Materials include or reference technology that provides opportunities for teachers and/or students to collaborate with each other (e.g. websites, discussion groups, webinars, etc.).
N/A
### Indicator 3z
Materials integrate technology such as interactive tools, virtual manipulatives/objects, and/or dynamic mathematics software in ways that engage students in the Mathematical Practices.
N/A
Report Published Date: 2019/12/05
Report Edition: 2019
| Title | ISBN | Edition | Publisher | Year |
|---|---|---|---|---|
| BIG IDEAS MATH: MODELING REAL LIFE GRADE 7 STUDENT EDITION | 9781635989014 | | BIG IDEAS LEARNING, LLC | 2019 |
| BIG IDEAS MATH: MODELING REAL LIFE GRADE 7 TEACHER EDITION | 9781635989038 | | BIG IDEAS LEARNING, LLC | 2019 |
| BIG IDEAS MATH: MODELING REAL LIFE SKILLS REVIEW HANDBOOK | 9781642080155 | | BIG IDEAS LEARNING, LLC | 2019 |
| BIG IDEAS MATH: MODELING REAL LIFE GRADE 7 STUDENT JOURNAL | 9781642081251 | | BIG IDEAS LEARNING, LLC | 2019 |
| BIG IDEAS MATH: MODELING REAL LIFE GRADE 7 ASSESSMENT BOOK | 9781642081268 | | BIG IDEAS LEARNING, LLC | 2019 |
| BIG IDEAS MATH: MODELING REAL LIFE GRADE 7 RESOURCES BY CHAPTER | 9781642081275 | | BIG IDEAS LEARNING, LLC | 2019 |
| RICH MATH TASKS GRADES 6 TO 8 | 9781642083057 | | BIG IDEAS LEARNING, LLC | 2019 |
## Math K-8 Review Tool
The mathematics review criteria identify the indicators for high-quality instructional materials. The review criteria support a sequential review process that reflects the importance of alignment to the standards and then considers other high-quality attributes of curriculum, as recommended by educators.
For math, our review criteria evaluate materials based on:
• Focus and Coherence
• Rigor and Mathematical Practices
• Instructional Supports and Usability
The K-8 Evidence Guides complement the review criteria by elaborating on the details for each indicator, including the purpose of the indicator, information on how to collect evidence, guiding questions and discussion prompts, and scoring criteria.
The EdReports rubric supports a sequential review process through three gateways. These gateways reflect the importance of alignment to college- and career-ready standards and consider other attributes of high-quality curriculum, such as usability and design, as recommended by educators.
Materials must meet or partially meet expectations for the first set of indicators (gateway 1) to move to the other gateways.
Gateways 1 and 2 focus on questions of alignment to the standards. Are the instructional materials aligned to the standards? Are all standards present and treated with appropriate depth and quality required to support student learning?
Gateway 3 focuses on the question of usability. Are the instructional materials user-friendly for students and educators? Materials must be well designed to facilitate student learning and enhance a teacher’s ability to differentiate and build knowledge within the classroom.
In order to be reviewed and attain a rating for usability (Gateway 3), the instructional materials must first meet expectations for alignment (Gateways 1 and 2).
Alignment and usability ratings are assigned based on how materials score on a series of criteria and indicators with reviewers providing supporting evidence to determine and substantiate each point awarded.
For ELA and math, alignment ratings represent the degree to which materials meet expectations, partially meet expectations, or do not meet expectations for alignment to college- and career-ready standards, including that all standards are present and treated with the appropriate depth to support students in learning the skills and knowledge that they need to be ready for college and career.
For science, alignment ratings represent the degree to which materials meet expectations, partially meet expectations, or do not meet expectations for alignment to the Next Generation Science Standards, including that all standards are present and treated with the appropriate depth to support students in learning the skills and knowledge that they need to be ready for college and career.
For all content areas, usability ratings represent the degree to which materials meet expectations, partially meet expectations, or do not meet expectations for effective practices (as outlined in the evaluation tool) for use and design, teacher planning and learning, assessment, differentiated instruction, and effective technology use.
# Numerical Analysis: Using Forward Euler to approximate a system of Differential Equations
I'm given the following system of ODEs: $$\begin{bmatrix} x'(t) \\ y'(t) \end{bmatrix} = \begin{bmatrix} 7 & -1\\ -1 & 2 \end{bmatrix} \begin{bmatrix} x(t)\\ y(t) \end{bmatrix}$$ with an arbitrary initial condition of $\vec{x}(0) = \vec{x}_0$.
We are asked to solve the linear system using the Forward Euler Method using MATLAB and then plotting our solution.
Now, I am familiar with how to solve this IVP in closed form using the Fundamental Theorem of Linear Systems and Spectral Decomposition. However, we are asked to solve the problem numerically using Forward Euler.
• The Forward Euler Method is derived using the forward difference approximation. In this case since we are dealing with matrices and vectors:
\begin{eqnarray} \frac{\vec{x}_{n+1}-\vec{x}_n}{\Delta t} \approx A\vec{x}_n \end{eqnarray} which can be rewritten as \begin{eqnarray} \vec{x}_{n+1} = \vec{x}_n + \Delta t A\vec{x}_n \end{eqnarray}
In order to advance time steps, the second equation stated above is recursively applied as
\begin{eqnarray} \vec{x}_{1} &=& \vec{x}_0 + \Delta t A\vec{x}_0\\ \vec{x}_{2} &=& \vec{x}_1 + \Delta t A\vec{x}_1\\ \vec{x}_{3} &=& \vec{x}_2 + \Delta t A\vec{x}_2\\ &\vdots&\\ \vec{x}_{n} &=& \vec{x}_{n-1}+ \Delta t A\vec{x}_{n-1}\\ \end{eqnarray}
This is where I am stuck. I am not familiar with MATLAB, so I am not sure how to go about coding the problem to do what I want, especially since I don't have a set initial condition. (I should add that I do know how to create matrices and vectors and do some calculations in MATLAB, but nothing of this level... yet.)
I am familiar with the core basics of programming languages like C++ and Python, and I am roughly familiar with Mathematica; therefore, I am sure I will catch on right away. I can more or less guess that my algorithm will require defining my starting point, my $2\times 2$ matrix $A$, the number of iterations, my time step, and my starting and ending values for $t$. (The professor didn't specify these on the assignment, so I am just going to say an interval from $0$ to $10$ should do.) I would then apply a loop (most probably a for-loop) to recursively apply forward Euler $n$ times. I assume I would need to create a dynamic array to store all the values as I go along, so that I can eventually call them up when I decide to plot my data points.
I was wondering if someone could give me an idea of how to start the code.
Thank You for taking the time to read my post. I greatly appreciate any suggestions, comments, or feedback. Have a wonderful day.
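The post breaks off at the start of a loop (`for i=1:n`). A minimal sketch of how that loop might be completed in MATLAB is below; the initial condition, step size, and final time are arbitrary choices (marked in the comments), since the question leaves them open.

```matlab
% Forward Euler for x'(t) = A*x(t) -- a sketch; x0, dt, and T are arbitrary choices
A  = [7 -1; -1 2];        % coefficient matrix from the problem
x0 = [1; 1];              % arbitrary initial condition x(0)
T  = 10;                  % final time (interval [0, 10], as suggested above)
dt = 0.001;               % time step
n  = round(T/dt);         % number of steps

X = zeros(2, n+1);        % column k holds the approximation at t = (k-1)*dt
X(:,1) = x0;
t = (0:n)*dt;

for i = 1:n
    X(:,i+1) = X(:,i) + dt*A*X(:,i);   % x_{k+1} = x_k + dt*A*x_k
end

plot(t, X(1,:), t, X(2,:));
xlabel('t'); legend('x(t)', 'y(t)');
```

One thing to keep in mind: this particular $A$ has two positive eigenvalues, $(9 \pm \sqrt{29})/2$, so the true solutions grow roughly like $e^{7.2t}$ and the values near $t = 10$ are enormous; a shorter interval, or plotting the logarithm of the absolute values, makes the behaviour easier to see.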
# Pesin theory
An important branch of the theory of dynamical systems (cf. Dynamical system) and of smooth ergodic theory, with many applications to non-linear dynamics. The name is due to the landmark work of Ya.B. Pesin in the mid-1970s [a20], [a21], [a22]. Sometimes Pesin theory is also referred to as the theory of smooth dynamical systems with non-uniformly hyperbolic behaviour, or simply the theory of non-uniformly hyperbolic dynamical systems.
## Introduction.
One of the paradigms of dynamical systems is that the local instability of trajectories influences the global behaviour of the system, and paves the way to the existence of stochastic behaviour. Mathematically, instability of trajectories corresponds to some degree of hyperbolicity (cf. Hyperbolic set). The "strongest possible" kind of hyperbolicity occurs in the important class of Anosov systems (also called $Y$- systems, cf. $Y$- system) [a1]. These are only known to occur in certain manifolds. Moreover, there are several results of topological nature showing that certain manifolds cannot carry Anosov systems.
Pesin theory deals with a "weaker" kind of hyperbolicity, a much more common property that is believed to be "typical" : non-uniform hyperbolicity. Among the most important features due to hyperbolicity is the existence of invariant families of stable and unstable manifolds and their "absolute continuity" . The combination of hyperbolicity with non-trivial recurrence produces a rich and complicated orbit structure. The theory also describes the ergodic properties of smooth dynamical systems possessing an absolutely continuous invariant measure in terms of the Lyapunov exponents. One of the most striking consequences is the Pesin entropy formula, which expresses the metric entropy of the dynamical system in terms of its Lyapunov exponents.
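In the notation for Lyapunov exponents introduced below (the values $\chi _ {i} ( x )$ with multiplicities $k _ {i} ( x )$), the entropy formula can be stated as follows: for a $C ^ {1 + \alpha }$-diffeomorphism $f$ preserving a measure $\nu$ that is absolutely continuous with respect to the Riemannian volume,

$$h _ \nu ( f ) = \int\limits _ { M } \sum _ {i : \chi _ {i} ( x ) > 0 } k _ {i} ( x ) \chi _ {i} ( x ) d \nu ( x ) ,$$

where $h _ \nu ( f )$ denotes the metric entropy of $f$ with respect to $\nu$.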
## Non-uniform hyperbolicity.
Let $f : M \rightarrow M$ be a diffeomorphism of a compact manifold. It induces the discrete dynamical system (or cascade) composed of the powers $\{ {f ^ {n} } : {n \in \mathbf Z } \}$. Fix a Riemannian metric on $M$. The trajectory $\{ {f ^ {n} x } : {n \in \mathbf Z } \}$ of a point $x \in M$ is called non-uniformly hyperbolic if there are positive numbers $\lambda < 1 < \mu$ and splittings $T _ {f ^ {n} x } M = E ^ {u} ( f ^ {n} x ) \oplus E ^ {s} ( f ^ {n} x )$ for each $n \in \mathbf Z$, and if for all sufficiently small $\epsilon > 0$ there is a positive function $C _ \epsilon$ on the trajectory such that for every $k \in \mathbf Z$:
1) $C _ \epsilon ( f ^ {k} x ) \leq e ^ {\epsilon | k | } C _ \epsilon ( x )$;
2) $Df ^ {k} E ^ {u} ( x ) = E ^ {u} ( f ^ {k} x )$, $Df ^ {k} E ^ {s} ( x ) = E ^ {s} ( f ^ {k} x )$;
3) if $v \in E ^ {u} ( f ^ {k} x )$ and $m < 0$, then
$$\left \| {Df ^ {m} v } \right \| \leq C _ \epsilon ( f ^ {m + k } x ) \mu ^ {m} \left \| v \right \| ;$$
4) if $v \in E ^ {s} ( f ^ {k} x )$ and $m > 0$, then
$$\left \| {Df ^ {m} v } \right \| \leq C _ \epsilon ( f ^ {m + k } x ) \lambda ^ {m} \left \| v \right \| ;$$
5) ${ \mathop{\rm angle} } ( E ^ {u} ( f ^ {k} x ) ,E ^ {s} ( f ^ {k} x ) ) \geq C _ \epsilon ( f ^ {k} x ) ^ {- 1 }$.
(The indices "s" and "u" refer, respectively, to "stable" and "unstable" .) The definition of non-uniformly partially hyperbolic trajectory is obtained by replacing the inequality $\lambda < 1 < \mu$ by the weaker requirement that $\lambda < \mu$ and $\min \{ \lambda, \mu ^ {- 1 } \} < 1$.
If $\lambda < 1 < \mu$( respectively, $\lambda < \mu$ and $\min \{ \lambda, \mu ^ {- 1 } \} < 1$) and the conditions 1)–5) hold for $\epsilon = 0$( i.e., if one can choose $C _ \epsilon = \textrm{ const }$), the trajectory is called uniformly hyperbolic (respectively, uniformly partially hyperbolic).
The term "non-uniformly" means that the estimates in 3) and 4) may differ from the "uniform" estimates $\mu ^ {m}$ and $\lambda ^ {m}$ by at most slowly increasing terms along the trajectory, as in 1) (in the sense that the exponential rate $\epsilon$ in 1) is small in comparison to the number ${ \mathop{\rm log} } \mu, - { \mathop{\rm log} } \lambda$); the term "partially" means that the hyperbolicity may hold only for a part of the tangent space.
One can similarly define the corresponding notions for a flow (continuous-time dynamical system) with $k \in \mathbf Z$ replaced by $k \in \mathbf R$, and the splitting of the tangent spaces replaced by $T _ {x} M = E ^ {u} ( x ) \oplus E ^ {s} ( x ) \oplus X ( x )$, where $X ( x )$ is the one-dimensional subspace generated by the flow direction.
## Stable and unstable manifolds.
Let $\{ {f ^ {n} x } : {n \in \mathbf Z } \}$ be a non-uniformly partially hyperbolic trajectory of a $C ^ {1 + \alpha }$- diffeomorphism ( $\alpha > 0$). Assume that $\lambda < 1$. Then there is a local stable manifold $V ^ {s} ( x )$ such that $x \in V ^ {s} ( x )$, $T _ {x} V ^ {s} ( x ) = E ^ {s} ( x )$, and for every $y \in V ^ {s} ( x )$, $k \in \mathbf Z$, and $m > 0$,
$$d ( f ^ {m + k } x,f ^ {m + k } y ) \leq KC _ \epsilon ( f ^ {k} x ) ^ {2} \lambda ^ {m} e ^ {\epsilon m } d ( f ^ {k} x,f ^ {k} y ) ,$$
where $d$ is the distance induced by the Riemannian metric and $K$ is a positive constant. The size $r ( x )$ of $V ^ {s} ( x )$ can be chosen in such a way that $r ( f ^ {k} x ) \geq K ^ \prime e ^ {- \epsilon | k | } r ( x )$ for every $k \in \mathbf Z$, where $K ^ \prime$ is a positive constant. If $f \in C ^ {r + \alpha }$( $\alpha > 0$), then $V ^ {s} ( x )$ is of class $C ^ {r}$.
The global stable manifold of $f$ at $x$ is defined by $W ^ {s} ( x ) = \cup _ {k \in \mathbf Z } f ^ {- k } ( V ^ {s} ( f ^ {k} x ) )$; it is an immersed manifold with the same smoothness class as $V ^ {s} ( x )$. One has $W ^ {s} ( x ) \cap W ^ {s} ( y ) = \emptyset$ if $y \notin W ^ {s} ( x )$, $W ^ {s} ( x ) = W ^ {s} ( y )$ if $y \in W ^ {s} ( x )$, and $f ^ {n} W ^ {s} ( x ) = W ^ {s} ( f ^ {n} x )$ for every $n \in \mathbf Z$. The manifold $W ^ {s} ( x )$ is independent of the particular size of the local stable manifolds $V ^ {s} ( y )$.
Similarly, when $\mu > 1$ one can define a local (respectively, global) unstable manifold as a local (respectively, global) stable manifold of $f ^ {- 1 }$.
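In the uniformly hyperbolic example above, the local stable manifold $V ^ {s} ( x )$ is a small segment through $x$ in the eigendirection of $\lambda = ( 3 - \sqrt 5 ) / 2$, and its size $r ( x )$ can be taken constant. The global stable manifold $W ^ {s} ( x )$ is the projection to the torus of the entire line through $x$ in that direction; since the slope $- ( 1 + \sqrt 5 ) / 2$ of this direction is irrational, $W ^ {s} ( x )$ winds densely around $T ^ {2}$, illustrating that global stable manifolds are in general immersed, not embedded, submanifolds.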
## Non-uniformly hyperbolic dynamical systems and dynamical systems with non-zero Lyapunov exponents.
Let $f : M \rightarrow M$ be a diffeomorphism and let $\nu$ be a (finite) Borel $f$- invariant measure (cf. also Invariant measure). One calls $f$ non-uniformly hyperbolic (respectively, non-uniformly partially hyperbolic) with respect to the measure $\nu$ if the set $\Lambda \subset M$ of points whose trajectories are non-uniformly hyperbolic (respectively, non-uniformly partially hyperbolic) is such that $\nu ( \Lambda ) > 0$. In this case $\lambda$, $\mu$, $\epsilon$, and $C _ \epsilon$ are replaced by measurable functions $\lambda ( x )$, $\mu ( x )$, $\epsilon ( x )$, and $C _ \epsilon ( x )$, respectively.
The set $\Lambda$ is $f$- invariant, i.e., it satisfies $f \Lambda = \Lambda$. Therefore, one can always assume that $\nu ( \Lambda ) = 1$ when $\nu ( \Lambda ) > 0$; this means that if $\nu ( \Lambda ) > 0$, then the measure ${\widehat \nu }$ on $\Lambda$ defined by ${\widehat \nu } ( B ) = { {\nu ( B ) } / {\nu ( \Lambda ) } }$ is $f$- invariant and ${\widehat \nu } ( \Lambda ) = 1$.
For $( x, v ) \in M \times T _ {x} M$, one defines the forward upper Lyapunov exponent of $( x, v )$( with respect to $f$) by
$$\tag{a1 } \chi ( x, v ) = {\lim\limits \sup } _ {m \rightarrow + \infty } { \frac{1}{m} } { \mathop{\rm log} } \left \| {Df ^ {m} v } \right \|$$
for each $v \neq 0$, and $\chi ( x,0 ) = - \infty$. For every $x \in M$, there exist a positive integer $s ( x ) \leq { \mathop{\rm dim} } M$( the dimension of $M$) and collections of numbers $\chi _ {1} ( x ) < \dots < \chi _ {s ( x ) } ( x )$ and linear subspaces $E _ {1} ( x ) \subset \dots \subset E _ {s ( x ) } ( x ) = T _ {x} M$ such that for every $i = 1 \dots s ( x )$,
$$E _ {i} ( x ) = \left \{ {v \in T _ {x} M } : {\chi ( x,v ) \leq \chi _ {i} ( x ) } \right \} ,$$
and if $v \in E _ {i} ( x ) \setminus E _ {i - 1 } ( x )$, then $\chi ( x,v ) = \chi _ {i} ( x )$.
The numbers $\chi _ {i} ( x )$ are called the values of the forward upper Lyapunov exponent at $x$, and the collection of linear subspaces $E _ {i} ( x )$ is called the forward filtration at $x$ associated to $f$. The number $k _ {i} ( x ) = { \mathop{\rm dim} } E _ {i} ( x ) - { \mathop{\rm dim} } E _ {i - 1 } ( x )$ is the forward multiplicity of the exponent $\chi _ {i} ( x )$. One defines the forward spectrum of $f$ at $x$ as the collection of pairs $( \chi _ {i} ( x ) ,k _ {i} ( x ) )$ for $i = 1 \dots s ( x )$. Let $\chi _ {1} ^ \prime ( x ) \leq \dots \leq \chi _ { { \mathop{\rm dim} } M } ^ \prime ( x )$ be the values of the forward upper Lyapunov exponent at $x$ counted with multiplicities, i.e., in such a way that the exponent $\chi _ {i} ( x )$ appears exactly a number $k _ {i} ( x )$ of times. The functions $s ( x )$ and $\chi _ {i} ^ \prime ( x )$, for $i = 1 \dots { \mathop{\rm dim} } M$, are measurable and $f$- invariant with respect to any $f$- invariant measure.
One defines the backward upper Lyapunov exponent of $( x,v )$( with respect to $f$) by an expression similar to (a1), with $m \rightarrow + \infty$ replaced by $m \rightarrow - \infty$, and considers the corresponding backward spectrum.
A Lyapunov-regular trajectory $\{ {f ^ {n} x } : {n \in \mathbf Z } \}$( see, for example, [a3], Sect. 2) is non-uniformly hyperbolic (respectively, non-uniformly partially hyperbolic) if and only if $\chi ( x,v ) \neq 0$ for all $v \in T _ {x} M$( respectively, $\chi ( x,v ) \neq 0$ for some $v \in T _ {x} M$). For flows, a Lyapunov-regular trajectory is non-uniformly hyperbolic if and only if $\chi ( x,v ) \neq 0$ for all $v \notin X ( x )$.
The multiplicative ergodic theorem of V. Oseledets [a19] implies that $\nu$- almost all points of $M$ belong to a Lyapunov-regular trajectory. Therefore, for a given diffeomorphism, one has $\chi ( x,v ) \neq 0$ for all $v \in T _ {x} M$( respectively $\chi ( x,v ) \neq 0$ for some $v \in T _ {x} M$) on a set of positive $\nu$- measure if and only if the diffeomorphism is non-uniformly hyperbolic (respectively, non-uniformly partially hyperbolic). Hence, the non-uniformly hyperbolic diffeomorphisms (with respect to the measure $\nu$) are precisely the diffeomorphisms with non-zero Lyapunov exponents (on a set of positive $\nu$- measure).
Furthermore, for $\nu$- almost every $x \in \Lambda$ there exist subspaces $H _ {j} ( x )$, for $j = 1 \dots s ( x )$, such that for every $i = 1 \dots s ( x )$ one has $E _ {i} ( x ) = \oplus _ {j = 1 } ^ {i} H _ {j} ( x )$,
$${\lim\limits } _ {m \rightarrow \pm \infty } { \frac{1}{m} } { \mathop{\rm log} } \left \| {Df ^ {m} v } \right \| = \chi _ {i} ( x )$$
for every $v \in H _ {i} ( x ) \setminus \{ 0 \}$, and if $i \neq j$, then
$${\lim\limits } _ {m \rightarrow \pm \infty } { \frac{1}{m} } { \mathop{\rm log} } \left | { { \mathop{\rm angle} } ( H _ {i} ( f ^ {m} x ) ,H _ {j} ( f ^ {m} x ) ) } \right | = 0.$$
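A standard illustration (an added example, not part of the original article): for the hyperbolic automorphism of the two-dimensional torus induced by the matrix $\left ( \begin{smallmatrix} 2 & 1 \\ 1 & 1 \end{smallmatrix} \right )$, the eigenvalues are $( 3 \pm \sqrt 5 ) / 2$, so every trajectory has the two Lyapunov exponents

$$\chi _ {1} = - { \mathop{\rm log} } { \frac{3 + \sqrt 5 }{2} } < 0 < \chi _ {2} = { \mathop{\rm log} } { \frac{3 + \sqrt 5 }{2} } ,$$

each with multiplicity one; every invariant measure of this automorphism is therefore hyperbolic, and in this case the estimates are even uniform.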
## Pesin sets.
To a non-uniformly partially hyperbolic diffeomorphism one associates a filtration of measurable sets (not necessarily invariant) on which the estimates 3)–5) are uniform.
Let $f$ be a non-uniformly hyperbolic diffeomorphism and let $C ( x ) = C _ {\epsilon ( x ) } ( x )$. Given $l > 0$, one defines the measurable set $\Lambda _ {l}$ by
$$\left \{ {x \in \Lambda } : {C ( x ) \leq l, \lambda ( x ) \leq { \frac{l - 1 }{l} } < { \frac{l + 1 }{l} } \leq \mu ( x ) } \right \} .$$
One has $\Lambda _ {l} \subset \Lambda _ {L}$ when $l \leq L$, and $\cup _ {l > 0 } \Lambda _ {l} = \Lambda ( { \mathop{\rm mod} } 0 )$. Each set $\Lambda _ {l}$ is closed but need not be $f$- invariant; for every $m \in \mathbf Z$ and $l > 0$ there exists an $L = L ( m,l )$ such that $f ^ {m} \Lambda _ {l} \subset \Lambda _ {L}$. The distribution $E ^ {s} ( x )$ is, in general, only measurable on $\Lambda$ but it is continuous on $\Lambda _ {l}$. The local stable manifolds $V ^ {s} ( x )$ depend continuously on $x \in \Lambda _ {l}$ and their sizes are uniformly bounded below on $\Lambda _ {l}$. Each set $\Lambda _ {l}$ is called a Pesin set.
One similarly defines Pesin sets for arbitrary non-uniformly partially hyperbolic diffeomorphisms.
## Lyapunov metrics and regular neighbourhoods.
Let $\langle {\cdot, \cdot } \rangle$ be the Riemannian metric on $TM$. For each fixed $\epsilon > 0$ and every $x \in \Lambda$, one defines a Lyapunov metric on $H _ {i} ( x )$ by
$$\left \langle {u,v } \right \rangle _ {x} ^ \prime = \sum _ {m \in \mathbf Z } \left \langle {Df ^ {m} u,Df ^ {m} v } \right \rangle _ {x} e ^ {- 2m \chi _ {i} ( x ) - 2 \epsilon \left | m \right | } ,$$
for each $u, v \in H _ {i} ( x )$. One extends this metric to $T _ {x} M$ by declaring orthogonal the subspaces $H _ {i} ( x )$ for $i = 1 \dots s ( x )$. The metric $\langle {\cdot, \cdot } \rangle ^ \prime$ is continuous on $\Lambda _ {l}$. The sequence of weights $\{ e ^ {- 2m \chi _ {i} ( x ) - 2 \epsilon | m | } \} _ {m \in \mathbf Z }$ is called a Pesin tempering kernel. Any linear operator $L _ \epsilon ( x )$ on $T _ {x} M$ such that
$$\left \langle {u,v } \right \rangle _ {x} ^ \prime = \left \langle {L _ \epsilon ( x ) u,L _ \epsilon ( x ) v } \right \rangle _ {x}$$
is called a Lyapunov change of coordinates.
There exist a measurable function $q : \Lambda \rightarrow {( 0,1 ] }$ satisfying $e ^ {- \epsilon } \leq { {q ( fx ) } / {q ( x ) } } \leq e ^ \epsilon$, and for each $x \in \Lambda$ a collection of imbeddings ${\Psi _ {x} } : {B ( 0,q ( x ) ) } \rightarrow M$, defined on the ball $B ( 0,q ( x ) ) \subset T _ {x} M$ by $\Psi _ {x} = { \mathop{\rm exp} } _ {x} \circ L _ \epsilon ( x ) ^ {- 1 }$, such that if $f _ {x} = \Psi _ {fx } ^ {- 1 } \circ f \circ \Psi _ {x}$, then:
1) the derivative $D _ {0} f _ {x}$ of $f _ {x}$ at the point $0$ has the Lyapunov block form
$$D _ {0} f _ {x} = \left ( \begin{array}{ccc} A _ {1} ( x ) &{} &{} \\ {} &\ddots &{} \\ {} &{} &A _ {s ( x ) } ( x ) \\ \end{array} \right ) ,$$
where each $A _ {i} ( x )$ is an invertible linear operator between the $k _ {i} ( x )$- dimensional spaces $L _ \epsilon ( x ) H _ {i} ( x )$ and $L _ \epsilon ( fx ) H _ {i} ( fx )$, for $i = 1 \dots s ( x )$;
2) for each $i = 1 \dots s ( x )$,
$$e ^ {\chi _ {i} ( x ) - \epsilon } \leq \left \| {A _ {i} ( x ) ^ {- 1 } } \right \| ^ {- 1 } \leq \left \| {A _ {i} ( x ) } \right \| \leq e ^ {\chi _ {i} ( x ) + \epsilon } ;$$
3) the $C ^ {1}$- distance between $f _ {x}$ and $D _ {0} f _ {x}$ on the ball $B ( 0,q ( x ) )$ is at most $\epsilon$;
4) there exist a constant $K$ and a measurable function $A : \Lambda \rightarrow \mathbf R$ satisfying $e ^ {- \epsilon } \leq { {A ( fx ) } / {A ( x ) } } \leq e ^ \epsilon$ such that for every $y,z \in B ( 0,q ( x ) )$,
$$Kd ( \Psi _ {x} y, \Psi _ {x} z ) \leq \left \| {y - z } \right \| \leq A ( x ) d ( \Psi _ {x} y, \Psi _ {x} z ) .$$
The function $A ( x )$ is bounded on each $\Lambda _ {l}$. The set $\Psi _ {x} ( B ( 0,q ( x ) ) ) \subset M$ is called a regular neighbourhood of the point $x$.
## Absolute continuity.
A property playing a crucial role in the study of the ergodic properties of (uniformly and non-uniformly) hyperbolic dynamical systems is the absolute continuity of the families of stable and unstable manifolds. It allows one to pass from the local properties of the system to the study of its global behaviour.
Let $\nu$ be an absolutely continuous $f$- invariant measure, i.e., an $f$- invariant measure that is absolutely continuous with respect to Lebesgue measure (cf. Absolute continuity). For each $x \in \Lambda$ and $l > 0$ there exists a neighbourhood $U ( x )$ of $x$ with size depending only on $l$ and with the following properties (see [a21]). Choose $y \in \Lambda _ {l} \cap U ( x )$. Given two smooth manifolds $W _ {1} , W _ {2} \subset U ( x )$ transversal to the local stable manifolds in $U ( x )$, one defines
$$A _ {i} = \left \{ {w \in W _ {i} \cap V ^ {s} ( z ) } : {z \in \Lambda _ {l} \cap U ( x ) } \right \}$$
for $i = 1,2$. Let $p : {A _ {1} } \rightarrow {A _ {2} }$ be the correspondence that takes $w \in W _ {1}$ to the point $p ( w ) \in W _ {2}$ such that $w,p ( w ) \in V ^ {s} ( z )$ for some $z$. If $\nu _ {i}$ is the measure induced on $W _ {i}$ by the Riemannian metric, for $i = 1,2$, then $p ^ {*} \nu _ {1}$ is absolutely continuous with respect to $\nu _ {2}$( if $l$ is sufficiently large, then $\nu _ {i} ( A _ {i} ) > 0$ for $i = 1,2$).
This result has the following consequences (see [a21]). For each measurable set $B \subset W _ {1} \cap \Lambda _ {l}$, let ${\widehat{B} }$ be the union of all the sets $V ^ {s} ( z ) \cap U ( x )$ such that $z \in \Lambda _ {l}$ and $V ^ {s} ( z ) \cap B \neq \emptyset$. The partition of ${\widehat{B} }$ into the submanifolds $V ^ {s} ( z )$ is a measurable partition (also called measurable decomposition), and the corresponding conditional measure of $\nu$ on $V ^ {s} ( z )$ is absolutely continuous with respect to the measure $\nu _ {z}$ induced on $V ^ {s} ( z )$ by the Riemannian metric, for each $z \in \Lambda _ {l}$ such that $V ^ {s} ( z ) \cap B \neq \emptyset$. In addition, $\nu _ {z} ( V ^ {s} ( z ) ) > 0$ for $\nu$- almost all $z \in {\widehat{B} } \cap \Lambda _ {l}$, and the measure ${\widehat \nu }$ on $W _ {1}$ defined for each measurable set $B$ by ${\widehat \nu } ( B ) = \nu ( {\widehat{B} } )$, is absolutely continuous with respect to $\nu _ {1}$.
## Smooth ergodic theory.
Let $f : M \rightarrow M$ be a non-uniformly hyperbolic $C ^ {1 + \alpha }$- diffeomorphism ( $\alpha > 0$) with respect to a Sinai–Ruelle–Bowen measure $\nu$, i.e., an $f$- invariant measure $\nu$ that has a non-zero Lyapunov exponent $\nu$- almost everywhere and has absolutely continuous conditional measures on stable (or unstable) manifolds with respect to Lebesgue measure (in particular, this holds if $\nu$ is absolutely continuous with respect to Lebesgue measure and has no zero Lyapunov exponents [a21]; see also above: "Absolute continuity" ). Then there is at most a countable number of disjoint $f$- invariant sets $\Lambda _ {0} , \Lambda _ {1} , \dots$( the ergodic components) such that [a21], [a11]:
1) $\cup _ {i \geq 0 } \Lambda _ {i} = \Lambda$, $\nu ( \Lambda _ {0} ) = 0$, and $\nu ( \Lambda _ {i} ) > 0$ and $f \mid _ {\Lambda _ {i} }$ is ergodic (see Ergodicity) with respect to $\nu \mid _ {\Lambda _ {i} }$ for every $i > 0$;
2) each set $\Lambda _ {i}$ is a disjoint union of sets $\Lambda _ {i1 } \dots \Lambda _ {in _ {i} }$ such that $f ( \Lambda _ {ij } ) = \Lambda _ {i,j + 1 }$ for each $j < n _ {i}$, and $f ( \Lambda _ {in _ {i} } ) = \Lambda _ {i1 }$;
3) for every $i$ and $j$, there is a metric isomorphism between $f ^ {n _ {i} } \mid _ {\Lambda _ {ij } }$ and a Bernoulli automorphism (in particular, the mapping $f ^ {n _ {i} } \mid _ {\Lambda _ {ij } }$ is a $K$- system).
If $\nu$ is an absolutely continuous $f$- invariant measure and the foliation $W ^ {s}$( or $W ^ {u}$) of $\Lambda$ is $C ^ {1}$- continuous (i.e., for each $x \in \Lambda$ there is a neighbourhood of $x$ in $W ^ {s} ( x )$ that is the image of an injective $C ^ {1}$- mapping $\varphi _ {x}$, defined on the ball with centre at $0$ and of radius $1$, and the mapping $x \mapsto \varphi _ {x}$ from $\Lambda$ into the family of $C ^ {1}$- mappings is continuous), then any ergodic component of positive $\nu$- measure is an open set (mod $0$); if, in addition, $f \mid _ \Lambda$ is topologically transitive (cf. Topological transitivity; Chaos), then $f \mid _ \Lambda$ is ergodic [a21].
If $f \mid _ \Lambda$ is ergodic, then for Lebesgue-almost-every point $x \in M$ and every continuous function $g$, one has
$${ \frac{1}{n} } \sum _ {k = 0 } ^ { {n } - 1 } g ( f ^ {k} x ) \rightarrow \int\limits _ { M } g {d \nu } \textrm{ as } n \rightarrow + \infty.$$
There is a measurable partition $\eta$ of $M$ with the following properties:
1) for $\nu$- almost every $x \in M$, the element $\eta ( x ) \in \eta$ containing $x$ is an open subset (mod $0$) of $W ^ {s} ( x )$;
2) $f \eta$ is a refinement of $\eta$, and $\lor _ {k = 0 } ^ \infty f ^ {k} \eta$ is the partition of $M$ into points;
3) $\wedge _ {k = 0 } ^ \infty f ^ {- k } \eta$ coincides with the measurable hull of $W ^ {s}$, as well as with the maximal partition with zero entropy (the $\pi$- partition for $f$; see Entropy of a measurable decomposition);
4) $h _ \nu ( f ) = h _ \nu ( f, \eta )$( cf. Entropy theory of a dynamical system).
## Pesin entropy formula.
For a $C ^ {1 + \alpha }$- diffeomorphism ( $\alpha > 0$) $f : M \rightarrow M$ of a compact manifold and an absolutely continuous $f$- invariant probability measure $\nu$, the metric entropy $h _ \nu ( f )$ of $f$ with respect to $\nu$ is given by the Pesin entropy formula [a21]
$$\tag{a2 } h _ \nu ( f ) = \int\limits _ { M } {\sum _ {i = 1 } ^ { {s } ( x ) } \chi _ {i} ^ {+} ( x ) k _ {i} ( x ) } {d \nu ( x ) } ,$$
where $\chi _ {i} ^ {+} ( x ) = \max \{ \chi _ {i} ( x ) ,0 \}$ and $( \chi _ {i} ( x ) ,k _ {i} ( x ) )$ form the forward spectrum of $f$ at $x$.
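As an added illustration continuing the toral automorphism example above: for the automorphism induced by $\left ( \begin{smallmatrix} 2 & 1 \\ 1 & 1 \end{smallmatrix} \right )$, the Haar (Lebesgue) measure $\nu$ is invariant and absolutely continuous, the only positive exponent is ${ \mathop{\rm log} } ( ( 3 + \sqrt 5 ) / 2 )$ with multiplicity one, and (a2) gives

$$h _ \nu ( f ) = { \mathop{\rm log} } { \frac{3 + \sqrt 5 }{2} } \approx 0.9624 .$$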
For a $C ^ {1}$- diffeomorphism $f : M \rightarrow M$ of a compact manifold and an $f$- invariant probability measure $\nu$, the Ruelle inequality holds [a25]:
$$\tag{a3 } h _ \nu ( f ) \leq \int\limits _ { M } {\sum _ {i = 1 } ^ { {s } ( x ) } \chi _ {i} ^ {+} ( x ) k _ {i} ( x ) } {d \nu ( x ) } .$$
An important consequence of (a3) is that any $C ^ {1}$- diffeomorphism with positive topological entropy has an $f$- invariant measure with at least one positive and one negative Lyapunov exponent; in particular, for surface diffeomorphisms there is an $f$- invariant measure with every exponent non-zero. For arbitrary invariant measures the inequality (a3) may be strict [a7].
The formula (a2) was first established by Pesin in [a21]. A proof which does not use the theory of invariant manifolds and absolute continuity was given by R. Mañé [a17]. For $C ^ {2}$- diffeomorphisms, (a2) holds if and only if $\nu$ has absolutely continuous conditional measures on unstable manifolds [a13], [a12].
The formula (a2) has been extended to mappings with singularities [a12]. For $C ^ {2}$- diffeomorphisms and arbitrary invariant measures, results of F. Ledrappier and L.-S. Young [a14] show that the possible defect between the left- and right-hand sides of (a3) is due to the defects between ${ \mathop{\rm dim} } E _ {i} ( x )$ and the Hausdorff dimension of $\nu$ "in the direction of $E _ {i} ( x )$" for each $i$.
## Hyperbolic measures.
Let $f$ be a $C ^ {1 + \alpha }$- diffeomorphism ( $\alpha > 0$) and let $\nu$ be an $f$- invariant measure. One says that $\nu$ is hyperbolic (with respect to $f$) if $\chi _ {i} ( x ) \neq 0$ for $\nu$- almost every $x \in M$ and all $i = 1 \dots s ( x )$. The measure $\nu$ is hyperbolic (with respect to $f$) if and only if $f$ is non-uniformly hyperbolic with respect to $\nu$( and the set $\Lambda$ has full $\nu$- measure). The fundamental work of A. Katok has revealed a rich and complicated orbit structure for diffeomorphisms possessing a hyperbolic measure.
Let $\nu$ be a hyperbolic measure. The support of $\nu$ is contained in the closure of the set of periodic points. If $\nu$ is ergodic and not concentrated on a periodic orbit, then [a7], [a9]:
1) the support of $\nu$ is contained in the closure of the set of hyperbolic periodic points possessing a transversal homoclinic point;
2) for every $\epsilon > 0$ there exists a closed $f$- invariant hyperbolic set $\Gamma$ such that the restriction of $f$ to $\Gamma$ is topologically conjugate to a topological Markov chain with topological entropy $h ( f \mid _ \Gamma ) \geq h _ \nu ( f ) - \epsilon$, i.e., the entropy of a hyperbolic measure can be approximated by the topological entropies of invariant hyperbolic sets.
If $f$ possesses a hyperbolic measure, then $f$ satisfies a closing lemma: given $\epsilon > 0$, there exists a $\delta = \delta ( l, \epsilon ) > 0$ such that for each $x \in \Lambda _ {l}$ and each integer $m$ satisfying $f ^ {m} x \in \Lambda _ {l}$ and $d ( x,f ^ {m} x ) < \delta$, there exists a point $y$ such that $f ^ {m} y = y$, $d ( f ^ {k} x,f ^ {k} y ) < \epsilon$ for every $k = 0 \dots m$, and $y$ is a hyperbolic periodic point [a7]. The diffeomorphism $f$ also satisfies a shadowing lemma (see [a9]) and a Lifschitz-type theorem [a9]: if $\varphi$ is a Hölder-continuous function (cf. Hölder condition) such that $\sum _ {k = 0 } ^ {m - 1 } \varphi ( f ^ {k} p ) = 0$ for each periodic point $p$ with $f ^ {m} p = p$, then there is a measurable function $h$ such that $\varphi ( x ) = h ( fx ) - h ( x )$ for $\nu$- almost every $x$.
Let $P _ {n} ( f )$ be the number of periodic points of $f$ with period $n$. If $f$ possesses a hyperbolic measure or is a surface diffeomorphism, then
$${\lim\limits \sup } _ {n \rightarrow + \infty } { \frac{1}{n} } { \mathop{\rm log} } ^ {+} P _ {n} ( f ) \geq h ( f ) ,$$
where $h ( f )$ is the topological entropy of $f$[a7].
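For the toral automorphism above this bound is sharp (again an added illustration): the number of fixed points of $f ^ {n}$ is $P _ {n} ( f ) = | { \mathop{\rm det} } ( A ^ {n} - I ) | = \lambda ^ {n} + \lambda ^ {- n } - 2$ with $\lambda = ( 3 + \sqrt 5 ) / 2$, so

$${\lim\limits } _ {n \rightarrow + \infty } { \frac{1}{n} } { \mathop{\rm log} } ^ {+} P _ {n} ( f ) = { \mathop{\rm log} } \lambda = h ( f ) .$$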
Let $\nu$ be a hyperbolic ergodic measure. L.M. Barreira, Pesin and J. Schmeling [a2] have shown that there is a constant $d$ such that for $\nu$- almost every $x \in M$,
$${\lim\limits } _ {r \rightarrow 0 } { \frac{ { \mathop{\rm log} } \nu ( B ( x,r ) ) }{ { \mathop{\rm log} } r } } = d,$$
where $B ( x,r )$ is the ball in $M$ with centre at $x$ and of radius $r$( this claim was known as the Eckmann–Ruelle conjecture); this implies that the Hausdorff dimension of $\nu$ and the lower and upper box dimensions of $\nu$ coincide and are equal to $d$( see [a2]). Ledrappier and Young [a14] have shown that if $\nu _ {x} ^ {s}$( respectively, $\nu ^ {u} _ {x}$) are the conditional measures of $\nu$ with respect to the stable (respectively, unstable) manifolds, then there are constants $d ^ {s}$ and $d ^ {u}$ such that for $\nu$- almost every $x \in M$,
$${\lim\limits } _ {r \rightarrow 0 } { \frac{ { \mathop{\rm log} } \nu _ {x} ^ {s} ( B ^ {s} ( x,r ) ) }{ { \mathop{\rm log} } r } } = d ^ {s} ,$$
$${\lim\limits } _ {r \rightarrow 0 } { \frac{ { \mathop{\rm log} } \nu _ {x} ^ {u} ( B ^ {u} ( x,r ) ) }{ { \mathop{\rm log} } r } } = d ^ {u} ,$$
where $B ^ {s} ( x,r )$( respectively, $B ^ {u} ( x,r )$) is the ball in $V ^ {s} ( x )$( respectively, $V ^ {u} ( x )$) with centre at $x$ and of radius $r$. Moreover, $d = d ^ {s} + d ^ {u}$[a2] and $\nu$ has an "almost product structure" (see [a2]).
## Criteria for having non-zero Lyapunov exponents.
Above it has been shown that non-uniformly hyperbolic dynamical systems possess strong ergodic properties, as well as many other important properties. Therefore, it is of primary interest to have verifiable methods for checking the non-vanishing of Lyapunov exponents.
The following Katok–Burns criterion holds: A real-valued measurable function $Q$ on the tangent bundle $TM$ is called an eventually strict Lyapunov function if for $\nu$- almost every $x \in M$:
1) the function $Q _ {x} ( v ) = Q ( x,v )$ is continuous, homogeneous of degree one and takes both positive and negative values;
2) the maximal dimensions of the linear subspaces contained, respectively, in the sets $\{ 0 \} \cup Q _ {x} ^ {- 1 } ( 0, + \infty )$ and $\{ 0 \} \cup Q _ {x} ^ {- 1 } ( - \infty,0 )$ are constants $r ^ {+} ( Q )$ and $r ^ {-} ( Q )$, and $r ^ {+} ( Q ) + r ^ {-} ( Q )$ is the dimension of $M$;
3) $Q _ {fx } ( Dfv ) \geq Q _ {x} ( v )$ for all $v \in T _ {x} M$;
4) there exists a positive integer $m = m ( x )$ such that for all $v \in T _ {x} M \setminus \{ 0 \}$,
$$Q _ {f ^ {m} x } ( Df ^ {m} v ) > Q _ {x} ( v ) ,$$
$$Q _ {f ^ {- m } x } ( Df ^ {- m } v ) < Q _ {x} ( v ) .$$
If $f$ possesses an eventually strict Lyapunov function, then there exist exactly $r ^ {+} ( Q )$ positive Lyapunov exponents and $r ^ {-} ( Q )$ negative ones [a8] (see also [a28]).
Another method to estimate the Lyapunov exponents was presented in [a6].
## Generalizations.
There are several natural and important generalizations of Pesin theory. Examples of these are: generalizations to non-invertible mappings; extensions of the main results of Pesin's work to mappings with singularities [a10], including billiard systems and other physical models; infinite-dimensional versions of results on stable and unstable manifolds in Hilbert spaces [a27] and Banach spaces [a18], given certain compactness assumptions; some results have been extended to random mappings [a15].
Related results have been obtained for products of random matrices (see [a5] and the references therein).
#### References
[a1] D. Anosov, "Geodesic flows on closed Riemann manifolds with negative curvature" Proc. Steklov Inst. Math. , 90 (1969) (In Russian) MR0242194 Zbl 0176.19101 Zbl 0163.43604
[a2] L. Barreira, Ya. Pesin, J. Schmeling, "On the pointwise dimension of hyperbolic measures: A proof of the Eckmann–Ruelle conjecture" Electronic Research Announc. Amer. Math. Soc. , 2 (1996) MR1405971 Zbl 0871.58054
[a3] I. Cornfeld, Ya. Sinai, "Basic notions of ergodic theory and examples of dynamical systems" Ya. Sinai (ed.) , Dynamical Systems II , Encycl. Math. Sci. , 2 , Springer (1989) pp. 2–27 (In Russian)
[a4] A. Fathi, M. Herman, J. Yoccoz, "A proof of Pesin's stable manifold theorem" J. Palis (ed.) , Geometric Dynamics , Lecture Notes in Mathematics , 1007 , Springer (1983) pp. 177–215 MR730270
[a5] I. Goldsheid, G. Margulis, "Lyapunov exponents of a product of random matrices" Russian Math. Surveys , 44 (1989) pp. 11–71 (In Russian) MR1040268
[a6] M. Herman, "Une méthode pour minorer les exposants de Lyapunov et quelques examples montrant le caractére local d'un théorèm d'Arnold et de Moser sur le tore de dimension " Comment. Math. Helv. , 58 (1983) pp. 453–502
[a7] A. Katok, "Lyapunov exponents, entropy and periodic orbits for diffeomorphisms" IHES Publ. Math. , 51 (1980) pp. 137–173 MR0573822 Zbl 0445.58015
[a8] A. Katok, K. Burns, "Infinitesimal Lyapunov functions, invariant cone families and stochastic properties of smooth dynamical systems" Ergodic Th. Dynamical Systems , 14 (1994) pp. 757–785 MR1304141 Zbl 0816.58029
[a9] A. Katok, L. Mendoza, "Dynamical systems with nonuniformly hyperbolic behavior" A. Katok (ed.) B. Hasselblatt (ed.) , Introduction to the Modern Theory of Dynamical Systems , Cambridge Univ. Press (1995)
[a10] A. Katok, J.-M. Strelcyn, "Invariant manifolds, entropy and billiards; smooth maps with singularities" , Lecture Notes in Mathematics , 1222 , Springer (1986) (with the collaboration of F. Ledrappier and F. Przytycki) MR0872698 Zbl 0658.58001
[a11] F. Ledrappier, "Propriétés ergodiques des mesures de Sinaï" IHES Publ. Math. , 59 (1984) pp. 163–188 MR0743818 Zbl 0561.58037
[a12] F. Ledrappier, J.-M. Strelcyn, "A proof of the estimate from below in Pesin's entropy formula" Ergodic Th. Dynamical Systems , 2 (1982) pp. 203–219
[a13] F. Ledrappier, L.-S. Young, "The metric entropy of diffeomorphisms I. Characterization of measures satisfying Pesin's entropy formula" Ann. of Math. (2) , 122 (1985) pp. 509–539 MR0819556 MR0819557
[a14] F. Ledrappier, L.-S. Young, "The metric entropy of diffeomorphisms. II. Relations between entropy, exponents and dimension" Ann. of Math. (2) , 122 (1985) pp. 540–574 MR0819556 MR0819557
[a15] P.-D. Liu, M. Qian, "Smooth ergodic theory of random dynamical systems" , Lecture Notes in Mathematics , 1606 , Springer (1995) MR1369243 Zbl 0841.58041
[a16] C. Liverani, M. Wojtkowski, "Ergodicity in Hamiltonian systems" , Dynamics Reported Expositions in Dynamical Systems (N.S.) , 4 , Springer (1995) pp. 130–202 MR1346498 Zbl 0824.58033
[a17] R. Mané, "A proof of Pesin's formula" Ergodic Th. Dynamical Systems , 1 (1981) pp. 95–102 (Errata: 3 (1983), 159–160) MR627789
[a18] R. Mané, "Lyapunov exponents and stable manifolds for compact transformations" J. Palis (ed.) , Geometric Dynamics , Lecture Notes in Mathematics , 1007 , Springer (1983) pp. 522–577 Zbl 0522.58030
[a19] V. Oseledets, "A multiplicative ergodic theorem. Liapunov characteristic numbers for dynamical systems" Trans. Moscow Math. Soc. , 19 (1968) pp. 197–221 (In Russian)
[a20] Ya. Pesin, "Families of invariant manifolds corresponding to nonzero characteristic exponents" Math. USSR Izv. , 10 (1976) pp. 1261–1305 (In Russian) MR458490 Zbl 0383.58012
[a21] Ya. Pesin, "Characteristic exponents and smooth ergodic theory" Russian Math. Surveys , 32 (1977) pp. 55–114 (In Russian) MR466791 Zbl 0383.58011
[a22] Ya. Pesin, "Geodesic flows on closed Riemannian manifolds without focal points" Math. USSR Izv. , 11 (1977) pp. 1195–1228 (In Russian) MR488169 Zbl 0399.58010
[a23] Ya. Pesin, "General theory of smooth hyperbolic dynamical systems" Ya. Sinai (ed.) , Dynamical Systems II , Encycl. Math. Sci. , 2 , Springer (1989) pp. 108–151 (In Russian)
[a24] C. Pugh, M. Shub, "Ergodic attractors" Trans. Amer. Math. Soc. , 312 (1989) pp. 1–54 MR0983869 Zbl 0684.58008
[a25] D. Ruelle, "An inequality for the entropy of differentiable maps" Bol. Soc. Brasil. Mat. , 9 (1978) pp. 83–87 MR0516310 Zbl 0432.58013
[a26] D. Ruelle, "Ergodic theory of differentiable dynamical systems" IHES Publ. Math. , 50 (1979) pp. 27–58 MR0556581 Zbl 0426.58014
[a27] D. Ruelle, "Characteristic exponents and invariant manifolds in Hilbert space" Ann. of Math. (2) , 115 (1982) pp. 243–290 MR0647807 Zbl 0493.58015
[a28] M. Wojtkowski, "Invariant families of cones and Lyapunov exponents" Ergodic Th. Dynamical Systems , 5 (1985) pp. 145–161 MR0782793 Zbl 0578.58033
How to Cite This Entry:
Pesin theory. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Pesin_theory&oldid=49524
This article was adapted from an original article by L.M. Barreira (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
|
group theory
Idea
For $H↪G$ a subgroup, its index is the number $\mid G:H\mid$ of $H$-cosets in $G$.
Definition
For $H↪G$ a subgroup, its index is the cardinality
$\vert G : H \vert \coloneqq \vert G/H \vert$
of the set $G/H$ of cosets.
Properties
Multiplicativity
If $H$ is a subgroup of $G$, the coset projection $(-)H : G\to G/H$ sends an element $g$ of $G$ to its orbit $gH$.
If $s : G/H\to G$ is a section of the coset projection $(-)H : G\to G/H$, then $G/H×H\to G$ given by $\left(gH,h\right)↦s\left(gH\right){h}^{-1}$ is a bijection. Its inverse is the set map $G\to G/H×H$ given by $g↦\left(gH,{g}^{-1}s\left(gH\right)\right)$. Note that, under this bijection, the induced product projection $G\to G/H$ coincides with the coset projection.
This argument can be internalized to a group object $G$ and a subgroup object $H$ in a category $C$. In this case, the coset projection $(-)H : G\to G/H$ is the coequalizer of the action on $G$ by multiplication of $H$. The coset projection need not have a section. However, when such sections do exist, then for each section $s$ of the coset projection the internalized version of the above argument yields an isomorphism
$G/H \times H \overset{\simeq}{\to} G \,.$
Even more generally, if $H↪K↪G$ is a sequence of subgroup objects, then each section of the projection $G/H\to G/K$ yields an isomorphism
$G/K \,\times\, K/H \overset{\simeq}{\to} G/H \,.$
Returning to the case of ordinary groups, i.e. group objects internal to $\mathrm{Set}$, where the external axiom of choice is assumed to hold, the coset projection, being a coequalizer and hence an epimorphism, has a section. This gives the multiplicative property of the indices of a sequence $H↪K↪G$ of subgroups
$\vert G : K \vert \cdot \vert K : H \vert = \vert G : H \vert \,.$
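For instance (a quick illustration not in the original entry), for the chain $4ℤ↪2ℤ↪ℤ$ one has $\vert ℤ : 2ℤ \vert \cdot \vert 2ℤ : 4ℤ \vert = 2 \cdot 2 = 4 = \vert ℤ : 4ℤ \vert$.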
Finite groups
The concept of index is meaningful especially for finite groups, i.e. groups internal to $\mathrm{FinSet}$. See, for example, its role in the classification of finite simple groups.
Multiplicativity of the index has the following corollary, which is known as Lagrange’s theorem: If $G$ is a finite group, then the index of any subgroup is the quotient
$\vert G : H \vert = \frac{\vert G \vert}{\vert H \vert}$
of the order (cardinality = number of elements) of $G$ by that of $H$.
Examples
• For $n\in ℕ$ with $n\ge 1$ and $ℤ\stackrel{\cdot n}{↪}ℤ$ the subgroup of the integers given by those that are multiples of $n$, the index is $n$.
|
# Cross-validation with sampling weights
I am trying to cross-validate a logistic regression model with probability sampling weights (weights representing the number of subjects in the population). I am not sure how to handle the weights in each of the 'folds' (cross-validation steps). I don't think it is as simple as leaving out the observations; I believe the weights need to be rescaled at each step.
SAS has an option in proc surveylogistic to get cross validated (leave one out) prediction probabilities. Unfortunately I cannot find in the documentation any details on how these were calculated. I would like to reproduce those probabilities in R. So far I have not had success and am not sure if my approach is correct.
I hope someone can recommend an appropriate method to do the cross validation with the sampling weights. If they could match the SAS results that would be great too.
R code for leave-one-out cross validated probabilities (produces error):
library(bootstrap)
library(survey)
fitLogistic = function(x,y){
tmp=as.data.frame(cbind(y,x))
dsn=svydesign(ids=~0,weights=wt,data=tmp)
svyglm(y~x1+x2,
data=tmp,family = quasibinomial,design=dsn)
}
predict.logistic = function(fitLog,x){
pred.logistic=predict(fitLog,newdata=x,type='response')
print(pred.logistic)
ifelse(pred.logistic>=.5,1,0)
}
CV_Res= crossval(x=data1[,-1], y=data1[,1], fitLogistic, predict.logistic, ngroup = 13)
Sample Data Set:
y x1 x2 wt
0 0 1 2479.223
1 0 1 374.7355
1 0 2 1953.4025
1 1 2 1914.0136
0 0 2 2162.8524
1 0 2 491.0571
0 0 1 1842.1192
0 0 1 400.8098
0 1 1 995.5307
0 0 1 955.6634
1 0 2 2260.7749
0 1 1 1707.6085
0 0 2 1969.9993
SAS proc surveylogistic leave-one-out cross validated probabilities for sample data set:
.0072, 1, .884, .954, ...
SAS Code:
proc surveylogistic;
model y=x1 x2;
weight wt;
output out=a2 predprobs=x;
run;
• I'm not sure how to interpret the SAS probabilities... is that 0.0072%, 1.884%, 0.954%? These seem terribly small for a dataset with 36% of the observations (adjusted by weighting) = 1... – jbowman Dec 19 '11 at 21:01
• They are the actual probabilities, percentages would be .72%, 100%, 88.4%, 95.4%,....They seem odd to me too... – Glen Dec 19 '11 at 22:18
You can save yourself some coding effort, surprisingly enough, by simply doing the leave-one-out (LWO) cross-validation yourself:
data1$norm.wt <- data1$wt / sum(data1$wt)
lwo <- rep(0,nrow(data1))
for (j in 1:nrow(data1)) {
  fj <- glm(y~x1+x2, family=quasibinomial, weights=norm.wt, data=data1[-j,])
  lwo[j] <- predict(fj, data1[j,], type="response")
}

> print(lwo, digits=4)
 [1] 2.564e-02 2.220e-16 4.405e-01 2.128e-07 7.360e-01 5.360e-01 2.383e-02
 [8] 2.064e-02 1.316e-01 2.174e-02 4.152e-01 1.895e-01 7.162e-01

Normalizing the weights to sum to one prevents a numerical problem (in this case) that results in your parameter estimates blowing up:

> summary(glm(y~x1+x2, family=quasibinomial, weights=wt, data=data1))

Call:
glm(formula = y ~ x1 + x2, family = quasibinomial, data = data1, weights = wt)

Deviance Residuals:
   Min     1Q Median     3Q    Max
-394.9    0.0    0.0    0.0  164.4

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -6.790e+15  2.354e+15  -2.885   0.0162 *
x1           2.004e+15  1.630e+15   1.230   0.2470
x2           3.403e+15  1.393e+15   2.443   0.0347 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for quasibinomial family taken to be 2.030037e+18)

    Null deviance:  25460  on 12  degrees of freedom
Residual deviance: 324940  on 10  degrees of freedom
AIC: NA

Number of Fisher Scoring iterations: 12

versus normalized weights:

> summary(glm(y~x1+x2, family=quasibinomial, weights=data1$wt/sum(data1$wt), data=data1))

Call:
glm(formula = y ~ x1 + x2, family = quasibinomial, data = data1,
    weights = data1$wt/sum(data1$wt))

Deviance Residuals:
     Min       1Q   Median       3Q      Max
 -0.4273  -0.1004  -0.0444   0.1706   0.3879

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   -8.038      6.978  -1.152    0.276
x1             1.631      3.177   0.513    0.619
x2             4.142      3.536   1.171    0.269

(Dispersion parameter for quasibinomial family taken to be 0.1440099)

    Null deviance: 1.30513  on 12  degrees of freedom
Residual deviance: 0.84517  on 10  degrees of freedom
AIC: NA

Number of Fisher Scoring iterations: 6
You don't have to renormalize the weights at every step of the LWO loop; in effect they are renormalized anyway, since the weights are relative.
This doesn't match the SAS probabilities, admittedly, but it seems to me it's what you're trying to do.
• +1 Thanks for the answer. However normalizing the weights in the glm fit gives different standard errors than the fit with svyglm. The svyglm results match up with the results from SAS. – Glen Dec 19 '11 at 22:55
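If you want the leave-one-out fits to go through the survey machinery itself (the point raised in the comment above), one option is to rebuild the svydesign object inside the loop. This is only a sketch using the column names from the sample data set; it has not been checked against the SAS leave-one-out probabilities:

library(survey)
lwo.svy <- rep(0, nrow(data1))
for (j in 1:nrow(data1)) {
  # rebuild the survey design on the n-1 retained rows; svydesign treats the
  # weights as relative, so no explicit rescaling is needed within each fold
  dsn.j <- svydesign(ids = ~1, weights = ~wt, data = data1[-j, ])
  fit.j <- svyglm(y ~ x1 + x2, design = dsn.j, family = quasibinomial)
  lwo.svy[j] <- as.numeric(predict(fit.j, newdata = data1[j, ], type = "response"))
}
round(lwo.svy, 4)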
|
Trigonometry
acos¶
Syntax: acos x (unary, atomic)
Returns the arccosine of x; that is, the value whose cosine is x. The result is in radians and lies between 0 and π. (The range is approximate due to rounding errors).
Null is returned if the argument is not between -1 and 1.
q)acos -0.4
1.982313
asin¶
Syntax: asin x (unary, atomic)
Returns the arcsine of x; that is, the value whose sine is x. The result is in radians and lies between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. (The range is approximate due to rounding errors).
Null is returned if the argument is not between -1 and 1.
q)asin 0.8
0.9272952
atan¶
Syntax: atan x (unary, atomic)
Returns the arctangent of x; that is, the value whose tangent is x. The result is in radians and lies between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. (The range is approximate due to rounding errors).
q)atan 0.5
0.4636476
q)atan 42
1.546991
cos¶
Syntax: cos x (unary, atomic)
Returns the cosine of x, taken to be in radians. The result is between -1 and 1, or null if the argument is null or infinity.
q)cos 0.2
0.9800666
q)min cos 10000?3.14159265
-1f
q)max cos 10000?3.14159265
1f
sin¶
Syntax: sin x
Returns the sine of x, taken to be in radians. The result is between -1 and 1, or null if the argument is null or infinity.
q)sin 0.5
0.4794255
q)sin 1%0
0n
tan¶
Syntax: tan x (unary, atomic)
Returns the tangent of x, taken to be in radians. Integer arguments are promoted to floating point. Null is returned if the argument is null or infinity.
q)tan 0 0.5 1 1.5707963 2 0w
0 0.5463025 1.557408 3.732054e+07 -2.18504 0n
|
# How to calculate conjunctions of 2 planets
So, the recent conjunction of Jupiter and Venus seems to have spawned lots of excitement over this "rare" event. But what I can't figure out is exactly how rare it is. And I've seen such conflicting claims and calculations that I figured I'd better calculate it myself. The only problem is I'm no astronomer. So, does anyone know of a program that can calculate all the conjunctions between a date range that are within a degree of each other? Or does some smart person know how to calculate it manually? :) I have a program called Stellarium which I've been using to view the various conjunctions that people have mentioned. I don't know if I can use it in the way I need it though. Any help? :) Thanks
The PyEphem library allows you to create Python scripts that could calculate any conjunction you want (and many other things besides that).
http://rhodesmill.org/pyephem/
Here's a script that already does that:
http://shallowsky.com/blog/science/astro/predicting-conjunctions.html
Here's the result of running that script now:
http://pastebin.com/ehjxV66m
Notable event: extremely close conjunction between Jupiter and Venus in 2016:
Conjunction of Venus and Jupiter lasts from 2016/8/25 to 2016/8/31.
Venus and Jupiter are closest on 2016/8/28 (0.1 deg).
• Which is interesting because most sites reporting on this don't mention the 2016 conjunction and say we won't see anything like it for another 10 years or more. (It varies drastically by article) – AdamMasters Jul 1 '15 at 22:45
• AdamMasters, I don't have enough reputation to answer your comment outside of an answer, so: that's because it'll be very low when it gets dark enough to see. It should be visible to the naked eye for maybe literally 5 minutes between mid-twilight and it getting too low to see, but only if the sky isn't very hazy. However, this is from midnorthern latitudes. The Southern hemisphere should see that easily. A 6 to 9 thousand foot elevation with a low western horizon would be better than sea level. A telescope or binoculars would also make it easier to see as long as you can aim. – user7583 Jul 2 '15 at 0:01
• It will be difficult to see unless you're free of surrounding buildings and trees. At the time of the sunset, they will be only 11 deg above horizon. That's only half the altitude of the conjunction this year at sunset. But if you have an open, flat field on the western side of town, you'll be able to observe that conjunction just fine. – Florin Andrei Jul 2 '15 at 17:42
• There's another Venus-Jupiter conjunction of 25 Oct 2015 that I don't think is mentioned on the pastebin page above? – user21 Aug 10 '15 at 1:37
EDIT: http://wgc.jpl.nasa.gov:8080/webgeocalc/#AngularSeparationFinder lets you find planetary conjunctions online using NASA's data. It's still iterative, but fairly fast (since NASA uses fairly powerful servers, even for their websites).
Summary: I'm still researching, but there appears to be no well-known, reliable non-iterative method to find conjunctions. Using the iterative method and the C SPICE libraries, I created a table of conjunctions for the visible planets (Mercury, Venus, Mars, Jupiter, Saturn, Uranus) here:
http://search.astro.barrycarter.info/
I am still researching a general answer to this question ("How to calculate conjunctions of 2 planets"), but here's what I have so far.
The iterative method:
• Compute the positions of the planets at regular intervals (eg, daily). The "daily" works for planets (but not some asteroids and definitely not the Moon) because the planets move through the sky relatively slowly.
• Find local minima in the daily lists.
• For efficiency, carefully discard local minima that are too large. For example, Mercury and Venus may approach each other, reach a minimal distance of 20 degrees, and then drift apart. The 20 degrees is a local minima, but not a conjunction.
• However, be careful when discarding minima. If you are searching for 5-degree conjunctions, two planets may be 5.1 degrees apart one day, and 5.2 degrees apart the next day, but less than 5 degrees apart sometime in the interim.
• For 5-degree conjunctions, you only need daily minima less than 8 degrees, and even that is overkill. The fastest a planet can move in the sky is 1.32 degrees per day (Mercury), and the second fastest is 1.19 degrees per day (Venus). In theory, these movements could be in opposite directions, so the fastest two planets can separate is 2.51 degrees per day. So, if two planets are more than 8 degrees apart two days in a row, there is no way they could be closer than 5 degrees between the days.
• In reality, planets maximum retrograde angular speed is slower than the prograde maximum speed, so the 2.51 degree limit above is never actually reached.
• After finding local minima, use minimization techniques (e.g., the ternary method) to find the instant of closest approach; a generic sketch of this search loop is given below.
• I ended up using the C SPICE libraries, and found 32,962 six-degrees-or-less conjunctions between -13201 and 17190, averaging about 1 conjunction per year. Of these, 2,185 occur between the "star of Bethlehem" and the 2015 conjunctions:
http://12d4dc067e0d836f1541e50125c24a03.astro.db.mysql.94y.info/
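To make the daily-sampling-plus-refinement recipe above concrete, here is a generic sketch (not the author's code). It assumes you already have a function sep(t) returning the angular separation in degrees of the two planets at time t in days, for example computed from an ephemeris library; the 3-degree margin on the coarse cutoff follows the maximum-daily-motion argument above.

# Sketch of the iterative search: sample daily, keep local minima below a
# generous cutoff, then refine each candidate minimum with optimize().
find_conjunctions <- function(sep, t_start, t_end, max_sep = 5, coarse_step = 1) {
  t <- seq(t_start, t_end, by = coarse_step)
  s <- vapply(t, sep, numeric(1))
  hits <- list()
  for (i in 2:(length(t) - 1)) {
    # local minimum of the daily samples, with a cutoff of max_sep + 3 degrees
    # so that no sub-threshold minimum between samples can be missed
    if (s[i] <= s[i - 1] && s[i] <= s[i + 1] && s[i] < max_sep + 3) {
      ref <- optimize(sep, interval = c(t[i - 1], t[i + 1]))  # refine the minimum
      if (ref$objective < max_sep) {
        hits[[length(hits) + 1]] <- data.frame(time = ref$minimum,
                                               separation = ref$objective)
      }
    }
  }
  do.call(rbind, hits)  # NULL if no conjunction closer than max_sep was found
}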
This iterative process works, but can be tedious. Since planetary positions are semi-well-behaved, you'd think there would be a faster, non-iterative method. Well...
http://emfisis.physics.uiowa.edu/Software/C/cspice/doc/html/cspice/gfsep_c.html
However, this also uses an iterative method, as the long description of the "step" parameter indicates:
"step must be short enough for a search using step to locate the time intervals where the specified angular separation function is monotone increasing or decreasing. However, step must not be too short, or the search will take an unreasonable amount of time"
By experimentation, I found that a step size of 6 days does not find all Mercury/Venus minimal separations, although a step size of 1 day does find these. In other words, reducing the step size from 6 days to 1 day found additional conjunctions, but reducing the step size to well below 1 day did not produce any additional conjunctions.
M[y script] iterates through dates.
[...]
In Astronomical Algorithms, Meeus has a chapter (17) on Planetary Conjunctions, but the chapter is less than three pages and lacks detail except that he's pretty clearly iterating and looking for the smallest separation, same as my Python program does.
He does a little better in Astronomical Formulae for Calculators: sounds like he's advocating iterating over big steps to find the place where the separation stops decreasing and starts increasing, then using "inverse interpolation" to find the exact time.
"Mathematical Astronomy Morsels" (also Meeus) talks starting on p. 246 about triple conjunctions (defined as two planets having several conjunctions in a short time, due to retrograde motion, not three planets all in conjunction at once) and gives some examples for very specific pairs, like Jupiter and Saturn, with ways you could look for those conjunctions.
It doesn't look like any of these books would be very helpful in finding a non-iterative solution.
• I haven't had a chance to read the books above, but did find:
where Meeus confirms the standard iterative method, but also provides a different, less accurate method. Unfortunately, Meeus only uses this method to compute solar conjunctions, elongations and oppositions, not interplanet conjunctions.
• Jon Giorgini of NASA ([email protected]) tells me:
As far as NASA computed planetary/stellar conjunctions/occultations, I don't know of anyone within NASA that does that routinely. There is an external network of volunteers that does that under the umbrella of the International Occultation Timing Association (IOTA), and they have developed pretty refined internal software for that purpose.
[...] the software package Occult does generate planetary conjunction predictions - based on a two-body solution. The approach used is a crude brute-force method of generating the planetary ephemerides on a daily basis. It is not particularly efficient - but it is sufficient for its intended purpose. [...]
• As a note, IOTA focuses on asteroid occultations, so computing positions daily doesn't always work. Especially for near-Earth asteroids, IOTA must iterate considerably more frequently.
• I also tried contacting Fred Espenak, the creator of http://eclipse.gsfc.nasa.gov/SKYCAL/SKYCAL.html, but was unable to do so. Jon Giorgini tells me that Fred has retired.
• I'm still looking, but my current conclusion is that there is no good well-known non-iterative way to find planetary conjunctions. As the image in my https://mathematica.stackexchange.com/questions/92774 shows, planetary separations aren't really as well-behaved as I had hoped.
• I just got a reply from Arnold Barmettler, who runs calsky.com:
I'm using a very time consuming iterative approach to pre-calculate Bessel Elements for conjunctions. This allows fast online calculation for any place on earth. Initial calculations are only done every few years, so CPU time does not matter. This would change if I'd enter the business to calculate asteroidal occultations.
re-iterating (pun intended) the same theme.
Miscellaneous:
• I used planetary system barycenters (the center of mass of a planet and its moons) for an entirely different reason. If you ask HORIZONS (http://ssd.jpl.nasa.gov/?horizons) for the position of Mars and set the date, you'll see this notice:
Available time span for currently selected target body: 1900-Jan-04 to 2500-Jan-04 CT.
However, if you use Mars' barycenter, this becomes:
Available time span for currently selected target body: BC 9998-Mar-20 to AD 9999-Dec-31 CT.
In other words, NASA computes the position of Mars' planetary system barycenter for a much longer interval than they compute Mars' actual position. Since I wanted to compute conjunctions for a long period of time, I went with the barycenters (DE431 computes barycenters even beyond 9998 BC and 9999 AD).
I've complained that this is silly, especially for Mars, since the distance between Mars' center and Mars' planetary system barycenter is only about 20cm (yes, centimeters, since Phobos and Deimos have very little mass) per http://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/aareadme_de430-de431.txt. However, NASA apparently plans to keep it this way.
• I ignore light travel time, which introduces a small error. Most planetarium programs, like Stellarium, ignore light travel time by default, although Stellarium has an option to turn it on.
• I also ignore refraction, aberration, and similar minor effects.
More details on how I generated these tables (in highly fragmented form):
https://github.com/barrycarter/bcapps/tree/master/ASTRO
(README.conjuncts in the above is a good starting point)
Some of the "cooler" conjunctions I found are at: http://search.astro.barrycarter.info/table.html. Here's a screenshot:
I worked out this equation to determine the cycle length between conjunctions of two planets:
$$t = \frac{a \cdot b}{b-a}$$
where:
t = length of conjunction cycle (ie time between two conjunctions), in Earth days
a = year length for lower-orbit planet, in Earth days
b = year length for higher-orbit planet, in Earth days
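As a rough worked illustration (added here; the period values are approximate, and as the comment below notes, real spacings vary because the orbits are elliptical): for Venus ($a \approx 224.7$ days) and Jupiter ($b \approx 4332.6$ days),

$$t = \frac{224.7 \cdot 4332.6}{4332.6 - 224.7} \approx 237 \text{ days},$$

i.e. Venus-Jupiter conjunctions recur roughly every 237 days on average.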
• The synodic period is certainly important, but it's not sufficient for precise calculation of conjunctions over long time spans. Bodies with elliptical orbits don't move with constant angular speed, which makes things messy. – PM 2Ring Aug 9 '19 at 6:04
|
# zbMATH — the first resource for mathematics
Interpretations of probability. Reprint. (English) Zbl 1060.81003
Utrecht: VSP (ISBN 90-6764-310-6/hbk). 228 p. (2003).
The book is a reprint of the author’s earlier publication [Interpretations of probability. Utrecht: VSP (1999; Zbl 0998.81508)]. The contents of the book are well described in the publisher’s description given as a review of the above first edition. It seems reasonable only to mention other books by Khrennikov dealing with related subjects: Non-archimedean analysis: quantum paradoxes, dynamical systems and biological models. Dordrecht: Kluwer Academic Publishers (1997; Zbl 0920.11087); Non-Archimedean analysis and its applications. Moscow: Fizmatlit (2003; Zbl 1104.46047); with M. Nilsson, $p$-adic deterministic and random dynamics. Dordrecht: Kluwer Academic Publishers (2004; Zbl 1135.37003).
##### MSC:
81P05 General and philosophical questions in quantum theory
60-02 Research exposition (monographs, survey articles) pertaining to probability theory
81-02 Research exposition (monographs, survey articles) pertaining to quantum theory
60A05 Axioms; other general questions in probability
28Axx Classical measure theory
46N10 Applications of functional analysis in optimization, convex analysis, mathematical programming, economics
46N50 Applications of functional analysis in quantum physics
91E10 Cognitive psychology
|
Question: Is there a better way to improve the design via blocking or contrasts while using limma-voom for the mentioned datasets?
vd4mmind0 wrote:
I have 2 RNA-Seq datasets (tumors) coming from different countries: n1 = 35 in-house samples (tumors and normals) and n2 = 99 external samples (tumors only). When I plot the log-normalized data via glMDSplot I see 3 distinct clusters (one for the in-house data and 2 from the external data). I used SVA with 5 surrogate variables to correct them, which handles not only the lab differences but also the split within the external data, for which it was not apparent why there was such a big difference. Since I downloaded the external data from a consortium where the meta-information does not say which centers the samples came from, I could not define a batch variable a priori, so the batch correction proceeded with SVA.
Having corrected via SVA, my interest now is just to find DEGs within the n2 = 99 external samples, which are classified into 2 different tumor classes (T1 and T2). I only want to compare T2 vs T1, using the SVA values as factors in the design together with the information that the tumors are either solid or liquid. So I created a design matrix.
cond <- factor(batch$tumorType[1:99])  # the main condition on which I want to perform DE
gr <- factor(batch$dataType[1:99])     # another factor the tumors have: some are solid and others are liquid tumors
sv.5 <- sv$sv[1:99,]   # using the SVA values as factors in the additive model
design <- model.matrix(~sv.5+cond+gr)
design
   (Intercept)  sv.51      sv.52  sv.53  sv.54  sv.55 condT2 grSol
1            1 -0.068  8.472e-02 -0.032  0.020  0.026      0     0
2            1 -0.051  6.310e-02  0.154 -0.069 -0.080      0     1
3            1 -0.094  9.266e-02  0.054 -0.082  0.068      0     1
4            1 -0.059  6.381e-02  0.046  0.126 -0.059      0     1
5            1 -0.076  4.110e-03 -0.182 -0.121 -0.065      0     1
6            1 -0.102  6.418e-02  0.064 -0.291 -0.085      0     1
...
93           1 -0.078  7.718e-02  0.164 -0.031 -0.066      1     1
94           1 -0.074 -6.784e-03 -0.061  0.120 -0.138      1     1
95           1 -0.077  6.435e-02 -0.035  0.067 -0.100      1     0
96           1 -0.069  8.491e-02  0.344  0.032 -0.106      1     1
97           1 -0.081  4.205e-02  0.302 -0.048 -0.021      1     1
98           1 -0.065  1.633e-02  0.055  0.049 -0.093      1     1
99           1 -0.064  7.848e-02 -0.029  0.024 -0.025      1     0

nf <- calcNormFactors(counts, method = "TMM")
dge <- DGEList(counts = counts, group = cond, norm.factors = nf)
v <- voom(dge, design, plot=TRUE)
fit <- lmFit(v, design)
fit <- eBayes(fit)
top <- topTable(fit, coef=ncol(design), n = nrow(counts))  # 20k genes in topTable
sum(top$adj.P.Val<0.05)  # 6646 genes
# recalculating FDR, since I only filtered out zero counts, by re-estimating based on FC and expression
ww <- which(abs(top$logFC)>log2(2) & top$AveExpr > 0)
top$fdr <- 1
top$fdr[ww] <- p.adjust(top$P.Value[ww], method="fdr")
dim(top)
length(ww)
degs <- top[top$fdr < 0.05,]
degs2.cond <- degs[abs(degs$logFC)>1, ]
dim(degs2.cond)  # 1293 degs

Now the above DEGs are only taking into account the factor level group (solid vs liquid) and masking the condition (T1 vs T2 tumor types). If I remove the group from the design and proceed with just condition = tumorType (T1 vs T2), then I find only 120 DEGs, of which half show the trend of the condition. The rest of the DEGs are not very clean when projected in a heatmap. I am not getting a perfect clustering, but I still see some differences between the tumor types. How can I improve the design? I am sure I am not giving the perfect model matrix: should I include some blocking or contrasts so that I can still preserve the effect of tumorType (perform T1 vs T2) and not let the dataType (solid vs liquid) mask the DE analysis? Is there a way to proceed for this using voom? Can anyone give me pointers about this?

Ryan C. Thompson wrote:
This isn't a direct answer, but your FDR calculation is flawed, and will likely result in overstating your significance (i.e. underestimating the true FDR), because you are filtering by logFC, which is not independent of the p-value. You should instead consider using the treat function to test for a logFC significantly greater than a nonzero threshold. Also, the average expression filter is appropriate, but should generally be done before the voom step.

vd4mmind0 wrote:
I understand what you say, but if you consider genes that are still significant with low fold changes as coming with better FDR estimates, then one can, in fact, re-estimate the FDR based on average expression and logFC if the starting expression set is not filtered. I am using such an FDR calculation because my starting expression set does not involve any a priori filtering based on average expression or read counts; I only remove genes from the count table that have zero counts. If I had chosen an expression set a priori with higher average expressed genes, then this method would not be required, since the FDR estimated by limma-voom would still be pretty good. I am starting with the same gene list on which SVA was performed, which was also filtered only for zero counts. I agree that if I reduce the gene set based on average expression a priori before feeding it to voom, I will not need to re-estimate the FDR. But that will still not account for my design. Is there also a flaw that you find in the design, and can you give some pointers that I can implement to take care of the comparison I intend to do?

Aaron Lun wrote:
> Now the above DEGs are only taking into account the factor level group(solid vs liquid) and masking the condition(T1 vs T2 tumor types).

Well, obviously. You've specified coef=ncol(design), and your design matrix suggests that the last coefficient should represent the group effect, i.e., of solid vs liquid tumours. So this will test for DE between solid and liquid tumours, blocking on all the other factors. If you want to test for differences between T1 and T2, you should start by specifying the correct coef in the topTable call.

> I remove the group from the design and proceed with just condition=tumorType(T1 vs T2) then I find only 120 degs of which half of them show the trend of the condition.

This is directly caused by inflated variances when you have factors of variation that are not captured in your design matrix. Also, I don't understand what you mean by "trend of the condition", nor do I understand what "Rest DEGs" are in the next sentence.

> I understand what you say but if you consider of genes that are still significant with low fold changes coming as better FDR estimates...

I don't know what you mean by this, but let me assure you, Ryan is right and your filtering strategy is wrong. You cannot apply the BH correction on genes that have been filtered by log-fold change. This is guaranteed to enrich for false positives as the log-fold change is not independent of the p-value under the null hypothesis. Consider this simple scenario:

set.seed(1000)
y <- matrix(rnorm(100000), ncol=10)
design <- model.matrix(~gl(2,5))
fit <- lmFit(y, design)
fit <- eBayes(fit)
res <- topTable(fit, coef=2)
sum(res$adj.P.Val <= 0.05) # 0
ww <- abs(res$logFC) > 1
new.fdr <- p.adjust(res$P.Value[ww], method="BH")
sum(new.fdr <= 0.05) # 10
Congratulations; you've just detected 10 genes in a data set with no true differences. Further investigation will reveal that this filtering approach distorts the p-value distribution under the null hypothesis, rendering it non-uniform and thereby breaking the BH correction. Proper filtering should be independent of the test statistic; there is some literature about this - check it out.
P.S. I just realized that you are trying to merge an in-house data set with an external data set. This is rarely a good idea for DE analyses; for starters, the mean-variance relationship is unlikely to be the same between batches, which reduces the accuracy of the modelling. The batch effect may also distort TMM normalization by causing genes to become "DE". Statistics aside, common sense should prevail here; do you really want to compare your in-house normal samples to tumour samples collected and sequenced by someone else at a different time and in a different lab? A much safer approach is to perform DE analyses separately within each batch and to perform a meta-analysis to extract the features of interest; see Merge different datasets regarding a common number of DE genes in R.
Thanks for the explanation. I was trying to understand why it is wrong to re-estimate the FDR from limma, since, as far as I have read, these packages do control the false discovery rate. We also have no guarantee about which genes are the true positives in the comparison. Still, when we run p.adjust, should we not restrict attention to genes with some minimum average expression and logFC and calculate the FDR for those as a control measure? I raise this because the gene table I feed into the DE analysis has no expression threshold: I only remove genes with zero counts, not genes below a read-count threshold. If I started from a smaller expression set filtered by read count, I would obviously get better FDR estimates. So why is it wrong to re-compute the FDR for genes above a certain logFC and average expression? I may be interpreting this wrongly, but a re-clarification would be helpful. I would in any case consider restricting the set of genes tested for DE rather than re-computing the FDR, but I fail to understand why, if I do not remove lowly expressed genes a priori, it is wrong to re-compute the FDR using a logFC cutoff together with the average expression; I am not using the logFC alone, and I know that doing so would be wrong.

Ryan C. Thompson replied:

Aaron has already given you a simulated example where FDR calculation after logFC filtering clearly gives the wrong answer, and he has given you a link to a publication with more information about the topic. We have already explained that filtering genes with low average expression is appropriate and will not compromise the FDR calculation. You seem to believe that combining both filters is acceptable even though filtering only on logFC is not, which is absurd. That is like saying that a diet of vegetables and cookies is healthy because vegetables are healthy. If you want a non-zero logFC threshold, use the treat function as I have already recommended. The treat function provides a way to test a logFC threshold while properly controlling the FDR.

vd4mmind replied:

Sure, thanks for the heads up.

Dear Aaron, I re-did the analysis. What I did here is first perform SVA on the public data and our data together (I want to project all the data together) and correct them for the confounders. As I said, I actually need to find the differences within the public data, between the T2 and T1 classes. However, the data have an internal batch, as I see in the MDS plot; this is why I corrected with SVA, since the metadata do not record which centers the data come from. So what I did was simply perform the DE analysis with voom on the public data and plot the DEGs on the SVA-corrected data. I did not use the SVs as covariates in the design matrix this time. What I wanted to use is a design matrix with cond = tumorType (levels c(T1, T2)) and group = c(Solid, Liquid). Below is the code. I want to test DE for T2 vs T1 but block the effect of Solid vs Liquid. Can you tell me whether this is the correct way to account for it, or should I use something else? I construct the design matrix with both factors but use coef=2 for topTable.

cond <- factor(batch$tumorType[1:99])  # the main condition on which I want to perform DE
gr <- factor(batch$dataType[1:99])  # a second factor: some tumors are solid and others are liquid
design <- model.matrix(~cond+gr, data=counts)
design
   (Intercept) condT2 grSol
1            1      0     0
2            1      0     1
3            1      0     1
4            1      0     1
5            1      0     1
6            1      0     1
...
94           1      1     1
95           1      1     0
96           1      1     1
97           1      1     1
98           1      1     1
99           1      1     0

dge <- DGEList(counts = counts)
d <- calcNormFactors(dge, method = "TMM")
v <- voom(d, design, plot=TRUE)
fit <- lmFit(v, design)
fit <- eBayes(fit)
top <- topTable(fit, coef=2, n = nrow(counts))
sum(top$adj.P.Val<0.05)
ind<-which(top$adj.P.Val<0.05 & abs(top$logFC)>1)
degs_0.05_1<-top[ind,]
dim(degs_0.05_1)
#[1] 482 6
The above code gives me 482 DEGs which, when I plot them on the SVA-corrected data, mostly show clustering by T2 vs T1, and the effect of Solid vs Liquid no longer dominates. I would like some feedback on whether this is fine or not. Or is it advisable to use the SVs as batch covariates? In that case, how should I control for Solid vs Liquid and just perform the DE test for T2 vs T1 — would it still be the same coef=2? Looking for some feedback. Thanks.
Aaron Lun replied:
1. The coef=2 is correct if you want to test for the T1 vs T2 effect.
2. You should use treat rather than filtering manually on the log-fold change (a sketch follows this list).
3. It is fine to block on the SVA-identified factors as well, just put them into the design matrix. However, I just realized you're merging in-house data with external data, see my edit above.
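A minimal sketch of how points 1–3 might look in code, assuming the counts, cond and gr objects from the thread; the sv matrix of surrogate variables is hypothetical here and stands in for whatever sva() returned earlier in the analysis:

library(limma)
library(edgeR)
# 'sv' is a hypothetical matrix of surrogate variables from sva();
# adding its columns to the design blocks on the SVA-identified factors.
design <- model.matrix(~ cond + gr + sv)
dge <- calcNormFactors(DGEList(counts = counts), method = "TMM")
v <- voom(dge, design, plot = TRUE)
fit <- lmFit(v, design)
fit <- treat(fit, lfc = 1)               # test against |log2FC| > 1 instead of post-hoc filtering
top <- topTreat(fit, coef = 2, n = Inf)  # coef = 2 is still the T2 vs T1 effect
sum(top$adj.P.Val < 0.05)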
|
Locate entities
locate_entity(entity, xpos = 0, ypos = 0, size = 1, angle = 0, ...)
## Arguments
entity  The entity to be placed
xpos    The horizontal location of the entity
ypos    The vertical location of the entity
size    Parameter controlling the size of the entity
angle   Parameter controlling the orientation of the entity
...     Other arguments are ignored
## Value
A tibble with four columns: x, y, id and type
## Details
When a jasmine entity is created it is implicitly assumed to be located at the origin (xpos = 0, ypos = 0), to have size 1, and to have a horizontal orientation (angle = 0). The locate_entity function allows the entity to be transformed in simple ways: translation, dilation and rotation.
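A brief usage sketch (not part of the original help page): the entity below is, purely for illustration, assumed to be a tibble in the same x/y/id/type layout that locate_entity returns; in practice it would come from one of the package's entity constructors.

library(tibble)
# Hypothetical entity: three points sitting at the origin with unit size
my_entity <- tibble(x = c(0, 1, -1), y = c(1, 0, 0), id = 1L, type = "entity")
# Shift right and up, halve the size, and rotate
located <- locate_entity(my_entity, xpos = 2, ypos = 1, size = 0.5, angle = 45)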
|
# Joseph Fry
Joseph Fry was the husband of Elizabeth Fry, who married him when she was 20.
|
# Rationality for isobaric automorphic representations: the CM-case
@article{Grobner2018RationalityFI,
title={Rationality for isobaric automorphic representations: the CM-case},
author={Harald Grobner},
journal={Monatshefte für Mathematik},
year={2018},
volume={187},
pages={79 - 94}
}
• H. Grobner
• Published 21 May 2018
• Mathematics
• Monatshefte für Mathematik
In this note we prove a simultaneous extension of the author's joint result with M. Harris for critical values of Rankin–Selberg L-functions $L(s,\Pi \times \Pi ')$ (Grobner and Harris in J Inst Math Jussieu 15:711–769, 2016, Thm. 3.9) to (i) general CM-fields $F$ and (ii) cohomological automorphic representations $\Pi '=\Pi _1\boxplus \cdots \boxplus \Pi _k$ which are the isobaric sum of unitary cuspidal automorphic representations $\Pi _i$ of general linear groups of…
8 Citations
Relations of rationality for special values of Rankin–Selberg L-functions of GLn×GLm over CM-fields
• Mathematics
• 2020
In this paper we present a bridge between automorphic forms of general reductive groups and motives over number fields, hinting at a translation of Deligne's conjecture for motivic L-functions into a
Deligne's conjecture for automorphic motives over CM-fields, Part I: factorization
• Mathematics
• 2018
This is the first of two papers devoted to the relations between Deligne's conjecture on critical values of motivic $L$-functions and the multiplicative relations between periods of arithmetically
Special Values of L-functions for GL(n) Over a CM Field
• A. Raghuram
• Mathematics
International Mathematics Research Notices
• 2021
We prove a Galois-equivariant algebraicity result for the ratios of successive critical values of $L$-functions for ${\textrm GL}(n)/F,$ where $F$ is a totally imaginary quadratic extension of a
Algebraicity of ratios of special values of Rankin-Selberg $L$-functions and applications to Deligne's conjecture
In this paper, we prove new cases of Blasius’ and Deligne’s conjectures on the algebraicity of critical values of tensor product L-functions and symmetric odd power L-functions associated to modular
L -values of Elliptic Curves twisted by Hecke Grössencharacters
Elliptic curves twisted by Grössencharacters; modular forms associated to Grössencharacters. Abstract: Let E/K be an elliptic curve over an imaginary quadratic field K
On Deligne’s conjecture for symmetric fifth L-functions of modular forms
Abstract We prove Deligne’s conjecture for symmetric fifth L-functions of elliptic newforms of weight greater than 5. As a consequence, we establish period relations between motivic periods
On the Schwartz space $${\mathcal {S}}(G(k)\backslash G({\mathbb {A}}))$$
• Mathematics
Monatshefte für Mathematik
• 2020
For a connected reductive group $G$ defined over a number field $k$, we construct the Schwartz space $\mathcal{S}(G(k)\backslash G(\mathbb{A}))$. This space is an adelic version of Casselman's
Archimedean period relations and period relations for Rankin-Selberg convolutions
• Mathematics
• 2021
We prove the Archimedean period relations for Rankin-Selberg convolutions for GL(n) × GL(n − 1). This implies the period relations for critical values of the Rankin-Selberg L-functions for
## References
Showing 1–10 of 45 references
Automorphic Forms on GL(2)
In [3] Jacquet and I investigated the standard theory of automorphic forms from the point of view of group representations. I would like on this occasion not only to indicate the results we obtained
On the Special Values of Certain Rankin-Selberg L-Functions and Applications to Odd Symmetric Power L-Functions of Modular Forms
We prove an algebraicity result for the central critical value of certain Rankin-Selberg L-functions for GL n × GL n−1 . This is a generalization and refinement of the results of Harder [14],
On some arithmetic properties of automorphic forms of GL(m) over a division algebra
• Mathematics
• 2011
In this paper we investigate arithmetic properties of automorphic forms on the group G' = GL_m/D, for a central division-algebra D over an arbitrary number field F. The results of this article are
Automorphic Representations, Shimura Varieties, and Motives. Ein Märchen*
1. Introduction. It had been my intention to survey the problems posed by the study of zetafunctions of Shimura varieties. But I was too sanguine. This would be a mammoth task, and limitations of
The unitary dual of GL(n) over an archimedean field
for the group of invertible n by n matrices with entries in F. We determine explicitly the set Ĝ of equivalence classes of irreducible unitary representations of G. For F = ℂ, the answer has been
Functoriality for the quasisplit classical groups
• Mathematics
• 2011
Functoriality is one of the most central questions in the theory of automorphic forms and representations [3, 6, 31, 32]. Locally and globally, it is a manifestation of Langlands’ formulation of a
The Multiplicity One Theorem for GL n
The purpose of this paper is to establish preliminary results in the theory of representations in the direction of a systematic treatment of Hecke-theory for the group GLn. In particular, special
Cohomology of arithmetic groups, parabolic subgroups and the special values of $L$-functions on $\mathrm{GL}_n$
• J. Mahnkopf
• Mathematics
Journal of the Institute of Mathematics of Jussieu
• 2005
Let $\pi$ be a cuspidal automorphic representation of $\mathrm{GL}_n(\mathbb{A}_{\mathbb{Q}})$ with non-vanishing cohomology. Under a certain local non-vanishing assumption we prove the rationality
WHITTAKER PERIODS, MOTIVIC PERIODS, AND SPECIAL VALUES OF TENSOR PRODUCT $L$ -FUNCTIONS
• Mathematics
Journal of the Institute of Mathematics of Jussieu
• 2015
Let ${\mathcal{K}}$ be an imaginary quadratic field. Let ${\rm\Pi}$ and ${\rm\Pi}^{\prime }$ be irreducible generic cohomological automorphic representation of $\text{GL}(n)/{\mathcal{K}}$ and
Eisenstein Cohomology for GL(N) and ratios of critical values of Rankin-Selberg L-functions - I
• Mathematics
• 2014
The aim of this article is to study rank-one Eisenstein cohomology for the group GL(N)/F, where F is a totally real field extension of Q. This is then used to prove rationality results for ratios of
|
How can I know when I have added an excess of magnesium oxide?
The following instructions were used to prepare magnesium sulfate crystals, $\ce{MgSO4 . 7H2O}$.
1. Measure $50~ \mathrm{cm^3}$ of dilute sulfuric acid into a beaker and warm the solution.
2. Using a spatula, add some magnesium oxide and stir the mixture. Continue adding the magnesium oxide until excess is present.
3. Separate the excess magnesium oxide from the solution of magnesium sulfate.
4. Heat the solution until crystals form. Obtain the crystals and dry them.
The question I have an issue with is this: How would you know when excess magnesium oxide is present in step 2?
The answer key says that I would know when no more solid (that is, no more magnesium oxide) dissolves. However, since we are talking about dilute sulfuric acid, there should be water present, and I read on the internet that $\ce{MgO}$ is soluble in water.
• Check the solubility of magnesium oxide. Oct 8 '14 at 1:57
• In what sense? As in how much can dissolve in water or if it can dissolve in water at all? Or are you talking about acids Oct 8 '14 at 2:00
$\ce{MgO}$ is insoluble in water (solubility: 0.0086 g/100 mL). On the other hand, $\ce{MgSO4}$ is soluble in water (solubility: 25.5 g/100 mL).
The reaction that is taking place is an acid-base reaction:
$$\ce{MgO + H2SO4 -> MgSO4 + H2O}$$
Now for your question, how would you know when excess $\ce{MgO}$ is present? Here you are trying to make $\ce{H2SO4}$ the limiting reagent. When all the $\ce{H2SO4}$ has reacted, no more $\ce{MgO}$ can further react. Therefore, you will see solid in your reaction mixture that cannot be further dissolved.
At the end of your reaction,
$$\ce{MgO_{excess} + H2SO4 -> MgSO4 + H2O + MgO_{remaining}}$$
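As a rough worked example (the acid concentration is not given in the instructions, so a value of $1~\mathrm{mol~dm^{-3}}$ is assumed here purely for illustration):

$$n(\ce{H2SO4}) = 0.050~\mathrm{dm^3} \times 1~\mathrm{mol~dm^{-3}} = 0.050~\mathrm{mol}$$
$$m(\ce{MgO})_\text{reacting} = 0.050~\mathrm{mol} \times 40.3~\mathrm{g~mol^{-1}} \approx 2.0~\mathrm{g}$$

Under that assumption only about $2~\mathrm{g}$ of $\ce{MgO}$ can be consumed by the acid; anything added beyond this remains as undissolved solid, which is the practical sign that excess is present.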
• Thank you! But I am a little confused as to why other sites (BBC Bitesize, for one) say it is soluble in water to form $\ce{Mg(OH)2}$ Oct 8 '14 at 2:05
• @dadadok Both MgO and Mg(OH)2 are insoluble - you can easily filter them. Note that for MgSO4, if the solution is too basic it will react with OH- to form Mg(OH)2, leading to low yield.
– t.c
Oct 8 '14 at 2:08
• Thanks! This is pretty much the kind of response I was looking for... and goes some way towards relieving exam stress. Thanks again. Oct 8 '14 at 2:10
• @dadadok Mg(OH)2 is just barely soluble in water, that's why you're finding some sources saying it's insoluble and others saying it is soluble. The same is true for MgO, which will dissolve to a very small extent in water. "Very small" and "barely" here mean only a few thousandths of a gram per 100mL of water. Oct 8 '14 at 12:30
|
# Additive Model with Linear Smoother
##### Posted on Dec 07, 2021
There are a number of problems associated with $p$-dimensional smoothers.
Take a different approach and use the one-dimensional smoother as a building block for a restricted class of nonparametric multiple regression models.
The additive model takes the form
$E(Y_i\mid x_{i1},\ldots, x_{ip}) = \sum_{j=1}^pf_j(x_{ij})$
The model is a special case of each of the following:
• PPR (projection pursuit regression)
• ALS (alternating least squares)
• ACE (alternating conditional expectation)
## Additive Model and Its Normal Equations
### Least Squares on Populations
The optimization problem is to minimize
$E(Y-g(X))^2$
over $g(X)=\sum_{j=1}^pf_j(X_j)\in \cH^{add}$. Without the additivity restriction, the solution is simply $E(Y\mid X)$.
By assumption, $\cH^{add}$ is a closed subspace of $\cH$, so this minimum exists and is unique, but the individual functions $f_i(X_i)$ may not be uniquely determined.
The minimizer can be characterized by residuals $Y-g(X)$ which are orthogonal to the space of fits: $Y-g(X)\ind \cH^{add}$. Since $\cH^{add}$ is generated by $\cH_i$, we have equivalently: $Y-g(X)\ind \cH_i$, or $P_i(Y-g(X))=0$. Componentwise this can be written as
$f_i(X_i) = P_i(Y-\sum_{j\neq i}f_j(X_j))$
### Penalized Least Squares
For single-smoother,
$\Vert y-f\Vert^2 + \lambda f^TKf$
Assuming the inverses exist, the stationary condition implies
$f = (I+\lambda K)^{-1}y$
that is
$S = (I+\lambda K)^{-1}$
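To see where $S$ comes from, set the gradient of the penalized criterion to zero (using that $K$ is symmetric):

$\frac{\partial}{\partial f}\left[\Vert y-f\Vert^2 + \lambda f^TKf\right] = -2(y-f) + 2\lambda Kf = 0 \quad\Longrightarrow\quad (I+\lambda K)f = y$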
Then, characterize $f=Sy$ as a stationary solution of
$\Vert y - f\Vert^2 + f^T(S^{-1}-I)f$
Extend to additive regression by penalizing the RSS separately for each component function,
$Q(f) = \Vert y - \sum_{j=1}^pf_j\Vert^2 + \sum_{j=1}^pf_j^T(S_j^{-1}-I)f_j$
A proof of the existence of a solution for cubic smoothing splines is given in the appendix.
### Algorithms for solving the normal equations
The Gauss-Seidel method is only one technique in the large class of iterative schemes called successive over-relaxation (SOR) methods. They differ from ordinary Gauss-Seidel procedures in how far one proceeds in the direction of the Gauss-Seidel update:
$f_j\leftarrow (1-\omega)f_j + \omega S_j\left(y-\sum_{k\neq j}f_k\right)$
If the Gauss-Seidel algorithm converges, so do successive over-relaxation iterations for relaxation parameters $0 < \omega < 2$.
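A minimal R sketch of this update (added for illustration, not from the original notes), using smooth.spline as each one-dimensional smoother $S_j$ and centering every component fit so the constant is carried by a separate intercept; $\omega = 1$ gives plain Gauss-Seidel backfitting:

# Backfitting with relaxation parameter omega (omega = 1 is plain Gauss-Seidel).
backfit <- function(y, X, omega = 1, n.iter = 50) {
  p <- ncol(X); n <- length(y)
  f <- matrix(0, n, p)      # current component fits f_1, ..., f_p
  alpha <- mean(y)          # intercept, kept separate from the smooth terms
  for (it in seq_len(n.iter)) {
    for (j in seq_len(p)) {
      partial <- y - alpha - rowSums(f[, -j, drop = FALSE])  # partial residuals
      sj <- predict(smooth.spline(X[, j], partial), X[, j])$y
      f[, j] <- (1 - omega) * f[, j] + omega * (sj - mean(sj))
    }
  }
  list(alpha = alpha, f = f, fitted = alpha + rowSums(f))
}

# Small example with two additive components
set.seed(1)
n <- 200
X <- cbind(runif(n, -2, 2), runif(n, -2, 2))
y <- sin(X[, 1]) + 0.5 * X[, 2]^2 + rnorm(n, sd = 0.2)
fit <- backfit(y, X)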
The numerical analysis literature also distinguishes between successive and simultaneous iterations, referred to as
• Gauss-Seidel
• Jacobi
### Summary of Consistency, Degeneracy, and Convergence Results
It is not a priori clear when the normal equations are consistent (when solutions exist). Nor is it clear when the equations are nondegenerate (when the solutions are unique).
Nondegeneracy implies consistency. However, the normal equations are almost always degenerate.
1. For symmetric smoothers with eigenvalues in [0, 1], the normal equations always have at least one solution
2. The solution is unique unless there exists a $g\neq 0$ such that $\hat Pg = 0$, a phenomenon we call “concurvity”. This implies for any solution $f$, $f+\alpha g$ is also a solution for any $\alpha$
### Consistency
If each $S_j$ is symmetric with eigenvalues in $[0, 1]$, the normal equations are consistent for every $y$
If the smoothers $S_j$ are symmetric with eigenvalues in $[0, 1)$, the solutions of the normal equations can be written as $f_j = A_j(I+A)^{-1}y$, where $A_j = (I-S_j)^{-1}S_j$ and $A = \sum_jA_j$.
### Degeneracy of smoother-based normal equations
Collinearity detection as part of regression diagnostics is a must in every good regression analysis. Practitioners are usually concerned with approximate collinearity and its inflationary effects on the standard errors of regression coefficients.
The term “collinearity” refers to linear dependencies among predictors as the cause of degeneracy, the term “concurvity” has been used to describe nonlinear dependencies which lead to degeneracy in additive models.
In a technical sense, concurvity boils down to collinearity of nonlinear transforms of predictors. It is more intuitive to think of it as an additive dependence $f_+=0$.
Exact concurvity is defined as the existence of a nonzero solution of the corresponding homogeneous equations
$\hat Pg = 0$
If such a $g$ exists, and if $f$ is a solution to $\hat Pf=\hat Qy$, then so is $f+\omega g$ for any $\omega$, and thus infinitely many solutions exist.
The set of all nonzero solutions to the homogeneous equations $\hat Pg=0$ will be called concurvity space for the normal equations.
It is easy to check that
$g = \begin{bmatrix} \alpha 1\\ -\alpha 1 \end{bmatrix}$
lies in the concurvity space of the two-smoother problem if they both reproduce constants. Similarly, for $p$ such smoothers, the concurvity space has dimension at least $p-1$.
Consider $y=0$,
$Q(g) = \Vert g_+\Vert^2 + \sum_{j=1}^p g_j^T(S_j^{-1}-I)g_j\,,$
defined for $g_j\in\cR(S_j)$.
If the smoothers $S_j$ are all symmetric with eigenvalues in $[0,1]$, a vector $g\neq 0$ with $g_j\in \cR(S_j)$ represents a concurvity ($\hat Pg=0$) iff one of the following equivalent conditions is satisfied
1. $Q(g) = 0$, that is, $g$ minimizes $Q$
2. $g_j\in \cM_1(S_j),j=1,\ldots,p$, and $g_+=0$.
Condition 2 implies that exact concurvity is exact collinearity if, for example, all smoothers are cubic spline smoothers.
Remark: if $S_j,j=1,\ldots,p$ are symmetric with eigenvalues in $[0, 1)$, then $\hat P$ is nonsingular.
In practice, we separate the constant term in the additive model, and adjust each of the smooth terms to have mean 0.
For two smoothers, there exists exact concurvity iff $f_1=(S_1S_2)f_1$ for some $f_1\neq 0$
If $\Vert S_1S_2\Vert < 1$ for some matrix norm, concurvity does not exist.
For two symmetric smoothers with eigenvalues in $(-1, +1]$, concurvity exists iff $\cM_1(S_1)\cap \cM_1(S_2)\neq 0$.
It has again the consequence that exact concurvity, e.g., for a pair of cubic spline smoothers, can only be an exact collinearity between the untransformed predictors, since cubic splines preserve constant and linear fits (what is the logic?).
Uniqueness of the additive minimizer is guaranteed if there is no collinearity.
### The Convergence of backfitting: $p$ smoothers
If all the smoothers $S_j$ are symmetric with eigenvalues in $[0, 1]$, then the backfitting algorithm converges to some solution of the normal equations.
### Convergence of backfitting for two smoothers
Decompose $S_1=\tilde S_1+H_U$ and $S_2=\tilde S_2+H_U$, where $H_U$ is the orthogonal projection onto $U = \cM_1(S_1)\cap \cM_1(S_2)$. We have $\tilde S_jH_U=H_U\tilde S_j=0$ and $\Vert \tilde S_1\tilde S_2\Vert_2 < 1$.
Consider first $y$ and $f_2^0$ in $U^\ind$, and second $y$ and $f_2^0$ in $U$.
If $S_1$ and $S_2$ are symmetric with eigenvalues in $(-1, 1]$, then the Gauss-Seidel algorithm converges to a solution of the normal equations
The components $f_1^\infty$ and $f_2^\infty$ can be decomposed into
• the part within $U^\ind$ which is uniquely determined and depends on the data $y$ only
• the part within $U$ which depends on the sequence of iteration and the initialization $f_2^0$
### Modified Backfitting Algorithm
1. Initialize $\tilde f_1,\ldots,\tilde f_p$, and set $\tilde f_+ = \tilde f_1+\tilde f_2+\cdots+\tilde f_p$
2. Regress $y-\tilde f_+$ onto the space spanned by $\cM_1(S_1),\ldots,\cM_1(S_p)$, that is, set $g=H(y-\tilde f_+)$
3. Fit an additive model to $y-g$ using smoothers $\tilde S_i$; this step yields an additive fit $\tilde f_+=\tilde f_1+\cdots +\tilde f_p$
4. Repeat steps 2 and 3 until convergence.
### A closer look at convergence
Consider the case of extreme collinearity, in which there are two identical covariates and the same cubic spline smoother matrix $S$ is used for both.
Starting the backfitting algorithm from $0$, the residual after $m$ smooths is given by
$r^m = [I-S+S^2-S^3+\cdots + (-1)^{m-1}S^{m-1}](I-S)y\rightarrow (I+S)^{-1}(I-S)y$
1. the residuals (and their norm) oscillate as they converge
2. the converged model is rougher than a single smoother.
3. by looking at every other iteration, it is clear that the norm of the residuals converges upwards, after every even number of steps
4. $r^2$ is the same as the “twicing” residual, where twicing enhances a smooth by adding in the smooth of the residual.
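A quick numerical check of the limiting residual above (added for illustration): build an explicit cubic smoothing spline smoother matrix $S$, backfit on two identical copies of the covariate, and compare the converged residual with $(I+S)^{-1}(I-S)y$:

set.seed(2)
n <- 50
x <- sort(runif(n))
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
# Explicit smoother matrix: column j is the smooth of the j-th unit vector
S <- sapply(seq_len(n), function(j) {
  e <- numeric(n); e[j] <- 1
  predict(smooth.spline(x, e, df = 6), x)$y
})
f1 <- f2 <- numeric(n)
for (it in 1:200) {       # backfit the two identical "covariates"
  f1 <- S %*% (y - f2)
  f2 <- S %*% (y - f1)
}
r.backfit <- y - f1 - f2
r.limit <- solve(diag(n) + S) %*% (diag(n) - S) %*% y
max(abs(r.backfit - r.limit))   # essentially zero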
Published in categories Note
|
# How to understand degrees of freedom?
From Wikipedia, there are three interpretations of the degrees of freedom of a statistic:
In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.
Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom (df). In general, the degrees of freedom of an estimate of a parameter is equal to the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself (which, in sample variance, is one, since the sample mean is the only intermediate step).
Mathematically, degrees of freedom is the dimension of the domain of a random vector, or essentially the number of 'free' components: how many components need to be known before the vector is fully determined.
The bold words are what I don't quite understand. If possible, some mathematical formulations will help clarify the concept.
Also do the three interpretations agree with each other?
• Check out this explanation Jul 28 '10 at 10:26
• Also see this question "What are degrees of freedom?" Oct 12 '11 at 22:12
• See Ye 1998 for a generalization of degrees of freedom. Oct 7 at 0:38
This is a subtle question. It takes a thoughtful person not to understand those quotations! Although they are suggestive, it turns out that none of them is exactly or generally correct. I haven't the time (and there isn't the space here) to give a full exposition, but I would like to share one approach and an insight that it suggests.
Where does the concept of degrees of freedom (DF) arise? The contexts in which it's found in elementary treatments are:
• The Student t-test and its variants such as the Welch or Satterthwaite solutions to the Behrens-Fisher problem (where two populations have different variances).
• The Chi-squared distribution (defined as a sum of squares of independent standard Normals), which is implicated in the sampling distribution of the variance.
• The F-test (of ratios of estimated variances).
• The Chi-squared test, comprising its uses in (a) testing for independence in contingency tables and (b) testing for goodness of fit of distributional estimates.
In spirit, these tests run a gamut from being exact (the Student t-test and F-test for Normal variates) to being good approximations (the Student t-test and the Welch/Satterthwaite tests for not-too-badly-skewed data) to being based on asymptotic approximations (the Chi-squared test). An interesting aspect of some of these is the appearance of non-integral "degrees of freedom" (the Welch/Satterthwaite tests and, as we will see, the Chi-squared test). This is of especial interest because it is the first hint that DF is not any of the things claimed of it.
We can dispose right away of some of the claims in the question. Because "final calculation of a statistic" is not well-defined (it apparently depends on what algorithm one uses for the calculation), it can be no more than a vague suggestion and is worth no further criticism. Similarly, neither "number of independent scores that go into the estimate" nor "the number of parameters used as intermediate steps" are well-defined.
"Independent pieces of information that go into [an] estimate" is difficult to deal with, because there are two different but intimately related senses of "independent" that can be relevant here. One is independence of random variables; the other is functional independence. As an example of the latter, suppose we collect morphometric measurements of subjects--say, for simplicity, the three side lengths $X$, $Y$, $Z$, surface areas $S=2(XY+YZ+ZX)$, and volumes $V=XYZ$ of a set of wooden blocks. The three side lengths can be considered independent random variables, but all five variables are dependent RVs. The five are also functionally dependent because the codomain (not the "domain"!) of the vector-valued random variable $(X,Y,Z,S,V)$ traces out a three-dimensional manifold in $\mathbb{R}^5$. (Thus, locally at any point $\omega\in\mathbb{R}^5$, there are two functions $f_\omega$ and $g_\omega$ for which $f_\omega(X(\psi),\ldots,V(\psi))=0$ and $g_\omega(X(\psi),\ldots,V(\psi))=0$ for points $\psi$ "near" $\omega$ and the derivatives of $f$ and $g$ evaluated at $\omega$ are linearly independent.) However--here's the kicker--for many probability measures on the blocks, subsets of the variables such as $(X,S,V)$ are dependent as random variables but functionally independent.
Having been alerted by these potential ambiguities, let's hold up the Chi-squared goodness of fit test for examination, because (a) it's simple, (b) it's one of the common situations where people really do need to know about DF to get the p-value right and (c) it's often used incorrectly. Here's a brief synopsis of the least controversial application of this test:
• You have a collection of data values $(x_1, \ldots, x_n)$, considered as a sample of a population.
• You have estimated some parameters $\theta_1, \ldots, \theta_p$ of a distribution. For example, you estimated the mean $\theta_1$ and standard deviation $\theta_2 = \theta_p$ of a Normal distribution, hypothesizing that the population is normally distributed but not knowing (in advance of obtaining the data) what $\theta_1$ or $\theta_2$ might be.
• In advance, you created a set of $k$ "bins" for the data. (It may be problematic when the bins are determined by the data, even though this is often done.) Using these bins, the data are reduced to the set of counts within each bin. Anticipating what the true values of $(\theta)$ might be, you have arranged it so (hopefully) each bin will receive approximately the same count. (Equal-probability binning assures the chi-squared distribution really is a good approximation to the true distribution of the chi-squared statistic about to be described.)
• You have a lot of data--enough to assure that almost all bins ought to have counts of 5 or greater. (This, we hope, will enable the sampling distribution of the $\chi^2$ statistic to be approximated adequately by some $\chi^2$ distribution.)
Using the parameter estimates, you can compute the expected count in each bin. The Chi-squared statistic is the sum of the ratios
$$\frac{(\text{observed}-\text{expected})^2}{\text{expected}}.$$
This, many authorities tell us, should have (to a very close approximation) a Chi-squared distribution. But there's a whole family of such distributions. They are differentiated by a parameter $\nu$ often referred to as the "degrees of freedom." The standard reasoning about how to determine $\nu$ goes like this
I have $k$ counts. That's $k$ pieces of data. But there are (functional) relationships among them. To start with, I know in advance that the sum of the counts must equal $n$. That's one relationship. I estimated two (or $p$, generally) parameters from the data. That's two (or $p$) additional relationships, giving $p+1$ total relationships. Presuming they (the parameters) are all (functionally) independent, that leaves only $k-p-1$ (functionally) independent "degrees of freedom": that's the value to use for $\nu$.
The problem with this reasoning (which is the sort of calculation the quotations in the question are hinting at) is that it's wrong except when some special additional conditions hold. Moreover, those conditions have nothing to do with independence (functional or statistical), with numbers of "components" of the data, with the numbers of parameters, nor with anything else referred to in the original question.
Let me show you with an example. (To make it as clear as possible, I'm using a small number of bins, but that's not essential.) Let's generate 20 independent and identically distributed (iid) standard Normal variates and estimate their mean and standard deviation with the usual formulas (mean = sum/count, etc.). To test goodness of fit, create four bins with cutpoints at the quartiles of a standard normal: -0.675, 0, +0.675, and use the bin counts to generate a Chi-squared statistic. Repeat as patience allows; I had time to do 10,000 repetitions.
The standard wisdom about DF says we have 4 bins and 1+2 = 3 constraints, implying the distribution of these 10,000 Chi-squared statistics should follow a Chi-squared distribution with 1 DF. Here's the histogram:
The dark blue line graphs the PDF of a $\chi^2(1)$ distribution--the one we thought would work--while the dark red line graphs that of a $\chi^2(2)$ distribution (which would be a good guess if someone were to tell you that $\nu=1$ is incorrect). Neither fits the data.
You might expect the problem to be due to the small size of the data sets ($n$=20) or perhaps the small size of the number of bins. However, the problem persists even with very large datasets and larger numbers of bins: it is not merely a failure to reach an asymptotic approximation.
Things went wrong because I violated two requirements of the Chi-squared test:
1. You must use the Maximum Likelihood estimate of the parameters. (This requirement can, in practice, be slightly violated.)
2. You must base that estimate on the counts, not on the actual data! (This is crucial.)
The red histogram depicts the chi-squared statistics for 10,000 separate iterations, following these requirements. Sure enough, it visibly follows the $\chi^2(1)$ curve (with an acceptable amount of sampling error), as we had originally hoped.
The point of this comparison--which I hope you have seen coming--is that the correct DF to use for computing the p-values depends on many things other than dimensions of manifolds, counts of functional relationships, or the geometry of Normal variates. There is a subtle, delicate interaction between certain functional dependencies, as found in mathematical relationships among quantities, and distributions of the data, their statistics, and the estimators formed from them. Accordingly, it cannot be the case that DF is adequately explainable in terms of the geometry of multivariate normal distributions, or in terms of functional independence, or as counts of parameters, or anything else of this nature.
We are led to see, then, that "degrees of freedom" is merely a heuristic that suggests what the sampling distribution of a (t, Chi-squared, or F) statistic ought to be, but it is not dispositive. Belief that it is dispositive leads to egregious errors. (For instance, the top hit on Google when searching "chi squared goodness of fit" is a Web page from an Ivy League university that gets most of this completely wrong! In particular, a simulation based on its instructions shows that the chi-squared value it recommends as having 7 DF actually has 9 DF.)
With this more nuanced understanding, it's worthwhile to re-read the Wikipedia article in question: in its details it gets things right, pointing out where the DF heuristic tends to work and where it is either an approximation or does not apply at all.
A good account of the phenomenon illustrated here (unexpectedly high DF in Chi-squared GOF tests) appears in Volume II of Kendall & Stuart, 5th edition. I am grateful for the opportunity afforded by this question to lead me back to this wonderful text, which is full of such useful analyses.
### Edit (Jan 2017)
Here is R code to produce the figure following "The standard wisdom about DF..."
#
# Simulate data, one iteration per column of x.
#
n <- 20
n.sim <- 1e4
bins <- qnorm(seq(0, 1, 1/4))
x <- matrix(rnorm(n*n.sim), nrow=n)
#
# Compute statistics.
#
m <- colMeans(x)
s <- apply(sweep(x, 2, m), 2, sd)
counts <- apply(matrix(as.numeric(cut(x, bins)), nrow=n), 2, tabulate, nbins=4)
expectations <- mapply(function(m,s) n*diff(pnorm(bins, m, s)), m, s)
chisquared <- colSums((counts - expectations)^2 / expectations)
#
# Plot histograms of means, variances, and chi-squared stats. The first
# two confirm all is working as expected.
#
mfrow <- par("mfrow")
par(mfrow=c(1,3))
red <- "#a04040" # Intended to show correct distributions
blue <- "#404090" # To show the putative chi-squared distribution
hist(m, freq=FALSE)
hist(s^2, freq=FALSE)
hist(chisquared, freq=FALSE, breaks=seq(0, ceiling(max(chisquared)), 1/4),
xlim=c(0, 13), ylim=c(0, 0.55),
col="#c0c0ff", border="#404040")
curve(ifelse(x <= 0, Inf, dchisq(x, df=2)), add=TRUE, col=red, lwd=2)
curve(ifelse(x <= 0, Inf, dchisq(x, df=1)), add=TRUE, col=blue, lwd=2)
par(mfrow=mfrow)
• This is an amazing answer. You win at the internet for this.
Dec 14 '11 at 3:00
• @caracal: as you know, ML methods for the original data are routine and widespread: for the normal distribution, for instance, the MLE of $\mu$ is the sample mean and the MLE of $\sigma$ is the square root of the sample standard deviation (without the usual bias correction). To obtain estimates based on counts, I computed the likelihood function for the counts--this requires computing values of the CDF at the cutpoints, taking their logs, multiplying by the counts, and adding up--and optimized it using generic optimization software.
– whuber
Dec 16 '11 at 14:35
• @caracal You probably no longer need it, but an example of R code for ML fitting of binned data now appears in a related question: stats.stackexchange.com/a/34894.
– whuber
Aug 22 '12 at 21:29
• Thanks for confirming me in not understanding the deeper meaning of DF wrt. statistics. I tried to read this up many times, and it never made any sense to me. Dec 3 '13 at 11:41
• @Clarinetist The principal point of my answer is to suggest that what you have been taught is based on a confusion of two concepts of DF. Although that confusion causes no problems for standard least-squares Normal-theory models, it leads to errors even in simple, common circumstances like analyses of contingency tables. That matrix rank gives the functional DF. In a least-squares linear model it happens to give the correct DF for certain kinds of tests, such as F tests. For the chi-squared test, the special conditions are enumerated later in the answer as points (1) and (2).
– whuber
Apr 28 '16 at 18:33
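For reference, here is a minimal sketch (an illustration added here, not whuber's original code) of the count-based maximum-likelihood fit described in requirements (1)–(2) and in the comment above: the parameters are estimated by maximizing the multinomial likelihood of the bin counts, and only then is the chi-squared statistic formed.

set.seed(1)
neg.loglik <- function(theta, counts, bins) {
  p <- diff(pnorm(bins, mean = theta[1], sd = exp(theta[2])))
  -sum(counts * log(p))
}
n <- 20
bins <- qnorm(seq(0, 1, 1/4))          # quartile cutpoints of the standard normal
x <- rnorm(n)
counts <- tabulate(cut(x, bins), nbins = 4)
est <- optim(c(mean(x), log(sd(x))), neg.loglik, counts = counts, bins = bins)$par
expected <- n * diff(pnorm(bins, est[1], exp(est[2])))
chisq <- sum((counts - expected)^2 / expected)
chisq                                   # compare with a chi-squared(1) reference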
Or simply: the number of elements in a numerical array that you're allowed to change so that the value of the statistic remains unchanged.
# for instance if:
x + y + z = 10
you can change, for instance, x and y at random, but you cannot change z (you can, but not at random, therefore you're not free to change it - see Harvey's comment), 'cause you'll change the value of the statistic (Σ = 10). So, in this case df = 2.
• It is not quite correct to say "you cannot change z". In fact, you have to change z to make the sum equal 10. But you have no choice (no freedom) about what it changes to. You can change any two values, but not the third. Jul 28 '10 at 14:29
The concept is not at all difficult to make mathematically precise given a bit of general knowledge of $n$-dimensional Euclidean geometry, subspaces and orthogonal projections.
If $P$ is an orthogonal projection from $\mathbb{R}^n$ to a $p$-dimensional subspace $L$ and $x$ is an arbitrary $n$-vector then $Px$ is in $L$, $x - Px$ and $Px$ are orthogonal and $x - Px \in L^{\perp}$ is in the orthogonal complement of $L$. The dimension of this orthogonal complement, $L^{\perp}$, is $n-p$. If $x$ is free to vary in an $n$-dimensional space then $x - Px$ is free to vary in an $n-p$ dimensional space. For this reason we say that $x - Px$ has $n-p$ degrees of freedom.
These considerations are important to statistics because if $X$ is an $n$-dimensional random vector and $L$ is a model of its mean, that is, the mean vector $E(X)$ is in $L$, then we call $X-PX$ the vector of residuals, and we use the residuals to estimate the variance. The vector of residuals has $n-p$ degrees of freedom, that is, it is constrained to a subspace of dimension $n-p$.
If the coordinates of $X$ are independent and normally distributed with the same variance $\sigma^2$ then
• The vectors $PX$ and $X - PX$ are independent.
• If $E(X) \in L$ the distribution of the squared norm of the vector of residuals $||X - PX||^2$ is a $\chi^2$-distribution with scale parameter $\sigma^2$ and another parameter that happens to be the degrees of freedom $n-p$.
The sketch of proof of these facts is given below. The two results are central for the further development of the statistical theory based on the normal distribution. Note also that this is why the $\chi^2$-distribution has the parametrization it has. It is also a $\Gamma$-distribution with scale parameter $2\sigma^2$ and shape parameter $(n-p)/2$, but in the context above it is natural to parametrize in terms of the degrees of freedom.
I must admit that I don't find any of the paragraphs cited from the Wikipedia article particularly enlightening, but they are not really wrong or contradictory either. They say in an imprecise, and in a general loose sense, that when we compute the estimate of the variance parameter, but do so based on residuals, we base the computation on a vector that is only free to vary in a space of dimension $n-p$.
Beyond the theory of linear normal models the use of the concept of degrees of freedom can be confusing. It is, for instance, used in the parametrization of the $\chi^2$-distribution whether or not there is a reference to anything that could have any degrees of freedom. When we consider statistical analysis of categorical data there can be some confusion about whether the "independent pieces" should be counted before or after a tabulation. Furthermore, for constraints, even for normal models, that are not subspace constraints, it is not obvious how to extend the concept of degrees of freedom. Various suggestions exist typically under the name of effective degrees of freedom.
Before any other usages and meanings of degrees of freedom is considered I will strongly recommend to become confident with it in the context of linear normal models. A reference dealing with this model class is A First Course in Linear Model Theory, and there are additional references in the preface of the book to other classical books on linear models.
Proof of the results above: Let $\xi = E(X)$, note that the variance matrix is $\sigma^2 I$ and choose an orthonormal basis $z_1, \ldots, z_p$ of $L$ and an orthonormal basis $z_{p+1}, \ldots, z_n$ of $L^{\perp}$. Then $z_1, \ldots, z_n$ is an orthonormal basis of $\mathbb{R}^n$. Let $\tilde{X}$ denote the $n$-vector of the coefficients of $X$ in this basis, that is $$\tilde{X}_i = z_i^T X.$$ This can also be written as $\tilde{X} = Z^T X$ where $Z$ is the orthogonal matrix with the $z_i$'s in the columns. Then we have to use that $\tilde{X}$ has a normal distribution with mean $Z^T \xi$ and, because $Z$ is orthogonal, variance matrix $\sigma^2 I$. This follows from general linear transformation results of the normal distribution. The basis was chosen so that the coefficients of $PX$ are $\tilde{X}_i$ for $i= 1, \ldots, p$, and the coefficients of $X - PX$ are $\tilde{X}_i$ for $i= p+1, \ldots, n$. Since the coefficients are uncorrelated and jointly normal, they are independent, and this implies that $$PX = \sum_{i=1}^p \tilde{X}_i z_i$$ and $$X - PX = \sum_{i=p+1}^n \tilde{X}_i z_i$$ are independent. Moreover, $$||X - PX||^2 = \sum_{i=p+1}^n \tilde{X}_i^2.$$ If $\xi \in L$ then $E(\tilde{X}_i) = z_i^T \xi = 0$ for $i = p +1, \ldots, n$ because then $z_i \in L^{\perp}$ and hence $z_i \perp \xi$. In this case $||X - PX||^2$ is the sum of $n-p$ independent $N(0, \sigma^2)$-distributed random variables, whose distribution, by definition, is a $\chi^2$-distribution with scale parameter $\sigma^2$ and $n-p$ degrees of freedom.
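A small simulation (added for illustration) that checks both facts for a concrete subspace $L$ spanned by the columns of a matrix $Z$:

# ||X - PX||^2 / sigma^2 should follow a chi-squared distribution with
# n - p degrees of freedom when E(X) lies in L = col(Z).
set.seed(42)
n <- 12; p <- 3; sigma <- 2
Z <- cbind(1, rnorm(n), rnorm(n))       # basis of the p-dimensional subspace L
P <- Z %*% solve(crossprod(Z), t(Z))    # orthogonal projection onto L
xi <- Z %*% c(1, -2, 0.5)               # mean vector lying in L
rss <- replicate(1e4, {
  X <- xi + sigma * rnorm(n)
  sum(((diag(n) - P) %*% X)^2)
})
mean(rss) / sigma^2                     # close to n - p = 9
ks.test(rss / sigma^2, pchisq, df = n - p)  # consistent with chi^2_{n-p}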
• NRH, Thanks! (1) Why is $E(X)$ required to be inside $L$? (2) Why $PX$ and $X−PX$ are independent? (3) Is the dof in the random variable context defined from the dof in its deterministic case? For example, is the reason for $||X−PX||^2$ has dof $n-p$ because it is true when $X$ is a deterministic variable instead of a random variable? (4) Are there references (books, papers or links) that hold the same/similar opinion as yours?
– Tim
Oct 12 '11 at 22:50
• @Tim, $PX$ and $X-PX$ are independent, since they are normal and uncorrelated. Oct 13 '11 at 7:34
• @Tim, I have reworded the answer a little and given a proof of the stated results. The mean is required to be in $L$ to prove the result about the $\chi^2$-distribution. It is a model assumption. In the literature you should look for linear normal models or general linear models, but right now I can only recall some old, unpublished lecture notes. I will see if I can find a suitable reference.
– NRH
Oct 13 '11 at 11:06
• Wonderful answer. Thanks for the insight. One question: I got lost what you meant by the phrase "the mean vector $EX$ is in $L$". Can you explain? Are you try to define $E$? to define $L$? something else? Maybe this sentence is trying to do too much or be too concise for me. Can you elaborate what is the definition of $E$ in the context you mention: is it just $E(x_1,x_2,\dots,x_n) = (x_1+x_2+\dots+x_n)/n$? Can you elaborate on what is $L$ in this context (of normal iid coordinates)? Is it just $L = \mathbb{R}$?
– D.W.
Oct 13 '11 at 21:12
• @D.W. The $E$ is the expectation operator. So $E(X)$ is the vector of coordinatewise expectations of $X$. The subspace $L$ is any $p$-dimensional subspace of $\mathbb{R}^n$. It is a space of $n$-vectors and certainly not $\mathbb{R}$, but it can very well be one-dimensional. The simplest example is perhaps when it is spanned by the $\mathbf{1}$-vector with a 1 at all $n$-coordinates. This is the model of all coordinates of $X$ having the same mean value, but many more complicated models are possible.
– NRH
Oct 13 '11 at 22:02
It's really no different from the way the term "degrees of freedom" works in any other field. For example, suppose you have four variables: the length, the width, the area, and the perimeter of a rectangle. Do you really know four things? No, because there are only two degrees of freedom. If you know the length and the width, you can derive the area and the perimeter. If you know the length and the area, you can derive the width and the perimeter. If you know the area and the perimeter you can derive the length and the width (up to rotation). If you have all four, you can either say that the system is consistent (all of the variables agree with each other), or inconsistent (no rectangle could actually satisfy all of the conditions). A square is a rectangle with a degree of freedom removed; if you know any side of a square or its perimeter or its area, you can derive all of the others because there's only one degree of freedom.
In statistics, things get more fuzzy, but the idea is still the same. If all of the data that you're using as the input for a function are independent variables, then you have as many degrees of freedom as you have inputs. But if they have dependence in some way, such that if you had n - k inputs you could figure out the remaining k, then you've actually only got n - k degrees of freedom. And sometimes you need to take that into account, lest you convince yourself that the data are more reliable or have more predictive power than they really do, by counting more data points than you really have independent bits of data.
Moreover, all three definitions are essentially trying to convey the same message.
• Basically right, but I'm concerned that the middle paragraph could be read in a way that confuses correlation, independence (of random variables), and functional independence (of a manifold of parameters). The correlation-independence distinction is particularly important to maintain.
– whuber
Oct 12 '11 at 20:46
• @whuber: is it fine now? Oct 12 '11 at 20:55
• It's correct, but the way it uses terms would likely confuse some people. It still does not explicitly distinguish dependence of random variables from functional dependence. For example, the two variables in a (nondegenerate) bivariate normal distribution with nonzero correlation will be dependent (as random variables) but they still offer two degrees of freedom.
– whuber
Oct 12 '11 at 21:02
• This was copy-pasted from a reddit post I made in 2009. Apr 20 '14 at 3:22
• Our Help Center provide clear guidance on how to reference material written by others, so I hope the OP will come back to this post to take appropriate actions and engage in constructive interactions (we haven't seen him for a while, though).
– chl
Apr 24 '14 at 18:54
I really like first sentence from The Little Handbook of Statistical Practice. Degrees of Freedom Chapter
One of the questions an instructor dreads most from a mathematically unsophisticated audience is, "What exactly is degrees of freedom?"
I think you can get really good understanding about degrees of freedom from reading this chapter.
• It would be nice to have an explanation for why degrees of freedom is important, rather than just what it is. For instance, showing that the estimate of variance with 1/n is biased but using 1/(n-1) yields an unbiased estimator. Jul 31 '10 at 20:12
• "I think you can get really good understanding about degrees of freedom from reading this chapter" - Definitely not. Oct 6 '20 at 18:18
Wikipedia asserts that degrees of freedom of a random vector can be interpreted as the dimensions of the vector subspace. I want to go step-by-step, very basically through this as a partial answer and elaboration on the Wikipedia entry.
The example proposed is that of a random vector corresponding to the measurements of a continuous variable for different subjects, expressed as a vector extending from the origin, $[a\,b\,c]^T$. Its orthogonal projection onto the vector $[1\,1\,1]^T$ is the vector of measurement means, $[\bar x \, \bar x \, \bar x]^T = \bar x\,[1\,1\,1]^T$, where $\bar{x}=\frac13(a+b+c)$. This projection onto the subspace spanned by the vector of ones has $1\,\text{degree of freedom}$. The residual vector (distance from the mean) is the least-squares projection onto the $(n − 1)$-dimensional orthogonal complement of this subspace, and has $n − 1\,\text{degrees of freedom}$, $n$ being the total number of components of the vector (in our case $3$, since we are in $\mathbb{R}^3$ in the example). This can be simply proven by obtaining the dot product of $[\bar{x}\,\bar{x}\,\bar{x}]^T$ with the difference between $[a\,b\,c]^T$ and $[\bar{x}\,\bar{x}\,\bar{x}]^T$:
$$[\bar{x}\, \bar{x}\,\bar{x}]\, \begin{bmatrix} a-\bar{x}\\b-\bar{x}\\c-\bar{x}\end{bmatrix}=$$
$$= \bigg[\tiny\frac{(a+b+c)}{3}\, \bigg(a-\frac{(a+b+c)}{3}\bigg)\bigg]+ \bigg[\tiny\frac{(a+b+c)}{3} \,\bigg(b-\frac{(a+b+c)}{3}\bigg)\bigg]+ \bigg[\tiny\frac{(a+b+c)}{3} \,\bigg(c-\frac{(a+b+c)}{3}\bigg)\bigg]$$
$$=\tiny \frac{(a+b+c)}{3}\bigg[ \bigg(\tiny a-\frac{(a+b+c)}{3}\bigg)+ \bigg(b-\frac{(a+b+c)}{3}\bigg)+ \bigg(c-\frac{(a+b+c)}{3}\bigg)\bigg]$$
$$= \tiny \frac{(a+b+c)}{3}\bigg[\tiny \frac{1}{3} \bigg(\tiny 3a-(a+b+c)+ 3b-(a+b+c)+3c-(a+b+c)\bigg)\bigg]$$
$$=\tiny\frac{(a+b+c)}{3}\bigg[\tiny\frac{1}{3} (3a-3a+ 3b-3b+3c-3c)\bigg]\large= 0$$.
And this relationship extends to any point in a plane orthogonal to $[\bar{x}\,\bar{x}\,\bar{x}]^T$. This concept is important in understanding why $\frac 1 {\sigma^2} \Big((X_1-\bar X)^2 + \cdots + (X_n - \bar X)^2 \Big) \sim \chi^2_{n-1}$, a step in the derivation of the t-distribution(here and here).
Let's take the point $[35\,50\,80]^T$, corresponding to three observations. The mean is $55$, and the vector $[55\,\,55\,\,55]^T$ is the normal (orthogonal) to a plane, $55x + 55y + 55z = D$. Plugging the point's coordinates into the plane equation gives $D = 9075$.
Now we can choose any other point in this plane, and the mean of its coordinates is going to be $55$, geometrically corresponding to its projection onto the vector $[1\,\,1\,\,1]^T$. Hence for every mean value (in our example, $55$) we can choose an infinite number of pairs of coordinates in $\mathbb{R}^2$ without restriction ($2\,\text{degrees of freedom}$); yet, since the plane is in $\mathbb{R}^3$, the third coordinate will be determined by the equation of the plane (or, geometrically, by the orthogonal projection of the point onto $[55\,\,55\,\,55]^T$).
Here is a representation of three points (in white) lying on the plane (cerulean blue) orthogonal to $[55\,\,55\,\,55]^T$ (arrow): $[35\,\,50\,\,80]^T$, $[80\,\,80\,\,5]^T$ and $[90\,\,15\,\,60]^T$, all of them on the plane (a subspace with $2\,\text{df}$), each with a mean of its components equal to $55$ and an orthogonal projection onto $[1\,\,1\,\,1]^T$ (a subspace with $1\,\text{df}$) equal to $[55\,\,55\,\,55]^T$:
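A short numerical check of the algebra above (illustrative only):

x <- c(35, 50, 80)
xbar <- mean(x)                    # 55
sum(rep(xbar, 3) * (x - xbar))     # dot product with the residual vector: 0
c(80, 80, 5) %*% rep(55, 3)        # any point on the plane gives 55 * 165 = 9075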
In my classes, I use one "simple" situation that might help you wonder and perhaps develop a gut feeling for what a degree of freedom may mean.
It is kind of a "Forrest Gump" approach to the subject, but it is worth the try.
Consider you have 10 independent observations $X_1, X_2, \ldots, X_{10}\sim N(\mu,\sigma^2)$ that came right from a normal population whose mean $\mu$ and variance $\sigma^2$ are unknown.
Your observations collectively bring you information about both $\mu$ and $\sigma^2$. After all, your observations tend to be spread around one central value, which ought to be close to the actual and unknown value of $\mu$; likewise, if $\mu$ is very high or very low, then you can expect to see your observations gather around a very high or very low value respectively. One good "substitute" for $\mu$ (in the absence of knowledge of its actual value) is $\bar X$, the average of your observations.
Also, if your observations are very close to one another, that is an indication that you can expect that $\sigma^2$ must be small and, likewise, if $\sigma^2$ is very large, then you can expect to see wildly different values for $X_1$ to $X_{10}$.
If you were to bet your week's wage on what the actual values of $\mu$ and $\sigma^2$ are, you would need to choose a pair of values on which to bet your money. Let's not think of anything as dramatic as losing your paycheck unless you guess $\mu$ correctly to its 200th decimal place. Nope. Let's think of some sort of prizing system in which the closer you guess $\mu$ and $\sigma^2$, the more you get rewarded.

In some sense, your better, more informed, and more polite guess for $\mu$'s value could be $\bar X$. In that sense, you estimate that $\mu$ must be some value around $\bar X$. Similarly, one good "substitute" for $\sigma^2$ (not required for now) is $S^2$, your sample variance, which makes a good estimate for $\sigma^2$.

If you were to believe that those substitutes are the actual values of $\mu$ and $\sigma^2$, you would probably be wrong, because very slim are the chances that you were so lucky that your observations coordinated themselves to get you the gift of $\bar X$ being equal to $\mu$ and $S^2$ equal to $\sigma^2$. Nah, it probably didn't happen.
But you could be at different levels of wrong, varying from a bit wrong to really, really, really miserably wrong (a.k.a., "Bye-bye, paycheck; see you next week!").
Ok, let's say that you took $\bar X$ as your guess for $\mu$. Consider just two scenarios: $S^2=2$ and $S^2=20,000,000$. In the first, your observations sit pretty and close to one another. In the latter, your observations vary wildly. In which scenario should you be more concerned about your potential losses? If you thought of the second one, you're right. Having an estimate of $\sigma^2$ changes your confidence in your bet very reasonably, for the larger $\sigma^2$ is, the more widely you can expect $\bar X$ to vary.

But, beyond information about $\mu$ and $\sigma^2$, your observations also carry some amount of pure random fluctuation that is informative about neither $\mu$ nor $\sigma^2$.
How can you notice it?
Well, let's assume, for sake of argument, that there is a God and that He has spare time enough to give Himself the frivolity of telling you specifically the real (and so far unknown) values of both $\mu$ and $\sigma$.
And here is the annoying plot twist of this lysergic tale: He tells it to you after you placed your bet. Perhaps to enlighten you, perhaps to prepare you, perhaps to mock you. How could you know?
Well, that makes the information about $\mu$ and $\sigma^2$ contained in your observations quite useless now. Your observations' central position $\bar X$ and variance $S^2$ are no longer of any help to get closer to the actual values of $\mu$ and $\sigma^2$, for you already know them.
One of the benefits of your good acquaintance with God is that you actually know by how much you failed to guess $\mu$ correctly when using $\bar X$; that is, you know your estimation error $(\bar X - \mu)$.
Well, since $X_i\sim N(\mu,\sigma^2)$, then $\bar X\sim N(\mu,\sigma^2/10)$ (trust me on that if you will), also $(\bar X - \mu)\sim N(0,\sigma^2/10)$ (ok, trust me on that too) and, finally, $$\frac{\bar X - \mu}{\sigma/\sqrt{10}} \sim N(0,1)$$ (guess what? trust me on that one as well), which carries absolutely no information about $\mu$ or $\sigma^2$.
You know what? If you took any of your individual observations as a guess for $\mu$, your estimation error $(X_i-\mu)$ would be distributed as $N(0,\sigma^2)$. Well, between estimating $\mu$ with $\bar X$ and with any single $X_i$, choosing $\bar X$ would be the better business, because $Var(\bar X) = \sigma^2/10 < \sigma^2 = Var(X_i)$, so $\bar X$ is less prone to stray from $\mu$ than an individual $X_i$.
Anyway, $(X_i-\mu)/\sigma\sim N(0,1)$ is also absolutely uninformative about both $\mu$ and $\sigma^2$.
"Will this tale ever end?" you may be thinking. You also may be thinking "Is there any more random fluctuation that is non informative about $\mu$ and $\sigma^2$?".
[I prefer to think that you are thinking of the latter.]
Yes, there is!
The square of your estimation error for $\mu$ when using $X_i$, divided by $\sigma^2$, $$\frac{(X_i-\mu)^2}{\sigma^2} = \left(\frac{X_i-\mu}{\sigma}\right)^2 \sim \chi^2$$ has a Chi-squared distribution, which is the distribution of the square $Z^2$ of a standard Normal $Z\sim N(0,1)$ and which, I am sure you noticed, carries absolutely no information about either $\mu$ or $\sigma^2$, but conveys information about the variability you should expect to face.
That is a very well known distribution that arises naturally from the very scenario of your gambling problem, for every single one of your ten observations and also for your mean: $$\frac{(\bar X-\mu)^2}{\sigma^2/10} = \left(\frac{\bar X-\mu}{\sigma/\sqrt{10}}\right)^2 = \left(N(0,1)\right)^2 \sim\chi^2$$ and also from the gathering of your ten observations' variation: $$\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \left(\frac{X_i-\mu}{\sigma}\right)^2 =\sum_{i=1}^{10} \left(N(0,1)\right)^2 =\sum_{i=1}^{10} \chi^2.$$ Now, that last guy doesn't have that same one-degree Chi-squared distribution, because it is the sum of ten of those Chi-squared variables, all of them independent from one another (because so are $X_1, \ldots, X_{10}$). Each one of those single Chi-squared variables is one contribution to the amount of random variability you should expect to face, with roughly the same amount of contribution to the sum.
The value of each contribution is not mathematically equal to the other nine, but all of them have the same expected behavior in distribution. In that sense, they are somehow symmetric.
Each one of those Chi-squares is one contribution to the amount of pure, random variability you should expect in that sum.
If you had 100 observations, the sum above would be expected to be bigger simply because it would have more sources of contribution.
Each of those "sources of contributions" with the same behavior can be called a degree of freedom.
Now take one or two steps back, and re-read the previous paragraphs if needed, to accommodate the sudden arrival of your sought-after degrees of freedom.
Yep, each degree of freedom can be thought of as one unit of variability that is unavoidably expected to occur and that contributes nothing to improving the guesses of $\mu$ or $\sigma^2$.
The thing is, you start to count on the behavior of those 10 equivalent sources of variability. If you had 100 observations, you would have 100 independent equally-behaved sources of strictly random fluctuation to that sum.
That sum of 10 Chi-squares gets called a Chi-squared distribution with 10 degrees of freedom from now on, and is written $\chi^2_{10}$. We can describe what to expect from it starting from its probability density function, which can be mathematically derived from the density of that single Chi-squared distribution (from now on called the Chi-squared distribution with one degree of freedom and written $\chi^2_1$), which in turn can be mathematically derived from the density of the normal distribution.
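A small Monte Carlo sketch of this claim (assuming NumPy and SciPy): the sum of 10 squared independent standard normals has the mean and variance of a $\chi^2_{10}$ variable (10 and 20), and a Kolmogorov-Smirnov comparison against $\chi^2_{10}$ can serve as a further check.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 100,000 replications of the sum of 10 squared independent N(0,1) draws
z = rng.standard_normal(size=(100_000, 10))
sums = (z ** 2).sum(axis=1)

# Mean and variance of a chi-squared with k df are k and 2k
print("simulated mean/var:", sums.mean(), sums.var())
print("theoretical mean/var:", 10, 20)

# Compare the empirical distribution with chi2(10)
print(stats.kstest(sums, stats.chi2(df=10).cdf))
```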
"So what?" --- you might be thinking --- "That is of any good only if God took the time to tell me the values of $\mu$ and $\sigma^2$, of all the things He could tell me!"
Indeed, even if God Almighty were too busy to tell you the values of $\mu$ and $\sigma^2$, you would still have those 10 sources, those 10 degrees of freedom.
Things start to get weird (Hahahaha; only now!) when you rebel against God and try and get along all by yourself, without expecting Him to patronize you.
You have $\bar X$ and $S^2$, estimators for $\mu$ and $\sigma^2$. You can find your way to a safer bet.
You could consider calculating the sum above with $\bar X$ and $S^2$ in the places of $\mu$ and $\sigma^2$: $$\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{S^2/10} =\sum_{i=1}^{10} \left(\frac{X_i-\bar X}{S/\sqrt{10}}\right)^2,$$ but that is not the same as the original sum.
"Why not?" The term inside the square of both sums are very different. For instance, it is unlikely but possible that all your observations end up being larger than $\mu$, in which case $(X_i-\mu) > 0$, which implies $\sum_{i=1}^{10}(X_i-\mu) > 0$, but, by its turn, $\sum_{i=1}^{10}(X_i-\bar X) = 0$, because $\sum_{i=1}^{10}X_i-10 \bar X =10 \bar X - 10 \bar X = 0$.
Worse, you can prove easily (Hahahaha; right!) that $\sum_{i=1}^{10}(X_i-\bar X)^2 \le \sum_{i=1}^{10}(X_i-\mu)^2$ with strict inequality when at least two observations are different (which is not unusual).
"But wait! There's more!" $$\frac{X_i-\bar X}{S/\sqrt{10}}$$ doesn't have standard normal distribution, $$\frac{(X_i-\bar X)^2}{S^2/10}$$ doesn't have Chi-squared distribution with one degree of freedom, $$\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{S^2/10}$$ doesn't have Chi-squared distribution with 10 degrees of freedom $$\frac{\bar X-\mu}{S/\sqrt{10}}$$ doesn't have standard normal distribution.
"Was it all for nothing?"
No way. Now comes the magic! Note that $$\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2} =\sum_{i=1}^{10} \frac{[X_i-\mu+\mu-\bar X]^2}{\sigma^2} =\sum_{i=1}^{10} \frac{[(X_i-\mu)-(\bar X-\mu)]^2}{\sigma^2} =\sum_{i=1}^{10} \frac{(X_i-\mu)^2-2(X_i-\mu)(\bar X-\mu)+(\bar X-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \frac{(X_i-\mu)^2-(\bar X-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2}-\sum_{i=1}^{10} \frac{(\bar X-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2}-10\frac{(\bar X-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2}-\frac{(\bar X-\mu)^2}{\sigma^2/10}$$ (the cross term collapses under the summation because $\sum_{i=1}^{10}(X_i-\mu)=10(\bar X-\mu)$, so $-2(\bar X-\mu)\sum_{i=1}^{10}(X_i-\mu)+10(\bar X-\mu)^2=-10(\bar X-\mu)^2$) or, equivalently, $$\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2} +\frac{(\bar X-\mu)^2}{\sigma^2/10}.$$ Now we get back to those known faces.
The first term has Chi-squared distribution with 10 degrees of freedom and the last term has Chi-squared distribution with one degree of freedom(!).
We simply split a Chi-square with 10 independent equally-behaved sources of variability into two parts, both positive: one part is a Chi-square with one source of variability, and the other we can prove (leap of faith? win by W.O.?) to be a Chi-square with 9 (= 10 - 1) independent equally-behaved sources of variability, with the two parts independent from one another.
This is already good news, since now we have its distribution.
Alas, it uses $\sigma^2$, to which we have no access (recall that God is amusing Himself on watching our struggle).
Well, $$S^2=\frac{1}{10-1}\sum_{i=1}^{10} (X_i-\bar X)^2,$$ so $$\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2} =\frac{\sum_{i=1}^{10} (X_i-\bar X)^2}{\sigma^2} =\frac{(10-1)S^2}{\sigma^2} \sim\chi^2_{(10-1)}$$ therefore $$\frac{\bar X-\mu}{S/\sqrt{10}} =\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\frac{S}{\sigma}} =\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\sqrt{\frac{S^2}{\sigma^2}}} =\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\sqrt{\frac{\frac{(10-1)S^2}{\sigma^2}}{(10-1)}}} =\frac{N(0,1)}{\sqrt{\frac{\chi^2_{(10-1)}}{(10-1)}}},$$ which is a distribution that is not the standard normal, but whose density can be derived from the densities of the standard normal and the Chi-squared with $(10-1)$ degrees of freedom.
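A quick numerical sketch of the two facts above (assuming NumPy; the values of $\mu$ and $\sigma^2$ are made up purely for the check): the decomposition holds exactly for any sample, and $(10-1)S^2/\sigma^2$ has the mean and variance of a $\chi^2_9$ variable.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, n = 3.0, 4.0, 10          # assumed "true" values, used only for the check

# 1) The algebraic decomposition holds for any sample
x = rng.normal(mu, np.sqrt(sigma2), n)
lhs = np.sum((x - mu) ** 2) / sigma2
rhs = np.sum((x - x.mean()) ** 2) / sigma2 + (x.mean() - mu) ** 2 / (sigma2 / n)
print(np.isclose(lhs, rhs))           # True

# 2) (n-1) S^2 / sigma^2 behaves like a chi-square with n-1 = 9 df
reps = rng.normal(mu, np.sqrt(sigma2), size=(100_000, n))
stat = (n - 1) * reps.var(axis=1, ddof=1) / sigma2
print(stat.mean(), stat.var())        # close to 9 and 18, the chi2(9) mean and variance
```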
One very, very smart guy did that math[^1] at the beginning of the 20th century and, as an unintended consequence, he made his boss the absolute world leader in the industry of stout beer. I am talking about William Sealy Gosset (a.k.a. Student; yes, that Student, from the $t$ distribution) and Saint James's Gate Brewery (a.k.a. Guinness Brewery), of which I am a devotee.
[^1]: @whuber told in the comments below that Gosset did not do the math, but guessed instead! I really don't know which feat is more surprising for that time.
That, my dear friend, is the origin of the $t$ distribution with $(10-1)$ degrees of freedom: the ratio of a standard normal to the square root of an independent Chi-square divided by its degrees of freedom, which, in an unpredictable turn of the tides, winds up describing the expected behavior of the estimation error you incur when using the sample average $\bar X$ to estimate $\mu$ and using $S^2$ to estimate the variability of $\bar X$.
There you go. With an awful lot of technical details grossly swept under the rug, but no longer depending solely on God's intervention before you dangerously bet your whole paycheck.
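A simulation sketch of that ratio (assuming NumPy and SciPy): the empirical quantiles of $(\bar X - \mu)/(S/\sqrt{10})$ track the $t_9$ distribution and are heavier-tailed than those of the standard normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.0, 1.0, 10, 200_000   # assumed values for the simulation

x = rng.normal(mu, sigma, size=(reps, n))
t_stat = (x.mean(axis=1) - mu) / (x.std(axis=1, ddof=1) / np.sqrt(n))

for q in (0.95, 0.975, 0.995):
    print(f"q={q}: simulated {np.quantile(t_stat, q):.3f}  "
          f"t(9) {stats.t(df=n - 1).ppf(q):.3f}  "
          f"N(0,1) {stats.norm.ppf(q):.3f}")
```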
• Thank you for such an effort! I confess that I found your explanation less than convincing, though. It seems to founder at this crucial junction: "Each of those "sources of contributions" with the same behavior can be called degree of freedom." If you had instead summed $10$ independent normal variates rather than $10$ independent chi-squared variates, you would end up with--one normal variate. Somehow the "degrees of freedom" get completely swallowed up. Evidently there is something special about chi-squared you haven't yet described. BTW, Gosset didn't do the math: he guessed!
– whuber
Jan 1 '17 at 3:33
• Thank you very much for your evaluation, @whuber! It's amazing how many typos pop up once you forgot what you wrote. About your evaluation, I intended just to illustrate another way of thinking -- a little bit less mathematical in some sense. Also, I am not grasping fully what you meant with If you had instead summed 10 independent normal variates rather than 10 independent chi-squared variates, you would end up with--one normal variate -- which I guessed to hold your key-point. I will try to elaborate about it, hoping to improve the post. Jan 2 '17 at 15:08
This particular issue is quite frustrating for students in statistics courses, since they often cannot get a straight answer on exactly what a degree-of-freedom is defined to be. I will try to clear that up here. Suppose we have a random vector $$\mathbf{x} \in \mathbb{R}^n$$ and we form a new random vector $$\mathbf{t} = T(\mathbf{x})$$ via the linear function $$T$$. Formally, the degrees-of-freedom of $$\mathbf{t}$$ is the dimension of the space of allowable values for this vector, which is:
$$DF \equiv \dim \mathscr{T} \equiv \dim \{ \mathbf{t} = T(\mathbf{x}) | \mathbf{x} \in \mathbb{R}^n \}.$$
The initial random vector $$\mathbf{x}$$ has an allowable space of dimension $$n$$, so it has $$n$$ degrees of freedom. Often the function $$T$$ will reduce the dimension of the allowable space of outcomes, and so $$\mathbf{t}$$ may have fewer degrees-of-freedom than $$\mathbf{x}$$. For example, in an answer to a related question you can see this formal definition of the degrees-of-freedom being used to explain Bessel's correction in the sample variance formula. In that particular case, transforming an initial sample to obtain its deviations from the sample mean leads to a deviation vector that has $$n-1$$ degrees-of-freedom (i.e., it is a vector in an allowable space with dimension $$n-1$$).
When you apply this formal definition to statistical problems, you will usually find that the imposition of a single "constraint" on the random vector (via a linear equation on that vector) reduces the dimension of its allowable values by one, and thus reduces the degrees-of-freedom by one. As such, you will find that the above formal definition corresponds with the informal explanations you have been given.
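A tiny numerical sketch of this definition (assuming NumPy): the linear map that sends a sample to its deviations from the sample mean has rank $n-1$, which is exactly the degrees-of-freedom of the deviation vector described above.

```python
import numpy as np

n = 5
# Centering matrix: T x = x - xbar * 1, a linear map from R^n to R^n
T = np.eye(n) - np.ones((n, n)) / n

# The degrees of freedom of the deviation vector is the dimension of its
# allowable space, i.e. the rank of T
print(np.linalg.matrix_rank(T))   # prints n - 1 = 4
```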
In undergraduate courses on statistics, you will generally find a lot of hand-waving and informal explanation of degrees-of-freedom, often via analogies or examples. The reason for this is that the formal definition requires an understanding of vector algebra and the geometry of vector spaces, which may be lacking in introductory statistics courses at an undergraduate level.
You can see the degrees of freedom as the number of observations minus the number of necessary relations among these observations. For example, if you have $$n$$ independent observations $$X_1,\dots,X_n$$ from a normal distribution, then the random variable $$\sum_{i=1}^n (X_i-\overline{X}_n)^2\sim \chi^2_{n-1}$$, where $$\overline{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$$. The degrees of freedom here are $$n-1$$ because there is one necessary relation among these observations $$(\overline{X}_n = \frac{1}{n}\sum_{i=1}^n X_i)$$.
An intuitive explanation of degrees of freedom is that they represent the number of independent pieces of information available in the data for estimating a parameter (i.e., unknown quantity) of interest.
As an example, in a simple linear regression model of the form:
$$Y_i=\beta_0 + \beta_1\cdot X_i + \epsilon_i,\quad i=1,\ldots, n$$
where the $$\epsilon_i$$'s represent independent normally distributed error terms with mean 0 and standard deviation $$\sigma$$, we use 1 degree of freedom to estimate the intercept $$\beta_0$$ and 1 degree of freedom to estimate the slope $$\beta_1$$. Since we started out with $$n$$ observations and used up 2 degrees of freedom (i.e., two independent pieces of information), we are left with $$n-2$$ degrees of freedom (i.e., $$n-2$$ independent pieces of information) available for estimating the error standard deviation $$\sigma$$.
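A small illustration of this bookkeeping (a sketch assuming NumPy; the data are simulated, not from any real study): fitting the two coefficients leaves $n-2$ degrees of freedom for estimating $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
x = np.linspace(0, 10, n)
y = 1.5 + 0.8 * x + rng.normal(0, 2.0, n)   # assumed beta0=1.5, beta1=0.8, sigma=2

X = np.column_stack([np.ones(n), x])        # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat

df_resid = n - X.shape[1]                   # n - 2 degrees of freedom remain
sigma_hat = np.sqrt(residuals @ residuals / df_resid)
print(df_resid, sigma_hat)                  # 18, and an estimate close to 2
```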
• Thanks very much for your edits to my answer, @COOLSerdash! May 8 '19 at 23:06
The clearest "formal" definition of degrees-of-freedom is that it is the dimension of the space of allowable values for a random vector. This generally arises in a context where we have a sample vector $$\mathbf{x} \in \mathbb{R}^n$$ and we form a new random vector $$\mathbf{t} = T(\mathbf{x})$$ via the linear function $$T$$. Formally, the degrees-of-freedom of $$\mathbf{t}$$ is the dimension of the space of allowable values for this vector, which is:
$$DF \equiv \dim \mathscr{T} \equiv \dim \{ \mathbf{t} = T(\mathbf{x}) | \mathbf{x} \in \mathbb{R}^n \}.$$
If we represent this linear transformation by the matrix transformation $$T(\mathbf{x}) = \mathbf{T} \mathbf{x}$$ then we have:
\begin{aligned} DF &= \dim \{ \mathbf{t} = T(\mathbf{x}) | \mathbf{x} \in \mathbb{R}^n \} \\[6pt] &= \dim \{ \mathbf{T} \mathbf{x} | \mathbf{x} \in \mathbb{R}^n \} \\[6pt] &= \text{rank} \ \mathbf{T} \\[6pt] &= n - \dim \text{Ker} \ \mathbf{T}, \\[6pt] \end{aligned}
where the last step follows from the rank-nullity theorem. This means that when we transform $$\mathbf{x}$$ by the linear transformation $$T$$ we lose degrees-of-freedom equal to the dimension of the kernel (nullspace) of $$\mathbf{T}$$. In statistical problems, there is a close relationship between the eigenvalues of $$\mathbf{T}$$ and the loss of degrees-of-freedom from the transformation. Often the loss of degrees-of-freedom is equivalent to the number of zero eigenvalues in the transformation matrix $$\mathbf{T}$$.
For example, in this answer we see that Bessel's correction to the sample variance, adjusting for the degrees-of-freedom of the vector of deviations from the mean, is closely related to the eigenvalues of the centering matrix. An identical result occurs in higher dimensions in linear regression analysis. In other statistical problems, similar relationships occur between the eigenvalues of the transformation matrix and the loss of degrees-of-freedom.
The above result also formalises the notion that one loses a degree-of-freedom for each "constraint" imposed on the observable vector of interest. Thus, in simple univariate sampling problems, when looking at the sample variance, one loses a degree-of-freedom from estimating the mean. In linear regression models, when looking at the MSE, one loses a degree-of-freedom for each model coefficient that was estimated.
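A short numerical sketch of the eigenvalue view in this answer (assuming NumPy), using the centering matrix: it has $n-1$ eigenvalues equal to one and a single zero eigenvalue, matching the one degree of freedom lost to estimating the mean.

```python
import numpy as np

n = 6
C = np.eye(n) - np.ones((n, n)) / n       # centering matrix
eigvals = np.linalg.eigvalsh(C)           # symmetric matrix, so eigvalsh applies

print(np.round(eigvals, 10))              # one eigenvalue 0, the remaining n-1 equal 1
print("rank:", np.linalg.matrix_rank(C))  # n - 1
```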
For me the first explanation I understood was:
If you know some statistical value like the mean or the variance, how many of the data values do you need to know before you can know the value of every one of them?
This is the same as aL3xa said, but without giving any data point a special role and close to the third case given in the answer. In this way the same example would be:
If you know the mean of the data, you need to know the values of all but one data point in order to know the values of all the data points.
• Variables --> observations Mar 30 '19 at 12:25
Think of it this way. Variances are additive when independent. For example, suppose we are throwing darts at a board and we measure the standard deviations of the $x$ and $y$ displacements from the exact center of the board. Then $V_{x,y}=V_x+V_y$. But $V_x=SD_x^2$, so if we take the square root of the $V_{x,y}$ formula, we get the distance formula for orthogonal coordinates, $SD_{x,y}=\sqrt{SD_x^2+SD_y^2}$. Now all we have to show is that the standard deviation is a representative measure of displacement away from the center of the dart board. Since $SD_x=\sqrt{\dfrac{\sum_{i=1}^n(x_i-\bar{x})^2}{n-1}}$, we have a ready means of discussing df. Note that when $n=1$, then $x_1-\bar{x}=0$ and the ratio $\dfrac{\sum_{i=1}^n(x_i-\bar{x})^2}{n-1}\rightarrow \dfrac{0}{0}$. In other words, there is no deviation to be had between one dart's $x$-coordinate and itself. The first time we have a deviation is for $n=2$, and there is effectively only one of them, a duplicate. That duplicate deviation is the squared distance between $x_1$ or $x_2$ and $\bar{x}=\dfrac{x_1+x_2}{2}$, because $\bar{x}$ is the midpoint between, i.e. the average of, $x_1$ and $x_2$. In general, for $n$ distances we remove 1 because $\bar{x}$ is dependent on all $n$ of those distances. Now, $n-1$ represents the degrees of freedom because it normalizes for the number of unique deviations, giving an expected squared distance when divided into the sum of those squared distances.
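A simulation sketch of the point about the $n-1$ divisor (assuming NumPy; the variance and sample size are made up): dividing the summed squared deviations by $n-1$ gives an estimator whose long-run average matches the true variance, while dividing by $n$ is biased low.

```python
import numpy as np

rng = np.random.default_rng(4)
true_var, n, reps = 9.0, 2, 100_000      # n = 2: the "one duplicate deviation" case

x = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

print("divide by n-1:", (ss / (n - 1)).mean())   # close to 9
print("divide by n:  ", (ss / n).mean())         # close to 4.5, biased low
```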
# 4.12 - Further Example of Confidence and Prediction Intervals
### Hospital Infection Data
The hospital infection risk dataset consists of a sample of 113 hospitals in four regions of the U.S. The response variable is y = infection risk (percent of patients who get an infection) and the predictor variable is x = average length of stay (in days). Here we analyze n = 58 hospitals in the east and north-central U.S. (regions 1 and 2). [Two hospitals with extreme values for Stay have also been removed.] Statistical software output for a simple linear regression model fit to these data follows:
Software output with information for x = 10.
We can make the following observations:
1. For the interval given under 95% CI, we can say with 95% confidence that, in hospitals in which the average length of stay is 10 days, the mean infection risk is between 4.25921 and 4.79849.
2. For the interval given under 95% PI, we can say with 95% confidence that, for a future hospital in which the average length of stay is 10 days, the infection risk will be between 2.45891 and 6.59878.
3. The value under Fit is calculated as $\hat{y} = −1.160 + 0.5689(10) = 4.529$.
4. The value under SE Fit is the standard error of $\hat{y}$, and it measures the accuracy of $\hat{y}$ as an estimate of E(Y).
5. Since df = n − 2 = 58 − 2 = 56, the multiplier for 95% confidence is 2.00324. The 95% CI for E(Y) is calculated as $4.52885 \pm (2.00324 × 0.134602) = 4.52885 \pm 0.26964 = (4.259, 4.798)$.
6. Since S = $\sqrt{MSE}$ = 1.02449, the 95% PI is calculated as $4.52885 \pm (2.00324 × \sqrt{1.02449^2 + 0.134602^2}) = 4.52885 \pm 2.06994 = (2.459, 6.599)$. (A short computational check of these intervals appears just after this list.)
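A sketch of the interval arithmetic (assuming SciPy for the $t$ multiplier; the fitted value, SE Fit, and $S$ are the ones reported in the software output above):

```python
from scipy import stats
import math

n = 58
fit, se_fit = 4.52885, 0.134602     # values reported under Fit and SE Fit
s = 1.02449                          # S = sqrt(MSE)

t_mult = stats.t.ppf(0.975, df=n - 2)            # ~2.00324

ci_half = t_mult * se_fit                        # half-width of the CI for E(Y)
pi_half = t_mult * math.sqrt(s**2 + se_fit**2)   # half-width of the PI for a new y

print("95% CI:", (round(fit - ci_half, 3), round(fit + ci_half, 3)))   # (4.259, 4.798)
print("95% PI:", (round(fit - pi_half, 3), round(fit + pi_half, 3)))   # (2.459, 6.599)
```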
The following figure provides plots showing the difference between a 95% CI for E(Y) and a 95% PI for y.
There are also some things to note:
1. Notice that the limits for E(Y) are close to the line. The purpose for those limits is to estimate the "true" location of the line.
2. Notice that the prediction limits (on the right) bracket most of the data. Those limits describe the location of individual y-values.
3. Notice that the prediction intervals are wider than the confidence intervals. This is something that can also be seen from the formulas.
# Stiffness constant $k$ for a diatomic molecule
1 vote
33 views
A hypothetical diatomic molecule has a bond length of $0.8860\ \mathrm{nm}$. When the molecule makes a rotational transition from $l = 2$ to the next lower energy level, a photon is released with $\lambda_r = 1403\ \mu\mathrm{m}$. In a vibrational transition to a lower energy state, a photon is released with $\lambda_v = 4.844\ \mu\mathrm{m}$. Determine the spring constant $k$.
What I've done is:
1) Calculate the moment of inertia of the molecule by equating Planck's equation to the transition rotational energy:
$$E = \frac{hc}{\lambda} = \frac{2 \hbar^2}{I}$$
Thus solving for $I$:
$$I = 1.562 \times 10^{-46} kg m^2$$
2) Knowing that the moment of inertia of a diatomic molecule rotating about its CM can be expressed as $I = \rho r^2$, solve for $\rho$ (where $\rho$ is the reduced mass and $r$ is the distance from one of the atoms to the axis of rotation; thus $r$ is half the bond length. EDIT: $r$ is the bond length and NOT half of it. Curiously, if you use the half value you get a more reasonable k: around $3 N/m$):
$$\rho = \frac{I}{r^2} = 1.99 \times 10^{-28} kg$$
3) Solve for $k$ from the frequency of vibration equation:
$$f = \frac{1}{2\pi}\sqrt{\frac{k}{\rho}}$$
Knowing:
$$\omega = 2\pi f$$
We get:
$$k = \omega^2 \rho = (c/\lambda_v)^2 \rho = 0.763 N/m$$
Where $\lambda_v= 4.844 \times 10^{-6} m$
The problem I see is that the method seems to be OK, but the result does not convince me. We know that the stiffness constant $k$ is a measure of the resistance offered by a body to deformation. The unknown diatomic molecule we're dealing with seems to be much more elastic than $H_2$ (which has $k = 550 N/m$).
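A sketch of the arithmetic laid out in the steps above (assuming CODATA-style constants; the final comment notes, as an observation, how the answer changes if the angular frequency $\omega = 2\pi c/\lambda_v$ is used instead of $c/\lambda_v$):

```python
import math

# Physical constants (SI)
hbar = 1.054571817e-34   # J s
h = 6.62607015e-34       # J s
c = 2.99792458e8         # m / s

r = 0.8860e-9            # bond length (m)
lam_r = 1403e-6          # rotational-transition photon wavelength (m)
lam_v = 4.844e-6         # vibrational-transition photon wavelength (m)

# Step 1: E_photon = h c / lam_r = 2 hbar^2 / I  (l = 2 -> l = 1)
I = 2 * hbar**2 * lam_r / (h * c)
print("I   =", I, "kg m^2")          # ~1.57e-46, close to the 1.562e-46 above

# Step 2: reduced mass from I = rho r^2 (using the full bond length)
rho = I / r**2
print("rho =", rho, "kg")            # ~2.0e-28

# Step 3: k = omega^2 * rho, with omega taken as c / lam_v as in the question
k = (c / lam_v) ** 2 * rho
print("k   =", k, "N/m")             # ~0.76-0.77

# Observation: with the angular frequency omega = 2*pi*c/lam_v, k would be
# (2*pi)^2 ~ 39.5 times larger, i.e. roughly 30 N/m.
print("k with 2*pi factor =", (2 * math.pi * c / lam_v) ** 2 * rho, "N/m")
```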
# Isn't this strange
1. Sep 8, 2007
### mubashirmansoor
Isn't this strange!!!
Hello,
Some time ago I realized something strange about most polynomials;
For example, $y=x^2$: if we take f(x+d) as a multiple of f(x), I mean: f(2)/f(1) = 4, hence 4*f(1) = f(2), so 4 is the multiple relating f(1) to f(2). Now, $(x+d)^2/x^2$: as x approaches infinity the multiple approaches 1! Doesn't this mean that when y = x^2 and x is an infinitely large number, y reaches a constant term, i.e. never approaches infinity? I'll be thankful for your help. I hope I've been able to express what I really mean. Thank you.

2. Sep 8, 2007

### DeadWolfe

Firstly, f(x+d) is not generically a multiple of f(x). (I'm not sure if you were claiming it was.) Secondly, whether or not f(x+d) is a multiple of f(x) has nothing to do with anything here, as you can sensibly talk about f(x+d)/f(x) anywhere where f(x) is not zero. Finally, it is obvious that (x+d)^2/x^2 tends to 1, but this does not suggest that x^2 approaches a constant, merely that x^2 "tends to" (x+d)^2 in some sense, which is intuitive and does not in any way suggest that either function is bounded above.

3. Sep 8, 2007

### mubashirmansoor

Well, sorry for my wrong words, but what I really mean is different; note the following for y = x^2:

f(2)/f(1) = 4
f(3)/f(2) = 2.25
f(4)/f(3) = 1.7778

and so on... As we divide these consecutive terms the result approaches 1, and at some point f(x+1) = f(x) where x is extremely large... which means y approaches a constant term when x approaches infinity... This is impossible, so I'm puzzled... :)

4. Sep 8, 2007

### Gib Z

No. There is no point on f(x) = x^2 where f(x+1) = f(x). As a limit where x approaches infinity they are equal, but for any real value they are not. Expressing f(x+1) = f(x) differently, you are claiming x^2 + 2x + 1 = x^2 is true for large values of x. Indeed, if it were true, then for large values of x, 2x + 1 = 0. Quite false. Just because in the limiting sense the 2x + 1 becomes negligible compared to the x^2 does not mean it is not there.

5. Sep 9, 2007

### bomba923

mubashirmansoor: You can see that
$$\lim_{x \to \infty } \frac{x^2 + d}{x^2} = \lim_{x \to \infty } \left( 1 + \frac{d}{x^2} \right) = 1$$
Clearly,
$$\forall \varepsilon > 0,\;\exists x_0 > 0:\forall x > x_0 ,\;\left| {\frac{{x^2 + d}}{{x^2 }} - 1} \right| < \varepsilon$$
Simply choose $$x_0 = \sqrt {\left| d \right|/\varepsilon }$$
6. Sep 9, 2007
### HallsofIvy
Staff Emeritus
What it means is that $f(x)= x^2$ and $g(x)= (x+d)^2$, for fixed $d$, are of the same "order": $x^2= O((x+d)^2)$ and $(x+d)^2= O(x^2)$, which was clear to begin with since they are of the same degree.
7. Sep 9, 2007
### robert Ihnot
The ratio of F(n) over F(n+1) is going to get closer to 1 as n increases. This is to be expected. So what? It has no bearing on the nature of the infinitesimal.
After all, just as n/(n+1) tends to 1, so does its square. We have $$\lim\frac{n^2}{(n+1)^2}=\lim\frac{n}{n+1}\cdot\lim\frac{n}{n+1}=1\cdot1=1.$$
Geometrically, from the standpoint of the derivative, we are talking about a secant through two points that ultimately tends to a tangent at a single point.
From your standpoint, we are looking at $$\frac{F(n+1)-F(n)}{1}=2n+1\rightarrow\infty$$
I take this as a case of wholesale confusion between a constant increase and an infinitesimal.
Last edited: Sep 9, 2007
8. Sep 9, 2007
### Pathway
ohgodicanthelpmyself
let x = -0.5
but I see what you were going for.
Last edited: Sep 9, 2007
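A quick numerical illustration (a Python sketch, added here for reference) of the thread's central point: the ratio $f(x+1)/f(x)$ tends to 1 while the difference $f(x+1)-f(x) = 2x+1$ grows without bound.

```python
def f(x):
    return x ** 2

for x in (1, 10, 100, 10_000, 1_000_000):
    ratio = f(x + 1) / f(x)          # tends to 1
    diff = f(x + 1) - f(x)           # equals 2x + 1, grows without bound
    print(f"x={x:>9}  ratio={ratio:.8f}  difference={diff}")
```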
# Increasing Image Resolution
I know of some oscilloscopes (DSA8300) that repeatedly sample at a few hundred kS/s to reconstruct a few GHz signal. I was wondering if this could be extended to 2D signals (photographs). Can I take a series (say 4) of still pictures using a commercial 16MP camera to finally reconstruct a 32MP image? Will doing this remove aliasing I have from each image?
If such a thing were attempted from a single image, it would obviously not work, as no new information is being introduced. If all the pictures taken are absolutely identical, will I still be at the same point as having one image? So are variations essential? Is CCD/CMOS noise enough variation for such a thing to work?
Is there a name for such a technique or algorithm? What should I look for?
• CCD noise wouldn't help you, but physical movement of the camera could. Taking multiple pictures of an identical scene with an identical camera in an identical position would only allow you to reduce noise, not reduce aliasing. You're still measuring the same points. Taking pictures offset by less than one pixel from each other, however, would give you an effectively higher sampling rate, helping to remove aliasing. – endolith Oct 26 '12 at 14:28
• I have a Nikon DX with a width of 23.6mm and has 4928 pixels on that dimension. This accounts to the width of each photosite on the sensor ~ 4.7889 microns. So should I move my camera along the width axis by fractions of this amount? Say 10 pictures by moving my camera 0.47 microns each time? And the same along the height? This hardly sounds like a weekend project with off the shelf stepper motors :'-( – Lord Loh. Oct 26 '12 at 18:15
• As an after thought, I was wondering, can I use multiple photos from a single shot of Light Field Camera (Lytro) with different focal planes to reconstruct a super resolution image? Intuitively, I think It will not work :-/ – Lord Loh. Oct 26 '12 at 18:55
• No, it depends on the distance to the target, optics, etc. Imagine a ray shooting out of each pixel of your camera, being bent by the lens, and hitting your target, so it's covered by a rectangular grid of points. Those are the points that each camera pixel sees. If the target is a wall covered in stripes, and the stripes alternate multiple times between each of your grid points, then you're going to have aliasing. – endolith Oct 26 '12 at 21:36
• That now makes sense :-) a 0.4 micron movement in that case is practically no movement at all! – Lord Loh. Oct 26 '12 at 22:02
One word for that technique is superresolution.
Robert Gawron has a blog post here and the Python implementation here.
Usually, this technique relies on each image being slightly offset from the others. The only gain you'd get from not moving between shots would be to reduce the noise level.
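A toy 1D sketch of the idea (assuming NumPy; an idealized illustration, not Gawron's implementation): several low-rate samplings of the same scene, each offset by a sub-pixel shift, interleave into one higher-rate sampling, effectively a polyphase decomposition of the finer grid.

```python
import numpy as np

# A finely sampled 1D "scene"
N_fine = 400
t = np.linspace(0.0, 1.0, N_fine, endpoint=False)
scene = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 23 * t)

factor = 4                                # 4 low-res "shots" -> one high-res result
# Each shot samples the scene at the low rate, shifted by a sub-pixel offset
shots = [scene[k::factor] for k in range(factor)]

# Interleave the shots back onto the fine grid
recon = np.empty(N_fine)
for k, shot in enumerate(shots):
    recon[k::factor] = shot

print(np.allclose(recon, scene))          # True: the offsets recover the fine sampling
print(len(shots[0]), "samples per shot ->", len(recon), "after combining")
```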
• Will this do away with aliased parts of the image? Like building windows and fine nets? If each image is aliased, can that lost information still be recovered? – Lord Loh. Oct 24 '12 at 20:36
• – Peter K. Oct 24 '12 at 21:33
Intuitively, if you move the sensor $N$ times, each step being $\frac{1}{N}$ of its resolution, you can get $N\times$ more resolution.
It is like a polyphase representation of the signal.
Using estimation methods, any movement that is not an integer multiple of the sensor's resolution (an event with zero probability), namely a fractional movement, can be used to gather more data and enhance resolution.
Usually those methods are called Super Resolution, which is a fancy name for polyphase representation and sampling, and they form a sub-problem in the Inverse Problem family in Image Processing.
Yet pay attention: many papers deal with Super Resolution but actually solve a different problem (deconvolution of a single image).
The problem you're after is also in the field of Inverse Problems, but it uses multiple images.
I think the method you are after is mainly used in the Lithography industry.
• That is what I had initially thought. That I would have to move in sub-micron range, but this - mathworks.com/matlabcentral/fileexchange/… does not take such an approach and gives a decent image improvement - may be it is getting information from sub-photo sites by moving the camera slightly randomly instead of systematic 1/N step movement. – Lord Loh. Mar 26 '14 at 23:35
• Hi, As I wrote, using estimation techniques any movement (Unless it is integer multiplication of the sensors cells) could be used to infer more data. – Royi Mar 27 '14 at 12:54
Another word is "stacking". It is used to reduce CCD noise, to increase focal depth (by stacking images that are focused slightly differently), to improve very low-light astronomical photos, and to obtain high dynamic range (HDR) from a series of normal range images. See
http://en.wikipedia.org/wiki/Focus_stacking
http://www.instructables.com/id/Astrophotography-Star-Photo-Stacking
http://en.wikipedia.org/wiki/High_dynamic_range_imaging
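A minimal stacking sketch (assuming NumPy; the scene and noise level are made up): averaging $N$ noisy exposures of the same scene reduces the noise standard deviation by roughly a factor of $\sqrt{N}$, which is the noise-reduction use of stacking mentioned above.

```python
import numpy as np

rng = np.random.default_rng(7)
scene = rng.uniform(0.0, 1.0, size=(64, 64))        # the "true" image

noise_sd = 0.2
for n_frames in (1, 4, 16, 64):
    # Simulate n_frames identical exposures corrupted by independent sensor noise
    frames = scene + rng.normal(0.0, noise_sd, size=(n_frames, 64, 64))
    stacked = frames.mean(axis=0)
    residual_sd = (stacked - scene).std()
    print(f"{n_frames:>2} frames: residual noise sd = {residual_sd:.4f} "
          f"(theory ~ {noise_sd / np.sqrt(n_frames):.4f})")
```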