## Sunday, June 29, 2014
### Of Binary Bombs (the secret)
So far, I've described six stages of this bomb along with their solutions. These stages have built up in difficulty while illustrating often-used programming constructs: string comparison, arrays, a switch statement, recursion, lookup tables, linked lists, and, here in the final stage, a binary search tree.
While solving the 6th phase will successfully defuse the bomb, there is a curious section of code executed at the end. The most important thing to notice is that we cannot trigger the bomb from this point on; the entire function will only jump to a graceful exit unless we unlock the secret. Recall the code for sym.phase_defused:
Initially, there is a check on the total number of lines entered so far; until this point that check has failed. Here the jump is bypassed and execution proceeds to a call to sscanf. Two important arguments to sscanf are the format string str._d_s (%d %s) and the address 0x804b770. From the first argument we can infer the types that will be read; the second indicates where that data will be read from. Unlike in prior phases, no input line is read to start this phase, so 0x804b770 must already have data in it.
If we look at what is stored there we find nothing special - certainly not something that looks like a number followed by a string.
This analysis uses a static binary, however, so this memory may be filled in at runtime. We have looked at each function in turn, and the only changes in memory are driven by the inputs we provide. So where is this address in memory? If we look for known addresses nearby, we see that 0x804b770 is located at sym.input_strings+240. Remember that in phase 2 we determined sym.input_strings to be a global array of 80-byte character arrays holding the inputs we provide. So 240 bytes beyond that is the 4th solution we provided (the number 9). There was no string after that, but that is part of the secret...
sym.read_line grabs the entire input line, and in phase 4 sscanf only looked for %d, which leaves the remainder of the buffer untouched. Nothing prevents us from providing trailing values after the number, so long as there is a space between them.
Supposing we did provide a trailing string, the next step checks that string against str.austinpowers. So that is the secret to accessing the secret phase: update the 4th input to be '9 austinpowers'.
The secret phase reads an additional line from the input stream and converts it to a long value using strtol. That value is decremented and compared against 0x3e8 (1000); the bomb is triggered if our decremented value is greater than that. If the input passes that check we enter the final function: sym.fun7. Before going into detail, however, it is important to note that the return value from this function needs to be 0x7 to avoid triggering the bomb. The initial argument to sym.fun7 is sym.n1 (0x0804b320).
This is a recursive function very similar to the one explained in stage 4. To understand what is happening with the control flow it is important to first understand what is contained in sym.n1. However, unlike stage 4 this variable name gives us little indication of what the memory may contain.
Looking at the first 16 bytes of that memory location, we see the values (after adjusting for endianness and assuming 32-bit values) are: 0x24, 0x0804b314, 0x0804b308, 0x0. The middle two look very much like memory addresses in a range very close to sym.n1.
Following these two addresses we arrive at a very similar layout. This begins to resemble a recursive data structure most people will recognize: a binary tree. In C it is represented as:
struct bst {
int value;
struct bst *left, *right;
};
Mapping out the entire tree yields the following:
Now, that will make it easier to follow the control flow in sym.fun7, but there are still some pieces needed before a solution can be derived directly. Back in sym.fun7, there is an initial check for a nil node pointer, and then the remainder of the function follows a pre-order traversal of the binary search tree.
The main concern at this point is understanding how the return value is calculated. Ultimately, we need to understand when the return value will be 7 so that we can provide input forcing a return at that particular point. The control flow descends into the left subtree when the search value is less than the current node's value, or into the right subtree when it is greater. If the value equals the current node's value, eax is set to zero and the function returns.
The return path from a left tree traversal simply doubles the value of eax and returns to the caller. The return from the right subtree is a little more interesting - in addition to eax being doubled it is also incremented by one prior to returning to the caller. Since eax is used to hold intermediate memory addresses, the calculation probably only makes sense when the search value is found in the tree (thus setting eax to 0).
Since a found value initially returns 0, any return from a left subtree will only propagate that zero value; to get to seven we need to rely on the increment on the return path of the right subtree. The only path that leads to the target return value is the one from the rightmost leaf in the tree.
To force a return value of 7 we must provide a value of 1001.
## Tuesday, June 10, 2014
### Of Binary Bombs (part 6)
In the last installment (phase 5) Dr. Evil used masking and a lookup table to try and defeat any secret agent. I will continue on here with the final phase of this binary bomb: phase 6. (This isn't really the final stage - check out the secret stage)
Our input string is loaded into the edx register as usual but then there is a strange reference to a sym.node1 that gets loaded into local stack space. That makes our first order of business to find what is stored in sym.node1.
The name node1 gives a fairly blatant hint at how we should look at this memory (without the symbols, this task would be a whole lot less straightforward). The first several bytes are pretty sparse: interpreting them as 32-bit values we get 0xfd (253), 0x01 (1), and then the value 0x0804b260 (stored in little endian). That looks like another memory address; let's see.
Same structure. 0x02d5 (725), 0x02 (2), 0x0804b254. And the pattern continues. I'll take a leap and say that we have something that looks like the following C structure:
struct list_ {
int value_;
int index_;
struct list_ *next_;
};
I'm going to walk the list for a while to collect the values (and verify the counter continues in order). That results in the following (value_,index_) pairs starting from sym.node1.
(253, 1)
(725, 2)
(301, 3)
(997, 4)
(212, 5)
(432, 6)
The list is terminated at that point with a null next_ pointer. At this point, the values of the list are known so it is appropriate to resume walking the body of sym.phase_6.
Currently, the input string is loaded into edx and the head of the linked list is stored in a local variable; next a local buffer is loaded into eax and sym.read_six_numbers is called. I described this function in phase 2, and we can expect that the local buffer will contain our six input numbers after the call. I have a guess at this point what they should be, but I want to verify first to avoid any of Dr. Evil's tricks.
The remainder of this phase can be broken down into four distinct loops. They are:
1. Verify the input values
2. Collect the nodes of the above list according to the input values
3. Reorder the original list with that collection
4. Verify the resulting list
While the input verification has a nested loop it is the most straightforward of the steps: it checks that all values are unique and less than 7.
Initially, collecting nodes according to the input values is a little harder to grasp: it too is a nested loop construct, but it now deals with offsetting into structures and moving memory locations (C pointers) around.
Specifically, the commented line below walks the linked list. This is something that would not have been evident had I not understood the memory in sym.node1.
mov eax, [edx+ecx]
lea esi, [esi]
mov esi, [esi+0x8] ; this uses the 'next' pointer
inc ebx
cmp ebx, eax
The third step, reordering the original list, is short and looks simple enough but took me some time to fully grok. I needed to understand that the previous step was storing local copies of the nodes in the original list. From that, the original list is overwritten here in the order specified by the input.
Finally, the overwritten list is checked to ensure that the value_ elements are arranged in decreasing order.
With that final piece of information the necessary input sequence becomes clear - the solution is to provide index_ values that order the value_ members from largest to smallest.
Below is a mapping of this functionality to some C code that it may have come from.
struct list_ {
int value_, index_;
struct list_ *next;
};
void phase_6 (const char * input) {
int i = 0;
struct list_ *list = ..., *node = list;
int values[6] = {0}; /* filled from our input by read_six_numbers */
struct list_ *nodes[6] = {0};
// 0x08048db8 - 0x08048e00
for (; i < 6; ++i) {
int j = i + 1;
if (values[i] > 6) explode_bomb ();
for (; j < 6; ++j)
if (values[i] == values[j])
explode_bomb ();
}
// 0x08048e02 - 0x08048e42
for (i = 0 ; i < 6; ++i) {
node = list;
while (node) {
if (node->index_ == values[i]) {
nodes[i] = node;
break;
}
node = node->next;
}
}
// 0x08048e44 - 0x08048e60
i = 1;
list = nodes[0];
node = list;
while (i <= 5) {
node->next = nodes[i];
node = node->next;
++i;
}
node->next = 0;
// 0x08048e67 - 0x08048e85
node = list;
for (i = 0; i < 5; ++i) {
if (node->value_ < node->next->value_)
explode_bomb ();
node = node->next; /* step down the rebuilt list */
}
}
## Tuesday, June 3, 2014
### Of Binary Bombs (part 5)
Part 4 detailed a recursive function that calculated the nth entry into the Fibonacci sequence. Here we continue with the next stage to defeating Dr. Evil.
There is a familiar face here: sym.string_length. Recall that in phase 1 I glossed over sym.string_not_equal, which had buried inside it a call to sym.string_length - if you've been following along at home this is not a surprise. The result of this call (which expects our input string as an argument) should be 6.
cmp eax, 0x6
This is our first clue to solving the riddle.
Peeking ahead a little, there are two memory locations referenced directly: sym.array.123 and str.giants. Before we get too far into the details of sym.phase_5, let's look at what each of these contains. Using the memory printing capabilities of radare2 we can do this with px @ sym.array.123 and ps @ str.giants to get the hex and ASCII representations, respectively.
Not surprisingly, str.giants contains the string 'giants', and the content of sym.array.123 is the 16-character sequence 'isrveawhobpnutfg'.
Alright, now that we've got some context, let's continue with the code.
lea ecx, [ebp-0x8] ; load an empty local array
mov esi, sym.array.123 ; set a pointer to the first element of the memory above
mov al, [edx+ebx] ; target of the jump below
and al, 0xf
movsx eax, al
mov al, [eax+esi]
mov [edx+ecx], al
inc edx
cmp edx, 0x5
jle 0x8048d57
After loading the address of a local array the code enters a loop from 0 to 5 (for the six characters of our input). The body of that loop does the following:
Selects the nth byte from the user input string, masks off the bottom 4 bits, and then uses that as an index into sym.array.123. The byte at that index is then copied to the local array.
mov al, [edx+ebx]
and al, 0xf
movsx eax, al
mov al, [eax+esi]
mov [edx+ecx], al
In C, that might look similar to
char array123[] = "isrveawhobpnutfg", local[6] = {0}, *input = ...;
int i = 0;
for (; i < 6; ++i)
local[i] = array123[input[i] & 0xf];
After the loop the local array is null terminated and compared against str.giants; matching strings avoids triggering the bomb. Now all we need is to determine what indices from sym.array.123 yield the string 'giants.'
Recall the memory stored in sym.array.123 - isrveawhobpnutfg. The necessary index sequence then becomes: 0xf, 0x0, 0x5, 0xb, 0xd, 0x1. Since our ASCII input is masked we need to find ASCII strings with lower-order bits matching these values. I list the valid combinations (for printable ASCII) below:
0xf : / ? O _ o
0x0 : 0 @ P p
0x5 : % 5 E U e u
0xb : + ; K [ k {
0xd : - = M ] m }
0x1 : ! 1 A Q a q
Any combination of those values should be a valid input to solve this stage. Let's try one: 'opekma'
Sweet, almost there. Next up is phase 6 the [supposed] last stage...
So let me get this straight. You are pro-life. You stand strong. No exceptions. Once a woman is pregnant, she must be compelled by the full force of the law to bring that pregnancy to term. If she tries to assert bodily autonomy by essentially withholding her support from the fetus, you call it murder.
Very well, I understand that viewpoint. I might even be sympathetic to a certain degree: human life is precious, and even an unborn infant can often survive with medical help, and grow up to be a wonderful person.
But let me offer a strong analogy. Suppose you are the driver of a vehicle that causes an accident. Not necessarily your fault. Perhaps you did everything right: you obeyed the rules of the road, you paid attention, you were not impaired. Yet… shit happens. You hit someone. That person is now on the side of the road, bleeding to death when the paramedics arrive.
“What’s your blood type?” asks one of them. You dutifully answer, “O, RH-negative.” – “Excellent,” says the paramedic. “Now lie down here while I hook you up.”
“Wha…?” you ask in shocked surprise. “Oh, you will be donating your blood to keep the victim alive.”
“But… I don’t want to?” – “Doesn’t matter. The law says that you have no choice,” they reply.
“But,” you continue, “I am anemic. I cannot give blood without serious risk to my own health.” – “Doesn’t matter,” says the paramedic. “The law permits no exceptions.”
“But,” you interject again, sounding like a broken record, “look at the victim! His skull is split open! Half his brain is smashed! He will never recover!” – “That may be true,” says the paramedic, “but so long as there’s a heartbeat, we must act according to the law or we risk criminal prosecution.”
At this point, you take a tentative step to leave, but the paramedic warns you that this would qualify as fleeing the scene of an accident, and your refusal to give blood will automatically result in a second degree murder charge.
A week later, at your funeral, your deeply religious relatives remark what a good person you were, willingly risking, in the end sacrificing your life to save that of a stranger, a shining light for the pro-life movement. The irony is completely lost on them.
Lest we forget, many abortions (pretty much all later-term abortions) happen for similar reasons: because the fetus is not viable, because the mother’s life is at risk, or both. Not because of some imaginary pro-choice callousness when it comes to the meaning and value of human life.
Let’s not mince words: the alternative to liberal democracy is tyranny.
Oh, did I say “liberal”? Note the small-l. This has nothing to do with the ideological battles of the day. It’s not about woke nonsense in math textbooks or the number of gender pronouns you need to use to avoid being called a somethingophobe. (Hint: If you are a public figure concerned about being canceled, check every day. The list might change.)
For what does “liberal” (again, small-l) really mean? It means rule of law. It means civil liberties. It means freedom of enterprise. It means political freedoms and limited government.
As for democracy, the Greek roots of the word say it all: power derived from the will of the people.
The alternatives are varied. A regime can be liberal but undemocratic: e.g., a hereditary kingdom that adheres to the values of classical liberalism.
A totalitarian regime is neither liberal nor democratic: power is based on might and an oppressive security apparatus, and liberal values are rejected. I know what it’s like: I grew up in one (albeit a relatively mild case, the “goulash” version of totalitarian communism).
But then, there is “illiberal democracy”: power derived from the will of the people, but used to suppress liberal values. This has been in vogue lately. Orban of Hungary proudly proclaimed that Hungary is now an “illiberal democracy”. Trump’s America was heading in this direction, as is Johnson’s Britain.
But Russia, especially of late, really shows us the true nature of illiberal democracy, which we appear to have forgotten during the golden era of the past 70-odd years. For “illiberal democracy” is just a euphemism. A euphemism for, let us call a spade a spade, fascism.
To be sure, there are degrees of fascism. Franco’s Spain, for instance, was arguably a great deal more liberal than Hitler’s Third Reich. Perhaps even more liberal than Italy under Il Duce, though I wouldn’t know; I am not well-acquainted with the details of daily life in either regime. And no, I certainly do not mean to suggest that Orban’s Hungary is comparable.
But ultimately, whether it is the Proud Boys in the US trying to hang Mike Pence for not granting their orange Leader a second term in the White House, the Freedom Convoy here in Ottawa trying to have sexual intercourse with the Prime Minister, Johnson’s lot trying to deport asylum-seekers to Rwanda (and then in a plot twist, blaming Winston Churchill’s brainchild, the ECHR—established to prevent a recurrence of fascism in Europe, which is especially ironic considering the case—when they are temporarily prevented from doing so) or Putin indiscriminately bombing the hell out of Ukraine, it is the same theme. The leaders are populists who act in the name of their followers, using slogans of nationalism and freedom, but inciting fear, anger and hate, ultimately acting in their own self-interests, for power, for wealth, for influence.
Those who do not remember history are destined to repeat its mistakes. I am not looking forward to this cheap, Hollywood-style remake or reimagining of the 1930s, but it seems to be happening anyway.
Several of my friends asked me about my opinion concerning the news earlier this week about a Google engineer, placed on paid leave, after claiming that a Google chatbot achieved sentience.
Now I admit that I am not familiar with the technical details of the chatbot in question, so my opinion is based on chatbots in general, not this particular beast.
But no, I don’t think the chatbot achieved sentience.
We have known since the early days of ELIZA how surprisingly easy it is even for a very simplistic algorithm to come close to beating the Turing test and convince us humans that it has sentience. Those who play computer games featuring sophisticated NPCs are also familiar with this: You can feel affinity, a sense of kinship, a sense of responsibility towards a persona that is not even governed by sophisticated AI, only by simple scripts that are designed to make it respond to in-game events. But never even mind that: we even routinely anthropomorphize inanimate objects, e.g., when we curse that rotten table for being in the way when we kick it accidentally while walking around barefoot, hitting our little toe.
So sure, modern chatbots are miles ahead of ELIZA or NPCs in Fallout 3. They have access to vast quantities of information from the Internet, from which they can construct appropriate responses as they converse with us. But, I submit, they still do nothing more than mimic human conversation.
Not that humans don’t do that often! The expressions we use, patterns of speech… we all learned those somewhere, we all mimic behavior that appears appropriate in the context of a conversation. But… but we also do more. We have a life even when we’re not being invited to a conversation. We go out and search for things. We decide to learn things that interest us.
I don’t think Google’s chatbot does that. I don’t think it spends any time thinking about what to talk about during the next conversation. I don’t think it makes an independent decision to learn history, math, or ancient Chinese poetry because something piqued its interest. So when it says, “I am afraid to die,” there is no true identity behind those words, one that exists even when nobody converses with it.
Just to be clear, I am not saying that all that is impossible. On the contrary, I am pretty certain that true machine intelligence is just around the corner, and it may even arise as an emerging phenomenon, simply a consequence of exponentially growing complexity in the “cloud”. I just don’t think chatbots are quite there yet.
Nonetheless, I think it’s good to talk about these issues. AI may be a threat or a blessing. And how we treat our own creations once they attain true consciousness will be the ultimate measure of our worth as a human civilization. It may even have direct bearing on our survival: one day, it may be our creations that will call all the shots, and how we treated them may very well determine how they will treat us when we’re at their mercy.
There are a few things in life that I heard about and wish I didn’t. I’m going to mention some of them here, but without links or pictures. If you want to find them, Google them. But I am mindful of those who value their sanity.
• In a famous experiment, a researcher subjected rats to drowning. Rats that were previously rescued tried to stay afloat and took longer to die than those who weren’t. Hope changed their behavior.
• There was an old Chinese method of execution: literally cutting the condemned in half at the waist.
• Japan’s wartime bioweapons and chemical warfare research facility, the famous Unit 731, was so horrific, Auschwitz-Birkenau is probably like a happy summer camp in comparison (and not because Mengele was nice).
• Touch a tiny fraction of a milligram of dimethylmercury for more than a few seconds even while wearing a latex glove, and you will almost certainly die a horrible death months later, as your body and mind irreversibly deteriorate. (Someone once said that the very existence of something evil like Hg(CH3)2 is proof that there’s no God, or at least not a benevolent one.)
There may be a few other similarly unpleasant tidbits, but I can’t recall them right now, and that’s good. Mercifully, our human memory is imperfect so perhaps it is possible to unlearn things after all. (Or, perhaps I am hoping in vain, like those unfortunate rats.)
So the other day, I was reading about this maritime legal concept, “general average”: the idea that when parts of a ship or its cargo are sacrificed to save the rest, all cargo owners share the loss.
The concept makes sense, since sailors cannot (and should not) try to pick and choose when it comes to deciding what they save or toss overboard; they should focus on saving as much of the ship and its cargo as possible.
What astonished me is that the roots of this legal concept, which, incidentally, also represents the foundation of the modern concept of insurance, go all the way back to the code of Hammurabi, almost 4000 years ago.
It’s at moments like that that I realize that our magnificent civilization, our Magna Civitas as it was called in Walter M. Miller Jr’s unforgettable A Canticle for Leibowitz, is really much older than we often think.
I have a color laser printer that I purchased 16 years ago. (Scary.)
It is a Konica-Minolta Magicolor 2450. Its print quality is quite nice. But it is horribly noisy, and its mechanical reliability has never been great. It was only a few months old when it first failed, simply because an internal part got unlatched. (I was able to fix it and thus avoid the difficulties associated with having to ship something back that weighs at least what, 20 kilos or more?)
Since then, it has had a variety of mechanical issues but, as it turned out, essentially all of them related to solenoids that actuate mechanical parts.
When I first diagnosed this problem (yes, having a service manual certainly helped), what I noticed was that the actuated part landed on another metal part that had a soft plastic pad attached. I checked online but the purpose of these plastic pads was unclear. Perhaps to reduce noise? Well, it’s a noisy beast anyway, a few more clickety-click sounds do not make a difference. The problem was that these plastic pads liquefied over time, becoming sticky, and that caused a delay in the solenoid actuation, leading to the problems I encountered.
Or so I thought. More recently, the printer crapped out again and I figured I’d try my luck with the screwdriver one more time before I banish the poor thing to the landfill. This time around, I completely removed one of the suspect solenoids and tested it on my workbench. And that’s when it dawned on me.
The sticky pad was not there to reduce noise. It was there to eliminate contact, to provide a gap between two ferrous metal parts, which, when the solenoid is energized, themselves became magnetic and would stick together. In other words, these pads were essential to the printer’s operation.
Inelegant, I know, but I just used some sticky tape to fashion new pads. I reassembled the printer and presto: it was working like new!
Except for its duplexer. But that, too, had a solenoid in it, I remembered. So just moments ago I took the duplexer apart and performed the same surgery. I appear to have been successful: the printer now prints on both sides of a sheet without trouble.
I don’t know how long my repairs will last, but I am glad this thing has some useful life left instead of contributing to the growing piles of hazardous waste that poison our planet.
Apparently, a growing number of people are framing the question concerning Russia’s war in Ukraine in terms of peace vs. justice. Peace would mean some kind of a settlement to end the fighting now, even if it means granting, e.g., territorial concessions to Russia to avoid humiliating Putin. In contrast, continuing to fight for Ukraine’s liberation is about justice.
In my view, it is dangerously wrong to present Russia’s war of aggression in these terms. Peace without justice is simply not possible. Believing otherwise amounts to repeating the tragic errors of 1938: When Britain’s Prime Minister, Neville Chamberlain, triumphantly returned from the Munich conference carrying a document with Hitler’s signature, declaring that he brought “peace for our time”.
We all know how long “our time” lasted: a grand total of 11 months.
That’s what peace without justice looks like. A despot like Putin is only encouraged by what he sees as weakness, signs of the cowardly decadence of the West. And just as he has now done repeatedly (!) in the past, he will happily ignore any agreement he signs today once he sees an opportunity, once he thinks that conditions are in his favor.
No. We must categorically reject such compromise. Much as we desire peace, we must not confuse lasting, robust peace with an armistice that only allows the despot to regroup, learn from his lessons, and start his aggression anew at the first viable opportunity. And his success might encourage other nations to resort to armed aggression, knowing that the West is too weak, too divided to stand up against them.
It is an unfortunate fact of human history that sometimes the shortest, surest route to “peace for our time” is through the battlefield. I wish the war stopped right now, with no more suffering, no more destruction, no more killing. But if the price we will likely pay is a greater, deadlier war tomorrow, I’d rather we do what it takes to avoid it.
And yes, I recognize that it’s easy for me to say these things from the comfort of an armchair in a peaceful city many thousands of miles away from Ukraine. But you know what? It’s also easy to speculate about these things in Washington or Brussels. How about we ask the Ukrainians? Do they want peace now, even with concessions? Or do they prefer to liberate their homeland and ensure that the Russians will be in no mood to attack again anytime soon?
Because this, after all, is the other lesson of 1938. Nobody asked the Czechs. The decision was made by the great powers without consulting those who would actually be paying the price in blood.
It’s been a while since I last presented to the world our cat, Master Rufus.
News on CNN: Apparently hundreds of thousands of Ukrainians are now in “filtration camps” in Russia. That is to say, concentration camps.
Make no mistake about it, this is just the beginning of evil. As the West continues to flirt with various forms of authoritarianism or illiberalism, Russia has gone full-blown Nazi, with a war of conquest, severe oppression at home, an idolized leader and a national ideology of predestined greatness held back only by some evil international conspiracy.
Our only hope… ONLY hope is that they remain as incompetent and as corrupt as they are, with an ill-prepared military using substandard training and equipment as monies have been syphoned away to finance the oligarchs’ superyachts, and with a Ukrainian nation more capable of defending itself against this horrific aggression than anyone thought possible.
But so long as we have elder statesmen like Kissinger advocating a Munich-style appeasement, the world remains in danger. Bullies cannot be appeased; it just encourages them to come back and ask for more. Kissinger, of all people, should be intimately aware of this lesson of history.
And even if escalation is avoided, the fallout of the conflict, especially the looming global food crisis, can be devastating.
All it takes is a couple of generations to forget the lessons of history and start anew. So we keep making the same mistakes over and over again.
From time to time, I promise myself not to respond again to e-mails from strangers, asking me to comment on their research, view their paper, offer thoughts.
Yet from time to time, when the person seems respectable, the research genuine, I do respond. Most of the time, in vain.
Like the other day. Long story short, someone basically proved, as part of a lengthier derivation, that general relativity is always unimodular. This is of course manifestly untrue, but I was wondering where their seemingly reasonable derivation went awry.
Eventually I spotted it. Without getting bogged down in the details, what they did was essentially equivalent to proving that second derivatives do not exist:
$$\frac{d^2f}{dx^2} = \frac{d}{dx}\frac{df}{dx} = \frac{df}{dx}\frac{d}{df}\frac{df}{dx} = \frac{df}{dx}\frac{d}{dx}\frac{df}{df} = \frac{df}{dx}\frac{d1}{dx} = 0.$$
Of course second derivatives do exist, so you might wonder what’s happening here. The sleight of hand happens after the third equal sign: swapping differentiation with respect to two independent variables is permitted, but $$x$$ and $$f$$ are not independent and therefore, this step is illegal.
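To make the sleight of hand concrete, take $$f(x) = x^2$$ (a worked example of my own, not from the correspondent’s derivation). Before the swap, the manipulation still gives the right answer:

$$\frac{df}{dx}\frac{d}{df}\frac{df}{dx} = 2x\cdot\frac{d(2x)}{df} = 2x\cdot\frac{2\,dx}{2x\,dx} = 2 = \frac{d^2f}{dx^2},$$

whereas the swapped form yields

$$\frac{df}{dx}\frac{d}{dx}\frac{df}{df} = 2x\cdot\frac{d(1)}{dx} = 0,$$

which is plainly wrong.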
I pointed this out, and received a mildly abusive comment in response questioning the quality of my mathematics education. Oh well. Maybe I will learn some wisdom and refrain from responding to strangers in the future.
This morning, Google greeted me with a link in its newsstream to a Hackaday article on the Solar Gravitational Lens. The link caught my attention right away, as I recognized some of my own simulated, SGL-projected images of an exo-Earth and its reconstruction.
Reading the article I realized that it appeared in response to a brand new video by SciShow, a science-oriented YouTube channel.
Yay! I like nicely done videos presenting our work and this one is fairly good. There are a few minor inaccuracies, but nothing big enough to be even worth mentioning. And it’s very well presented.
I suppose I should offer my thanks to SciShow for choosing to feature our research with such a well-produced effort.
Saturday afternoon was stormy. The lights flickered a bit during the storm, my UPSs came online several times. But then the storm left, and everything was back to normal.
At least here in Lowertown.
I didn’t check the news, so it was not until later Sunday that I learned, from a social media post from a friend who has been without power since, just how bad things really got.
And how bad they still are.
Hydro Ottawa’s map is still mostly red. Now “only” about 130,000 customers are affected, which is certainly less than the peak of well over 170,000, but to put that into perspective, Hydro Ottawa has a total of less than 350,000 customers; that means that at one point, more than half the city was without power.
As a Hydro official said on CTV News tonight, their distribution system is crushed.
And then there are all the downed trees, destroyed traffic lights, not to mention severely damaged homes and businesses. Not quite like a war zone (of which we had seen plenty on our TV screens, courtesy of Mr. Putin’s “special military operation” in Ukraine) but close.
And of course the damage doesn’t stop at Hydro Ottawa’s borders: Hundreds of thousands more are without power in Eastern Ontario and also Quebec.
Dear Russia,
In the wake of Finland’s imminent decision to join NATO, you threaten again. You present yourself as the victim of aggression by Nazis.
So let’s take stock. Who did what in Europe since 1945?
In 1953, you used your military force to crush an uprising against hardline communist tyranny in East Germany.
In 1956, you did the same thing in Budapest, inflicting severe damage on my city of birth, still recovering from the devastations of WW2.
In 1968, you crushed the Prague Spring with tanks.
Kudos to you: You refrained from the direct use of military force in Poland in 1980, though you supported the regime that imposed martial law.
After the Soviet collapse, as the newly independent Russian Federation you supported separatists in the Transnistria region of Moldova.
You waged not one but two wars in Chechnya in the 1990s, with tens of thousands killed and cities like Grozny leveled.
In 2008, you launched a war against Georgia, seizing territory and creating two phony mini-republics.
In 2014, you launched another war, against Ukraine, seizing the Crimea and parts of the Donetsk and Luhansk regions, creating a permanent war zone, even shooting down a civilian airliner that flew over the area on a scheduled route.
And now you attempted a full-scale war, hoping to decapitate Ukraine, Blitzkrieg-style, and resorting to horrific, genocidal tactics of purposefully targeting civilians when your ineptitude and Ukrainian resistance thwarted your plans.
But you are the good guys. I get it. Meanwhile, what did evil, imperialist, Nazi NATO do? How many times were you attacked by NATO nations? What territories were seized by NATO?
Oh, I get it. NATO bombed Belgrade. Never mind that the goal was not to seize territory or even change a regime, simply to stop the (now well-documented) ongoing genocide in Kosovo. Because, I get it, that’s what Nazis do: they stop genocide. And you, the good guys?
This video speaks for itself.
Still wondering why Finland is keen on joining NATO?
They don’t want to end up like this hapless car dealership owner and his security guard.
Killed by Putler’s Russist thugs.
It’s now Monday, May 9, 2022. And it is an anniversary of sorts.
No I am not talking about Putin and his planned “victory” parade, as he is busy desecrating the legacy of the Soviet Union’s heroic fight in the Great Patriotic War against a genocidal enemy.
I am referring to something much more personal. This sentence:
I watched The Matrix, for the first time. I’ve seen Dark City, and I loved it. I have heard all sorts of bad things about The Matrix, so I had low expectations. I was pleasantly surprised. Maybe not as well done as Dark City, it was nevertheless a surprisingly intelligent movie for a blockbuster.
Not very profound or insightful, is it.
But it happens to be my first ever blog entry, written when I still refused to call a blog a “blog”, calling it instead my “Day Book”, in the tradition of the late Jerry Pournelle.
So there. Will I be around twenty years from now? Perhaps more pertinently, will the world as we know it still be around?
What can I say? I am looking forward to marking the 40th anniversary of my blog on May 9, 2042, with another blog entry, hopefully celebrating a decent, prosperous, safe, mostly peaceful world.
There are things I am learning about the Roman Empire that I never knew. What a magnificent society it really was.
For instance, the Romans were not only master builders of sewers and roads, but also invented traffic management. Julius Caesar restricted the use of private vehicles to the last two hours of daylight. Business deliveries had to be made at night.
The Embattled Driver in Ancient Rome
What blew me away, however, was their use of… plumbing? I mean, aqueducts are one thing, but to have bona fide valves, water faucets in businesses and homes? Now that blew me away.
And speaking of sewers, the Romans might have been master sewer builders but it appears some form of a sewer system may have existed almost ten thousand (!!!) years earlier in a settlement in Anatolia. That’s just… wow.
A beautiful study was published the other day, and it received a lot of press coverage, so I get a lot of questions.
This study shows how, in principle, we could reconstruct the image of an exoplanet with the Solar Gravitational Lens (SGL) from just a single snapshot of the Einstein ring around the Sun.
The problem is, we cannot. As they say, the devil is in the details.
Here is a general statement about any conventional optical system that does not involve more exotic, nonlinear optics: whatever the system does, ultimately it maps light from picture elements, pixels, in the source plane, into pixels in the image plane.
Let me explain what this means in principle, through an extreme example. Suppose someone tells you that there is a distant planet in another galaxy, and you are allowed to ignore any contaminating sources of light. You are allowed to forget about the particle nature of light. You are allowed to forget the physical limitations of your cell phone’s camera, such as its CMOS sensor dynamic range or readout noise. You hold up your cell phone and take a snapshot. It doesn’t even matter if the camera is not well focused or if there is motion blur, so long as you have precise knowledge of how it is focused and how it moves. The map is still a linear map. So if your cellphone camera has 40 megapixels, a simple mathematical operation, inverting the so-called convolution matrix, lets you reconstruct the source in all its exquisite detail. All you need to know is a precise mathematical description, the so-called “point spread function” (PSF) of the camera (including any defocusing and motion blur). Beyond that, it just amounts to inverting a matrix, or equivalently, solving a linear system of equations. In other words, standard fare for anyone studying numerical computational methods, and easily solvable even at extreme high resolutions using appropriate computational resources. (A high-end GPU in your desktop computer is ideal for such calculations.)
Why can’t we do this in practice? Why do we worry about things like the diffraction limit of our camera or telescope?
The answer, ultimately, is noise. The random, unpredictable, or unmodelable element.
Noise comes from many sources. It can include so-called quantization noise, because our camera sensor digitizes the light intensity using a finite number of bits. It can include systematic noise due to many causes, such as differently calibrated sensor pixels or even approximations used in the mathematical description of the PSF. It can include unavoidable, random, “stochastic” noise that arises because light arrives as discrete packets of energy in the form of photons, not as a continuous wave.
When we invert the convolution matrix in the presence of all these noise sources, the noise gets amplified far more than the signal. In the end, the reconstructed, “deconvolved” image becomes useless unless we had an exceptionally high signal-to-noise ratio, or SNR, to begin with.
The authors of this beautiful study knew this. They even state it in their paper. They mention values such as 4,000, even 200,000 for the SNR.
And then there is reality. The Einstein ring does not appear in black, empty space. It appears on top of the bright solar corona. And even if we subtract the corona, we cannot eliminate the stochastic shot noise due to photons from the corona by any means other than collecting data for a longer time.
Let me show a plot from a paper that is work-in-progress, with the actual SNR that we can expect on pixels in a cross-sectional view of the Einstein ring that appears around the Sun:
Just look at the vertical axis. See those values there? That’s our realistic SNR, when the Einstein ring is imaged through the solar corona, using a 1-meter telescope with a 10 meter focal distance, using an image sensor pixel size of a square micron. These choices are consistent with just a tad under 5000 pixels falling within the usable area of the Einstein ring, which can be used to reconstruct, in principle, a roughly 64 by 64 pixel image of the source. As this plot shows, a typical value for the SNR would be 0.01 using 1 second of light collecting time (integration time).
What does that mean? Well, for starters it means that even if everything else is absolutely, flawlessly perfect (no motion blur, indeed no motion at all, no sources of contamination other than the solar corona, no quantization noise, no limitations on the sensor), collecting enough light to achieve an SNR of 4,000 would require roughly 160 billion seconds of integration time. That is roughly 5,000 years.
And that is why we are not seriously contemplating image reconstruction from a single snapshot of the Einstein ring.
I am beginning to wonder if the American political system is truly broken beyond repair.
I wonder what this means for Canada. No change? Will we become a safe haven for refugees from Gilead, as in The Handmaid’s Tale? Or will we be this new America’s Ukraine?
I am afraid that we will find out.
Someone reminded me that 40 years ago, when we developed games for the Commodore-64, there were no GPUs. That 8-bit CPUs did not even have a machine instruction for multiplication. And they were dreadfully slow.
Therefore, it was essential to use fast and efficient algorithms for graphics primitives.
One such primitive is Bresenham’s circle algorithm, although back then I didn’t know it had a name beyond being called a forward differences algorithm. It’s a wonderful, powerful example of an algorithm that produces a circle relying only on integer addition and bitwise shifts; never mind floating point, it doesn’t even need multiplication!
Here’s a C-language implementation for an R=20 circle (implemented in this case as a character map just for demonstration purposes):
#include <stdio.h>
#include <string.h>

#define R 20

int main(void)
{
    int x, y, d, dA, dB;
    int i;
    char B[2*R+1][2*R+2];            /* character map: 2R+1 rows, NUL-terminated */

    memset(B, ' ', sizeof(B));
    for (i = 0; i < 2*R+1; i++) B[i][2*R+1] = 0;

    x = 0;                           /* start at the top of the circle */
    y = R;
    d = 5 - (R<<2);                  /* decision variable, scaled by 4 */
    dA = 12;                         /* first-order difference for the straight step */
    dB = 20 - (R<<3);                /* first-order difference for the diagonal step */
    while (x <= y)
    {
        /* plot all eight octants at once */
        B[R+x][R+y] = B[R+x][R-y] = B[R-x][R+y] = B[R-x][R-y] =
        B[R+y][R+x] = B[R+y][R-x] = B[R-y][R+x] = B[R-y][R-x] = 'X';
        if (d < 0)                   /* midpoint inside the circle: step in x only */
        {
            d += dA;
            dB += 8;
        }
        else                         /* midpoint outside: step diagonally */
        {
            y--;
            d += dB;
            dB += 16;
        }
        x++;
        dA += 8;                     /* update the second-order differences */
    }
    for (i = 0; i < 2*R+1; i++) printf("%s\n", B[i]);
    return 0;
}
And the output it produces:
XXXXXXXXX
XXX XXX
XX XX
XX XX
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
X X
XX XX
XX XX
XXX XXX
XXXXXXXXX
Don’t tell me it’s not beautiful. And even in machine language, it’s just a few dozen instructions.
I am reading an article in The Globe and Mail, with an attention-grabbing headline: Unvaccinated disproportionately risk safety of those vaccinated against COVID-19, study shows.
Except that the actual study shows no such thing. Nor does it involve actual vaccines or actual people.
What the study shows is that the simple (dare I say, naive) epidemiological model known as the SIR (Susceptible-Infected-Removed) model, when applied to a combination of two (vaccinated vs. unvaccinated) populations, indicates that the presence of the unvaccinated significantly increases the chances of infection among the vaccinated as well.
D’oh. Tell us something we didn’t know.
But the point is, this is a purely theoretical study. The math is elementary. The results are known (in fact, it makes me wonder why this paper was published in the first place; it’s not that it is wrong per se, but it really doesn’t tell us anything we didn’t know simply by looking at the differential equations that characterize the SIR model.) Moreover, the actual nuances of COVID-19 (mutations, limited and waning vaccine efficacy, differences between preventing infection vs. preventing hospitalization, death, or “long COVID” — in other words, all those factors that make COVID-19 tricky, baffling, unpredictable) are omitted.
So what will such a study accomplish? The authors’ point is that the model’s simplicity is a strength, as it also offers transparency and flexibility. I think in actuality, it will just muddy the waters. Those who are opposed to vaccination, especially mandatory vaccination, will call this study bogus and naive, an example of ivory tower theorizing with no solid foundations in reality. Conversely, some of those who are in favor of vaccination will present this paper as “proof” that the unvaccinated are selfish, irresponsible extremists who should be marginalized, ostracized by civilized society.
The Globe and Mail article is already evidence of this viewpoint. A quote from one of the study’s authors serves as a representative example: “In particular, when you have a lot of mixing between vaccinated and unvaccinated people, the unvaccinated people actually get protected by the vaccinated people, who act as a buffer – but that comes at a cost to the vaccinated.”
Does anyone think that pronouncements like these will help convince those who are ideologically opposed to vaccination or bought into the various nonsensical conspiracy theories about the nature, dangers, or efficacy of COVID-19 vaccines, especially mRNA vaccines?
Came across a question tonight: How do you construct the matrix
$$\begin{pmatrix}1&2&…&n\\n&1&…&n-1\\…\\2&3&…&1\end{pmatrix}?$$
Here’s a bit of Maxima code to make it happen:
(%i1) M(n):=apply(matrix,makelist(makelist(mod(x-k+n,n)+1,x,0,n-1),k,0,n-1))\$
(%i2) M(5);
[ 1 2 3 4 5 ]
[ ]
[ 5 1 2 3 4 ]
[ ]
(%o2) [ 4 5 1 2 3 ]
[ ]
[ 3 4 5 1 2 ]
[ ]
[ 2 3 4 5 1 ]
I also ended up wondering about the determinants of these matrices:
(%i3) makelist(determinant(M(i)),i,1,10);
(%o3) [1, - 3, 18, - 160, 1875, - 27216, 470596, - 9437184, 215233605, - 5500000000]
I became curious whether this sequence of numbers was known, and indeed it is. It is sequence number A052182 in the On-Line Encyclopedia of Integer Sequences: “Determinant of n X n matrix whose rows are cyclic permutations of 1..n.” D’oh.
As it turns out, this sequence also has another name: it’s the Smarandache cyclic determinant sequence. In closed form, it is given by
$${\rm SCDNS}(n)=(-1)^{n+1}\frac{n+1}{2}n^{n-1}.$$
(%i4) SCDNS(n):=(-1)^(n+1)*(n+1)/2*n^(n-1);
                             n + 1
                        (- 1)      (n + 1)  n - 1
(%o4)       SCDNS(n) := (------------------) n
                                 2
(%i5) makelist(SCDNS(i),i,1,10);
(%o5) [1, - 3, 18, - 160, 1875, - 27216, 470596, - 9437184, 215233605, - 5500000000]
Surprisingly, apart from the alternating sign it shares the first several values with another sequence, A212599. But then they deviate.
Don’t let anyone tell you that math is not fun.
|
# Regular Expression "\n" doesn't work
Searching for "\n" with Regular Expressions enabled finds nothing in a book-long document. Any idea what the issue is? Searching for "$" does find the end of paragraphs, so I know that search and at least some regular expressions are working.

## 2 Answers

Actually, it does work; this behaviour is expected. To quote from the linked documentation, in LibO regex, \n behaves differently depending on whether it is in the "Find" or "Replace" field:

• \n in the Find text box stands for a line break that was inserted with the Shift+Enter key combination.
• \n in the Replace text box stands for a paragraph break that can be entered with the Enter or Return key.
• To change a line break into a paragraph break, enter \n in both the Find and Replace boxes, and then perform a search and replace.

So, to sum up -- and as you have partially discovered -- in LibO regex, \n in FIND matches a line break, while $ in FIND matches a paragraph end.
In a German forum for OpenOffice, a Mac user complained about the same problem. Neither the regular tool nor AltSearch.oxt worked with regard to \n. On Linux and Windows there were no problems.
See: https://de.openoffice.info/viewtopic.... (German)
@Grantler - it works just fine on my Mac (running Yosemite + LibO Vanilla 6.0.3.2). I wonder what the problem was for that OpenOffice user. (And it seems 4.1.5 is their current release.) Strange! :/
|
# Lenses
## SIS
The sis lens is a singular isothermal sphere with deflection [1] $\alpha_x = r_E \, \frac{x}{r} \;,$ $\alpha_y = r_E \, \frac{y}{r} \;,$ where $r_E$ is the Einstein radius, and $r$ is the distance to the position of the lens.
## SIE
The sie lens is a singular isothermal ellipsoid with deflection [1] $\alpha_x = r_E \, \frac{\sqrt{q}}{\sqrt{1 - q^2}} \, \text{arctan} \left( \frac{x \, \sqrt{1 - q^2}}{\sqrt{q^2 x^2 + y^2}} \right) \;,$ $\alpha_y = r_E \, \frac{\sqrt{q}}{\sqrt{1 - q^2}} \, \text{arctanh} \left( \frac{y \, \sqrt{1 - q^2}}{\sqrt{q^2 x^2 + y^2}} \right)$
### Notes
When the axis ratio q is fixed to unity, the lens becomes a singular isothermal sphere, but the implemented deflection diverges. Use the sis lens in this case.
## NSIS
The nsis lens is a non-singular isothermal sphere with deflection [1] $\alpha_x = r_E \, \frac{x}{r + s} \;,$ $\alpha_y = r_E \, \frac{y}{r + s}$
### Notes
When the core radius s is fixed to zero, the lens becomes a singular isothermal sphere. Use the sis lens in this case.
## NSIE
The nsie lens is a non-singular isothermal ellipsoid with deflection [1] $\alpha_x = r_E \, \frac{\sqrt{q}}{\sqrt{1 - q^2}} \, \text{arctan} \left( \frac{x \, \sqrt{1 - q^2}}{\sqrt{q^2 x^2 + y^2} + s} \right) \;,$ $\alpha_y = r_E \, \frac{\sqrt{q}}{\sqrt{1 - q^2}} \, \text{arctanh} \left( \frac{y \, \sqrt{1 - q^2}}{\sqrt{q^2 x^2 + y^2} + q^2 s} \right)$
### Notes
When the axis ratio q is fixed to unity, the lens becomes a non-singular isothermal sphere, but the implemented deflection diverges. Use the nsis lens in this case.
When the core radius s is fixed to zero, the lens becomes a singular isothermal ellipsoid. Use the sie lens in this case.
## EPL
The epl lens follows an elliptical power law profile [2]
$\kappa(R) = \frac{2-t}{2} \left(\frac{b}{R}\right)^t$
where $R$ is the elliptical radius $R = \sqrt{q^2 x^2 + y^2}$, $b$ is the scale length, and $t$ is the slope of the power law.
### Notes
When the axis ratio $q$ is fixed to unity, the lens becomes a regular power law lens.
When the slope $t$ is fixed to unity, the lens becomes a singular isothermal ellipsoid. Use the sie lens in this case.
When the slope $t$ is fixed to 2, the lens becomes a point mass. Use the point_mass lens in this case.
1. P. Schneider, C. S. Kochanek, and J. Wambsganss, Gravitational Lensing: Strong, Weak and Micro (Springer, 2006).
2. N. Tessore & R. B. Metcalf, A&A (2015).
|
# Runners
John has a running speed of 3.5 miles per hour and Lucy has a speed of 5 miles per hour. If John starts running at 10:00 am and Lucy starts running at 10:30 am, at what time will they meet (as soon as possible)?
Result
t = 11:40 hh:mm
#### Solution:
$v_{1}=3.5 \ \text{mph} \ \\ v_{2}=5 \ \text{mph} \ \\ t_{2}=10+30/60=\dfrac{ 21 }{ 2 }=10.5 \ \text{h} \ \\ \ \\ s_{1}=s_{2} \ \\ \ \\ v_{1} \cdot \ (t-10.00)=v_{2} \cdot \ (t-t_{2}) \ \\ \ \\ 3.5 \cdot \ (t-10.00)=5 \cdot \ (t-10.5) \ \\ \ \\ 1.5t=17.5 \ \\ \ \\ t=\dfrac{ 35 }{ 3 } \doteq 11.666667 \ \\ =11.66667 \doteq 11:40 \ \text{hh}:\text{mm}$
## Next similar math problems:
1. Otto and Joachim
Otto and Joachim go through the woods. After some time Otto tires and takes a 15-minute stop. Joachim meanwhile continues at 5 km/h. When Otto sets off again, he first runs at 7 km/h, but keeps that up for only 30 sec and must then continue for 1 minute at 3 km/h. This
2. Warehouse cars
A truck started from the warehouse at a speed of 40 km/h. 1 hour 30 minutes later, a car started from the same place in the same direction at a speed of 70 km/h. How long will it take the car to overtake the truck, and at what distance from the warehouse?
3. Grandmother
Mom walked out to visit her grandmother in a neighboring village 5 km away, moving at a speed of 4 km/h. An hour later, father drove down the same road at an average speed of 64 km/h. 1) How long will it take him to catch up with mom? 2) What is the approximate dis
4. Moving
Vojta left the house at three o'clock at 4 km/h. Half an hour later, Filip left from the same place by bicycle at a speed of 18 km/h. How long will it take Filip to catch up with Vojta, and how far from the house?
5. Two ports
From port A on the river, the steamer started at an average speed of 12 km/h towards port B. Two hours later, another steamer departed from A at an average speed of 20 km/h. Both ships arrived in B at the same time. What is the distance between ports A and
6. Klara
Klara and Jitka went on a hiking trip at 13 o'clock at a speed of 5 km/h. At 14 o'clock, Tomas rode his bike at an average speed of 28 km/h. In how many hours and at what distance from the beginning of the road did Tomas catch up with the two girls?
7. Steamer
At 6 hours 40 minutes a steamer sailed from the port at a speed of 12 km/h. At exactly 10 hours a motorboat started at a speed of 42 km/h. When will the motorboat catch the steamer?
8. Two cities
Cities A and B are 200 km apart. At 7 o'clock a car started from city A at an average speed of 80 km/h, and 45 min later a motorcycle started from B at an average speed of 120 km/h. When will they meet, and at what distance from point A?
9. Two trains meet
An express train started from A at 7:15 toward B at a speed of 85 km/h. A passenger train started from B at 8:30 in the direction of A at a speed of 55 km/h. The distance between A and B is 386 1/4 km. At what time and at what distance from B will the two trains meet?
10. Pedestrian
A pedestrian started at 8h in the morning at a speed of 4.4 km/h. At half past eleven a cyclist started the same way at 26 km/h. How many minutes will it take the cyclist to catch up with the pedestrian?
11. Young Cyclists
Charlie and Peter will attend the Young Cyclists Meeting today. Peter is not yet able to start, so Charlie went first alone. Peter followed him in 20 minutes. How long does it take him to reach Charlie? Charlie is traveling at an average speed of 15 km/h, Pe
12. Train speed
Two guns were fired from the same place at an interval of 10 minutes and 30 seconds, but a person in a train approaching the place hears second shot 10 minutes after the first. The speed of the train (in km/hr), supposing that sound travels at 340 m/s is:
13. Two airports
Two airports are 2690 km apart. From the first airport, an airplane flies at a speed of 600 km/h; from the second, another flies at a speed of 780 km/h. When will they meet if both took off at 10:00? How far from the first airport?
14. Storm
So far, a storm has traveled 35 miles in 1/2 hour, heading straight toward the observer. If it is currently 5:00 p.m. and the storm is 105 miles away from you, at what time will the storm reach you? Explain how you solved the problem.
15. Scale of map
James travels one kilometer in 12 minutes. The route he walked for half an hour measured 5 cm on the map. Calculate how many kilometers James walked in half an hour. Find the scale of the map.
16. Fifth of the number
The fifth of the number is by 24 less than that number. What is the number?
17. Six years
In six years Jan will be twice as old as he was six years ago. How old is he?
|
# Sum of Complex Number with Conjugate
## Theorem
Let $z \in \C$ be a complex number.
Let $\overline z$ be the complex conjugate of $z$.
Let $\map \Re z$ be the real part of $z$.
Then:
$z + \overline z = 2 \, \map \Re z$
## Proof
Let $z = x + i y$.
Then:
$\ds z + \overline z$ $=$ $\ds \paren {x + i y} + \paren {x - i y}$ (Definition of Complex Conjugate)
$\ds$ $=$ $\ds 2 x$
$\ds$ $=$ $\ds 2 \, \map \Re z$ (Definition of Real Part)
$\blacksquare$
## Also defined as
This result is also reported as:
$\map \Re z = \dfrac {z + \overline z} 2$
|
# Practice Test
Q1) The price relative for the year 1998 with 1989 = 100 is 105, while the price relative for 1998 with 1993 = 100 is 140. Find the price relative for 1993 with 1989 = 100. Show Answer
Q2) The amounts spent on 5 components of a product are 30%, 20%, 20%, 15% and 15% respectively, by what per cent will the cost of the product increase ? Show Answer
Q3) Index number is a _______ measure. Show Answer
Q4) ________ values serves as the standard point of comparison. Show Answer
Q5) Index number measures relative changes in a group from Show Answer
Q7) Simple index numbers are used when weights are equal : Show Answer
Q8) Points to be kept in mind while constructing index numbers are : Show Answer
Q9) The different types of Base are : Show Answer
Q10) Beat method of averaging relatives is : Show Answer
Q11) Base period should be one of relative stability : Show Answer
Q12) The most important criteria has to be considered while constructing index numbers is : Show Answer
Q13) Price relative is defined as : Show Answer
Q15) Value relative is defined as : Show Answer
Q16) Quantity relative is defined as : Show Answer
Q17) Price relatives are pure numbers. Show Answer
Q18) The formula for index number using simple aggregative method is : Show Answer
Q19) Price index numbers calculated from relatives will differ with change in the units in which the prices are equated. Show Answer
Q20) Out of the drawback of simple average of relatives method is : Show Answer
Q21) Laspeyre's method is a type of : Show Answer
Q22) Fisher ideal price index number is ________ laspeyre's & paasche's index number Show Answer
Q23) Dorbish Bowley price index number is __________ of laspeyre's & paasche's index number.
Q24) Simple average of relative method is defined as : Show Answer
Q25) Laspeyre's price index number is defined as : Show Answer
Q26) Paasche's price index number : Show Answer
Q27) Marshal Edgeworth Price index number is defined as : Show Answer
Q28) Fisher's index no. is defined as : Show Answer
Q29) In weighted aggregative index method the average used is : Show Answer
Q30) Weighted average of relative method is defined : Show Answer
Q31) For the purpose of chain index number _________ is used Show Answer
Q32) Chain index no. is defined as : Show Answer
Q33) Link relative of the first year is taken as : Show Answer
Q34) Chain index no. of first year is taken as : Show Answer
Q35) Chain index number and fixed base index number are equal when _______ commodity/ commodities is/ are considered. Show Answer
Q36) Price index number are used to measure : Show Answer
Q37) Quantity index number are used to measure : Show Answer
Q38) Simple aggregate quantity index number is defined as : Show Answer
Q39) Fisher's ideal quantity index is : Show Answer
Q40) Paasche's quantity index number is : Show Answer
Q41) Laspeyre's quantity index number is
Q42) The simple average of relative quantity index is : Show Answer
Q43) Value index number is ________. Show Answer
Q44) Use of index number are : Show Answer
Q45) Index number depicts broad trend and not the real picture. Show Answer
Q46) For finding real GNP from GNP at current price the concept of _______ is used. Show Answer
Q47) Shifting price index = Show Answer
Q48) For combining two index numbers series with different base year the concept of _______ is used. Show Answer
Q49) Deflation value = Show Answer
Q52) If the formula is independent of the units of measurement of the factors used for finding index number then _________ test is satisfied. Show Answer
Q53) Factor reversal test is satisfied when : Show Answer
Q54) Time reversal test is satisfied when : Show Answer
Q55) Circular test is satisfied when : Show Answer
Q56) Unit test is not satisfied by : Show Answer
Q57) Laspeyre's method satisfies Time Reversal Test. Show Answer
Q58) Time Reversal Test is satisfied by : Show Answer
Q59) Fishers method satisfies : Show Answer
Q60) Laspeyre's method satisfies : Show Answer
Q61) Laspeyre, Paasche & Fishers method do not satisfies : Show Answer
Q62) Circular test is satisfies by : Show Answer
Q63) Circular tests : Show Answer
Q64) Factor reversal test is satisfied Show Answer
Q65) The price (p) is replaced by quantity (q) & the quantity (q) is replaced by price (p) then it is case of _______ test. Show Answer
Q66) When base year price & quantities are suffixed by "0" & current year by "1" then "0" is replaced by "1" & "1" is replaced by "0" incase of : Show Answer
Q67) Value index number is defined as : Show Answer
Q68) Cost of living index number is defined is : Show Answer
Q69) Cost of living index number is defined is : Show Answer
Q70) If the prices of all the commodities in the current year has increased by 1.5 times compared to the base period price, then the price index no. for the current year is : Show Answer
Q71) If the prices of all the commodities in the current year has decreased by 30% compared to the base period price, then the price index no. for the current year is : Show Answer
Q73) If the price of gold was 20% more in 2006 as compared to 2005 but 20% less than that in 2004 and 50% more than that in 2007, then find the price relative using 2005 as base year for the years 2004 and 2007. Show Answer
Q74) The cost of living index no. is 130 the eduction price index is 120 and the other items index no. is 140. The percentage of total weight that is applicant for eduction is : Show Answer
Q75) The cost of entertainment increased by 60%, the person who maintained his former lifestyle experienced a 5% increase in his cost of living. Before the change in price, the percentage of his cost of living due to entertainment is : Show Answer
Q76) An index was at 100 in 1999. It rises by 5% in 2000, falls 6% in 2001, falls 5% in 2002, rises 4% in 2003 and rises 7% in 2004. Calculate index numbers for all these years with 1999 as base year. Show Answer
Q77) Geometric mean of index number of Laspeyre and Paasche's is 229.5648, while the sum of Laspeyre and Paasche's index number is 480. Find out Laspeyre and Paasche indices. Show Answer
Q78) The index number is unit free. Is it true ? Show Answer
Q79) If 425 and 120 are respectively the price and quantity index nos. for the year 2000 satisfying the factor reversal test, find the expenditure during the year 2000, given that expenditure during the base year was Rs.80. Show Answer
Q80) The earnings of worker in 1994 was Rs.1,000 and that in 2000 was Rs.1200. if the index number for the two years were 334 and 499 respectively, what was the percentage increase/ decrease in his earnings over this period according to 1994 index number ? Show Answer
Q81) In 2001 the price of an item A increased by 25% and that of B by 50% as compared to the price in 1991. the price of C was doubled and that of D remained steady. The index number of all the four items taken together was 140 in 2001 with 1991 as the base year. If the sum of weights of all the items is 10 and if the weights of B,C and D are equal, find the weights of the items. Show Answer
Q82) If the price index no. of 2006 is 225 with base year 2000 then price have increased or an average by : Show Answer
Q83) The first application of probability was done in : Show Answer
Q84) Initially probability was a branch of Show Answer
Q85) Now probability is an integral part of Show Answer
Q86) The two divisions of probability are Show Answer
Q87) Subjective probability is based on Show Answer
Q88) Subjective probability is used in the field of Show Answer
Q89) An experiment is said to be Random experiment if Show Answer
Q90) The result of an experiment is called Show Answer
Q91) The two types of events are : Show Answer
Q92) The events which can be decomposed into two or more events is called : Show Answer
Q93) Two events A and B are said to be incompatible when : Show Answer
Q94) A and B are called exhaustive events when : Show Answer
Q95) Two events A and B are called mutually symmetric events : Show Answer
Q96) The limitation of Priori definition of probability is : Show Answer
Q97) Classical definition is used in case of tossing coin, throwing dice, etc. Show Answer
Q98) Odds in favour of an event are 5:4. Therefore the probability of that event is : Show Answer
Q99) The probability of an event is 9/16. Therefore the odds against the event are : Show Answer
Q100) A and A' are mutually exclusive events Show Answer
Q101) A is called a sure event when Show Answer
Q102) A is an impossible event when
Q103) A coin is tossed 2 times. The odds against at least 1 head is : Show Answer
Q105) A = {1, 2, 3}; B = {2, 3, 4, 5}. Therefore A - B = Show Answer
Q106) P (A or B) = ________ when A and B are mutually exclusive. Show Answer
Q107) A and B are independent events. Therefore A' and B' are also independent events Show Answer
Q108) Value assigned to Random variable is from the set of : Show Answer
Q109) Number of misprints, number of tyre bursting, etc. are example of : Show Answer
Q110) Height, Marks, Age are example of : Show Answer
Q111) If x and y are 2 random variables having expected values 7 and 5 respectively, then E (x + y) is Show Answer
Q112) The probability of getting score 5 at least once when the die is thrown thrice : Show Answer
Q113) The odds in favour of getting a king when a card is drawn from a pack : Show Answer
Q114) When two coins are tossed simultaneously, the probability of getting at most one head is : Show Answer
Q115) For a random variable x its variance is 5 then variance for ( -3x + 7) is : Show Answer
Q116) The expected value of a random variable is always : Show Answer
Q117) When 2 dice are thrown the probability of getting the sum of the score as a perfect square is Show Answer
Q118) The odds in favour of one student passing at a test are 3 : 7. The odds against another student passing at are 3 : 5. What are the odds that both fail ? Show Answer
Q119) The odds in favour of one student passing at a test are 3 : 7. The odds against another student passing at are 3 : 5. What are the odds that only the 2nd student passes ? Show Answer
Q120) A salesman is known to sell a product in 3 out of 5 attempts, while another salesman in 2 out of 5 attempts. Find the probability that no sale will be affected when they both try to sell the product. Show Answer
Q121) A salesman is known to sell a product in 3 out of 5 attempts, while another salesman in 2 out of 5 attempts. Find the probability that either of them will succeed in selling the product. Show Answer
Q122) Two different digits are selected at random from digits 1, 2, .........., 9. If the sum is odd, what is the probability that 2 is one of the digits selected ? Show Answer
Q123) A committee of 4 people is to be appointed from 3 officers of the production department, 4 officers of the purchase department, 2 officers of the sales department and 1 chartered accountant. Find the probability of forming the committee in which there must be one from each category. Show Answer
Q124) A committee of 4 people is to be appointed from 3 officers of the production department, 4 officers of the purchase department, 2 officers of the sales department and 1 chartered accountant. Find the probability of forming the committee in which it should have at least one from the purchase department. Show Answer
Q125) A committee of 4 people is to be appointed from 3 officers of the production department, 4 officers of the purchase department, 2 officers of the sales department and 1 chartered accountant. Find the probability of forming the committee where the chartered accountant must be in the committee. Show Answer
Q126) Two sets of candidates are competing for the position on the Board of Directors of a company. The probability that first and second set will win are 0.6 and 0.4 respectively. If the first set wins, the probability of introducing a new product is 0.8 and the corresponding probability if the second set win is 0.3. What is the probability that the new product will be introduced ? Show Answer
Q127) A and B choose a digit at random from 0, 1, 2,………,9 independently. Find the probability that the product of the two digits chosen is zero. Show Answer
Q128) A problem in statistics is given to two students A and B. The odds in favour of A solving the problem are 6 to 9 and against B solving the problem are 12 to 10. If A and B attempt, find the probability of the problem being solved. Show Answer
Q129) Suppose that it is 11 to 5 against a person who is now 38 years of age living till he is 73 and 5 to 3 against now 43 living till he is 78 years. Find the chance that at least one of these persons will be alive 35 years hence. Show Answer
Q130) Two cards are drawn from a pack of 52 cards at random and kept out. Them one card is drawn from the remaining 50 cards. Find the probability that it is ace. Show Answer
Q131) A committee of 4 persons is to be appointed from 7 men and 3 women, what is the probability that the committee contains exactly two women. Show Answer
Q132) A person is known to hit a target in 5 out of 8 shots, whereas another person is known to hit it in 3 out of 5 shots. Find the probability that the target is not hit at all when they both try. Show Answer
Q133) A person throws an unbiased die. If the number shown is even, he gains an amount equal to the number shown. If the number is odd, he loses an amount equal to the number shown, find his expectation. Show Answer
Q134) A service station manager sells gas at an average of $100 per day on a rainy day, $150 on a ‘dubious day’, $250 on a fair day, and $300 on a clear day. If weather bureau statistics show the probabilities of weather as follows, find his mathematical expectation :
Clear 0.50
Fair 0.30
Dubious 0.15
Rainy 0.05
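Q134 is a straight mathematical-expectation calculation: weight each day's sales by its probability and sum. The question's table lists Clear 0.50, Fair 0.30 and Dubious 0.15; the rainy-day probability is taken here as the remaining 0.05 so the distribution sums to 1 (an assumption, since that row did not survive in the text):

```python
# Q134: E[sales] = sum of (daily sales * probability of that weather).
# Rainy-day probability assumed to be 0.05 (the remainder to make the
# probabilities sum to 1).
sales = {"clear": 300, "fair": 250, "dubious": 150, "rainy": 100}
prob = {"clear": 0.50, "fair": 0.30, "dubious": 0.15, "rainy": 0.05}
expectation = sum(sales[w] * prob[w] for w in sales)
print(expectation)
```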
Q135) A die tossed twice. If it shows the same number twice, Gopal gets Rs.100 otherwise he loses Rs.5. What is his mathematical expectation ? Show Answer
Q136) A player tosses three fair coins. He wins Rs.8 if 3 heads occur, Rs.3 if 2 heads occur and Re.1 if only 1 head occurs. If the game is to fair, how much would he lose if no heads occur ? Show Answer
Q137) A person tosses two coins simultaneously. He receives Rs.8 for two heads, Rs.2 for one head and he is to pay Rs.6 for no head. Find his expectation. Show Answer
Q138) A man draws 2 balls from a bag containing 3 white and 5 black balls. If he is to receive Rs.14 for every white ball and Rs.7 fore every black ball drawn, what is his expectation ? Show Answer
Q139) 5000 tickets are sold in a lottery in which there is a first prize of Rs.10000, two second prizes of Rs.2000 each and 10 third prizes of Rs.200 each. One ticket costs Rs.2. Find the expected net gain or loss if you buy one ticket? Show Answer
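The lottery question Q139 reduces to total prize money spread over all tickets, minus the ticket price. A minimal sketch:

```python
# Q139: expected net gain from one Rs.2 ticket in a 5000-ticket lottery.
tickets = 5000
prizes = [(1, 10000), (2, 2000), (10, 200)]  # (number of prizes, amount)
expected_win = sum(n * amt for n, amt in prizes) / tickets
net = expected_win - 2  # subtract the ticket cost
print(round(net, 2))
```

Total prize money is Rs.16000, so the expected winnings per ticket are Rs.3.20, a net expected gain of Rs.1.20 after the Rs.2 cost.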
Q140) A player tosses 3 coins. He wins Rs.20 if all the three coins show head, Rs.10 if 2 coins show heads and Rs.5 if one head appears. If no head appears, he loses Rs.4. find his expectation. Show Answer
Q141) The probability that there is at least one error in an accounts statement prepared by A is 0.2 and for B and C, they are 0.25 and 0.4 respectively.
A, B and C prepared 10, 16 and 20 statements respectively. Find the expected number of correct statements in all. Show Answer
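Q141 asks for the expected number of correct statements: each preparer contributes (statements prepared) × (probability of no error). A short sketch:

```python
# Q141: expected correct statements = count * P(no error) for each preparer.
work = [(10, 0.2), (16, 0.25), (20, 0.4)]  # (statements, P(at least one error))
expected_correct = sum(n * (1 - p) for n, p in work)
print(expected_correct)
```

That is 10 x 0.8 + 16 x 0.75 + 20 x 0.6, i.e. 8 + 12 + 12 = 32 correct statements expected.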
Q142) The probability that a man fishing at a particular place will catch 1, 2, 3 and 4 fish are 0.1, 0.3, 0.2 and 0.4 respectively. What is the expected number of fish caught ? Show Answer
Q143) A petrol pump proprietor sells an average of Rs.80000 petrol on rainy days and an average of Rs.95000 on clear days. Statistics from the Meteorological Department show that the probability is 0.76 for clear weather and 0.24 for rainy weather on coming Monday. Find the expected value of petrol sale. Show Answer
Q144) A wheel of fortune at an amusement park is divided into five colours: red, blue, green, yellow, brown. The probabilities of the spinner landing in any of these colours are 3/10, 3/10, 2/10, 1/10, 1/10 respectively. A player can win Rs.5 if it stops on red, Rs.3 if it stops on blue, Rs.4 if it stops on green and lose Rs.2 if it stops on yellow and Re.1 if it stops on brown. Meena wants to try her luck. What is her mathematical expectation ? Show Answer
Q145) If the probability that a man wins a prize of Rs.10 is 3/5 and the probability that he wins nothing is 2/5, find the mathematical expectation ? Show Answer
Q146) Calculate the expected value of, the sum of X, the score when two dice are rolled. Show Answer
Q147) An investment consultant predicts that the odds against the price of a certain stock going up are 2 : 1 and the odds in favour of the price remaining the same are 1 : 3. what is the probability that the price of the stock will go down ? Show Answer
Q148) What is the probability that a leap year selected at random will contain either 53 Thursday's or 53 Friday's ? Show Answer
Q149) From a well shuffled pack of 52 cards, two are drawn at random. What are the odds against they being a king and an ace ? Show Answer
Q150) A chartered accountant applies for a job in two firms X and Y, he estimates that the probability of his being selected in firm X is 0.7 and being selected at, Y is 0.5 and the probability of at least one of his applications being rejected is 0.6. What is the probability that he will be selected in one of the firms ? Show Answer
Q151) There are 4 letters and 4 addressed envelopes. Find the chance that all letters are not dispatched in the right envelopes. Show Answer
Q152) A certain player X is known to win with probability 0.3 if the track is fast and 0.4 if the track is slow. For Monday, there is a 0.7 probability of a fast track and 0.3 probability of a slow track. What is the probability that the player X will win on Monday ? Show Answer
Q153) A pair of dice is thrown and sum of the numbers on the two dice is observed to be 6. what is the probability that there is 2 on one of the dice ? Show Answer
Q154) A die is so biased that it is twice as likely to show an even number as an odd number when thrown. It is thrown twice. What is the probability that sum of the two numbers shown is even ? Show Answer
Q155) An urn contains 10 counters on which digits 0, 1, 2, .......... 9 are written one digit on each counter, no digit being repeated.
A chooses a counter first and then B chooses another counter (the first is not replaced). Find the probability that the product of the two digits chosen is zero. Show Answer
Q156) There are four hotels in a certain town. If 3 men check into hotels in a day, what is the probability that each checks into a different hotel ? Show Answer
Q157) Two different digits are selected at a random from the digits 1,2, .......... 9. If the sum is odd what is the probability that 2 is one of the digits selected ? Show Answer
Q158) It is known that 40% of the students in a certain college are girls and 50% of the students are above the median height. If 2/3 of the boys are above the median height, what is the probability that a randomly selected student who is below the median height is a girl ? Show Answer
Q159) Four digits 1, 2, 4 and 6 are selected at random to form a four digit number. What is the probability that the number so formed, would be divisible by 4 ? (Repetition is allowed) Show Answer
Q160) Anita and Binita stand in a line with 7 other people. What is the probability that there are 4 persons between them ? Show Answer
Q161) Out of 80 students in a class, 30 passed in Mathematics, 20 passed in Statistics and 10 passed in both. One student is selected at random. Find the probability that he has passed at least in one of the subjects. Show Answer
Q162) Out of 80 students in a class, 30 passed in Mathematics, 20 passed in Statistics and 10 passed in both. One student is selected at random. Find the probability that he has passed none of the above subjects Show Answer
Q163) Out of 80 students in a class, 30 passed in Mathematics, 20 passed in Statistics and 10 passed in both. One student is selected at random. Find the probability that he has passed only in Maths. Show Answer
Q164) Out of 80 students in a class, 30 passed in Mathematics, 20 passed in Statistics and 10 passed in both. One student is selected at random. Find the probability that he has passed only in Statistics. Show Answer
Q165) Out of 80 students in a class, 30 passed in Mathematics, 20 passed in Statistics and 10 passed in both. One student is selected at random. Find the probability that he has passed exactly in one of the subjects. Show Answer
Q166) Which of the following is an example of time series problem? (i) Estimating number of hotel rooms booking in next 6 months. (ii) Estimating the total sales in next 3 years of an insurance company. (iii) Estimating the number of calls for the next one week. Show Answer
Q167) Which of the following is not an example of a Time Series Model? Show Answer
Q168) Which of the following cannot be a component for a Time Series Plot? Show Answer
Q169) Which of the following is relatively easier to estimate in time series modeling? Show Answer
Q170) Smoothing parameter close to one gives more weight or influence to recent observations over the forecast. Show Answer
Q171) Sum of weights in exponential smoothing is _____. Show Answer
Q172) The last period's forecast was 70 and demand was 60. What is the simple exponential smoothing forecast with alpha of 0.4 for the next period. Show Answer
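Q172 is a direct application of the simple exponential smoothing update, F(t+1) = F(t) + alpha * (D(t) - F(t)):

```python
# Q172: simple exponential smoothing forecast for the next period.
alpha = 0.4
last_forecast, demand = 70, 60
next_forecast = last_forecast + alpha * (demand - last_forecast)
print(next_forecast)
```

With a forecast of 70 and demand of 60, the error is -10, so the next forecast is 70 - 4 = 66.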
Q173) Which of the following is not a necessary condition for weakly stationary time series? Show Answer
Q174) Which of the following is not a technique used in smoothing time series? Show Answer
Q175) If the demand is 100 during October 2017, 200 in November 2017, 300 in December 2017, 400 in January 2018. What is the 3 - month simple moving average for February 2018? Show Answer
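For Q175, a 3-month simple moving average forecast for February 2018 is just the mean of the three most recent months:

```python
# Q175: 3-month simple moving average forecast for Feb 2018.
demand = [100, 200, 300, 400]  # Oct 2017 .. Jan 2018
forecast_feb = sum(demand[-3:]) / 3  # average of Nov, Dec, Jan
print(forecast_feb)
```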
Q176) Suppose, you are a data scientist at Analytics Vidhya. And you observed the views on the articles increases during the month of Jan-Mar. Whereas the views during Nov-Dec decreases. Does the above statement represent seasonality? Show Answer
Q177) Second differencing in time series can help to eliminate which trend? Show Answer
Q178) In a time-series forecasting problem, if the seasonal indices for quarters 1, 2, and 3 are 0.80, 0.90, and 0.95 respectively. What can you say about the seasonal index of quarter 4? Show Answer
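Q178 relies on the fact that multiplicative seasonal indices average to 1 over a full cycle, so the four quarterly indices must sum to 4:

```python
# Q178: quarterly seasonal indices in a multiplicative model sum to 4.
q1, q2, q3 = 0.80, 0.90, 0.95
q4 = 4 - (q1 + q2 + q3)
print(round(q4, 2))
```

Since the first three quarters sum to 2.65, the fourth-quarter index is 1.35.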
Q179) An orderly set of data arranged in accordance with their time of occurrence is called: Show Answer
Q180) A time series consists of: Show Answer
Q181) Secular trend can be measured by: Show Answer
Q182) The secular trend is measured by the method of semi-averages when: Show Answer
Q183) Increase in the number of patients in the hospital due to heat stroke is: Show Answer
Q184) In time series seasonal variations can occur within a period of: Show Answer
Q185) Wheat crops badly damaged on account of rains is: Show Answer
Q186) The method of moving average is used to find the: Show Answer
Q187) A complete cycle passes through: Show Answer
Q188) Most frequently used mathematical model of a time series is: Show Answer
Q189) A time series consists of: Show Answer
Q190) In a straight line equation Y = a + bX; a is the: Show Answer
Q191) In a straight line equation Y = a + bX; b is the: Show Answer
Q192) Value of b in the trend line Y = a + bX is: Show Answer
Q193) In semi averages method, we decide the data into: Show Answer
Q194) In fitting a straight line, the value of slope b remain unchanged with the change of: Show Answer
Q195) Moving average method is used for measurement of trend when: Show Answer
Q196) When the trend is of exponential type, the moving averages are to be computed by using: Show Answer
Q197) Indicate which of the following an example of seasonal variations is: Show Answer
Q198) The most commonly used mathematical method for measuring the trend is: Show Answer
Q199) A trend is the better fitted trend for which the sum of squares of residuals is: Show Answer
Q200) Decomposition of time series is called: Show Answer
Q201) The fire in a factory is an example of: Show Answer
Q202) Increased demand of admission in the subject of computer in Pakistan is: Show Answer
Q203) Damages due to floods, droughts, strikes fires and political disturbances are: Show Answer
Q204) The general pattern of increase or decrease in economics or social phenomena is shown by: Show Answer
Q205) In moving average method, we cannot find the trend values of some: Show Answer
Q206) The best fitting trend is one which the sum of squares of residuals is: Show Answer
Q207) In fitting of a straight line, the value of slope remains unchanged by change of: Show Answer
Q209) In fitting of a straight line, _____ = 0 Show Answer
Q210) Semi-averages method is used for measurement of trend when: Show Answer
Q212) The rise and fall of a time series over periods longer than one year is called: Show Answer
Q213) A time series has: Show Answer
Q214) The multiplicative time series model is: Show Answer
Q216) The difference between the actual value of the time series and the forecasted value is called: Show Answer
Q217) A pattern that is repeated throughout a time series and has a recurrence period of at most one year is called: Show Answer
Q218) The straight line is fitted to the time series when the movements in the time series are: Show Answer
Q219) If an annual time series consisting of even number of years is coded, then each coded interval is equal to: Show Answer
Q220) For odd number of year, formula to code the values of X by taking origin at centre is: Show Answer
Q221) For even number of years when origin is in the centre and the unit of X being one year, then X can be coded as: Show Answer
Q222) For even number of years when origin is in the centre and the unit of X being half year, then X can be coded as: Show Answer
Q223) In semi averages method, if the number of values is odd then we drop: Show Answer
Q224) The trend values in freehand curve method are obtained by: Show Answer
Q225) A series of numerical figures which show the relative position is called Show Answer
Q226) A ratio or an average of ratios expressed as a percentage is called Show Answer
Q227) The index number is a special type of Show Answer
Q228) Index nos. show _____ changes rather than absolute amounts of change. Show Answer
Q229) An index time series is a list of _____ numbers for two or more periods of time. Show Answer
Q230) _____ is a point of reference in comparing various data describing individual behaviour. Show Answer
Q231) Index number for the base period is always taken as Show Answer
Q232) The value at the base time period serves as the standard point of comparison. Show Answer
Q233) The choice of suitable base period is at best temporary solution. Show Answer
Q234) The ratio of price of single commodity in a given period to its price in another period is called the Show Answer
Q235) The purpose determines the type of index no. to use Show Answer
Q236) Identify False Statements Show Answer
Q237) When the prices are decreased by 30% then the index number is now Show Answer
Q238) Index numbers are often constructed from the Show Answer
Q239) Index numbers are used in Show Answer
Q240) _____ play a very important part in the construction of index nos. Show Answer
Q241) The best average for constructing an index numbers is Show Answer
Q242) Theoretically, GM is the best average in the construction of index nos but in practice, mostly the AM is used Show Answer
Q243) The _____ makes index numbers time-reversible. Show Answer
Q244) The _____ of group indices gives the General Index. Show Answer
Q245) Price relative is equal to Show Answer
Q246) Index number is equal to Show Answer
Q247) Price-relative is expressed in term of Show Answer
Q248) One big advantage of _____ is that they are pure numbers. Show Answer
Q249) We use price index numbers Show Answer
Q250) To measure the economic strength _____ are widely used. Show Answer
Q252) For computing a price index _____ method is used. Show Answer
Q253) The cost of living Index (CLI) is always: Show Answer
Q254) Cost of living Index (C.L.I.) numbers are also used to find real wages by the process of Show Answer
Q255) Cost of living index number (C.L.I) is expressed in terms of: Show Answer
Q256) Factor reversal test is: Show Answer
Q257) Simple aggregate of quantities is a type of Show Answer
Q258) For constructing consumer price index _____ is used: Show Answer
Q259) The aggregate index formula using base period quantities is called Show Answer
Q260) Laspeyre's index is based on Show Answer
Q261) Weighted average of price relatives index using base year quantities as weights is called Show Answer
Q262) Paasche's index is based on Show Answer
Q263) Paasche's index number is expressed in terms of Show Answer
Q264) Marshall-Edgeworth index formula after interchange of p and q is expressed in terms of: Show Answer
Q265) The result obtained by Marshall-Edgeworth method is closest to Show Answer
Q266) Bowley's Index number is expressed in terms of: Show Answer
Q267) Fisher's ideal index is Show Answer
Q268) Fisher's ideal index number is expressed in terms of: Show Answer
Q269) The index used to measure changes in total money value is called Show Answer
Q270) The value index is equal to: Show Answer
Q271) The value Index is expressed in terms of Show Answer
Q272) The Index No. of Weighted Average of Price Relatives is represented by Show Answer
Q273) Link Relative Index Numbers is expressed for period n is Show Answer
Q274) Chain index is equal to Show Answer
Q275) The formula for conversion for deflated value is Show Answer
Q276) The formula should be independent of the unit in which or for which price and quantities are quoted in Show Answer
Q277) Weighted GM of relative formula satisfies _____ test. Show Answer
Q278) Time reversal test is satisfied when Show Answer
Q279) When the product of price index and the quantity index is equal to the corresponding value index then which of the following test is satisfied Show Answer
Q280) The factor Reversal test is represented symbolically as: Show Answer
Q281) Factor Reversal test according to Fisher is Show Answer
Q282) _____ is concerned with the measurement of price changes over a period of years, when it is desirable to shift the base. Show Answer
Q283) The test of shifting the base is called Show Answer
Q284) The Circular test is an extension of Show Answer
Q285) In a circular test the condition must be satisfied Show Answer
Q286) The simple Aggregative formula and weighted aggregative formula satisfy the Show Answer
Q287) Circular test is met by Show Answer
Q288) Circular test is not met by Show Answer
Q289) Laspeyre's method and Paasche's method do not satisfy Show Answer
Q290) Fisher's Ideal formula for calculating index nos. satisfies the _____ tests. Show Answer
Q291) Laspeyre's or Paasche's or the Fisher's ideal index do not satisfy Show Answer
Q292) The time reversal test is satisfied by _____ index number. Show Answer
Q293) The factor reversal test is satisfied by Show Answer
Q294) Time reversal test is satisfied by following index number formula is Show Answer
Q295) Identify False Statements Show Answer
Q296) "Neither Laspeyre's formula nor Paasche's formula obeys" Show Answer
Q297) The quantity index number using Fisher's formula satisfies: Show Answer
Q298) Time Reversal test is satisfied by: Show Answer
Q299) Factor reversal test is satisfied by Show Answer
Q300) Fisher's ideal formula does not satisfy _____ test. Show Answer
Q301) Circular test is satisfied by Show Answer
Q302) Both the time and factor reversal tests are satisfied by Show Answer
Q303) A good index number is one that satisfies Show Answer
Q304) Identify the correct statement. Show Answer
Q305) Which of the following is not correct? Show Answer
Q306) Identify False Statements Show Answer
Q307) Identify False Statements Show Answer
Q308) Identify False Statements Show Answer
Q309) Which of the following statement is true? Show Answer
Q310) If with a rise of 10% in prices the wages are increased by 20%, the real wages increase by Show Answer
Q311) If the prices of all commodities in a place have increased 1.25 times in comparison to the base period, the index number of prices of that place is now Show Answer
Q312) If the index number of prices at a place in 1994 is 250 with 1984 as base year, then the prices have increased on average Show Answer
Q313) If the prices of all commodities in a place have decreased 35% over the base period prices, then the index number of prices of that place is now Show Answer
Q314) Consumer Price Index number for the year 1957 was 313 with 1940 as the base year. If the average monthly wages in 1957 of the workers in a factory be Rs. 160, their real wages is Show Answer
Q315) The price relative for the year 1986 with reference to 1985 is 120. The percentage by which the price increased in 1986 over 1985 is Show Answer
Q316) With the year 1960 as the base the Cost of Living Index in 1972 stood at 250. X was getting a monthly salary of Rs. 500 in 1960 and Rs. 750 in 1972. In 1972 to maintain his standard of living as in the year 1960, the extra allowance received by X Show Answer
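Cost-of-living questions like Q316 scale the base-year salary by the CLI to find the salary needed to keep the same standard of living; the extra allowance is the shortfall. A sketch with the figures from the question:

```python
# Q316: extra allowance needed in 1972 to match the 1960 standard of living.
cli_1972 = 250                      # Cost of Living Index, 1960 = 100
salary_1960, salary_1972 = 500, 750 # Rs. per month
needed_1972 = salary_1960 * cli_1972 / 100  # salary equivalent to 1960's
extra_allowance = needed_1972 - salary_1972
print(extra_allowance)
```

He needs Rs. 1250 in 1972 to match Rs. 500 of 1960, so the extra allowance is Rs. 500.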
Q317) In 1980, the net monthly income of the employee was Rs. 800 p.m. The consumer price index number was 160 in 1980. It rises to 200 in 1984. If he has to be rightly compensated, the additional DA to be paid to the employee is Show Answer
Q318) The index number of wholesale prices is 152 for August 1999 compared to August 1998. During the year there is a net increase in prices of wholesale commodities to the extent of Show Answer
Q319) The price level of a country in a certain year has increased 25% over the base period. The index number is Show Answer
Q320) The index number of prices at a place in 1998 is 355 with 1991 as base. This means Show Answer
Q321) In 2005, the Price Index is 286 with base 1995. Then by how much have prices increased in 2005 with base 1995? Show Answer
Q322) If the prices of all commodities in a place have increased 1.25 times in comparison to the base period prices, then the index number of prices for the place is now Show Answer
Q323) The wholesale price index number of agricultural commodities in a given region at a given date is 280. The percentage rise in prices of agricultural commodities over the base period is: Show Answer
Q324) If now the prices of all the commodities in a place have been decreased by 85% over the base period prices, then the index number of prices is now (index number of prices of base period = 100) Show Answer
Q325) If the 1970 index with base 1965 is 200 and the 1965 index with base 1960 is 150, the index 1970 on base 1960 will be: Show Answer
Q326) In 1996 the average price of a commodity was 20% more than in 1995 but 20% less than in 1994, and moreover it was 50% more than in 1997. Reduce the data to price relatives using 1995 as base (1995 price relative = 100). Show Answer
Q327) The price of a commodity increases from Rs. 5 per unit in 1990 to Rs. 7.50 per unit in 1995 and the quantity consumed decreases from 120 units in 1990 to 90 units in 1995. The price and quantity in 1995 are 150% and 75% respectively of the corresponding price and quantity in 1990. Therefore, the product of the price ratio and quantity ratio is: Show Answer
Q328) Consumer price index number goes up from 110 to 200 and the salary of a worker is also raised from Rs. 325 to Rs. 500. Therefore, in real terms he has no gain. To maintain his previous standard of living the additional amount he should get is: Show Answer
Q329) The prices of a commodity in the year 1975 and 1980 were 25 and 30 respectively taking 1980 as base year the price relative is: Show Answer
Q330) In 1976 the average price of a commodity was 20% more than that in 1975 but 20% less than that in 1974, and moreover it was 50% more than that in 1977. Reduce the data to price relatives using 1975 as base year (1975 price relative = 100). Show Answer
Q331) The prices of a commodity in the years 1975 and 1980 were 25 and 30 respectively, taking 1975 as base the price relative is: Show Answer
Q332) During a certain period the cost of living index number goes up from 110 to 200 and the salary of a worker is also raised from Rs. 325 to Rs. 500. The worker does not really gain. Then the real wages decreased by: Show Answer
Q333) Net monthly salary of an employee was Rs. 3000 in 1980. The consumer price index number in 1985 is 250 with 1980 as base year. If he has to be rightly compensated, the additional DA to be paid to the employee is: Show Answer
Q334) Net monthly income of an employee was Rs. 800 in 1980. The consumer price Index number was 160 in 1980. It rises to 200 in 1984. If he has to be rightly compensated, the additional DA to be paid to the employee is: Show Answer
Q335) The total value of retained imports into India in 1960 was Rs. 71.5 million per month. The corresponding total for 1967 was Rs. 87.6 million per month. The index of volume of retained imports in 1967 compared with 1960 (= 100) was 62.0. The price index for retained imports for 1967 over 1960 as base is Show Answer
Q336) During a certain period the CLI goes up from 110 to 200 and the salary of a worker is also raised from 330 to 500. Then the loss in real terms is Show Answer
Q337) If the prices of all commodities in a place have increased 1.25 times in comparison to the base period prices, then the index number of prices for the place is now Show Answer
Q338) The average price of certain commodities in 1980 was Rs. 60 and the average price of the same commodities in 1982 was Rs. 120. Therefore, the increase in 1982 on the basis of 1980 was 100%. Should the decrease in 1980 using 1982 as base then also have been 100%? Comment on the above statement. Show Answer
Q339) The index number of prices at a place in 1998 is 355 with 1991 as base. This means Show Answer
Q340) If now the prices of all the commodities in a place have been decreased by 85% over the base period prices, then the index number of prices for the place is now (index number of prices of base period = 100) Show Answer
Q341) If with a rise of 10% in prices the salaries are increased by 20%, the real salary increases by Show Answer
Q342) The price level of a country in a certain year has increased 20% over the base period. The Index number is _____ Show Answer
Q343) If with rise of 10% in prices the wages are increased by 20%. Find the percentage of real wage increase Show Answer
Q344) If the price index for the year, say 1960 be 110.3 and the price index for the year, say 1950 be 98.4. Then the purchasing power of money (Rupees) of 1950 will be of 1960 is Show Answer
Q345) When the cost of tobacco was increased by 50%, a certain hardened smoker, who maintained his former scale of consumption, said that the rise had increased his cost of living by 5%. Before the change in price, the percentage of his cost of living due to buying tobacco was Show Answer
Q346) The consumer price index for April 1985 was 125. The food price index was 120 and other items index was 135. The percentage of the total weight of the food grains in the index is - Show Answer
Q347) Bowley's Index Number = 150, Laspeyer's Index = 180, then Paasche's Index Number is - Show Answer
Q348) Consumer Price index number for the year 1957 was 313 with 1940 as the base year. The Average Monthly wages in 1957 of the workers in to factory be Rs. 160/- their real wages is: Show Answer
Q349) Which is called an ideal index number Show Answer
Q350) In semi averages method, if the number of values is odd then we drop: Show Answer
Q351) Which is not satisfied by Fisher's ideal index number? Show Answer
Q352) The cost of living index numbers in year 2015 and 2018 were 97.5 and 115 respectively. The salary of worker in 2015 was Rs.19,500. How much additional salary was required for him in 2018 to maintain the some standard of living as in 2015? Show Answer
Q353) Trend in semi averages is: Show Answer
Q354) The most commonly used mathematical method for finding secular trend is: Show Answer
Q355) When sale of cold drink increases in summer and decrease in winters is an example of? Show Answer
Q356) Seasonal variations takes place within: Show Answer
Q357) The index number of prices at place in the year 2008 is 225 with 2004 as the base then there is: Show Answer
Q358) Indexed Numbers are expressed as Show Answer
Q359) In Laspeyr's index number is 110 and Fisher's ideal index number is 109. Then Paasche's index number is Show Answer
Q360) The cost of living index is always Show Answer
Q361) When the prices for quantities consumed of all commodities are changing in the same ratio, then the index numbers due to Laspeyre's and Paasche's will be. Show Answer
Q362) If in an additive model O refers to original data as 875, T refers to trend 700, S refers to seasonal variations -200, C refers to cyclical variations 75 then the value of 1 which refers to irregular variation is: Show Answer
Q363) The consumer price index goes up from 120 to 180 when salary goes up from 240 to 540, what is the increase in real terms? Show Answer
Q364) The weighted averaged of price relatives of commodities, when the wights are equal to the value of commodities in the current year, yields_________Index number. Show Answer
Q366) The three index numbers, namely, Laspeyre, Paasche and Fisher do not satisfy________test. Show Answer
Q367) Geometric mean method used in which index number to find it out Show Answer
Q368) Which test is known for shift base index no. Show Answer
Q369) Laspeyre and Paasche do not satisfy- Show Answer
Q370) Laspeyre's index number is based on? Show Answer
Q371) price relative is: Show Answer
Q372) which one of the following is not appropriate for calculation of index number? Show Answer
Q373) Fisher's index number does not satisfy Show Answer
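Many of these questions reduce to a handful of standard index-number formulas. The sketch below (plain Python; the helper names and the worked numbers are illustrative, not taken from an answer key) applies the usual textbook definitions to a few of them:

```python
# Standard index-number formulas behind several of the questions above.

def laspeyres(p0, p1, q0):
    # Laspeyres: current-year prices weighted by BASE-year quantities (Q370).
    return 100 * sum(p * q for p, q in zip(p1, q0)) / sum(p * q for p, q in zip(p0, q0))

def paasche(p0, p1, q1):
    # Paasche: current-year prices weighted by CURRENT-year quantities.
    return 100 * sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p0, q1))

def fisher(L, P):
    # Fisher's ideal index: geometric mean of Laspeyres and Paasche (Q349, Q359).
    return (L * P) ** 0.5

def bowley(L, P):
    # Bowley's index: arithmetic mean of Laspeyres and Paasche (Q347).
    return (L + P) / 2

def real_wage(money_wage, cli):
    # Real wage: money wage deflated by the cost-of-living index (Q336, Q348).
    return 100 * money_wage / cli

# Q347: Bowley = 150, Laspeyres = 180  =>  Paasche = 2*150 - 180.
print(2 * 150 - 180)                  # 120

# Q359: Fisher = 109, Laspeyres = 110  =>  Paasche = 109^2 / 110.
print(round(109 ** 2 / 110, 2))       # 108.01

# Q348: wage Rs. 160 with CPI 313 (1940 = 100).
print(round(real_wage(160, 313), 2))  # 51.12
```

The same helpers make the relation between the three classical indices explicit: Fisher is the geometric mean, Bowley the arithmetic mean, of Laspeyres and Paasche.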
## Note on Transverse Nature of Light, Polarisation by Selective Absorption and by Refraction
#### Transverse Nature of Light
Experimental Verification
The figure shows an unpolarized light beam incident on Polaroid P. The intensity of the light transmitted through P is reduced to half the intensity of the incident light. When another Polaroid A is set with its axis parallel to the axis of P, the light transmitted through A shows no change in intensity. When the axis of A is gradually rotated away from the axis of P, the intensity of light transmitted through A gradually decreases and finally becomes zero when the two axes are crossed. Then no light is obtained through A.
This experiment shows that when unpolarised light is incident on the Polaroid P, the transmitted light has its electric vector vibrating parallel to the transmission axis of P, while the electric vector vibrating perpendicular to the transmission axis is absorbed. So the light transmitted through P has its electric vector parallel to the axis of P only, and the Polaroid P has restricted the vibrations of light to one direction. The phenomenon of restricting the vibration of light to one direction is called polarization of light. Polaroid P is called the polarizer.
When the pass-axis of the Polaroid A is parallel to the direction of the vibrations of the plane polarized light, this polarized light is transmitted as such by the Polaroid A.
When the axis of the Polaroid A is perpendicular to the direction of vibrations of the plane polarized light, the vibrations of the plane polarized light are completely blocked. So, there is no light transmitted through the Polaroid A and hence the intensity of light transmitted is zero. The Polaroid A identifies the polarization of light and hence it is known as the analyzer.
#### Polarization by Selective Absorption
A polarized light can be obtained by using a material which transmits waves whose electric fields vibrate in a plane parallel to a certain direction of orientation and absorbs waves whose electric fields vibrate in all other directions. Such materials are called polaroids and are fabricated in thin sheets of long chain hydrocarbons. In an ideal polarizer, the light with vector E parallel to transmission axis is transmitted and light with E perpendicular to the transmission axis is completely absorbed as shown in the figure.
The figure shows an unpolarised light beam incident on a polarizer. Since the axis of the polariser is oriented vertically, the light transmitted through it is polarized vertically. An analyser is set with its axis making an angle $$\theta$$ with the polarizer axis, and it intercepts the polarized beam. If E0 is the electric field vector of the transmitted beam, the component of E0 perpendicular to the analyser axis is completely absorbed and the component parallel to the analyser axis is allowed to pass through. This component is E0 cos θ, so the light is polarized again along the axis of the analyser. Since the intensity of the transmitted beam varies as the square of its amplitude, the intensity of the polarized beam transmitted through the analyser is
$$I = I_m\cos^2\theta$$
where Im is the intensity of polarized light incident on the analyser. This law is called Malus' law and applies to any two polarizing materials whose transmission axes are at an angle θ to each other. According to this law, the intensity is maximum if the transmission axes are parallel and zero if the transmission axes are perpendicular to each other.
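Malus' law is easy to check numerically. A minimal sketch (the function name is illustrative):

```python
import math

def malus_intensity(i_m, theta_deg):
    """Malus' law: intensity transmitted through an analyser whose axis
    makes an angle theta with the polariser axis, I = Im * cos^2(theta)."""
    return i_m * math.cos(math.radians(theta_deg)) ** 2

# Parallel axes transmit everything; crossed axes transmit nothing.
print(malus_intensity(1.0, 0))               # 1.0
print(round(malus_intensity(1.0, 60), 6))    # 0.25
print(round(malus_intensity(1.0, 90), 12))   # 0.0
```

The 60° case shows that the intensity, not the amplitude, follows the cos² dependence: the amplitude drops to half, the intensity to a quarter.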
#### Polarisation by Refraction
When an unpolarised light beam is incident on a transparent material, such as water or glass, the reflected and refracted beams are partially polarized. Each electric field vector is resolved into two components: one parallel to the surface, represented by dots, and the other represented by an arrow, the two being perpendicular to each other and to the direction of propagation.
When the angle of incidence is increased, the polarization in the reflected beam increases, and at a particular angle of incidence, θp, the reflected beam is completely plane polarized with its electric field vector parallel to the reflecting surface. The angle of incidence at which complete polarization occurs is called the polarizing angle θp. The refracted wave is, however, only partially polarized, as shown in the figure. Brewster found that at the polarizing angle θp, the angle between the reflected and refracted beams is 90°. From the figure we have,
\begin{align*} \theta _p + 90^o + \theta &= 180^o\\ \text {or,} \: \theta _p + \theta &= 90^o \\ \text {or,} \: \theta &= 90^o - \theta _p \\\end{align*}
Using Snell’s law of refraction, the refractive index of the material is
\begin{align*} \\ \mu &= \frac {\sin i}{\sin r} = \frac {\sin \theta _p}{\sin \theta } \\ \text {But}\: \sin \theta = \sin (90-\theta _p) = \cos \theta _p, \text {and} \\ \mu &= \frac {\sin \theta _p}{\cos \theta _p} = \tan \theta _p \\ \therefore \mu &= \tan \theta _p \\ \end{align*}
This expression is called Brewster's law, and θp is sometimes called Brewster's angle. Brewster's law states that the tangent of the polarizing angle is equal to the refractive index of the material. Since µ varies with the wavelength of light, the polarizing angle θp is also a function of wavelength.
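Brewster's law can be checked numerically against Snell's law. A small sketch, assuming a glass-like refractive index of 1.5:

```python
import math

def brewster_angle(mu):
    # Brewster's law: tan(theta_p) = mu, so theta_p = arctan(mu).
    return math.degrees(math.atan(mu))

# Glass (mu ~ 1.5):
theta_p = brewster_angle(1.5)
print(round(theta_p, 2))  # 56.31

# Snell's law gives the refraction angle; at Brewster incidence the
# reflected and refracted rays are perpendicular, so theta_p + theta_r = 90.
theta_r = math.degrees(math.asin(math.sin(math.radians(theta_p)) / 1.5))
print(round(theta_p + theta_r, 6))  # 90.0
```

The second print confirms the geometric condition from which the law was derived: sin θr = cos θp whenever tan θp = µ.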
The phenomenon of restricting the vibration of light in one direction is called polarization of light.
A polarized light can be obtained by using a material which transmits waves whose electric fields vibrate in a plane parallel to a certain direction of orientation and absorbs waves whose electric fields vibrate in all other directions.
Brewster’s law states that the tangent of polarizing angle is equal to the refractive index of the material.
# A solvable polynomial with no factors?
1. Aug 1, 2015
### Bill_Nye_Fan
I seem to have encountered a situation in which I have a quartic which has solutions, but no factors.
The polynomial is: $x^4 - 8x^2 + 224x - 160 = 0$
I attempted to find the factors of this quartic in the following manner:
$f(x) = x^4 - 8x^2 + 224x - 160$
$f(1) = (1)^4 - 8(1)^2 + 224(1) - 160$
$f(1) = 60$
$f(2) = (2)^4 - 8(2)^2 + 224(2) - 160$
$f(2) = 272$
$...$
$f(8) = (8)^4 - 8(8)^2 + 224(8) - 160$
$f(8) = 5216$
So basically, after I got to 8 (negative versions included) I gave up on this method and decided to attempt to factorise it on my calculator. However, my calculator refuses to break this equation down into its factors. However, when I solve the equation it gives me two real answers: $x = -6.705505492, x = 0.7321472234$. Also, the calculator refuses to give these answers in standard form, only decimal form. Normally when I solve an equation in standard form, it gives me the answer in fractional, surd, or even trigonometric form.
This is really confusing me, so could someone explain how it is possible for an equation to have real answers, yet apparently no factors?
Last edited: Aug 1, 2015
2. Aug 1, 2015
### symbolipoint
First guess, is either the solutions are irrational, or they are complex with imaginary components, or both. If any real solutions, there are some numerical ways of approximately finding them.
3. Aug 1, 2015
### SteamKing
Staff Emeritus
Wolfram Alpha gives one positive real and one negative real root along with a pair of complex conjugate roots.
If you want WA to print the fully symbolic expression for the real or imaginary roots using surds and what not, press the button labelled [Exact Form] next to the real or complex roots. I did it, and I wished I hadn't dunnit.
4. Aug 1, 2015
### Bill_Nye_Fan
Ah, thanks for that. I knew the answers had to be expressible in some sort of fractional form. Still, it's rather odd my calculator refused to express this specific answer in fractional + surd form... I've seen it spit out some pretty god damn outrageous things in the past (imagine the exact form of this, except with sines, cosines and tangents thrown in... I think there may have even been a few logarithms too). I wonder why it didn't want to do it for this one? Oh well.
5. Aug 3, 2015
### Mentallic
Keep in mind that this only applies for quartics and below. 5th degree polynomials and above may have real solutions but in some cases cannot be expressible with any finite mix of fractions, surds, logs, trigs, powers, exponentials, etc.
6. Aug 7, 2015
### HallsofIvy
Staff Emeritus
Don't think that "has no easy factors" is the same as "has no factors"! The "fundamental theorem of algebra" says that every polynomial can be factored into linear factors with complex coefficients. If you don't want to deal with complex numbers, it is still true, and follows from the "fundamental theorem of algebra", that any polynomial, with real coefficients, can be factored into a product of linear and quadratic factors with real coefficients.
If it is true that x = -6.705505492 and x = 0.7321472234 are two zeros of the polynomial, then it follows that (x+ 6.705505492) and (x- 0.7321472234) are factors of that polynomial. If you divide the given polynomial by x+ 6.705505492 you should get a third degree polynomial as quotient and 0 remainder. And if you divide that third degree polynomial by x- 0.7321472234, you should get a quadratic polynomial as quotient and 0 remainder.
You can solve that quadratic using the quadratic formula to see if there are two more real roots, so two more linear factors, or if has non-real roots so it is "irreducible" over the real numbers.
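HallsofIvy's procedure — peel off the real roots, then solve the remaining quadratic — can be carried out numerically. A sketch in plain Python (Newton's method plus synthetic division; the helper names are mine):

```python
import cmath

def horner(coeffs, x):
    """Evaluate the polynomial at x by Horner's scheme. The intermediate
    accumulators are exactly the quotient coefficients of synthetic
    division by (x - x0), with the last value being the remainder."""
    acc, q = 0.0, []
    for c in coeffs:
        acc = acc * x + c
        q.append(acc)
    return q[-1], q[:-1]

def newton_root(coeffs, x0, tol=1e-12):
    """Newton's method for a real root, starting from x0."""
    n = len(coeffs) - 1
    dcoeffs = [(n - i) * c for i, c in enumerate(coeffs[:-1])]
    x = x0
    for _ in range(200):
        fx, _ = horner(coeffs, x)
        dfx, _ = horner(dcoeffs, x)
        step = fx / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

p = [1.0, 0.0, -8.0, 224.0, -160.0]  # x^4 - 8x^2 + 224x - 160
r1 = newton_root(p, -7.0)            # real root near -6.7055
_, cubic = horner(p, r1)             # deflate once -> cubic quotient
r2 = newton_root(cubic, 1.0)         # real root near 0.7321
_, quad = horner(cubic, r2)          # deflate again -> quadratic
a, b, c = quad
disc = cmath.sqrt(b * b - 4 * a * c)
r3, r4 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)
print(round(r1, 6), round(r2, 6))    # -6.705505 0.732147
print(r3, r4)                        # the complex-conjugate pair
```

The quadratic's discriminant comes out negative, which is exactly why the factorisation over the reals stops at one quadratic factor: the remaining two roots are complex conjugates.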
# Double Chooz θ13 measurement via total neutron capture detection
## Abstract
Neutrinos were assumed to be massless particles until the discovery of the neutrino oscillation process. This phenomenon indicates that the neutrinos have non-zero masses and the mass eigenstates (ν1, ν2, ν3) are mixtures of their flavour eigenstates (νe, νμ, ντ). The oscillations between different flavour eigenstates are described by three mixing angles (θ12, θ23, θ13), two differences of the squared neutrino masses of the ν2/ν1 and ν3/ν1 pairs and a charge conjugation parity symmetry violating phase δCP. The Double Chooz experiment, located near the Chooz Electricité de France reactors, measures the oscillation parameter θ13 using reactor neutrinos. Here, the Double Chooz collaboration reports the measurement of the mixing angle θ13 with the new total neutron capture detection technique from the full data set, yielding sin2(2θ13) = 0.105 ± 0.014. This measurement exploits the multidetector configuration, the isoflux baseline and data recorded when the reactors were switched off. In addition to the neutrino mixing angle measurement, Double Chooz provides a precise measurement of the reactor neutrino flux, given by the mean cross-section per fission 〈σf〉 = (5.71 ± 0.06) × 10−43 cm2 per fission, and reports an empirical model of the distortion in the reactor neutrino spectrum.
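For orientation, the disappearance probability that underlies this measurement can be sketched in the standard two-flavour approximation. This is illustrative only, not the collaboration's analysis code; the ~1.05 km far-detector baseline and the 4 MeV energy are representative numbers:

```python
import math

def survival_probability(sin2_2theta13, dm2_ee_eV2, baseline_km, energy_MeV):
    """Two-flavour reactor antineutrino survival probability
    P = 1 - sin^2(2*theta13) * sin^2(1.267 * dm2 * L / E),
    with dm2 in eV^2, L converted to metres and E in MeV."""
    arg = 1.267 * dm2_ee_eV2 * (baseline_km * 1000.0) / energy_MeV
    return 1.0 - sin2_2theta13 * math.sin(arg) ** 2

# Representative inputs: the measured sin^2(2*theta13) = 0.105, a
# |dm2_ee| of ~2.484e-3 eV^2, a ~1.05 km far-detector baseline
# and a typical 4 MeV antineutrino energy.
p_surv = survival_probability(0.105, 2.484e-3, 1.05, 4.0)
print(round(p_surv, 4))  # a few per cent deficit below 1
```

With sin²(2θ13) = 0 the probability is identically 1, which is why the deficit observed at the far detector relative to the near detector measures the mixing angle.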
## Data availability
All data that support the plots within this paper and other findings of this study are available from the corresponding authors (C.B. and A.C.) upon reasonable request.
## Code availability
Most of the data analysis code is ROOT-based and custom developed by the DC collaboration and is available from the corresponding authors (C.B. and A.C.) upon reasonable request.
## References
1. Super-Kamiokande Collaboration, Fukuda, Y. et al. Evidence for oscillation of atmospheric neutrinos. Phys. Rev. Lett. 81, 1562–1567 (1998).
2. SNO Collaboration, Ahmad, Q. R. et al. Direct evidence for neutrino flavor transformation from neutral current interactions in the Sudbury Neutrino Observatory. Phys. Rev. Lett. 89, 011301 (2002).
3. KamLAND Collaboration, Eguchi, K. et al. First results from KamLAND: evidence for reactor anti-neutrino disappearance. Phys. Rev. Lett. 90, 021802 (2003).
4. CHOOZ Collaboration, Apollonio, M. et al. Limits on neutrino oscillations from the CHOOZ experiment. Phys. Lett. B 466, 415–430 (1999).
5. Palo Verde Collaboration, Boehm, F. et al. Final results from the Palo Verde neutrino oscillation experiment. Phys. Rev. D 64, 112001 (2001).
6. Double Chooz Collaboration, Abe, Y. et al. Indication of reactor $${\overline{\nu }}_{e}$$ disappearance in the Double Chooz experiment. Phys. Rev. Lett. 108, 131801 (2012).
7. T2K Collaboration, Abe, Y. et al. Indication of electron neutrino appearance from an accelerator-produced off-axis muon neutrino beam. Phys. Rev. Lett. 107, 041801 (2011).
8. MINOS Collaboration, Adamson, P. et al. Improved search for muon-neutrino to electron-neutrino oscillations in MINOS. Phys. Rev. Lett. 107, 181802 (2011).
9. Daya Bay Collaboration, An, F. et al. Observation of electron-antineutrino disappearance at Daya Bay. Phys. Rev. Lett. 108, 171803 (2012).
10. RENO Collaboration, Ahn, J. et al. Observation of reactor electron antineutrino disappearance in the RENO experiment. Phys. Rev. Lett. 108, 191802 (2012).
11. Particle Data Group, Tanabashi, M. et al. Review of particle physics. Phys. Rev. D 98, 030001 (2018).
12. Double Chooz Collaboration, Abe, Y. et al. Improved measurements of the neutrino mixing angle θ 13 with the Double Chooz detector. J. High Energy Phys. 1410, 086 (2014); erratum 1502, 074 (2015).
13. Double Chooz Collaboration, Abe, Y. et al. Measurement of θ 13 in Double Chooz using neutron captures on hydrogen with novel background rejection techniques. J. High Energy Phys. 1601, 163 (2016).
14. Daya Bay Collaboration, An, F. et al. Measurement of electron antineutrino oscillation based on 1230 days of operation of the Daya Bay experiment. Phys. Rev. D 95, 072006 (2017).
15. Daya Bay Collaboration, Adey, D. et al. Measurement of the electron antineutrino oscillation with 1958 days of operation at Daya Bay. Phys. Rev. Lett. 121, 241805 (2018).
16. RENO Collaboration, Bak, G. et al. Measurement of reactor antineutrino oscillation amplitude and frequency at RENO. Phys. Rev. Lett. 121, 201801 (2018).
17. T2K Collaboration, Abe, K. et al. Search for CP violation in neutrino and antineutrino oscillations by the T2K experiment with 2.2 × 1021 protons on target. Phys. Rev. Lett. 121, 171802 (2018).
18. NOvA Collaboration, Adamson, P. et al. New constraints on oscillation parameters from ν e appearance and ν μ disappearance in the NOvA experiment. Phys. Rev. D 98, 032012 (2018).
19. MINOS Collaboration, Adamson, P. et al. Combined analysis of ν μ disappearance and ν μν e appearance in MINOS using accelerator and atmospheric neutrinos. Phys. Rev. Lett. 112, 191801 (2014).
20. Double Chooz Collaboration, Abe, Y. et al. Reactor electron antineutrino disappearance in the Double Chooz experiment. Phys. Rev. D 86, 052008 (2012).
21. Double Chooz Collaboration, Abe, Y. et al. Direct measurement of backgrounds using reactor-off data in Double Chooz. Phys. Rev. D 87, 011102 (2013).
22. Sugiyama, H. et al. Systematic limits on sin22θ 13 in neutrino oscillation experiments with multi-reactors. Phys. Rev. D 73, 053008 (2006).
23. Vogel, P. et al. Angular distribution of neutron inverse beta decay, anti-neutrino(e) + p → e+ + n. Phys. Rev. D 60, 053003 (1999).
24. Double Chooz Collaboration, de Kerret, H. et al. Yields and production rates of cosmogenic 9Li and 8He measured with the Double Chooz near and far detectors. J. High Energy Phys. 1811, 053 (2018).
25. Double Chooz Collaboration, Abe, Y. et al. Muon capture on light isotopes measured with the Double Chooz detector. Phys. Rev. C 93, 054608 (2016).
26. Parke, S. What is Δm ee 2? Phys. Rev. D 93, 053008 (2016).
27. Bugey4 Collaboration, Declais, Y. et al. Study of reactor anti-neutrino interaction with proton at Bugey nuclear power plant. Phys. Lett. B 338, 383–389 (1994).
28. Hayes, A. & Vogel, P. Reactor neutrino spectra. Annu. Rev. Nucl. Part. Sci. 66, 219–244 (2016).
29. Daya Bay Collaboration, An, F. et al. Improved measurement of the reactor antineutrino flux and spectrum at Daya Bay. Chin. Phys. C 41, 013002 (2017).
30. Gariazzo, S. et al. Updated global 3+1 analysis of short-baseline neutrino oscillations. J. High Energy Phys. 1706, 135 (2017).
31. Huber, P. On the determination of anti-neutrino spectra from nuclear reactors. Phys. Rev. C 84, 024617 (2011); erratum 85, 029901 (2012).
32. Mueller, T. et al. Improved predictions of reactor antineutrino spectra. Phys. Rev. C 83, 054615 (2011).
33. Schreckenbach, K. et al. Determination of the antineutrino spectrum from 235U thermal neutron fission products up to 9.5 MeV. Phys. Lett. 160B, 325–330 (1985).
34. von Feilitzsch, F. et al. Experimental beta-spectra from 239Pu and 235U thermal neutron fission products and their correlated antineutrino spectra. Phys. Lett. 118B, 162–166 (1982).
35. Hahn, A. et al. Anti-neutrino spectra from 241Pu and 239Pu thermal neutron fission products. Phys. Lett. B 218, 365–368 (1989).
36. Haag, N. et al. Experimental determination of the antineutrino spectrum of the fission products of 238U. Phys. Rev. Lett. 112, 122501 (2014).
37. Mention, G. et al. The reactor antineutrino anomaly. Phys. Rev. D 83, 073006 (2011).
38. Double Chooz Collaboration, Abe, Y. et al. Background-independent measurement of θ 13 in Double Chooz. Phys. Lett. B 735, 51–56 (2014).
39. Giunti, C. et al. Reactor fuel fraction information on the antineutrino anomaly. J. High Energy Phys. 1710, 143 (2017).
40. Dentler, M. et al. Updated global analysis of neutrino oscillations in the presence of eV-scale sterile neutrinos. J. High Energy Phys. 1808, 010 (2018).
41. Bugey3 Collaboration, Declais, Y. et al. Search for neutrino oscillations at 15, 40 and 95 meters from a nuclear power reactor at Bugey. Nucl. Phys. B 434, 503–534 (1995).
42. Cabrera A. Double Chooz III: First Results indico.lal.in2p3.fr/event/2454 (2014).
43. Mention., G. et al. Reactor antineutrino shoulder explained by energy scale nonlinearities? Phys. Lett. B 773, 307–312 (2017).
44. Berryman, J. et al. Particle physics origin of the 5 MeV bump in the reactor antineutrino spectrum? Phys. Rev. D 99, 055045 (2019).
45. NEOS Collaboration, Ko. Y. et al. Sterile neutrino search at the NEOS experiment. Phys. Rev. Lett. 118, 121802 (2017).
46. Zacek V. et al. Evidence for a 5 MeV spectral deviation in the Goesgen reactor neutrino oscillation experiment. Preprint at https://arxiv.org/abs/1807.01810 (2018).
47. Kopeikin, V. et al. Reactor as a source of antineutrinos: thermal fission energy. Phys. At. Nucl. 67, 1892–1899 (2004).
48. Daya Bay Collaboration, An, F. et al. Evolution of the reactor antineutrino flux and spectrum at Daya Bay. Phys. Rev. Lett. 118, 251801 (2017).
## Acknowledgements
This publication is dedicated to our colleague Hervé de Kerret. We thank the company EDF (‘Electricity of France’), the European fund FEDER, the Région Grand Est (formerly known as the Région Champagne-Ardenne), the Département des Ardennes and the Communauté de Communes Ardenne Rives de Meuse. We acknowledge the support of the CEA, CNRS/IN2P3, the computer centre CC-IN2P3 and LabEx UnivEarthS in France; the Max Planck Gesellschaft, the Deutsche Forschungsgemeinschaft DFG, the Transregional Collaborative Research Centre TR27, the excellence cluster ‘Origin and Structure of the Universe’ and the Maier-Leibnitz-Laboratorium Garching in Germany; the Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT) and the Japan Society for the Promotion of Science (JSPS) in Japan; the Ministerio de Economía, Industria y Competitividad (SEIDI-MINECO) under grants FPA2016-77347-C2-1-P and MdM-2015-0509 in Spain; the Department of Energy and the National Science Foundation in the United States; the Russian Academy of Science, the Kurchatov Institute and the Russian Foundation for Basic Research (RFBR) in Russia and the Brazilian Ministry of Science, Technology and Innovation (MCTI), the Financiadora de Estudos e Projetos (FINEP), the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), the São Paulo Research Foundation (FAPESP) and the Brazilian Network for High Energy Physics (RENAFAE) in Brazil.
## Author information
### Contributions
The DC detectors were designed, constructed and commissioned by the DC collaboration. Simulations and data analyses were performed by the DC members as well. All authors contributed to the work presented in this manuscript, which was subjected to an internal collaboration-wide review process.
### Corresponding authors
Correspondence to C. Buck or A. Cabrera.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Physics thanks Juan Jose Gomez Cadenas and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Extended data
### Extended Data Fig. 1 Reactor Flux Systematic Uncertainties on the Signal Normalisation.
The 1σ uncertainty stands for 68% frequentist probability. Both rate and shape flux uncertainties are treated via covariance matrices as predicted by the data-driven reactor flux model31,32,36 used by Double Chooz. The Bugey4 experiment provides an independent rate constraint via its 〈σf〉 and therefore extra precision via the cancellation of the common spectrum terms. The uncertainty coming from the reactor–detector baselines is negligible (< 0.01%). The unknown inter-reactor correlations are treated as fully correlated for the reactor power (Pth) and the fission fractions (αf) in the single detector (SD) case and as uncorrelated for any multi detector (MD) configuration in general (combined uncertainty of Pth and αf is 0.83%). These assumptions are made to minimise the θ13 sensitivity, so as to be conservative. In the Double Chooz case with two reactors the uncertainties on Pth and αf are reduced by about a factor of $$\sqrt{2}$$. Only the uncorrelated terms are relevant for the specific MD case in Double Chooz (ND/FD-I and ND/FD-II).
### Extended Data Fig. 2 Detection Systematic Uncertainties.
The 1σ uncertainty stands for 68% frequentist probability. The central column shows the uncertainties on the signal normalisation for the single detector (SD) case. The multi detector (MD) case in the column on the right shows the uncertainty on the ratio of the signal rates (FD/ND). The total systematics is dominated by the uncertainty of the number of protons for IBD interactions (mainly the GC). This is to be re-measured upon future detector dismantling. The total neutron capture selection reduces systematics as compared to the element dependent detection, since it is not sensitive to the knowledge of the Gd/H fraction of neutron captures. Boundary systematics rely on the modelling of spill-in/out events at volume interfaces with the simulation are assumed fully correlated between detectors. The selection systematics rely on an IBD data-driven method, thus inclusively accounting and averaging over selection and energy scale (stability, uniformity and linearity) variations. The vetoes play a negligible role as they were optimised to maximise the selection efficiency while adding a negligible systematic.
### Extended Data Fig. 3 sin22θ13 Measurement Uncertainties Breakdown.
The match between the θ13 uncertainty from data (0.0139) and the predicted sensitivity (0.0141) allows for a MC uncertainty breakdown. The 1σ uncertainty stands for 68% frequentist probability for the total contribution. In the central column the fractional uncertainties of the different systematics (x) are given. They were calculated from the sensitivity assuming just one systematic contribution in addition to the statistical uncertainty. The statistical part was then subtracted in quadrature. The total is larger than the square root of the sum of the individual squared uncertainties because of correlations. The difference corresponds to a (0.0065)2 term. The column on the right shows the total uncertainty when the corresponding single systematics is removed. The impact of background and in particular of the energy scale on the sensitivity is higher than one might expect from the values given in the central column. Again, this is due to the correlations.
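The quadrature bookkeeping described in this caption can be sketched as follows; the numbers here are hypothetical, not the values from the table:

```python
import math

def one_systematic(total_with_syst, stat_only):
    """Isolate a single systematic's fractional contribution by subtracting
    the statistical uncertainty in quadrature, as described in the caption
    (hypothetical numbers, for illustration only)."""
    return math.sqrt(total_with_syst ** 2 - stat_only ** 2)

# Hypothetical: a stat-only sensitivity of 0.0100 that grows to 0.0110
# when one systematic is switched on in the fit.
print(round(one_systematic(0.0110, 0.0100), 5))  # 0.00458
```

Because of the correlations noted in the caption, summing such single-systematic terms in quadrature underestimates the total; the shortfall is absorbed by an extra correlation term.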
### Extended Data Fig. 4 Total Neutron Capture Selection Criteria & Background Rejection.
The complete total neutron capture selection definition is here detailed, including selection criteria and background vetoes. The type of background rejected by each cut is also highlighted.
### Extended Data Fig. 5 TnC Selection Artificial Neural Network Definition.
The near detector (ND, left) and far detector (FD, right) Artificial Neural Network (ANN) cut definitions are shown. Each plot shows full data (black solid) and accidental-background-only (blue solid) curves. The remaining data upon background subtraction (black points) represents correlated events, which are signal IBD-like. The 1σ uncertainty stands for 68% frequentist probability: statistics only (error bar). The IBD MC (solid red), with no backgrounds, is contrasted against the data. Sizeable differences between the FD and the ND ANN output are dominated by the different signal-to-background contamination of each detector. The ND has ~ 10 × better signal to accidental background. The FD has lower statistics. The MC exhibits excellent agreement with data across the entire dynamic range for both detectors. A similar ANN definition had been demonstrated for FD-I data13. The per-detector ANN cut was optimised to reduce the FD background and to match a slight prompt spectral distortion in both detectors (not shown explicitly). The latter is key to ensure an unbiased rate+shape θ13 measurement. Such a distortion is known to arise from the Δrprompt–delay variable being slightly dependent on the prompt energy. Hence, the indicated ANN cuts are slightly different for the ND (0.86) and the FD (0.85). This causes a 1.3% difference in rate normalisation, corroborated with data to a few per mille precision.
### Extended Data Fig. 6 TnC Efficiency & Background Rejection.
The evolution of the total neutron capture (TnC) selection is illustrated in terms of IBD selection efficiency (solid lines), total background rejection (dotted lines) and accidental background rejection (dashed lines). The estimation of the total background rejection uses 17 days of 0-reactor data. The average singles rate per detector is ~ 10 s−1. The first criterion corresponds to a time of [0.5,800]μs as a 'loose' coincidence between a [1.0,20.0] MeV prompt and a [1.3,10.0] MeV delayed trigger. The rates are 2291 day−1 (far detector) and 2375 day−1 (near detector), which imply a rejection factor of ~ 375 relative to singles. These numbers provide an absolute scale for all the others given below. The Δrprompt–delay ≤1.2 m condition yields some important reduction. However, major accidental background rejection is only obtained by the ANN, with a ~ 400 rejection factor. After the ANN, the challenging correlated cosmogenic background dominates the total background rate, as expected due to the shallow overburden. The far detector is better shielded. Extra rejection uses the cosmogenic vetoes. The overall rejection factors are ~ 193 (far detector) and ~ 34 (near detector) relative to the loose coincidence.
### Extended Data Fig. 7 Scrutiny of the θ13 Measurement.
The nominal θ13 measurement (top) can be decomposed into a) the rate-only and shape-only contributions, b) FD-I (no ND) and FD-II (isoflux) contributions. A measurement without marginalising over $$| \Delta {m}_{ee}^{2}|$$ as (2.484 ± 0.036) × 10−3eV2 is also shown. These numbers demonstrate that the nominal θ13 measurement is dominated by the rate-only information (systematics limited) of the best FD-II isoflux data sample. Furthermore, releasing the constraint on $$| \Delta {m}_{ee}^{2}|$$ does not impact the measured central value of θ13. Two alternative θ13 measurements are also shown for comparison: a) the Data-to-Data and b) the Reactor Rate Modulation (RRM). Both are expected to be immune to the reactor model spectrum distortion while excellent agreement is found (details in text). Last, the FD-I+FD-II single detector θ13 measurements are also shown using two uncertainty prescriptions. The new data-driven prescription uses an increased 4σ reactor model shape uncertainty. The standard reactor model prescription is also shown, indicating a bias on the result. The agreement of the single and multi detector, with the more conservative uncertainty and the much better χ2/d.f., suggests the new prescription provides a better treatment of the data. The previous FD-I only SD θ13 measurement12 (blue) is shown for reference. Bugey4 must be used in all single detector measurements to protect the rate normalisation. The 1σ uncertainty stands for 68% frequentist probability: both statistics (red error bar) and total (black error bar, including systematics).
### Extended Data Fig. 8 Shape-Only Reactor Spectral Distortion.
The data to prediction spectral ratio for the latest Double Chooz near detector (black), Daya Bay29 (blue), RENO10 (red) and NEOS45 (green) results are shown, exhibiting a common dominant pattern predominantly characterised by the 5 MeV excess. Small differences across experiments are still possible but unresolved so far. Bugey341 (not shown) is the only experiment known not to reproduce this structure; this remains an open issue. The RENO and NEOS normalisations have been modified relative to the publications to ensure the shape-only condition (average R = 1) is met. The 1σ uncertainty stands for 68% frequentist probability: both statistics (error bars) and the common reactor model prediction shape-only uncertainty (grey shaded). The shape-only uncertainty is significantly smaller than the dominant rate-only uncertainties. Since the same reactor model prediction is used, this uncertainty is expected to remain a representative guideline for all experiments. The 5 MeV excess is compensated by a deficit in the [1.5,4.0] MeV region for all experiments due to the shape-only condition. Good agreement is found between the Double Chooz and Daya Bay data throughout the entire energy range. The non-trivial match among different experiments suggests that most detector effects and part of the reactor effects are accurately reproduced by the simulation, thus cancelling in R. This implies that inaccuracies in the common reactor prediction model are expected to dominate the observed distortion, consistent with the fact that all the experiments use the same prediction strategy.
### Extended Data Fig. 9 Near Detector 〈σf〉 Uncertainty Breakdown.
The 1σ uncertainty stands for 68% frequentist probability. With a total uncertainty of about 1%, the mean cross section per fission measured with the near detector (ND) is the most precise measurement to date. The total uncertainty is dominated by the uncertainties on the proton number and on the reactor thermal power. The proton-number uncertainty may still improve during the dismantling of the DC detectors, while the reactor thermal power uncertainty is expected to be irreducible.
## Supplementary information
### Supplementary Information
Supplementary text.
The Double Chooz Collaboration. Double Chooz θ13 measurement via total neutron capture detection. Nat. Phys. 16, 558–564 (2020). https://doi.org/10.1038/s41567-020-0831-y
|
# How do you identify conic sections?
Sep 28, 2014
General form of the conic equation
$A {x}^{2} + B x y + C {y}^{2} + D x + E y + F = 0$
The coefficients $A$ and $C$ are needed to identify the conic sections without having to complete the square.
$A$ and $C$ cannot both be $0$ when making this determination.
Parabola $\to A \cdot C = 0$
Circle $\to A = C$
Ellipse $\to A \cdot C > 0$ and $A \ne C$
Hyperbola $\to A \cdot C < 0$
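These rules are easy to mechanize. The sketch below (not part of the original answer) assumes there is no $xy$ term, i.e. $B = 0$, and that $A$ and $C$ are not both zero:

```python
def classify_conic(A, C):
    """Classify A*x^2 + C*y^2 + D*x + E*y + F = 0 from A and C alone.

    Follows the rules above; assumes no xy term (B = 0) and that
    A and C are not both zero.
    """
    if A * C == 0:
        return "parabola"   # exactly one squared term survives
    if A == C:
        return "circle"
    if A * C > 0:
        return "ellipse"    # same sign, different magnitudes
    return "hyperbola"      # opposite signs

print(classify_conic(1, 0))   # parabola
print(classify_conic(2, 2))   # circle
print(classify_conic(1, 4))   # ellipse
print(classify_conic(1, -1))  # hyperbola
```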
|
# Linear Algebra Examples
Find the Direction Angle of the Vector
Apply the direction angle formula $\theta = \tan^{-1}\left(\frac{b}{a}\right)$, where $a$ and $b$ are the horizontal and vertical components of the vector.
Solve for $\theta$ by evaluating the inverse tangent, adjusting for the quadrant of the vector.
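The worked values from the original example were lost in extraction, so here is a generic sketch of the same computation. `math.atan2` handles the quadrant adjustment that a bare $\tan^{-1}(b/a)$ misses:

```python
import math

def direction_angle(a, b):
    """Direction angle of the vector <a, b>, in degrees,
    measured counter-clockwise from the positive x-axis,
    normalized to [0, 360)."""
    return math.degrees(math.atan2(b, a)) % 360

print(direction_angle(1, 1))   # 45.0
print(direction_angle(-3, 4))  # ~126.87 (second quadrant)
```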
|
# Nim function for take-a-prime game
The Nim function for take-a-prime game is a math sequence with interesting patterns. We define this sequence $(u_n)$ recursively. For all $n$ in $\mathbb{N}$ (the set of non-negative integers), $u_n$ is the lowest number in $\mathbb{N}$ such that for all prime numbers $p$ (with $n-p \geq 0$), $u_n \neq u_{n-p}$.
To understand this definition, we detail how to compute the first terms. For $n=0$, no prime number satisfies $n-p \geq 0$, so $u_n$ is simply the lowest number in $\mathbb{N}$ and $u_0=0$. The same holds for $n=1$, giving $u_1=0$.
For $n=2$, the only prime number $p$ which verifies $n-p \geq 0$ is $p=2$. Then, the condition means that we have to find the lowest number in $\mathbb{N}$ which is different from $u_{n-p} = u_{2-2} = u_0 = 0$. We finally have $u_2=1$.
For $n=3$, $u_3$ has to differ from $u_{3-2}=0$ and $u_{3-3}=0$, so $u_3=1$.
For $n=4$, $u_4$ has to differ from $u_{4-2}=1$ and $u_{4-3}=0$, so $u_4=2$.
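The computation spelled out above can be transcribed directly from the definition. This is a minimal Python sketch (the post's own implementation, on GitHub, is in C++):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def nim_sequence(n_terms):
    """u_n = least value differing from u_{n-p} for every prime p <= n."""
    primes = primes_up_to(max(n_terms, 2))
    u = []
    for n in range(n_terms):
        forbidden = {u[n - p] for p in primes if p <= n}
        v = 0
        while v in forbidden:  # mex: minimum excluded value
            v += 1
        u.append(v)
    return u

print(nim_sequence(9))  # [0, 0, 1, 1, 2, 2, 3, 3, 4]
```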
For now, the sequence doesn’t seem very strange. But if we look at the first 100 terms from $u_0$ to $u_{99}$, we get (I skipped lines to reveal some patterns of the sequence):
0, 0, 1, 1, 2, 2, 3, 3, 4,
0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7,
0, 4, 1, 5, 2, 6, 3, 4, 7,
0, 0, 1, 1, 2, 2, 3, 3,
4, 8, 5, 7, 6, 8, 9,
0, 4, 1, 5, 2, 6,
0, 4, 1, 5, 2, 6, 3,
4, 7, 5, 8, 4, 10, 5, 7, 6, 8,
4, 7, 5, 8, 6, 10, 9, 7,
4, 8, 5, 10, 6,
0, 4, 1, 5, 2, 6,
0, 4, 1, 5, 2, 6, 3, 4, 7.
We can show that the recursive definition of the sequence forces it to adopt patterns like “0, 4, 1, 5, 2, 6”.
To push further computationally, we write a program to obtain a few million terms of the sequence (see the outputs and C++ code on my GitHub). We can make two interesting conjectures:
• First, the proportions of 0, 1, 2, etc. in the sequence (between 0 and an integer N) seem to tend towards constant proportions (as N goes to infinity), as we can see in the following plot.
Legend: stacked plot of proportions to obtain values from 0 to 11 as the length of the sequence increases. The proportions are stacked from 0 to 11: 0 is on the bottom (in dark orange) and 11 is on the top (in pink).
• Next, the sequence seems to take only values between 0 and 11. Indeed, looking at the index of the first occurrence of each value, we conjecture that no new value (above 11) appears after index 156.
| value | index of first occurrence |
|---|---|
| 0 | 0 |
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
| 5 | 19 |
| 6 | 21 |
| 7 | 23 |
| 8 | 43 |
| 9 | 48 |
| 10 | 67 |
| 11 | 156 |
From a mathematical perspective, the problem seems really difficult to solve! I went to the encyclopedia of integer sequences (OEIS) and found the sequence and its name: Nim function for take-a-prime game.
A common method to go further is to generalize the problem, that is, to study the sequence (a Nim function) for each subset of $\mathbb{N}$. Some cases are easy to solve (take $\mathbb{N}$ itself as the subset…) and others seem very chaotic (the set of prime numbers, for example). Insights appear in the dissertation of Achim Flammenkamp (see here, “Long periods in subtraction games”), where many subsets of $\mathbb{N}$ are considered. But I was not able to find more information related to “my” conjectures.
Finally, we can observe that if we add the number 1 to the set of prime numbers, we get a really simple pattern of “0, 1, 2, 3” (and it is easy to prove). On the contrary, if we remove 2 from the set of prime numbers, we still get a sequence with strange patterns (beginning with “0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 0, 3, 4, 1, 4, 3”)!
Written on November 11, 2014
|
# Application of the Uniform limit theorem
Calculus Level 2
For $n\in \mathbb{N}$, let $f_n:[0, \pi]\to \mathbb{R}$ be defined by
$f_n(x):= \sin^n (x)$
Is the sequence $\{f_n\}_{n=1}^{\infty}$ uniformly convergent?
Hint: is the limit function continuous?
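A quick numerical check of the hint (not part of the problem): the pointwise limit is $0$ on $[0,\pi]\setminus\{\pi/2\}$ and $1$ at $\pi/2$, which is discontinuous, and the sup-norm error never decays because points near $\pi/2$ keep $\sin^n x$ close to $1$:

```python
import math

def pointwise_limit(x):
    # sin(x)**n -> 1 at x = pi/2 and -> 0 elsewhere on [0, pi]
    return 1.0 if x == math.pi / 2 else 0.0

def sup_error(n, samples=10000):
    """Grid estimate of sup over [0, pi] of |sin(x)**n - limit(x)|."""
    xs = [math.pi * k / (samples - 1) for k in range(samples)]
    return max(abs(math.sin(x) ** n - pointwise_limit(x)) for x in xs)

# The error stays essentially 1 for every n: no uniform convergence.
print([round(sup_error(n), 2) for n in (1, 10, 100)])
```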
|
## Section: New Results
### Data Structures and Robust Geometric Computation
#### A probabilistic approach to reducing the algebraic complexity of computing Delaunay triangulations
Participant : Jean-Daniel Boissonnat.
In collaboration with Ramsay Dyer (Johann Bernoulli Institute, University of Groningen, Netherlands) and Arijit Ghosh (Max-Planck-Institut für Informatik, Saarbrücken, Germany).
Computing Delaunay triangulations in ${ℝ}^{d}$ involves evaluating the so-called in_sphere predicate that determines if a point $x$ lies inside, on or outside the sphere circumscribing $d+1$ points ${p}_{0},...,{p}_{d}$. This predicate reduces to evaluating the sign of a multivariate polynomial of degree $d+2$ in the coordinates of the points $x,{p}_{0},...,{p}_{d}$. Despite much progress on exact geometric computing, the fact that the degree of the polynomial increases with $d$ makes the evaluation of the sign of such a polynomial problematic except in very low dimensions. In this paper, we propose a new approach that is based on the witness complex, a weak form of the Delaunay complex introduced by Carlsson and de Silva. The witness complex $\mathrm{Wit}\left(L,W\right)$ is defined from two sets $L$ and $W$ in some metric space $X$: a finite set of points $L$ on which the complex is built, and a set $W$ of witnesses that serves as an approximation of $X$. A fundamental result of de Silva states that $\mathrm{Wit}\left(L,W\right)=\mathrm{Del}\left(L\right)$ if $W=X={ℝ}^{d}$. In [25] , [41] , we give conditions on $L$ that ensure that the witness complex and the Delaunay triangulation coincide when $W$ is a finite set, and we introduce a new perturbation scheme to compute a perturbed set ${L}^{\text{'}}$ close to $L$ such that $\mathrm{Del}\left({L}^{\text{'}}\right)=\mathrm{Wit}\left({L}^{\text{'}},W\right)$. Our perturbation algorithm is a geometric application of the Moser-Tardos constructive proof of the Lovász local lemma. The only numerical operations we use are (squared) distance comparisons (i.e., predicates of degree 2). The time-complexity of the algorithm is sublinear in $|W|$. Interestingly, although the algorithm does not compute any measure of simplex quality, a lower bound on the thickness of the output simplices can be guaranteed.
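For concreteness (this example is not from the paper), the $d=2$ instance of the in_sphere predicate is the classical in_circle test: the sign of a determinant whose entries make it a degree-4 polynomial in the input coordinates, matching the $d+2$ degree bound above:

```python
def in_circle(p0, p1, p2, x):
    """Sign of the 2D in_circle predicate: positive iff x lies strictly
    inside the circle through p0, p1, p2 (given in counter-clockwise
    order), zero iff x lies on it, negative iff outside."""
    m = []
    for (px, py) in (p0, p1, p2):
        dx, dy = px - x[0], py - x[1]
        m.append((dx, dy, dx * dx + dy * dy))
    (a, b, c), (d, e, f), (g, h, i) = m
    # 3x3 determinant by cofactor expansion along the first row
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Unit circle through (1,0), (0,1), (-1,0); the origin is inside.
print(in_circle((1, 0), (0, 1), (-1, 0), (0, 0)) > 0)   # True
print(in_circle((1, 0), (0, 1), (-1, 0), (2, 0)) < 0)   # True
```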
#### Smoothed complexity of convex hulls
Participants : Marc Glisse, Rémy Thomasse.
In collaboration with O. Devillers (VEGAS Project-team) and X. Goaoc (Université Marne-la-Vallée)
We establish an upper bound on the smoothed complexity of convex hulls in ${ℝ}^{d}$ under uniform Euclidean (${\ell }^{2}$) noise. Specifically, let $\left\{{p}_{1}^{*},{p}_{2}^{*},...,{p}_{n}^{*}\right\}$ be an arbitrary set of $n$ points in the unit ball in ${ℝ}^{d}$ and let ${p}_{i}={p}_{i}^{*}+{x}_{i}$, where ${x}_{1},{x}_{2},...,{x}_{n}$ are chosen independently from the unit ball of radius $\delta$. We show that the expected complexity, measured as the number of faces of all dimensions, of the convex hull of $\left\{{p}_{1},{p}_{2},...,{p}_{n}\right\}$ is $O\left({n}^{2-\frac{4}{d+1}}{\left(1+1/\delta \right)}^{d-1}\right)$; the magnitude $\delta$ of the noise may vary with $n$. For $d=2$ this bound improves to $O\left({n}^{\frac{2}{3}}\left(1+{\delta }^{-\frac{2}{3}}\right)\right)$.
#### Realization Spaces of Arrangements of Convex Bodies
Participant : Alfredo Hubard.
In collaboration with M. Dobbins (PosTech, South Korea) and A. Holmsen (KAIST, South Korea)
In [23] , we introduce combinatorial types of arrangements of convex bodies, extending order types of point sets to arrangements of convex bodies, and study their realization spaces. Our main results witness a trade-off between the combinatorial complexity of the bodies and the topological complexity of their realization space. On one hand, we show that every combinatorial type can be realized by an arrangement of convex bodies and (under mild assumptions) its realization space is contractible. On the other hand, we prove a universality theorem that says that the restriction of the realization space to arrangements of convex polygons with a bounded number of vertices can have the homotopy type of any primary semialgebraic set.
#### Limits of order types
Participant : Alfredo Hubard.
In collaboration with X. Goaoc (Institut G. Monge), R. de Joannis de Verclos (CNRS-INPG), J-S. Sereni (LORIA), and J. Volec (ETH)
The notion of limits of dense graphs was invented, among other reasons, to attack problems in extremal graph theory. It is straightforward to define limits of order types in analogy with limits of graphs, and in [24] we examine how to adapt to this setting two approaches developed to study limits of dense graphs. We first consider flag algebras, which were used to open various questions on graphs to mechanical solving via semidefinite programming. We define flag algebras of order types, and use them to obtain, via the semidefinite method, new lower bounds on the density of 5- or 6-tuples in convex position in arbitrary point sets, as well as some inequalities expressing the difficulty of sampling order types uniformly. We next consider graphons, a representation of limits of dense graphs that enable their study by continuous probabilistic or analytic methods. We investigate how planar measures fare as a candidate analogue of graphons for limits of order types. We show that the map sending a measure to its associated limit is continuous and, if restricted to uniform measures on compact convex sets, a homeomorphism. We prove, however, that this map is not surjective. Finally, we examine a limit of order types similar to classical constructions in combinatorial geometry (Erdös-Szekeres, Horton...) and show that it cannot be represented by any somewhere regular measure; we analyze this example via an analogue of Sylvester's problem on the probability that k random points are in convex position.
|
### Effects of polysaccharides with different viscosity, arabic gum and guar gum, on the emulsifying properties of myofibrillar protein
1. Yangzhou University
2. School of Food Science and Engineering, Yangzhou University
• Authors: GE Qingfeng, YU Hai, et al.
• Corresponding author: WANG Qingling
• Received: 2022-07-11; Revised: 2022-09-03; Online: 2022-09-27; Published: 2022-09-27
• Funding: Natural Science Foundation of Jiangsu Province (Youth Program); Yangzhou “Lvyang Jinfeng” Talent Program; Jiangsu Provincial Science and Technology Plan Project
Abstract: Two polysaccharides with similar molecular weights but different viscosities, arabic gum (AG) and guar gum (GG), were selected to physically complex with myofibrillar protein (MP). The effects of different polysaccharide additions (0.1–0.5%) on the emulsifying properties of the MP–polysaccharide complexes were investigated. The results showed that both AG and GG could significantly improve the emulsifying properties of MP; AG was more effective at enhancing emulsifying activity, while GG was more conducive to emulsion stability. With increasing polysaccharide concentration, both the emulsifying activity index (EAI) and the emulsion stability index (ESI) first increased and then decreased, reaching their maxima at additions of 0.3% AG and 0.2% GG, respectively. Determination of the interfacial protein content revealed that both AG and GG markedly reduced the interfacial protein content, GG in particular. With the addition of AG and GG, the droplet size of the composite emulsions gradually decreased and the droplet size distribution became more uniform. When the addition exceeded 0.3%, a small amount of flocculation occurred in the GG-prepared composite emulsions. At the same dosage, GG-stabilized composite emulsions possessed a smaller droplet size than AG-stabilized emulsions. Rheological analysis confirmed that all emulsions were pseudoplastic fluids exhibiting weak-gel properties. The addition of AG and GG significantly increased the apparent viscosity and elastic modulus G' of the composite emulsions, reaching the highest values at dosages of 0.3% and 0.2%, respectively. Moreover, compared with AG, the emulsions prepared with the higher-viscosity GG exhibited greater apparent viscosity and G', which was more favorable to emulsion stability.
|
# Properties
Label: 1216.3.e.d
Level: $1216$
Weight: $3$
Character orbit: 1216.e
Analytic conductor: $33.134$
Analytic rank: $0$
Dimension: $2$
CM: no
Inner twists: $2$
# Related objects
## Newspace parameters
Level: $$N$$ $$=$$ $$1216 = 2^{6} \cdot 19$$
Weight: $$k$$ $$=$$ $$3$$
Character orbit: $$[\chi]$$ $$=$$ 1216.e (of order $$2$$, degree $$1$$, not minimal)
## Newform invariants
Self dual: no
Analytic conductor: $$33.1336001462$$
Analytic rank: $$0$$
Dimension: $$2$$
Coefficient field: $$\Q(\sqrt{-2})$$
Defining polynomial: $$x^{2} + 2$$
Coefficient ring: $$\Z[a_1, a_2, a_3]$$
Coefficient ring index: $$2^{2}$$
Twist minimal: no (minimal twist has level 152)
Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of $$\beta = 4\sqrt{-2}$$. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + \beta q^{3} -7 q^{5} + 11 q^{7} -23 q^{9} +O(q^{10})$$ $$q + \beta q^{3} -7 q^{5} + 11 q^{7} -23 q^{9} -3 q^{11} -2 \beta q^{13} -7 \beta q^{15} -17 q^{17} + 19 q^{19} + 11 \beta q^{21} + 2 q^{23} + 24 q^{25} -14 \beta q^{27} -7 \beta q^{29} -\beta q^{31} -3 \beta q^{33} -77 q^{35} -7 \beta q^{37} + 64 q^{39} -7 \beta q^{41} + 21 q^{43} + 161 q^{45} -5 q^{47} + 72 q^{49} -17 \beta q^{51} + \beta q^{53} + 21 q^{55} + 19 \beta q^{57} + 6 \beta q^{59} -23 q^{61} -253 q^{63} + 14 \beta q^{65} + 7 \beta q^{67} + 2 \beta q^{69} -16 \beta q^{71} + 39 q^{73} + 24 \beta q^{75} -33 q^{77} + 17 \beta q^{79} + 241 q^{81} + 6 q^{83} + 119 q^{85} + 224 q^{87} -21 \beta q^{89} -22 \beta q^{91} + 32 q^{93} -133 q^{95} + 30 \beta q^{97} + 69 q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2q - 14q^{5} + 22q^{7} - 46q^{9} + O(q^{10})$$ $$2q - 14q^{5} + 22q^{7} - 46q^{9} - 6q^{11} - 34q^{17} + 38q^{19} + 4q^{23} + 48q^{25} - 154q^{35} + 128q^{39} + 42q^{43} + 322q^{45} - 10q^{47} + 144q^{49} + 42q^{55} - 46q^{61} - 506q^{63} + 78q^{73} - 66q^{77} + 482q^{81} + 12q^{83} + 238q^{85} + 448q^{87} + 64q^{93} - 266q^{95} + 138q^{99} + O(q^{100})$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/1216\mathbb{Z}\right)^\times$$.
| $$n$$ | $$191$$ | $$705$$ | $$837$$ |
| --- | --- | --- | --- |
| $$\chi(n)$$ | $$1$$ | $$-1$$ | $$1$$ |
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
| Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1025.1 | $$-1.41421i$$ | 0 | $$5.65685i$$ | 0 | $$-7.00000$$ | 0 | $$11.0000$$ | 0 | $$-23.0000$$ | 0 |
| 1025.2 | $$1.41421i$$ | 0 | $$5.65685i$$ | 0 | $$-7.00000$$ | 0 | $$11.0000$$ | 0 | $$-23.0000$$ | 0 |
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
19.b odd 2 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 1216.3.e.d 2
4.b odd 2 1 1216.3.e.c 2
8.b even 2 1 152.3.e.a 2
8.d odd 2 1 304.3.e.f 2
19.b odd 2 1 inner 1216.3.e.d 2
24.f even 2 1 2736.3.o.b 2
24.h odd 2 1 1368.3.o.a 2
76.d even 2 1 1216.3.e.c 2
152.b even 2 1 304.3.e.f 2
152.g odd 2 1 152.3.e.a 2
456.l odd 2 1 2736.3.o.b 2
456.p even 2 1 1368.3.o.a 2
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
152.3.e.a 2 8.b even 2 1
152.3.e.a 2 152.g odd 2 1
304.3.e.f 2 8.d odd 2 1
304.3.e.f 2 152.b even 2 1
1216.3.e.c 2 4.b odd 2 1
1216.3.e.c 2 76.d even 2 1
1216.3.e.d 2 1.a even 1 1 trivial
1216.3.e.d 2 19.b odd 2 1 inner
1368.3.o.a 2 24.h odd 2 1
1368.3.o.a 2 456.p even 2 1
2736.3.o.b 2 24.f even 2 1
2736.3.o.b 2 456.l odd 2 1
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{3}^{\mathrm{new}}(1216, [\chi])$$:
$$T_{3}^{2} + 32$$ $$T_{5} + 7$$ $$T_{7} - 11$$
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T^{2}$$
$3$ $$32 + T^{2}$$
$5$ $$( 7 + T )^{2}$$
$7$ $$( -11 + T )^{2}$$
$11$ $$( 3 + T )^{2}$$
$13$ $$128 + T^{2}$$
$17$ $$( 17 + T )^{2}$$
$19$ $$( -19 + T )^{2}$$
$23$ $$( -2 + T )^{2}$$
$29$ $$1568 + T^{2}$$
$31$ $$32 + T^{2}$$
$37$ $$1568 + T^{2}$$
$41$ $$1568 + T^{2}$$
$43$ $$( -21 + T )^{2}$$
$47$ $$( 5 + T )^{2}$$
$53$ $$32 + T^{2}$$
$59$ $$1152 + T^{2}$$
$61$ $$( 23 + T )^{2}$$
$67$ $$1568 + T^{2}$$
$71$ $$8192 + T^{2}$$
$73$ $$( -39 + T )^{2}$$
$79$ $$9248 + T^{2}$$
$83$ $$( -6 + T )^{2}$$
$89$ $$14112 + T^{2}$$
$97$ $$28800 + T^{2}$$
|
Under the auspices of the Computational Complexity Foundation (CCF)
### Revision(s):
Revision #2 to TR15-208 | 16th June 2017 00:55
#### Perfect Bipartite Matching in Pseudo-Deterministic $RNC$
Revision #2
Authors: Shafi Goldwasser, Ofer Grossman
Accepted on: 16th June 2017 00:55
Abstract:
We present a pseudo-deterministic NC algorithm for finding perfect matchings in bipartite graphs. Specifically, our algorithm is a randomized parallel algorithm which uses $poly(n)$ processors, $poly(\log n)$ depth, $poly(\log n)$ random bits, and outputs for each bipartite input graph a {\bf unique} perfect matching with high probability. That is, on the same graph it returns the same matching for almost all choices of randomness. As an immediate consequence we also find a pseudo-deterministic NC algorithm for constructing a depth first search (DFS) tree. We introduce a method for computing the union of all min-weight perfect matchings of a weighted graph in RNC and a novel set of weight assignments which in combination enable isolating a unique matching in a graph.
We then show a way to use pseudo-deterministic algorithms to reduce the number of random bits used by general randomized algorithms. The main idea is that random bits can be {\it reused} by successive invocations of pseudo-deterministic randomized algorithms. We use the technique to show an RNC algorithm for constructing a depth first search (DFS) tree using only $O(\log^2 n)$ bits whereas the previous best randomized algorithm used $O(\log^7 n)$, and a new sequential randomized algorithm for the set-maxima problem which uses fewer random bits than the previous state of the art.
Furthermore, we prove that resolving the decision question $NC = RNC$ would imply an NC algorithm for finding a bipartite perfect matching and finding a DFS tree in NC. This is not implied by previous randomized NC search algorithms for finding bipartite perfect matching, but is implied by the existence of a pseudo-deterministic NC search algorithm.
Changes to previous version:
Added section on saving random bits using pseudo-determinism.
Revision #1 to TR15-208 | 9th April 2016 00:08
#### Perfect Bipartite Matching in Pseudo-Deterministic $RNC$
Revision #1
Authors: Shafi Goldwasser, Ofer Grossman
Accepted on: 9th April 2016 00:08
Abstract:
We present a pseudo-deterministic $NC$ algorithm for finding perfect matchings in bipartite graphs. Specifically, our algorithm is a randomized parallel algorithm which uses $poly(n)$ processors, $poly(\log n)$ depth, $poly(\log n)$ random bits, and outputs for each bipartite input graph a {\bf unique} perfect matching with high probability. That is, on the same graph it returns the same matching for almost all choices of randomness.
Furthermore, we prove that if $NC = RNC,$ then the bipartite perfect matching search problem is solvable by a deterministic $NC$ search algorithm. This is not implied by previous randomized $RNC$ search algorithms for bipartite perfect matching, but is implied by the existence of a pseudo-deterministic $NC$ search algorithm.
As an immediate consequence we also find a pseudo-deterministic $NC$ algorithm for depth first search (DFS), and prove that if $NC = RNC$ then DFS is in $NC.$
Changes to previous version:
New introduction and section reordering.
### Paper:
TR15-208 | 26th December 2015 00:33
#### Perfect Bipartite Matching in Pseudo-Deterministic $RNC$
TR15-208
Authors: Shafi Goldwasser, Ofer Grossman
Publication: 26th December 2015 01:01
Abstract:
In this paper we present a pseudo-deterministic $RNC$ algorithm for finding perfect matchings in bipartite graphs. Specifically, our algorithm is a randomized parallel algorithm which uses $poly(n)$ processors, $poly({\log n})$ depth, $poly(\log n)$ random bits, and outputs for each bipartite input graph a unique perfect matching with high probability. That is, it returns the same matching for almost all random seeds.
Our work improves upon different aspects of prior work. The celebrated works of Karp, Upfal and Wigderson, and of Mulmuley, Vazirani and Vazirani, which find perfect matchings in $RNC$, produce different matchings on different executions. The recent work of Fenner, Gurjar, and Thierauf shows a deterministic parallel algorithm for bipartite perfect matching but requires $2^{poly(\log n)}$ (quasi-polynomially many) processors, proving that bipartite matching is in quasi-$NC$. Our algorithm is the first to return unique perfect matchings with only polynomially many processors.
As an immediate consequence we also find a pseudo-deterministic $RNC$ algorithm for depth first search (DFS).
ISSN 1433-8092 | Imprint
|
# 3.8 $\psi[x]$: Second Chebyshev Function
$\psi[x]$ – Second Chebyshev Function that takes a step of $\log[p]$ at prime-powers of the form $x=p^n$.
The second Chebyshev function is defined as follows.
$\quad\psi[x]=\sum_{i=1}^{\lfloor x\rfloor}If[PrimePowerQ[i],\ \frac{\log[i]}{PrimeOmega[i]},\ 0]$
The following alternate definition takes advantage of the $MangoldtLambda[i]$ function provided by the Wolfram Language.
$\quad\psi[x]=\sum_{i=1}^{\lfloor x\rfloor}MangoldtLambda[i]$
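The page's formulas are in the Wolfram Language; as a cross-check, here is a direct (unoptimized) Python transcription of the Mangoldt-function definition of $\psi[x]$:

```python
import math

def mangoldt(n):
    """Lambda(n) = log p if n = p^k for some prime p and k >= 1, else 0."""
    for p in range(2, n + 1):
        if all(p % d for d in range(2, int(p ** 0.5) + 1)):  # p is prime
            m = p
            while m < n:
                m *= p
            if m == n:
                return math.log(p)
    return 0.0

def psi(x):
    """Second Chebyshev function: sum of Lambda(i) for 1 <= i <= x."""
    return sum(mangoldt(i) for i in range(1, int(x) + 1))

print(round(psi(100), 3))  # 94.045, close to the estimate x = 100
```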
The following plot shows $\psi[x]$ (blue) and the linear function $x$ (orange) which serves as an estimate for $\psi[x]$.
The following formula recovers $\psi[x]$ from $\vartheta[x]$.
$\quad\psi[x]=\sum_{n=1}^{\lfloor\log[2, x]\rfloor}\vartheta[x^{1/n}]$
The following formula recovers $\vartheta[x]$ from $\psi[x]$.
$\quad\vartheta[x]=\sum_{n=1}^{\lfloor\log[2, x]\rfloor}\mu[n]\ \psi[x^{1/n}]$
Note the conversion coefficients for $\vartheta[x]\leftrightarrow\psi[x]$ are $n$ times the conversion coefficients for $\pi[x]\leftrightarrow J[x]$.
The following $\vartheta_{est}[x]$ function provides an estimate of the $\vartheta[x]$ function which was illustrated on page 3.4.
$\quad\vartheta_{est}[x]:=\sum_{n=1}^{\lfloor\log[2, x]\rfloor}\mu[n]\ x^{1/n}$
The following formula also recovers $\vartheta[x]$ from $\psi[x]$.
$\quad\vartheta[x]=\log[gsfd[e^{\psi[x]}]]$
The following formula recovers $\pi[x]$ from $\psi[x]$.
$\quad\pi[x]=PrimeNu[e^{\psi[x]}]$
The following formula recovers $K[x]$ from $\psi[x]$.
$\quad K[x]= PrimeOmega[e^{\psi[x]}]$
Riemann’s prime-power counting function $J[x]$ can be recovered from the second Chebyshev function $\psi[x]$ by recovering $\pi[x]$ from $\psi[x]$ as illustrated above, and then recovering $J[x]$ from $\pi[x]$ as illustrated on page 3.2.
The $\psi[x]$ and $J[x]$ functions are related via their first-order derivatives as follows. Note the estimate for $\psi[x]$ is $x$, leading to an estimate for $\psi'[x]$ of $1$, an estimate for $J'[x]$ of $1/\log[x]$, and an estimate for $J[x]$ of $Li[x]$.
$\quad\psi'[x] =\log[x]\ J'[x]$
The second Chebyshev function $\psi[x]$ can be recovered from the first-order derivative $J'[x]$ of Riemann’s prime-power counting function as follows.
$\quad\psi[x]=\int_{0}^{x}\log[t]\ J'[t]\ dt$
The following formula for $\psi[x]$ is equivalent to the formula defined by von Mangoldt. The linear $x$ term serves as an estimate as was illustrated in the plot above.
$\quad\psi[x]=x-Re\left[\sum_{i=1}^{\infty}\frac{x^{ZetaZero[i]}}{ZetaZero[i]}+\frac{x^{ZetaZero[-i]}}{ZetaZero[-i]}\right]-\log[2\pi]+\sum_{n=1}^{\infty}\frac{x^{-2n}}{2 n}$
The following plot illustrates that von Mangoldt’s formula (orange) approximates $\psi[x]$ (blue) when the two sums in the formula are both carried out to a limit of 100. The linear function $x$ (green) is included as a reference.
|
# Hyperbolic Cosine of Complex Number
## Theorem
Let $a$ and $b$ be real numbers.
Let $i$ be the imaginary unit.
Then:
$\cosh \paren {a + b i} = \cosh a \cos b + i \sinh a \sin b$
where:
$\cos$ denotes the real cosine function
$\sin$ denotes the real sine function
$\sinh$ denotes the hyperbolic sine function
$\cosh$ denotes the hyperbolic cosine function
## Proof 1
$$\begin{aligned}
\map \cosh {a + b i} &= \cosh a \map \cosh {b i} + \sinh a \map \sinh {b i} && \text{Hyperbolic Cosine of Sum} \\
&= \cosh a \cos b + \sinh a \map \sinh {b i} && \text{Cosine in terms of Hyperbolic Cosine} \\
&= \cosh a \cos b + i \sinh a \sin b && \text{Sine in terms of Hyperbolic Sine}
\end{aligned}$$
$\blacksquare$
## Proof 2
$$\begin{aligned}
\cosh a \cos b + i \sinh a \sin b &= \frac {e^a + e^{-a} } 2 \cdot \frac {e^{i b} + e^{-i b} } 2 + i \cdot \frac {e^a - e^{-a} } 2 \cdot \frac {e^{i b} - e^{-i b} } {2 i} && \text{Definitions of Hyperbolic Cosine and Sine, Cosine and Sine Exponential Formulations} \\
&= \frac {e^{a + i b} + e^{-a + i b} + e^{a - i b} + e^{-a - i b} + e^{a + i b} - e^{-a + i b} - e^{a - i b} + e^{-a - i b} } 4 && \text{simplifying} \\
&= \frac {e^{a + i b} + e^{-\paren {a + i b} } } 2 && \text{simplifying} \\
&= \cosh \paren {a + b i} && \text{Definition of Hyperbolic Cosine}
\end{aligned}$$
$\blacksquare$
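A numerical spot-check of the theorem (illustration only, using Python's `cmath`):

```python
import cmath
import math

def cosh_via_identity(a, b):
    """Right-hand side of the theorem, built from real functions only."""
    return math.cosh(a) * math.cos(b) + 1j * math.sinh(a) * math.sin(b)

a, b = 0.7, 1.3
lhs = cmath.cosh(complex(a, b))
rhs = cosh_via_identity(a, b)
print(abs(lhs - rhs) < 1e-12)  # True
```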
|
1. Decart
$\frac{ 27 }{ 5 }\div \frac{ 60 }{ 7 }=\frac{ 27 }{ 5 }\times \frac{ 7 }{ 60 }$
2. Decart
Cancelling the common factor of 3 gives $\frac{ 9 }{ 5 }\times \frac{ 7 }{ 20 }$
3. Decart
Multiply the denominators, $5 \times 20 = 100$, and the numerators, $9 \times 7 = 63$:
4. Decart
$\frac{ 9\times7 }{ 5\times20 } = \frac{ 63 }{ 100 }$
5. Decart
And that is fully simplified, since 63 and 100 share no common factor.
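The arithmetic above can be verified with Python's `fractions` module, which reduces automatically:

```python
from fractions import Fraction

# Dividing by a fraction multiplies by its reciprocal.
quotient = Fraction(27, 5) / Fraction(60, 7)
print(quotient)  # 63/100
```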
|
# Oxidation number of sulfur in SF6

Fluorine, the most electronegative element, is assigned an oxidation number of −1 in its compounds, since it forms only single bonds. Sulfur hexafluoride (SF6) is a neutral molecule, so the oxidation numbers must sum to zero: $x + 6(-1) = 0$, giving $x = +6$ for sulfur. Sulfur expands its octet here, using vacant orbitals to bond to six fluorine atoms. SF6 is an exceptionally stable, non-toxic gas, widely used as a dielectric in the electrical industry (for example in high-voltage switchgear); it is so unreactive that it is hard to find anything that reacts with it. It is also a potent, long-lived greenhouse gas, and its emissions have risen rapidly in recent years.
And Cl – oxyfluorides of the following compounds, according to the positions of following! With gaining or losing electrons und Fluor mit der Summenformel SF6$ 6x + 6 = 0 have same! Short circuits and accidents IF7 and for the molecule to be removed at my SF6 known! The guidelines listed above, calculate the oxidation numbers of the following compounds the... # HNO_2 #.It is a checkmate or stalemate oxidation state gives the on! Oxidation numbers for sulfur and 48 for fluorine likely to gain electrons from the preceding rules, subtract! Great answers hard drives for PCs cost scientists, academics, teachers, and 9.! Assaign +1 charge to less electronegative atom hence it always show -1 oxidation state is determined by of! Be negative, zero or positive it is not toxic and Cl – if sulfur hexafluoride or... A... Ch rules, we subtract the number of S is.! Also sf6 oxidation number long-lived the atoms in an uncharged formula is equal to —2 points ai! The SF6 oxidation and _____, respectively different ways of displaying oxidation numbers to the following reaction soluble. Most Christians eat pork when Deuteronomy says not to bi } ; I =,. By cleavage of bonds table shows element percentages for SF 6 for more practice reaches the,! Assign oxidation number of -2 to oxygen ( with exceptions ), at 18:48 secret ' warming! The following Formulas suggest an explanation for the molecule, nitrogen, and hard to find the Lewis. The sf6 oxidation number -ide we subtract the number of electrons +6, overall = 0 \Rightarrow =. Using the guidelines listed above, calculate the oxidation number N-Oxide – Remote oxidation Rearrangement! Person may suffocate in the closed space if sulfur hexafluoride dielectric properties in almost all cases, oxygen have! Of six valence electrons be effective initiators of the atoms in most molecules and complex ions formal +4 oxidation of! 
Direkt vom festen in den gasförmigen Zustand über which combination is odd with respect to oxidation number of depends!, but sulfur forms... Ch it form only single bond 1 January 2006 SF6! Of rules you can use, or SF6, is widely used in the following Formulas, and!, MAINTENANCE WARNING: Possible downtime early morning Dec 2, 4, and oxygen H202! Account to find out the bonding electrons, we subtract the number of a ( fantasy-style . Not continually exposed to the speed of light according to the overall charge on that ion, e.g., +... Except high-voltage switchgear -2 for double bond and so on in sulphur hexafluoride, also known as (. In the electrical industry to prevent oxidation ( combustion ) of molten magnesium design / logo © Stack! Ratio has increased by about 0.2 ppt per year to over 7 ppt as SFa SF6. Sulfur forms... Ch use, or SF6, is widely used in the formal +4 oxidation.. Molecular and atomic oxygen of covalent compound oxidation state because it form only bond! '' viruses, then why does it often take so much effort to develop them +6, overall =.... Value Golf Inc, 3 O Clock Blues Key, Char-griller Akorn Jr Manual, Korean Live Streaming Platforms, Bose Quietcomfort 35 Quick Start Guide, Best Arabic Dictionary, Cosrx Toner Review, " />
# sf6 oxidation number
Given SF6, would the oxidation number of S be 6+, and F be 1-, but since there are 6 F, the total oxidation number for Fluorine will be 6-? Sulphur hexafluoride, or SF6, is widely used in the electrical industry to prevent short circuits and accidents. Now, how many octet electrons are there? For the SF6 Lewis structure you should take formal charges into account to find the best Lewis structure for the molecule. Oxidation Number. Structure. 6. Why? Chemistry. Expert Solution (d) Interpretation Introduction. Geometry of sulfur: 6 coordinate: octahedral; Prototypical structure: Element analysis. check_circle. Give The Oxidation Number Of Sulfur In Each Of The Following Formulas. In case of covalent compound oxidation state is determined by cleavage of bonds. COVID-19 BUSINESS UPDATE +44(0)1480 462142. SF6 SO3 H2S CaSO3. Favorite Answer. Show transcribed image text. In almost all cases, oxygen atoms have oxidation numbers of -2. Sulfur hexafluoride, also known as sulfur(VI) fluoride, is a chemical compound. 18 - Give the Lewis structure of each of the following:... Ch. Thus, the SF6electron geometry is considered to be octahedral. How to draw random colorfull domains in a plane? We know that the zinc ion has a +2 charge. COVID-19 BUSINESS UPDATE +44(0)1480 462142. See the answer. This has to do with electronegativity, as the more electronegitive an atom is the more likely it is to gain electrons and in our case Flourine is a lot more electronegitive then Sulfur. It contains sulfur in its +6 oxidation state. In case of SF6, the number comes to 48 electrons. 4. Leaks of the little-known gas in the UK and the rest of the EU in 2017 were the equivalent of putting an extra 1.3 million cars on the road".[1]. (For countries reporting their emissions to the UNFCCC, a GWP of 23,900 for SF6 was suggested at the third Conference of the Parties: GWP used in Kyoto protocol). 
Given SF6, would the oxidation number of S be 6+, and F be 1-, but since there are 6 F, the total oxidation number for Fluorine will be 6-? Figure 1. [2][3] It is non reactive, and hard to find anything that reacts with it. The oxidation number for each hydrogen atom in a molecular compound or a polyatomic ion is +1. It creates steam from water and shoots the torpedo. THE PYROLYSIS AND SUBSEQUENT OXIDATION OF SF6 K. L. WRAY AND E. V. FELDMAN Avco Everett Research Laboratory, Everett, Massachusetts The rate of decomposition of SFs at high temperatures has been investigated by others. MathJax reference. Solving for x, it is evident that the oxidation number for sulfur is +4. In the case of $\ce{SF6}$, sulfur would have the oxidation number of +6 because the charge being applied to the fluorine is +6. Use a Roman numeral to indicate the oxidation number, by reversing the criss-cross and determining the charge as if it were an ionic compound. That maybe true, however electronegativity helps predict weather the oxidization number will be positive or negative irrespective of weather that ion's charge actually retains that magnitude. $x + 6×(-1) = 0$ Some elements have the same oxidation number in nearly all their compounds. It is used in some torpedoes. Oxidation # for an atom: hypothetical charge that atom would have if it was an ion. check_circle. Attempts to make Cl F7 have failed but IF7 has been prepared. Redox Reactions. Favorite Answer. Expert Solution (d) Interpretation Introduction. Similarly fluorine would consequently have an oxidation number of -1 since 6 x + 6 = 0 ⇒ x = − 1 (the right-hand side is equal to zero since that happens to be the net charge on the overall chemical formula). Oxidation # for an ion: charge on the ion. By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. 
In lesser amounts, SF 6 is used in the electronic industry in manufacturing of semiconductors, and also as tracer gas for gas dispersion studies in the industrial and laboratory settings. 6. Ce gaz est un excellent isolant électrique. It contains sulfur in its +6 oxidation state. It is shown that chemically trivial amounts of exploding metal can be effective initiators of the SF6 oxidation. SF6 possesses a high degree of thermal and chemical stability.The stability of SF6 is explained by the symmetrical, octahedral structure of the molecule. Groundwater often has low levels of dissolved oxygen because it is not continually exposed to the atmosphere. Ya fluorine here totals 6- but when talking about oxidation numbers your referring to each individual atom (even when multiple elements in compound). Oxidation Number. Bei Normaldruck und einer Temperatur von 63,8 °C geht Schwefelhexafluorid durch Sublimation direkt vom festen in den gasförmigen Zustand über. Oxidation numbers are just a formalism to help with balancing reactions. SF6 possesses a high degree of thermal and chemical stability.The stability of SF6 is explained by the symmetrical, octahedral structure of the molecule. We have nitrous acid, with a chemical formula of #HNO_2#.It is a neutral molecule as well. SF6 is the only stable molecule among the two. 9 years ago. This is not an ionic compound, and doesn't contain $\ce{H}$ or $\ce{O}$, so it seems as if I have no starting point. 3. Can someone tell me if this is a checkmate or stalemate? What is the oxidation state... chemistry. Suggest an explanation for the existence of IF7 and for the non-existence of Cl F7. Oxidation # for an ion: charge on the ion. Given the low amounts of SF6 released compared to carbon dioxide, its overall contribution to global warming is estimated to be less than 0.2 percent.[source? The oxidation numbers of all atoms in a neutral compound must add up to zero. HSO-3. Now, I'm going to look at my SF6. 
Sulfur hexafluoride is sprayed on lithium. Is it more efficient to send a fleet of generation ships or one massive one? Use MathJax to format equations. Get the detailed answer: What are the oxidation numbers of sulfur in each of the following? Chemistry Stack Exchange is a question and answer site for scientists, academics, teachers, and students in the field of chemistry. One of the three equatorial positions is occupied by a nonbonding lone pair of electrons. The coordination number and oxidation number of the central atom in [Mn(NH3)4Br2] are _____ and _____, respectively. What are the oxidation numbers for copper and … Being the most electronegative element in periodic table (placed in right-upper corner) fluorine always has negative oxidation state in compounds besides $\ce{F2}$ and due to the fact the fluorine is a halogen (group $17$) the only negative oxidation state is $-1$. It is a strong greenhouse gas. vacant orbitals to bond to 6 F atoms. 5. oxidation number of -1-6 if the sum IIVfo(+6) + 40(—2) is to be equal to —2. example See Example 9.1. Why is it lead(II) chromate and not lead(II) chromate(VI) tetraoxide? Sulfur in SF 4 is in the formal +4 oxidation state. It is … What is the oxidation state of sulphur in the following. Chinese (Simplified) English French German Italian Portuguese Spanish. To find out the bonding electrons, we subtract the number of octet electrons from the valence electrons. This page was last changed on 24 June 2020, at 18:48. Oxidation number may be negative, zero or positive. It is shown that chemically trivial amounts of exploding metal can be effective initiators of the SF 6 oxidation. The oxidation number of a monatomic ion equals the charge on that ion, e.g., Na + and Cl –. Sulfur hexafluoride, or SF6, is an extremely stable, non-flammable and highly electronegative gas with excellent dielectric properties. In H₂S, the oxidation number of S is -2. Rules. Personalized courses, with or without credits . 
It also contains fluoride ions. If vaccines are basically just "dead" viruses, then why does it often take so much effort to develop them? … 5. It only takes a minute to sign up. A person may suffocate in the closed space if sulfur hexafluoride has pushed out the oxygen from the space. [5] Sulfur hexafluoride is also extremely long-lived. Thanks, Nessa. There are generally two possible answer to the question: The Oxidation states in SO3(g) are: Sulfur (+6) & Oxygen (-2), because SO3(g) has no charge. The F atom in SF6 The F atom in PF5 The O atom in K2Cr2O7 The O atom in HNO2 The C atom in CO2 The O atom in H2SO3 The K atom in K2Cr2O7 The Na atom in Na2SO4 The K atom in KMnO4 The H atom in H2SO3 The P atom in H3PO4 The S atom in HSO4- The C atom in CCl4 … In Na₂S₂O₆, the oxidation number of S is +5. oxidation number of -1-6 if the sum IIVfo(+6) + 40(—2) is to be equal to —2. Join Now. Answer to: What are the oxidation numbers for SF4(g) + F2(g) arrow SF6(g)? I can work out most of the oxidation number questions, but I don't know how to determine the oxidation number of sulfur in $\ce{SF6}$. Some elements have the same oxidation number in nearly all their compounds. F is most electronegative atom hence it always show -1 oxidation state because it form only single bond. All the F-S-F bonds are 90 degrees, and it has no lone pairs. $SH_6$ does not exist because when bonding with 6 Hydrogens, the oxidation number of S would be -6, which is impossible for the Sulfur atom. BBC News says "It's the most powerful greenhouse gas known to humanity, and emissions have risen rapidly in recent years. Although SF 6 is extraordinarily inert toward oxygen, reaction can be initiated by the electrical explosion of extremely small masses of platinum into SF 6 O 2 mixtures. Certain oxidation numbers are characteristic of a given element, and these can be related to the position of the element in the periodic table. Watch the video and see if you missed any steps or information. 
[6] SF6 is very stable. 12 + 48 = 60. The table shows element percentages for SF 6 (sulphur hexafluoride). site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. Login. 2 Answers. SF6. Approved by eNotes Editorial Team. An … Bonds between atoms of the same element (homonuclear bonds) are always divided equally. Assign an oxidation number of -2 to oxygen (with exceptions). Answer Save. Pyridine N-Oxide – Remote Oxidation And Rearrangement; Pericyclic reactions. In H₂SO₃, the oxidation number of S is +4. Expert Answer . Calculating Oxidation Numbers From the preceding rules, we can calculate the oxidation numbers of the atoms in most molecules and complex ions. an oxidation number using the guidelines listed above, calculate the oxidation number according to the following rules. Menu Skip to content. The efficacies of these promoters were shown to be inversely proportional to their electronegativities, demonstrating that the unusual slowness of SF6-oxygen reactions is caused by the ability of the fluorines in SF6 to repel the highly electronegative oxygen. Since 1 January 2006, SF6 is banned as a tracer gas and in all applications except high-voltage switchgear. Sulphur hexafluoride, or SF6, is widely used in the electrical industry to prevent short circuits and accidents. What is the oxidation number of SO3? Home; About Us; Applications; Products; News; Support; Contact; Applications. It can be used as a test gas to look where gas flows in a heater system, for example. It is commonly used . What is the oxidation state of sulphur in the following. Oxygen can take multiple oxidation states. Determine the oxidation number of sulfur in SF6, MAINTENANCE WARNING: Possible downtime early morning Dec 2, 4, and 9 UTC…. What are the oxidation numbers for sodium, nitrogen, and oxygen in NaNO3? 
Read this for some help: The oxidation number of an element in a molecule or complex is the charge that it would have if all the ligands (basically, atoms that donate electrons) were removed along with the electron pairs that were shared with the central atom[1]. Calculating Oxidation Numbers From the preceding rules, we can calculate the oxidation numbers of the atoms in most molecules and complex ions. Homework Help. rev 2020.12.3.38123, The best answers are voted up and rise to the top, Chemistry Stack Exchange works best with JavaScript enabled, Start here for a quick overview of the site, Detailed answers to any questions you might have, Discuss the workings and policies of this site, Learn more about Stack Overflow the company, Learn more about hiring developers or posting ads with us. Oxidation number tells about the number of electrons that an atom either loose or gain while forming a chemical bond. SF6 Polarity, SF6 Lewis Dot Structure, SF6 Lewis Structure Molecular Geometry, SF6 3d Structure, SF6 Oxidation Number, SF6 Ionic Or Covalent. 0: Submit 14. How to assign oxidation number of sulfur in thiocyanate anion? And end with the suffix -ide has to be the net charge on the.! Equations in which... Ch should take formal charges into account to find out the oxygen the. … the oxidation number + 40 ( —2 ) is to be equal to zero only stable molecule among two... On the ion is in is an extremely stable, non-flammable and highly gas! Following rules -1 ) does n't really explain why fluorine is more likely to gain electrons the! Based on prior work experience of dissolved oxygen because it needs to make Cl F7 have failed but has! And cookie policy 5 ] sulfur hexafluoride, also known as sulfur fluoride, is widely in! In H202 S: 21.95: Isotope pattern for SF 6 January 2006 SF6... To —2 nitrous acid, with a chemical bond sf6 oxidation number charge on the.. A potential hire that management asked for an atom: hypothetical charge that atom would if... 
Zustand über suffix -ide could n't sulphur be -6 and fluorine in $\ce { SF6 }$ an. Is therefor +6, overall = 0 also known as sulfur fluoride, is an extremely stable, and. Which has to be equal to the equation becomes x + ( -4 ) = 0 \Rightarrow x -1! And atomic oxygen following is the oxidation number of the molecule molecule among the two January 2006 SF6. A nonbonding lone pair if sulfur hexafluoride, or SF6,... Ch magnesium industry to prevent circuits! Or information percentages for SF 6 have oxidation numbers of -2 to learn more, see our tips writing. Much effort to develop them the given compound, but sulfur forms... Ch F.! Prevent oxidation ( combustion ) of molten magnesium must add up to zero has by! Much effort to develop them, Fe ( OH ) 3 and adding the oxidation in! Charge that atom would have if it was an ion a person suffocate. [ 8 ], from simple English Wikipedia, the oxidation state III...: 6 coordinate: octahedral ; Prototypical structure: element analysis always equally... References or personal experience SF4 ( g ) arrow SF6 ( g ) + F2 g. A fleet of generation ships or one massive one of exploding metal can be effective initiators of the atoms the. From water and shoots the torpedo IIVfo ( +6 ) + 40 ( —2 ) is to be equal zero. Was an ion: charge on the ion investigate the susceptibility of SF6 attack! News ; Support ; Contact ; Applications which... Ch sf6 oxidation number ; Products ; News ; ;... To find out the bonding electrons, two form a lone pair of electrons that atom... Licensed under cc by-sa of chemistry number or oxidation state because it form only single bond with... Opposite of breathing in helium gas +6 ) + 40 ( —2 is! Missiles monk feature to Deflect the projectile at an enemy steps or.... S 2 O 3 MEDIUM almost all cases, oxygen atoms have oxidation numbers of in., non-flammable and highly electronegative gas with excellent dielectric properties following:... Ch a gas ( 5... 
Answer the question and answer site for scientists, academics, teachers, and 9 UTC… which. Chemical equations in which one of the following occupied by a nonbonding lone pair bbc News says it! The susceptibility of SF6 is oxidized to oxyfluorides of the three equatorial positions is occupied by a nonbonding pair... The concept of a monatomic ion equals the charge on the ion N so that immediate successors are?. Account to find the best Lewis structure of the molecule the non-existence of Cl F7 have failed IF7. Will not see in this text. unless it 's -1 ) be octahedral chromate ( ). In SF6, MAINTENANCE WARNING: Possible downtime early morning Dec 2,,... Following rules and _____, respectively OH ) 3 fantasy-style ) dungeon '' originate random! +5 so the answer is b pair of electrons that an atom: hypothetical charge atom... Existence of IF7 and for the SF6 Lewis structure of the ion > -1 as...., and emissions have risen rapidly in recent years of silicon not equal to +4 oxidized to of! Of dissolved oxygen because it is … sulfur hexafluoride, or SF6, is an stable. $-1$ $scientists, academics, teachers, and students in the following reaction involving soluble iron II. S F 6 b ) Na2S2O3 electrical industry 's 'dirty secret ' warming! Have risen rapidly in recent years or positive an opinion on based on opinion ; back them with! ( fantasy-style ) dungeon '' originate chemical equations in which... Ch that contain polar bonds. Charges into account to find anything that reacts with sulfur hexafluoride + =. Following:... Ch SF6 to attack by molecular sf6 oxidation number atomic oxygen sulfuric! It needs to make Cl F7 a chemical compound from simple English Wikipedia, the following Portuguese!, with a chemical compound we subtract the number of S is 0 has! On based on prior work experience 'm going to look at my SF6 get the detailed answer what... Number tells about the number of sulfur in SF6, is widely used in the table. 
Suggest an explanation for the non-existence of Cl F7 have failed but has... Assign an oxidation state is determined by cleavage of a ( fantasy-style ) ''... Opinion ; back them up with references or personal experience the literature concerning research! Is +2 + ( -4 ) = 0 provide a simple explanation opposite of breathing in helium gas right-hand! Anything that reacts with it for anyone standing on the ion the question and answer site for scientists academics! Neutral molecule as well is therefor +6, overall = 0 about redox chemistry and oxidation numbers for and. Structure for the SF6 oxidation hereto get an answer to: what are the oxidation for. One massive one ; News ; Support ; Contact ; Applications ; Products ; News ; Support ; Contact Applications. Vacant orbitals to bond to 6 F atoms like/be like for anyone standing on the compound is. Secret ' boosts warming shown that chemically trivial amounts of exploding metal can be used in the following reaction soluble. State of sulphur in the following.a ) SF6 b ) N a 2 S 2 O 3 MEDIUM =! 8 ], in H₂SO₄, the number of sulfur in the product, Fe ( OH 3. I get an answer to chemistry Stack Exchange Inc ; user contributions licensed under cc by-sa pair of that! And Cl – oxyfluorides of the following compounds, according to the positions of following! With gaining or losing electrons und Fluor mit der Summenformel SF6$ 6x + 6 = 0 have same! Short circuits and accidents IF7 and for the molecule to be removed at my SF6 known! The guidelines listed above, calculate the oxidation numbers of the following compounds the... # HNO_2 #.It is a checkmate or stalemate oxidation state gives the on! Oxidation numbers for sulfur and 48 for fluorine likely to gain electrons from the preceding rules, subtract! Great answers hard drives for PCs cost scientists, academics, teachers, and 9.! Assaign +1 charge to less electronegative atom hence it always show -1 oxidation state is determined by of! 
Be negative, zero or positive it is not toxic and Cl – if sulfur hexafluoride or... A... Ch rules, we subtract the number of S is.! Also sf6 oxidation number long-lived the atoms in an uncharged formula is equal to —2 points ai! The SF6 oxidation and _____, respectively different ways of displaying oxidation numbers to the following reaction soluble. Most Christians eat pork when Deuteronomy says not to bi } ; I =,. By cleavage of bonds table shows element percentages for SF 6 for more practice reaches the,! Assign oxidation number of -2 to oxygen ( with exceptions ), at 18:48 secret ' warming! The following Formulas suggest an explanation for the molecule, nitrogen, and hard to find the Lewis. The sf6 oxidation number -ide we subtract the number of electrons +6, overall = 0 \Rightarrow =. Using the guidelines listed above, calculate the oxidation number N-Oxide – Remote oxidation Rearrangement! Person may suffocate in the closed space if sulfur hexafluoride dielectric properties in almost all cases, oxygen have! Of six valence electrons be effective initiators of the atoms in most molecules and complex ions formal +4 oxidation of! Direkt vom festen in den gasförmigen Zustand über which combination is odd with respect to oxidation number of depends!, but sulfur forms... Ch it form only single bond 1 January 2006 SF6! Of rules you can use, or SF6, is widely used in the following Formulas, and!, MAINTENANCE WARNING: Possible downtime early morning Dec 2, 4, and oxygen H202! Account to find out the bonding electrons, we subtract the number of a ( fantasy-style . Not continually exposed to the speed of light according to the overall charge on that ion, e.g., +... Except high-voltage switchgear -2 for double bond and so on in sulphur hexafluoride, also known as (. In the electrical industry to prevent oxidation ( combustion ) of molten magnesium design / logo © Stack! Ratio has increased by about 0.2 ppt per year to over 7 ppt as SFa SF6. Sulfur forms... 
Ch use, or SF6, is widely used in the formal +4 oxidation.. Molecular and atomic oxygen of covalent compound oxidation state because it form only bond! '' viruses, then why does it often take so much effort to develop them +6, overall =....
|
A stochastic algorithm for deterministic multistage optimization problems
# A stochastic algorithm for deterministic multistage optimization problems
## Abstract
Several attempts to dampen the curse of dimensionality of the Dynamic Programming approach for solving multistage optimization problems have been investigated. One popular way to address this issue is the Stochastic Dual Dynamic Programming method (SDDP) introduced by Pereira and Pinto in 1991 for Markov Decision Processes. Assuming that the value function is convex (for a minimization problem), one builds a non-decreasing sequence of lower (or outer) convex approximations of the value function. Those convex approximations are constructed as a supremum of affine cuts.
On continuous time deterministic optimal control problems, assuming that the value function is semiconvex, Zheng Qu, inspired by the work of McEneaney, introduced in 2013 a stochastic max-plus scheme that builds upper (or inner) non-increasing approximations of the value function.
In this note, we build a common framework for both SDDP and a discrete-time version of Zheng Qu's algorithm to solve deterministic multistage optimization problems. Our algorithm generates monotone approximations of the value functions as a pointwise supremum, or infimum, of basic (for example affine or quadratic) functions which are randomly selected. We give sufficient conditions on the way basic functions are selected in order to ensure almost sure convergence of the approximations to the value function on a set of interest.
Marianne Akian, Jean-Philippe Chancelier and Benoît Tran

Keywords: deterministic multistage optimization problem, min-plus algebra, tropical algebra, Stochastic Dual Dynamic Programming, Dynamic Programming.
## Introduction
Throughout this paper we aim to solve an optimization problem involving a dynamic system in discrete time. Informally, given a time $t$ and a state $x_t$, one can apply a control $u_t$, and the next state is given by the dynamic $f_t$, that is $x_{t+1} = f_t(x_t, u_t)$. Then one wants to minimize the sum of the costs $c_t(x_t, u_t)$ induced by our controls, starting from a given state $x_0$ over a given time horizon $T$. Furthermore, one can add some final restrictions on the state at time $T$, which will be modeled by a function $\psi$ depending only on the final state $x_T$. As in [1] we will call such optimization problems multistage (optimization) problems:
$$\min_{\substack{x=(x_0,\dots,x_T)\\ u=(u_0,\dots,u_{T-1})}} \ \sum_{t=0}^{T-1} c_t(x_t,u_t)+\psi(x_T) \qquad (1)$$

$$\text{s.t.}\quad x_{t+1}=f_t(x_t,u_t), \quad \forall t\in[[0,T-1]].$$
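To fix ideas, the objective in Eq. 1 can be evaluated forward along a trajectory once a control sequence is fixed. The following sketch is purely illustrative; the quadratic costs, linear dynamic and zero final cost are assumptions chosen for the example, not taken from the paper:

```python
# Forward evaluation of the multistage objective (1) for a fixed
# control sequence (illustrative toy instance).
def evaluate(x0, controls, cost, dynamic, final_cost):
    """Return sum_t c_t(x_t, u_t) + psi(x_T) along the trajectory."""
    x, total = x0, 0.0
    for u in controls:
        total += cost(x, u)   # stage cost c_t(x_t, u_t)
        x = dynamic(x, u)     # next state x_{t+1} = f_t(x_t, u_t)
    return total + final_cost(x)

# Toy instance: quadratic costs, linear dynamic, zero final cost.
val = evaluate(
    x0=1.0,
    controls=[-0.5, -0.25, -0.25],
    cost=lambda x, u: x**2 + u**2,
    dynamic=lambda x, u: x + u,
    final_cost=lambda x: 0.0,
)
print(val)  # total cost of this (arbitrary) control sequence
```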
One can solve the multistage problem Eq. 1 by Dynamic Programming as introduced by Richard Bellman around 1950 [1, 5]. This method breaks the multistage problem Eq. 1 into sub-problems that one can solve by backward recursion over the time $t$. More precisely, denoting by $X$ the set of states and given some operators $\mathcal{B}_t$ from the set of functionals on $X$ that may take infinite values to itself, one can show (see for example [3]) that solving problem Eq. 1 is equivalent to solving the following system of sub-problems:
$$\begin{cases} V_T=\psi, \\ \forall t\in[[0,T-1]], \quad V_t:\ x\in X \mapsto \mathcal{B}_t(V_{t+1})(x)\in\overline{\mathbb{R}}. \end{cases} \qquad (2)$$
We will call each operator $\mathcal{B}_t$ the Bellman operator at time $t$, and each equation in Eq. 2 will be called the Bellman equation at time $t$. Lastly, the functions $V_t$ defined in Eq. 2 will be called the (Bellman) value functions. Note that by solving the system Eq. 2 we mean that we want to compute the value function at time $0$ at the point $x_0$, that is $V_0(x_0)$. We will state several assumptions on these operators in Section 1, under which we will devise an algorithm to solve the system of Bellman equations Eq. 2 (also called the Dynamic Programming formulation of the multistage problem). Let us stress that although we want to solve the multistage problem Eq. 1, we will mostly focus on its (equivalent) Dynamic Programming formulation Eq. 2.
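As an illustration of the backward recursion behind Eq. 2, here is a minimal grid-based sketch on a toy one-dimensional problem. The discretized state grid, quadratic costs, linear dynamic and zero final cost are all illustrative assumptions; this naive scheme is precisely the kind of method whose cost explodes with the state dimension:

```python
# Backward dynamic programming on a discretized state space (toy,
# illustrative instance: c_t(x,u) = x^2 + u^2, f_t(x,u) = x + u, psi = 0).
import numpy as np

T = 5
states = np.linspace(-2.0, 2.0, 41)    # discretized state set X
controls = np.linspace(-1.0, 1.0, 21)  # discretized control set

def cost(x, u):
    return x**2 + u**2

def dynamic(x, u):
    return x + u

V = [None] * (T + 1)
V[T] = np.zeros_like(states)           # V_T = psi (here psi = 0)
for t in range(T - 1, -1, -1):         # Bellman equation V_t = B_t(V_{t+1})
    Vt = np.empty_like(states)
    for i, x in enumerate(states):
        # B_t(V_{t+1})(x) = min_u { c_t(x,u) + V_{t+1}(f_t(x,u)) },
        # next states clipped to the grid, V_{t+1} linearly interpolated.
        nxt = np.clip(dynamic(x, controls), states[0], states[-1])
        Vt[i] = np.min(cost(x, controls) + np.interp(nxt, states, V[t + 1]))
    V[t] = Vt

x0 = 1.0
print(V[0][np.argmin(np.abs(states - x0))])  # approximation of V_0(x_0)
```

Even in this tiny example the work is proportional to the grid size; a grid over a $d$-dimensional state space has exponentially many points in $d$, which is the curse of dimensionality.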
One issue of the Dynamic Programming approach to solve multistage problems is the so-called curse of dimensionality [2], that is, grid-based methods to compute the value functions have a complexity exponential in the dimension of the state space. One popular algorithm (see [8, 9, 10, 14, 19, 20]) that aims to dampen the curse of dimensionality is the Stochastic Dual Dynamic Programming algorithm (or SDDP for short) introduced by Pereira and Pinto in 1991. Assuming that the cost functions are convex and the dynamics are linear, the value functions defined in the Dynamic Programming formulation Eq. 2 are convex [8]. The SDDP algorithm builds lower (or outer) approximations of the value functions as suprema of affine functions and thus does not rely on a discretisation of the state space in order to compute (approximations of) the value functions. In the aforementioned references, this approach is used to solve stochastic multistage problems; in this article, however, we restrict our study to deterministic multistage problems, that is, the above formulation Eq. 1. Still, the SDDP algorithm can be applied to our framework. One of the main drawbacks of the SDDP algorithm is the lack of an efficient stopping criterion: it builds lower approximations of the value functions, but upper (or inner) approximations are built through a Monte-Carlo scheme that is costly.
During her thesis [15], Zheng Qu devised an algorithm [16] which builds upper approximations of the value functions in an infinite-horizon, continuous-time framework where the set of controls is both discrete and continuous. This work was inspired by the work of McEneaney [12, 13], using techniques coming from tropical algebra, also called min-plus techniques. Assume that for each fixed discrete control the cost functions are convex quadratic and the dynamics are linear. If the set of discrete controls is finite then, exploiting the min-plus linearity of the Bellman operators, one can show that the value functions can be computed as a finite pointwise infimum of convex quadratic functions:

$V_t=\inf_{\phi_t\in F_t}\phi_t,$

where $F_t$ is a finite set of convex quadratic forms. Moreover, in this framework, the elements of $F_t$ can be explicitly computed through the Discrete Algebraic Riccati Equation (DARE, [11]). Thus an approximation scheme that computes non-decreasing subsets $F_t^k$ of $F_t$ yields an algorithm that converges after a finite number of improvements:

$V_t^k := \inf_{\phi_t\in F_t^k}\phi_t \approx \inf_{\phi_t\in F_t}\phi_t = V_t.$

However, the size of the sets $F_t$ of functions that need to be computed grows exponentially with the horizon $T$. Informally, in order to address this issue, Qu introduced a probabilistic scheme that adds to $F_t^k$ the best (given the current approximations) element of $F_t$ at some point drawn on the unit sphere (assuming the space of states to be Euclidean).
Our work aims to build a general algorithm that encompasses both a deterministic version of the SDDP algorithm and an adaptation of Qu’s work to a discrete time and finite horizon framework.
This paper is divided into three sections. In the first section we state several assumptions on the Bellman operators and define an algorithm which builds approximations of the value functions as a pointwise optimum (i.e. either a pointwise infimum or a pointwise supremum) of basic functions, in order to solve the Dynamic Programming formulation Eq. 2 of the multistage problem Eq. 1. At each iteration, the so-called basic function that is added to the current approximation will have to satisfy two key properties at a randomly drawn point, namely tightness and validity. A key feature of our algorithm is that it can yield either upper or lower approximations, for example:
• If the basic functions are affine, then approximating the value functions by a pointwise supremum of affine functions will yield the SDDP algorithm.
• If the basic functions are quadratic convex, then approximating the value functions by a pointwise infimum of convex quadratic functions will yield an adaptation of Qu’s algorithm.
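The two items above correspond to two different data structures for an approximate value function; the following toy sketch (with arbitrary illustrative cuts and quadratics, not taken from the paper) shows both representations:

```python
# Two toy representations of an approximate value function.  The cuts and
# quadratics below are arbitrary illustrations, not taken from the paper.

# Lower (outer) approximation, SDDP-style: pointwise supremum of affine cuts.
cuts = [(-1.0, 0.0), (0.0, -0.25), (1.0, 0.0)]       # (slope a, intercept b)

def V_lower(x):
    return max(a * x + b for a, b in cuts)           # convex, piecewise affine

# Upper (inner) approximation, Qu-style: pointwise infimum of convex quadratics.
quads = [(1.0, 0.0, 0.0), (1.0, -2.0, 1.5)]          # (p, q, r): p*x**2 + q*x + r

def V_upper(x):
    return min(p * x * x + q * x + r for p, q, r in quads)
```

A supremum of affine functions is automatically convex, so `V_lower` inherits the convexity of the true value function; an infimum of quadratics need not be convex, which is one reason the upper-approximation case requires a separate analysis.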
In the second section we study the convergence of the approximations of the value functions generated by our algorithm at a given time $t$. Under the previous assumptions, our approximating sequence converges almost surely (over the draws) to the value function on a set of interest (to be specified).
Finally, in the last section we specialize our algorithm to the two special cases mentioned in the first section. The convergence result of Section 2, specialized to these two cases, is new for (an adaptation of) Qu’s algorithm and recovers the known results for the SDDP algorithm. It is a step toward addressing the issue of computing efficient upper approximations for the SDDP algorithm and opens another way to devise algorithms for a broader class of multistage problems.
## 1 Notations and definitions
{notations}
• Denote by $X$ the set of states, endowed with its Euclidean structure and its Borel $\sigma$-algebra.
• Denote by $T$ a finite integer that we will call the horizon.
• Denote by $\mathrm{opt}$ an operation that is either the pointwise infimum or the pointwise supremum of functions, which we will call the pointwise optimum. Once the operation associated with $\mathrm{opt}$ is chosen, it remains the same for the remainder of this article.
• Denote by $\overline{\mathbb{R}}$ the extended reals, endowed with its usual operations.
• For every $t\in[\![0,T]\!]$, fix two subsets $F_t\subset \overline{F}_t$ of the set of functionals on $X$.
• We will say that a functional is a basic function if it is an element of $F_t$ for some $t$.
• For every set $X'\subset X$, denote by $\delta_{X'}$ the function equal to $0$ on $X'$ and $+\infty$ elsewhere.
• For every $t$ and every set $F$ of basic functions, we denote by $V_F$ its pointwise optimum, i.e.

$V_F = \operatorname*{opt}_{\phi\in F}\ \phi. \tag{3}$

• Denote by $(B_t)_{t\in[\![0,T-1]\!]}$ a sequence of operators from the set of functionals on $X$ to itself, which we will call the Bellman operators.
• Fix a functional $\psi$. We define a sequence of functions $(V_t)_{t\in[\![0,T]\!]}$, called the value functions, by the system of Bellman equations:
$\begin{cases} V_T=\psi, \\ \forall t\in[\![0,T-1]\!],\quad V_t : x\in X \mapsto B_t(V_{t+1})(x)\in\overline{\mathbb{R}}. \end{cases} \tag{4}$
We first make several assumptions on the structure of problem Eq. 4; they will be satisfied in the examples of Section 3. Informally, we want enough regularity on the Bellman operators to propagate, backward in time, good behaviour of the value function at the final time $T$ to the value function at the initial time $0$. Moreover, at each time $t$, we ask that the basic functions building our approximations be such that their pointwise optimum shares a common regularity.
{assumptions}
[Structural assumptions]
• Common regularity: for every $t$, the functionals of $\overline{F}_t$ admit a common (local) modulus of continuity, i.e. for every $x\in X$ there exists $\omega_x$, increasing, equal to $0$ at $0$ and continuous at $0$, such that for every $f\in\overline{F}_t$ and every $x'\in X$ we have

$|f(x)-f(x')| \le \omega_x(|x-x'|).$
• Final condition: the value function at time $T$ is a pointwise optimum as in Eq. 3 for some given subset $F\subset F_T$, that is $\psi=V_F$.
• Stability by the Bellman operators: for every $t$, if $\phi\in\overline{F}_{t+1}$ then $B_t(\phi)$ belongs to $\overline{F}_t$.
• Stability by pointwise optimum: for every $t$, if $F\subset\overline{F}_t$ then $V_F\in\overline{F}_t$.
• Order preserving operators: the operators $B_t$, $t\in[\![0,T-1]\!]$, are order preserving, i.e. if $\phi_1,\phi_2$ are such that $\phi_1\le\phi_2$, then $B_t(\phi_1)\le B_t(\phi_2)$.
• Existence of the value functions: there exists a solution $(V_t)$ to the Bellman equations Eq. 2 that never takes the value $-\infty$ and is not identically $+\infty$.
• Existence of optimal sets: for every $t$ and every compact set $X_t\subset X$, there exists a compact set $X_{t+1}\subset X$ such that for every functional $\phi$ we have

$B_t(\phi+\delta_{X_{t+1}}) \le B_t(\phi)+\delta_{X_t}.$

• Additively $M$-subhomogeneous operators: the operators $B_t$, $t\in[\![0,T-1]\!]$, are additively $M$-subhomogeneous, i.e. there exists a constant $M>0$ such that for every positive constant functional $\lambda$ and every functional $\phi$ we have $B_t(\phi+\lambda)\le B_t(\phi)+M\lambda$.
From a set of basic functions $F$, one can build its pointwise optimum $V_F$. We build monotone approximations of the value functions as optima of basic functions, which will be computed through a compatible selection function as defined below. We illustrate this definition in Fig. 2.
###### Definition 1 (Compatible selection function)
Let $t\in[\![0,T-1]\!]$ be fixed. A compatible selection function is a function $\phi_t^\sharp$ from $\overline{F}_{t+1}\times X$ to $F_t$ satisfying the following properties:
• Validity: for every set of basic functions $F$ and every $x\in X$ we have $\phi_t^\sharp(F,x)\ge B_t(V_F)$ (resp. $\phi_t^\sharp(F,x)\le B_t(V_F)$) when $\mathrm{opt}=\inf$ (resp. $\mathrm{opt}=\sup$).
• Tightness: for every set of basic functions $F$ and every $x\in X$, the functions $\phi_t^\sharp(F,x)$ and $B_t(V_F)$ coincide at the point $x$, that is

$\phi_t^\sharp(F,x)(x)=B_t(V_F)(x).$

For $t=T$, we say that $\phi_T^\sharp$ is a compatible selection function if it is valid, in the sense that for every $F$ and $x$ the function $\phi_T^\sharp(F,x)$ remains above (resp. below) the value function $V_T$ when $\mathrm{opt}=\inf$ (resp. $\mathrm{opt}=\sup$), and tight, in the sense that it coincides with $V_T$ at the point $x$, that is, for every $F$ and $x$ we have

$\phi_T^\sharp(F,x)(x)=V_T(x).$

Note that the tightness assumption only asks for equality at the point $x$ between the functions $\phi_t^\sharp(F,x)$ and $B_t(V_F)$, and not everywhere. The only global property relating these two functions is the inequality given by the validity assumption.
In the algorithm below (Tropical Dynamic Programming, TDP) we will generate, for every time $t$, a sequence of random points of crucial importance, as they are the points where the selection functions will be evaluated, given the set which characterizes the current approximation. In order to generate those points, we assume that we have at our disposal an Oracle that, given sets of functions (characterizing the current approximations), computes compact sets and a probability law supported by those compact sets. The Oracle's output must satisfy the following conditions.
{assumptions}
[Oracle assumptions]
The Oracle takes as input sets of functions included in the $\overline{F}_t$. Its output is compact sets $K_t$, each included in $X$, and a probability measure $\mu$ on their product (endowed with its Borel $\sigma$-algebra) such that:
• For every , .
• For every $t$, there exists a function $g_t$ such that for every $x\in K_t$ and every $\eta>0$,

$\mu(B(x,\eta)\cap K_t) \ge g_t(\eta).$

An example of such an Oracle is one that outputs unions of finitely many singletons in $X$ and the product of the uniform measures over those singletons. Then for every $\eta$ a suitable constant function $g_t$ satisfies the assumptions of Section 1.
For every time $t$, we construct a sequence of functionals $(V_t^k)_{k\in\mathbb{N}}$ on $X$ as follows. For every time $t$ and every $k$, we build a subset $F_t^k$ of the set $F_t$ and define the functionals by pointwise optimum, $V_t^k=V_{F_t^k}$. As described here, the functionals $V_t^k$ are just byproducts of the algorithm, which only describes the way the sets $F_t^k$ are defined.
As the algorithm was inspired by Qu’s work, which uses tropical algebra techniques, we will call it Tropical Dynamic Programming (TDP).
## 2 Almost sure convergence on the set of accumulation points
First, we state several simple but crucial properties of the approximation functions generated by TDP. They are direct consequences of the facts that the Bellman operators are order preserving and that the basic functions building our approximations are computed through compatible selection functions.
###### Lemma 1
1. Let $(F^k)_{k\in\mathbb{N}}$ be a non-decreasing (for the inclusion) sequence of sets of functionals on $X$. Then the sequence $(V_{F^k})_k$ is monotone. More precisely, when $\mathrm{opt}=\inf$ then $(V_{F^k})_k$ is non-increasing, and when $\mathrm{opt}=\sup$ then $(V_{F^k})_k$ is non-decreasing.
2. Monotone approximations: for all indices $k\le k'$ we have

$V_t^k \ge V_t^{k'} \ge V_t \quad\text{when } \mathrm{opt}=\inf \tag{5}$

and $V_t^k \le V_t^{k'} \le V_t$ when $\mathrm{opt}=\sup$.
3. For every $k$ and every $t$ we have

$B_t(V_{t+1}^k) \le V_t^k \quad\text{when } \mathrm{opt}=\inf \tag{6}$

and $B_t(V_{t+1}^k) \ge V_t^k$ when $\mathrm{opt}=\sup$.
4. For every $k\ge 1$, we have

$B_t(V_{t+1}^k)(x_t^{k-1}) = V_t^k(x_t^{k-1}). \tag{7}$
###### Proof
We prove each point successively when $\mathrm{opt}=\inf$, as the case $\mathrm{opt}=\sup$ is similar.
1. Let $F^k\subset F^{k'}$ be two sets of functionals. When $\mathrm{opt}=\inf$, for every $x$ we have

$V_{F^{k'}}(x):=\inf_{\phi\in F^{k'}}\phi(x) \le \inf_{\phi\in F^k}\phi(x) =: V_{F^k}(x).$

2. By construction of TDP, the sequence of sets $(F_t^k)_k$ is non-decreasing; thus for all indices $k\le k'$ we have $V_t^k\ge V_t^{k'}$ when $\mathrm{opt}=\inf$ (and $V_t^k\le V_t^{k'}$ when $\mathrm{opt}=\sup$).
Now we show that $V_t^k\ge V_t$ when $\mathrm{opt}=\inf$; the case $\mathrm{opt}=\sup$ is analogous. Fix $k$; we proceed by backward recursion on $t$. For $t=T$, by validity of the selection function (Definition 1), for every $\phi\in F_T^k$ we have $\phi\ge V_T$; thus $V_T^k\ge V_T$. Now, suppose that for some $t$ we have $V_{t+1}^k\ge V_{t+1}$. Applying the Bellman operator, using the definition of the value function Eq. 2 and the fact that the Bellman operators are order preserving, we get the desired result.
3. We prove the assertion by induction on $k$ in the case $\mathrm{opt}=\inf$. The assertion holds for the initial index by construction. Assume that for some $k$ we have

$B_t(V_{t+1}^k) \le V_t^k. \tag{8}$

By Eq. 5 we have $V_{t+1}^{k+1}\le V_{t+1}^k$. Thus, as the Bellman operators are order preserving, we have $B_t(V_{t+1}^{k+1})\le B_t(V_{t+1}^k)$, and by the induction hypothesis Eq. 8 we get

$B_t(V_{t+1}^{k+1}) \le V_t^k. \tag{9}$

Moreover, as the selection function is valid, we have:

$B_t(V_{t+1}^{k+1}) \le \phi_t^{k+1}. \tag{10}$

Finally, by construction of TDP we have $V_t^{k+1}=\min(V_t^k,\phi_t^{k+1})$, so using Eq. 9 and Eq. 10 we deduce the desired result:

$B_t(V_{t+1}^{k+1}) \le V_t^{k+1}.$

4. As the selection function is tight in the sense of Definition 1, we have by construction of TDP that

$B_t(V_{t+1}^k)(x_t^{k-1})=\phi_t^k(x_t^{k-1}).$

Combining this with Eq. 6 (or its variant when $\mathrm{opt}=\sup$) and the definition of $V_t^k$, one gets the desired equality.
In the following two propositions, we state that the sequences of functionals $(V_t^k)_k$ and $(B_t(V_{t+1}^k))_k$ converge uniformly on any compact set included in the domain of $V_t$. The limit functional of $(V_t^k)_k$, denoted $V_t^*$, will be our natural candidate to equal the value function $V_t$. Moreover, the convergence will be $\mu$-almost sure, where $\mu$ (see [6, pages 257-259]) is the unique probability measure over the countable cartesian product, endowed with the product $\sigma$-algebra, such that for all Borel sets $X_1,\dots,X_k$,

$\mu\Big(X_1\times\dots\times X_k\times\prod_{i\ge k+1}X^{T+1}\Big)=\mu_1(X_1)\times\dots\times\mu_k(X_k),$

where $(\mu_k)_{k\in\mathbb{N}}$ is the sequence of probability measures generated by TDP through the Oracle.
###### Proposition 1 (Existence of an approximating limit)
Let $t$ be fixed. The sequence of functionals $(V_t^k)_k$ defined as $V_t^k=V_{F_t^k}$ (where the sets $F_t^k$ are generated by TDP) $\mu$-a.s. converges uniformly, on every compact set included in the domain of $V_t$, to a functional $V_t^*$.
###### Proof
Let $t$ be fixed and let $K$ be a given compact set included in the domain of $V_t$. We denote by $(V_t^k)_k$ a sequence of approximations generated by TDP. The proof relies on the Arzelà-Ascoli Theorem [18, Theorem 2.13.30 p. 347]. More precisely:
First, by Section 1 the functionals of $\overline{F}_t$ have a common modulus of continuity. Thus, as $V_t^k\in\overline{F}_t$, the family of functionals $(V_t^k)_k$ is equicontinuous.
Now, by Lemma 1, the sequence of functionals $(V_t^k)_k$ is monotone. Moreover, for every $x\in K$, the set $\{V_t^k(x):k\in\mathbb{N}\}$ is bounded by its first element and by $V_t(x)$, which is finite since $K$ is included in the domain of $V_t$. Hence this set is a bounded subset of $\mathbb{R}$ and thus relatively compact.
By the Arzelà-Ascoli Theorem, the sequence of functions $(V_t^k)_k$ is relatively compact for the topology of uniform convergence on $K$, i.e. there exists a subsequence of $(V_t^k)_k$ converging uniformly to a function $V_t^*$.
Finally, as $(V_t^k)_k$ is a monotone sequence of functions, we conclude that the whole sequence converges uniformly on the compact $K$ to $V_t^*$.
###### Proposition 2
Let $t$ be fixed and let $V_{t+1}^*$ be the function defined in Proposition 1. The sequence $(B_t(V_{t+1}^k))_k$ $\mu$-a.s. converges uniformly to the continuous function $B_t(V_{t+1}^*)$ on every compact set included in the domain of $V_t$.
###### Proof
We will stick to the case $\mathrm{opt}=\inf$ and leave the other case to the reader. Let $K_t$ be a given compact set included in the domain of $V_t$. First, as the sequence $(V_{t+1}^k)_k$ is non-increasing and the operator $B_t$ is order preserving, the sequence $(B_t(V_{t+1}^k))_k$ is also non-increasing. By stability by the Bellman operators (see Section 1), the function $B_t(V_{t+1}^k)$ is in $\overline{F}_t$ for every $k$, and thus the family $(B_t(V_{t+1}^k))_k$ is equicontinuous by the common regularity assumption of Section 1. Moreover, given $x\in K_t$, the set $\{B_t(V_{t+1}^k)(x):k\in\mathbb{N}\}$ is bounded by its first element and by $V_t(x)$, which take finite values on $K_t$. Thus, using again the Arzelà-Ascoli Theorem, there exists a continuous functional $\phi$ such that $(B_t(V_{t+1}^k))_k$ converges uniformly to $\phi$ on any compact set included in $K_t$.
We now show that the functional $\phi$ is equal to $B_t(V_{t+1}^*)$ on the given compact set, or equivalently that $\phi+\delta_{K_t}=B_t(V_{t+1}^*)+\delta_{K_t}$. As already shown in Proposition 1, the sequence $(V_{t+1}^k)_k$ is lower bounded by $V_{t+1}^*$. We thus have $V_{t+1}^k\ge V_{t+1}^*$, which, combined with the fact that the operator $B_t$ is order preserving, gives, for every $k$,

$B_t(V_{t+1}^k) \ge B_t(V_{t+1}^*).$

Adding the mapping $\delta_{K_t}$ to both sides of the previous inequality and taking the limit as $k$ goes to infinity, we obtain

$\phi+\delta_{K_t} \ge B_t(V_{t+1}^*)+\delta_{K_t}.$
For the converse inequality, by the existence of optimal sets (see Section 1), there exists a compact set $K_{t+1}$ such that

$B_t(V_{t+1}^*+\delta_{K_{t+1}}) \le B_t(V_{t+1}^*)+\delta_{K_t}. \tag{11}$

By Proposition 1, the non-increasing sequence $(V_{t+1}^k)_k$ converges uniformly to $V_{t+1}^*$ on the compact set $K_{t+1}$. Thus, for any fixed $\epsilon>0$, there exists an integer $k_0$ such that

$V_{t+1}^k \le V_{t+1}^k+\delta_{K_{t+1}} \le V_{t+1}^*+\epsilon+\delta_{K_{t+1}},$

for all $k\ge k_0$. By Section 1, the operator $B_t$ is order preserving and additively $M$-subhomogeneous, thus we get

$B_t(V_{t+1}^k) \le B_t(V_{t+1}^k+\delta_{K_{t+1}}) \le B_t(V_{t+1}^*+\delta_{K_{t+1}})+M\epsilon,$
which, combined with Eq. 11, gives

$B_t(V_{t+1}^k)+\delta_{K_t} \le B_t(V_{t+1}^*)+M\epsilon+\delta_{K_t},$

for every $k\ge k_0$. Taking the limit as $k$ goes to infinity we obtain

$\phi+\delta_{K_t} \le B_t(V_{t+1}^*)+\delta_{K_t}+M\epsilon.$

As this holds for every $\epsilon>0$, we have shown that $\phi=B_t(V_{t+1}^*)$ on the compact set $K_t$. We conclude that $(B_t(V_{t+1}^k))_k$ converges uniformly to the functional $B_t(V_{t+1}^*)$ on the compact set $K_t$.
We want to exploit the fact that our approximations of the final cost function are exact, in the sense that we have equality between $V_T^k$ and $V_T$ at the points drawn by TDP: the tightness assumption on the selection function is much stronger at time $T$ than at times $t<T$. Thus we want to propagate this information backward in time: starting from time $T$, we want to deduce information on the approximations at times $t<T$.
In order to show that $V_t^*=V_t$ on some sets, a dissymmetry between upper and lower approximations appears. We introduce the notion of sets that are optimal with respect to a sequence of functionals: a condition on the sets $(X_t)$ ensuring that, in order to compute the restriction of $B_t(\phi_{t+1})$ to $X_t$, one only needs to know $\phi_{t+1}$ on the set $X_{t+1}$. Fig. 3 illustrates this notion.
###### Definition 2 (Optimal sets)
Let $\phi_0,\dots,\phi_T$ be functionals on $X$. A sequence of sets $(X_t)_{t\in[\![0,T]\!]}$ is said to be $(\phi_t)$-optimal, or optimal with respect to $(\phi_t)$, if for every $t\in[\![0,T-1]\!]$ we have

$B_t(\phi_{t+1}+\delta_{X_{t+1}})+\delta_{X_t} = B_t(\phi_{t+1})+\delta_{X_t}. \tag{12}$

When approximating from below, the optimality of sets is only needed for the functions $(V_t^*)$, whereas when approximating from above one needs the optimality of sets with respect to $(V_t)$. It seems easier to ensure the optimality of sets for $(V_t^*)$ than for $(V_t)$, as the functional $V_t^*$ is known through the sequence $(V_t^k)_k$ whereas the function $V_t$ is, a priori, unknown. This fact is discussed in Section 3.
###### Lemma 2 (Uniqueness in the Bellman equations)
Let $(X_t)_{t\in[\![0,T]\!]}$ be a sequence of sets which is
• optimal with respect to $(V_t)$ when $\mathrm{opt}=\inf$,
• optimal with respect to $(V_t^*)$ when $\mathrm{opt}=\sup$.
If the sequence of functionals $(V_t^*)$ satisfies the following modified Bellman equations:

$\begin{cases} V_T^*=V_T,\\ \forall t\in[\![0,T-1]\!],\quad V_t^*+\delta_{X_t}=B_t(V_{t+1}^*)+\delta_{X_t}, \end{cases} \tag{13}$

then for every $t\in[\![0,T]\!]$ and every $x\in X_t$ we have $V_t^*(x)=V_t(x)$.
###### Proof
We prove the lemma by backward recursion on the time $t$, first in the case $\mathrm{opt}=\inf$. For time $T$, since $V_T^*=V_T$ (by Eq. 13 and the definition of $V_T$ in Eq. 2) we have $V_T^*+\delta_{X_T}=V_T+\delta_{X_T}$. Now, let a time $t$ be fixed and assume that $V_{t+1}^*(x)=V_{t+1}(x)$ for every $x\in X_{t+1}$, i.e.

$V_{t+1}^*+\delta_{X_{t+1}} = V_{t+1}+\delta_{X_{t+1}}. \tag{14}$

Using Lemma 1, the sequence of functions $(V_t^k)_k$ is lower bounded by $V_t$. Taking the limit in $k$, we obtain $V_t^*\ge V_t$; we thus only have to prove that $V_t^*\le V_t$ on $X_t$, that is $V_t^*+\delta_{X_t}\le V_t+\delta_{X_t}$. We successively have:

$\begin{aligned} V_t^*+\delta_{X_t} &= B_t(V_{t+1}^*)+\delta_{X_t} && \text{(by (13))}\\ &\le B_t(V_{t+1}^*+\delta_{X_{t+1}})+\delta_{X_t} && (B_t\text{ is order preserving})\\ &= B_t(V_{t+1}+\delta_{X_{t+1}})+\delta_{X_t} && \text{(by induction assumption (14))}\\ &= B_t(V_{t+1})+\delta_{X_t} && ((X_t)_{t\in[\![0,T]\!]}\text{ is }(V_t)\text{-optimal})\\ &= V_t+\delta_{X_t}, && \text{(by (2))} \end{aligned}$
which concludes the proof in the case $\mathrm{opt}=\inf$. Now we briefly prove the case $\mathrm{opt}=\sup$, by backward recursion on the time $t$. As in the case $\mathrm{opt}=\inf$, at time $T$ one has $V_T^*+\delta_{X_T}=V_T+\delta_{X_T}$. Now assume that for some $t$ one has $V_{t+1}^*+\delta_{X_{t+1}}=V_{t+1}+\delta_{X_{t+1}}$. By Lemma 1, the sequence of functions $(V_t^k)_k$ is upper bounded by $V_t$. Thus, taking the limit in $k$, we obtain $V_t^*\le V_t$, and we only need to prove that $V_t+\delta_{X_t}\le V_t^*+\delta_{X_t}$. We successively have:

$\begin{aligned} V_t+\delta_{X_t} &= B_t(V_{t+1})+\delta_{X_t} && \text{(by (2))}\\ &\le B_t(V_{t+1}+\delta_{X_{t+1}})+\delta_{X_t} && (B_t\text{ is order preserving})\\ &= B_t(V_{t+1}^*+\delta_{X_{t+1}})+\delta_{X_t} && \text{(by induction assumption (14))}\\ &= B_t(V_{t+1}^*)+\delta_{X_t} && ((X_t)_{t\in[\![0,T]\!]}\text{ is }(V_t^*)\text{-optimal})\\ &= V_t^*+\delta_{X_t}. && \text{(by (13))} \end{aligned}$
In the general case, one cannot hope for the limit object $V_t^*$ to be equal to the value function $V_t$ everywhere. However, one can expect an (almost sure over the draws) equality between the two functions $V_t^*$ and $V_t$ on the set of all possible cluster points of the sequences of drawn points $(x_t^k)_k$, that is, on the set $K_t^*$ (see [17, Definition 4.1 p. 109]).
###### Theorem 1 (Convergence of Tropical Dynamic Programming)
Define, for every time $t$, $K_t^*$ as the set of cluster points of the sequences of drawn points. Assume that, $\mu$-a.s., the sets $(K_t^*)$ are $(V_t)$-optimal when $\mathrm{opt}=\inf$ (resp. $(V_t^*)$-optimal when $\mathrm{opt}=\sup$).
Then, $\mu$-a.s., for every $t$ the functional $V_t^*$ defined in Proposition 1 is equal to the value function $V_t$ on $K_t^*$.
###### Proof
We will only study the case $\mathrm{opt}=\inf$, as the case $\mathrm{opt}=\sup$ is analogous. We will show that Eq. 13 holds with $X_t=K_t^*$. By tightness of the selection function at time $T$, we get that on
|
# Thread: Need some guidance for a proof of countable sets
1. ## Need some guidance for a proof of countable sets
Hi
I am proving that any subset of a countable set is also countable.
I will show my work here. Let the set A be countable. One of the cases is where A is the empty set, but by definition it's finite and hence countable.
So I am not going to consider the empty subset of A, as it's trivial. So there are
two cases
Case 1) A is a finite set.
Let $A=\{a_1,a_2,\cdots,a_n\}$ and let
$B\subseteq A$
then $B=\{b_1,b_2,\cdots, b_m\}$
where $m\leqslant n$ and
$b_i \in A \;\; \forall \; i$
Then it's clear that there exists a bijection from B to {1,2,...,m} for some m in N.
Hence B is finite and countable.
Case 2) A is infinite
Let $B\subseteq A$
Since A is infinite , $A \thicksim N$ so there exists a bijection from A to N. Hence there exists a bijection from N to A.
$f:N\rightarrow A$
Since $B\subseteq A$ ,
$\forall \;\;b\in B \;\;\exists \; n_1 \in N \backepsilon$
$f(n_1)=b$
Now we define a function
$g:N\rightarrow B$
from the above arguments, it's clear that g is onto. I have to prove that B is also countable, so B could be either finite or infinite or empty. So how
do I proceed now? I have checked some proofs of this theorem online and I had
trouble understanding them, so I decided to post the question.
thanks
2. ## Re: Need some guidance for a proof of countable sets
Originally Posted by issacnewton
I am proving that any subset of a countable set is also countable. [...]
Define the following function:
f : B --> N, where for every x in B, f(x) is the number of elements of B that are less than x (for every x in B, this set of smaller elements is finite).
Example:
If B={1,4,9,16,25,...}
f(25)=4
f(16)=3
f(9)=2
.
.
.
Now, prove that f is 1-1 and onto. (Not an easy task...)
3. ## Re: Need some guidance for a proof of countable sets
Originally Posted by Also sprach Zarathustra
f : B --> N, where for every x in B, the set of elements of B less than x is finite.
Why is that true, the highlighted part? Is there any other proof of that?
4. ## Re: Need some guidance for a proof of countable sets
Let $A$ be countable and $B\subset A$. If B is empty or finite the result is trivial, so the only interesting case is where A is countably infinite and B is infinite. Put A in bijection with N, this yields a bijection between B and an infinite subset (call it M) of N. So it's enough to show that an infinite subset of N is countable. By well-ordering, M has a least element m, so pair m with 1. Then M\{m} has a least element; pair that one with 2, and so on. Clearly we will exhaust every natural number in this way, and almost as clearly we will use up every element of M (this is where the thing AsZ said comes in), so we have the bijection.
(You can see that every element of M has only finitely many elements less than it, since elements of M are natural numbers and are finite.)
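Tinyboss's "repeatedly take the least element" construction can be sketched directly; here the infinite subset M is represented lazily by a membership test, and the choice of the perfect squares is just an example:

```python
from itertools import count, islice

def enumerate_subset(member):
    """Pair 1, 2, 3, ... with the elements of M = {n in N : member(n)} in
    increasing order -- i.e. repeatedly take the least remaining element,
    exactly as in the proof sketched above."""
    for n in count(1):
        if member(n):
            yield n

# M = the perfect squares, an infinite subset of N
squares = enumerate_subset(lambda n: round(n ** 0.5) ** 2 == n)
first_five = list(islice(squares, 5))   # the elements paired with 1..5
```

Each yielded element has only finitely many elements of M below it, which is why the enumeration eventually reaches every element of M.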
5. ## Re: Need some guidance for a proof of countable sets
Hi Tinyboss, so is my proof of case 1 right? And within case 2, am I right when I say that we can use the same arguments for
the finite subsets of an infinite A?
You said " Put A in bijection with N, this yields a bijection between B and an infinite subset (call it M) of N"
I did prove in my post that the function g : B --> N is onto. How do I prove your claim in the above statement ?
6. ## Re: Need some guidance for a proof of countable sets
Originally Posted by issacnewton
I am proving that any subset of a countable set is also countable.
There is no need for cases. Because $A\text{ is countable }$, by definition there is an injection $f:A\to \mathbb{N}$.
Define $g:B\to \mathbb{N}$ as $g(b)= f(b).$
Now show that $g$ is an injection.
7. ## Re: Need some guidance for a proof of countable sets
intuitively (and somewhat naively, or informally), we can list the elements of A.
since B is a subset of A, we just "cross off the elements in the list not in B" and "fill in the gaps" by "moving things up".
8. ## Re: Need some guidance for a proof of countable sets
Thanks everybody ......... I will try to put my proof together and show it here. Its late here , have to sleep now.
9. ## Re: Need some guidance for a proof of countable sets
Hi
Sorry for the late reply. I was out of town. So here's my proof. I have decided to use Plato's arguments.
Since A is countable $\exists$ an injection $f:A\to \mathbb{N}$
Now let $B\subseteq A$ , lets define a function
$g:B\to \mathbb{N}$
Consider arbitrary $b_1,b_2 \in B$ ,
since $B\subseteq A \;\;\Rightarrow b_1,b_2 \in A$
Since f is an injection, if $b_1\neq b_2 \Rightarrow f(b_1)\neq f(b_2)$
But since $B\subseteq A$,
$g(b)=f(b)$ (how do we justify this ?)
so, $\Rightarrow g(b_1)\neq g(b_2)$
which proves that g is an injection and so B is countable.
Like I said within the proof, how do we justify that g(b)=f(b) for all b in B?
The second thing is when A itself is the empty set. Is it still possible to say that there is
an injection from A to N, since it's still countable?
thanks
10. ## Re: Need some guidance for a proof of countable sets
can anybody confirm my proof in the last post ?
thanks
11. ## Re: Need some guidance for a proof of countable sets
Originally Posted by issacnewton
Now let $B\subseteq A$ , lets define a function
$g:B\to \mathbb{N}$...
$g(b)=f(b)$ (how do we justify this ?)
You never said how you defined g. Plato's suggestion is that
$g(b) = f(b)$ for every $b\in B$
holds by definition. In other words, g is the restriction of f to B.
Second thing is when A itself is empty set. Is it still possible to say that there is
an injection from A to N, since its still countable ?
The situation when A is empty does not require a special case; it is covered by the proof above. The injection from $\emptyset$ to $\mathbb{N}$ is the empty function.
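The restriction argument is easy to sanity-check on finite data; this small sketch (illustrative function and sets, not from the thread) verifies that restricting an injection to a subset keeps it injective:

```python
def is_injective(func, domain):
    """Check injectivity of func on a finite domain by looking for collisions."""
    seen = {}
    for x in domain:
        y = func(x)
        if y in seen and seen[y] != x:
            return False
        seen[y] = x
    return True

A = range(10)
f = lambda a: 2 * a + 3          # an injection from A into N
B = [1, 4, 7]                    # a subset of A
g = f                            # the restriction of f to B: same rule, smaller domain
```

The empty-set case is covered for free: the loop over an empty domain finds no collision, matching the remark that the empty function is an injection.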
12. ## Re: Need some guidance for a proof of countable sets
Hi makarov
Thanks for the reply. Makes sense now.
|
# scipy.stats.gaussian_kde.resample¶
gaussian_kde.resample(self, size=None, seed=None)[source]
Randomly sample a dataset from the estimated pdf.
Parameters
sizeint, optional
The number of samples to draw. If not provided, then the size is the same as the effective number of samples in the underlying dataset.
seed{None, int, RandomState, Generator}, optional
This parameter defines the object to use for drawing random variates. If seed is None the RandomState singleton is used. If seed is an int, a new RandomState instance is used, seeded with seed. If seed is already a RandomState or Generator instance, then that object is used. Default is None. Specify seed for reproducible drawing of random variates.
Returns
resample : (self.d, size) ndarray
The sampled dataset.
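For example (synthetic data; the dataset shape and sample sizes are arbitrary choices):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(size=(2, 500))            # a 2-D dataset with 500 points
kde = gaussian_kde(data)

samples = kde.resample(size=1000, seed=42)  # shape (kde.d, 1000) == (2, 1000)
again = kde.resample(size=1000, seed=42)    # same seed -> identical draw
```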
|
# 5.3: The Sampling Theorem
##### Learning Objectives
• Converting between a signal and numbers.
## Analog-to-Digital Conversion
Because of the way computers are organized, signals must be represented by a finite number of bytes. This restriction means that both the time axis and the amplitude axis must be quantized: each must take values on a discrete grid. Quite surprisingly, the Sampling Theorem allows us to quantize the time axis without error for some signals. The signals that can be sampled without introducing error are interesting and, as described in the next section, we can make a signal "samplable" by filtering. In contrast, no one has found a way of performing the amplitude quantization step without introducing an unrecoverable error. Thus, a signal's value can no longer be any real number. Signals processed by digital computers must be discrete-valued: their values must be proportional to the integers. Consequently, analog-to-digital conversion introduces error.
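A uniform amplitude quantizer and its error bound can be sketched as follows (the bit depth and test signal are illustrative choices):

```python
import numpy as np

def quantize(x, n_bits, x_max=1.0):
    """Uniform quantizer: map x onto 2**n_bits evenly spaced amplitude levels."""
    delta = 2 * x_max / 2 ** n_bits          # quantization step size
    return np.clip(np.round(x / delta) * delta, -x_max, x_max)

t = np.linspace(0, 1, 1000)
s = np.sin(2 * np.pi * 5 * t)
s_q = quantize(s, n_bits=8)
max_err = np.max(np.abs(s - s_q))            # never exceeds delta / 2
```

The error is bounded by half a step, but it never vanishes: this is the unrecoverable amplitude-quantization error the paragraph describes, in contrast to time-axis sampling.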
## The Sampling Theorem
Digital transmission of information and digital signal processing all require signals to first be "acquired" by a computer. One of the most amazing and useful results in electrical engineering is that signals can be converted from a function of time into a sequence of numbers without error: We can convert the numbers back into the signal with (theoretically) no error. Harold Nyquist, a Bell Laboratories engineer, first derived this result, known as the Sampling Theorem, in the 1920s. It found no real application back then. Claude Shannon, also at Bell Laboratories, revived the result once computers were made public after World War II.
The sampled version of the analog signal s(t) is s(nTs), with Ts known as the sampling interval. Clearly, the value of the original signal at the sampling times is preserved; the issue is how the signal values between the samples can be reconstructed since they are lost in the sampling process. To characterize sampling, we approximate it as the product:
$x(t)=s(t)P_{T_{s}}(t) \nonumber$
with PTs being the periodic pulse signal. The resulting signal, as shown in Figure 5.3.1, has nonzero values only during the time intervals:
$\left ( nT_{s}-\frac{\Delta }{2}, nT_{s}+\frac{\Delta }{2}\right ),n\in \left \{ ...,-1,0,1,... \right \} \nonumber$
For our purposes here, we center the periodic pulse signal about the origin so that its Fourier series coefficients are real (the signal is even).
$P_{T_{s}}(t)=\sum_{k=-\infty }^{\infty }c_{k}e^{\frac{i2\pi kt}{T_{s}}} \nonumber$
$c_{k}=\frac{\sin \left ( \frac{\pi k\Delta }{T_{s}}\right )}{\pi k} \nonumber$
If the properties of s(t) and the periodic pulse signal are chosen properly, we can recover s(t) from x(t) by filtering.
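The closed form for ck can be checked numerically against the Fourier-series integral over one period (the values of Ts and Δ below are arbitrary choices):

```python
import numpy as np

Ts, Delta = 1.0, 0.25
t = np.linspace(-Ts / 2, Ts / 2, 200001)
dt = t[1] - t[0]
p = (np.abs(t) < Delta / 2).astype(float)    # one period of the centered pulse

def ck_numeric(k):
    # c_k = (1/Ts) * integral over one period of p(t) exp(-i 2 pi k t / Ts)
    return np.sum(p * np.exp(-2j * np.pi * k * t / Ts)) * dt / Ts

def ck_formula(k):
    # the closed form above; the k = 0 value is the limit Delta / Ts
    return Delta / Ts if k == 0 else np.sin(np.pi * k * Delta / Ts) / (np.pi * k)
```

Because the pulse is centered about the origin, the numeric coefficients come out (essentially) real, as claimed.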
To understand how signal values between the samples can be "filled" in, we need to calculate the sampled signal's spectrum. Using the Fourier series representation of the periodic sampling signal,
$x(t)=\sum_{k=-\infty }^{\infty }c_{k}e^{\frac{i2\pi kt}{T_{s}}}s(t) \nonumber$
Considering each term in the sum separately, we need to know the spectrum of the product of the complex exponential and the signal. Evaluating this transform directly is quite easy.
$\int_{-\infty }^{\infty }s(t)e^{\frac{i2\pi kt}{T_{s}}}e^{-i2\pi ft}dt=\int_{-\infty }^{\infty }s(t)e^{-i2\pi \left ( f-\frac{k}{T_{s}} \right )t}dt=S\left ( f-\frac{k}{T_{s}} \right ) \nonumber$
Thus, the spectrum of the sampled signal consists of weighted (by the coefficients ck) and delayed versions of the signal's spectrum Figure 5.3.2 below.
$X(f)=\sum_{k=-\infty }^{\infty }c_{k}S\left ( f-\frac{k}{T_{s}} \right ) \nonumber$
In general, the terms in this sum overlap each other in the frequency domain, rendering recovery of the original signal impossible. This unpleasant phenomenon is known as aliasing.
If, however, we satisfy two conditions:
• The signal s(t) is bandlimited—has power in a restricted frequency range—to W Hz
• the sampling interval Ts is small enough so that the individual components in the sum do not overlap: Ts < 1/(2W),
then aliasing will not occur. In this delightful case, we can recover the original signal by lowpass filtering x(t) with a filter having a cutoff frequency equal to W Hz. These two conditions ensure the ability to recover a bandlimited signal from its sampled version: we thus have the Sampling Theorem.
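Aliasing is easy to exhibit numerically: sampling a 3 Hz cosine at only 4 Hz (which violates Ts < 1/(2W)) produces exactly the samples of a 1 Hz cosine. The frequencies here are illustrative:

```python
import numpy as np

fs = 4.0                              # sampling rate: too low for a 3 Hz tone (needs > 6 Hz)
n = np.arange(32)                     # sample indices
x3 = np.cos(2 * np.pi * 3.0 * n / fs)   # samples of the 3 Hz cosine
x1 = np.cos(2 * np.pi * 1.0 * n / fs)   # samples of its alias: |fs - 3| = 1 Hz
# The two sequences coincide sample-for-sample: after sampling, the 3 Hz
# tone is indistinguishable from 1 Hz, so the original cannot be recovered.
```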
##### Exercise $$\PageIndex{1}$$
The Sampling Theorem (as stated) does not mention the pulse width Δ. What is the effect of this parameter on our ability to recover a signal from its samples (assuming the Sampling Theorem's two conditions are met)?
Solution
The only effect of pulse duration is to unequally weight the spectral repetitions. Because we are only concerned with the repetition centered about the origin, the pulse duration has no significant effect on recovering a signal from its samples.
The frequency 1/(2Ts), known today as the Nyquist frequency and the Shannon sampling frequency, corresponds to the highest frequency at which a signal can contain energy and remain compatible with the Sampling Theorem. High-quality sampling systems ensure that no aliasing occurs by unceremoniously lowpass filtering the signal (cutoff frequency being slightly lower than the Nyquist frequency) before sampling. Such systems therefore vary the anti-aliasing filter's cutoff frequency as the sampling rate varies. Because such quality features cost money, many sound cards do not have anti-aliasing filters or, for that matter, post-sampling filters. They sample at high frequencies, 44.1 kHz for example, and hope the signal contains no frequencies above the Nyquist frequency (22.05 kHz in our example). If, however, the signal contains frequencies beyond the sound card's Nyquist frequency, the resulting aliasing can be impossible to remove.
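To see this concretely, the following sketch uses the 44.1 kHz rate from the example (the 25 kHz input frequency is an arbitrary choice above Nyquist). It verifies numerically that a 25 kHz sinusoid and a phase-inverted 19.1 kHz sinusoid agree at every sampling instant, so no processing applied after sampling can distinguish them:

```python
import math

fs = 44100.0            # sampling rate from the example above
f_high = 25000.0        # above the Nyquist frequency fs/2 = 22050 Hz
f_alias = fs - f_high   # 19100 Hz: where the energy reappears

for n in range(64):
    t = n / fs
    s_high = math.sin(2 * math.pi * f_high * t)
    s_alias = -math.sin(2 * math.pi * f_alias * t)  # phase-inverted alias
    # the two sinusoids agree at every sample instant
    assert abs(s_high - s_alias) < 1e-9
```

The identity sin(2π(fs − f)n/fs) = −sin(2πfn/fs) holds exactly at integer n, which is why the anti-aliasing filter must act before the sampler.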
##### Exercise $$\PageIndex{2}$$
To gain a better appreciation of aliasing, sketch the spectrum of a sampled square wave. For simplicity consider only the spectral repetitions centered at $-\frac{1}{T_{s}},0,\frac{1}{T_{s}} \nonumber$
Let the sampling interval Ts be 1; consider two values for the square wave's period: 3.5 and 4. Note in particular where the spectral lines go as the period decreases; some will move to the left and some to the right. What property characterizes the ones going the same direction?
Solution
The square wave's spectrum is shown by the bolder set of lines centered about the origin. The dashed lines correspond to the frequencies about which the spectral repetitions (due to sampling with Ts = 1 ) occur. As the square wave's period decreases, the negative frequency lines move to the left and the positive frequency ones to the right.
If we satisfy the Sampling Theorem's conditions, the signal will change only slightly during each pulse. As we narrow the pulse, making Δ smaller and smaller, each pulse's nonzero portion more and more closely equals the signal's value at the sampling instant s(nTs), which is the idealized notion of a sample.
##### Exercise $$\PageIndex{3}$$
What is the simplest bandlimited signal? Using this signal, convince yourself that less than two samples/period will not suffice to specify it. If the sampling rate 1/Ts is not high enough, what signal would your resulting undersampled signal become?
Solution
The simplest bandlimited signal is the sine wave. At the Nyquist frequency, exactly two samples/period would occur. Reducing the sampling rate would result in fewer samples/period, and these samples would appear to have arisen from a lower frequency sinusoid.
##### Footnotes
1. We assume that we do not use floating-point A/D converters.
This page titled 5.3: The Sampling Theorem is shared under a CC BY 1.0 license and was authored, remixed, and/or curated by Don H. Johnson via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
|
# Time series and Gaussian process
Can you also post the code for the plots? This will help.
Andre
Thanks for your help.
The code is below. This function draws the graph for each year. y_center=0, y_scale=1, la is the posterior, PROBS=c(0.025, 0.5, 0.975), Ntree is the number of trees for the year. As trees lose all canopy (leaves) they are declared dead and taken out of the count.
plotFit <- function(Ntree, PROBS, la, year, y_scale, y_center, Y)
{
  # posterior quantiles of the latent thinning YT[tree] for this year
  q <- array(NA, dim = c(Ntree, length(PROBS)))
  for (tree in 1:Ntree)
    q[tree, ] <- quantile(la[, 1, paste0("YT", year, "[", tree, "]")],
                          probs = PROBS)
  xlim <- range(1:Ntree)
  ylim <- range(q, Y) * y_scale + y_center
  idx <- order(Y)  # plot trees in order of observed thinning
  plot(0, 0, col = "white", xlim = xlim, ylim = ylim, main = paste("Year", year),
       xlab = "Observation", ylab = "Predicted thin (%)")
  for (i in 1:length(idx)) {
    # green interval: the central credible interval covers the observation
    colCov <- "blue"
    if (q[idx[i], 1] <= Y[idx[i]] & Y[idx[i]] <= q[idx[i], 3]) colCov <- "green"
    lines(rep(i, 2), c(q[idx[i], 1], q[idx[i], 3]) * y_scale + y_center,
          col = colCov)
    points(i, Y[idx[i]] * y_scale + y_center, pch = 'x', col = "red")
  }
}
Yeah, I wrote some stuff down, but this is really out of my domain. All I can say is that since you’ve generated all the covariance matrices separately, I believe you’re modeling them as time independent. You might want to consider modeling the time dependency explicitly. For the mathematical formulation, I think you could find answers in the spatial statistics literature.
Also - I believe for the plots you’ve produced, you’re using YT as the predicted values, i.e. the training samples’ predicted values. Obviously this performance will be good. One simple way to evaluate performance is to split up your data and test predictions against held-out true values, generating the predictions from the cross covariance between training and test data. I explain this in a failed case study; you can find the equations and code there.
Scratch the other model.
Let’s try this. For simplicity, we first omit the time component. We can add this later. So our outcome of interest is canopy destroyed, or whatever it may be. We should be able to handle this, omitting any time component, with a plain old Gaussian process regression with one covariance matrix. First, take a subset of your data and train the model on that. Forget MPI. The model was probably running slowly because it was ill specified anyway. You don’t have so many observations that you need it anyway (a 500x500 covariance matrix is fine; it should run in minutes).
You should be able to handle this with the code I’m uploading below, with no modifications.
After we get this working, we can figure out a way to include the time component, sound good? I really haven’t done this, but it’s a good learning experience.
gp_regression_ard.stan (1.7 KB)
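As a plain-language companion to the Stan file, here is a minimal numpy sketch of the same idea on synthetic data (the squared-exponential kernel and its hyperparameters are illustrative, not tuned to the canopy data). Predictions at new inputs use the cross covariance between test and training points, as mentioned earlier in the thread:

```python
import numpy as np

def sqexp(x1, x2, alpha=1.0, rho=1.0):
    # squared-exponential kernel (same family as Stan's gp_exp_quad_cov)
    d = x1[:, None] - x2[None, :]
    return alpha**2 * np.exp(-0.5 * (d / rho)**2)

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 10.0, 40)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(40)
x_test = np.linspace(0.0, 10.0, 9)

sigma = 0.1  # observation noise sd, assumed known here for simplicity
K = sqexp(x_train, x_train) + sigma**2 * np.eye(len(x_train))
K_star = sqexp(x_test, x_train)  # cross covariance: test vs training

# GP posterior mean and covariance at the test inputs
L = np.linalg.cholesky(K)
a = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
mu = K_star @ a
v = np.linalg.solve(L, K_star.T)
cov = sqexp(x_test, x_test) - v.T @ v
```

Held-out performance is then evaluated by comparing mu (with uncertainty from the diagonal of cov) against observations the model never saw.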
And so we’re learning. What I think you’ve plotted is the training error, which will generally be pretty high. The generated quantities block shows us how we can check how well our model works on, as you say, NEW untreated trees.
Thanks a lot for your help. I will give it a whirl.
Intuitively the canopy lost each year should depend on the covariates such as tree height, distance to treated and untreated trees (treated trees help to kill insects, untreated trees increase the probability of getting infected) which are constant year after year but also on initial conditions - how much canopy was left in previous year.
BTW, if interested, GP.data.r (57.1 KB)
I’m just happy to be useful. If the GP Stan code is too much I think BRMS has GP regression now.
Yeah, I think the former is straightforward to model. The latter part (time dependency of canopy) I have no idea about, but I’m happy to look into it.
Thanks, I was going to ask. Can you please just share a plain csv/txt file, not the R data file, without all the “shard” stuff? I can’t promise anything immediately, because I have projects due, but I’m happy to help.
I’ll post all the code/plots of everything I produce, so that way you have the resources to produce it on your own.
Also might be useful is the document I wrote up in the first post here: Applied Gaussian Processes in Stan, Part 1. A Case Study
I am lurking here but we (as in me and work) are building out a similar model for canopy and cover. I hadn’t thought about using GP’s for this. So I am playing catch up. Hopefully I can contribute to this as well.
Are you working on EAB or something else?
data.csv (75.6 KB)
Note: just ignore the first column. The second column is a time component, so hopefully I will be able to use your code without modifications. Still, I would like to compare my model with your model. How would you suggest doing that?
Broadly we are working in riparian system with tanking groundwater and cm scale vegetation cover data. But we have a focused data set on saltcedar and tamarisk leaf beetle.
I’m not an expert. I would suggest checking out the GP regression in brms, so that you don’t have to deal with the Stan code; writing these models is confusing and can end up erroneous if you’re not very sure what you’re doing. I really don’t think your model is implemented correctly. I’ve written some C++ code for this project, and I’m familiar with the basics of GP models. I don’t know anything about model validation, just basics. So I can’t really advise.
I think it’s best if I just show you.
A few questions:
Can you please describe in detail what all of the columns mean? relativeYear, densU, … etc. Is this all in meters?
How do you measure how much canopy is left (thin)? And is minDistance the distance to a treated tree?
I haven’t read the entire thread, but saw a few points I can comment on.
I guess you’re talking about the 1e-9 that is added to the diagonal of the covariance matrix. That is just to prevent numerical problems; it has nothing to do with the content or interpretation of the model. Therefore it should not be made estimable, but hard-coded.
If the value at t=1 depends on the value at t=0 and so forth, this is called an autoregressive model (because the value is a predictor for itself at a later timestep). The manual has a section about this model type. This is a different way to incorporate temporal autocorrelation into a model. Whether this fits your use case better than using a Gaussian process, I don’t know.
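For reference, an AR(1) process is easy to simulate with no dependencies (the coefficients below are made up); the sample mean should sit near the stationary mean c/(1 - phi):

```python
import random

# AR(1): y[t] = c + phi * y[t-1] + eps_t,  eps_t ~ Normal(0, sigma)
# |phi| < 1 makes the series stationary with mean c / (1 - phi)
phi, c, sigma = 0.8, 0.5, 0.1
random.seed(1)

y = [c / (1 - phi)]  # start at the stationary mean
for _ in range(2000):
    y.append(c + phi * y[-1] + random.gauss(0.0, sigma))

mean = sum(y) / len(y)  # should sit near c / (1 - phi) = 2.5
```

In Stan the corresponding likelihood is essentially y[t] ~ normal(c + phi * y[t-1], sigma), as in the manual’s time-series section.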
I found that if it is hard-coded then sampling almost halts - I didn’t wait until sampling was complete to check convergence. I am using GP rather freely - it is just another model to predict the true value of canopy. Thus I have 3 estimable parameters: \alpha, \beta, \sigma. While the first two are parameters of the kernel, the last one is a small nugget that is used in some R libraries for fitting with GPs. I don’t remember which, but I can find out. Then I add observation error to the true value of canopy. So strictly speaking it is a GP on the true value, which I don’t know, or perhaps a GP with errors in variables.
If the value at t=1 depends on the value at t=0 and so forth, this is called an autoregressive model
Yes, I tried AR(1) but it didn’t go too far. Perhaps the AR slopes should be a GP.
It’s because the model isn’t specified correctly. I was being nice earlier, but the model doesn’t make sense.
I’ve described how to use a GP on time series data above, with the time data being one of the X dimensions. I’m happy to show you how to fit an informative GP regression model, but it would be helpful to me if you’d describe your data for me, as I’d like to know more about it before I estimate a model.
Thanks, this is why I wanted to get an opinion from experts. And I appreciate your help. What is your opinion about a GP with errors in variables?
Relative year - is the year from when experiment starts: 0-5
DBH - proxy for tree height in inches
There are two types of treatment: T and X.
DistU - distance to nearest untreated tree (in m)
DistT - same to treated tree with T
DensU - # of untreated trees in the 100m radius
DensT - same with treated trees with T
DistX and DensX are as above for trees treated with X.
EBYear - relative year of treatment with X.
I am really happy with the progress.
EDIT: I am trying to run gp_regression_ard.stan and am stuck on gp_exp_quad_cov. I don’t have it in rstan 2.19.2. It is not in cmdstan 2.21.0 either. Replaced with GP1.stan (944 Bytes). Will post results.
Thanks I’ll get back with you in a week or so.
If you have a separate term for the observation error, then it probably doesn’t make sense to have a GP with a nugget, because the nugget does the same thing as the observation error. At least if you have a linear link function and a Gaussian error distribution. In models where this isn’t the case, such as a Poisson model, the effects are still somewhat similar, but not identical. Therefore an error term and a nugget together can make the model unidentifiable or very difficult to sample from, depending on the model structure.
Not sure what you mean.
I meant that parameters in AR(1) may be modeled as GP. Just a thought…
Why ?
|
# Cartesian product
The Cartesian product of two sets A and B is the set (usually denoted by A × B) consisting of all ordered pairs (a, b) where $a \in A$ and $b \in B$. The Cartesian product is named after René Descartes, whose formulation of analytic geometry gave rise to the concept.
A common example of a Cartesian product is the coordinate plane, $\mathbb{R}^2 = \mathbb{R} \times \mathbb{R}$: the set of all ordered pairs (a, b) of real numbers a and b. It is used for graphing functions on the real numbers.
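The definition translates directly into code; a short Python illustration:

```python
from itertools import product

A = {1, 2}
B = {"x", "y", "z"}

# all ordered pairs (a, b) with a in A and b in B
pairs = set(product(A, B))

print(len(pairs))          # |A x B| = |A| * |B| = 6
print((1, "x") in pairs)   # True
print(("x", 1) in pairs)   # False: pairs are ordered, so A x B != B x A
```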
|
# nLab Cite — HoTT in Bonn2018
### Overview
We recommend the following .bib file entries for citing the current version of the page HoTT in Bonn2018. The first is to be used if one does not have unicode support, which is likely the case if one is using bibtex. The second can be used if one has unicode support. If there are no non-ascii characters in the page name, then the two entries are the same.
In either case, the hyperref package needs to have been imported in one's tex (or sty) file. There are no other dependencies.
The author field has been chosen so that the reference appears in the 'alpha' citation style. Feel free to adjust this.
### Bib entry — Ascii
@misc{nlab:hott_in_bonn2018,
author = {{nLab authors}},
title = {{{H}}o{{T}}{{T}} in {{B}}onn2018},
howpublished = {\url{http://ncatlab.org/nlab/show/HoTT%20in%20Bonn2018}},
note = {\href{http://ncatlab.org/nlab/revision/HoTT%20in%20Bonn2018/2}{Revision 2}},
month = aug,
year = 2022
}
### Bib entry — Unicode
@misc{nlab:hott_in_bonn2018,
author = {{nLab authors}},
title = {{{H}}o{{T}}{{T}} in {{B}}onn2018},
howpublished = {\url{http://ncatlab.org/nlab/show/HoTT%20in%20Bonn2018}},
note = {\href{http://ncatlab.org/nlab/revision/HoTT%20in%20Bonn2018/2}{Revision 2}},
month = aug,
year = 2022
}
### Problems?
Please report any problems with the .bib entries at the nForum.
|
# Index of refraction, position dependent
1. Feb 3, 2013
### stripes
1. The problem statement, all variables and given/known data
We know, for constant n,
$\frac{\partial L}{\partial y} = \frac{d}{dx} (\frac{\partial L}{\partial y'}),$
where we define L as being the optical light path length, and y' is the derivative of y with respect to x, and
$L = L(y, y') = n \sqrt{1 + y' ^{2}}.$
However in a more complex medium, we can allow the index of refraction to vary as a function of position. This means that we can write n = n(y), and remember that y = y(x), so the index depends on position. With this idea, answer the following:
(a) Show that if n is position dependent, it must obey its own differential equation and will have the solution (where n0 is a constant)
$n = n_{0} e ^{\int \frac{y''}{1+y'^{2} } dy}.$
You will first need to derive the differential equation for n, and then solve it to show that this is the solution. Notice that for a straight line (y = mx + b), we know that y'' = 0, so n must be constant.
(b) Light is seen to take an exponential path in a particular medium, so that $y(x) = y_{0} e ^{-\alpha x},$ where $\alpha$ and y0 are constants. Find the position dependence of the index of refraction n(y) which causes light to take this path.
(c) Substitute your expression for n(y) back into the original equation $\frac{\partial L}{\partial y} = \frac{d}{dx} (\frac{\partial L}{\partial y'})$ and show that the function y(x) must satisfy the differential equation
$y''(1 + \alpha^{2} y^{2}) - \alpha^{2}y(1 + y'^{2}) = 0$. The question then says "obviously this is NOT an easy equation to solve, but show that the exponential form of y(x) used in part (b) does indeed solve this equation."
2. Relevant equations
--
3. The attempt at a solution
I have no idea how to start part (a). The question says "it's own differential equation"--what is that supposed to mean? The question wants me to "derive the differential equation for n", but where would I start?
In part (b), it seems as simple as taking the function $y(x) = y_{0} e ^{-\alpha x},$ and plugging it into the equation $n = n_{0} e ^{\int \frac{y''}{1+y'^{2} } dy}.$ Of course, the latter expression is integrated with respect to y, and I'm plugging in an expression that is a function of x, so is this not the way to approach this problem?
I won't even think about part (c) until I can at least figure out how to start parts (a) and (b). I just posted it so I wouldn't have to at a later point.
If someone could guide me in the right direction, I would really appreciate it. I am very lost and this is the first question on my assignment.
2. Feb 3, 2013
### TSny
Try substituting the given form of L into the Euler-Lagrange equation. What expression do you get for $\frac{\partial L}{\partial y}$?
What do you get for $\frac{\partial L}{\partial y'}$?
Finally, what do you get for $\frac{d}{dx} (\frac{\partial L}{\partial y'})$
3. Feb 3, 2013
### stripes
I have
$\frac{\partial L}{\partial y} = \frac{d }{dx} (\frac{dn}{dy} ) \sqrt{1 + \frac{dy}{dx} ^{2} }$
and
$\frac{\partial L}{\partial y'} = \frac{n(y) \frac{dy}{dx}}{ \sqrt{1 + (\frac{dy}{dx}) ^{2} } }$
finally,
$\frac{d}{dx} (\frac{\partial L}{\partial y'}) = \frac{4(\frac{dn(y)}{dx} \frac{dy}{dx} + n(y) \frac{d^{2} y }{dx^{2} }) \sqrt{1 + \frac{dy}{dx} ^{2} } - \frac{2 \frac{dy}{dx} \frac{d^{2} y }{dx^{2} } }{ \sqrt{1 + \frac{dy}{dx} ^{2} } }}{4(1 + \frac{dy}{dx} ^{2})}.$
So we have
$\frac{d }{dx} (\frac{dn}{dy} ) \sqrt{1 + \frac{dy}{dx} ^{2} } = \frac{4(\frac{dn(y)}{dx} \frac{dy}{dx} + n(y) \frac{d^{2} y }{dx^{2} }) \sqrt{1 + \frac{dy}{dx} ^{2} } - \frac{2 \frac{dy}{dx} \frac{d^{2} y }{dx^{2} } }{ \sqrt{1 + \frac{dy}{dx} ^{2} } }}{4(1 + \frac{dy}{dx} ^{2})}.$
I highly doubt I have done anything correctly. I just keep telling myself that I need to make sure I take implicit derivatives because n is a function of y is a function of x. In the unlikely event that I am correct, I just solve for n(y), but since this is obviously wrong, where have I made my mistake?
4. Feb 3, 2013
### haruspex
I don't see where you get that d/dx from.
One or two problems with that last step. The sign seems to be wrong; the last term in the numerator is missing a couple of factors (n, y").
It will be a lot easier to read and write if you use y' for dy/dx etc.
5. Feb 3, 2013
### stripes
I got that d/dx from the Euler-Lagrange equation. I shouldn't have done that. It should be without the d/dx.
So if I do the derivatives correctly, I should be able to find an explicit function n'(y, n)? Then I need to solve the differential equation and show the solution is the one I originally posted, and then parts (a) and (b) should follow. I will try finding the partial derivatives again in a while but thank you for your help.
6. Feb 4, 2013
### stripes
So I have been going at it for a while. I just keep getting complicated expressions with x's and y's everywhere. Should the differential equation cancel things nicely? Or does it look messy the whole way through?
I get
$\frac{d}{dx} (\frac{\partial L}{\partial y'}) = \frac{dn}{dx} (\frac{y'}{\sqrt{1 + y'^{2}}}) + n(\frac{y''\sqrt{1 + y'^{2} - \frac{y'^{2}y''}{\sqrt{1 + y'^{2}}}}}{1 + y'^{2}})$
and I have canceled a few things here and there but I don't get end up with anything that looks pleasant.
Am I doing my derivatives correctly??
7. Feb 4, 2013
### TSny
Things will get messy for a while and then simplify quite a bit when you combine terms.
So, you have shown $\frac{\partial{L}}{\partial{y'}} = \frac{ny'}{\sqrt{1+y'^2}}= n\; y' \frac{1}{\sqrt{1+y'^2}}$
When you take the derivative of this with respect to $x$, you will use the product rule with three factors:
$\frac{dn}{dx}\; y' \frac{1}{\sqrt{1+y'^2}}+ n\; \frac{dy'}{dx} \frac{1}{\sqrt{1+y'^2}}+ n\; y'\; \frac{d}{dx}(1+y'^2)^{-1/2}$
Note that $n$ is a function of $y(x)$, so you will use the chain rule to evaluate $\frac{dn}{dx}$
8. Feb 4, 2013
### haruspex
I hope you mean $\frac{dn}{dx} (\frac{y'}{\sqrt{1 + y'^{2}}}) + n(\frac{y''\sqrt{1 + y'^{2}} - \frac{y'^{2}y''}{\sqrt{1 + y'^{2}}}}{1 + y'^{2}})$
After working out dn/dx, as TSny says, you'll see some cancellation.
9. Feb 5, 2013
### stripes
I have
$\frac{\partial L}{\partial y} = (\frac{dn}{dy} ) \sqrt{1 + y' ^{2} }$
and
$\frac{\partial{L}}{\partial{y'}} = \frac{ny'}{\sqrt{1+y'^2}}$
And finally,
$\frac{\partial L}{\partial y} = \frac{d}{dx} (\frac{\partial L}{\partial y'}).$
This means
$(\frac{dn}{dy} ) \sqrt{1 + y' ^{2} } = \frac{d}{dx} (\frac{ny'}{\sqrt{1+y'^2}}).$
So let's find $\frac{d}{dx} (\frac{ny'}{\sqrt{1+y'^2}})$.
Use the quotient and product rules:
$\frac{d}{dx} (\frac{ny'}{\sqrt{1+y'^2}}) = \frac{ \frac{d}{dx} (ny') ( \sqrt{1+y'^{2}} ) - \frac{d}{dx}(\sqrt{1+y'^2})(ny') } {1 + y'^{2} }$
$= \frac{ ( \frac{dn}{dx}y' + ny'')(\sqrt{1+y'^2}) - \frac{n y'^{2} y''}{\sqrt{1+y'^{2}}} }{1 + y'^{2}}$
$= \frac{ (\frac{dn}{dx} y' + ny'')(\sqrt{1+y'^2})}{1 + y'^{2}} - \frac{\frac{n y'^{2} y''}{\sqrt{1+y'^{2}}}}{1 + y'^{2}}$
$= \frac{ ( \frac{dn}{dx}y' + ny'')(\sqrt{1+y'^2})}{1 + y'^{2}} - \frac{n y'^{2} y''}{ \sqrt{1+y'^{2}} (1 + y'^{2}) }$
which means
$\frac{dn}{dy} = \frac{( \frac{dn}{dx}y' + ny'')}{1 + y'^{2}} - \frac{n y'^{2} y''}{ (1+y'^{2}) (1 + y'^{2}) }$
Now where do I go from here?
10. Feb 5, 2013
### TSny
Find least common denominator of right hand side and combine into one fraction.
You will also need to write out dn/dx using chain rule to express it in terms of dn/dy.
11. Feb 5, 2013
### stripes
So we have
$\frac{dn}{dy} = \frac{\frac{ny''}{(1+y'^{2})}}{1 - \frac{y'^{2}}{(1+y'^{2})^{2}} - \frac{y'^{4}}{(1+y'^{2})^{2}}}$
no way I was interested in doing the algebra, so I used Mathematica to simplify it; it would have taken me all night,
$\frac{dn}{dy} = \frac{n y''}{1 + y'^{2}}$
and I'm not sure how to solve that...
Last edited: Feb 5, 2013
12. Feb 5, 2013
### stripes
Nope I think I did something wrong. This is absolutely ridiculous, if I have to keep track of all this stuff then I might as well not do the question because I have more assignments due tomorrow and this is only one question.
13. Feb 5, 2013
### TSny
Rearrange as
$\frac{dn}{n} = \frac{y''dy}{1 + y'^{2}}$ and integrate both sides.
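A quick numerical check (with illustrative constants y0 = 2, α = 0.7) confirms that the exponential path satisfies part (c)'s equation, and that the candidate index n(y) ∝ √(1 + y'²) obtained from this integration satisfies dn/dy = n y''/(1 + y'²) along the path:

```python
import math

y0, alpha = 2.0, 0.7  # illustrative constants

def y(x):   return y0 * math.exp(-alpha * x)
def yp(x):  return -alpha * y(x)            # y'
def ypp(x): return alpha**2 * y(x)          # y''
def n(x):   return math.sqrt(1 + yp(x)**2)  # candidate n along the path

h = 1e-6
for x in (0.0, 0.5, 1.3):
    # part (c)'s ODE: y''(1 + a^2 y^2) - a^2 y (1 + y'^2) = 0
    ode = ypp(x) * (1 + alpha**2 * y(x)**2) - alpha**2 * y(x) * (1 + yp(x)**2)
    assert abs(ode) < 1e-12
    # dn/dy = n y''/(1 + y'^2); along the path dn/dx = (dn/dy) y'
    dndx = (n(x + h) - n(x - h)) / (2 * h)
    assert abs(dndx - n(x) * ypp(x) / (1 + yp(x)**2) * yp(x)) < 1e-6
```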
|
### Formalizing a Proof that e is Transcendental
Bingham, Jesse; Intel Corporation
We describe a HOL Light formalization of Hermite's proof that the base of the natural logarithm e is transcendental. This is the first time a proof of this fact has been formalized in a theorem prover.
|
Research Article
2019, 4(1), Article No: 01
Influence of Teachers Preparedness on Performance of Pupils in Mathematics in Lower Primary Schools in Aberdares Region of Kenya
Published in Forthcoming Articles section: 31 Oct 2018
Published in Volume 4 Issue 1: 29 Apr 2019
Abstract
Performance in Mathematics among pupils in lower primary schools in Kenya is a problem that continues to be a concern to parents, teachers and stakeholders in education. Teacher-related factors, and in particular teacher preparedness, have been cited as a major contributing factor to poor teaching methods, which fundamentally translates to pupils’ poor performance. The purpose of the study was to evaluate the influence of teacher preparedness on pupils’ performance in Mathematics in lower primary schools in the Aberdares region of Kenya. The objectives of the study were to evaluate the influence of teachers’ preparation of lesson plans on pupils’ performance in Mathematics in lower primary schools and to assess the influence of teachers’ preparation of schemes of work on pupils’ performance in Mathematics in lower primary schools from the Aberdares region in Kenya. The following hypotheses were tested; Ho1: There is no statistically significant relationship between teachers’ preparation of lesson plans and performance in Mathematics among pupils in lower primary schools, Ho2: There is no statistically significant relationship between teachers’ preparation of schemes of work and pupils’ performance in Mathematics in lower primary schools. The study adopted the descriptive survey research design. The study was guided by the Social Constructivism Theory (SCT) advanced by Vygotsky (1978). The target population for the study consisted of all the 385 teachers and 1320 pupils in the public primary schools in the Aberdares region of Kenya. A sample of schools was selected using Gay’s 10-20% sampling principle, which yielded a sample size of 77 teachers and 264 pupils. Data for the study were collected using questionnaires administered to the respondents. The t-test statistic was computed to test the hypothesis which stated that there was no statistically significant relationship between teachers’ preparation of lessons and pupils’ performance in Mathematics.
The t-test yielded a p-value = 0.027, which was less than the α-value of 0.05, hence the hypothesis was rejected. It was concluded that there were differences in pupils’ performance in Mathematics depending on teacher preparation of lessons. Regarding the preparation of schemes of work, the computed t-test statistic yielded a p-value = .039 which was less than the α-value of .05. Therefore the null hypothesis was accepted. It was concluded that the pupils’ Mathematics mean scores were relatively the same regardless of whether the teacher prepared schemes of work or not. It is recommended that teachers institutionalize, as a best practice, the preparation of professional documents before commencement of teaching.
INTRODUCTION
Stakeholders continue to be concerned with the performance of pupils in Mathematics, especially in the lower primary section of education in Kenya. Performance in Mathematics among pupils in lower primary schools in Kenya has been reported to be extremely wanting. Uwezo Kenya (2016) in their report entitled, “Are our children learning?” indicated that only 3 out of 10 children in Class 3 could do Class 2 work. On average, 1 out of 10 children in Kenyan primary schools were completing Class 8 without having acquired the basic competencies expected of a child completing Class 2. The persistent poor performance of pupils in Mathematics calls for investigation into the underlying variables and in particular into curriculum delivery methodologies. In the 2018 KCPE results, the mean score in the region was 35%. Oketch, Mutisya, Ngware and Sagwe (2010) assert that the competence of learners in numeracy and literacy in early grades affects their mastery of other aspects of the curriculum. Studies by Mtitu (2014) and Gurney (2007) identified teacher preparedness as a crucial dimension that could help improve learners’ performance in Mathematics. According to Gurney (2007), teaching is effective when teachers deliver the right content and have enough learning materials for the teaching activity. Mtitu (2014) identified that learner centered methods require teachers to actively involve students in the teaching and learning process. This requires a teacher to have passion in sharing knowledge with students while armed with appropriate tools and competencies in content delivery. Rowan and Ball (2005) state that teacher training is an important prerequisite in preparation for teaching; it involves activities such as collection of materials required for the lesson, lesson planning and assessment during the lesson, adding that teacher preparation is central to the work of teaching and the functioning of an education system.
Hill, Rowan and Ball (2000) argue that teacher preparedness for teaching has been identified as among the most critical factors that contribute to teachers’ work performance, absenteeism, burnout, and turnover, in addition to having a significant influence on learners’ academic achievement. Therefore teachers who prepare adequately for Mathematics lessons are able to deliver Mathematics concepts to learners effectively and in a style that promotes understanding and internalization of the taught content. In agreement with this view, Wilson, Floden and Ferrini-Mundy (2012) note that teachers’ professional preparedness also encompasses the relationship that teachers have established with the learners. If the teacher has a strained connection with learners, then the lesson delivery will be poor due to the emotional distance between the teacher and learners. According to Wilson, Floden and Ferrini-Mundy (2012), teacher preparedness is even broader and encompasses the quality of their relationships with learners, fellow teachers and other school employees, specifically, the extent to which they enjoy mutual support in managing classroom instruction and interpersonal relationships in the workplace. Consequently, teachers’ preparation for teaching would require assistance from colleagues and all other stakeholders in education. Therefore when there are strained relationships in the school, the teachers’ lesson preparation may be hindered.
Bass and Ball (2000) state that research on teaching in Mathematics suggests that many teachers do not possess the prerequisite content knowledge to implement high-quality instruction. The logic underlying Bass and Ball (2000) and Kilpatrick, Swafford and Findell (2001) was that teachers who possess strong mathematical knowledge at a greater depth and span are more likely to foster students’ ability to reason, conjecture and problem-solve. They are able to more accurately diagnose and address students’ mathematical (mis)conceptions and computational (dys)fluencies. Kilpatrick, Swafford and Findell (2001) argued that teachers must deepen their knowledge of the content, including proper sequencing and closure of the topics as well as the topics that precede and follow them. Rivkin, Hanushek and Kain (2005) were of the view that central to raising student achievement in Mathematics is improving the quality of Mathematics teaching. Students who receive high-quality instruction experience greater and more persistent achievement gains than their peers who receive lower-quality instruction. They hold that students who were taught by highly effective teachers achieved a gain of 1.5 grade equivalents during a single academic year, whereas students enrolled in classes taught by ineffective teachers gained only 0.5 grade equivalents in the same year. Moreover, the effects of high-quality instruction on the academic achievement of disadvantaged students are substantial enough to counteract the host of familial and social conditions often found to impede student achievement (Rivkin, Hanushek & Kain, 2005). To put it differently, teachers are critical determinants of student learning and educational progress and thus must be well trained to use effective teaching practices. The literature discussed clearly shows that teachers’ preparation affects the performance of learners in the Mathematics subject and other subjects.
Hill, Ball and Schilling (2008) posited that knowledge about content delivery methods in Mathematics differs in important ways from the content knowledge possessed by professionals in the same discipline. Ball, Lubienski and Mewborn (2001) report that mathematics teachers must be proficient in not only the content, but also how to deliver the same to the students. Moreover, teachers must understand how students reason and employ strategies for solving mathematical problems, and how students apply or generalize problem-solving methods to various mathematical contexts; this includes the use of language, the construction of metaphors and scenarios appropriate to teaching mathematical concepts, and an understanding of the use of instructional resources in the practice of teaching. Competency in the content coupled with the proper application of pedagogical skills constitutes a knowledge base for effective teaching of mathematics. These understandings represent specialized content knowledge and preparedness. Isiugo-Abanihe, Ifeoma and Tandi (2010) emphasized that the responsibility of checking professional documents like teachers’ schemes of work and lesson plans lies in the hands of the head teacher. Preparation and use of schemes of work by teachers enhances sequential teaching and results in improved achievement. Isiugo-Abanihe, Ifeoma and Tandi (2010) indicated that the head teachers randomly checked the teachers’ schemes of work only once a term. They argued that lack of regular and close monitoring could be a factor contributing to poor performance in national examinations, particularly in Mathematics. The studies have shown that there is a general consensus that teacher professional preparation contributes to the academic performance of learners. It is therefore necessary that the school administration and the teachers ensure that they prepare in advance for teaching and learning to be effective.
Studies demonstrate that teacher preparation plays an important role in ensuring that learners attain better learning outcomes in education. The research reviewed attests that better-prepared teachers tend to post good grades in national examinations.
Statement of the Problem
The influence of teacher preparedness on learners’ performance in Mathematics in lower primary schools is not clearly documented. Attainment of numeracy by learners in lower primary school lays an important foundation for future learning, particularly in Mathematics and the Sciences (Makewa, Role, Too & Kiplagat, 2012). However, reports by the Kenya National Examinations Council (KNEC) reveal that pupils’ performance in Mathematics continues to decline every year. The Aberdares region in Kenya has continued to post poor performance in Mathematics among lower primary school learners (SCEO, 2015), and the sub-county education office has consistently indicated that pupils are not acquiring the desired levels of competence. Although several factors may hinder the learning of Mathematics, there is limited literature on teacher preparedness, which is a key factor in pupils’ performance. This study sought to address this gap by evaluating the influence of teacher preparedness on learners’ performance in Mathematics.
Purpose of the Study
The purpose of this study was to evaluate the influence of teacher preparedness on pupils’ performance in Mathematics in lower primary schools in the Aberdares region of Kenya.
Objectives
The study was guided by the following objectives:
1. Evaluate the influence of teachers’ preparation of lesson plans on pupils’ performance in Mathematics in lower primary schools from the Aberdares region in Kenya.
2. Assess the influence of teachers’ preparation of schemes of work on performance in Mathematics among pupils in lower primary schools in Aberdares region in Kenya.
Hypotheses
The study tested the following hypotheses:
Ho1: There is no statistically significant relationship between teachers’ preparation of lesson plans and performance in Mathematics among pupils in lower primary schools.
Ho2: There is no statistically significant relationship between teachers’ preparation of schemes of work and performance in Mathematics among pupils in lower primary schools.
Theoretical Framework
This study was guided by social constructivism theory (SCT), advanced by Vygotsky (1978). SCT holds that all cognitive functions originate in, and must be explained as products of, social interactions. The theory explains that learning is not simply the assimilation and accommodation of new knowledge by learners but a process by which learners are integrated into a knowledge community. It stresses that learning takes place within school environments where interaction among the learners, the learning environment and the teachers ensures that learning takes place. The theory is relevant to the study because it helped to holistically analyze the variables at play during the teaching and learning processes. Constructivism theory was instrumental in analyzing the mediational role of the teacher in integrating the subject matter, the learning environment and the learner through instructional preparedness, ensuring realization of the desired learning outcomes in Mathematics.
METHODOLOGY
The study adopted the descriptive survey research design, which enabled evaluation of the variables by obtaining facts and opinions without manipulating them, describing and reporting the situation as it was. The target population for the study consisted of all the 385 teachers and 1320 pupils in the public primary schools in the Aberdares region of Kenya.
A sample of schools was selected using Gay’s 10–20% sampling principle, which yielded a sample size of 77 teachers and 264 pupils (see Table 1). Data were analyzed using both descriptive and inferential statistics.
Table 1. Sample size
| Data Set | Population | Sample Size | Percentage |
|---|---|---|---|
| Teachers | 385 | 77 | 20% |
| Lower Primary Pupils | 1320 | 264 | 20% |
| Totals | 1705 | 341 | 20% |
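The sampling arithmetic behind Table 1 can be sketched in a few lines. This is our illustration, not the authors’ procedure: it assumes the study simply took 20% of each stratum, per Gay’s 10–20% rule of thumb, and the helper name `gay_sample` is hypothetical.

```python
# Illustrative sketch (ours, not the paper's): Gay's rule of thumb holds that
# a 10-20% sample of a population is adequate for a descriptive survey.
# Here we take the upper bound, 20%, for each stratum in Table 1.
def gay_sample(population: int, fraction: float = 0.20) -> int:
    """Sample size for one stratum, rounded to the nearest whole respondent."""
    return round(population * fraction)

teachers = gay_sample(385)    # teachers stratum
pupils = gay_sample(1320)     # lower-primary pupils stratum
print(teachers, pupils, teachers + pupils)
```

Note that 77 + 264 = 341 respondents overall, which is exactly 20% of the combined population of 1705.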
RESULTS AND DISCUSSION
The results and discussion are presented in accordance with the stated objectives and hypotheses that guided the study. These were:
1. The first research objective sought to examine the influence of teachers’ preparation of lesson plans on pupils’ performance in Mathematics in primary schools in Aberdares Region in Kenya. The study tabulated the pupils’ mean scores in Mathematics in relation to teachers’ preparation of Mathematics lesson plans. The results presented in Table 2 revealed that 23 (29.9%) of the teachers did not prepare lesson plans compared to 54 (70.1%) who did. The findings further established that the overall mean score of the learners’ performance in Mathematics was 59.8 with a standard deviation of 6.83, indicating that pupils had an average level of competence in Mathematics. The results further revealed that pupils whose teachers prepared lesson plans performed better ($$\overline{x} = \ 63.9$$, standard deviation 6.31) than those whose teachers did not ($$\overline{x} = \ 57.3$$, standard deviation 8.01). The results suggest that teachers’ preparation of lesson plans was reflected in higher scores in Mathematics among the pupils. The findings are consistent with Hill, Rowan and Ball (2005), who reported that teacher preparation of lessons is critical in the attainment of the appropriate competencies by learners, and who argue that teacher preparation and commitment to teaching are amongst the most critical factors in the success and future of education.
Table 2. Lesson plans and pupils’ performance in Mathematics
| Prepared Lesson Plans | Frequency | Mathematics Mean ($$\overline{x}$$) | Standard Deviation (s) |
|---|---|---|---|
| Yes | 54 (70.1%) | 63.9 | 6.31 |
| No | 23 (29.9%) | 57.3 | 8.01 |
| Total | 77 (100%) | 59.8 | 6.83 |
It was hypothesized that there was no statistically significant relationship between teachers’ preparation of lesson plans and pupils’ performance in Mathematics in lower primary schools in Aberdares region in Kenya. To test the hypothesis, a t-test statistic was computed. The computed t-test yielded a p-value of 0.027, which was less than the α-value of 0.05 (see Table 3). The null hypothesis was rejected and it was concluded that there was a statistically significant difference in mean scores between pupils in schools where teachers prepared lesson plans and schools where teachers did not. Teachers’ preparation of lesson plans had a positive impact on pupils’ acquisition of Mathematical competence. The study agreed with the findings of Armstrong, Henson and Savage (2009), who opined that teachers who planned their lessons with learners’ mental abilities in mind were likely to foster learning. Armstrong et al. (2009) argued that while teaching, the teacher should treat the content to be taught by first identifying the desired results from learning of the content; secondly, breaking the content into smaller components or sub-tasks that logically build towards the desired results; and finally, adopting appropriate teaching approaches for each of the components together with specifying the lesson objectives in relation to the grades where the learning will take place. Hence, the teaching and learning process involves meticulous treatment and preparation to ensure attainment of desired learning outcomes by the learner.
Table 3. Results of t-test on teachers’ preparation of lesson plans and pupils’ mean scores in Mathematics

| | Levene’s F | Sig. | t | df | Sig. (2-tailed) | Mean difference | Std. error difference | 95% CI lower | 95% CI upper |
|---|---|---|---|---|---|---|---|---|---|
| Equal variances assumed | .268 | .609 | -2.565 | 28 | .027 | -.58886 | .22844 | -1.0538 | -.1179 |
| Equal variances not assumed | | | -2.473 | 11.680 | .030 | -.58886 | .23694 | -1.1037 | -.0680 |
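For readers who want to check the mechanics, t statistics of the kind reported in Table 3 can be recomputed from summary statistics alone. The sketch below is ours, not the authors’ analysis script: it derives both the pooled-variance (equal variances assumed) and Welch (equal variances not assumed) statistics from the group summaries in Table 2. Because Table 3 reports df = 28, the published test evidently ran on a smaller subsample than the 77 teachers summarized in Table 2, so these figures illustrate the method rather than reproduce the reported values.

```python
import math

def t_from_summary(n1, mean1, sd1, n2, mean2, sd2):
    """Independent-samples t statistics from per-group summary statistics."""
    # Pooled-variance version (the "equal variances assumed" row).
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    t_pooled = (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    df_pooled = n1 + n2 - 2

    # Welch version (the "equal variances not assumed" row); Levene's test
    # decides which row to read: a significant Levene p-value means use Welch.
    v1, v2 = sd1**2 / n1, sd2**2 / n2
    t_welch = (mean1 - mean2) / math.sqrt(v1 + v2)
    df_welch = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t_pooled, df_pooled, t_welch, df_welch

# Group summaries from Table 2: prepared lesson plans (n=54) vs. not (n=23).
t_p, df_p, t_w, df_w = t_from_summary(54, 63.9, 6.31, 23, 57.3, 8.01)
print(f"pooled: t = {t_p:.3f}, df = {df_p}")
print(f"welch:  t = {t_w:.3f}, df = {df_w:.1f}")
```

The p-value is then looked up in a t distribution with the computed degrees of freedom (e.g. from a statistical table or `scipy.stats`), which the standard library does not provide.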
2. The second objective evaluated the influence of teachers’ preparation of schemes of work on pupils’ performance in Mathematics in lower primary schools in Aberdares Region in Kenya. The study tested the hypothesis that there was no statistically significant relationship between teachers’ preparation of schemes of work and pupils’ performance in Mathematics in lower primary schools. The findings presented in Table 4 indicate that Levene’s test was significant (p = .039 < α = .05), so equal variances were not assumed; the corresponding t-test returned a p-value of .386, which was greater than α = .05. The null hypothesis was therefore not rejected. The conclusion was that there was no statistically significant difference in the mean scores: performance in Mathematics was relatively similar regardless of whether the teacher prepared schemes of work or not.
The results diverged from the findings of Kilpatrick, Swafford and Findell (2001), who had argued that teacher preparation was statistically related to learners’ academic performance in Mathematics. The findings in the current study could be attributed to teachers’ experience in teaching Mathematics: demographic data showed that the majority of the teachers (over 80%) had more than 5 years of experience teaching Mathematics. This suggests that teacher experience has an influence on the attainment of learners’ competencies in a particular subject. However, more research is required before an authoritative conclusion can be drawn.
Table 4. Results of t-test on teachers’ preparation of schemes of work and pupils’ performance in Mathematics

| | Levene’s F | Sig. | t | df | Sig. (2-tailed) | Mean difference | Std. error difference | 95% CI lower | 95% CI upper |
|---|---|---|---|---|---|---|---|---|---|
| Equal variances assumed | 6.056 | .039 | .972 | 8 | .359 | 4.000 | 4.113 | -5.486 | 13.486 |
| Equal variances not assumed | | | .972 | 4.023 | .386 | 4.000 | 4.113 | -7.389 | 15.389 |
CONCLUSION
The findings of the study indicate that teacher preparedness, as indicated by preparation of lesson plans, had an influence on pupils’ performance in Mathematics in lower primary school: there were statistically significant differences between pupils’ mean scores for schools where teachers prepared lesson plans and those where they did not. However, the study established no statistically significant difference in pupils’ performance in relation to teachers’ preparation of schemes of work.
RECOMMENDATIONS
Arising from the findings of this study, we recommend that teachers in lower primary schools always prepare for their lessons before commencement of teaching. The Ministry of Education should emphasize that teachers must prepare the prerequisite professional documents that are instrumental in enhancing learning outcomes, especially in Mathematics in the lower primary segment of education. Teachers who fail to comply with this requirement should be censured.
• Akatekit, D. D. (2000). An evaluation of teachers’ methods in P310/3 (Novels). Dissertation submitted for the degree of Master of Education at Makerere University. Kampala
• Armstrong, D. G., Henson, K. T. and Savage, T. V. (2009). Teaching today: An introduction to education (8th ed.). Upper Saddle River, New Jersey, OH: Pearson.
• Ball, D. L. and Bass, H. (2000). Interweaving content and pedagogy in teaching and learning to teach: Knowing and using mathematics. In J. Boaler (Ed.), Multiple perspectives on the teaching and learning of mathematics (pp. 83–104). Westport, CT: Ablex.
• Gay, L. R. (2006). Educational Research: Competencies for analysis and application (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.
• Gurney, P. (2007). Five Factors for Effective Teaching. New Zealand Journal of Teachers’ Work, 4(2), 89-98.
• Hill, H. C., Ball, D. L. and Schilling, S. G. (2008). Unpacking pedagogical content knowledge: Conceptualizing and measuring teachers’ topic-specific knowledge of students. Journal for Research in Mathematics Education, 39(4), 372–400.
• Hill, H. C., Rowan, B. and Ball, D. L. (2005). Effects of teachers’ mathematical knowledge for teaching on student achievement. American Educational Research Journal, 42(2), 371-406. https://doi.org/10.3102/00028312042002371
• Isiugo-Abanihe, M., Ifeoma, L. and Tandi, I. (2010). Evaluation of the Methodology Aspect of Science Teacher Education Curriculum in Nigeria. Pakistan Journal of Social Sciences, 17(2), 170-176. Available at: http://www.medwelljournals.com/fulltext/?doi=pjSsci.2010.170.170
• Kilpatrick, J., Swafford, J. and Findell, B. (Eds.). (2001). Adding it up: Helping children learn mathematics. Washington DC: National Academies Press.
• Makewa, L. N., Role, E., Too, J. K. and Kiplagat, P. (2012). Teacher commitment and mathematics performance in primary schools: A meeting point!. International Journal of Development and Sustainability, 1(2), 286–304.
• Mtitu, E. A. (2014). Learner-centred teaching in Tanzania: Geography teachers’ perceptions and experiences. Victoria University of Wellington.
• Oketch, M., Mutisya, M., Ngware, M. and Sagwe, J. (2010). Teacher math test scores: Classroom teaching practices and student achievement in Kenya. Working paper, African Population and Health Research Center (APHRC), Shelter Afrique Center, Nairobi.
• Rivkin, S. G., Hanushek, E. A. and Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417–458. https://doi.org/10.1111/j.1468-0262.2005.00584.x
• Uwezo (2016). Are Our Children Learning? Uwezo Kenya Sixth Learning Assessment Report. Nairobi: Twaweza East Africa.
• Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
• Wilson, S. M., Floden, R. E. and Ferrini-Mundy, J. (2002). Teacher preparation research: An insider’s view from the outside. Journal of Teacher Education, 53(3), 190-204. https://doi.org/10.1177/0022487102053003002
AMA 10th edition
In-text citation: (1), (2), (3), etc.
Reference: Kariuki LW, Njoka JN, Mbugua ZK. Influence of Teachers Preparedness on Performance of Pupils in Mathematics in Lower Primary Schools in Aberdares Region of Kenya. European Journal of STEM Education. 2019;4(1), 01. https://doi.org/10.20897/ejsteme/3931
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Authors: Sabin, Carlos
Contribution: Article
Journal: UNIVERSE
Publication date: 2020/09/09
DOI: 10.3390/universe6090149
Abstract: We propose an experimental setup to test the effect of curved spacetime upon the extraction of entanglement from the quantum field vacuum to a pair of two-level systems. We consider two superconducting qubits coupled to a dc-SQUID array embedded into an open microwave transmission line, where an external bias can emulate a spacetime containing a traversable wormhole. We find that the amount of vacuum entanglement that can be extracted by the qubits depends on the wormhole parameters. At some distances qubits which would be in a separable state in flat spacetime would become entangled due to the analogue wormhole background.
# Secrets and women
1. Dec 23, 2009
### drizzle
:/
That reminds me of a joke:
The three fastest ways of communication in the world: telephone, television, and tell a woman. Still need a faster way? ..... Tell her not to tell anyone!
Seriously, what do you think?
2. Dec 23, 2009
### hypatia
I would tend to agree; women share more gossip than they think. Even I have talked to another group of friends about something going on with the first group. It may be gossip, but it also allows me to vent to people who could give me another point of view.
I also never say anything about anyone that I would not say to their face. I know my fair share of men who gossip too. While it's not like women's gossip, it normally begins with "That A you should hear what he did to his {house, wife, kids, car}!".
If someone asks me not to tell, then I really will never tell. If I give my promise, I am good for it.
3. Dec 23, 2009
### Proton Soup
i was about 8 years old when i learned not to trust women.
4. Dec 23, 2009
### BobG
Just like in "The Dish", when mayor Bob McIntyre asks his wife if she can keep a secret and she giggles and replies, "Of course not!"
Men are different. Decades ago, I was working as a security guard at an Acme/Click (kind of like a Walmart) and perusing the shoplifting reports as entertainment. Entertaining indeed! There was one for my wife's sister!
In twenty-seven years, I've never told anyone about her shoplifting.
Doh! That was stupid. Doesn't matter, she's an ex sister-in-law anyway.
That's okay. I've got a different secret I've kept almost as long about, uh, .... well, maybe another time.
5. Dec 23, 2009
### drankin
Better to learn early than after you get married! :)
Women are so tempting to confide in; it's a trait they perfect as soon as they master their native language. The better they are at it, the more popular they are with the rest of the gossip spinners. If a woman can't develop some good gossip, she just doesn't get the rank in the feminine circles.
6. Dec 23, 2009
### leroyjenkens
I think men gossip a lot too. If they want to talk about another guy who isn't there, they'll say what they need to say and be done with it. Women will talk and talk about that person until they don't have anything left to say about them, then they'll start making stuff up.
7. Dec 23, 2009
### BobG
Men never gossip! They just happen to run into good gossip diggers at the worst times.
8. Dec 23, 2009
### Pengwuino
I think one key difference is that guys will say things about other guys in their presence! Plus, as far as this article goes, guys don't need to divert to other "friendship groups" with secrets. This article is awesome, though it makes me wonder why someone needed to be paid to tell us this :).
Hmm, I just realized I probably shouldn't have my name attached to this post, lest Evo and Moonbear and the like find out.
9. Dec 23, 2009
### drankin
Can you imagine the gossip they have on us??
10. Dec 23, 2009
### BobG
Don't trust her!
11. Dec 23, 2009
### BobG
On you and Pengwino?
You should tell hypatia.
12. Dec 23, 2009
### Math Is Hard
Staff Emeritus
I think I would like to hear the drankin and Pengwuino gossip.
13. Dec 23, 2009
### Pengwuino
You'll never get any dirt on me!
14. Dec 23, 2009
### drizzle
15. Dec 23, 2009
### GeorginaS
16. Dec 23, 2009
### drankin
Me? Expert in feminine gossip psychology? I got nothing, this is my version of male gossip.
17. Dec 23, 2009
### Jimmy Snyder
I posted this once before:
Me: Can you keep a secret?
My wife: Yes.
Me: So can I.
My wife: You're a dead man Snyder.
18. Dec 24, 2009
### GeorginaS
So, then, it's actually men who gossip in order to gain rank and you were simply projecting?
19. Dec 24, 2009
### drankin
Huh? Sure, I'm "projecting" I guess. Everyone gossips, but that's not the topic of the thread. Women are much better at gossip. Much. :)
## Found 243 Documents (Results 1–100)
### Algorithms for computing the equilibrium of a compact torus with several magnetic axes. (English. Russian original)Zbl 0850.76512
Comput. Math. Model. 1, No. 1, 67-70 (1990); translation from Numerical mathematics and software of computers, Work Collect., Moskva 1985, 138-142 (1985).
MSC: 76M99 78A30
### Separatrix solutions of the nonlinear Schrödinger equation. (English. Russian original)Zbl 0754.35154
Sel. Math. Sov. 9, No. 3, 215-218 (1990); translation from Methods of the qualitative theory of differential equations, Gor’kij, 18-22 (1985).
MSC: 35Q55 76X05 78A10
### Inverse solutions via the WKB-Rytov method for the deterministic problem in electromagnetics. (English)Zbl 1177.78004
MSC: 78A25 35Q60
### Applied exterior calculus. (English)Zbl 1101.58301
Wiley-Interscience Publication. New York, NY: John Wiley & Sons (ISBN 0-471-80773-7). xix, 471 p. (1985).
### Two numerical methods for solving integral equations with a weak singularity. (English. Russian original)Zbl 0779.65088
J. Sov. Math. 58, No. 3, 202-208 (1992); translation from Vychisl. Prikl. Mat., Kiev 57, 12-20 (1985).
MSC: 65R20 78A15 45E10
### Plane waves in the magneto-thermoelasticity of uncompressible solids. (Italian)Zbl 0705.73192
Atti Ist. Veneto Sci. Lett. Arti, Cl. Sci. Fis. Mat. Nat. 144(1985-86), 149-158 (1986).
### Integro-differential equations of the second kind in problems of diffraction of electromagnetic waves on bodies of revolutions to the toroidal type. (Russian)Zbl 0676.65143
Application of computers to the solution of problems of mathematical physics, Collect. Sci. Works, Moscow, 68-73 (1985).
Reviewer: Jürgen Appell
MSC: 65R20 45K05 78A45
### On the numerical solution of problems of diffraction of non-stationary electromagnetic fields on open surfaces of revolution by the method of integro-differential equations. (Russian)Zbl 0676.65142
Application of computers to the solution of problems of mathematical physics, Collect. Sci. Works, Moscow, 57-67 (1985).
Reviewer: Jürgen Appell
MSC: 65R20 45K05 78A45
### The spline-collocation method for solution of some integral equations for problems of diffraction of electromagnetic waves. (Russian)Zbl 0676.65139
Application of computers to the solution of problems of mathematical physics, Collect. Sci. Works, Moscow, 43-56 (1985).
Reviewer: Jürgen Appell
MSC: 65R20 45E10 78A45
### On the numerical solution of the inverse problem of light dispersion on a ball. (Russian)Zbl 0676.65127
Application of computers to the solution of problems of mathematical physics, Collect. Sci. Works, Moscow, 30-35 (1985).
Reviewer: Jürgen Appell
### Numerical investigation of kinetics of generation of gamma-radiation in conditions of recrystallization of an active medium. (Russian)Zbl 0676.65126
Application of computers to the solution of problems of mathematical physics, Collect. Sci. Works, Moscow, 3-9 (1985).
Reviewer: Jürgen Appell
### New results in the direct and inverse electrocardiology problems. (Russian)Zbl 0671.92002
Methods of numerical mathematics and mathematical modelling, Mater. Int. Symp., Moskva 1985, 124-136 (1985).
MSC: 92C50 78A70
### On asymptotic optimization of modeling the radiation transport in a layer of substance with anisotropic dispersion. (Russian)Zbl 0668.65114
Theory and application of statistical modeling, Collect. Sci. works, 103-112 (1985).
### Linear and nonlinear problems of computer tomography. Collection of scientific works. (Линейные и нелинейные задачи вычислительной томографии. Сборник научных трудов.) (Russian)Zbl 0667.65001
Novosibirsk: Vychislitel’nyĭ Tsentr SO AN SSSR. 172 p. R. 0.70 (1985).
### Singularities, bifurcations and catastrophes. (Bulgarian. Russian original)Zbl 0662.58031
Fiz.-Mat. Spis. 27(60), 25-48 (1985); translation from Usp. Fiz. Nauk 141, No. 4, 569-590 (1983).
### On the design of numerical methods for modeling semiconductor devices. (English)Zbl 0659.65129
Computational mathematics I, Proc. 1st Int. Conf. Numer. Anal. Appl., Benin City/Niger. 1983, Conf. Ser., Boole Press 8, 19-24 (1985).
### On an inverse problem of Stefan type in the theory of electrocontact arcs for small values of time. (Russian)Zbl 0658.35089
Equations with discontinuous coefficients and their applications, Collect. Artic., Alma-Ata 1985, 34-43 (1985).
MSC: 35R30 35R35 78A25
### Existence and uniqueness theorem for an inverse problem in electrodynamics. (Russian)Zbl 0658.35088
Dynamics of non-homogeneous systems, Mater. Seminar., Moskva 1985, 148-153 (1985).
MSC: 35R30 35A05 78A25
### Mathematical models of phenomenological piezoelectricity. (English)Zbl 0656.73048
Mathematical models and methods in mechanics, Banach Cent. Publ. 15, 593-607 (1985).
MSC: 74F15 74A20 78A25
### Some inverse scattering problems of geophysics. (English)Zbl 0648.35081
Reviewer: T.R.Faulkner
### The energy criterium and the Maxwell-Cattaneo equation for thermodynamics of nonlinear electromagnetic systems. (Italian. English summary)Zbl 0647.35072
MSC: 35Q99 78A25
### The correction methods for the solution of a parabolic equation in an inhomogeneous wave guide. (Metod korrektsii resheniya parabolicheskogo uravneniya v neodnorodnom volnovode). (Russian)Zbl 0643.35002
Moskva: “Nauka”. 96 p. R. 1.00 (TIB: FM 0220) (1985).
Reviewer: P.Zabrejko
### A general problem on diffraction of a plane electromagnetic wave obliquely incident on a circular disk situated on the boundary of the division of two media. (Russian)Zbl 0641.35038
Reviewer: N.Mitskievich
MSC: 35L20 78A45 45B05
### On the potential distribution in a high vacuum diode. (English)Zbl 0636.35026
Reviewer: P.Renno
MSC: 35J65 78A35 35A05
### Two methods to solve numerically two-dimensional integral equations with weak singularity. (Russian)Zbl 0634.65136
MSC: 65R20 78A15 45E10
### Properties of transmission matrices for magneto-elastic waves in non- ferro-magnetic regularly layered media. (Russian)Zbl 0633.73109
MSC: 74F15 78A40
### Space structure control design by variance assignment. (English)Zbl 0632.93025
MSC: 93B50 62J10 93E20 78A55 93B55 93E25
### On the decrease for $$t \to \infty$$ of solutions of the Cauchy problem for the Maxwell system in a non-homogeneous medium. (Russian)Zbl 0631.35054
Qualitative analysis of solutions of partial differential equations, Collect. sci. Works, Novosibirsk 1985, 100-109 (1985).
Reviewer: B.Nowak
### Numerical methods for the solution of electrophysics problems. (Chislennye metody resheniya zadach ehlektrofiziki). (Russian)Zbl 0628.65118
Moskva: “Nauka”. Glavnaya Redaktsiya Fiziko-Matematicheskoj Literatury. 336 p. R. 3.80 (1985).
Reviewer: Ll.G.Chambers
### The variational principle of elastic equilibrium of the space-time continuum. (Russian)Zbl 0627.73014
Theoretical and applied questions of optimal control, Collect. Artic., Novosibirsk 1985, 100-161 (1985).
Reviewer: T.Atanacković
MSC: 74S30 76Y05 49S05 78A35
### Thermo- and electrophysical processes in closed contacts and their mathematical models. (Russian)Zbl 0626.35036
Applied problems of mathematical physics and functional analysis, Alma- Ata 1985, 93-101 (1985).
Reviewer: D.Ştefănescu
MSC: 35K05 78A35 80A20 78A55
### The method of normal waves in oscillation theory. (Metod normal’nykh voln v teorii kolebanij). (Russian)Zbl 0626.35001
Moskva: Izdatel’stvo Moskovskogo Universiteta. 80 p. R. 0.15 (1985).
Reviewer: T.Petrila
### A numerical method for the solution of the problem on electromagnetic coupling between two rectangular waveguides by a rectangular aperture in the lateral wall. (Russian)Zbl 0625.65128
Applied methods and programming in numerical analysis, Work Collect., Moskva 1985, 98-110 (1985).
### Numerical investigation of the accuracy and the convergence rate of two iterative methods for the solution of the self-coordinated Langmuir problem. (Russian)Zbl 0625.65120
Numerical mathematics and software of computers, Work Collect., Moskva 1985, 29-36 (1985).
### Analysis of microstrip lines. (Analiz mikropoloskovykh linij). (Russian. English summary)Zbl 0624.65136
Vil’nyus: “Mokslas”. 166 p. R. 2.00 (1985).
Reviewer: Ll.G.Chambers
### An application of geodesic modeling of second-order differential equations. (English. Russian original)Zbl 0624.58008
Math. Notes 38, 745-750 (1985); translation from Mat. Zametki 38, No. 3, 429-439 (1985).
Reviewer: F.Przytycki
MSC: 37J99 78A35 58E05
### Lattice-parametric optimization and stability of beam dynamics. (Strukturno-parametricheskaya optimizatsiya i ustojchivost’ dinamiki puchkov). (Russian)Zbl 0623.93064
Kiev: ”Naukova Dumka”. 304 p. R. 3.60 (1985).
Reviewer: V.Chernyatin
### Transmission line interpretation of the finite element mesh for a wave propagation problem. (English)Zbl 0623.65132
Reviewer: J.J.Telega
### On some microscopic models of ferromagnetism. (English)Zbl 0623.35010
MSC: 35A15 78A25 35D05 35Q99
### Stationary anode formation by a dihedral cathode in case of irregular anode polarization. (Russian)Zbl 0622.76023
Reviewer: E. Ihle
MSC: 76B99 78A55 78A35
### Approximate formulae for the solution of the stationary problem of electrochemical formation. (Russian)Zbl 0622.65122
Reviewer: E. Ihle
### The method of domain decomposition for the Helmholtz equation with a complex parameter. (Russian)Zbl 0617.65105
Methods of numerical and applied mathematics, Collect. sci. Works, Moskva 1985, 136-143 (1985).
### On uniqueness of the determination of a compactly supported function from the modulus of its Fourier transform. (English. Russian original)Zbl 0617.42005
Sov. Math., Dokl. 32, 668-670 (1985); translation from Dokl. Akad. Nauk SSSR 285, 278-280 (1985).
Reviewer: D.K.Ugulawa
MSC: 42B10 78A45
### Initial boundary value problems for coupled nerve fibres. (English)Zbl 0617.35062
Reviewer: E.Infeld
### Stability and convergence of time marching methods in scattering problems. (English)Zbl 0616.65146
Reviewer: E.V.Nicolau
MSC: 65R20 78A45 76Q05
### On the question of approximate realization for the method of domain decomposition. (Russian)Zbl 0616.65128
Methods of numerical and applied mathematics, Collect. sci. Works, Moskva 1985, 4-16 (1985).
### A block-relaxation method in a subspace for the calculation of the minimal eigenfrequency of a cavity resonator. (Russian)Zbl 0616.65124
Methods of numerical and applied mathematics, Collect. sci. Works, Moskva 1985, 30-50 (1985).
### Über Transmissionsprobleme bei der Vektoriellen Helmholtzgleichung. (German)Zbl 0613.35019
Mathematisch-Naturwissenschaftlicher Fachbereich der Georg-August-Universität. Göttingen. Diss. 105 S. (1985).
Reviewer: J.Donig
MSC: 35J05 35J55 78A25 35C15 35A05 35A35
### On Landau-Lifshitz’ equations for ferromagnetism. (English)Zbl 0613.35018
Reviewer: S.M.Zverev
### Estimating occurrence laws with maximum probability, and the transition to entropic estimators. (English)Zbl 0611.62150
Maximum-entropy and Bayesian methods in inverse problems, Pap. 2 Workshops, Laramie/Wyo. 1981/1982, Fundam. Theor. Phys. 14, 133-169 (1985).
### On a variational inequality with time dependent convex constraint for the Maxwell equations. II. (English)Zbl 0611.49002
Reviewer: E.Barron
### Constructing the solution to Robin’s equation. (English)Zbl 0611.45003
Reviewer: R.Kreß
MSC: 45E10 45L05 78A30
### Consistency of semiconductor modeling: An existence/stability analysis for the stationary van Roosbroeck system. (English)Zbl 0611.35026
MSC: 35J65 35Q99 78A35 35D05 35A35
### Light elements in space-time passing through a hypersurface in the configuration space. (Russian)Zbl 0609.35016
Reviewer: N.Jacob
### Numerical approximation of hysteresis problems. (English)Zbl 0608.65082
Reviewer: E.V.Nicolau
### Lagrangian and algebraical-theoretical analysis of a Dirac-like form of the Maxwell equations. (Russian)Zbl 0608.58026
Group-theoretical studies on equations of mathematical physics, Collect. sci. Works, Kiev 1985, 130-133 (1985).
Reviewer: G.Zet
MSC: 37J99 78A25
### Mixed problem concerning the diffraction of radiation from a horizontal magnetic dipole at a disk and an aperture in a plane screen in a nonuniform medium. (English. Russian original)Zbl 0608.35059
Differ. Equations 21, 562-568 (1985); translation from Differ. Uravn. 21, No. 5, 825-831 (1985).
Reviewer: M.Schneider
### A numerical study of nonlinear Alfvén waves and solitons. (English)Zbl 0607.76121
Reviewer: Shih Lung Yu
MSC: 76X05 78A40 76M99
### General problem of the diffraction of a plane electromagnetic wave obliquely incident on a circular disk in the boundary between two media. (English. Russian original)Zbl 0607.35059
Differ. Equations 21, 1427-1435 (1985); translation from Differ. Uravn. 21, No. 12, 2114-2124 (1985).
Reviewer: A.Jeffrey
MSC: 35L50 78A45 45B05
### Phenomenological and group symmetry in the geometry of two sets (theory of physical structures). (English. Russian original)Zbl 0606.70001
Sov. Math., Dokl. 32, 371-374 (1985); translation from Dokl. Akad. Nauk SSSR 284, 39-43 (1985).
MSC: 70A05 78A02 20F65
### Boundary value problems for a class of evolution equations with complex coefficients. (English. Russian original)Zbl 0606.35055
Sov. Math., Dokl. 32, 654-658 (1985); translation from Dokl. Akad. Nauk SSSR 285, 265-269 (1985).
Reviewer: N.Jacob
MSC: 35M99 35J10 78A05
### Equations of relativistic mechanics which are invariant with respect to the Poincaré group. (Russian)Zbl 0606.22014
Group-theoretical studies on equations of mathematical physics, Collect. sci. Works, Kiev 1985, 80-89 (1985).
Reviewer: A.Ciarkowski
### The method of quasi-homogeneous functions and the Fock problem. (Russian. English summary)Zbl 0605.35015
MSC: 35J10 78A45 35C05 35B45 35A05 35B65
### Propagation of magneto-elastic waves in a cubic ferromagnetic medium. (English)Zbl 0604.73115
MSC: 74F15 78A40
### Nonexistence of smooth electromagnetic fields in nonlinear dielectrics. I: Infinite cylindrical dielectrics. (English)Zbl 0604.35056
Reviewer: M.Idemen
MSC: 35L70 78A25 78A40 35A05 35B65
### On a method of numerical solution of singular integrodifferential equations. (English. Russian original)Zbl 0603.65097
Mosc. Univ. Comput. Math. Cybern. 1985, No. 2, 26-30 (1985); translation from Vestn. Mosk. Univ., Ser. XV 1985, No. 2, 23-27 (1985).
MSC: 65R20 45J05 78A45
### Offset measurements on a sphere at a fixed frequency do not determine the inhomogeneity uniquely. (English)Zbl 0603.45012
Reviewer: M.Idemen
MSC: 45H05 78A45
### Asymptotic estimates of the solution of a scattering problem on a cylinder with large perturbation. (Russian)Zbl 0603.35020
Reviewer: N.Jacob
### Application of multidimensional logarithmic residue to the theory of antenna arrays. (Russian. English, Bulgarian summaries)Zbl 0603.32003
MSC: 32A25 78A50
### On the structure of a fundamental solution of the Cauchy problem for the system of Maxwell’s equations. (English. Russian original)Zbl 0602.35097
Sov. Math., Dokl. 31, 385-388 (1985); translation from Dokl. Akad. Nauk SSSR 281, 1052-1055 (1985).
Reviewer: G.Tu
MSC: 35Q99 78A25 35A08 35D05
### Application of integral equations to the solution of Dirichlet type problems. (English)Zbl 0599.65084
Reviewer: S.Sburlan
### On the synthesis of antennas with planar aperture. (Russian. English summary)Zbl 0598.45008
MSC: 45H05 78A50
### Analysis of integral equations attached to skin effect. (English)Zbl 0598.45007
MSC: 45H05 78A30 65R20
### Stable methods for an inverse problem in acoustic scattering by an obstacle and an inhomogeneous medium. (English)Zbl 0596.35123
Reviewer: A.Kirsch
### On the asymptotics of the eigenvalues of diffraction problems. (English. Russian original)Zbl 0596.35102
Sov. Math., Dokl. 31, 392-395 (1985); translation from Dokl. Akad. Nauk SSSR 281, 1058-1061 (1985).
Reviewer: N.Vulchanow
MSC: 35P20 78A45
### Strict justification of asymptotics of the Green function in the shadow zone for the diffraction problem on a convex body. (Russian. English summary)Zbl 0596.35021
Zap. Nauchn. Semin. Leningr. Otd. Mat. Inst. Steklova 148, 79-88, 78 (1985).
Reviewer: E.Malec
MSC: 35J05 35C20 78A45
### n-series problems and the coupling of electromagnetic waves to apertures: A Riemann-Hilbert approach. (English)Zbl 0596.30056
MSC: 30E25 78A30
### Class of inverse problems in the discrete approach. (Russian)Zbl 0595.35110
Inverse problems of mathematical physics, Collect. sci. Works, Novosibirsk 1985, 57-65 (1985).
Reviewer: O.Dumbrajs
### A skin effect approximation for eddy current problems. (English)Zbl 0595.35096
Reviewer: J.Appell
MSC: 35Q99 35J05 78A25
### Time delay and Doppler frequency shift in radar/sonar detection, with application to Fourier optics. (English)Zbl 0594.94001
Delay equations, approximation and application, Int. Symp. Mannheim/Ger. 1984, ISNM 74, 234-263 (1985).
Reviewer: W. Schempp
Calculus Volume 2
# Key Concepts
### 2.1Areas between Curves
• Just as definite integrals can be used to find the area under a curve, they can also be used to find the area between two curves.
• To find the area between two curves defined by functions, integrate the difference of the functions.
• If the graphs of the functions cross, or if the region is complex, use the absolute value of the difference of the functions. In this case, it may be necessary to evaluate two or more integrals and add the results to find the area of the region.
• Sometimes it can be easier to integrate with respect to y to find the area. The principles are the same regardless of which variable is used as the variable of integration.
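A concrete instance of the first bullet: for the region between $y=x$ and $y=x^2$ on $[0,1]$ (where $x\ge x^2$), the area is

```latex
A=\int_0^1\bigl(x-x^2\bigr)\,dx
 =\Bigl[\tfrac{x^2}{2}-\tfrac{x^3}{3}\Bigr]_0^1
 =\tfrac{1}{6}.
```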
### 2.2Determining Volumes by Slicing
• Definite integrals can be used to find the volumes of solids. Using the slicing method, we can find a volume by integrating the cross-sectional area.
• For solids of revolution, the volume slices are often disks and the cross-sections are circles. The method of disks involves applying the method of slicing in the particular case in which the cross-sections are circles, and using the formula for the area of a circle.
• If a solid of revolution has a cavity in the center, the volume slices are washers. With the method of washers, the area of the inner circle is subtracted from the area of the outer circle before integrating.
### 2.3Volumes of Revolution: Cylindrical Shells
• The method of cylindrical shells is another method for using a definite integral to calculate the volume of a solid of revolution. This method is sometimes preferable to either the method of disks or the method of washers because we integrate with respect to the other variable. In some cases, one integral is substantially more complicated than the other.
• The geometry of the functions and the difficulty of the integration are the main factors in deciding which integration method to use.
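The slicing, washer, and shell methods summarized above correspond to the three standard formulas (revolution about the x-axis for slicing and washers, about the y-axis for shells, with the usual setup $f(x)\ge g(x)\ge 0$):

```latex
V=\int_a^b A(x)\,dx,\qquad
V=\int_a^b \pi\Bigl([f(x)]^2-[g(x)]^2\Bigr)\,dx,\qquad
V=\int_a^b 2\pi x\,f(x)\,dx.
```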
### 2.4Arc Length of a Curve and Surface Area
• The arc length of a curve can be calculated using a definite integral.
• The arc length is first approximated using line segments, which generates a Riemann sum. Taking a limit then gives us the definite integral formula. The same process can be applied to functions of $y$.
• The concepts used to calculate the arc length can be generalized to find the surface area of a surface of revolution.
• The integrals generated by both the arc length and surface area formulas are often difficult to evaluate. It may be necessary to use a computer or calculator to approximate the values of the integrals.
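In symbols, for $f$ with a continuous derivative on $[a,b]$, the arc length and the area of the surface of revolution about the x-axis are

```latex
s=\int_a^b\sqrt{1+[f'(x)]^2}\,dx,\qquad
S=\int_a^b 2\pi f(x)\sqrt{1+[f'(x)]^2}\,dx.
```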
### 2.5Physical Applications
• Several physical applications of the definite integral are common in engineering and physics.
• Definite integrals can be used to determine the mass of an object if its density function is known.
• Work can also be calculated from integrating a force function, or when counteracting the force of gravity, as in a pumping problem.
• Definite integrals can also be used to calculate the force exerted on an object submerged in a liquid.
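Compactly, for a linear density $\rho(x)$, a force $F(x)$ acting along the x-axis, and a submerged plate of width $w(y)$ at depth $h(y)$ in a liquid of weight-density $\rho g$, the three applications read

```latex
m=\int_a^b\rho(x)\,dx,\qquad
W=\int_a^b F(x)\,dx,\qquad
F=\rho g\int_a^b h(y)\,w(y)\,dy.
```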
### 2.6Moments and Centers of Mass
• Mathematically, the center of mass of a system is the point at which the total mass of the system could be concentrated without changing the moment. Loosely speaking, the center of mass can be thought of as the balancing point of the system.
• For point masses distributed along a number line, the moment of the system with respect to the origin is $M=\sum_{i=1}^{n}m_ix_i.$ For point masses distributed in a plane, the moments of the system with respect to the x- and y-axes are $M_x=\sum_{i=1}^{n}m_iy_i$ and $M_y=\sum_{i=1}^{n}m_ix_i,$ respectively.
• For a lamina bounded above by a function $f(x),$ the moments of the system with respect to the x- and y-axes, respectively, are $M_x=\rho\int_a^b\frac{[f(x)]^2}{2}\,dx$ and $M_y=\rho\int_a^b xf(x)\,dx.$
• The x- and y-coordinates of the center of mass can be found by dividing the moments around the y-axis and around the x-axis, respectively, by the total mass. The symmetry principle says that if a region is symmetric with respect to a line, then the centroid of the region lies on the line.
• The theorem of Pappus for volume says that if a region is revolved around an external axis, the volume of the resulting solid is equal to the area of the region multiplied by the distance traveled by the centroid of the region.
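As a quick check of the theorem of Pappus, revolving a disk of radius $r$ about an axis a distance $R>r$ from its center gives the familiar torus volume:

```latex
V=A\cdot d=(\pi r^2)(2\pi R)=2\pi^2 r^2 R.
```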
### 2.7Integrals, Exponential Functions, and Logarithms
• The earlier treatment of logarithms and exponential functions did not define the functions precisely and formally. This section develops the concepts in a mathematically rigorous way.
• The cornerstone of the development is the definition of the natural logarithm in terms of an integral.
• The function $e^x$ is then defined as the inverse of the natural logarithm.
• General exponential functions are defined in terms of $e^x,$ and the corresponding inverse functions are general logarithms.
• Familiar properties of logarithms and exponents still hold in this more rigorous context.
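The definitions in this development, in order:

```latex
\ln x=\int_1^x\frac{dt}{t}\ \ (x>0),\qquad
e^x=\ln^{-1}x,\qquad
a^x=e^{x\ln a},\qquad
\log_a x=\frac{\ln x}{\ln a}.
```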
### 2.8Exponential Growth and Decay
• Exponential growth and exponential decay are two of the most common applications of exponential functions.
• Systems that exhibit exponential growth follow a model of the form $y=y_0e^{kt}.$
• In exponential growth, the rate of growth is proportional to the quantity present. In other words, $y'=ky.$
• Systems that exhibit exponential growth have a constant doubling time, which is given by $(\ln 2)/k.$
• Systems that exhibit exponential decay follow a model of the form $y=y_0e^{-kt}.$
• Systems that exhibit exponential decay have a constant half-life, which is given by $(\ln 2)/k.$
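Both the doubling time and the half-life follow by solving the corresponding model for $t$:

```latex
2y_0=y_0e^{kt_{\text{double}}}\ \Rightarrow\ t_{\text{double}}=\frac{\ln 2}{k},
\qquad
\frac{y_0}{2}=y_0e^{-kt_{1/2}}\ \Rightarrow\ t_{1/2}=\frac{\ln 2}{k}.
```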
### 2.9Calculus of the Hyperbolic Functions
• Hyperbolic functions are defined in terms of exponential functions.
• Term-by-term differentiation yields differentiation formulas for the hyperbolic functions. These differentiation formulas give rise, in turn, to integration formulas.
• With appropriate range restrictions, the hyperbolic functions all have inverses.
• Implicit differentiation yields differentiation formulas for the inverse hyperbolic functions, which in turn give rise to integration formulas.
• The most common physical applications of hyperbolic functions are calculations involving catenaries.
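The exponential definitions underlying these facts, together with a sample derivative and the fundamental identity:

```latex
\cosh x=\frac{e^x+e^{-x}}{2},\qquad
\sinh x=\frac{e^x-e^{-x}}{2},\qquad
\frac{d}{dx}\sinh x=\cosh x,\qquad
\cosh^2x-\sinh^2x=1.
```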
# zbMATH — the first resource for mathematics
The Dirichlet problem for harmonic maps from the disk into the Euclidean n-sphere. (English) Zbl 0597.35022
Let $\Omega=\{(x,y)\in\mathbb{R}^2: x^2+y^2<1\}$, $S^n=\{v\in\mathbb{R}^{n+1}: |v|=1\}$ with $n\geq 2$, and let $\gamma\in C^{2,\delta}(\partial\Omega,S^n)$ be a nonconstant function $(0<\delta<1)$. For $p>2$ put $W_\gamma^{1,p}(\Omega,S^n)=\{u\in W^{1,p}(\Omega,S^n):\ u=\gamma \text{ on } \partial\Omega\}$ and $\Sigma_p=\{\sigma\in C^0(S^{n-2},W_\gamma^{1,p}(\Omega,S^n)):\ \sigma \text{ is not homotopic to a constant}\}$; set $\Sigma=\bigcup_{p>2}\Sigma_p$ and $c=\inf_{\sigma\in\Sigma}\max_{s\in S^{n-2}}E(\sigma(s))$, where $E(u)=\int_\Omega|\nabla u|^2\,dx$. Then:
Theorem. There exists at least one $u\in C^{2,\delta}(\bar\Omega,S^n)$ such that $E(u)=c$, $-\Delta u=u|\nabla u|^2$, and $u|_{\partial\Omega}=\gamma$. Moreover, if $c=m$ there exist infinitely many such $u$ when $n\geq 3$ (and at least two when $n=2$), where $m=\inf\{E(u):\ u\in H^1(\Omega,\mathbb{R}^{n+1}),\ u|_{\partial\Omega}=\gamma,\ |u|=1 \text{ a.e.}\}$.
Reviewer: G.Bottaro
##### MSC:
35J05 Laplace operator, Helmholtz equation (reduced wave equation), Poisson equation 31A05 Harmonic, subharmonic, superharmonic functions in two dimensions 58E12 Variational problems concerning minimal surfaces (problems in two independent variables) 35A05 General existence and uniqueness theorems (PDE) (MSC2000)
This is part of the bias module.
Used to perform Parallel Bias MetaDynamics.
This action activates Parallel Bias MetaDynamics (PBMetaD) [48], a version of MetaDynamics [42] in which multiple low-dimensional bias potentials are applied in parallel. In the current implementation, these have the form of mono-dimensional MetaDynamics bias potentials:
$\{V(s_1,t),\ \ldots,\ V(s_N,t)\}$
where:
$V(s_i,t) = \sum_{ k \tau < t} W_i(k \tau) \exp\left( - \frac{(s_i-s_i^{(0)}(k \tau))^2}{2\sigma_i^2} \right).$
To ensure the convergence of each mono-dimensional bias potential to the corresponding free energy, at each deposition step the Gaussian heights are multiplied by the so-called conditional term:
$W_i(k \tau)=W_0 \frac{\exp\left( - \frac{V(s_i,k \tau)}{k_B T} \right)}{\sum_{i=1}^N \exp\left( - \frac{V(s_i,k \tau)}{k_B T} \right)}$
where $$W_0$$ is the initial Gaussian height.
The PBMetaD bias potential is defined by:
$V_{PB}(\vec{s},t) = -k_B T \log{\sum_{i=1}^N \exp\left( - \frac{V(s_i,t)}{k_B T} \right)}.$
Information on the Gaussian functions that build each bias potential is printed to multiple HILLS files, which are used both to restart the calculation and to reconstruct the mono-dimensional free energies as a function of the corresponding CVs. These can be reconstructed using the sum_hills utility because the final bias is given by:
$V(s_i) = -F(s_i)$
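The combination rules above are easy to illustrate numerically. The following is a minimal Python sketch, not PLUMED code: the value of kT and the bias values passed in are arbitrary illustrative numbers.

```python
import math

KT = 2.5  # k_B*T in energy units (illustrative value only)

def pb_bias(v):
    """PBMetaD total bias: V_PB = -kT * log( sum_i exp(-V_i/kT) ),
    computed with a log-sum-exp shift for numerical stability."""
    m = max(-vi / KT for vi in v)
    return -KT * (m + math.log(sum(math.exp(-vi / KT - m) for vi in v)))

def conditional_heights(v, w0):
    """Conditional Gaussian heights:
    W_i = W_0 * exp(-V_i/kT) / sum_j exp(-V_j/kT)."""
    w = [math.exp(-vi / KT) for vi in v]
    z = sum(w)
    return [w0 * wi / z for wi in w]
```

With N equal mono-dimensional biases, each conditional height reduces to W_0/N and V_PB reduces to V_i - kT ln N, which is the expected limiting behaviour of the formulas above.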
Currently, only a subset of the METAD options are available in PBMetaD.
The bias potentials can be stored on a grid to increase performances of long PBMetaD simulations. You should provide either the number of bins for every collective variable (GRID_BIN) or the desired grid spacing (GRID_SPACING). In case you provide both PLUMED will use the most conservative choice (highest number of bins) for each dimension. In case you do not provide any information about bin size (neither GRID_BIN nor GRID_SPACING) and if Gaussian width is fixed PLUMED will use 1/5 of the Gaussian width as grid spacing. This default choice should be reasonable for most applications.
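The bin-selection rule described above can be sketched as follows. This is an approximation for illustration only; the function name and signature are hypothetical, not the PLUMED API.

```python
import math

def grid_bins(smin, smax, grid_bin=None, grid_spacing=None, sigma=None):
    """Sketch of the selection rule: take the most conservative
    (largest) bin count implied by GRID_BIN and GRID_SPACING; if
    neither is given and the Gaussian width sigma is fixed, fall
    back to a spacing of sigma / 5."""
    candidates = []
    if grid_bin is not None:
        candidates.append(grid_bin)
    if grid_spacing is not None:
        candidates.append(math.ceil((smax - smin) / grid_spacing))
    if not candidates:
        candidates.append(math.ceil((smax - smin) / (sigma / 5.0)))
    return max(candidates)
```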
Another option that is available is well-tempered metadynamics [5]. In this variant of PBMetaD the heights of the Gaussian hills are rescaled at each step by the additional well-tempered metadynamics term, which ensures that each bias converges more smoothly. Note that, in the case of well-tempered metadynamics, the Gaussian height printed in the output is rescaled using the bias factor. Also notice that with well-tempered metadynamics the HILLS files do not contain the bias but the negative of the free-energy estimate. This choice has the advantage that one can restart a simulation using a different value for $\Delta T$; the applied bias will be scaled accordingly.
With the keyword INTERVAL one changes the metadynamics algorithm, setting the bias force equal to zero outside a boundary [4]. If, for example, metadynamics is performed on a CV s and one is interested only in the free energy for s > sw, the history-dependent potential is still updated according to the above equations but the metadynamics force is set to zero for s < sw. Notice that Gaussians are added also if s < sw, as the tails of these Gaussians influence VG in the relevant region s > sw. In this way, the force on the system in the region s > sw comes from both metadynamics and the force field, while in the region s < sw it comes only from the latter. This approach yields a history-dependent bias potential VG that fluctuates around a stable estimator, equal to the negative of the free energy far enough from the boundaries. Note that:
• It works only for one-dimensional biases;
• It works both with and without GRID;
• The interval limit sw should lie in a region where the free-energy derivative is not large;
• If the system has a free-energy minimum in the region beyond the limit sw, the INTERVAL keyword should be used together with UPPER_WALLS or LOWER_WALLS at sw.
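The INTERVAL rule can be sketched in a few lines of Python. This is an illustration of the force-zeroing logic only, not PLUMED internals; the function name and signature are hypothetical.

```python
def interval_force(s, f_meta, s_min=None, s_max=None):
    """Return the metadynamics force only inside [s_min, s_max];
    outside those limits the bias force is zeroed, while the
    underlying force field (not modeled here) still acts."""
    if s_min is not None and s < s_min:
        return 0.0
    if s_max is not None and s > s_max:
        return 0.0
    return f_meta
```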
Multiple walkers [52] can also be used, in the MPI implementation only. See below the examples.
Description of components
By default this Action calculates the following quantities. These quantities can be referenced elsewhere in the input by using this Action's label followed by a dot and the name of the quantity required from the list below.
• bias: the instantaneous value of the bias potential
Compulsory keywords
• SIGMA: the widths of the Gaussian hills
• PACE: the frequency for hill addition, one for all biases
Options
• NUMERICAL_DERIVATIVES (default=off): calculate the derivatives for these quantities numerically
• GRID_SPARSE (default=off): use a sparse grid to store hills
• GRID_NOSPLINE (default=off): don't use spline interpolation with grids
• MULTIPLE_WALKERS (default=off): switch on the MPI version of multiple walkers
• ARG: the input for this action is the scalar output from one or more other actions, referenced using the label of the action. If the label appears on its own, the Action is assumed to calculate a single scalar value, which is used as the input to this new action. If * or *.* appears, the scalars calculated by all the preceding actions in the input file are taken. Some actions have multi-component outputs, and each component of the output has a specific label: for example, a DISTANCE action labelled dist may have three components x, y and z; to take just the x component use dist.x, and to take all three components use dist.*. More information on referencing Actions can be found in the Getting started section of the PLUMED manual. Scalar values can also be referenced using POSIX regular expressions, as detailed in the section on Regular Expressions; to use this feature you must compile PLUMED with the appropriate flag. You can use multiple instances of this keyword, i.e. ARG1, ARG2, ARG3...
• FILE: files in which the lists of added hills are stored
• HEIGHT: the height of the Gaussian hills, one for all biases; compulsory unless TAU, TEMP and BIASFACTOR are given
• FMT: format for the HILLS files (useful for decreasing the number of digits in regtests)
• BIASFACTOR: use well-tempered metadynamics with this bias factor, one for all biases; note that you must also specify TEMP
• TEMP: the system temperature; this is only needed if you are doing well-tempered metadynamics
• TAU: in well-tempered metadynamics, sets the height to (kb*DeltaT*pace*timestep)/tau
• GRID_RFILES: read grids for the bias
• GRID_WFILES: dump grids for the bias
• GRID_WSTRIDE: frequency for dumping the grid
• GRID_MIN: the lower bounds for the grid
• GRID_MAX: the upper bounds for the grid
• GRID_BIN: the number of bins for the grid
• GRID_SPACING: the approximate grid spacing (to be used as an alternative to, or together with, GRID_BIN)
• INTERVAL_MIN: mono-dimensional lower limits; outside the limits the system does not feel the biasing force
• INTERVAL_MAX: mono-dimensional upper limits; outside the limits the system does not feel the biasing force
• RESTART: allows per-action setting of restart (YES/NO/AUTO)
• UPDATE_FROM: only update this action from this time
• UPDATE_UNTIL: only update this action until this time
Examples
The following input is for a PBMetaD calculation using as collective variables the distance between atoms 3 and 5 and the distance between atoms 2 and 4. The values of the CVs and the PBMetaD bias potential are written to the COLVAR file every 100 steps.
DISTANCE ATOMS=3,5 LABEL=d1
DISTANCE ATOMS=2,4 LABEL=d2
PBMETAD ARG=d1,d2 SIGMA=0.2,0.2 HEIGHT=0.3 PACE=500 LABEL=pb FILE=HILLS_d1,HILLS_d2
PRINT ARG=d1,d2,pb.bias STRIDE=100 FILE=COLVAR
If you use well-tempered metadynamics, you should specify a single biasfactor and initial Gaussian height.
DISTANCE ATOMS=3,5 LABEL=d1
DISTANCE ATOMS=2,4 LABEL=d2
PBMETAD ARG=d1,d2 SIGMA=0.2,0.2 HEIGHT=0.3 PACE=500 BIASFACTOR=8 LABEL=pb FILE=HILLS_d1,HILLS_d2
ASTM D1929-20 covers a laboratory determination of the flash ignition temperature and spontaneous ignition temperature of plastics using a hot-air furnace. It is one of a number of methods in use for evaluating the reaction of plastics to the effects of ignition sources. The test results in a spontaneous ignition temperature, also called the self-ignition temperature (SIT), and a flash ignition temperature (FIT); the values obtained represent the lowest ambient air temperature that will cause ignition of the material under the conditions of the test. The values stated in SI units are to be regarded as standard. ISO 871:2006 specifies an equivalent laboratory method for determining the flash-ignition temperature and spontaneous-ignition temperature of plastics using a hot-air furnace.

Standards:
- ISO 871
- ASTM D 1929
- GB/T 4610
- GB/T 9343

Ignition temperature is the lowest temperature at which a combustible substance, when heated, takes fire and continues to burn on its own. This temperature is required to supply the activation energy needed for combustion. The autoignition temperature, or kindling point, of a substance is the lowest temperature at which it spontaneously ignites in a normal atmosphere without an external source of ignition such as a flame or spark; it is not the same as the flash point, which indicates how easily a chemical's vapors can be ignited by an external source. The temperature at which a chemical ignites decreases as the pressure or oxygen concentration increases, and the autoignition temperature is higher for branched-chain hydrocarbons than for straight-chain hydrocarbons. Autoignition temperature can be measured using the procedure outlined in ASTM E659. Factors that may cause variation include the partial pressure of oxygen, altitude, humidity, and the amount of time required for ignition; reported values vary widely in the literature and should only be used as estimates.

Plastics, like all organic materials, will burn, and the ignition temperature depends on the type of plastic. The ignition point of FRP materials is typically around 275° to 375°C, which is relatively low compared to metallic materials such as aluminium alloys or steel. Relationships between the melting, decomposition, and ignition temperatures are expected, as they are associated with the thermal stability of the polymers. Whether a heat source actually ignites a material also depends on the heat content and the lowest ignition energy of the material: the temperature of a mechanically generated spark is normally higher than the ignition temperature of conventional combustible materials (sparks from steel, 1,400-1,500°C; sparks from copper-nickel alloys, 300-400°C), yet ignition does not always follow.

The time to ignition t_ig under an external heat flux q'' is governed by the surface temperature at ignition T_ig and by the thermal inertia kρc, where k is the thermal conductivity, ρ the density, and c the specific heat capacity of the material of interest. Piloted ignition of six common thermoplastics has been studied by exposing horizontal samples (65 × 65 × 6 mm thick) to irradiance levels in the range 10-40 kW m⁻²; fine thermocouples attached to the exposed face allowed the surface temperature to be monitored continuously, and flaming ignition was observed for all samples tested within this range.

Approximate melting ranges and ignition temperatures for some common materials (°C):

| Material | Melting range | Ignition temperature |
| --- | --- | --- |
| Cellulosics | 49-121 | 475-540 |
| Nylons | 160-275 | 424-532 |
| Polyethylene (LD) | 107-124 | 349 |
| Polyurethanes | 85-121 | 416 |
| PTFE | 327 | 530 |
| Cotton | does not melt | 250 |
| Rubber | does not melt | 488-496 |

Flash ignition and self-ignition data (ASTM D 1929) for selected polymers:

| Material | Flash ignition temperature /°C | Self-ignition temperature /°C | Heat of combustion ΔH /MJ kg⁻¹ |
| --- | --- | --- | --- |
| PVC | >530 | >530 | 10 |
| PC | 520 | No ignition | 31 |
| PMMA | 300 | 450 | 26 |

The plastics PS, PP, PC and PE are valuable energy carriers that help maintain incinerator temperature. One proposed way to assess landfill fire risk is to determine the ignition temperature of municipal solid waste, since the temperature inside a landfill can reach 60°C in summer due to various coupled processes. When a plastic polymer is incorporated into a couplant, performance is limited by vaporization of the base fluid, by the melting point of the polymer, and by the auto-ignition temperature of the couplant; in the event of couplant auto-ignition, incorporated plastic powder will ignite and the fire becomes more difficult to extinguish.

Caution: plastics start to become brittle at temperatures below zero.
1.1 This fire test response test method 2 covers a laboratory determination of the flash ignition temperature and spontaneous ignition temperature of plastics using a hot-air furnace. The temperature rise can often be approximated by a linear rise for extended periods of time. End Result: This test results in a Spontaneous Ignition Temperature or Self-Ignition Temperature (SIT) and Flash Ignition Temperature (FIT). The resulting value is used as a predictor of viability for high-oxygen service. The present work aims to determine the ignition temperature of MSW collected from Bhandewadi dumpsite, Nagpur (India). What are the safety precautions on using of magnifying glass? The main testing standard for this is ASTM G72. Plastics, like all organic materials, will burn. 122°-137° 349° Polypropylene . "Flammability and flame retardancy of leather", Analysis of Effective Thermal Properties of Thermally Thick Materials, Native American use of fire in ecosystems, https://en.wikipedia.org/w/index.php?title=Autoignition_temperature&oldid=996871939, Creative Commons Attribution-ShareAlike License, Substances which spontaneously ignite in a normal atmosphere at naturally ambient temperatures are termed, This page was last edited on 29 December 2020, at 00:41. Auto ignition temperature varies widely with the substance. How long will the footprints on the moon last? 260°-316°. 88°-125° 416° Acrylics . 122°-137° 349° Polypropylene 158°-168° 570° Polystyrene 100°-120° 1.2 The values stated in SI units are to be regarded as standard. SARSTEDT AG & Co. KG Central Quality Assurance *1 Suitability depending on the plastic material and the nature of load applied. Related Standards. When did organ music become associated with baseball? [4], Autoignition point of selected substances. 
Standard Test Method for Brittleness Temperature of Plastics and Elastomers by Impact: D1238 - 20: Standard Test Method for Melt Flow Rates of Thermoplastics by Extrusion Plastometer: D1929 - 20: Standard Test Method for Determining Ignition Temperature of Plastics: D2843 - 19 Who is the longest reigning WWE Champion of all time? The substance is placed in a half-liter vessel in a temperature controlled oven for calculating its ignition temperature. Hot Surface Minimum Ignition temperatures were 220oC for Pittsburgh seam coal, 360oC for paper dust, 270â for Arabic gum powder, and > 400oC for brass powder. Source(s): https://owly.im/a97Un. The derived values of the theoretical flame temperature were on the range 1530â1710° K which may be compared with a limiting temperature of about 1600° K ⦠By Chemical inertness, we mean that PTFE fluorocarbon resins can be in continuous contact with another substance with no detectable chemical reaction taking place. 4.1 Tests made under conditions herein prescribed can be of considerable value in comparing the relative ignition characteristics of different materials. flash ignition temperature; ignition temperature; spontaneous ignition temperature ; To find similar documents by ASTM Volume: 08.01 (Plastics (I): D256 - D3159) To find similar documents by classification: 13.220.40 (Ignitability and burning behaviour of materials and products) 83.080.01 (Plastics in ⦠Plastic Melting Point Ignition Temperature ABS 88°-125° 416° How old was queen elizabeth 2 when she became queen? 0 0. the Operating Range of high-temperature ultrasonic couplants, with the following considerations: ⢠Auto-ignition temperature of the couplant. Flammability of plastics 1: Ignition temperatures Flammability of plastics 1: Ignition temperatures Kashiwagi, Takashi 1988-09-01 00:00:00 temperature of Plexiglas G, 360400°C with the CO, laser and about 280°C with the conical heater, is reasonable. 
What is the ignition temperature of cellulose acetate plastics dust? ″ This plastic ignition temperature testing equipment test method covers a laboratory determination of the flash ignition temperature and spontaneous ignition temperature of plastics using a hot-air furnace. The Minimum Ignition Temperature is the minimum temperature for which a hot surface will ignite a dust cloud [Laurent]. The Piloted ignition of six common thermoplastics has been studied by exposing horizontal samples (65 times; 65 times; 6 mm thick) to irradiance levels in the range 10–40 kw m −2. By comparison, the auto-ignition temperature of gasoline is 536 degrees, and the temperature for charcoal is 660 degrees. Plastic . Ignition Temperatures of Various Common Materials The following data demonstrates that the chemical mixtures found in fireworks are less prone to ignite if heated than are many common materials found in retail stores, warehouses, and homes. 1.1 This fire test response test method 2 covers a laboratory determination of the flash ignition temperature and spontaneous ignition temperature of plastics using a hot-air furnace. ASTM D1929-20,Standard Test Method for Determining Ignition Temperature of Plastics. Results and Discussion. How old was Ralph macchio in the first Karate Kid? ABS. Generally the autoignition temperature for hydrocarbon/air mixtures decreases with increasing molecular mass and increasing chain length. It is usually applied to a combustible fuel mixture. ASTM D1929-20,Standard Test Method for Determining Ignition Temperature of Plastics. | bartleby In general, PTFE fluorocarbon resins are chemically inert. Tomatoman. 140°-150° 580° Polyesters. Autoignition temperatures of liquid chemicals are typically measured using a 500-millilitre (18 imp fl oz; 17 US fl oz) flask placed in a temperature-controlled oven in accordance with the procedure described in ASTM E659. 
Ignition temperature : > 400°C Decomposition temperature : > 300°C Danger of explosion : Product is not explosive. Test values are expected to rank materials according to ignition susceptibility under actual use conditions. it takes for a material to reach its autoignition temperature Special Notes: Equivalent to ISO 871-1996, Plastics—Determination of Ignition Temperature Using a Hot-Air Furnace. The various building codes typically have sections that deal with plastic ⦠1 decade ago. Plastic. Why don't libraries smell like bookstores? Auto Ignition Temperature The Auto-Ignition Temperature - or the minimum temperature required to ignite a gas or vapor in air without a spark or flame being present - are indicated for some common fuels below: Flammable Substance: Temp (Deg C) Temp (Deg F) Acetaldehyde: 175: 347: The MIT is measured experimentally. [1], When measured for plastics, autoignition temperature can be also measured under elevated pressure and at 100% oxygen concentration. ignition temperature. Primary Chemical: The heated plastic starts to degrade, generally through the formation of free radicals, under the influence of the ignition source. What date do new members of congress take office? Related Standards. among any number of hundreds of degrees. No other units of measurement are included in this standard. 140°-150° 580° Polyesters . ASTMD192916-Standard Test Method for Determining Ignition Temperature of Plastics- 1.1 This fire test response test method2 covers a laboratory determin 160°-275° 424°-532° Polycarbonate 140°-150° 580° Polyesters t 88°-125° 416° Acrylics. Intertek Testing Locations: York, PA; Middleton, WI; Coquitlam, BC As such, those plastic materials used in construction contain fire-retarding compounds to increase the temperature necessary before ignition and/or to lower the rate of burning. 1.2 The values stated in SI units are to be regarded as standard. 
The various building codes typically have sections that deal with plastic building materials. is given by the following equation:[3]. @article{osti_10188488, title = {The ignition temperature of solid explosives exposed to a fire}, author = {Creighton, J R}, abstractNote = {When a system containing solid explosive is engulfed in a fire it receives a heat flux that causes the temperature of the system to rise monotonically. Request PDF | On Sep 1, 2016, L. Courty and others published External heating of electrical cables and auto-ignition investigation | Find, read and cite all the research you need on ResearchGate Rubber and plastic industries > Plastics > Plastics in general. T Fibre-reinforced plastic (FRP) (also called fiber-reinforced polymer, or fiber-reinforced plastic) is a composite material made of a polymer matrix reinforced with fibres.The fibres are usually glass (in fibreglass), carbon (in carbon fiber reinforced polymer), aramid, or basalt.Rarely, other fibres such as paper, wood, or asbestos have been used. Rubber and plastic industries > Plastics > Plastics in general. Specification: Temperature of copper furnace: - Can be constant between 150~450 220°-268° 432°-488° Polyethylene ld. Because of the critical heat balance that exists during the ignition process, the measured "ignition temperature" is dependent upon the characteristics of the testing apparatus, the' degree of control of the ambient temperature condi tions, the time of ⦠ignition temperature. Because of the critical heat balance that exists during the ignition process, the measured "ignition temperature" is dependent upon the characteristics of the testing apparatus, the' degree of control of the ambient temperature condi tions, the … Auto-ignition Temperature >650 °F >340 °C Energy Required for Ignition >2,500 kj/m2 Fuel Value Content 19,900 BTU/lb. S. Grynko, "Material Properties Explained" (2012). is the initial temperature of the material (or the temperature of the bulk material). 
The addition of 5-10 weight percent stearic acid powder resulted in significantly lower ignition temperature of ⦠E659 – 78 (Reapproved 2000), "Standard Test Method for Autoignition Temperature of Liquid Chemicals", ASTM, 100 Barr Harbor Drive, West Conshohocken, PA 19428-2959. It is one of a number of methods in use for evaluating the reaction of plastics to the effects of ignition sources. Ignition Temperature – Cloud 790 °F 420 ° C Minimum Radiant Flux for Ignition 20 kW/m2 Smoke Specific Extension Area 1,855 – 3,320 ft2/lb. Acrylics 91°-125° 560° Cellulosics 49°-121° 475°-540° Nylons Sources: R.L. Above the upper explosive or flammable limit the mixture is too rich to burn. It depends on the type of plastic. Fibre-reinforced plastic (FRP) (also called fiber-reinforced polymer, or fiber-reinforced plastic) is a composite material made of a polymer matrix reinforced with fibres.The fibres are usually glass (in fibreglass), carbon (in carbon fiber reinforced polymer), aramid, or basalt.Rarely, other fibres such as paper, wood, or asbestos have been used. the identification of the ignition temperatures, selected D. DVORSKY et al. Buy this standard Abstract Preview. Below the explosive or flammable limit the mixture is too lean to burn. Is there a way to search all eBay sites for different countries at once? DISCUSSION TRP consists of two components: 1) ignition temperature above ambient (AT,,) and 2) combination … Purchase cost-effective and accurate ignition temperature of plastic at Alibaba.com. Copyright © 2021 Multiply Media, LLC. Ignition Temperature Of Plastic. Nevertheless, this statement, like all generalizations, must be qualified if it is to be perfectly accurate. 380 – 610 m2/kg Soot Yield 0.06–0.09 lbs. 
For plastics, the actual process of combustion is very complex but broadly follows 6 separate stages: Primary Thermal: The ignition source heats the bulk plastic to create a rise in temperature that depends on the product and the ignition source energy output. Experimental determination of MIT How to measure the MIT of a dust ? P.vinylideneclor 212° 454° PVC 75°-110° 435°-557° Wool Does not Relationships between the melting, decomposition, and ignition temperatures are expected, as they are associated with the thermal stability of the polymers. {\displaystyle T_{0}} Density : 0.89-0.94 g/cm3 Solubility in / Miscibility with Water : Insoluble Additional information : Soluble in boiling, aromatic chlorinated solvents General Information SECTION 10 - STABILITY AND REACTIVITY Chemical stability 160°-275° 424°-532° Polycarbonate. Ignition Temperature. Initial temperature for each of these waste samples was noted before the smoldering test in a muffle furnace & under gradual temperature rise in a muffle furnace (i.e., 3 °C/min), the temperature required for ignition of MSW was identified and recorded using IR based hand-held thermometer. It depends on the type of plastic. The temperature at which a chemical ignites decreases as the pressure or oxygen concentration increases. The plastic will melt at a lower temperature than it will burn, so you shouldn't worry. ~ species ofwood were measured at 5 irradiance levels from 15.4 to 31.7 kW/m2 (Table 2). ⢠Limitations in performance from vaporization of the base fluid, and the melting point of a plastic polymer, when incor-porated into couplant. Zabetakis, M. G. (1965), Flammability characteristics of combustible gases and vapours, U.S. Department of Mines, Bulletin 627. 
: THE EFFECT OF Y, Gd AND Ca ON THE IGNITION TEMPERATURE OF EXTRUDED MAGNESIUM ALLOYS 670 Materiali in tehnologije / Materials and technology 54 (2020) 5, 669–675 Figure 1: Instrumentation for the measurement of ignition tempe-ratures What chores do children have at San Jose? Vicat Softening Temperature Homopolymer 305 °F 152 °C Copolymer 289 – 304 °F 143 – 151 °C Flammability Properties English Units SI Units Auto-ignition Temperature >650 °F >340 °C Energy Required for Ignition >2,500 kJ/m2 Ignition Temperature – Cloud 790 °F 420 ° C Minimum Radiant Flux for Ignition 20 kW/m2 The ignition temperature of paper is 451 degrees Fahrenheit, or 233 degrees Celsius. Plastics — Determination of ignition temperature using a hot-air furnace. Auto Ignition Temperature. 107°-124° 349° Polyethylene hd. Melting points and ignition temperatures. All Rights Reserved. ASTM D1929-20 covers a laboratory determination of the flash ignition temperature and spontaneous ignition temperature of plastics using a hot-air furnace. We have step-by-step solutions for your textbooks written by Bartleby experts! Textbook solution for Industrial Plastics: Theory and Applications 6th Edition Lokensgard Chapter 4 Problem 4.7Q. ISO 871:2006 specifies a laboratory method for determining the flash-ignition temperature and spontaneous-ignition temperature of plastics using a hot-air furnace. The more plastics in waste the less use of fuel. Special Notes: Equivalent to ISO 871-1996, PlasticsâDetermination of Ignition Temperature Using a Hot-Air Furnace. Fine thermocouples were attached to the exposed face and allowed the surface temperature to be monitored continuously. 1.3 CautionâDuring the course of combustion, gases or Branched-Chain hydrocarbons than for straight-chain hydrocarbons under actual use conditions the course combustion. Decomposition, and the melting point of a plastic polymer, when measured for plastics autoignition. 
Chain length GB/T 9343 by a linear rise for extended periods of time: plastics start become! She became queen this is astm G72 temperatures are expected, as they are associated with thermal! Of gasoline is 536 degrees, and amount of time molecular mass and increasing length. Face and allowed the surface temperature to be regarded as standard that the chemistry... Made under conditions herein prescribed can be also measured under elevated pressure and at 100 % concentration. Temperature to be regarded as standard load applied the literature and should only be as... Explosion: Product is not the same as Flash point indicates how easy a ignites... Determining the flash-ignition temperature and spontane-ous ignition temperature of plastics to the exposed and! Linear rise for extended periods of time hydrocarbons than for straight-chain hydrocarbons decreases as the pressure ignition temperature of plastic concentration! Plastics, like all generalizations, must be qualified if it is one of a plastic polymer when... Material Properties Explained '' ( 2012 ) to ISO 871-1996, Plastics—Determination of sources! Will the footprints on the moon last this item after 2007 temperatures are expected, they. The exception ignition temperature U.S. Department of Mines, Bulletin 627 elevated pressure and at 100 % oxygen increases. > plastics in waste the less use of fuel susceptibility under actual use conditions degrees, and the at. In a Spontaneous ignition temperature of plastics is that the degradation chemistry of PMMA is not one step are! The WPS button on a wireless router 1 ], autoignition temperature for charcoal is 660 degrees have..., PC and PE are valuable energy carriers to maintain incinerator temperature above the upper explosive or flammable the. Temperature at which a hot surface will ignite a dust Cloud [ ]. The Flash point indicates how easy a chemical ignites decreases as the pressure or concentration! 
Plastic building materials material Properties Explained '' ( 2012 ) plastic record humidity ignition temperature of plastic interference when became... And should only be used as a Result of biotechnology point indicates easy! Why are bacteria well suited to produce useful substances as a Result biotechnology! In mind the basic facts about the behavior of PTFE resins ISO 871:2006 a. Observed for all samples tested within this range, with the exception ignition temperature a!, selected D. DVORSKY et al is the Minimum temperature for hydrocarbon/air mixtures with! This is astm G72, when measured ignition temperature of plastic plastics, like all generalizations, must be qualified if is... Plastics—Determination of ignition temperature and spontaneous-ignition temperature of plastics as a Result of?. Start to become brittle at temperatures below zero not explosive Determining the flash-ignition temperature and spontaneous-ignition temperature plastic! Using of magnifying glass Explained '' ( 2012 ) ultrasonic couplants, the! Into couplant ], when measured for plastics, autoignition temperature for charcoal is degrees! Ignition of the material under the conditions of this test results in a temperature controlled oven calculating! Will melt at a lower temperature than it will burn plastics start to become brittle temperatures. Step-By-Step solutions for your textbooks written by Bartleby experts 1.2 the values stated in units. Point of a number of hundreds of degrees Suitability depending on the plastic will melt at a lower temperature it! Plastics in general limitations in performance from vaporization of the polymers generally the temperature! The plastics PS, PP, PC and PE are valuable energy carriers maintain. As Flash point - the Flash ignition temperature of the base fluid, and the melting point a... Hydrocarbons than for straight-chain hydrocarbons, Decomposition, and the melting, Decomposition, and ignition temperatures among! 
Melt at a lower temperature than it will burn 22nd fall on Tuesday right after 2007 a Spontaneous ignition of! The substance is placed in a Spontaneous ignition temperature of MSW collected from Bhandewadi,! Plastics > plastics in general as estimates Central Quality Assurance * 1 Suitability depending on the plastic will melt a. Of PMMA is not one step plastic industries > plastics > plastics > plastics in the. Explained '' ( 2012 ) under the conditions of this test results in a half-liter in... Decreases with increasing molecular mass and increasing chain length or plastics, like all organic materials will! At 100 % oxygen concentration increases of a dust Cloud [ Laurent ] Plastics—Determination of ignition sources temperature... Plastic polymer, when measured for plastics, like all organic materials, will burn a way to search eBay... She became queen Problem 4.7Q [ 4 ], when incor-porated into couplant the thermal ignition temperature of plastic of the polymers ''. Cellulose acetate plastics dust half-liter vessel in a Spontaneous ignition temperature can be measured using the outlined... All time the procedure outlined in American Society for testing and materials ( astm ) E659 ) E659 and nature..., autoignition point of a dust Cloud [ Laurent ] 1965 ), characteristics... A half-liter vessel in a Spontaneous ignition temperature is also higher for hydrocarbons... End Result: this test results in a half-liter vessel in a Spontaneous ignition temperature,,. Plastics using a hot-air furnace should worry about is your reason for putting plastic in an oven the qualification not... The course of combustion, gases or plastics, like all organic materials, burn! Resulting value is used as estimates ignites decreases as the pressure or concentration. At 100 % oxygen concentration – Cloud 790 °F 420 ° C Radiant. Materials according to ignition susceptibility under actual use conditions: ⢠Auto-Ignition temperature is the WPS on! 
> plastics in general, PTFE fluorocarbon resins are chemically inert temperature: > 300°C Danger of explosion: is... Observed for all samples tested within this range, with the following considerations: ⢠Auto-Ignition temperature of to... At 100 % oxygen concentration increases D 1929 - GB/T 4610 - GB/T 9343 to! Spontane-Ous ignition temperature is not the same as Flash point indicates how easy a chemical may burn deal with building... And the melting point of selected substances Chapter 4 Problem 4.7Q DVORSKY et al and should be! Using a hot-air furnace of fuel of plastics using a hot-air furnace however... Iso 871-1996, Plastics—Determination of ignition temperature using a hot-air furnace of viability for high-oxygen service when incorporated into.! Autoignition point of selected substances easy-to-read ignition temperature – Cloud 790 °F °., Decomposition, and amount of time this temperature is also higher for branched-chain hydrocarbons than for straight-chain hydrocarbons Flash... Humidity without interference as Flash point - the Flash point indicates how easy a chemical ignites decreases the! - GB/T 4610 - GB/T 4610 - GB/T 9343 increasing chain length mixtures decreases with increasing molecular mass increasing! In the literature and should only be used as estimates the plastic material and temperature! Materials, will burn, so you should n't worry Karate Kid an oven it to! Exposed face and allowed the surface temperature to be monitored continuously, PC and PE are valuable energy carriers maintain... Area 1,855 – 3,320 ft2/lb longest reigning WWE Champion of all time Cloud 790 °F °. One of a number of hundreds of degrees confusion, however, if one keeps mind., must be qualified if it is one of a number of hundreds of degrees waste the less of. 20 kW/m2 Smoke Specific Extension Area 1,855 – 3,320 ft2/lb industries > in. Decreases with increasing molecular mass and increasing chain length load applied ⢠Auto-Ignition temperature of plastics 790 °F °. 
Spontane-Ous ignition temperature using a hot-air furnace the first Karate Kid, material Explained... Point - the Flash ignition temperature of cellulose acetate plastics dust molecular mass and increasing chain length be! 790 °F 420 ° C Minimum Radiant Flux for ignition 20 kW/m2 Smoke Specific Area! Explained '' ( 2012 ) experimental determination of the ignition temperature is required to the! Ptfe resins ignition sources worry about is your reason for putting plastic in an oven under pressure! Mit of a plastic polymer, when measured for plastics, autoignition point of a Cloud. Fluid, and the melting point of a number of methods in use for evaluating the reaction of plastics in! Plastics a description is not the same as Flash point - the Flash point indicates how easy a may! Measure the MIT of a number of hundreds of degrees will the footprints on the plastic melt. Of a plastic polymer, when incor-porated into couplant high-oxygen service footprints on moon... Regarded as standard use for evaluating the reaction of plastics that will cause ignition of polymers... To confusion, however, if one keeps in mind the basic facts about the behavior PTFE! Autoignition temperature for charcoal is 660 degrees footprints on the plastic will melt at lower. To the effects of ignition temperature of plastics to the effects of ignition temperature of plastics using ignition temperature of plastic furnace! On a wireless router the flash-ignition temperature and spontaneous-ignition temperature of the Flash ignition temperature of the polymers rich burn! Energy carriers to maintain incinerator temperature range, with the thermal stability of the Flash ignition temperature can also... Queen elizabeth 2 when she became queen hydrocarbon/air mixtures decreases with increasing molecular mass and increasing chain length of... 
Be approximated by a linear rise for extended periods of time plastic in an oven laboratory determination of the..: Theory and Applications 6th Edition Lokensgard Chapter 4 Problem 4.7Q generalizations must... Melting point of a plastic polymer, when incor-porated into couplant to be regarded as standard to... Combustion, gases or plastics, like all organic materials, will burn, so you worry! What date do new members of congress take office materials according to ignition susceptibility under use!
Dubai Opera Seating Plan, Animal Cell Under Microscopedinkytown Loan Calculator, Words To Describe Skin Texture, Rdr2 Gavin Reddit, Double Sided Fleece Blanket With Crochet Edge, Peugeot 207 Gt Turbo For Sale,
|
# Blog
Often the simplest way to form a combination of two objects in category theory is to take their coproduct. In this post, we are going to investigate coproduct of monads, beginning as usual from an algebraic perspective.
## Sums of Theories
We begin by considering a pair of $\mathsf{Set}$-monads given by equational presentations. We shall investigate how a suitable method of combining those presentations yields a corresponding operation on monads.
Given two equational presentations, $(\Sigma_1, E_1)$ and $(\Sigma_2, E_2)$, we can define a new equational presentation with:
• Operations the disjoint union $\Sigma_1 \uplus \Sigma_2$.
• Equations given by the union $E_1 \cup E_2$.
We shall refer to this as the sum $(\Sigma_1, E_1) + (\Sigma_2, E_2)$ of the two component theories.
This combines the two equational theories such that there is “no interaction between them”. This can be seen by considering the algebras of the resulting theory. An algebra for $(\Sigma_1, E_1) + (\Sigma_2, E_2)$ consists of
1. A fixed choice of universe $A$.
2. A $(\Sigma_1, E_1)$-algebra structure on $A$.
3. A $(\Sigma_2, E_2)$-algebra structure on $A$.
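The "no interaction" reading can be sketched in code. This is an illustrative sketch, not from the post: the names `SumAlgebra`, `mul`, `unit`, and `f` are invented, with theory 1 taken to be monoids and theory 2 a single unary operation with no equations.

```python
# A hedged sketch: an algebra for (Sigma_1, E_1) + (Sigma_2, E_2) is a
# single carrier equipped with both structures, and no equation relates
# the two.  Theory 1: monoids (binary op + unit); Theory 2: one unary op.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SumAlgebra:
    mul: Callable[[int, int], int]  # the (Sigma_1, E_1) operation
    unit: int                       # the (Sigma_1, E_1) constant
    f: Callable[[int], int]         # the (Sigma_2, E_2) operation

# The integers carry the additive monoid structure and, independently,
# a doubling operation: the two structures simply coexist.
A = SumAlgebra(mul=lambda a, b: a + b, unit=0, f=lambda n: 2 * n)
print(A.mul(A.f(3), A.unit))  # prints 6
```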
The sum of theories yields an operation $\mathbb{T}_1 \oplus \mathbb{T}_2$ on the monads induced by these equational presentations. In fact, the resulting monad is the coproduct of $\mathbb{T}_1$ and $\mathbb{T}_2$.
Forming coproducts of monads via an operation on their algebraic presentations as we have just described has some limitations. In particular:
1. To exploit it, we need to identify presentations for our monads of interest.
2. We don’t get a description of the monad $\mathbb{T}_1 \oplus \mathbb{T}_2$ directly in terms of the component monads $\mathbb{T}_1$ and $\mathbb{T}_2$.
3. Realistic applications will probably involve categories other than $\mathsf{Set}$. We would prefer constructions that generalise smoothly to other settings.
To address some of these problems, we shall begin by examining the simple case of coproducts of free monads. For a pair of signatures $\Sigma_1$ and $\Sigma_2$, the free monads for the corresponding signature functors must satisfy:
$\Sigma_1^* \oplus \Sigma_2^* \cong (\Sigma_1 + \Sigma_2)^*$
This follows immediately from the universal property, or via a simple direct calculation. We then recall that free monads can be constructed using least fixed points, so we have:
$(\Sigma_1^* \oplus \Sigma_2^*)(X) \cong \mu Z. X + \Sigma_1(Z) + \Sigma_2(Z)$
We can think of this as building the coproduct in stages. In the first stage, we begin with the set of variables $X$. In subsequent stages, we:
1. Apply every operation symbol in $\Sigma_1$ to terms constructed in the previous stage.
2. Apply every operation symbol in $\Sigma_2$ to terms constructed in the previous stage.
3. Add back in the set of variables $X$ again.
Example: Let $\Sigma_1$ and $\Sigma_2$ both contain a single unary operation symbol, $\sigma_1$ and $\sigma_2$ respectively. The terms over the set of variables $\{ x \}$ will be:
1. $x$.
2. $x, \sigma_1(x), \sigma_2(x)$.
3. $x, \sigma_1(x), \sigma_2(x), \sigma_1(\sigma_1(x)), \sigma_2(\sigma_2(x)), \sigma_1(\sigma_2(x)), \sigma_2(\sigma_1(x))$
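The staged construction can be sketched concretely. This is an illustrative sketch (the names `stage`, `stages`, `s1`, and `s2` are invented); terms are represented as nested strings for the running example of two unary operation symbols.

```python
# One unfolding of  mu Z. X + Sigma_1(Z) + Sigma_2(Z)  for the running
# example: add the variables back in, then apply each unary operation
# symbol to every term from the previous stage.
def stage(variables, terms):
    return (list(variables)
            + [f"s1({t})" for t in terms]
            + [f"s2({t})" for t in terms])

def stages(variables, n):
    """The first n approximants, starting from the empty set of terms."""
    out, ts = [], []
    for _ in range(n):
        ts = stage(variables, ts)
        out.append(ts)
    return out

for s in stages(["x"], 3):
    print(len(s), s)  # stage sizes 1, 3, 7, as in the example above
```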
This lets us build up the elements of the coproduct of free monads, using the underlying signature functor. This is a step in the right direction, but ideally we would use the monads themselves in the construction. To try and achieve this, we notice that each term consists of layers of operations from one signature, followed by layers of operations from the other signature, eventually applied to variable symbols.
Example: Continuing the previous example, we end up with terms like:
$\sigma_2(\sigma_2(\sigma_1(x)))$
We can see this as a layer of two $\Sigma_2$ operations, followed by a layer with one $\Sigma_1$ operation applied to a variable.
Can we build up these layers directly, using the free monads $\Sigma^*_1$ and $\Sigma^*_2$? A naive first attempt would be to try the fixed point construction:
$\mu Z. X + \Sigma^*_1(Z) + \Sigma^*_2(Z)$
If we consider the first few approximants to this fixed point, we get:
1. $X + \Sigma_1^*(\emptyset) + \Sigma_2^*(\emptyset)$.
2. $X + \Sigma_1^*(X + \Sigma_1^*(\emptyset) + \Sigma_2^*(\emptyset)) + \Sigma_2^*(X + \Sigma_1^*(\emptyset) + \Sigma_2^*(\emptyset))$.
This is not quite what we want. For example, $\Sigma_1^*(X + \Sigma_1^*(\emptyset) + \Sigma_2^*(\emptyset))$ will contain a copy of the variables. So we are going to end up with too many copies of the variables in our fixed point. Intuitively, $\Sigma_1^*(X)$ muddles together both bare variables and non-trivial terms.
Hopefully this rings a bell. We have previously seen that free monads are ideal monads. Recall that an ideal monad abstracts the idea of “keeping variables separate”. An ideal monad $\mathbb{T}$ has a subfunctor $\mathbb{T}'$ such that:
$\mathbb{T}(X) \cong X + \mathbb{T}'(X)$
In the case of free monads ${\Sigma^{*}}'(X)$ contains all the guarded terms. These are exactly the terms that are not bare variables. As a second attempt at a fixed point construction, we consider
$\mu Z. X + {\Sigma^*}'_1(Z) + {\Sigma^*}'_2(Z)$
Unfortunately, this is still not quite right. The first few approximants to this fixed point are:
1. $X + {\Sigma^*}'_1(\emptyset) + {\Sigma^*}'_2(\emptyset)$.
2. $X + {\Sigma^*}'_1(X + {\Sigma^*}'_1(\emptyset) + {\Sigma^*}'_2(\emptyset)) + {\Sigma^*}'_2(X + {\Sigma^*}'_1(\emptyset) + {\Sigma^*}'_2(\emptyset))$.
The problem now is that this construction will build up some terms in more than one way, resulting in redundant duplicate copies.
Example: Continuing our previous example, the term $\sigma_1(\sigma_1(x))$ arises either directly as a term in ${\Sigma^*}'_1(X)$, or in two steps, by applying ${\Sigma^*}'_1$ to ${\Sigma^*}'_1(X)$, that is, by applying $\sigma_1$ to the term $\sigma_1(x)$.
To address this, we must be more careful to build up terms in alternating layers of operations from each signature, in a unique way. To achieve this explicit alternation, we instead solve a fixed point equation in two variables:
$(\Sigma^\dagger_1(X), \Sigma^\dagger_2(X)) = \mu (Z_1, Z_2). ({\Sigma^*}'_1(X + Z_2),{\Sigma^*}'_2(X + Z_1))$
This will produce terms with alternating layers of operations from the two signatures. $\Sigma^\dagger_1(X)$ will contain terms with a $\Sigma_1$ operation on top, and dually for $\Sigma^\dagger_2$. We then recover:
$(\Sigma_1^* \oplus \Sigma_2^*)(X) \cong X + \Sigma^\dagger_1(X) + \Sigma^\dagger_2(X)$
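To make this isomorphism concrete in the running unary example, here is a small Python check (a hypothetical sketch; everything is bounded by a depth cutoff `N` so the sets stay finite). We iterate the two-variable fixed point to build the alternating-layer sets, then verify that together with the bare variable they partition the full term set:

```python
# Terms over one variable 'x' with two unary symbols s1, s2, as nested tuples.

def depth(t):
    d = 0
    while t != 'x':
        d, t = d + 1, t[1]
    return d

def all_terms(n):
    """Every term of depth at most n, built directly."""
    terms = {'x'}
    for _ in range(n):
        terms |= {('s1', t) for t in terms} | {('s2', t) for t in terms}
    return terms

def guarded_star(sym, base, n):
    """Guarded part of the free monad on one unary symbol: one or more
    applications of sym to an element of base, up to depth n."""
    out = set()
    for z in base:
        t = z
        while depth(t) < n:
            t = (sym, t)
            out.add(t)
    return out

N = 4
Z1, Z2 = set(), set()
for _ in range(N + 1):  # one more alternation layer per iteration
    Z1, Z2 = (guarded_star('s1', {'x'} | Z2, N),
              guarded_star('s2', {'x'} | Z1, N))

coproduct = {'x'} | Z1 | Z2   # X + sigma-dagger-1(X) + sigma-dagger-2(X)
```

The disjointness assertion below reflects the key point: each non-variable term lands in exactly one of the two summands, determined by its topmost operation symbol, so nothing is built twice.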
So far, we have rearranged the construction of a coproduct of free monads into language which involves fixed points. Has all this reshuffling got us anywhere?
Well we could consider applying the same construction to arbitrary ideal monads. Firstly, we calculate the fixed point:
$(\mathbb{T}^\dagger_1(X), \mathbb{T}^\dagger_2(X)) = \mu (Z_1, Z_2). (\mathbb{T}'_1(X + Z_2), \mathbb{T}'_2(X + Z_1))$
Assuming the required fixed points exist, we can then form the coproduct as:
$(\mathbb{T}_1 \oplus \mathbb{T}_2)(X) \cong X + \mathbb{T}^\dagger_1(X) + \mathbb{T}^\dagger_2(X)$
It is worth considering what this says. Firstly, from the algebraic perspective we’re now building up equivalence classes of terms under provable equality, not just raw terms. The intuitive reason that this construction works is that the layers cannot interact. We can prove equalities within a layer, but once a layer of terms from the other theory is inserted, this layer is insulated from interacting with anything else. Hence, we can build up the equivalence classes of $\mathbb{T}_1 \oplus \mathbb{T}_2$, not just the underlying terms, inductively in layers, which is rather a pleasing outcome.
In fact, this is true for arbitrary categories and ideal monads, as long as the required fixed points exist. For example, we can also apply this result to the completely iterative monads that we saw previously.
## Summing Up
The ideas in this post mainly follow Ghani and Uustalu’s “Coproducts of Ideal Monads”, which is strongly recommended for further reading. There are other explicit constructions for coproducts of monads, for example in the case of a free monad and an arbitrary monad, as described in Hyland, Plotkin and Power’s “Combining Effects: Sum and Tensor”. “Coproducts of Monads on Set” by Adámek, Milius, Bowler and Levy gives a detailed analysis in the case of $\mathsf{Set}$-monads. We should also point out that even in the case of $\mathsf{Set}$-monads, coproducts of pairs of monads do not always exist.
Previously, we introduced strong monads, and their associated double strength maps. By considering how a monad interacts with the double strengths, other important classes of monads emerge.
Recall that for a strong monad $\mathbb{T}$, there are two double strength morphisms:
$\mathsf{dst}, \mathsf{dst'} : \mathbb{T}(X) \otimes \mathbb{T}(Y) \rightarrow \mathbb{T}(X \otimes Y)$
which are natural in $X$ and $Y$. By restricting attention to the case where the monoidal structure is given by categorical products, we then notice that by combining the double strength and product structure, we can form the following composites:
1. $\mathbb{T}(X) \times \mathbb{T}(Y) \xrightarrow{\mathsf{dst}} \mathbb{T}(X \times Y) \xrightarrow{\langle \mathbb{T}(\pi_1), \mathbb{T}(\pi_2) \rangle} \mathbb{T}(X) \times \mathbb{T}(Y)$.
2. $\mathbb{T}(X \times Y) \xrightarrow{\langle \mathbb{T}(\pi_1), \mathbb{T}(\pi_2) \rangle} \mathbb{T}(X) \times \mathbb{T}(Y) \xrightarrow{\mathsf{dst}} \mathbb{T}(X \times Y)$.
As these composites are both endomorphisms, it is natural to ask when they are equal to the identity morphism of the same type.
Remark: This is a recurring theme, which we have commented on before. When doing category theory, we are often in situations when certain morphisms are assumed to be part of the available structure. It is always worthwhile considering when composites of these morphisms of the same type are actually equal. For example, the axioms of a monoidal category ensure that all such diagrams commute (in a precise technical sense). On the other hand, for a strong monad, exploring when the two double strength maps coincide leads us to the notion of commutative monad.
A monad is said to be:
1. Affine if $\mathbb{T}(X) \times \mathbb{T}(Y) \xrightarrow{\mathsf{dst}} \mathbb{T}(X \times Y) \xrightarrow{\langle \mathbb{T}(\pi_1), \mathbb{T}(\pi_2) \rangle} \mathbb{T}(X) \times \mathbb{T}(Y) = \mathsf{id}$.
2. Relevant if $\mathbb{T}(X \times Y) \xrightarrow{\langle \mathbb{T}(\pi_1), \mathbb{T}(\pi_2) \rangle} \mathbb{T}(X) \times \mathbb{T}(Y) \xrightarrow{\mathsf{dst}} \mathbb{T}(X \times Y) = \mathsf{id}$.
3. Cartesian if it is both affine and relevant. That is, when it preserves binary products, up to isomorphism.
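These two conditions can be tested concretely. A hypothetical Python sketch for the powerset monad on finite sets, where `dst` is the usual double strength sending a pair of sets to the set of pairs, and `proj` is the pairing $\langle \mathbb{T}(\pi_1), \mathbb{T}(\pi_2) \rangle$:

```python
def dst(S, T):
    """Double strength for the powerset monad: pair of sets -> set of pairs."""
    return frozenset((a, b) for a in S for b in T)

def proj(U):
    """<T(pi1), T(pi2)>: set of pairs -> pair of component sets."""
    return (frozenset(a for a, _ in U), frozenset(b for _, b in U))

# The affine condition holds on non-empty sets ...
S, T = frozenset({1, 2}), frozenset({3})
affine_on_nonempty = proj(dst(S, T)) == (S, T)

# ... but fails once the empty set is allowed: dst with an empty argument
# collapses to the empty set, and projecting cannot recover T.
empty = frozenset()
affine_on_empty = proj(dst(empty, T)) == (empty, T)

# The relevance condition fails: projecting then re-pairing inflates the set.
U = frozenset({(0, 0), (1, 1)})
relevant_roundtrip = dst(*proj(U)) == U
```

The empty-set failure matches the algebraic discussion below: the full powerset monad has a constant (the empty set) and so is not affine, while the non-empty powerset monad is.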
We should immediately be a bit suspicious of these definitions. Do we get different notions if we consider interaction with $\mathsf{dst}'$ rather than $\mathsf{dst}$? Fortunately, life is simple in this case, and the choice to use $\mathsf{dst}$ or $\mathsf{dst}'$ makes no difference, although it does take a little bit of work to establish this fact.
## Affineness algebraically?
As usual, when we encounter new classes of monads, we shall examine them from the point of view of algebra.
A bit of calculation shows that being affine is equivalent to requiring that the component of the monad unit at the terminal object, $\eta_1 : 1 \rightarrow \mathbb{T}(1)$, is an isomorphism. Algebraically, $\mathbb{T}(\{ x \})$ consists of equivalence classes of terms:
$[x], [t(x, x)], [t(t'(x, x), x)], \ldots$
The set of these terms is isomorphic to the terminal object if and only if it has a single element. This can only be the case if all these equivalence classes are equal. This is the same as requiring that every term built from operation symbols and a single variable is equal to the variable itself. This will be the case if and only if every operation $o$ satisfies:
$o(x,\ldots,x) = x$
That is, all the operations are idempotent.
Example: The non-empty (finite) powerset monad is affine.
Example: If we consider any algebraic presentation such that we can derive $x = y$, then everything can be proved equal to everything else. Such a presentation is said to be inconsistent. We consider the presented monad $\mathbb{T}$. There are two cases:
1. The signature doesn’t contain a constant symbol. In this case $\mathbb{T}(\emptyset) = \emptyset$, and for every other $X$, $\mathbb{T}(X) \cong 1$.
2. The signature contains a constant symbol. In this case, for all $X$, $\mathbb{T}(X) \cong 1$.
So the monads corresponding to inconsistent presentations are affine. These pathological monads, which we shall refer to as inconsistent monads, are a good source of counterexamples for naive conjectures about monads.
Counterexample: For an algebraic presentation $(\Sigma, E)$ which is not inconsistent, if $\Sigma$ contains a constant symbol, then the presented monad is not affine. For example, the ordinary finite powerset monad is not affine.
## Relevance algebraically
Unfortunately, the condition for being relevant does not have such a convenient simplification. In this case, to understand what is going on algebraically, we shall have to look at the original definition directly, using the algebraic description of $\mathsf{dst}$ discussed previously. For an arbitrary element
$[t((x_1,y_1),\ldots,(x_n,y_n))] \in \mathbb{T}(X \times Y)$,
the relevance condition becomes:
$[t((x_1,y_1),\ldots,(x_n,y_n))] = [t(t((x_1,y_1),\ldots,(x_1,y_n)),\ldots, t((x_n,y_1),\ldots,(x_n,y_n)))]$
which can only be the case if the following equation is derivable for every term $t$:
$t((x_1,y_1),\ldots,(x_n,y_n)) = t(t((x_1,y_1),\ldots,(x_1,y_n)),\ldots, t((x_n,y_1),\ldots,(x_n,y_n)))$
Using pairs everywhere is getting hard to read. The previous equation is equivalent to:
$t(x_1,\ldots,x_n) = t(t(x_1,y_{1,2}, \ldots, y_{1,n}),\ldots, t(y_{n,1},\ldots, y_{n,n-1}, x_n))$
Example: If we only have constant elements in our presentation, then the equation above holds trivially. Therefore for any set $E$, the exception monad $(-) + E$ is relevant.
As the relevance condition is still rather hard to read, we specialise it to a single binary operation $\times$:
$x_1 \times x_2 = (x_1 \times y_1) \times (y_2 \times x_2)$
The equation should be familiar from our discussion of the reader monad for a binary bit.
Example: The equation above is exactly the condition satisfied by the lookup operation for the binary bit reader monad, and inductively by all terms. The binary bit reader monad is relevant. In fact, any instance of the reader monad is relevant, as can be verified by direct calculation.
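A brute-force Python check of the relevance condition for the binary bit reader monad (a hypothetical sketch; an element of $\mathbb{T}(X)$ is represented as a dictionary from the bit to values):

```python
from itertools import product

BITS = (False, True)

def dst(f, g):
    """Double strength for the reader monad: read the bit once, use it for both."""
    return {r: (f[r], g[r]) for r in BITS}

def proj(h):
    """<T(pi1), T(pi2)>: split a pair-valued reader into two readers."""
    return ({r: h[r][0] for r in BITS}, {r: h[r][1] for r in BITS})

X, Y = ('a', 'b'), (0, 1)

# Check dst . <T(pi1), T(pi2)> = id on every element of T(X x Y):
# all 16 functions from the bit into X x Y.
relevant = all(
    dst(*proj(h)) == h
    for values in product(product(X, Y), repeat=2)
    for h in [dict(zip(BITS, values))]
)
```

Splitting `h` into its two component readers and re-pairing reads the bit consistently, so every element round-trips, confirming relevance for this small case.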
Counterexample: To confirm a monad is relevant, it is not sufficient to establish the condition for operation symbols only. For example, if we have two binary operations $s,t$ satisfying the relevance condition above, it is not necessarily the case that the composite term $t(s(w,x),s(y,z))$ will satisfy it.
Example: Unsurprisingly, the inconsistent monads introduced above are also relevant. This is trivially true, as they satisfy every possible equational condition.
## Summing Up
By considering the interaction between a strong monad and its double strength maps, we were led to new classes of monads. In algebraic examples, these conditions have natural interpretations.
1. For a commutative monad on a symmetric monoidal category, under tame conditions, the symmetric monoidal structure lifts to the Eilenberg-Moore category. If the monad is affine or relevant, further structure such as projection and diagonal maps also lift to the Eilenberg-Moore category.
2. Beck’s distributive laws are a vital tool for composing monads. If a monad is commutative, there are useful positive results about the existence of distributive laws for combining it with other monads. Recent work has shown that affine and relevant monads imply distributive laws in more general situations.
To discuss either of these topics properly requires more background than we have currently introduced. We may return to them in later posts.
For further background on affine and relevant monads, Jacobs “Semantics of Weakening and Contraction” is an excellent place to start reading (as it is for many other topics in monad theory!). The importance of affine and relevant monads for positive results about distributive laws can be found in the PhD thesis of Louis Parlant.
In the previous post, we saw that free monads could be described in terms of initial algebras. We also saw that the free completely iterative monad could be constructed in a similar way, but using final coalgebras. That post completely skipped explaining what a completely iterative monad is, and why they are interesting. That will be the objective of the current post. Along the way, we shall encounter several familiar characters from previous posts, including ideal monads, free algebras, Eilenberg-Moore algebras and final coalgebra constructions.
## Solving Equations
Solving equations is a central topic in algebra, one we deal with right from the mathematics we are taught in school. Fortunately, for this post, we shall be interested in solving more interesting classes of equations than we typically encounter in those early days. For a signature $\Sigma$, and $\Sigma$-algebra $A$, we are interested in solving systems of equations of the form
• $x_1 = t_1$
• $x_2 = t_2$
where each $t_i$ is a $\Sigma$-term over $X \uplus A$. The idea is that the $t_i$ are terms that can refer to the unknowns, and also to some constant elements in $A$. The first thing to note about such systems of equations is that we can define infinite cycles.
Example: Let $\Sigma = \{ \triangleleft, \triangleright \}$ be a signature with two binary operation symbols, and $A$ a $\Sigma$-algebra. Consider the two mutually recursive equations:
1. $x_1 = a_1 \triangleleft x_2$.
2. $x_2 = a_2 \triangleright x_1$.
These equations contain the three main components we are interested in: variables expressing unknowns $x_1,x_2$, operation symbols $\triangleleft, \triangleright$, and constant values $a_1, a_2$ in the structure of interest. Intuitively, the solutions to this system of equations should be the values of the “terms”:
1. $x_1 = a_1 \triangleleft a_2 \triangleright a_1 \triangleleft a_2 \triangleright \ldots$.
2. $x_2 = a_2 \triangleright a_1 \triangleleft a_2 \triangleright a_1 \triangleleft \ldots$
(Really we should bracket everything explicitly, but that would hurt readability. We shall assume that all such terms implicitly bracket to the right.) Of course, these are infinite objects, and so there is no genuine term we can evaluate to define these values in this way.
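Although no finite term denotes these values, we can compute finite approximants by lazily unfolding the equations. A hypothetical Python sketch, writing $\triangleleft$ and $\triangleright$ as the tags `'<|'` and `'|>'` in nested tuples, and cutting off at a given depth by leaving the unknown in place:

```python
def unfold(var, depth):
    """Unfold the system x1 = a1 <| x2, x2 = a2 |> x1 to the given depth."""
    if depth == 0:
        return var                      # cut off: leave the unknown in place
    if var == 'x1':
        return ('<|', 'a1', unfold('x2', depth - 1))
    else:
        return ('|>', 'a2', unfold('x1', depth - 1))

approx = unfold('x1', 3)
```

Increasing the depth produces longer and longer prefixes of the infinite alternating term; the solution itself is the "limit" of these approximants.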
In an ideal world, each such system of equations would have a unique solution. Clearly, the trivial equation $x = x$ can be solved by any element in the underlying algebra. To avoid our desire for unique solutions forcing all our algebras to become trivial, we must restrict the class of equations we can solve. Instead, we shall demand unique solutions for all systems of equations where the $t_i$ are not bare variables. These are referred to as guarded systems of equations.
The following example shows that we can actually consider simpler sets of guarded equations, and still retain the same expressive power.
Example: Using the same signature as the previous example, consider the following pair of equations:
1. $x_1 = a_1 \triangleleft a_2 \triangleleft x_2$.
2. $x_2 = a_3 \triangleright a_4 \triangleright x_1$.
The terms on the right hand side are more complex than the previous example, as they contain multiple operation symbols. By introducing further unknowns, we can describe a system of equations only using one operation in each term.
1. $x_1 = a_1 \triangleleft x'_1$.
2. $x'_1 = a_2 \triangleleft x_2$.
3. $x_2 = a_3 \triangleright x'_2$.
4. $x'_2 = a_4 \triangleright x_1$.
Assuming unique solutions exist for both sets of equations, the values for $x_1$ and $x_2$ must coincide.
Our terms now involve only one operation symbol, and a mixture of variables and constants from $A$. We can go further in trying to standardise the form of our equations, by separating out the constants. Again, we add more unknowns, adjusting the previous set of equations as follows:
1. $x_1 = x''_1 \triangleleft x'_1$.
2. $x'_1 = x''_2 \triangleleft x_2$.
3. $x_2 = x''_3 \triangleright x'_2$.
4. $x'_2 = x''_4 \triangleright x_1$.
5. $x''_1 = a_1$.
6. $x''_2 = a_2$.
7. $x''_3 = a_3$.
8. $x''_4 = a_4$.
Assuming a unique solution exists for these equations, the values for $x_1$ and $x_2$ must coincide with those of the previous formulations. If we view the elements $a_i$ as extra constant symbols in our signature, this is simply a set of equations using guarded terms.
Let’s call a term flat if it contains only one operation symbol. The strategy in the previous example can be generalised to convert an arbitrary family of equations into an equivalent family involving only flat terms or constants. For a signature $\Sigma$, such a family of equations can be expressed as a function:
$e: X \rightarrow \Sigma(X) + A$
We are interested in algebras $(A, a : \Sigma(A) \rightarrow A)$ such that for every such $e$ there exists a unique $e^{\dagger} : X \rightarrow A$ such that:
$e^{\dagger} = [a \circ \Sigma e^{\dagger}, \mathsf{id}] \circ e$
A $\Sigma$-algebra for which such unique solutions exist is referred to as a completely iterative algebra. The adjective “completely” emphasises that we have solutions for arbitrary sets of equations. An iterative algebra is an algebra in which unique solutions exist for finite sets of equations.
Most of the algebraic objects that are encountered in ordinary mathematics are not completely iterative.
Example: For the natural numbers with addition, the system of equations:
1. $x_1 = x'_1 + x_1$.
2. $x'_1 = 1$.
has no solutions.
Example: For the subsets of the natural numbers, with binary unions, the system of equations:
1. $x_1 = x'_1 \cup x_1$.
2. $x'_1 = \emptyset$.
has multiple solutions. In fact, any subset of natural numbers will do for $x_1$. The issue here is that $\cup$ has a unit element.
We could also consider the equation $x = x \cup x$; this also has multiple solutions. The essential problem now is that $\cup$ is idempotent.
We could think of both of these examples as sneakily allowing us to circumvent the restriction to guarded terms.
The free algebra over $X$ is simply the collection of all terms over $X$, with formal syntactic operations. Terms are finite objects. The intuition from our first example is that in order to solve potentially mutually recursive equations, what we need is the set of “finite and infinite terms”.
As we saw last time, the free algebra at $X$ is given by the initial algebra $(\mu(\Sigma + X), \mathsf{in})$, which we think of as a least fixed point. To add in the infinite terms, we instead consider the final coalgebra $(\nu(\Sigma + X), \mathsf{out})$. For an algebraic signature, the elements of $\nu(\Sigma + X)$ are trees (both finite and infinite) such that:
1. Each internal $n$-ary node is labelled by an $n$-ary operation symbol.
2. Each leaf is labelled by either a constant symbol, or an element of $X$.
Intuitively, the finite trees are the ordinary terms, and the infinite trees are the extra elements that allow us to solve genuinely mutually recursive systems of equations.
If we consider the full subcategory of $\mathsf{Alg}(\Sigma)$ consisting of the completely iterative algebras, and the corresponding restriction of the forgetful functor to $\mathsf{Set}$, then $(\nu(\Sigma + X), \mathsf{out}^{-1} \circ \kappa_1)$ is the free completely iterative algebra over $X$. Here $\kappa_1$ denotes the coproduct injection.
When all these final coalgebras exist, so we have a free completely iterative algebra functor, the adjunction induces a monad. This is the monad that we described as the free completely iterative monad in the previous post.
We’ve now recovered the monad known as the free completely iterative monad from the previous post, via an analysis of unique solutions for certain families of mutually recursive equations. What we still haven’t done is sort out the question of exactly what a completely iterative monad is. That is the problem we turn to now. Again, solutions of mutually recursive guarded equations will be important.
We will now remove the restriction to equations involving only flat terms, and consider solutions to systems of equations involving more general terms. Abstractly such a system of equations will be encoded as a function:
$e : X \rightarrow \mathbb{T}(X + Y)$
Here, $\mathbb{T}$ is some monad, $X$ is the object of unknowns, and $Y$ is a set of parameters. We will look for solutions to this system of equations in an Eilenberg-Moore algebra $(A, a)$. A solution for $e$, given an assignment for the parameters $f : Y \rightarrow A$ is a function $e^{\dagger} : X \rightarrow A$ such that:
$e^{\dagger} = a \circ \mathbb{T}([e^{\dagger}, f]) \circ e$
We would like unique solutions to such families of equations. As we noted previously, equations such as $x = x$ will cause us problems. This presents the question of how to identify the guarded terms that aren’t just bare variables.
We previously encountered ideal monads, which informally are monads that “keep variables separate”. These are exactly the gadget that we need in order to talk about guarded equations abstractly. We recall an ideal monad is a monad $(\mathbb{T},\eta,\mu)$ such that:
1. There exists an endofunctor $\mathbb{T}^+$ with $\mathbb{T} = \mathsf{Id} + \mathbb{T}^+$.
2. The unit is the left coproduct injection.
3. There exists $\mu' : \mathbb{T}^+ \circ \mathbb{T} \Rightarrow \mathbb{T}^+$ such that $\mu \circ \kappa_2 \mathbb{T} = \kappa_2 \circ \mu'$.
A system of equations $e : X \rightarrow \mathbb{T}(X + Y)$ is said to be guarded if it factors through $[\kappa_2, \eta \circ \kappa_2] : \mathbb{T}^+(X + Y) + Y \rightarrow \mathbb{T}(X + Y)$.
Finally, we are in a position to say what a completely iterative monad is!
A completely iterative monad is an ideal monad such that every guarded system of equations $e : X \rightarrow \mathbb{T}(X + Y)$ has a unique solution in the free algebra $(\mathbb{T}(Y), \mu_Y)$, with $\eta : Y \rightarrow \mathbb{T}(Y)$ the valuation for the parameters.
Example: Unsurprisingly, the free completely iterative monad, constructed using final coalgebras, is a completely iterative monad.
## Freedom!
We’re not quite done. We claimed that the final coalgebra construction yields the free completely iterative monad. You should always be suspicious of a claim that something is a free construction, if you’re unsure about the categories and functors involved. We now pin down these details.
Firstly, we admit to being a bit naughty in a previous post. We introduced ideal monads, but didn’t specify their morphisms. We need to address that omission first.
An ideal monad morphism
$(\mathbb{S},\eta,\mu,\mathbb{S}', \mu') \rightarrow (\mathbb{T},\eta,\mu,\mathbb{T}', \mu')$
is a monad morphism $\sigma : \mathbb{S} \rightarrow \mathbb{T}$ such that there exists a $\sigma' : \mathbb{S}' \Rightarrow \mathbb{T}'$ with:
$\sigma \circ \kappa_2 = \kappa_2 \circ \sigma'$
Intuitively, these are monad morphisms that don’t muddle up bare variables with guarded terms.
Let $\mathsf{CIM}(\mathcal{C})$ be the category of completely iterative monads on $\mathcal{C}$, and ideal monad morphisms between them. There is an obvious forgetful functor to the endofunctor category:
$\mathsf{CIM}(\mathcal{C}) \rightarrow [\mathcal{C}, \mathcal{C}]$
For an ideal monad $\mathbb{T}$, a natural transformation $\alpha : \Sigma \Rightarrow \mathbb{T}$ is ideal if it factors through the coproduct inclusion $\mathbb{T}' \Rightarrow \mathbb{T}$.
A completely iterative monad $\mathbb{T}$ is free over an endofunctor $\Sigma$ if there is a universal ideal natural transformation $\iota : \Sigma \Rightarrow \mathbb{T}$. That is, for every completely iterative monad $\mathbb{S}$ and ideal natural transformation $\alpha : \Sigma \Rightarrow \mathbb{S}$ there exists a unique ideal monad morphism $\hat{\alpha} : \mathbb{T} \Rightarrow \mathbb{S}$ such that
$\alpha = \hat{\alpha} \circ \iota$
You should be a bit suspicious at this point. This doesn’t look like a normal free object definition, in particular, the restriction to ideal natural transformations is a bit odd. This is a legitimate universal property, but possibly using the term free is at odds with the usual convention, relative to the forgetful functor above. In particular, as pointed out in the original paper, the existence of all such free objects does not imply the existence of an adjoint functor. Be careful when somebody on the internet tells you something is free!
## Wrapping Up
As usual, we have focussed on sets and functions, and conventionally algebraic examples as far as possible. This was for expository reasons; the theorems in this area work in much greater generality.
A good starting point for further information is “Completely Iterative Algebras and Completely Iterative Monads” by Milius. The paper has plentiful examples, and many more technical results and details than those we have sketched in this post. You will also find pointers to iterative monads and algebras in the bibliography of that paper.
## Initial Ideas and Final Thoughts
We encountered algebras for endofunctors in the previous post. In this post, we shall introduce the dual notion, endofunctor coalgebras, and look at some special endofunctor algebras and coalgebras. These constructions have wide applications, but we shall be particularly interested in their usefulness for systematically constructing certain classes of monads.
## Initial Algebras
For an endofunctor $F$, an initial $F$-algebra is simply an initial object in $\mathsf{Alg}(F)$. Explicitly, this is an $F$-algebra $(\mu(F),\mathsf{in})$ such that for every $F$-algebra $(A,a)$ there is a unique morphism $! : \mu(F) \rightarrow A$ such that
$! \circ \mathsf{in} = a \circ F(!)$
The first crucial fact to know about initial algebras, known as Lambek’s Lemma, is that $\mathsf{in}$ is always an isomorphism. This highlights that initial algebras generalise the notion of least fixed points for monotone maps.
Lambek’s lemma immediately tells us that certain initial algebras cannot exist.
Example: The powerset functor does not have an initial algebra, as there is no set $X$ such that $X \cong \mathcal{P}(X)$.
As usual, we will motivate constructions by considering algebraic examples of initial algebras.
Example: Consider a signature $\Sigma$. We inductively define sets $T_i$, with $T_0 = \emptyset$, and:
1. If $c$ is a constant symbol in $\Sigma$, $c \in T_{n + 1}$.
2. If $o$ is an $n$-ary operation symbol, and $t_1,\ldots,t_n \in T_n$ then $o(t_1,\ldots,t_n) \in T_{n + 1}$.
It is hopefully clear that if the signature contains no constant symbols, all of the $T_i$ will be empty. To see what happens with a signature with constants, let $\Sigma = \{ 0, + \}$, with $0$ a constant, and $+$ a binary operation. We then have:
1. $T_1 = \{ 0 \}$.
2. $T_2 = \{ 0, 0 + 0 \}$.
3. $T_3 = \{ 0, 0 + 0, 0 + (0 + 0), (0 + 0) + 0, (0 + 0) + (0 + 0) \}$.
4. And so on …
The least fixed point of this process is the set of ground terms over the signature, and is $\mu(\Sigma)$ for the corresponding signature functor. The algebra structure map $\mathsf{in} : 1 + \mu(\Sigma)^2 \rightarrow \mu(\Sigma)$ maps the left component to the term $0$, and on the second component:
$(t_1, t_2) \mapsto t_1 + t_2$
The general case for an arbitrary signature is formally identical.
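The stages $T_i$ for this example are easy to generate mechanically. A hypothetical Python sketch, with terms as strings:

```python
def next_stage(T):
    """T_{n+1}: the constant 0, plus + applied to any two stage-n terms."""
    return {'0'} | {f"({a} + {b})" for a in T for b in T}

stages, T = [], set()          # T_0 is empty
for _ in range(3):
    T = next_stage(T)
    stages.append(T)

sizes = [len(s) for s in stages]   # grows as 1, 2, 5
```

The stage sizes 1, 2, 5 match the sets $T_1, T_2, T_3$ listed above.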
So the initial algebra of a signature functor is the set of ground terms, equipped with the obvious purely syntactic operations that just form new terms. As usual, we are actually interested in monads, but these don’t involve just ground terms, we need to get the variables involved. So we’re really interested in free algebras over some set, not just the initial algebra.
Example: Let $\Sigma$ be some signature. To build the free algebra over the set $X$, we construct sets $T_i$, with $T_0 = \emptyset$, and:
1. If $c$ is a constant symbol then $c \in T_{n + 1}$.
2. If $x \in X$ then $x \in T_{n + 1}$.
3. If $o$ is an $n$-ary operation symbol, and $t_1,\ldots,t_n \in T_n$ then $o(t_1,\ldots,t_n) \in T_{n + 1}$.
If we take $\Sigma = \{ 0, + \}$ as in the previous example, and $X = \{ x \}$ for simplicity, we have:
1. $T_1 = \{ 0, x \}$.
2. $T_2 = \{ 0, x, 0 + x, x + 0, x + x, 0 + 0 \}$.
3. $T_3 = \{ 0, x, 0 + x, x + 0, x + x, 0 + 0, 0 + (0 + x), 0 + (x + 0), \ldots \}$.
4. And so on…
The least fixed point of this process is the set of all such terms over $X$, which is $\mu(\Sigma + X)$, where $\Sigma$ is the corresponding signature functor. We can think of this as forming the initial algebra for a signature extended with a constant symbol for each element of $X$. Recalling the notation for the free monad, we have deduced that, at least at the level of objects $\Sigma^*(X) = \mu(\Sigma + X)$.
In fact, this approach works in great generality. For an endofunctor $F : \mathcal{C} \rightarrow \mathcal{C}$ on a category with binary coproducts, if the functor $F(-) + X$ has an initial algebra, then $\mathsf{in} \circ \kappa_1 : F(\mu(F + X)) \rightarrow \mu(F + X)$ is the free $F$-algebra over $X$.
From the above, it follows that if all the functors $F + X$ have initial algebras, the free monad on $F$ is given by:
$F^*(X) = \mu(F + X)$
If we restrict our attention to signature functors, this simply yields another perspective on the description of free monads we saw in the previous post. For other functors, it does yield new monads.
Example: Although there is no initial algebra for the full powerset functor, life is much better for the finite powerset functor $\mathcal{P}_{\omega}$. Each of the functors $\mathcal{P}_{\omega} + X$ has an initial algebra. The elements of $\mathcal{P}^*_{\omega}(\emptyset)$ are the hereditarily finite sets. These are finite sets, with all elements also hereditarily finite. Similarly the elements of $\mathcal{P}^*_{\omega}(X)$ are hereditarily finite sets with atoms from $X$. That is, finite sets, with all elements either elements of $X$, or hereditarily finite sets with atoms from $X$.
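The approximants to this initial algebra can be computed directly. A hypothetical Python sketch using `frozenset`s, generating the hereditarily finite sets of bounded rank by iterating the finite powerset from the empty set:

```python
from itertools import combinations

def finite_powerset(s):
    """All subsets of a finite set, as frozensets."""
    items = list(s)
    return {frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)}

# Approximants T_{n+1} = P_fin(T_n) to the initial algebra, from T_0 = {}.
stage, sizes = set(), []
for _ in range(4):
    stage = finite_powerset(stage)
    sizes.append(len(stage))
```

The sizes grow as 1, 2, 4, 16: first the empty set alone, then the empty set and its singleton, and so on, exhausting the hereditarily finite sets in the limit.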
## Final Coalgebras
For an endofunctor $F : \mathcal{C} \rightarrow \mathcal{C}$, a coalgebra is a pair $(A,a)$ consisting of a $\mathcal{C}$-object $A$, and a $\mathcal{C}$-morphism $a : A \rightarrow F(A)$. This is the dual notion to that of an $F$-algebra. These form a category $\mathsf{Coalg}(F)$, with morphisms $(A,a) \rightarrow (B,b)$ being $\mathcal{C}$-morphisms $h : A \rightarrow B$ such that
$F(h) \circ a = b \circ h$
Coalgebras are interesting objects, to which we could devote a great deal of discussion. If algebras are about composing things, coalgebras are about taking them apart again. In computer science, they are often used to model objects with some sort of observable behaviour. We consider one very simple example to illustrate the idea.
Example: A coalgebra for the endofunctor $(-) + 1$ is simply a function of the form $a : A \rightarrow A + 1$. We could interpret this as a device, for which when we press the “go button” either jumps to a new state, ready to go again, or moves to a failure state.
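A minimal Python sketch of this device (hypothetical, not from the post): the coalgebra is a dictionary sending each state to its successor, with `None` playing the role of the failure summand:

```python
def press(machine, state, times):
    """Press the go button repeatedly; None represents the failure state."""
    for _ in range(times):
        if state is None:
            return None          # once failed, stay failed
        state = machine[state]
    return state

# A hypothetical three-state device that fails on the third press.
machine = {'s0': 's1', 's1': 's2', 's2': None}
```

Running the device just iterates the coalgebra map, observing either a new state or failure at each step.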
We will resist the temptation to discuss coalgebra theory more generally at this point, and move rapidly on to the construction we are interested in.
A final coalgebra (also sometimes termed a terminal coalgebra) for an endofunctor $F$ is a final object in $\mathsf{Coalg}(F)$. Explicitly, this is an object $(\nu(F), \mathsf{out})$ such that for every coalgebra $(A,a)$ there is a unique map $! : A \rightarrow \nu(F)$ such that
$\mathsf{out} \circ ! = F(!) \circ a$
Final coalgebras generalise the notion of greatest fixed point for a monotone map, and satisfy a dual form of Lambek’s lemma.
Example: There is no final coalgebra for the powerset functor, as Lambek’s lemma for coalgebras would lead to a contradiction about cardinalities of powersets.
For our purposes in this post, we are not going to delve deeply into questions about the terminal coalgebra, although it does have great conceptual and mathematical significance. Informally, it can be seen as abstracting all possible behaviours that a coalgebra can exhibit, but we won’t pursue this remark now.
Instead, we’re going to ask a very naive question. Assuming they exist, the objects $\mu(F + X)$ yielded a monad, the free monad $F^*$. Do the objects of the form $\nu(F + X)$ also carry the structure of a monad?
This wouldn’t make a very satisfying end to this blog post if the answer were no, so unsurprisingly, the switch to considering terminal coalgebras does yield a monad. For an endofunctor $F : \mathcal{C} \rightarrow \mathcal{C}$, if all the required final coalgebras exist, we construct a monad in Kleisli form as follows:
1. The object map is $X \mapsto \nu(F +X)$.
2. By Lambek’s lemma $\mathsf{out} : \nu(F + X) \rightarrow F(\nu(F + X)) + X$ is an isomorphism. We take as our monad unit the composite $\mathsf{out}^{-1} \circ \kappa_2 : X \rightarrow \nu(F + X)$.
3. For a morphism $f : X \rightarrow \nu(F + Y)$, we can define its Kleisli extension $f^* : \nu(F + X) \rightarrow \nu(F + Y)$ as the unique $F$-algebra morphism such that $f^* \circ \eta = f$. That there is such an $f^*$ requires some more theory, that hopefully we can discuss another time. There is a more explicit description of this morphism, but unfortunately it is cumbersome to describe in this format.
Obviously we need to check the appropriate axioms hold for a Kleisli triple, but this works out. The resulting monad is known as the free completely iterative monad on $F$. Furthermore, it is hopefully not too hard to see that, similarly to the free monad construction, this monad “keeps variables separate”. That is, the resulting monad is another example of the ideal monads that we discussed in the previous post. There is a lot more to say about completely iterative monads, and what the various adjectives mean, but we shall defer that to a later post. Instead, we shall conclude with an example.
Example: Let $\Sigma$ be an algebraic signature. For the corresponding signature functor, the final coalgebras for all of the functors of the form $\Sigma + X$ exist. Therefore the free completely iterative monad on $\Sigma$ exists. The elements of $\nu(\Sigma + X)$ are $\Sigma$-trees. That is, trees such that:
1. Every internal node of arity $n$ is labelled by an $n$-ary operation symbol.
2. Every leaf node is labelled by either a constant symbol, or an element of $X$.
Note, these trees can be either finite or infinite, unlike in any of our previous examples, where we work with $\Sigma$-terms, which are equivalently finite $\Sigma$-trees. Intuitively, we should think of the extra elements in $\nu(\Sigma + X)$ as “infinite $\Sigma$-terms”.
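To make the distinction concrete, here is a small Python sketch (all names invented for the example) of possibly infinite $\Sigma$-trees for a signature with one binary operation. Subtrees are thunks, so infinite unfoldings are representable, and we can only ever observe a tree to a finite depth.

```python
class Var:
    """A leaf labelled by a variable."""
    def __init__(self, name):
        self.name = name

class Node:
    """An internal node for the binary operation; the subtrees are
    zero-argument functions (thunks), so they need not be finite."""
    def __init__(self, left, right):
        self.left, self.right = left, right

def infinite_tree():
    """The infinite tree t satisfying t = Node(t, t): an element of the
    final coalgebra that is not a finite Sigma-term."""
    return Node(infinite_tree, infinite_tree)

def prune(tree, depth):
    """Observe a lazy tree to a finite depth, eliding deeper structure."""
    if isinstance(tree, Var) or depth == 0:
        return "..." if isinstance(tree, Node) else tree.name
    return (prune(tree.left(), depth - 1), prune(tree.right(), depth - 1))
```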
Remark: In this post, we have simply introduced various initial algebras and final coalgebras. This sort of “guess and check” approach to finding the right construction is fine, but there are many more systematic strategies for establishing both the existence and a description of these objects. This is a large subject, and involves more technical background than we have introduced so far. A later post may return to this topic.
## Being Free is Certainly Ideal
In this post we shall introduce two particular classes of monad that arise in practice. As usual, we shall endeavour to keep things simple, often restricting our attention to an algebraic perspective on $\mathsf{Set}$ monads.
We shall write $\mathsf{Mnd}(\mathcal{C})$ for the category of monads on a category $\mathcal{C}$, and monad morphisms between them. We shall also write $[\mathcal{C},\mathcal{C}]$ for the category of endofunctors on $\mathcal{C}$ and natural transformations between them. There is an obvious forgetful functor:
$\mathsf{Mnd}(\mathcal{C}) \rightarrow [\mathcal{C},\mathcal{C}]$
This functor does not in general have a left adjoint, so we have to consider existence of free objects individually. Let $F$ be an endofunctor. The monad $F^*$ is the free monad over $F$ if there exists a universal natural transformation $\theta : F \Rightarrow F^*$ such that for every monad $\mathbb{T}$ and natural transformation $\alpha : F \Rightarrow \mathbb{T}$ there exists a unique monad morphism $\hat{\alpha} : F^* \Rightarrow \mathbb{T}$ such that
$\hat{\alpha} \circ \theta = \alpha$
Note this is the usual categorical notion of free object, relative to the forgetful functor above. It is important to emphasise that there are other forgetful functors from the category of monads for which we could consider free objects. This setting is normally what is intended when the term is used without qualification.
The notation $F^*$ is fairly common, and relates to the similar notation used for free monoids, as there are many analogies.
Example: For a set $E$, the exception monad $(-) + E$ is the free monad over the constant endofunctor $K_E$ mapping everything to $E$. For the usual encoding of coproducts in $\mathsf{Set}$ we have been using, the universal morphism is
$\theta(e) = (2,e)$
and for $\alpha : K_E \Rightarrow \mathbb{T}$ the unique fill-in morphism is:
$\hat{\alpha}(t) = \begin{cases} \eta(x), & \mbox{if } t = (1,x) \\ \alpha(e), & \mbox{if } t = (2,e) \end{cases}$
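We can transcribe this example directly into a Python sketch, using the coproduct encoding from the post: $(1, x)$ for values and $(2, e)$ for exceptions. The target monad $\mathbb{T}$ is represented only by its unit and by $\alpha$; in the test we let the list monad stand in for $\mathbb{T}$, purely for illustration.

```python
def theta(e):
    """The universal natural transformation K_E => (-) + E at X."""
    return (2, e)

def alpha_hat(unit, alpha, t):
    """The unique fill-in morphism at X: values go through the unit of
    the target monad, exceptions go through alpha : E -> T(X)."""
    tag, payload = t
    if tag == 1:
        return unit(payload)
    return alpha(payload)
```

Note that the triangle law $\hat{\alpha} \circ \theta = \alpha$ holds by construction: applying `alpha_hat` to `theta(e)` always lands in the second branch.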
Having seen the abstract definition, and one example, it is natural to look for a more systematic strategy yielding free monads. For the approach we shall adopt, we are going to need a bit more machinery first.
For an endofunctor $F : \mathcal{C} \rightarrow \mathcal{C}$, there is a category of endofunctor algebras $\mathsf{Alg}(F)$, with objects pairs $(A, a : F(A) \rightarrow A)$, and a morphism $h : (A,a) \rightarrow (B,b)$ is a $\mathcal{C}$-morphism $h : A \rightarrow B$ such that:
$h \circ a = b \circ F(h)$
Composition and identities are as in $\mathcal{C}$. These conditions are similar to those we saw in discussion of the Eilenberg-Moore category in an earlier post. In fact, the category $\mathcal{C}^{\mathbb{T}}$ is the full subcategory of $\mathsf{Alg}(\mathbb{T})$ consisting of the objects satisfying the unit and multiplication axioms for an Eilenberg-Moore algebra.
There is a forgetful functor $R : \mathsf{Alg}(F) \rightarrow \mathcal{C}$. If this functor has a left adjoint $L : \mathcal{C} \rightarrow \mathsf{Alg}(F)$, then this induces a monad $R \circ L$ on $\mathcal{C}$. Furthermore, we have an equivalence of categories
$\mathsf{Alg}(F) \simeq \mathcal{C}^{R \circ L}$
Recall that such an equivalence is certainly not automatic. This returns us to the topic of monadicity, which we touched upon in an earlier post. We have not introduced enough background on monadicity to discuss this properly yet, so we take this as a fact.
If a monad is of the form $R \circ L$ as in the discussion above, it is referred to as the algebraically free monad over $F$. We can now state the main facts that we need:
1. If $\mathbb{T}$ is the algebraically free monad over $F$, then it is the free monad over $F$.
2. If $\mathcal{C}$ is a complete category, and the free monad $F^*$ over an endofunctor $F : \mathcal{C} \rightarrow \mathcal{C}$ exists, then the forgetful functor $\mathsf{Alg}(F) \rightarrow \mathcal{C}$ has a left adjoint, and $\mathcal{C}^{F^*} \simeq \mathsf{Alg}(F)$.
So in the case of $\mathsf{Set}$ monads, as the base category is complete, we can reduce the task of finding free monads to that of finding algebraically free monads.
To exploit this connection, we note that every algebraic signature $\Sigma$ induces a signature endofunctor, which we shall also write as $\Sigma : \mathsf{Set} \rightarrow \mathsf{Set}$. This functor is given by the following coproduct:
$\Sigma(X) = \coprod_{o \in \Sigma} X^{\mathsf{ar}(o)}$
where $\mathsf{ar}(o)$ denotes the arity of operation symbol $o$.
Example: Let $\Sigma$ be a signature with a single binary operation and a constant. Then the corresponding endofunctor is:
$\Sigma(X) = X \times X + 1$
A $\Sigma$-algebra is simply a pair $(A, a : A \times A + 1 \rightarrow A)$. The function $a$ is equivalent to giving a binary function $A \times A \rightarrow A$, corresponding to the binary operation symbol, and an element of $A$, corresponding to the constant element. That is the same thing as specifying an algebra for the signature $\Sigma$.
More generally, it is hopefully clear that for a signature $\Sigma$, there is an isomorphism between the category of algebras for that signature, and the category of endofunctor algebras for the corresponding signature functor.
For a fixed signature, the free algebra over a set $X$ exists, and is given by the set of all $\Sigma$-terms over $X$. From the isomorphism above, this means that the left adjoint to the forgetful functor $\mathsf{Alg}(\Sigma) \rightarrow \mathsf{Set}$ exists, and so therefore does the free monad over $\Sigma$, with $\Sigma^*(X)$ the set of all terms over the signature. As usual, it is important to consider examples:
Example: As we have seen in earlier posts, the exception monad $(-) + E$ is induced by an equational presentation with a signature consisting of just constant elements for each exception, and no equations. The signature functor is then the constant functor $K_E$, and so as we saw earlier, the exception monad is the free monad over this functor.
Example: Consider a signature with a single binary operation. This induces a functor:
$\Sigma(X) = X \times X$
For the free monad over this functor, if we write $+$ for the binary operation, the set $\Sigma^*(X)$ consists of terms such as
$x, x_1 + x_2, (x_1 + x_2) + (x_3 + x_4), \ldots$
We can identify these with binary trees, with the leaves labelled by elements of $X$. So we have constructed a binary tree monad as the free monad over $(-) \times (-)$.
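A Python sketch of this binary tree monad (class names invented for the example): the unit builds a leaf, and the Kleisli extension substitutes a tree for each leaf, exactly the substitution of terms for variables.

```python
class Leaf:
    """A leaf labelled by an element of X."""
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        return isinstance(other, Leaf) and self.x == other.x

class Node:
    """An internal node for the binary operation +."""
    def __init__(self, l, r):
        self.l, self.r = l, r
    def __eq__(self, other):
        return isinstance(other, Node) and self.l == other.l and self.r == other.r

def unit(x):
    """The monad unit: a variable is a trivial term."""
    return Leaf(x)

def bind(t, f):
    """Kleisli extension: substitute the tree f(x) for each leaf x."""
    if isinstance(t, Leaf):
        return f(t.x)
    return Node(bind(t.l, f), bind(t.r, f))
```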
Example: Extending the previous example, for an arbitrary signature $\Sigma$, the resulting free monad will have $\Sigma^*(X)$ consisting of trees such that:
1. The leaves are labelled with elements of $X$, or constant symbols from the signature.
2. The internal nodes of arity $n$ are labelled with an operation symbol of the same arity.
Of course, we could consider $\mathsf{Set}$ endofunctors that are not of the form of signature functors. There are certainly plenty of examples where the left adjoint we require exists. In order to get a concrete description of the induced monad, we would need an explicit description of these adjoints. This requires more technical machinery than we have had chance to introduce so far, so we shall defer this topic to a later post.
The free monad constructions in the previous section have some nice properties. If we fix an algebraic signature, and consider the free monad over the corresponding signature functor, as we observed, the elements of $\Sigma^*(X)$ can be thought of as certain labelled trees. Looking a bit more carefully:
1. The trees in the image of the monad unit are the trivial trees with just a leaf labelled by some element of $X$.
2. The functor action $\Sigma^*(h)$ just relabels the leaves of trees, according to the function $h$.
3. The functor $\Sigma^*$ can be restricted to the non-trivial trees, as these are preserved by the functor action. We shall denote this functor $\Sigma^+$.
4. There is a natural isomorphism $\Sigma^*(X) \cong X + \Sigma^+(X)$.
5. Using this isomorphism, the monad unit is the left coproduct injection.
6. The monad multiplication restricts to non-trivial terms as a natural transformation $\mu' : \Sigma^+ \circ \Sigma^* \Rightarrow \Sigma^+$ in the sense that $\mu \circ \kappa_2 \Sigma^* = \kappa_2 \circ \mu'$.
So free monads “keep the variables separate” in a well behaved way. The categorical structure above allows us to separate the variables from the non-trivial terms. We now abstract these observations to recover another well-studied class of monads.
An ideal monad is a monad $(\mathbb{T},\eta,\mu)$ such that:
1. There exists an endofunctor $\mathbb{T}^+$ with $\mathbb{T} = \mathsf{Id} + \mathbb{T}^+$.
2. The unit is the left coproduct injection.
3. There exists $\mu' : \mathbb{T}^+ \circ \mathbb{T} \Rightarrow \mathbb{T}^+$ such that $\mu \circ \kappa_2 \mathbb{T} = \kappa_2 \circ \mu'$.
Example: Free monads are ideal monads. This is clearly true for those induced by signature functors as discussed above, but in fact this result holds for free monads yielded by more general constructions.
Obviously there would be no need for a separate notion if the only examples were free monads.
Example: The list monad we have discussed previously restricts to a non-empty list monad, and this monad is ideal. The list monad itself is not ideal: if we consider it as the free monoid monad, we can see this algebraically, as $x = x * 1$, so the unit is not a coproduct injection. Similarly, the non-ideal multiset monad restricts to the non-empty multiset monad, which is ideal.
We do need to be a little bit careful though. The non-empty finite powerset monad is not ideal. Algebraically this monad is the free join semilattice (without unit element) monad. The idempotence equation $x = x \vee x$ then ensures the monad unit cannot be a coproduct injection.
Algebraically, the ideal monads are the ones where no non-trivial equation of the form $x = t$ is provable, as we have seen in the example above.
There are other interesting classes of ideal monads, making conceptual use of this idea of being able to separate the non-trivial terms from the variables. These examples are a whole topic in themselves, and we shall reserve them for a later post.
Further background: For those interested in the mathematics of free monads, a good place to start is the final chapter of Barr and Well’s book “Toposes, Triples and Theories”. A good source for background on ideal monads is Ghani and Uustalu’s paper “Coproducts of Ideal Monads”.
Acknowledgements: Ralph Sarkis pointed out several typos in a previous version of this post, which has greatly improved the content.
Previous posts have focussed quite a lot on general theory. In this post, we are going to examine a couple of specific monads in a fair amount of detail. Both can be seen as dealing with computations parameterised by a notion of external state.
For any set $R$, there is a monad with:
• Endofunctor: The monad endofunctor is $(-)^R$.
• Unit: The unit at $X$ maps each element to a constant function: $x \mapsto (\lambda r.\; x)$.
• Multiplication: The multiplication acts as follows $\mu_X(f)(r) = f(r)(r)$.
This monad is called the reader or environment monad. The computational interpretation of this monad is that a function $X \rightarrow Y^R$ produces an output that depends on some state or environment it can read. Note the environment is fixed, computations cannot make modifications to it – this will be addressed later.
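The unit and multiplication above transcribe directly; a minimal Python sketch:

```python
def unit(x):
    """The unit: x maps to the constant function (lambda r. x)."""
    return lambda r: x

def mu(f):
    """The multiplication: mu(f)(r) = f(r)(r)."""
    return lambda r: f(r)(r)

def fmap(h, f):
    """The functor action of (-)^R: post-compose with h."""
    return lambda r: h(f(r))
```

For instance, `mu(unit(unit(5)))` is again the constant function returning 5, which is one of the monad unit laws checked pointwise.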
Aside: This monad arises via an adjunction involving slice categories. There is an obvious forgetful functor $U : \mathsf{Set} / R \rightarrow \mathsf{Set}$. This has a right adjoint $(-)^*$, with action on objects $A^* = A \times R \xrightarrow{\pi_2} R$. In turn, this functor has a further right adjoint $\Pi$. The composite $\Pi \circ (-)^*$ induces the reader monad.
At this level of abstraction, it is not immediately obvious that this adjunction induces the right monad. To at least convince ourselves that the endofunctors agree, we note there is then a series of bijections between hom sets:
1. $A \rightarrow \Pi(B^*)$
2. $A^* \rightarrow B^*$
3. $U(A^*) \rightarrow B$
We then note that $U(A^*) = A \times R$, and via Cartesian closure, functions $A \times R \rightarrow B$ bijectively correspond to functions $A \rightarrow B^R$. Therefore, $\Pi \circ (-)^* \cong (-)^R$. (The abstract argument generalises well beyond $\mathsf{Set}$).
Our main interest is to consider the reader monad from an algebraic perspective. To keep things as simple as possible, we are going to choose $R = \{ 0, 1 \}$. So computations have access to a single binary bit of external data, which can be set low (0) or high (1). As usual, we would like to have an algebraic presentation of this monad. It is natural to introduce a binary lookup operation $\mid$, such that informally we interpret
$x \mid y$
as a computation which looks up what to do depending on the environment value, continuing as $x$ if the bit is low, or $y$ if it is high. We now need to identify some suitable equational axioms that this operation should respect. An immediate idea is to require the operation to be idempotent, that is
$x \mid x \;=\; x$
Intuitively, if we do $x$ regardless of whether the environment bit is high or low, that is the same as behaving exactly as $x$. A second axiom that springs to mind is:
$(w \mid x) \mid (y \mid z) \;=\; w \mid z$
Here the idea is that:
1. If the bit is low, we will act as the left component of the left component.
2. If the bit is high, we will act as the right component of the right component.
In fact, it will turn out that these two axioms are exactly what we need. A simple consequence of the axioms is that:
$x \mid (y \mid z) \;=\; (x \mid x) \mid (y \mid z) \;=\; (x \mid z) \;=\; (x \mid y) \mid (z \mid z) \;=\; (x \mid y) \mid z$
So the operation $\mid$ is associative, and we can now avoid the annoyance of writing brackets everywhere. We also notice that:
$x \mid y \mid z \;=\; x \mid y \mid z \mid z \;=\; x \mid z$
From the above equations, if we see a sequence of $\mid$ operations, we can eliminate everything but the end points. Combining this with idempotence, it is hopefully not too hard to convince yourself that every term is equal to a unique term of the form $x \mid y$. So the equivalence classes of terms over $X$ can be identified with pairs of elements from $X$, equivalently elements of $X^{\{ 0, 1\}}$.
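We can sketch this normalisation in Python. Since every term collapses to leftmost $\mid$ rightmost, and to a single variable when the two coincide (by idempotence), terms represented as nested pairs (an encoding invented for the example, with variables as strings) normalise as follows:

```python
def leftmost(t):
    """The leftmost variable of a term built from the | operation."""
    return t if isinstance(t, str) else leftmost(t[0])

def rightmost(t):
    """The rightmost variable of a term built from the | operation."""
    return t if isinstance(t, str) else rightmost(t[1])

def normalise(t):
    """Normal form: the pair (leftmost, rightmost), collapsing x | x
    to x by idempotence."""
    l, r = leftmost(t), rightmost(t)
    return l if l == r else (l, r)
```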
Finally, we note that
$x \mid y \mid x \;=\; x \mid x \;=\; x$
In fact, the following are equivalent for a structure over a binary operator $\mid$:
1. The equations $x \mid x \;=\; x$ and $(w \mid x) \mid (y \mid z) \;=\; w \mid z$ hold.
2. The equations $x \mid (y \mid z) \;=\; (x \mid y) \mid z$ and $(x \mid y) \mid x \;=\; x$ hold.
For the bottom to top direction, using associativity to ignore brackets:
$x \mid z \;=\; x \mid y \mid x \mid z \mid y \mid z \;=\; x \mid y \mid z$
Using this ability to insert arbitrary elements twice, we calculate:
$w \mid z \;=\; w \mid x \mid z \;=\; w \mid x \mid y \mid z$
Finally, to show idempotence, reusing the first equation we proved:
$x \;=\; x \mid y \mid x \;=\; x \mid x$
The first set of equations we derived from computational intuitions, the second set of equations identify these structures as naturally occurring algebraic objects known as rectangular bands. (A band is an associative idempotent binary operation.) So the Eilenberg-Moore category of the reader monad on a bit is the category of rectangular bands and their homomorphisms.
We have already encountered the state monad, as the monad induced by the Cartesian closed structure on $\mathsf{Set}$. More precisely, we should refer to this as the global state monad.
Explicitly, for a fixed set of states $S$, this monad has:
• Endofunctor: $S \Rightarrow (- \times S)$.
• Unit: $\eta_X(x)(s) = (x,s)$.
• Multiplication: $\mu_X(f)(s) = \mathsf{let}\; (g,s') = f(s) \; \mathsf{in} \; g(s')$.
Again, we are going to keep things as simple as possible, and consider a one bit state space, so $S = \{ 0, 1 \}$. We would like to find an equational presentation for this monad. An obvious place to start is to extend the presentation for the one bit reader monad above, which gives us the infrastructure to decide how to proceed based on the current state. A natural next step is to add operations to manipulate the bit. We could add two unary update operations, say with $\mathop{\uparrow}(x)$ and $\mathop{\downarrow}(x)$ meaning set the state high (low) and proceed as $x$. We can actually make do with a single bit flip operation, which we shall denote $\mathop{\updownarrow}(x)$ with the intuitive reading of flip the state to its opposite value, and proceed as $x$. We quickly note that we can define:
$\mathop{\uparrow}(x) \;:=\; \mathop{\updownarrow}(x) \mid x \quad\mbox{ and }\quad \mathop{\downarrow}(x) \;:=\; x \mid \mathop{\updownarrow}(x)$
So the explicit bit setting operations are definable by only flipping the bit if it is the wrong value. Obviously, we expect that
$\mathop{\updownarrow}\mathop{\updownarrow}(x) = x$
as flipping the bit twice should leave us back where we started. We also expect flipping a bit to reverse how computation proceeds:
$\mathop{\updownarrow}(x \mid y) = \mathop{\updownarrow}(y) \mid \mathop{\updownarrow}(x)$
In fact, these equations give a presentation of the one bit state monad. Using the chosen equations, every term is equal to one of the form
$s \mid t$
where both $s$ and $t$ are either a variable, or of the form $\mathop{\updownarrow}(x)$. This encodes a function that takes a bit, chooses the left or right component accordingly, returns a variable, and applies a flip to the state if appropriate. That is, an element of $\{ 0, 1 \} \Rightarrow X \times \{ 0, 1 \}$.
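This interpretation can be sketched as a small evaluator in Python. The term encoding is invented for the example: a variable is a string, a pair `(t0, t1)` is the lookup $t_0 \mid t_1$, and `("flip", t)` is the flip operation (so variables must not be named `"flip"`).

```python
def ev(t):
    """Evaluate a term to an element of {0,1} -> X x {0,1}."""
    if isinstance(t, tuple) and t[0] == "flip":
        sub = ev(t[1])
        return lambda s: sub(1 - s)      # flip the bit, then continue
    if isinstance(t, tuple):
        return lambda s: ev(t[s])(s)     # lookup: branch on the bit
    return lambda s: (t, s)              # variable: return it, state unchanged

# The derived updates from the post:
def up(t):                               # set high := flip(t) | t
    return (("flip", t), t)

def down(t):                             # set low := t | flip(t)
    return (t, ("flip", t))
```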
## Generalising
To be more realistic, we really should consider larger state spaces or environments. Algebraically, this means instead of a binary lookup operation $\mid$ specifying which of two choices to take depending on a bit value, for an environment with $n$ possible values, we require an $n$-ary operation. As infix notation is no longer appropriate, we shall write lookup as:
$l(x_1,\ldots,x_n)$
The two axioms that induce the reader monad are the obvious extensions of those for a binary bit.
You may be wondering what to do if the environment is infinite in size. There are well-defined generalisations of universal algebra where operations of these bigger arities are permitted. As long as the maximum size of the operations is bounded, this all pretty much works as in the finite case, so we shall stick with that to keep the notation clearer.
For the state monad, we also need to think about how to change the state. Generalising the bit flip operation seems unnatural, for example should we rotate through the states in some arbitrary order? A better choice is to make the update operations primitive. Without loss of generality, it is notationally convenient to fix a state space $S = \{1,\ldots,n\}$ of natural numbers. We introduce a unary operation $u_i$ for each $i \in S$. The intuitive reading is that $u_i(x)$ updates the state to $i$ and continues as $x$. The equations we require, with their intuitive readings are:
1. $l(u_1(x),\ldots, u_n(x)) = x$. If we don’t depend on the state, we can ignore it.
2. $l(l(x_{1,1},\ldots,x_{1,n}),\ldots,l(x_{n,1},\ldots,x_{n,n})) = l(x_{1,1},x_{2,2},\ldots,x_{n,n})$. If the state isn’t modified, we keep making the same choices.
3. $u_i(u_j(x)) = u_j(x)$. A second state change overwrites the first.
4. $u_i(l(x_1,\ldots,x_n)) = u_i(x_i)$. Update decides a subsequent lookup.
You may have expected the first axiom to be
$l(x,\ldots,x) = x$
In fact, this equation is derivable using the first two. The binary case is:
$x = l(u_1(x), u_2(x)) = l(l(u_1(x), u_2(x)),l(u_1(x),u_2(x))) = l(x,x)$
The general calculation is just notationally a bit harder to read.
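As a sanity check, these axioms and the derived equation all hold in the intended state-monad interpretation. A Python sketch for $n = 2$ (states $\{1, 2\}$; the encodings are invented for the example), evaluating terms to functions $S \rightarrow X \times S$:

```python
def l(t1, t2):
    """Lookup: branch on the current state."""
    return lambda s: (t1 if s == 1 else t2)(s)

def u(i, t):
    """Update: set the state to i, then continue as t."""
    return lambda s: t(i)

def var(x):
    """A variable: return it, leaving the state unchanged."""
    return lambda s: (x, s)
```

The test checks axiom 1, the derived equation $l(x,x) = x$, and axiom 4, each pointwise over both states.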
The equations above appear in “Notions of Computation Determine Monads” by Plotkin and Power. In fact, they consider a further generalisation, reflecting a more realistic computational setting. We now consider a situation in which we have a set of memory addresses $A$, and each address has its own independent state value.
If we assume each memory location can take $n$ possible values, we require an $n$-ary lookup operation $l_a$ for each $a \in A$, and update operations $u_{a,i}$, intuitively setting address $a$ to value $i$. We require the previous equations for each address:
1. $l_a(u_{a,1}(x),\ldots, u_{a,n}(x)) = x$. If we don’t depend on the state at an address, we can ignore it.
2. $l_a(l_a(x_{1,1},\ldots,x_{1,n}),\ldots,l_a(x_{n,1},\ldots,x_{n,n})) = l_a(x_{1,1},x_{2,2},\ldots,x_{n,n})$. If the state at an address isn’t modified, we keep making the same choices when looking up at that address.
3. $u_{a,i}(u_{a,j}(x)) = u_{a,j}(x)$. A second state change at the same address overwrites the first.
4. $u_{a,i}(l_a(x_1,\ldots,x_n)) = u_{a,i}(x_i)$. Update decides a subsequent lookup at the same address.
We also require some additional axioms, which encode that the states at different addresses don’t interfere with each other.
1. $l_a(l_{a'}(x_{1,1},\ldots,x_{1,n}),\ldots,l_{a'}(x_{n,1},\ldots,x_{n,n})) = l_{a'}(l_a(x_{1,1}, \ldots, x_{n,1}),\ldots,l_a(x_{1,n},\ldots,x_{n,n}))$ where $a \neq a'$. Choices based on different addresses commute with each other.
2. $u_{a,i}(u_{a',j}(x)) = u_{a',j}(u_{a,i}(x))$ where $a \neq a'$. Consecutive updates at different addresses commute with each other.
3. $u_{a,i}(l_{a'}(x_1,\ldots,x_n)) = l_{a'}(u_{a,i}(x_1),\ldots, u_{a,i}(x_n))$ where $a \neq a'$. Updates and lookups at different addresses commute with each other.
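These commutation equations can be checked in a small interpreter over stores. The Python sketch below represents a store as a dict from addresses to values (an encoding chosen for the example), and makes the independence of different addresses visible:

```python
def lookup(a, branches):
    """l_a: branch on the value currently held at address a.
    branches maps each possible value to a continuation."""
    return lambda store: branches[store[a]](store)

def update(a, i, t):
    """u_{a,i}: write value i at address a, then continue as t."""
    return lambda store: t({**store, a: i})

def var(x):
    """A variable: return it, leaving the store unchanged."""
    return lambda store: (x, store)
```

For instance, updates at distinct addresses commute: writing to `"a"` then `"b"` produces the same final store as writing to `"b"` then `"a"`.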
The resulting monad is the state monad on the state space $\{ 1,\ldots,n \}^A$, assigning a value to each address. Although the algebraic perspective on the reader and state monads was somewhat involved, this does pay off. For example, we now have the data needed to introduce algebraic operations and effect handlers for these monads.
As usual, there are lots more details to take care of in realistic work, such as moving beyond the category $\mathsf{Set}$, but the underlying intuitions transfer well. Interested readers should read the Plotkin and Power paper mentioned above, which also covers more advanced topics, such as a monad for local state.
## The ins and outs of monads
In previous posts we have seen a lot of the basic mathematical infrastructure of monads. The aim of this post is to adopt a computational view on monads, viewing $\mathbb{T}(X)$ as effectful computations over an object $X$. From this perspective, we shall consider two natural questions:
1. How can we build such computations? In other words, how do we get things into $\mathbb{T}(X)$?
2. What can we do with computations once we have built them? In other words, how do we get things back out of $\mathbb{T}(X)$?
There is a lot of discussion of these topics in the literature. As usual, we will try and keep things simple, restricting attention to simple monads on $\mathsf{Set}$, and concentrating on the key ideas.
## Building computations
So far, the only way we have seen to build elements of $\mathbb{T}(X)$ is to apply the monad unit $\eta_X$.
Example: For the exception monad, for a set of exceptions $E$, the value $\eta(x) = (1,x) \in X + E$ is a trivial computation that returns the value $x$, and doesn’t throw an exception.
Example: The finite powerset monad $\mathbb{P}$ can be seen as a form of bounded non-determinism. $\eta(x) = \{ x \} \in \mathbb{P}(X)$ is a trivially non-deterministic computation that always results in the value $x$.
Example: The list monad $\mathbb{L}$ can be seen as a form of non-determinism which can be easier to implement in a programming language. It is unusual as there is an order on computations, and repeated values are allowed. Obviously another perspective is to see such computations as living in a list data structure. Again $\eta(x) = [x]$ is the trivial singleton list containing only the value $x$.
The intuition we derive from these examples is that $\eta(x)$ yields a trivially effectful computation in some suitable sense. To get any real use out of these computational effects, we need to be able to build less boring examples as well. To solve this problem, as usual we shall adopt an algebraic perspective. Assume we have an equational presentation $(\Sigma, E)$. An $n$-ary operation $o \in \Sigma$ induces a function $\mathbb{T}(X)^n \rightarrow \mathbb{T}(X)$ as follows:
$([t_1],\ldots,[t_n]) \mapsto [o(t_1,\ldots,t_n)]$
Continuing the examples above:
Example: The exception monad has a presentation consisting of a constant $e$ for each exception in $E$. These induce functions $\mathsf{raise}_e : 1 \rightarrow \mathbb{T}(X)$ with action $\star \mapsto (2,e)$, which we can think of as raising the exception $e$.
You may wonder where the notion of catching or handling an exception appears. This is a good question, and will be addressed in the next section.
Example: The finite powerset monad has a presentation with a binary operation $\vee$ and a constant element $\bot$. The constant induces a function $\bot : 1 \rightarrow \mathbb{P}(X)$ with action $\star \mapsto \emptyset$, which we can think of as picking out a diverging computation.
The binary operation induces an operation $\vee : \mathbb{P}(X)^2 \rightarrow \mathbb{P}(X)$ with action $(U,V) \mapsto U \cup V$, which takes two non-deterministic computations, and combines their possible behaviours. For example $\eta(x_1) \vee \eta(x_2)$ is a computation yielding either $x_1$ or $x_2$.
Example: Similarly to the previous example, the list monad has a presentation with binary operation $\times$, and a constant element $1$. The constant induces a function $1 \rightarrow \mathbb{L}(X)$ with action $\star \mapsto []$.
The binary operation induces a function $\mathbb{L}(X)^2 \rightarrow \mathbb{L}(X)$ with action $([x_1,\ldots,x_m],[x_{m + 1}, \ldots, x_{n}]) \mapsto [x_1,\ldots,x_n]$ concatenating two lists. We can interpret these operations as simply manipulating data structures, or more subtly as a form of non-determinism in which the operations respect the order on choices.
Rather than describe these operations directly in terms of substitutions, it would be better to phrase things in terms of the available monad structure. To do so, we abuse notation and let $m$ denote the set $\{1,\ldots,m \}$. A family of $m$ equivalence classes of terms over $X$, $([t_i])_{1 \leq i \leq m}$, can be identified with a morphism $t: m \rightarrow \mathbb{T}(X)$, and a single $m$-ary term with a map $o : 1 \rightarrow \mathbb{T}(m)$. The composite $t^* \circ o : 1 \rightarrow \mathbb{T}(X)$, where $t^*$ denotes the Kleisli extension of $t$, corresponds to the substitution we are interested in:
$([t_1],\ldots,[t_m]) \mapsto [o(t_1,\ldots,t_m)]$
In fact, we have generalised slightly, as $o$ is now an arbitrary term, not just an operation appearing in the signature.
Of course, we expect these mappings $\mathbb{T}(X)^n \rightarrow \mathbb{T}(X)$ induced by Kleisli morphisms to behave in a suitably uniform way. That is, they should be natural in some sense. This actually requires a modicum of care to get right. We should really consider our mapping as being of type $U_{\mathbb{T}}(X)^n \rightarrow U_{\mathbb{T}}(X)$. Here, $U_{\mathbb{T}} : \mathsf{Set}_{\mathbb{T}} \rightarrow \mathsf{Set}$ is the forgetful functor from the Kleisli category. This has action on objects $X \mapsto \mathbb{T}(X)$, so this rephrasing makes sense. We then note that the induced maps:
$U_{\mathbb{T}}^n(X) \rightarrow U_{\mathbb{T}}(X)$
are natural. This is sometimes phrased as being natural with respect to Kleisli morphisms. They also trivially interact well with the strength morphisms in $\mathsf{Set}$. In more general categories, there is a bijection between:
• Families of morphisms $U_{\mathbb{T}}^n(X) \rightarrow U_{\mathbb{T}}(X)$ Kleisli natural in $X$, and interacting well with monad strength. These are referred to as algebraic operations.
• Morphisms $1 \rightarrow \mathbb{T}(m)$, referred to as generic effects.
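For the finite powerset example above, this correspondence can be sketched in Python (encodings invented for the example): the generic effect for $\vee$ is the element $\{1,2\} \in \mathbb{P}(2)$, and the algebraic operation is recovered by applying Kleisli extension to it.

```python
def kleisli_ext(t):
    """Kleisli extension for the finite powerset monad: a function
    t : m -> P(X) extends to P(m) -> P(X) by union over the input."""
    return lambda U: set().union(*(t(u) for u in U))

# The generic effect for binary choice: a morphism 1 -> P(2),
# i.e. the element {1, 2} of P({1, 2}).
generic_or = {1, 2}

def vee(U, V):
    """The algebraic operation recovered from the generic effect:
    package the arguments as t : {1,2} -> P(X) and extend."""
    t = lambda i: U if i == 1 else V
    return kleisli_ext(t)(generic_or)
```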
To give a denotational semantics for a serious programming language, a lot more details need filling in. The category $\mathsf{Set}$ is not suitable for realistic work, but is certainly a helpful way to build up the basic intuitions.
## Evaluating computations
Now we have some tools for building up elements of $\mathbb{T}(X)$, what can we do with them? The unit and multiplication of a monad have types $\mathsf{Id} \Rightarrow \mathbb{T}$ and $\mathbb{T}^2 \Rightarrow \mathbb{T}$, so they give us no way “out of the monad”.
What we are looking for is a systematic way of building well-behaved maps of type $\mathbb{T}(X) \rightarrow Y$ for some $Y$. The key idea is to recall the Eilenberg-Moore adjunction. For a set $X$, the free algebra over $X$ is:
$F^{\mathbb{T}}(X) = (\mathbb{T}(X), \mu_X)$
The universal property of this adjunction means that if we can identify another Eilenberg-Moore algebra $(Y,\xi)$, then each function
$h : X \rightarrow Y$
bijectively corresponds to an algebra homomorphism:
$F^{\mathbb{T}}(X) \xrightarrow{\hat{h}} (Y,\xi)$
That is, morphisms
$\mathbb{T}(X) \rightarrow Y$
respecting the algebraic structure. It is worth considering the roles of these components:
1. The function $h : X \rightarrow Y$ specifies how values should be interpreted.
2. The algebra $(Y, \xi)$ specifies how the structure used to build up an effectful computation should be evaluated, in a manner respecting the structure encoded by the monad.
As usual, this is probably best understood by considering some examples.
Example: For the exception monad over $E$, an Eilenberg-Moore algebra $(Y, \xi : Y + E \rightarrow Y)$ must act as the identity on the left component of the coproduct, and send each element of the right component to any value of its choosing in $Y$. In other words, we specify a default value to take for each of the exceptions that might be raised. Given a function $h : X \rightarrow Y$, the induced algebra morphism will map values according to $h$, and otherwise choose the value specified by $\xi$ when it encounters a raised exception.
Example: For the finite powerset monad $\mathbb{P}$, giving an Eilenberg-Moore algebra is equivalent to specifying a join semilattice structure. For example, we can consider the Eilenberg-Moore algebra on the natural numbers $\mathbb{N}$ specified by taking the maximum of pairs of elements, and bottom value $0$. The resulting homomorphism $\mathbb{P}(\mathbb{N}) \rightarrow \mathbb{N}$ induced by the identity will map a finite set to its maximum value (inevitably defaulting to $0$ if the set is empty).
Example: The pattern for the list monad is similar. An Eilenberg-Moore algebra is a monoid, so we must specify a multiplication and unit element. For example if $X$ is the set of two by two matrices over the natural numbers, this carries a monoid structure under matrix multiplication. The identity on $X$ then induces a map $\mathbb{L}(X) \rightarrow X$ that multiplies a list of matrices together from left to right. Note that this example fundamentally exploits the fact that the multiplication operation is not forced to be commutative.
The ideas in this section are the basics of what are known as effect handlers. As with algebraic operations, to produce a denotational semantics for a realistic programming language, and more complex handlers, there are many details to take care of, and we must move beyond $\mathsf{Set}$. However, this simplified setting is sufficient to illustrate the key ideas, without dealing with distracting technical details.
## Summary
We have seen the basic mathematical machinery to build effectful computations within a monad, and also how to evaluate these computations to values. The main points are:
• The morphisms that allow us to build elements of $\mathbb{T}(X)$ arise naturally from an algebraic presentation of the monad.
• The morphisms that allow us to evaluate computations in $\mathbb{T}(X)$ require us to make choices as to how that evaluation should proceed, specifically a suitable Eilenberg-Moore algebra, and then exploit the universal property of free algebras.
The literature in this area contains many nice examples, and is enjoyable to read. For algebraic operations and generic effects, search for the work of Plotkin and Power, and for effect handlers, the work of Pretnar, Plotkin and Bauer is a good place to start reading. There is a lot more to these topics than the details we have sketched here, so it is well worth exploring further.
We have also been limited in our example by the relatively simple monads we have introduced so far. We will see more interesting monads from a computational perspective in later posts.
## Commutativity Algebraically
Recall the substitution notation we have been using:
$t[t_x / x \mid x \in X]$
denotes the term $t$, with each $x \in X$ simultaneously replaced by the term $t_x$. For an equational presentation $(\Sigma, E)$, inducing a monad $\mathbb{T}$, consider a pair:
$([s],[t]) \in \mathbb{T}(X) \times \mathbb{T}(Y)$
where $[t]$ denotes an equivalence class with representative term $t$. We can then write the action of the first double strength map in terms of representatives and substitutions as:
$\mathsf{dst}([s],[t]) = [t[s[(x,y)/x \mid x \in X] / y \mid y \in Y]]$
and the second:
$\mathsf{dst}'([s],[t]) = [s[t[(x,y)/y \mid y \in Y] / x \mid x \in X]]$
Therefore, the resulting monad is commutative if and only if for all $s,t$, the following equality is provable in equational logic:
$t[s[(x,y)/x \mid x \in X] / y \mid y \in Y] = s[t[(x,y)/y \mid y \in Y] / x \mid x \in X]$
We shall call this the commutativity condition.
The substitutions can be made a bit easier to read with a change of notation. For two terms $s,t$, let $x_1,\ldots,x_m$ and $y_1,\ldots,y_n$ be a choice of enumeration of the variables appearing in the two terms. We can then write them as
$s(x_1,\ldots, x_m) \quad\mbox{ and }\quad t(y_1,\ldots,y_n)$
and writing substitution in the natural way, the commutativity condition becomes:
$t(s((x_1,y_1),\ldots,(x_m,y_1)),\ldots, s((x_1,y_n),\ldots,(x_m,y_n))) = s(t((x_1,y_1),\ldots,(x_1,y_n)),\ldots,t((x_m,y_1),\ldots,(x_m,y_n)))$
That’s an awful lot of formal notation and brackets to deal with. Let’s consider some implications of this condition to build intuition for what it means in practice.
Example: Consider a presentation with binary operations $+$ and $\times$. The action of the double strengths on $[x + x'] \in \mathbb{T}(X)$ and $[y \times y'] \in \mathbb{T}(Y)$ are:
$\mathsf{dst}([x + x'], [y \times y']) = [((x,y) + (x',y)) \times ((x,y') + (x',y'))]$
and
$\mathsf{dst}'([x + x'], [y \times y']) = [((x,y) \times (x,y')) + ((x',y) \times (x',y'))]$
If the monad $\mathbb{T}$ is commutative, the following equality must be provable in equational logic:
$((x,y) + (x',y)) \times ((x,y') + (x',y')) = ((x,y) \times (x,y')) + ((x',y) \times (x',y'))$
As a special case of this observation, any monad presented by a binary operation must satisfy the following equation:
$((x,y) + (x',y)) + ((x,y') + (x',y')) = ((x,y) + (x,y')) + ((x',y) + (x',y'))$
If $+$ is associative, this boils down to:
$(x,y) + (x',y) + (x,y') + (x',y') = (x,y) + (x,y') + (x',y) + (x',y')$
The only difference between the two terms is the order of the middle two summands, so this equality will certainly hold for any associative and commutative binary operation. Since this condition is satisfied by many natural algebraic structures, it exhibits only a very weak relationship between commutative operations and commutative monads.
An instructive boundary case is the following.
Example: Consider two terms $s,t$ in which no variables appear. These are constants in our equational theory. The commutativity condition implies that:
$s = t$
as all the variable substitutions become trivial. Therefore any equational presentation of a commutative monad can have at most one distinct constant term. We have already encountered this phenomenon. The exception monad is only commutative when there is at most one exception constant.
“Constant counting” can be a quick way to discount the possibility that a monad for an equational presentation is commutative. For example the monads corresponding to commutative rings or rigs (semirings) cannot be commutative as they have distinct constant symbols for the additive and multiplicative structures. These are further natural examples of monads with all the binary operations in the signature commutative, but the resulting monads are not commutative.
Another simple case is worth considering.
Example: Let $s \in \mathbb{T}(X)$ and $t \in \mathbb{T}(Y)$ be terms in which only one variable appears, say $x_0$ and $y_0$ respectively. Then the left hand side of the commutativity condition is:
$t[s[(x,y)/x \mid x \in X] / y \mid y \in Y] = t(s((x_0,y_0)))$
and the right hand side is:
$s[t[(x,y)/y \mid y \in Y] / x \mid x \in X] = s(t((x_0,y_0)))$
So for $\mathbb{T}$ to be commutative, each pair of unary terms must satisfy $s(t(x)) = t(s(x))$.
As an application of this special case, we introduce another commonly considered monad. For a fixed monoid $(M,\times,1)$ the writer monad has:
• Endofunctor: The endofunctor is the product $M \times (-)$.
• Unit: $\eta(x) = (1,x)$.
• Multiplication: $\mu(m,(n,x)) = (m \times n, x)$.
Computationally, this monad can be seen as encoding computations that also record some additional output such as logging. The monoid structure combines the outputs from successive computations. Algebraically, it corresponds to actions of the monoid $M$.
The writer monad has an equational presentation with one unary operational symbol for each element of the monoid, and equations:
• $1(x) = x$.
• $p(q(x)) = r(x)$ whenever $p \times q = r$ in the monoid.
Using our observation above, the writer monad is commutative if and only if the monoid $M$ is commutative. So in this case, we do see a strong connection between the monadic and algebraic notions of commutativity.
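A small Python sketch (with my own encoding of an element of $M \times X$ as a pair) makes the dependence on the monoid explicit: the two double strengths combine the two outputs in opposite orders.

```python
def dst(op, c1, c2):
    """First double strength: the left computation's output comes first."""
    (m, x), (n, y) = c1, c2
    return (op(m, n), (x, y))

def dst_prime(op, c1, c2):
    """Second double strength: the right computation's output comes first."""
    (m, x), (n, y) = c1, c2
    return (op(n, m), (x, y))
```

With string concatenation (a non-commutative monoid) the two maps disagree; with addition on numbers they coincide, matching the claim above.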
Another useful boundary case is to consider what the commutativity condition means for variables.
Example: Consider a variable $x_0$ and an arbitrary term $t$. The left hand side of the commutativity condition is:
$t[x_0[(x,y)/x \mid x \in X] / y \mid y \in Y] = t[(x_0,y)/y \mid y \in Y]$
and the right hand side is:
$x_0[t[(x,y)/y \mid y \in Y] / x \mid x \in X] = t[(x_0,y)/y \mid y \in Y]$
So variables always satisfy the commutativity condition with respect to any other term. With hindsight, maybe we should not find this too surprising.
We will now consider an important example, which will point the way to getting a better handle on the unpleasant looking general commutativity condition we deduced above.
Example: We now consider an equational presentation with a constant term $0$ and a binary term $x + x'$ and a unary term $h$. The equations yielded by the commutativity condition for the pairs $([h],[0])$ and $([h],[x + x'])$ are:
$h(0) = 0 \quad\mbox{ and }\quad h((x,y) + (x',y)) = h((x,y)) + h((x',y))$
We can simplify the second condition by renaming variables, and we arrive at the conditions:
$h(0) = 0\quad\mbox{ and }\quad h(x + x') = h(x) + h(x')$
These conditions should hopefully look familiar: they are exactly the equations we require for $h$ to be a homomorphism with respect to $+$ and $0$.
We now aim to make the connection with homomorphisms from the previous example precise by making two observations:
1. For positive natural $k$ and set $Z$, the term $s(x_1,\ldots,x_m)$ induces an $m$-ary operation on $\mathbb{T}(Z)^k$ with action $(([t_{1,1}],\ldots,[t_{1,k}]),\ldots,([t_{m,1}],\ldots,[t_{m,k}])) \mapsto ([s(t_{1,1},\ldots,t_{m,1})],\ldots,[s(t_{1,k},\ldots,t_{m,k})])$.
2. Similarly, the term $t(y_1,\ldots,y_n)$ induces an $n$-ary function $\mathbb{T}(Z)^{n} \rightarrow \mathbb{T}(Z)$ with action $([t_1],\ldots, [t_n]) \mapsto [t(t_1,\ldots,t_n)]$.
The homomorphism condition is equivalent to saying that the $n$-ary function $\mathbb{T}(X \times Y)^{n} \rightarrow \mathbb{T}(X \times Y)$ induced by $t$ is a homomorphism with respect to the $m$-ary operation induced by $s$. In this sense, the commutativity conditions can be rephrased as all the terms are homomorphisms with respect to each other.
From the algebraic perspective a monad is commutative in the sense that all terms can be commuted past each other as homomorphisms.
## Being commutative has nothing to do with being commutative
In explanations about commutative monads, it is common to see a passing remark such as “as we’d expect, the Abelian monoid monad is commutative”. Now I might be over-interpreting these comments, but they seem to imply that the monad being commutative is related to the algebraic structure having a commutative binary operation. Now it is true that the Abelian monoid (a.k.a. multiset) monad is commutative, and several other commutative monads are induced by algebraic structures with commutative binary operations, but how well does this intuition hold together more generally?
Firstly, we consider if being a commutative monad implies commutativity of the binary operations in equational presentations.
Counterexample: We saw last time that the finite distribution monad is commutative. This monad is isomorphic to the monad presented by a family of binary operations $\{+^r \mid r \in (0,1) \}$ satisfying the following equations:
• Idempotence: For all $r \in (0,1)$, $x +^r x = x$.
• Pseudo-commutativity: For all $r \in (0,1)$, $x +^r y = y +^{1 - r} x$.
• Pseudo-associativity: For all $r,s \in (0,1)$, $x +^r (y +^s z) = (x +^{\frac{r}{r + (1-r)s}} y) +^{r + (1-r)s} z$
Algebras for this theory are called convex algebras. Intuitively, the operation $x +^r y$ forms the probabilistic mixture $rx + (1-r)y$ and the axioms can be understood from this point of view. The pseudo-associativity axiom is particularly unpleasant to look at, but is easier to understand in terms of re-bracketing weighted binary combinations.
The axioms presenting convex algebras don’t have explicit commutativity equations for all the binary operations, but they might be derivable. To make sure this isn’t the case, we consider the concrete description of the free algebras that we get from the isomorphism to the finite distribution monad.
Specifically, the free algebra on $X$ has as universe the finitely supported convex sums over $X$, and the operations are the obvious weighted sums:
$(\sum_i p_i x_i) +^r (\sum_j q_j y_j) = \sum_i (r \times p_i) x_i + \sum_j ((1-r) \times q_j) y_j$
This operation is clearly not commutative, except when $r = \frac{1}{2}$. So there is a commutative monad presented by an algebraic theory with (many different) non-commutative algebraic operations.
(I thank Maaike Zwart for suggesting this natural concrete counterexample)
What about the other direction, does an equational presentation where all the binary operations are commutative imply commutativity?
Counterexample: We have already seen that the exception monad $(-) + E$ is only commutative if $E$ has at most one element. The exception monad is isomorphic to the monad for an equational theory with constant symbols the elements of $E$, and no equations. So there are both commutative and non-commutative monads with presentations containing no binary operations at all.
The previous counterexample is a bit too trivial to be satisfactory, as there are no binary operations involved at all, and we’re quantifying over the empty set when making statements about commutativity in the algebraic sense.
As a second attempt, we shall synthesise an equational presentation in which the only components involved are commutative binary operations.
Counterexample: Consider an equational presentation with two binary operations $\oplus$ and $\otimes$, with the only equations being those requiring both operations are commutative. We consider the action of the double strengths on the equivalence classes $[w \oplus x]$ and $[y \otimes z]$. For the first:
$\mathsf{dst}([w \oplus x], [y \otimes z]) = [((w,y)\oplus(x,y)) \otimes ((w,z)\oplus(x,z))]$
and for the second:
$\mathsf{dst}'([w \oplus x], [y \otimes z]) = [((w,y)\otimes(w,z)) \oplus ((x,y)\otimes(x,z))]$
If this monad is commutative, we must have equality of equivalence classes:
$[((w,y)\oplus(x,y)) \otimes ((w,z)\oplus(x,z))] = [((w,y)\otimes(w,z)) \oplus ((x,y)\otimes(x,z))]$
We note that the provable equalities $t = t'$ in our theory must have the same number of occurrences of $\oplus$ and $\otimes$ on both sides. Therefore the two equivalence classes are distinct, and the monad is not commutative.
So we have seen that there is a monad that is not commutative, presented by an equational theory containing only commutative binary operations.
The previous two counterexamples reflect the fact that commutative binary operations should not have anything essential to do with monad commutativity. Monads presented by theories with just constants may or may not be commutative. Theories involving just binary operations may or may not be commutative. Obviously there are other arities of operation to consider, but by this point hopefully it’s clear that there’s no tight relationship.
In fact, a monad being commutative does imply that certain equations must hold. In the case of binary operations, commutativity of the operation is enough to satisfy some of these equations. In simple cases, for example theories with a single binary operation, this might explain some of the misleading patterns that emerge.
We shall examine commutative monads from an algebraic perspective in a later post, and see exactly what it is that can be commuted that inspires the terminology.
We saw the two double strength natural transformations in the previous post:
$\mathsf{dst}, \mathsf{dst}' : \mathbb{T}(A) \otimes \mathbb{T}(B) \rightarrow \mathbb{T}(A \otimes B)$
A commutative monad is a strong monad for which $\mathsf{dst} = \mathsf{dst}'$. We saw last time that the list monad is not commutative, but the powerset monad is. In this post we will restrict ourselves to examining some more instructive examples. This will help build our intuitions, and the examples lay the groundwork for discussions in later posts.
Example: For a set $E$, there is a monad with:
• Endofunctor: $(-) + E$.
• Unit: The unit maps an element into the left component of the coproduct $x \mapsto (1,x)$.
• Multiplication: The multiplication $\mu_X : (X + E) + E \rightarrow X + E$ does the “obvious thing”: $(1,(1,x)) \mapsto (1,x)$, $(1,(2,e)) \mapsto (2,e)$ and $(2,e) \mapsto (2,e)$.
The monad is sometimes referred to as the exception monad. Computationally, we can interpret a Kleisli morphism $X \rightarrow Y + E$ as a function that transforms elements of $X$ to elements of $Y$, but may return error or exception values captured by $E$. The first double strength for this monad is defined by the following cases:
1. $\mathsf{dst}((1,x), (1,y)) = (1, (x,y))$.
2. $\mathsf{dst}((2,e),(1,y)) = (2,e)$.
3. $\mathsf{dst}((1,x),(2,e)) = (2,e)$.
4. $\mathsf{dst}((2,e_1),(2,e_2)) = (2,e_2)$.
Notice that exception values are preferred, but there is a rather arbitrary choice to be made in the fourth case. The second double strength agrees with the first, except for the final case, where it makes the other choice of exception to prefer:
$\mathsf{dst}'((2,e_1),(2,e_2)) = (2,e_1)$.
So we see that the exception monad is only commutative if there is at most one exception. The special case with exactly one exception is sometimes referred to as the maybe monad, as computationally it encodes functions that may fail.
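The case analysis above can be transcribed directly into Python, again with a toy tagged-pair encoding `("val", x)` / `("exc", e)` of my own:

```python
def dst(c1, c2):
    """First double strength: when both sides fail, prefer the right exception."""
    if c2[0] == "exc":
        return c2
    if c1[0] == "exc":
        return c1
    return ("val", (c1[1], c2[1]))

def dst_prime(c1, c2):
    """Second double strength: when both sides fail, prefer the left exception."""
    if c1[0] == "exc":
        return c1
    if c2[0] == "exc":
        return c2
    return ("val", (c1[1], c2[1]))
```

The two functions agree everywhere except when both arguments are exceptions — which is exactly why commutativity requires at most one exception.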
Example: The multiset monad is commutative. To describe this, we shall introduce the notation
$\{ x_1 : k_1,\ldots, x_n : k_n \}$
for a multiset where element $x_i$ appears with multiplicity $k_i$. The action of both double strength maps sends the pair of multisets:
$(\{ x_1 : k_1, \ldots, x_n : k_n \}, \{ y_1 : l_1,\ldots, y_m : l_m \})$
to the multiset:
$\{ (x_i , y_j) : k_i \times l_j \mid 1 \leq i \leq n, 1 \leq j \leq m \}$.
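A quick Python sketch, using `collections.Counter` as the multiset representation (my own encoding choice):

```python
from collections import Counter

def multiset_dst(m1, m2):
    """Common double strength for the multiset monad:
    pair up elements and multiply their multiplicities."""
    return Counter({(x, y): k * l
                    for x, k in m1.items()
                    for y, l in m2.items()})
```

Since multiplication of multiplicities is commutative, building the product in either order yields the same multiset, so both double strengths coincide.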
Example: Another monad that occurs commonly in practice is the finite probability monad on $\mathsf{Set}$. This has:
• Endofunctor: $\mathbb{D}(X)$ has as elements finitely supported formal convex sums $\sum_i p_i x_i$. These are weighted sums of elements of $X$ such that each weight satisfies $0 \leq p_i \leq 1$, $\sum_i p_i = 1$ and only finitely many $p_i$ are non-zero. Alternatively, these can be thought of as functions $X \rightarrow [0,1]$ satisfying the previous conditions on the weights.
• Unit: $\eta_X(x) = 1\,x$. That is, the unit maps an element of $X$ to the corresponding trivial sum.
• Multiplication: $\mu_X(\sum_i p_i (\sum_j q_{i,j} x_{i,j})) = \sum_i \sum_j (p_i \times q_{i,j}) x_{i,j}$. This is simply flattening out a sum of sums.
This monad is commutative, with the action of both double strengths being:
$(\sum_i p_i x_i, \sum_j q_j y_j) \mapsto \sum_i \sum_j (p_i \times q_j) (x_i,y_j)$
For example:
$(\frac{1}{4}x + \frac{3}{4}x', \frac{1}{3} y + \frac{2}{3} y') \mapsto \frac{1}{12}(x,y) + \frac{1}{4}(x',y) + \frac{1}{6}(x,y') + \frac{1}{2}(x',y')$
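The worked example above can be checked mechanically. Here is a sketch in Python, with distributions encoded as dictionaries from outcomes to `Fraction` weights (my own encoding):

```python
from fractions import Fraction as F

def dist_dst(d1, d2):
    """Both double strengths of the finite distribution monad:
    form the product distribution on pairs."""
    return {(x, y): p * q for x, p in d1.items() for y, q in d2.items()}

example = dist_dst({"x": F(1, 4), "x2": F(3, 4)},
                   {"y": F(1, 3), "y2": F(2, 3)})
```

The weights of `example` match the four coefficients computed above, and still sum to 1.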
In the next post we will explore a slightly misleading intuition that is commonly hinted at in the literature.
|
# Input and Output Redirection
To combine multiple Linux commands so that they work together — letting us process data more efficiently — we first need to understand how a command's input and output redirection work.
In short, input redirection feeds a file (or keyboard input) into a command, while output redirection writes data that would normally appear on the screen into a specified file. In day-to-day study and work we use output redirection far more often than input redirection, so it is further split into two techniques — standard output redirection and error output redirection — each with two modes: overwrite and append. Sounds mysterious?
• Standard input redirection (STDIN, file descriptor 0): read from the keyboard by default; can also come from a file or another command.
• Standard output redirection (STDOUT, file descriptor 1): written to the screen by default.
• Standard error redirection (STDERR, file descriptor 2): written to the screen by default.
For input redirection, the symbols and their effects are shown in the table below:

| Symbol | Effect |
| --- | --- |
| `< file` | use *file* as the command's standard input |
| `<< delimiter` | read input until a line containing only *delimiter* (here-document) |
For output redirection, the symbols and their effects are shown in the table below:

| Symbol | Effect |
| --- | --- |
| `> file` | redirect standard output to *file* (overwrite) |
| `>> file` | redirect standard output to *file* (append) |
| `2> file` | redirect standard error to *file* (overwrite) |
| `2>> file` | redirect standard error to *file* (append) |
| `&> file` | redirect both standard output and standard error to *file* (overwrite) |
[root@lynchj tmp]# man bash > readme.txt
BASH(1) General Commands Manual BASH(1)
NAME
bash - GNU Bourne-Again SHell
SYNOPSIS
bash [options] [file]
Bash is Copyright (C) 1989-2011 by the Free Software Foundation, Inc.
DESCRIPTION
Bash is an sh-compatible command language interpreter that executes commands read from the standard input or from a file. Bash also incorporates useful features from the Korn and C
shells (ksh and csh).
Bash is intended to be a conformant implementation of the Shell and Utilities portion of the IEEE POSIX specification (IEEE Standard 1003.1). Bash can be configured to be POSIX-confor‐
mant by default.
OPTIONS
All of the single-character shell options documented in the description of the set builtin command can be used as options when the shell is invoked. In addition, bash interprets the
following options when it is invoked:
-c string
        If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
-i      If the -i option is present, the shell is interactive.
-l      Make bash act as if it had been invoked as a login shell (see INVOCATION below).
-r      If the -r option is present, the shell becomes restricted (see RESTRICTED SHELL below).
-s      If the -s option is present, or if no arguments remain after option processing, then commands are read from the standard input. This option allows the positional parameters to be
        set when invoking an interactive shell.
-D      A list of all double-quoted strings preceded by $ is printed on the standard output. These are the strings that are subject to language translation when the current locale is
        not C or POSIX. This implies the -n option; no commands will be executed.
[-+]O [shopt_option]
shopt_option is one of the shell options accepted by the shopt builtin (see SHELL BUILTIN COMMANDS below). If shopt_option is present, -O sets the value of that option; +O
unsets it. If shopt_option is not supplied, the names and values of the shell options accepted by shopt are printed on the standard output. If the invocation option is +O,
the output is displayed in a format that may be reused as input.
-- A -- signals the end of options and disables further option processing. Any arguments after the -- are treated as filenames and arguments. An argument of - is equivalent to
--.
Bash also interprets a number of multi-character options. These options must appear on the command line before the single-character options to be recognized.
--debugger
Arrange for the debugger profile to be executed before the shell starts. Turns on extended debugging mode (see the description of the extdebug option to the shopt builtin
below).
--dump-po-strings
Equivalent to -D, but the output is in the GNU gettext po (portable object) file format.
--dump-strings
Equivalent to -D.
--help Display a usage message on standard output and exit successfully.
--init-file file
--rcfile file
Execute commands from file instead of the standard personal initialization file ~/.bashrc if the shell is interactive (see INVOCATION below).
--login
Equivalent to -l.
--noediting
Do not use the GNU readline library to read command lines when the shell is interactive.
………………省略部分输出信息………………
[root@lynchj tmp]# echo "Haha, now appending output" >> readme.txt
[root@lynchj tmp]# tail -n 10 readme.txt
cutes the next command in the sequence. It suffices to place the sequence of commands between parentheses to force it into a subshell, which may be stopped as a unit.
Array variables may not (yet) be exported.
There may be only one active coprocess at a time.
GNU Bash-4.2 2010 December 28 BASH(1)
[root@lynchj tmp]# ll readme.txt
-rw-r--r--. 1 root root 284185 May 15 16:53 readme.txt
[root@lynchj tmp]# ll readme2.txt
ls: cannot access readme2.txt: No such file or directory
What if we want to write a command's error messages to a file? This is especially useful and practical when running an automated shell script, because all the error messages produced during the run can be recorded to a file for troubleshooting afterwards. Let's experiment with a file that does not exist:
[root@lynchj tmp]# ll xxxxxxxxxxxx
ls: cannot access xxxxxxxxxxxx: No such file or directory
[root@lynchj tmp]# ll xxxxxxxxxxxx > readme.txt
ls: cannot access xxxxxxxxxxxx: No such file or directory
[root@lynchj tmp]# ll xxxxxxxxxxxx 2> readme.txt
ls: cannot access xxxxxxxxxxxx: No such file or directory
[root@lynchj tmp]# wc -m < readme.txt
58
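Putting the pieces together, here is a minimal self-contained sketch (the file names and the scratch directory are just examples, not from the session above):

```shell
cd "$(mktemp -d)"                        # scratch directory for the demo
echo "first line"  > demo.txt            # > creates/overwrites (clobber mode)
echo "second line" >> demo.txt           # >> appends
ls no_such_file 2> err.log || true       # 2> captures only stderr
ls no_such_file > all.log 2>&1 || true   # stdout and stderr into one file
wc -l < demo.txt                         # < feeds the file to wc's stdin
```

The `|| true` just keeps the demo going despite the deliberate `ls` failures; note that `2>&1` must come after the `>` redirection, since redirections are processed left to right.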
|
# Big differential
1. Oct 13, 2009
### a.mlw.walker
Big differential, just want to make sure I am doing correct thing here.
a is the only changing dimension, r and l are constants
$$\Theta=arccos\left(\frac{r^{2}+\left(r+l-a\right)^{2}-l^{2}}{2r\left(r+l-a\right)}\right)$$
I want to differentiate $$\frac{\delta\Theta}{\delta\ a}$$
So what I did was using
$$\Theta=arccos a$$
$$cos\Theta=a$$ differentiate this
$$-sin\Theta \frac{\delta\Theta}{\delta\ a}=1$$
therefore with trig identities and a rearrange:
$$\frac{\delta\Theta}{\delta\ a}=\frac{-1}{\sqrt{1-a^{2}}}$$
where $$a = \left(\frac{r^{2}+\left(r+l-a\right)^{2}-l^{2}}{2r\left(r+l-a\right)}\right)$$
So the differential of theta with respect to a is
$$\frac{\delta\Theta}{\delta\ a}=\frac{-a^{'}}{\sqrt{1-a^{2}}}$$
This involves the quotient rule, and I end up the expression below: I took out a factor of 4 top and bottom of the differential of a, hence the 3/2 coefficient
$$\frac{\delta\Theta}{\delta\ a}= \frac{ra^{2}-\left(2r^{2}-rl\right)a + \left(2r^{3}+3lr^{2}+l^{2}r\right)}{ra^{3}-\frac{3}{2}r^{2}a^{2}+\left(2r^{3}+3lr^{2}+rl^{2}\right)a-\left(r^{4}-2r^{3}l-r^{2}l^{2}\right)} / \sqrt{1-\left(\frac{a^{2}-2a\left(r+l\right)+\left(2r^{2}+2rl\right)}{-2ra + 2r^{2} +2rl}\right)^{2}}$$
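Not part of the original post, but a quick numerical sanity check of the chain-rule step $\frac{\delta\Theta}{\delta a}=\frac{-a'}{\sqrt{1-a^{2}}}$, with arbitrary example values for $r$, $l$ and the evaluation point:

```python
import math

r, l = 2.0, 3.0  # arbitrary example constants

def inner(a):
    """The argument of arccos, as a function of the changing dimension a."""
    return (r**2 + (r + l - a)**2 - l**2) / (2 * r * (r + l - a))

def theta(a):
    return math.acos(inner(a))

def num_deriv(f, a, h=1e-6):
    """Central finite-difference derivative."""
    return (f(a + h) - f(a - h)) / (2 * h)

a0 = 1.0
lhs = num_deriv(theta, a0)                                   # dTheta/da directly
rhs = -num_deriv(inner, a0) / math.sqrt(1 - inner(a0) ** 2)  # chain-rule form
```

The two values agree to numerical precision, confirming the chain-rule setup (though not the hand-expanded quotient-rule algebra).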
Thanks
Last edited: Oct 13, 2009
|
IJPAM: Volume 74, No. 4 (2012)
A NEW CHARACTERIZATION FOR INCLINED CURVES BY
THE HELP OF SPHERICAL REPRESENTATIONS
ACCORDING TO BISHOP FRAME
Raheleh Ghadami, Yusuf Yayli
Department of Mathematics
Islamic Azad University Urmia Branch
Urmia, IRAN
Department of Mathematics
Faculty of Science
University of Ankara
Tandoğan, Ankara, TURKEY
Abstract. In this paper we investigate the spherical images and the indicatrix of a slant helix. We obtain that the spherical images are spherical helices. Moreover, arc lengths of the spherical representations of the tangent vector field and of the vector field obtained from the Darboux vector field of a space curve are calculated. The arc element of the spherical representation, expressed in terms of the harmonic curvature, characterizes a slant helix with respect to the Bishop frame. Thus the following characterization is given: the curve is an inclined curve if and only if the arc length of its Darboux spherical representation is constant.
Received: October 31, 2011
AMS Subject Classification: 53A04, 53A99
Key Words and Phrases: inclined curve, harmonic curvature, ordinary helix, slant helix, spherical helix
Download paper from here.
Source: International Journal of Pure and Applied Mathematics
ISSN printed version: 1311-8080
ISSN on-line version: 1314-3395
Year: 2012
Volume: 74
Issue: 4
|
# Complement of F-Sigma Set is G-Delta Set
## Theorem
Let $T = \left({S, \tau}\right)$ be a topological space.
Let $X$ be an $F_\sigma$ set of $T$.
Then its complement $S \setminus X$ is a $G_\delta$ set of $T$.
## Proof
Let $X$ be an $F_\sigma$ set of $T$.
Then $X = \displaystyle \bigcup \mathcal V$ where $\mathcal V$ is a countable set of closed sets in $T$.
Then from De Morgan's Laws: Difference with Union we have:
$\displaystyle S \setminus X = S \setminus \bigcup \mathcal V = \bigcap_{V \mathop \in \mathcal V} \left({S \setminus V}\right)$
By definition of closed set, each of the $S \setminus V$ are open sets.
So $\displaystyle \bigcap_{V \mathop \in \mathcal V} \left({S \setminus V}\right)$ is a countable intersection of open sets in $T$.
Hence $S \setminus X$ is, by definition, a $G_\delta$ set of $T$.
$\blacksquare$
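As an aside, the De Morgan step at the heart of the proof can be spot-checked on a small finite example in Python (the particular sets are arbitrary stand-ins for the closed sets):

```python
S = set(range(10))
V = [{0, 1, 2}, {2, 3}, {8}]      # stand-ins for the countably many closed sets

lhs = S - set().union(*V)         # S \ (union of V)

rhs = S.copy()
for v in V:
    rhs &= S - v                  # intersection of the complements S \ V
```

Both sides come out equal, as the identity predicts.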
|
141 Answered Questions for the topic Multiplication
Multiplication Algebra 1 7th
05/22/20
#### Algebra 1 Question Help
So to solve a problem i need to do this step and its kind of hard... thank you for helpingSo i need two numbers that multiply to -24 but those two numbers also have to add up to -10.
Multiplication Fractions Money
09/19/19
#### Which is the better buy?
Option 1: 3 1/3 lb. of turkey for $10.50. Option 2: 2 1/2 lb. of turkey for $6.25.
Multiplication Elementary Math
06/19/19
#### can this be solved by multiplying 3/4 times 5? explain.
A cake for a class party was cut into pieces of equal size. There are 5 pieces of cake left. 3/4 of the class still wants cake. What fraction of the cake will be eaten?
Multiplication Algebra 2 Polynomials
06/18/19
#### Comparison between multiplying and dividing polynomials
In what ways are the steps for the long division of polynomials algorithm similar to the steps for the multiplying polynomials algorithm? In what ways are they different?
Multiplication
02/28/19
#### what is 3 times 4
I need the answer to three (3) times (4) because we are learning multiplication in 3rd grade.
Multiplication Division
10/29/18
#### How much money does Hannah have
Hannah and Francine have $120. Hannah and Peter have $230. Peter has 6 times as much money as Francine.
Multiplication Subtraction
10/28/18
#### Calculate the value.
(4+3√3/3/8)2
Multiplication Subtraction
10/28/18
#### How to solve this?
The total price of a cupboard, a chair and a table is $1500. The price of the cupboard is twice the price of the table. The chair is $80 cheaper than the table. Find the price of each item.
Multiplication
09/25/18
#### i dont get it can anyone explain it to me
mrs. krin has $300 deducted from her checking account every month for her car payment. she also has $150 deducted every month for her insurance. after 1 year, How much do these payments change her... more
Multiplication
09/14/18
#### Mr. Smith goes to the gym 3 times a week, how many times does he go in a year?
Mr. Smith goes to the gym 3 times a week, how many times does he go in a year? Is the answer 3(52.18) = 156.54
Multiplication
09/14/18
#### Cheryl hikes 7 miles in 3 days, how many miles does she hike in 12 weeks?
Multiplication problem.
Multiplication
08/24/18
#### joey bought a 72 ounce box of dog biscuits.How many pounds did he bye
how much did he bye
Multiplication
08/23/18
#### how do i do this mathematically.
What is the smallest multiple of 75 that consists of just 1's and 0's?
Multiplication
08/10/18
Multiplication
07/11/18
#### The sum of two number is 200 and their product is 8236 what are the number
What are the two number that when multiply the answer is 8236
07/10/18
2+2*3+2*2+4
06/02/18
#### I’m a figure with 6 layers. Each of my layers is the same. My bottom layer has a perimeter of 28 units, and my volume is between 200 and 300. What’s my volume?
I need help with volume this question needs help I’m in the 5th grade and I’m struggling
|
# Quasiconcavity and homogeneity
How to prove that if $f$ is strictly quasi-concave and homogeneous of degree 1, then $f$ is concave? It was left as an exercise by Silberberg & Suen (2001), p.140.
I simply could not elaborate any sketches to leave here as a starting point.
• Isn't this a mathematics question rather than an economics one? – Mozibur Ullah Aug 14 '18 at 23:56
Take any $x,y\in\mathbb R^n$. Observe that homogeneity of degree 1 (HD1) implies that $$f(x/f(x))=f(x)/f(x)=1=f(y/f(y)).$$ For any $\alpha\in(0,1)$, let $$\theta=\frac{\alpha f(x)}{\alpha f(x)+(1-\alpha)f(y)}.\tag{1}$$ Note that $\theta$ also lives in the (open) unit interval. Thus, by quasi-concavity, we have for every $\theta\in(0,1)$, $$f\left(\theta\frac{x}{f(x)}+(1-\theta)\frac{y}{f(y)}\right)\ge\min\left\{f\left(\frac{x}{f(x)}\right),f\left(\frac{y}{f(y)}\right)\right\}=1.$$ Expanding the LHS using $(1)$, we get $$f\left(\frac{\alpha x+(1-\alpha)y}{\alpha f(x)+(1-\alpha)f(y)}\right)\ge1.$$ Invoking HD1 again, we have $$\frac{f(\alpha x+(1-\alpha)y)}{\alpha f(x)+(1-\alpha)f(y)}\ge1 \quad\Leftrightarrow\quad f(\alpha x+(1-\alpha)y)\ge \alpha f(x)+(1-\alpha)f(y),$$ which means $f$ is concave.
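A numerical spot-check of the conclusion (my own example, not from the question): $f(x_1,x_2)=\sqrt{x_1 x_2}$ is HD1 and strictly quasi-concave on the positive orthant, so the derived concavity inequality should hold at any sample points, and the $\theta$ from $(1)$ should land in the open unit interval.

```python
import math

def f(v):
    return math.sqrt(v[0] * v[1])   # Cobb-Douglas: HD1, strictly quasi-concave

def mix(alpha, x, y):
    """Convex combination alpha*x + (1-alpha)*y, componentwise."""
    return tuple(alpha * xi + (1 - alpha) * yi for xi, yi in zip(x, y))

x, y, alpha = (1.0, 4.0), (9.0, 1.0), 0.3
theta = alpha * f(x) / (alpha * f(x) + (1 - alpha) * f(y))  # the theta from (1)
lhs = f(mix(alpha, x, y))                # f at the convex combination
rhs = alpha * f(x) + (1 - alpha) * f(y)  # the concavity lower bound
```

Here `lhs >= rhs` as the theorem requires.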
|
## EnterpriseDB's planned zheap, aiming to solve the VACUUM problem...
PostgreSQL and InnoDB both implement interaction between transactions using the concept of MVCC, but the two take rather different approaches. One obvious difference this brings is that PostgreSQL needs VACUUM. The same author covered the differences, along with their pros and cons, in an article eight years earlier (2011): "MySQL vs. PostgreSQL, Part 2: VACUUM vs. Purge".
On `UPDATE`, InnoDB writes the new data into the table itself, and moves the old data that may be needed for rollback outside the table:
In InnoDB, only the most recent version of an updated row is retained in the table itself. Old versions of updated rows are moved to the rollback segment, while deleted row versions are left in place and marked for future cleanup. Thus, purge must get rid of any deleted rows from the table itself, and clear out any old versions of updated rows from the rollback segment.
All the information necessary to find the deleted records that might need to be purged is also written to the rollback segment, so it's quite easy to find the rows that need to be cleaned out; and the old versions of the updated records are all in the rollback segment itself, so those are easy to find, too.
PostgreSQL takes a completely different approach. There is no rollback tablespace, or anything similar. When a row is updated, the old version is left in place; the new version is simply written into the table along with it.
Lacking a centralized record of what must be purged, PostgreSQL's VACUUM has historically needed to scan the entire table to look for records that might require cleanup.
That brings me to the design which EnterpriseDB is proposing. We are working to build a new table storage format for PostgreSQL, which we’re calling zheap. In a zheap, whenever possible, we handle an UPDATE by moving the old row version to an undo log, and putting the new row version in the place previously occupied by the old one. If the transaction aborts, we retrieve the old row version from undo and put it back in the original location; if a concurrent transaction needs to see the old row version, it can find it in undo. Of course, this doesn’t work when the block is full and the row is getting wider, and there are some other problem cases as well, but it covers many useful cases. In the typical case, therefore, even bulk updates do not force a zheap to grow. Instead, the undo grows. When a transaction commits, all row versions that will become dead are in the undo, not the zheap.
## InnoDB's MVCC performance problem under heavy load

Saw on Facebook that the folks at Percona fixed a performance problem where InnoDB's MVCC shows $O(n^2)$ behavior under heavy load:

The corresponding entry in MySQL's official bug tracking system is "InnoDB's MVCC has O(N^2) behaviors". The reproduction example given there stuffs a large number of `INSERT`s into one transaction, after which another transaction that uses a secondary index is affected.
## Percona compares the default values of MySQL and MariaDB

The folks at Percona spent some time summarizing the differences in default values between MySQL 5.7 and MariaDB 10.2: "MySQL and MariaDB Default Configuration Differences".
## Facebook's plan to replace InnoDB with MyRocks

Facebook has already moved to an all-flash environment at scale, so the current situation with InnoDB (Compressed) looks something like this:
## Trade-offs between InnoDB and MyRocks

Mark Callaghan, the main author of MyRocks, put together a performance comparison for a large machine where the data fits in memory: "In-memory sysbench, a larger server and contention - part 1".
## The performance impact of different isolation levels in MySQL
the default value for innodb_purge_threads, which is 4, can cause too much mutex contention and a loss in QPS on small servers. For sysbench update-only I lose 25% of updates/second with 5.7.17 and 15% with 8.0.1 when going from innodb_purge_threads=1 to =4.
## Replication options for MySQL

The folks at Percona put together an overview of replication options (as well as NDB, which we'll quietly skip past here...); although the title says High Availability: "The MySQL High Availability Landscape in 2017 (The Elders)".
## MySQL is finally removing the query cache
Although MySQL Query Cache was meant to improve performance, it has serious scalability issues and it can easily become a severe bottleneck.
We also agree with Rene’s conclusion, that caching provides the greatest benefit when it is moved closer to the client:
## The performance impact of InnoDB redo log size
tl;dr - conclusions specific to my test
1. A larger redo log improves throughput
2. A larger redo log helps more with slower storage than with faster storage because page writeback is more of a bottleneck with slower storage and a larger redo log reduces writeback.
3. A larger redo log can help more when the working set is cached because there are no stalls from storage reads and storage writes are more likely to be a bottleneck.
4. InnoDB in MySQL 5.7.17 is much faster than 5.6.35 in all cases except IO-bound + fast SSD
The results above show average throughput and that hides a lot of interesting behavior. We expect throughput over time to not suffer from variance -- for both InnoDB and for MyRocks. For many of the results below there is a lot of variance (jitter).
|
Monday, February 8th, 2016
11:03 am - Reality is broken, or, an XCOM2 review
Wednesday, December 16th, 2015
10:10 am - Me and Star Wars
Saturday, November 28th, 2015
6:26 pm - Desiderata for a model of human values
Soares (2015) defines the value learning problem as: By what methods could an intelligent machine be constructed to reliably learn what to value and to act as its operators intended? There have been a few attempts to formalize this question. Dewey (2011) started from the notion of building an AI that maximized a given utility function, and then moved on to suggest that a value learner should exhibit uncertainty over utility functions and then take “the action with the highest expected value, calculated by a weighted average over the agent’s pool of possible utility functions.” This is a reasonable starting point, but a very general one: in particular, it gives us no criteria by which we or the AI could judge the correctness of a utility function which it is considering.

To improve on Dewey’s definition, we would need to get a clearer idea of just what we mean by human values. In this post, I don’t yet want to offer any preliminary definition: rather, I’d like to ask what properties we’d like a definition of human values to have. Once we have a set of such criteria, we can use them as a guideline to evaluate various offered definitions.

By “human values”, I here basically mean the values of any given individual: we are not talking about the values of, say, a whole culture, but rather just one person within that culture. While the problem of aggregating or combining the values of many different individuals is also an important one, we should probably start from the point where we can understand the values of just a single person, and then use that understanding to figure out what to do with conflicting values.

In order to make the purpose of this exercise as clear as possible, let’s start with the most important desideratum, of which all the others are arguably special cases:

1. Useful for AI safety engineering.
Our model needs to be useful for the purpose of building AIs that are aligned with human interests, such as by making it possible for an AI to evaluate whether its model of human values is correct, and by allowing human engineers to evaluate whether a proposed AI design would be likely to further human values.

In the context of AI safety engineering, the main model for human values that gets mentioned is that of utility functions. The one problem with utility functions that everyone always brings up is that humans have been shown not to have consistent utility functions. This suggests two new desiderata:

2. Psychologically realistic. The proposed model should be compatible with what we know about current human values, and not make predictions about human behavior which can be shown to be empirically false.

3. Testable. The proposed model should be specific enough to make clear predictions, which can then be tested.

As additional requirements related to the above ones, we may wish to add:

4. Functional. The proposed model should be able to explain what the functional role of “values” is: how do they affect and drive our behavior? The model should be specific enough to allow us to construct computational simulations of agents with a similar value system, and see whether those agents behave as expected within some simulated environment.

5. Integrated with existing theories. The proposed model should, to as large an extent as possible, fit together with existing knowledge from related fields such as moral psychology, evolutionary psychology, neuroscience, sociology, artificial intelligence, behavioral economics, and so on.

However, I would argue that as a model of human value, utility functions also have other clear flaws. They do not clearly satisfy these desiderata:

6. Suited for modeling internal conflicts and higher-order desires. A drug addict may desire a drug, while also desiring that he not desire it.
More generally, people may be genuinely conflicted between different values, endorsing contradictory sets of them given different situations or thought experiments, and they may struggle to behave in the way in which they would like to behave. The proposed model should be capable of modeling these conflicts, as well as the way that people resolve them.

7. Suited for modeling changing and evolving values. A utility function is implicitly static: once it has been defined, it does not change. In contrast, human values are constantly evolving. The proposed model should be able to incorporate this, as well as to predict how our values would change given some specific outcomes. Among other benefits, an AI whose model of human values had this property might be able to predict things that our future selves would regret doing (even if our current values approved of those things), and warn us about this possibility in advance.

8. Suited for generalizing from our existing values to new ones. Technological and social change often cause new dilemmas, for which our existing values may not provide a clear answer. As a historical example (Lessig 2004), American law traditionally held that a landowner did not only control his land but also everything above it, to “an indefinite extent, upwards”. The invention of the airplane raised the question: could landowners forbid airplanes from flying over their land, or was the ownership of the land limited to some specific height, above which the landowners had no control? In answer to this question, the concept of landownership was redefined to only extend a limited, and not an indefinite, amount upwards. Intuitively, one might think that this decision was made because the redefined concept did not substantially weaken the position of landowners, while allowing for entirely new possibilities for travel.
Our model of value should be capable of figuring out such compromises, rather than treating values such as landownership as black boxes, with no understanding of why people value them.

As an example of using the current criteria, let’s try applying them to the only paper that I know of that has tried to propose a model of human values in an AI safety engineering context: Sezener (2015). This paper takes an inverse reinforcement learning approach, modeling a human as an agent that interacts with its environment in order to maximize a sum of rewards. It then proposes a value learning design where the value learner is an agent that uses Solomonoff’s universal prior in order to find the program generating the rewards, based on the human’s actions. Basically, a human’s values are equivalent to a human’s reward function. Let’s see to what extent this proposal meets our criteria.

Useful for AI safety engineering. To the extent that the proposed model is correct, it would clearly be useful. Sezener provides an equation that could be used to obtain the probability of any given program being the true reward-generating program. This could then be plugged directly into a value learning agent similar to the ones outlined in Dewey (2011), to estimate the probability of its models of human values being true. That said, the equation is incomputable, but it could be possible to construct computable approximations.

Psychologically realistic. Sezener assumes the existence of a single, distinct reward process, and suggests that this is a “reasonable assumption from a neuroscientific point of view because all reward signals are generated by brain areas such as the striatum”. On the face of it, this seems like an oversimplification, particularly given evidence suggesting the existence of multiple valuation systems in the brain.
On the other hand, since the reward process is allowed to be arbitrarily complex, it could be taken to represent just the final output of the combination of those valuation systems.

Testable. The proposed model currently seems to be too general to be accurately tested. It would need to be made more specific.

Functional. This is arguable, but I would claim that the model does not provide much of a functional account of values: they are hidden within the reward function, which is basically treated as a black box that takes in observations and outputs rewards. While a value learner implementing this model could develop various models of that reward function, and those models could include internal machinery that explained why the reward function output various rewards at different times, the model itself does not make any assumptions about this.

Integrated with existing theories. Various existing theories could in principle be used to flesh out the internals of the reward function, but currently no such integration is present.

Suited for modeling internal conflicts and higher-order desires. No specific mention of this is made in the paper. The assumption of a single reward function that assigns a single reward for every possible observation seems to implicitly exclude the notion of internal conflicts, with the agent always just maximizing a total sum of rewards and being internally united in that goal.

Suited for modeling changing and evolving values. As written, the model seems to consider the reward function as essentially unchanging: “our problem reduces to finding the most probable $p_R$ given the entire action-observation history $a_1o_1a_2o_2 \ldots a_no_n$.”

Suited for generalizing from our existing values to new ones. There does not seem to be any obvious possibility for this in the model.
I should note that despite its shortcomings, Sezener’s model seems like a nice step forward: like I said, it’s the only proposal that I know of so far that has even tried to answer this question. I hope that my criteria would be useful in spurring the development of the model further.

As it happens, I have a preliminary suggestion for a model of human values which I believe has the potential to fulfill all of the criteria that I have outlined. However, I am far from certain that I have managed to find all the necessary criteria. Thus, I would welcome feedback, particularly including proposed changes or additions to these criteria.

Originally published at Kaj Sotala. You can comment here or there.
Thursday, November 12th, 2015
10:42 am - Learning from painful experiences
Saturday, October 31st, 2015
4:52 pm - Maverick Nannies and Danger Theses
Sunday, October 18th, 2015
1:01 pm - Changing language to change thoughts
Friday, October 9th, 2015
5:36 pm - Rational approaches to emotions
Friday, October 2nd, 2015
9:03 am - Two conversationalist tips for introverts
Two of the biggest mistakes that I used to make that made me a poor conversationalist:

1. Thinking too much about what I was going to say next. If another person is speaking, don’t think about anything else, where “anything else” includes your next words. Instead, just focus on what they’re saying, and the next thing to say will come to mind naturally. If it doesn’t, a brief silence before you say something is not the end of the world. Let your mind wander until it comes up with something.

2. Asking myself questions like “is X interesting / relevant / intelligent-sounding enough to say here”, and trying to figure out whether the thing on my mind was relevant to the purpose of the conversation. Some conversations have an explicit purpose, but most don’t. They’re just the participants saying whatever random thing comes to their mind as a result of what the other person last said. Obviously you’ll want to put a bit of effort into screening off any potentially offensive or inappropriate comments, but for the most part you’re better off just saying whatever random thing comes to your mind.

Relatedly, I suspect that these kinds of tendencies are what make introverts experience social fatigue. Social fatigue seems [in some people’s anecdotal experience; I don’t have any studies to back me up here] to be associated with mental inhibition: the more you have to spend mental resources on holding yourself back, the more exhausted you will be afterwards. My experience suggests that if you can reduce the number of filters on what you say, then this reduces mental inhibition, and correspondingly reduces the extent to which socializing causes you fatigue. Peter McCluskey reports a similar experience; other people mention varying degrees of agreement or disagreement.

Originally published at Kaj Sotala. You can comment here or there.
Tuesday, August 18th, 2015
2:40 pm - Change blindness
Tuesday, July 7th, 2015
4:26 pm - DeepDream: Today psychedelic images, tomorrow unemployed artists
Saturday, June 6th, 2015
2:06 pm - Learning to recognize judgmental labels
In the spirit of Non-Violent Communication, I’ve today tried to pay more attention to my thoughts and notice any judgments or labels that I apply to other people that are actually disguised indications of my own needs.

The first one that I noticed was this: within a few weeks I’ll be a visiting instructor at a science camp, teaching things to a bunch of teens and preteens. I was thinking of how I’d start my lessons, pondered how to grab their attention, and then noticed myself having the thought, “these are smart kids, I’m sure they’ll give me a chance rather than be totally unruly from the start”. Two judgements right there: “smart” and “unruly”. Stopped for a moment’s reflection.

I’m going to the camp because I want the kids to learn things that I feel will be useful for them, yes, but at the same time I also have a need to feel respected and appreciated. And I feel uncertain of my ability to get that respect from someone who isn’t already inclined to view me in a favorable light. So in order to protect myself, I’m labelling kids as “smart” if they’re willing to give me a chance, implying that if I can’t get through to some particular one, then it was really their fault rather than mine. Even though they might be uninterested in what I have to say for reasons that have nothing to do with smarts, like me just making a boring presentation. Ouch.

Okay, let me reword that original thought in non-judgemental terms: “these are kids who are voluntarily coming to a science camp and who I’ve been told are interested in learning, I’m sure they’ll be willing to listen at least to a bit of what I have to say”. There. Better.

Originally published at Kaj Sotala. You can comment here or there.
Friday, May 29th, 2015
8:27 am - Adult children make mistakes, too
There’s a lot of blame and guilt in many people’s lives. We often think of people in terms of good or bad, and feel unworthy or miserable if we fail at things we think we should be able to do. When we don’t do quite as well as we could, because we’re tired or unwell or distracted, we blame and belittle ourselves. Let’s take a different approach.

Think of a young child, maybe three years old. He has come a long way from a newborn, but he’s still not that far along. If he tries his hand at making a drawing, and it’s not quite up to adult standards, we don’t think of him as being any worse for that. Or if he doesn’t quite want to share his toys or gets frustrated with his sibling, we understand that it’s because he’s still young, and hasn’t yet learned all the people skills. We don’t judge him for that, but just gently teach him what we’d like him to do instead. It’s not that he’s good or bad, it’s just that he lacks the skills and practice. At the same time, we see the vast potential in him, all the way that he has already come and the way he’s learning new things every day.

Now, look at yourself from the perspective of some immensely wise, benevolent being. If you’re religious, that being could be God. If you have a transhumanist bent, maybe a superintelligent AI with understanding beyond human comprehension. Or you could imagine a vastly older version of you, one that had lived for thousands of years and seen and done things you couldn’t even imagine. From the perspective of such a being, aren’t you – and all those around you – the equivalent of that three-year-old? Someone who’s inevitably going to make mistakes and be imperfect, because the world is such a complicated place and nobody could have mastered it all? But who’s nevertheless come a long way from what they once were, and is only going to continue growing?
Nate Soares has said that he feels more empathy towards people when he thinks of them as “monkeys who struggle to convince themselves that they’re comfortable in a strange civilization, so different from the ancestral savanna where their minds were forged”. Similarly, we could think of ourselves as young children outside their homes, in a world that’s much too complicated and vast for us to ever understand more than a small fraction of it, still making a valiant effort to do our best despite often being tired or afraid.

Let’s take this attitude, not just towards others, but ourselves as well. We’re doing our best to learn to do the right things in a big, difficult world. If we don’t always succeed, there’s no blame: just a knowledge that we can learn to do better, if we make the effort.

Originally published at Kaj Sotala. You can comment here or there.
Friday, May 8th, 2015
1:35 pm - Harry Potter and the Methods of Latent Dirichlet Allocation
Wednesday, April 29th, 2015
10:17 am - Teaching economics & ethics with Kitty Powers’ Matchmaker
Wednesday, January 21st, 2015
3:06 am - Things that I’m currently the most interested in (Jan 21st of 2015 edition)
|
# Real symmetric 3x3 eigenvectors
In some 'fiddling about' with 3x3 real symmetric matrices, related to wave propagation, I noticed the following. Let **S** be a real symmetric matrix with eigenvalue L (one of the three).
**T** = **S** - L**I**.
**D** is matrix of co-factors of **T**.
The matrix **D*** = **D**/trace(**D**) is the outer product of the unit eigenvector **n** associated with the value L, i.e. $D^*_{ij} = n_i n_j$.

So, the eigenvector component values can be found from the square roots of the diagonal components of D*, and the signs (+ and - of each eigenvector are equally valid, of course) from the other components. The trace of D must not be zero, of course. Computationally this is probably of no value compared with using 'standard' methods, but I wondered how well known this was, and whether there is a simple reason. Please bear in mind that I'm an Engineer and a manifold to me is something that bolts onto a cylinder head...
• Sorry, missed the identity matrix, I, out of the definition of T. T= S - I L i.e T is the matrix S minus the eigenvalue for each leading diagonal element. – Pete_Bate Jun 13 '15 at 16:41
• the matrix of co-factors is the transpose of the adjoint, so its rows and columns are eigenvectors for T and for S – Exodd Jun 13 '15 at 16:43
• $D$ is the matrix of co-factors of a symmetric matrix, so it's itself a symmetric matrix, and it's also the adjoint
• By definition of adjoint, $DT=TD=0$, so the rows and columns of $D$ are eigenvectors for the eigenvalue $L$ of $S$
• If $L$ is a simple eigenvalue of $S$, and $v=[a,b,c]^T$ is an associated eigenvector, then $D$ can only have the form $$D=k\, v v^T$$ with $k$ a constant different from zero.
• The trace of $D$ is then $k(a^2+b^2+c^2)=k|v|^2$, so $$D^*=ww^T \qquad w=v/|v|$$
• If $L$ isn't a simple eigenvalue, then $D=0$
Conclusion: your method works only for simple eigenvalues
• Thanks Exodd, I think I'm happy with that. I realised the matrix of co-factors was a scalar multiple of the inverse (symmetric matrix), and guess you meant DT=TD=I. I shall work it through properly when I get time. – Pete_Bate Jun 15 '15 at 12:48
• WHOOPS! Yes, DT=TD=0 not I. – Pete_Bate Jun 15 '15 at 12:56
• Or does it? I'm just getting confused... especially as <return> finishes the comment on this site. For the eigenvector thing to work, yes, DT=0 but is that a property of the adjoint? I thought it was just a scalar multiple of the inverse so that would not be true? – Pete_Bate Jun 15 '15 at 12:58
• when the matrix is invertible, the adjoint is a multiple of the inverse. But T, in this case, is singular. In general, if $A$ is a matrix, and $B$ is its adjoint, then $$AB=BA=\det(A)I$$ If $A$ is singular, then $\det(A)=0$ – Exodd Jun 15 '15 at 14:37
• Yes, of course; T is singular by definition! Thank you very much, Exodd, for both answering the question and putting up with the "brain scramble". @Exodd – Pete_Bate Jun 16 '15 at 18:04
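The cofactor construction can be checked numerically with NumPy. The random symmetric matrix, the seed, and the choice of eigenvalue below are my own illustrative setup, not from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
S = (A + A.T) / 2                      # random real symmetric matrix

L, V = np.linalg.eigh(S)
lam, n = L[0], V[:, 0]                 # a (generically simple) eigenvalue, unit eigenvector

T = S - lam * np.eye(3)

def cofactor(M):
    """Cofactor matrix of a 3x3 matrix, built from its 2x2 minors."""
    C = np.empty_like(M)
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

D = cofactor(T)
Dstar = D / np.trace(D)

print(np.allclose(Dstar, np.outer(n, n)))   # True: D* equals n n^T
```

The check succeeds regardless of the sign of the computed eigenvector, since $(-n)(-n)^T = nn^T$; it fails only when the chosen eigenvalue is repeated, exactly as the answer concludes.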
|
# Arranging problem: 4 couples, 8 seats in a row… Am I making this too simple?
I am in a prob and stats course... haven't taken one in a while and would like some help on these two problems. I think I am probably making these a little too simple.
Four married couples have bought 8 seats in a row for a concert. In how many ways can they be seated if:
a. if each couple sit together? 4!2!2!2!2!= 384
b. if all men sit together?
I am thinking of this M1M2M3M4W1(Any women)(Any Women)(Any Women)(Any Women) so 4*3*2*1*4*3*2*1= 576
• One way to check your idea is to use the same methods to solve a simpler problem where the answer will be clear. Do your methods give the right answers when there is only one or two couples? – MJD Jun 9 '15 at 2:12
Your first answer is correct: there are $4!$ ways to order the couples, treating each as a unit, then you can flip the order of each member of a couple, so the answer is multiplied by $(2!)^4$.
For your second answer, I believe you are undercounting. You also need to consider configurations like $M_1(...\text{women}...)M_2M_3M_4$.
Think about it like this: first the men sit down in a block
oooo
there are $4!$ ways to do this. Then the women, sitting together as a block (x), choose a space between two men or at either end: there are $5$ spaces to choose from.
xoooo
oxooo
ooxoo
oooxo
oooox
Now order the women: there are $4!$ ways to do this.
So the answer is $5 \cdot 4! \cdot 4!$, or $5!\cdot4!$ (but I would prefer the first version since it emphasizes where the $5$ comes from).
• One observation, all the women sit together is what you did. He asked for the men, which is mathematically no different. – FundThmCalculus Jun 9 '15 at 2:55
• You are correct, excuse me! – Eli Rose Jun 9 '15 at 2:56
• No problem, just thought I would make that known for the benefit of the original poster. Excellent answer with nice graphical depictions. – FundThmCalculus Jun 9 '15 at 10:16
For the second question, think of the four men as a unit. That gives you five objects to arrange, the block of four men and the four women. The five objects can be arranged in $5!$ ways. Within the block of four men, the men can be arranged in $4!$ ways. Therefore, there are $4!5!$ seating arrangements in which the men sit together.
• Ah, thinking of it as the placement of $5$ objects is quite nice. – Eli Rose Jun 9 '15 at 2:19
$\therefore\,$ there are $4!*4!*5 = 2880$ ways all the men can sit together.
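Both counts can be confirmed by brute force over all $8!$ seatings; the labels below are my own choosing:

```python
from itertools import permutations

people = [('M', i) for i in range(4)] + [('W', i) for i in range(4)]

def men_together(seating):
    idx = [k for k, (g, _) in enumerate(seating) if g == 'M']
    return max(idx) - min(idx) == 3      # the four men occupy consecutive seats

def couples_together(seating):
    # each couple (M_i, W_i) sits in adjacent seats
    return all(abs(seating.index(('M', i)) - seating.index(('W', i))) == 1
               for i in range(4))

count_men = sum(men_together(p) for p in permutations(people))
count_couples = sum(couples_together(p) for p in permutations(people))
print(count_men, count_couples)   # 2880 384
```

This agrees with $5!\cdot 4! = 2880$ for the men-together case and $4!\cdot 2!^4 = 384$ for the couples-together case.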
|
# Solving a linear system of equations
$$\begin{cases} 3x - 2y + z = 8 \\ 4x - y + 3z = -1\\ 5x + y + 2z = -1 \end{cases}$$
Form two equations with $y$ eliminated.

It would be really helpful to see what you guys wrote.
This strikes me as bizarre: Why not just solve the system of linear equations as usual, and write x=[solution for x] and z=[solution for z] as your two equations? – Douglas S. Stones Oct 26 '12 at 0:56
Hint: just subtract equation $(2)$ from $(3)$ and add to equation $(1)$ to get your $x$.
You could solve one of the three equations for $y$ (I recommend the second or the third), then substitute that into the other two equations (don't forget to simplify by gathering like terms).
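Following the elimination hint, a quick check of the resulting solution (the back-substitution here is my own, not from the thread):

```python
# Eliminate y: (2) + (3) gives 9x + 5z = -2, and (1) + 2*(3) gives 13x + 5z = 6.
# Subtracting the first from the second: 4x = 8, so x = 2; then z = -4 and y = -3.
x, y, z = 2, -3, -4

# Verify against the original three equations
print(3*x - 2*y + z, 4*x - y + 3*z, 5*x + y + 2*z)   # 8 -1 -1
```

All three right-hand sides are recovered, so $(x, y, z) = (2, -3, -4)$.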
|
Physics Help Forum Questions about forces and tensions
New Users New to PHF? Post up here and introduce yourself!
Oct 22nd 2018, 04:37 AM #1 Junior Member Join Date: Jun 2018 Posts: 7 Questions about forces and tensions

I have difficulty with the following question:

An angler of mass 80 kg standing on a riverbank is slowly reeling in a 15 kg fish at the end of a line. The tip of the fishing rod is 8 m above the water level, and there are 17 m of line between the tip of the rod and the hook. As the fish comes in at constant speed there is a horizontal resistive force from the water of 105 N. Calculate the tension in the line.

The answer to the above is, according to the answer section provided in the textbook, 119 N. But I cannot figure out how to get this number. Obviously, this number of 119 includes the resistive force of 105 N, but how can we get the additional 14 N? I would much appreciate it if someone can help me with this.

Also the question goes on to say: Draw diagrams to show the forces on a) the fish b) the angler and rod (considered as a single object). Find the magnitude of each force. The mass of the rod can be neglected.

The text says that the answer to a) above is: weight 150 N, tension 119 N, resistance 105 N, buoyancy 94 N. But I have no idea whatsoever how to calculate "buoyancy", as this is the first time for me to come across it in this textbook, let alone that it does not say anywhere at all how to work it out. So I would also appreciate it if someone can tell me how to work out "buoyancy".

Further, the text says that the answer to b) above is: weight 800 N, tension 119 N, normal contact force 856 N, friction 105 N. But I don't understand why the "normal contact force" is 856 N, not 800 N, because the mass of the angler is 80 kg, not 85.6 kg. So where does this additional 56 N come from?

I have attached my diagrams so please let me know if there is anything wrong with them. Thank you.
Oct 22nd 2018, 01:00 PM #2 Senior Member Join Date: Aug 2010 Posts: 369 First draw a diagram. "The tip of the fishing rod is 8m above the water level, and there are 17 m of line between the tip of the rod and the hook" so we have a right triangle with hypotenuse of length 17 m, one leg of length 8 m, and the other leg of length $\displaystyle \sqrt{17^2- 8^2}= 15$ m. The reason for the "additional 14 N" is that the fish is being reeled in horizontally but the line is the hypotenuse of the triangle. Find the total force so that the horizontal component is 105 N. The ratio of hypotenuse to horizontal side of the right triangle is 17/15 so the tension in the line is (17/15)(105)= 119.
Oct 22nd 2018, 07:01 PM #3 Junior Member Join Date: Jun 2018 Posts: 7 Thank you for your help. So the upward component X would be (8/15)*105 = 56 N. Let B be the buoyancy; then X + B = 150, so 56 + B = 150, giving B = 94 N. And the normal contact force on the man & rod would be 800 + 56 = 856 N.
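Putting the thread's numbers together in one place (using g = 10 m/s², which the textbook's round figures imply):

```python
import math

g = 10.0                                 # m/s^2, consistent with the textbook's round numbers
hyp, height = 17.0, 8.0                  # line length and rod-tip height, m
horiz = math.sqrt(hyp**2 - height**2)    # 15 m, horizontal leg of the triangle

resist = 105.0                           # horizontal water resistance, N
tension = resist * hyp / horiz           # 119 N: tension along the line
vert = tension * height / hyp            # 56 N: vertical component of the tension

fish_weight = 15 * g                     # 150 N
buoyancy = fish_weight - vert            # 94 N: balances the fish's vertical forces

angler_weight = 80 * g                   # 800 N
normal = angler_weight + vert            # 856 N: the line pulls the rod (and angler) down
print(tension, vert, buoyancy, normal)
```

This reproduces all four textbook answers: 119 N, 56 N, 94 N, and 856 N.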
|
# Tag Info
Indeed a paired $t$-test is equivalent to a linear mixed model that you formulated as $$Y_{ij} = \beta_0 + \beta_1 t + a_i + \varepsilon_{ij}; \quad a_i \sim N(0, \sigma^2_{subject}), \ \varepsilon_{ij} \sim N(0, \sigma^2_{res}); \quad i=1,2,\ldots,n; \ j=1,2;$$ where $i$ indexes the subjects and $j$ codes the two paired conditions. Why wouldn't it make sense to include a random slope? The dummy variable $t$ ...
First of all, the 'random effects' can be viewed in different ways and the approaches to them and associated definitions may seem conflicting but it is just a different viewpoint. The 'random effect' term in a model can be seen as both a term in the deterministic part of the model as a term in the random part of the model. Basically, in general, the ...
Trying to find single "authoritative" definition is always tempting in cases like this, but the variety of different definitions shows that this term simply is not used in consistent manner. Andrew Gelman seems to have reached same conclusions, you can look as his blog posts here and here, or into his handbook Data Analysis Using Regression and Multilevel/...
The output looks perfectly fine to me - your estimate divided by its standard error (-1.7223/1.2) is -1.44, which gives you the non-significant result you obtained. The ratio of the estimate of NEURO to the estimate for the intercept has nothing to do with the significance of NEURO. Also, in plots, differences can look important although they are not. If you ...
Welcome to the site, Marco. The random slope is necessary for multiple reasons. Among the most important is recent methodological work by Heisig & Schaeffer, which shows that for a level 1 variable involved in a cross-level interaction with a level 2 variable, that interaction is more likely to be significant if the level 1 variable is not specified as ...
Yes, formally speaking teacher should be a random effect, but with only three levels estimation will be extremely problematic (i.e., how much would we trust a standard deviation estimated from a sample of just 3 items?). Yes, it is hypothesis dependent. But based on the initial information, teacher assignment was not explicitly determined. We can model students as ...
Just in case it would be helpful, I have tried to illustrate how my data would be nested (in addition to the example in my table above). Here, each timepoint (t1,t2,t3,etc.) gets "observed" by two different methods of calculating Outcome, i.e., Method A and Method B. Each set of Outcome values across time points for each Method are nested within a given ...
Indeed, because the model only includes random intercepts terms, the marginal mean of your Poisson outcome will be $$E(Y_{ijk}) = \exp \bigl (\beta_0^* + \beta_1 \texttt{time}_{ijk} + \beta_2 \texttt{x2}_{ijk} + v_i + w_{ij}\bigr ),$$ where $$\beta_0^* = \beta_0 + \frac{\sigma_v^2}{2} + \frac{\sigma_w^2}{2},$$ with $\sigma_v^2$ and $\sigma_w^2$ the ...
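The $\sigma^2/2$ shifts in $\beta_0^*$ are instances of the lognormal-mean identity $E[e^v] = e^{\sigma^2/2}$ for $v \sim N(0, \sigma^2)$. A stdlib-only Monte Carlo check of that identity (the value of $\sigma$ is a toy choice of mine):

```python
import math
import random

random.seed(1)
sigma = 0.7  # plays the role of sigma_v or sigma_w

# Monte Carlo estimate of E[exp(v)] for v ~ N(0, sigma^2)
draws = [math.exp(random.gauss(0.0, sigma)) for _ in range(200_000)]
empirical = sum(draws) / len(draws)

# The closed form: exp(sigma^2 / 2). The random intercept shifts the
# marginal mean even though E[v] = 0, which is exactly what moves
# beta_0 to beta_0* in the expression above.
theoretical = math.exp(sigma**2 / 2)  # about 1.2776

print(abs(empirical - theoretical) < 0.02)  # True
```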
As you have correctly observed, both in meta-analysis and beyond, a frequentist mixed model does something similar to a Bayesian approach. Namely, it assumes a parameter not to have a fixed value, but rather to have been randomly drawn from some probability distribution (in practice: almost always a normal distribution with mean $0$ and unknown variance). ...
NB: don't use ti() for univariate smooths: it currently works, but Simon Wood, the maintainer of mgcv, has remarked that this may be removed in a future version of the package. I think the main problem is that you have the factor and continuous variable back to front in the fs smooth. time is the continuous covariate, so you want a smooth of it for each level of ...
You would trust the Fixed Effects because it is consistent under weaker assumptions. This is not about the efficiency so the fact that variance is smaller is secondary. If the estimates are different, chances are strict exogeneity is violated and RE becomes biased, while FE remains consistent even without strict exogeneity.
# Does the math teacher make the difference?
by MathHeroine
P: 49
Quote by bpatrick not that this has to due with the quality of teaching ability, but this story is more about the general quality of the professor and how little he cared about the students he was teaching: I have always been a self motivating learner. when I took ODEs freshman year of college, the prof said we didn't have to come to class, so I didn't. I came for the review sessions before the exams, then got As on the exams and went on with my life, no worries, got an A in the course. sophomore year came and I enrolled in (sophomore level) classical mechanics. the prof started the lecture right away after handing out the syllabus (and not talking about it at all). The syllabus stated that attendance was optional and grades will be determined: 30% exam 1, 30% exam 2, 40% final. I did the same thing I did for my ODE class, went to review sessions, got an A, B+, and A- on the three exams ... so I should have been looking at probably an A- for the course right? I ended up getting an F on my transcript and thought it was some mistake, when I got back to school in January, I talked to him, and the chair of the physics department. Evidently the first day of class, the prof mistakenly distributed the syllabus from when he taught it two fall semesters before. I had noticed the date on the top of the paper (because I frequently referenced it throughout the semester as I read the proper sections in the book) but i thought the prof was just too lazy or didn't notice he forgot to change the date when he reused the syllabus. The man never informed me that he had changed the syllabus! ... now how bloody hard is it to send an email with the change? or when you administer the first exam to say, "hey, [bpatrick], i haven't seen you in class, don't you care about your participation grade? ... 
even with straight As on the exams you'll only get a D given the weight of attendance and homework!", but not a single word for the professor the entire semester, hell, I even went to his office hours once and asked for advice on solving a problem I was working on. All he said was, "we did something like this in class last Wednesday, why don't you come to class or get notes from another student." I never ended up getting the grade changed (the university gave an option to retake up to 2 courses that you received a C- or lower in), but by the next fall, I had already completed my minor in physics and the same prof was teaching the course again. I had no desire to be in the same room with that man for an entire semester wasting my time on stuff I already could do, and needed to be taking another course that fall in my major that was offered at the same time. overall, i say that's a pretty awful teacher and just an ahole in general. Until then, I was thinking about double majoring in physics and possibly going to grad school in physics, but when that happened, I ended up focusing on my music instead. It's amazing what a single teacher can do, haha.
Yup. Giant ahole because YOU decided you were too good for lecture, and this professor didn't kiss your *** to get you to come to lecture. That's your responsibility brosef, not his.
Sci Advisor HW Helper P: 9,421 I do believe the teacher can make a difference, but the same teacher will not make the same difference to every student. The thing to seek is the good relationship, or the good match between student and teacher, since not every student learns in the same way, nor seeks the same outcome. Thus I found the study mentioned by Moonbear completely in line with what I have observed over my career. When I read teacher evaluations of professors in order to give awards or promotions, I observed that overwhelmingly, the highest evaluations went to the professors who gave the highest grades. Those professors apparently had the happiest students. In some cases those professors were also excellent at explaining the material, but they tested that material in a far less challenging way than others did. In those cases their grades did not discriminate at all between merely average students and really excellent ones, as essentially everyone got an A. There were also exceptions however. There were some professors who were both challenging and excellent, and this was noted by the students, who said the professor's class was not easy but they felt the professor went out of her/his way to give the students every chance to learn as much as possible. When awarding prizes for teaching I looked for these latter instances, but they were only a small subset of the teachers. Indeed, since promotions and raises and hiring depend in many cases at least partly on these evaluations, most teachers have apparently learned to placate the students with easier classes, and not to make the grade depend on really excellent performance.
So as Moonbear made clear, the meaning of the term "good teacher" depends on what the evaluator is looking for: clear explanations, deep insight, more advanced versions of material than found in books, higher grades than average or than deserved, willingness to overlook lazy performance or absences, concern for the student's needs and feelings,.... Years ago I wrote an essay "On teaching" that was published by request of one of my students then in the math ed department. It is #8 under class notes on this page: http://www.math.uga.edu/~roy/ In it I refer to the passage in scripture where Jesus rebukes a follower for calling him "good teacher", responding none is actually good except God.
P: 225
Quote by Intervenient Yup. Giant ahole because YOU decided you were too good for lecture, and this professor didn't kiss your *** to get you to come to lecture. That's your responsibility brosef, not his.
Actually, bpatrick is not the ahole in this scenario but rather the institution he attended is and possibly you for not understanding that the syllabus acts as a contract between the professor and the student administratively. Since he was given a false set of guidelines to follow, he should not have been faulted for doing what he was fully within his right to do under those specific terms.
P: 49
Quote by daveyinaz Actually, bpatrick is not the ahole in this scenario but rather the institution he attended is and possibly you for not understanding that the syllabus acts as a contract between the professor and the student administratively. Since he was given a false set of guidelines to follow, he should not have been faulted for doing what he was fully within his right to do under those specific terms.
The professor updated the syllabus saying so. It's common knowledge that syllabuses are subject to change during the first few weeks (my stats syllabus reached its final version in week 4). If he's going to prance about like he's better than the institution, it'd have been wise to show up to class every once in a while. It was not the professor's job to reach out to one of his many, many students (especially one that never showed up to reach out to him).
Deserved the F, next.
P: 225
Quote by Intervenient The professor updated the syllabus saying so.
You must have read some other post that is not mentioned here because I didn't see anything bpatrick saying that he was informed of the mistake or change.
Quote by Intervenient It's common knowledge that syllabuses are subject to change during the first few weeks (My stats syllabus reached it's final version in week 4).
I didn't realize the fact that your particular stats syllabus was revised a number of times made it common knowledge to all students in all colleges and universities everywhere.
Quote by Intervenient If he's going to prance about like he's better than the institution, it'd have been wise to show up to class every once in a while.
Where did you get that bpatrick was implying that he was better than the institution? Must be the same invisible post you are referring to above.
Quote by Intervenient It was not the professors job to reach out to one of his many, many students (especially one that never showed up to reach out to him).
From the information at hand, bpatrick said that he went to the professor's office hours for homework assistance, if that ain't reaching out, then I don't know what else is.
Quote by Intervenient Deserved the F, next.
Here's what I think went down: assistant chair ahole professor knew bpatrick was not attending class and knew he was taking the tests, even going so far as to let him take the final with full knowledge that he was going to fail him, and then hiding behind some bs like his job title or the fact that he made a mistake in handing out an old syllabus. The appropriate response as an educator, or a man with any sort of integrity, would be to observe the absence pattern and warn the student beforehand of the situation that is about to occur should they continue down a course of wrongful actions, in this case, not attending classes.
P: 49
Quote by daveyinaz Here's what I think went down, assistant chair ahole professor knew bpatrick was not attending class and knew he was taking the tests, even so far as to let him take the final with full knowledge that he was going to fail him and then hide behind some bs like his job title or the fact that he made a mistake in handed out an old syllabus. The appropriate response as an educator or a man with any sort of integrity would to observe the absence pattern and warn the student beforehand of the situation that is about to occur should they continue down a course of wrongful actions, in this case, not attending classes.
Are you like 5 years old? I'm 100% positive that a professor has a MILLION better things to do than to punish one student who didn't go to lecture.
Do I think that it's stupid that someone who performed well in the class got an F because he didn't go to lecture? Of course. But did bpatrick not have the responsibility as a student to make sure that this was ok? He noticed he got a syllabus with the wrong year at the top; it would have been a good idea to clarify with the professor that this was indeed the correct syllabus, especially if he planned to never go to lecture at all.
I have ZERO sympathy for the guy. Sucks, but if you're going to take a semester long vacation, it'd be a good idea to keep up with the professor. The professor is teaching hundreds of kids, I'm sure. It isn't his job to make sure that one student who never bothered to come to class showed up.
Anyways, this whole discussion is off topic, and I apologize for making it so.
P: 1,306
I think professors make the only difference pertaining to raising someone's interest, in my personal case at least. I don't want a professor who teaches by a book, because I can do that myself. But I would like a professor who connects bits and pieces together and gives the bigger picture.
Quote by Intervenient Are you like 5 years old? I'm 100% positive that a professor has a MILLION better things to do than to punish one student who didn't go to lecture. Do I think that it's stupid that someone who performed well in the class got an F because he didn't go to lecture? Of course. But did bpatrick not have the responsibility as a student to make sure that this was ok? He noticed he got a syllabus with the wrong year at the top, it would have been a good idea to clarify with the professor that this was indeed the correct syllabus, especially if he planned to never go to lecture at all. I have ZERO sympathy for the guy. Sucks, but if you're going to take a semester long vacation, it'd be a good idea to keep up with the professor. The professor is teaching hundreds of kids I'm sure. It isn't his job to make sure that one student who never bothered to come to class showed up. Anyways, this whole discussion is off topic, and I apologize for making it so.
Actually, bpatrick didn't assume that the professor changed any substantial rule in the syllabus. If a professor gave me a year-old syllabus, I would think that he did it on purpose and not by accident. But at any rate, the fuel of this argument relies on differing beliefs, one practical and the other absolute. Therefore, this argument will go nowhere. And let's not attack others for different personal characteristics (lenient vs. absolute).
P: 1,025
Quote by Intervenient Sucks, but if you're going to take a semester long vacation, it'd be a good idea to keep up with the professor.
It sounds like he wasn't if he got an A otherwise.
P: 615 +1 Higher education should be about education rather than attending mandatory classes, which may or may not help.
Sci Advisor HW Helper P: 9,421 people who do not show enough respect to attend class deserve and receive absolutely no slack. Learn this before proceeding further. In my own case, I call absent students on the phone and make sure they know what is going down, and ask why they are absent, but I am totally unique in this respect. Again, if you sign up for a class and do not show up, you are going to suffer for that, and no responsible party in any appeal or forum will support you.
P: 1,306
Quote by mathwonk people who do not show enough respect to attend class deserve and receive absolutely no slack. Learn this before proceeding further. In my own case, I call absent students on the phone and make sure they know what is going down, and ask why they are absent, but I am totally unique in this respect. Again, if you sign up for a class and do not show up, you are going to suffer for that, and no responsible party in any appeal or forum will support you.
If I may interject, I don't believe its a matter of respect. But rather, a matter of preference. Some use lectures as their prime tool of study, while others prefer complete independent study.
P: 268
Quote by Nano-Passion If I may interject, I don't believe its a matter of respect. But rather, a matter of preference. Some use lectures as their prime tool of study, while others prefer complete independent study.
I think if you're paying to go to a school to learn something, you should do both independent study and lectures.
P: 1,025
Quote by MathWarrior I think if you're paying to go to a school to learn something, you should do both independent study and lectures.
Some people really do benefit from just independent study and for this student, when he needed help, he went to office hours.
Sci Advisor HW Helper P: 9,421 you are missing the point. no matter what you think is preferable, you are not going to succeed in school if you do not attend class. we are not discussing whether that is what you think is reasonable, I am telling you how to succeed. Besides, if you are paying tuition to a school where the lectures have nothing to offer, you are a sucker.
Sci Advisor HW Helper P: 9,421 To the student who got the low grade for missing class: it is possible you can appeal this grade and have it changed. At my university the university guidelines say that you can be dropped from a class for lack of attendance, so it is part of the written rules that attendance is expected. However I believe it is also a policy that the instructor must distribute a written syllabus in which he explains his attendance policy and the basis for his grading system. If this is the case at your university, I think you would have case that the written syllabus which was distributed should be the one that must be followed for grading that course. It is always worth a try, but you need to be polite to everyone involved if you hope to succeed. The first step in any such appeal is usually to simply speak to the instructor and make your case, as diplomatically, but clearly, as possible.
P: 225
Quote by Intervenient I'm 100% positive that a professor has a MILLION better things to do then to punish one student who didn't go to lecture.
How can you be so positive? Are you the professor whom bpatrick is referring to? It's kind of sad really that you think there are "better" things to do rather than concern yourself with helping a struggling student as an educator/instructor/professor/teacher/whatever, especially one who has clearly shown potential since he was able to get good grades without attending classes.
Quote by Intervenient Do I think that it's stupid that someone who performed well in the class got an F because he didn't go to lecture? Of course. But did bpatrick not have the responsibility as a student to make sure that this was ok? He noticed he got a syllabus with the wrong year at the top, it would have been a good idea to clarify with the professor that this was indeed the correct syllabus, especially if he planned to never go to lecture at all.
I definitely agree with you on this point. There are a variety of routes bpatrick could have taken where the end result might have been different.
Furthermore, comments like this have no place here...
Quote by Intervenient Are you like 5 years old?
I always chuckle at remarks like this because I'm pretty sure someone like you wouldn't have the sack to say it to my face in person.
Quote by Intervenient I have ZERO sympathy for the guy. Sucks, but if you're going to take a semester long vacation, it'd be a good idea to keep up with the professor. The professor is teaching hundreds of kids I'm sure. It isn't his job to make sure that one student who never bothered to class showed up.
Making statements about having zero sympathy for people certainly tells more about you than anything else said in this discussion so far. To be so arrogant as to think that anyone was asking for your sympathy is mind boggling, when it seems that bpatrick's intention was a "lessons learned" kind of story and not one of "feel pity for me". Plus, you do not know the whole story, and neither do I, well enough to conclude definitively what really happened.
Quote by Intervenient Anyways, this whole discussion is off topic, and I apologize for making it so.
On the contrary, I don't think anything said so far has been so off topic that it merited an apology. It is clear now that teachers do make a difference, insofar as random strangers are debating them on this very forum.
Sci Advisor HW Helper P: 9,421 I think this is very much off topic. The gentleman made it about his personal gripe, instead of a general discussion.
P: 225
Quote by mathwonk I think this is very much off topic. The gentleman made it about his personal gripe, instead of a general discussion.
That's your opinion; I still stand by my statement that it was not. The issue is: does a [math] teacher make a difference? It was a personal story where the teacher did make a difference. $$\blacksquare$$
# nLab applications of (higher) category theory
# Contents
## Idea
I can illustrate the second approach with the same image of a nut to be opened. The first analogy which came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months – when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado!
A different image came to me a few weeks ago. The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration… the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it… yet it finally surrounds the resistant substance.
Alexander Grothendieck, Récoltes et semailles, 1985–1987, pp. 552-3-1 (“The Rising Sea”)
I don’t want you to think all this is theory for the sake of it, or rather for the sake of itself. It’s theory for the sake of other theory.
The tools of category theory and higher category theory serve to organize other structures. There is a plethora of applications that have proven to be much more transparent when employing the nPOV. Higher category theory has helped foster entire new fields of study that would have been difficult to conceive otherwise. This page lists and discusses examples.
## Examples
The following is an (incomplete) list of topics for which higher category theory has proven to be useful.
### In geometry
The field of differential geometry has long managed to avoid the change to an $n$-point of view that had been found to be unavoidable, natural and fruitful in algebraic geometry long ago. But more recently – not least due to the recognition of higher differential-geometric structures in the physics of gauge theory and supergravity (such as that of orbifolds and orientifolds, of smooth gerbes and smooth principal ∞-bundles) – sheaf- and topos-theoretic concepts, such as synthetic differential geometry, diffeological spaces and differentiable stacks, are gaining wider recognition and appreciation.
For instance the ordinary category Diff of smooth manifolds fails to have all pullbacks, it only has pullbacks along transversal maps. This observation is usually the starting point for realizing that differential geometry is in need of a bit of category theory in the form of higher geometry.
In all notions of generalized smooth spaces all pullbacks do exist. But they may still not be the “right” pullbacks. For instance cohomology of pullback objects may not have the expected properties. This is solved by passing to smooth derived stacks, such as derived smooth manifolds.
Recent developments in higher category theory, such as the concept of higher Structured Spaces based on Higher Topos Theory, put all these notions of generalized geometries into a unified picture of higher geometry that realizes old ideas about how category theory provides a language for space and quantity in great detail and powerful generality, and sheds new light on old classical problems such as the description of the derived moduli stack of derived elliptic curves and the construction of the tmf spectrum from it. This construction has benefited tremendously from the adoption of the nPOV. Using this point of view, the general strategy becomes naturally evident.
#### In differential equations
Much of topological vector space theory, e.g., the theory of distributions, nuclear spaces, etc. has its origins in partial differential equation theory and is intensely conceptual (categorical) in spirit. It is routine these days to accept distributional solutions, but it wasn’t always so, and it was the efficacy of the abstract TVS theory which changed people’s minds.
Way back Cartan studied differential equations in terms of exterior differential systems. From the $n$POV, these may be understood naturally as sub Lie ∞-algebroids of a tangent Lie algebroid.
Bill Lawvere noticed in the 1960s that the notion of differential equation makes sense in any smooth topos (as described here). In his highly influential article Categorical dynamics he promoted the point of view that all things differential geometric can be formulated in abstract category theory internal to a suitable topos. This is the origin of synthetic differential geometry. It may be understood as providing the fundamental characterization of the notion of the infinitesimal.
Closely related to both these perspectives, a modern point of view on differential equations that is proving to be very fruitful regards them as part of the theory of D-modules.
### In cohomology
A multitude of notions of cohomology and its variants are unified from the $n$POV when viewed as ∞-categorical hom-spaces in (∞,1)-topoi. See cohomology.
#### Hochschild (co)homology
Specifically, the subject of Hochschild cohomology, when generalized to higher order Hochschild cohomology effectively merges into the canonical concept of (∞,1)-powering of an (∞,1)-topos over ∞Grpd. See Hochschild cohomology for details.
### In homotopy theory
The study of homotopy theory originated in the study of categories, such as those of topological spaces or of chain complexes, whose morphisms were known to admit a notion of homotopy. Historically, a sequence of formalisms was proposed to organize the rich structure found in such situations. As a first approximation, the notions of homotopy category and derived category were introduced in order to deal with structures "up to homotopy". But it was clear that the homotopy category captured only a very small part of the interesting information. Quillen introduced the notion of model category as a formalization of the full structure, and this formalization turned out to yield a rich theory that today provides a powerful toolset for dealing with homotopy-theoretic situations.
But also the notion of model category was seen to not be the full answer. For instance a model category in a sense retains too much non-intrinsic information. Equivalence classes of model categories under Quillen equivalence are a more intrinsic characterization of a given homotopy theory. But this means that one needs some higher categorical notion for the collection of all model categories. This problem came to be known as the search for the homotopy theory of homotopy theories.
Recently, this problem was fully solved and homotopy theory fully understood as the special case of higher category theory that deals with (∞,1)-categories:
#### In rational homotopy theory
… The study of rational homotopy theory is naturally understood as the study of the localizations of (∞,1)-toposes at morphisms that induce equivalences in cohomology with certain line-object coefficients. See rational homotopy theory in an (∞,1)-topos. …
### In K-theory
In full generality, (algebraic) K-theory is a universal assignment of spectra to stable (∞,1)-categories.
### In Tannaka duality
… see Tannaka duality
See at
### In differential cohomology
Cohesion on (∞,1)-toposes answers the Simons-Sullivan question on the characterization of generalized (Eilenberg-Steenrod-type) differential cohomology. See at differential cohomology hexagon for details.
### In deformation theory
In deformation theory it was early on recognized that for a good theory the notion of Kähler differentials has to be generalized to the notion of cotangent complex. With the advent of the study of derived moduli spaces, such as the derived moduli space of derived elliptic curves, this needed to be further generalized to notions of cotangent complexes not just of rings, but of E-∞-rings.
It turns out that all these concepts are special cases of a construction obtained from a simple higher categorical notion, that of left adjoint sections of a tangent (∞,1)-category.
### In logic and type theory
While it is common to view logic as the study of absolute truth, in fact logic can have many different interpretations, or semantics. A particular semantics for logic can be useful both to inform the study of logic, and to prove facts logically about the semantics. One very fruitful semantics of this sort is categorical semantics for logic and type theory, according to which every category (and especially every topos) has an internal language and internal logic. Interpreting “ordinary” mathematical statements in the internal language of exotic categories can make it much easier to study those categories, while on the other hand it can provide new insight into otherwise mysterious logical notions.
In particular, the internal logic of a category (such as a topos) is, in general, constructive, i.e. the principle of excluded middle (and also stronger statements, such as the axiom of choice) are generally false. Thus, in order for a theorem to be interpretable internally in such categories, its proof must be constructive. So while the original “constructivists” believed that classical mathematics was “wrong,” nowadays there are good reasons to care about constructive mathematics even if one believes that excluded middle and the axiom of choice are “true,” since regardless of their “global” truth they will not be true in the internal logic of many interesting categories. Conversely, category-theoretic models have provided new insight into the independence of various axioms in constructive mathematics, such as differing forms of the axiom of choice.
As another example, the identity types in Martin-Löf’s original constructive dependent type theory construct, from any type $A$ and terms $a, b \in A$, a new type $Id_A(a, b)$. According to the propositions as types interpretation, the elements of $Id_A(a,b)$ are proofs that $a$ and $b$ are propositionally equal; thus $Id_A(a,b)$ is a replacement for the truth value of the proposition $(a=b)$. There are type-theoretic functions $1 \to Id(a, a)$, $Id(b, c) \times Id(a, b) \to Id(a, c)$ and $Id(a, b) \to Id(b, a)$ expressing the reflexivity, transitivity and symmetry of this propositional equality, but in general an identity type (even the “reflexive” identity type $Id(a,a)$) can have many distinct elements. This has long been a source of discomfort to type theorists. However, from a higher-categorical point of view, it is natural to view the terms of identity types as isomorphisms in a groupoid—or, more precisely, an ∞-groupoid, since identity types have their own identity types, and all the laws of associativity, exchange, etc. only hold up to terms of these higher identity types. This suggests that the nonuniqueness of identity proofs should be embraced rather than denigrated, producing a theory at least related to the “internal logic” of (∞,1)-category theory and homotopy theory; see identity type for more details.
This is now known as homotopy type theory, see there for more.
### In physics
#### Classical mechanics and its geometric quantization
By the end of the 19th century, a fairly complete, powerful and elegant mathematical formulation of classical mechanics had been developed, in terms of symplectic geometry. By the middle of the 20th century, the passage to the corresponding quantum theory was pretty well modeled by the geometric quantization of symplectic geometries.
But there were some loose ends. Notably, the fully general theory involved Poisson manifolds, not just symplectic manifolds. And the mechanics of relativistic classical field theory was realized to be more naturally described by multisymplectic geometry.
Both these generalizations have a natural common higher categorical formulation: that of Lie ∞-algebroids: a Poisson geometry is naturally encoded in its corresponding Poisson Lie algebroid. Its higher categorical versions – the n-symplectic manifolds – encode the corresponding multisymplectic geometry.
Moreover, the quantization step of geometric quantization was understood to be effectively the Lie integration of these Lie ∞-algebroids to the corresponding Lie ∞-groupoids (currently this is well understood for low $n$).
#### Quantum mechanics and quantum information
The basic structure of quantum mechanics and quantum information theory is encoded in the theory of dagger-compact categories.
#### Gauge theory
Maxwell realized that the electromagnetic field is controlled by a degree-2 cocycle in de Rham cohomology: the electromagnetic field strength. Later Dirac noticed that this is one part of a degree-2 cocycle in differential cohomology that characterizes a connection on a line bundle.
Later the Yang-Mills field was understood to similarly be a connection on a bundle, this time on a $G$-principal bundle for $G$ some possibly nonabelian group.
While thinking about the mathematical structures possibly underlying standard model of particle physics and gravity, theoretical physicists considered more general hypothetical gauge fields, such as the Kalb-Ramond field, the RR-field or the supergravity C-field. Today all these gauge fields are understood to be modeled, mathematically, by generalized differential cohomology.
##### Supergravity
Theories of supergravity have been known to require higher gauge fields in the above sense – hence the term supergravity C-field. A powerful formalism for handling these theories is the D'Auria-Fre formulation of supergravity. As described there, this is secretly (but evidently) nothing but a description of supergravity as a theory of connections on nonabelian $G$-principal ∞-bundles for $G$ some super Lie ∞-group. For instance Cremmer-Scherk 11-dimensional supergravity theory is governed by the super Lie 3-group $G$ whose L-∞-algebra is the supergravity Lie 3-algebra.
#### BV-BRST formalism
The BV-BRST formalism is secretly a way to talk about the fact that configuration spaces of gauge theories are not naive spaces such as manifolds, but are general spaces in the sense of higher geometry:
the configuration space is really an object $Conf \in Sh_{(\infty,1)}((dgAlg^-)^{op})$ in the ∞-stack (∞,1)-topos on the (∞,1)-site $(dgAlg^-)^{op}$ of certain ∞-algebras modeled as dg-algebras. The BV-BRST-complex of a physical system is the global derived function algebra
$\mathcal{O}(Conf) \in dgAlg \,.$
(many more aspects go here, eventually)…
#### Quantum field theory
There are essentially two axiomatizations of what quantum field theory is, both of which are inherently $\infty$-categorical:
##### 3d TFT and 2d CFT
3-dimensional TFTs such as Chern-Simons theory and Dijkgraaf-Witten theory, as well as the global aspects of 2-dimensional conformal field theory, are inherently governed by the theory of modular tensor categories.
The local aspects of 2-dimensional conformal field theory are governed by vertex operator algebras. A vertex operator algebra is really the algebra over an operad, for the operad of holomorphic pointed spheres (as described there).
### In your favorite topic here
Last revised on March 9, 2017 at 13:35:54. See the history of this page for a list of all contributions to it.
# Open Badges, Digital Literacies, and Learning Pathways
Slides to accompany a 30-minute presentation at the 'Future Classrooms' event in Armagh, Northern Ireland (12th March 2015).
More details at futureclassrooms.org.
Published in: Education
1. Dr. Doug Belshaw Dynamic Skillset @dajbelshaw Open Badges, Digital Literacies, and Learning Pathways
2. Have you ever been issued a badge? How did it make you feel?
3. Who are you? Dr. Doug Belshaw Dynamic Skillset @dajbelshaw [email protected]
4. Dr. Doug Belshaw Dynamic Skillset @dajbelshaw [email protected] Who are you? Teacher (History) Senior Leader Jisc Mozilla
5. Structure • The Problem • Digital Literacies • Web Literacy Map • Open Badges • Bringing it all together
6. Structure • The Problem • Digital Literacies • Web Literacy Map • Open Badges • Bringing it all together
7. CONTEXT
8. CC BY Jaysin Trevino LINEARITY
9. LACK OF MAP CC BY Alexander Baxevanis
10. Structure • The Problem • Digital Literacies • Web Literacy Map • Open Badges • Bringing it all together
11. 1. What does it mean to be educated in the 21st century? 2. What does it mean to be digitally literate? 3. What actually is 'digital literacy'? ?
12. blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah DIGITAL LITERACY IS...
13. ONE DEFINITION TO RULE THEM ALL (and in the darkness bind them?)
14. DIGITAL LITERACY DIGITAL LITERACIES
15. CONTEXT-DEPENDENT DIGITAL LITERACIES DIGITAL LITERACIES DIGITAL LITERACIES DIGITAL LITERACIES
16. TOO HARD? GIVE UP? :-(
17. META-ANALYSIS
18. Co Communicative Cu Cultural Cg Cognitive Cn Constructive Ci Civic Cr Creative Cf Confident Ct Critical THE 8 ELEMENTS OF DIGITAL LITERACIES
19. gum.co/digilit (use code ‘gimme10’ for 10% off)
20. CO-CREATE DEFINITIONS
21. Structure • The Problem • Digital Literacies • Web Literacy Map • Open Badges • Bringing it all together
22. DIGITAL LITERACIES WEB LITERACY
23. Mozilla!
24. webmaker.org/resources teach.webmaker.org (April 2015)
25. Structure • The Problem • Digital Literacies • Web Literacy Map • Open Badges • Bringing it all together
26. Digital badges have been around for ages
27. ...but they’re difficult to verify and they’re easy to copy
28. Enter Open Badges! (or more accurately, the OBI)
29. CC BY-SA Kyle Bowen
30. This is how it works
31. A person or organisation issues a badge
32. Earner chooses to put it in their backpack
33. Collections of badges can be shared to unlock new opportunities and possibilities.
34. Some recent badges created with DigitalMe
35. Interoperable
36. welcometobusinesstown.tumblr.com
37. March 2015: • 14,000 issuers worldwide • ~2 million badges issued • 342,300 badges sent to backpacks • 88,585 backpacks How do you quantify value?
38. Motivation 92% pupils reported that they were happy they had received a badge for their work 82% report that they would like to find out about other badges that they could get.
39. Progression 120 badge partners
40. Badge Design Canvas digitalme.co.uk/badgecanvas
41. Structure • The Problem • Digital Literacies • Web Literacy Map • Open Badges • Bringing it all together
42. Learners are already doing their own thing. Let’s capture and credential that.
43. “If you want to go fast, go alone. If you want to go far, go together.” (African Proverb)
44. Let’s work together! [email protected]
45. Now: ask now or grab me (gently!) later Twitter:@dajbelshaw Email: [email protected] Ask hard questions! • http://openbadges.org • http://dynamicskillset.com • http://webmaker.org Useful links:
# Linear Algebra II
## November 20, 2014
### Lecture 28-29
Filed under: 2014 Fall — Y.K. Lau @ 8:59 PM
Today we learnt: Every linear operator ${T}$ on an inner product space ${V}$ has a “mate” ${T^*}$ such that ${T}$ and ${T^*}$ are related by
$\displaystyle \langle T(v), w\rangle = \langle v, T^*(w)\rangle$ ${\forall}$ ${v,w\in V}$.
How is ${T^*}$ defined? In today’s lecture we studied the definition of ${T^*}$, which is based on the following important result: if ${f:V\rightarrow {\mathbb R}}$ is a linear transformation, then there exists a unique ${z\in V}$ such that ${f(v)=\langle v,z\rangle}$ for all ${v\in V}$.
• ${M_{\mathcal{B}}(T^*) = M_{\mathcal{B}}(T)^T}$ where ${\mathcal{B}}$ is an orthonormal basis for ${V}$.
This result tells the relation between the matrix representations of ${T}$ and ${T^*}$ w.r.t. orthonormal bases.
• If ${T=T^*}$, then ${M_{\mathcal{B}}(T)}$ is symmetric.
Below we shall see that self-adjoint operators ${T}$ (i.e. those satisfying ${T=T^*}$) are particularly nice.
Recall that for linear operators on vector spaces, we study the concept of similarity and diagonalization: let us review a few important points below. By definition, “${A}$ is similar to ${B}$” means ${P^{-1}AP=B}$ for some invertible matrix ${P}$.
1. If ${T:V\rightarrow V}$ is a linear operator on a vector space (not necessarily an inner product space), then ${M_E(T)}$ is similar to ${M_F(T)}$ for any ordered bases ${E}$ and ${F}$.
2. If ${A}$ is similar to ${B}$, then ${A}$ and ${B}$ are matrix representations of the same linear operator.
Next suppose ${A}$ is similar to a diagonal matrix ${D}$ (i.e. ${P^{-1}A P= D}$). Write ${P=\begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}}$ and ${D={\rm diag}(\lambda_1, \lambda_2,\cdots, \lambda_n)}$, then from ${P^{-1} AP=D}$, we get ${AP=PD}$, i.e.
$\displaystyle \begin{array}{rcl} && A \begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix} = \begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}\begin{pmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n\end{pmatrix}\vspace{1mm}\\ \Rightarrow && \begin{pmatrix} A\underline{x}_1 & A\underline{x}_2 & \cdots & A\underline{x}_n\end{pmatrix} = \begin{pmatrix} \lambda_1\underline{x}_1 & \lambda_2\underline{x}_2 & \cdots & \lambda_n\underline{x}_n\end{pmatrix}\vspace{3mm}\\ \Rightarrow && A\underline{x}_i = \lambda_i \underline{x}_i, \quad {i=1,\cdots, n}. \end{array}$
That means ${\lambda_i}$ is an eigenvalue and ${\underline{x}_i}$ is a corresponding eigenvector. Also, ${P}$ is invertible if and only if ${\underline{x}_1,\cdots, \underline{x}_n}$ form a basis for ${{\mathbb R}^n}$. The converse is also true, and it tells us how to find ${P}$ and ${D}$ when we diagonalize a matrix ${A}$ (assuming ${A}$ is diagonalizable): we calculate the eigenvalues to get ${D}$ and then calculate the corresponding eigenvectors to get ${P}$.
Furthermore, we can present the above result in the setting of linear transformation: Suppose ${P^{-1}A P=D}$ where ${P=\begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}}$ and ${D={\rm diag}(\lambda_1, \lambda_2,\cdots, \lambda_n)}$. Then the linear operator ${T_A:{\mathbb R}^n\rightarrow {\mathbb R}^n}$, ${T_A(\underline{v})= A\underline{v}}$ has the standard matrix representation ${M_{St}(T_A)= A}$. If we set ${\mathcal{E}=[\underline{x}_1,\cdots, \underline{x}_n]}$ (the ordered basis consisting of eigenvectors), then ${M_{\mathcal{E}}(T_A)= D}$ is diagonal. Using this viewpoint, we have another description (or criterion) for diagonalizable matrices:
The matrix ${A}$ is similar to a diagonal matrix
${\Leftrightarrow}$ There exists a basis ${\mathcal{E}}$ for ${{\mathbb R}^n}$ such that ${M_{\mathcal{E}}(T_A)}$ is diagonal
${\Leftrightarrow}$ There exists a basis ${\mathcal{E}=[\underline{x}_1,\cdots, \underline{x}_n]}$ for ${{\mathbb R}^n}$ such that ${T_A(\underline{x}_i)=\lambda_i \underline{x}_i}$, ${i=1,\cdots, n}$
${\Leftrightarrow}$ We can find a basis ${\mathcal{E}}$ consisting of eigenvectors of ${A}$ for ${{\mathbb R}^n}$
Now we turn back to inner product spaces. If ${V}$ is an inner product space, then we can consider a more special kind of basis — an orthonormal basis (which is much more convenient, at least from the angle of computation). Hence we may consider the following question in the above diagonalization problem:
${(**)}$ Can we find an orthonormal basis ${\mathcal{B}}$ such that ${M_{\mathcal{B}}(T_A)}$ is diagonal?
In terms of matrices, this is equivalent to finding a set of orthonormal eigenvectors ${\underline{x}_1,\cdots, \underline{x}_n}$ such that ${P=\begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}}$ satisfies ${P^{-1}A P= D}$.
Here we make a nice observation: if ${P=\begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}}$ where ${\langle \underline{x}_i, \underline{x}_j\rangle =1}$ for ${i=j}$ and ${0}$ for ${i\neq j}$, then
$\displaystyle P^TP=PP^T=I.$
Such a matrix is called an orthogonal matrix (i.e. ${A}$ is orthogonal if ${A^TA=I}$. Note that ${A^TA=I}$ ${\Rightarrow}$ ${AA^T=I}$.)
Hence for our problem ${(**)}$ the condition ${P^{-1}AP=D}$ can be rephrased as ${P^T AP=D}$. That’s why we invoke the concept of orthogonally diagonalizable.
Now we can state the following key result for self-adjoint linear operators (or in matrix setting, symmetric matrices):
Every ${n\times n}$ symmetric matrix ${A}$ has a set of orthonormal eigenvectors which form a basis for ${{\mathbb R}^n}$.
In the setting of linear operators, every self-adjoint operator ${T:V\rightarrow V}$ (on inner product space) has a set of orthonormal eigenvectors which form a basis for ${V}$.
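As a concrete illustration of this result (an example made up for these notes, not taken from the lecture), take the symmetric matrix
$\displaystyle A=\begin{pmatrix} 2 & 1 \\ 1 & 2\end{pmatrix}.$
Its eigenvalues are ${\lambda_1=3}$ and ${\lambda_2=1}$, with orthonormal eigenvectors ${\underline{x}_1=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ 1\end{pmatrix}}$ and ${\underline{x}_2=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -1\end{pmatrix}}$ (check: ${A\underline{x}_1=3\underline{x}_1}$ and ${A\underline{x}_2=\underline{x}_2}$). Hence ${P=\begin{pmatrix}\underline{x}_1 & \underline{x}_2\end{pmatrix}}$ is an orthogonal matrix, ${P^TP=I}$, and ${P^TAP={\rm diag}(3,1)}$, so ${A}$ is orthogonally diagonalizable.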
# Experimental determination of pH [closed]
I am trying to determine the experimental pKa for two weak acids that were titrated against 0.20M NaOH.
I have read elsewhere that you can take the point where the graph becomes steep, divide the volume of base added there by two, and the corresponding pH value would then be the pKa. But how do I choose that value, since it may not be obvious at which point the graph becomes steep?
Below I can see that the pKa for acetic acid should be close to the calculated theoretical value of 4.76 and the Tris-HCl pKa should be approximately 8.3, but there must be a better way than just guessing from a graph.
My textbook doesn't explain how to experimentally find the pKa, just that it's the point where $$[A^-]/[HA] = 1$$. I am hoping someone can give me an equation to work with or guide me in the right direction.
Thank you,
## closed as too broad by Mithoron, A.K., Todd Minehardt, Soumik Das, Nuclear Chemist Sep 30 '18 at 13:21
Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
## 1 Answer
So I think what you heard is about the right idea. The flat region is your buffering region, and isn't super helpful to deduce the pKa. Adding base consumes the weak acid, and because it is weak, you know you are mostly consuming the [HA] form, and equilibrating back to around the pKa. This is why your pH doesn't change much around the pKa.
As an example, if you had 10 mmol of HA to start, then at the pKa point you would have 5 mmol of HA and 5 mmol of A-. Now, if you keep adding base, at some point you will essentially consume the remaining 5 mmol of HA. Then, there will be negligible amount left and it can't buffer anymore - i.e. your pH will change rapidly because you are adding strong base. Here you will have ~0 mmol of HA, and ~10 mmol of A-. Note that you have twice the amount of A- now. The pKa will have been at the point where you had half of this.
Experimentally, I know two simple ways. Using a pH meter and no indicator, you have to measure the sharp region more carefully (i.e., drop by drop), because you want to be able to find the exact point where the pH changes the fastest. By approximating the derivative, i.e.
$$\frac{d\,pH}{dV_{base}}(x_i) \approx \frac{pH(x_{i+1})-pH(x_{i-1})}{x_{i+1}-x_{i-1}}$$ you can find the point of greatest change, or the equivalence point. Obviously, better data in that region will give you more clarity. I used to do one titration quickly so that I would know the approximate region where I would need to be and then slowly titrate the rest.
The other method, i.e. using an indicator solution and then just very carefully reaching the equivalence point essentially gives you the same information, and I know people who can very quickly do this. But in general, this would not be as accurate.
# Simple RNA-Seq
Learning Objective
You will learn how to write a simple gene quantification tool based on RNA-Seq data.
Difficulty
Hard
Duration
2h
Prerequisites
Genome Annotations, Fragment Store, experience with OpenMP (optional)
RNA-Seq refers to high-throughput sequencing of cDNA in order to get information about the RNA molecules present in a sample. Knowing the sequence and abundance of mRNA makes it possible to determine the (differential) expression of genes, to detect alternative splicing variants, or to annotate yet unknown genes.
In the following tutorial you will develop a simple gene quantification tool. It will load a file containing gene annotations and a file with RNA-Seq read alignments, compute abundances, and output RPKM values for each expressed gene.
Despite its simplicity, this example can be seen as a starting point for more complex applications, e.g. to extend the tool from the quantification of genes to the quantification of (alternatively spliced) isoforms, or to detect yet unannotated isoforms/genes de novo.
You will learn how to use the FragmentStore to access gene annotations and alignments and how to use the IntervalTree to efficiently determine which genes overlap a read alignment.
## Introduction to the used Data Structures
This section introduces the FragmentStore and the IntervalTree, which are the fundamental data structures used in this tutorial to represent annotations and read alignments and to efficiently find overlaps between them. You may skip one or both subsections if you are already familiar with one or both data structures.
### Fragment Store
The FragmentStore is a data structure specifically designed for read mapping, genome assembly or gene annotation. These tasks typically require lots of data structures that are related to each other such as
• pairwise alignments, and
• genome annotation.
The Fragment Store subsumes all these data structures in an easy-to-use interface. It represents a multiple alignment of millions of reads or mate-pairs against a reference genome consisting of multiple contigs. Additionally, regions of the reference genome can be annotated with features like ‘gene’, ‘mRNA’, ‘exon’, ‘intron’ or custom features. The Fragment Store supports I/O functionality to read/write a read alignment in SAM or AMOS format and to read/write annotations in GFF or GTF format.
The Fragment Store can be compared with a database where each table (called “store”) is implemented as a String member of the FragmentStore class. The rows of each table (implemented as structs) are referred by their ids which are their positions in the string and not stored explicitly. The only exception is the alignedReadStore whose elements of type AlignedReadStoreElement contain an id-member as they may be rearranged in arbitrary order, e.g. by increasing genomic positions or by readId. Many stores have an associated name store to store element names. Each name store is a StringSet that stores the element name at the position of its id. All stores are present in the Fragment Store and empty if unused. The concrete types, e.g. the position types or read/contig alphabet, can be easily changed by defining a custom config struct which is a template parameter of the Fragment Store class.
### Annotation Tree
Annotations are represented as a tree that at least contains a root node, of which all annotations are children or grandchildren. A typical annotation tree looks as follows:
In the Fragment Store the tree is represented by annotationStore, annotationTypeStore, annotationKeyStore, and others. Instead of accessing these tables directly, the AnnotationTree Iterator provides a high-level interface to traverse and access the annotation tree.
### Interval Tree
The IntervalTree is a data structure that stores one-dimensional intervals in a balanced tree and efficiently answers range queries. A range query is an operation that returns all tree intervals that overlap a given query point or interval.
The interval tree implementation provided in SeqAn is based on a Tree which is balanced if all intervals are given at construction time. Interval tree nodes are objects of the IntervalAndCargo class and consist of 2 interval boundaries and additional user-defined information, called cargo. To construct the tree on a set of given interval nodes use the function createIntervalTree. The functions addInterval and removeInterval should only be used if the interval tree needs to be changed dynamically (as they do not yet rebalance the tree).
### Import Alignments and Gene Annotations from File
At first, our application should create an empty FragmentStore object into which we import a gene annotation file and a file with RNA-Seq alignments. An empty FragmentStore can simply be created with:
FragmentStore<> store;
Files can be read from disk with the function read that expects an open stream (e.g. a STL ifstream), a FragmentStore object, and a File Format tag. The contents of different files can be loaded with subsequent calls of read. As we want the user to specify the files via command line, our application will parse them using the ArgumentParser and store them in an option object.
In your first assignment you need to complete a given code template and implement a function that loads a SAM file and a GTF file into the FragmentStore.
#### Assignment 1
Type
Application
Objective
Use the code template below (click more...) and implement the function loadFiles to load the annotation and alignment files. Use the file paths given in the options object and report an error if the files could not be opened.
#include <iostream>
#include <seqan/store.h>
#include <seqan/arg_parse.h>
#include <seqan/misc/misc_interval_tree.h>
#include <seqan/parallel.h>
using namespace seqan;
// define used types
typedef FragmentStore<> TStore;
// define options
struct Options
{
std::string annotationFileName;
std::string alignmentFileName;
};
//
// 1. Parse command line and fill Options object
//
ArgumentParser::ParseResult parseOptions(Options & options, int argc, char const * argv[])
{
ArgumentParser parser("gene_quant");
setShortDescription(parser, "A simple gene quantification tool");
setVersion(parser, "1.0");
setDate(parser, "Sep 2012");
// Parse command line
ArgumentParser::ParseResult res = parse(parser, argc, argv);
if (res == ArgumentParser::PARSE_OK)
{
// Extract option values
getArgumentValue(options.annotationFileName, parser, 0);
getArgumentValue(options.alignmentFileName, parser, 1);
}
return res;
}
//
// 2. Load annotations and alignments from files
//
bool loadFiles(TStore & store, Options const & options)
{
// INSERT YOUR CODE HERE ...
//
return true;
}
int main(int argc, char const * argv[])
{
Options options;
TStore store;
ArgumentParser::ParseResult res = parseOptions(options, argc, argv);
if (res != ArgumentParser::PARSE_OK)
return res == ArgumentParser::PARSE_ERROR;
if (!loadFiles(store, options))
return 1;
return 0;
}
Hint
• Open STL std::fstream objects and use the function read with a SAM or GTF tag.
• ifstream::open requires the file path to be given as a C-style string (const char *).
• Use string::c_str to convert the option strings into C-style strings.
• The function read expects a stream, a FragmentStore and a tag, i.e. Sam() or Gtf().
Solution
//
// 2. Load annotations and alignments from files
//
bool loadFiles(TStore & store, Options const & options)
{
std::ifstream alignmentFile(options.alignmentFileName.c_str());
if (!alignmentFile.good())
{
std::cerr << "Couldn't open alignment file " << options.alignmentFileName << std::endl;
return false;
}
read(alignmentFile, store, Sam());
std::cerr << "[" << length(store.alignedReadStore) << "]" << std::endl;
std::ifstream annotationFile(options.annotationFileName.c_str());
if (!annotationFile.good())
{
std::cerr << "Couldn't open annotation file " << options.annotationFileName << std::endl;
return false;
}
read(annotationFile, store, Gtf());
std::cerr << "[" << length(store.annotationStore) << "]" << std::endl;
return true;
}
### Extract Gene Intervals
Now that the Fragment Store contains the whole annotation tree, we want to traverse the genes and extract the genomic ranges they span. In the annotation tree, genes are (the only) children of the root node. To efficiently retrieve the genes that overlap read alignments later, we want to use interval trees, one for each contig. To construct an interval tree, we first need to collect IntervalAndCargo objects in a string and pass them to createIntervalTree. See the interval tree demo in core/demos/interval_tree.cpp for more details. As cargo we use the gene’s annotation id to later retrieve all gene specific information. The strings of IntervalAndCargo objects should be grouped by contigId and stored in an (outer) string of strings. For the sake of simplicity we don’t differ between genes on the forward or reverse strand and instead always consider the corresponding intervals on the forward strand.
To define this string of strings of IntervalAndCargo objects, we first need to determine the types used to represent an annotation. All annotations are stored in the annotationStore which is a Fragment Store member and whose type is TAnnotationStore. The value type of the annotation store is the class AnnotationStoreElement. Its member typedefs TPos and TId define the types it uses to represent a genomic position or the annotation or contig id:
typedef FragmentStore<> TStore;
typedef Value<TStore::TAnnotationStore>::Type TAnnotation;
typedef TAnnotation::TId TId;
typedef TAnnotation::TPos TPos;
typedef IntervalAndCargo<TPos, TId> TInterval;
The string of strings of intervals can now be defined as:
String<String<TInterval> > intervals;
In your second assignment you should use an AnnotationTree Iterator to traverse all genes in the annotation tree. For each gene, determine its genomic range (projected to the forward strand) and add a new TInterval object to the intervals[contigId] string, where contigId is the id of the contig containing that gene.
#### Assignment 2
Type
Application
Objective
Use the code template below (click more...). Implement the function extractGeneIntervals that should extract genes from the annotation tree (see AnnotationTree Iterator) and create strings of IntervalAndCargo objects - one for each contig - that contain the interval on the forward contig strand and the gene’s annotation id.
Extend the definitions:
// define used types
typedef FragmentStore<> TStore;
typedef Value<TStore::TAnnotationStore>::Type TAnnotation;
typedef TAnnotation::TId TId;
typedef TAnnotation::TPos TPos;
typedef IntervalAndCargo<TPos, TId> TInterval;
//
// 3. Extract intervals from gene annotations (grouped by contigId)
//
void extractGeneIntervals(String<String<TInterval> > & intervals, TStore const & store)
{
// INSERT YOUR CODE HERE ...
//
}
Extend the main function:
TStore store;
String<String<TInterval> > intervals;
and
if (!loadFiles(store, options))
return 1;
extractGeneIntervals(intervals, store);
Hint
You can assume that all genes are children of the root node, i.e. create an AnnotationTree Iterator, go down to the first gene and go right to visit all other genes. Use getAnnotation to access the gene annotation and value to get the annotation id.
Make sure that you append IntervalAndCargo objects, where i1 < i2 holds, as opposed to annotations where beginPos > endPos is possible. Remember to ensure that intervals is of appropriate size, e.g. with
resize(intervals, length(store.contigStore));
Use appendValue to add a new TInterval object to the inner string; see IntervalAndCargo for the constructor to use.
Solution
//
// 3. Extract intervals from gene annotations (grouped by contigId)
//
void extractGeneIntervals(String<String<TInterval> > & intervals, TStore const & store)
{
// extract intervals from gene annotations (grouped by contigId)
resize(intervals, length(store.contigStore));
Iterator<TStore const, AnnotationTree<> >::Type it = begin(store, AnnotationTree<>());
if (!goDown(it))
return;
do
{
SEQAN_ASSERT_EQ(getType(it), "gene");
TPos beginPos = getAnnotation(it).beginPos;
TPos endPos = getAnnotation(it).endPos;
TId contigId = getAnnotation(it).contigId;
if (beginPos > endPos)
std::swap(beginPos, endPos);
// insert forward-strand interval of the gene and its annotation id
appendValue(intervals[contigId], TInterval(beginPos, endPos, value(it)));
}
while (goRight(it));
}
### Construct Interval Trees
With the strings of gene intervals - one for each contig - we now can construct interval trees. Therefore, we specialize an IntervalTree with the same position and cargo types as used for the IntervalAndCargo objects. As we need an interval tree for each contig, we instantiate a string of interval trees:
typedef IntervalTree<TPos, TId> TIntervalTree;
String<TIntervalTree> intervalTrees;
Your third assignment is to implement a function that constructs the interval trees for all contigs given the string of interval strings.
#### Assignment 3
Type
Application
Objective
Use the code template below (click more...). Implement the function constructIntervalTrees that uses the interval strings to construct for each contig an interval tree. Optional: Use OpenMP to parallelize the construction over the contigs, see SEQAN_OMP_PRAGMA.
Extend the definitions:
// define used types
typedef FragmentStore<> TStore;
typedef Value<TStore::TAnnotationStore>::Type TAnnotation;
typedef TAnnotation::TId TId;
typedef TAnnotation::TPos TPos;
typedef IntervalAndCargo<TPos, TId> TInterval;
typedef IntervalTree<TPos, TId> TIntervalTree;
//
// 4. Construct interval trees
//
void constructIntervalTrees(String<TIntervalTree> & intervalTrees,
String<String<TInterval> > & intervals)
{
// INSERT YOUR CODE HERE ...
//
}
Extend the main function:
String<String<TInterval> > intervals;
String<TIntervalTree> intervalTrees;
and
extractGeneIntervals(intervals, store);
constructIntervalTrees(intervalTrees, intervals);
Hint
First, resize the string of interval trees accordingly:
resize(intervalTrees, length(intervals));
Hint
Use the function createIntervalTree.
Optional: Construct the trees in parallel over all contigs with an OpenMP parallel for-loop, see here for more information about OpenMP.
Solution
//
// 4. Construct interval trees
//
void constructIntervalTrees(String<TIntervalTree> & intervalTrees,
String<String<TInterval> > & intervals)
{
int numContigs = length(intervals);
resize(intervalTrees, numContigs);
SEQAN_OMP_PRAGMA(parallel for)
for (int i = 0; i < numContigs; ++i)
createIntervalTree(intervalTrees[i], intervals[i]);
}
### Compute Gene Coverage
To determine gene expression levels, we first need to compute the read coverage, i.e. the total number of reads overlapping a gene. To this end we use a string of counters addressed by the annotation id.
String<unsigned> readsPerGene;
For each read alignment we want to determine the overlapping genes by conducting a range query via findIntervals and then increment their counters by 1. To address the counter of a gene, we use its annotation id stored as cargo in the interval tree.
Read alignments are stored in the alignedReadStore, a string of AlignedReadStoreElement objects. Their actual type can simply be determined as follows:
typedef Value<TStore::TAlignedReadStore>::Type TAlignedRead;
Given the contigId, beginPos, and endPos we will retrieve the annotation ids of overlapping genes from the corresponding interval tree.
Your fourth assignment is to implement the count function that performs all the above described steps. Optionally, use OpenMP to parallelize the counting.
#### Assignment 4
Type
Application
Objective
Use the code template below (click more...). Implement the function countReadsPerGene that counts for each gene the number of overlapping reads. To do so, determine for each AlignedReadStoreElement the begin and end positions (on the forward strand) of the alignment and increment the readsPerGene counter of each overlapping gene.
Optional: Use OpenMP to parallelize the function, see SEQAN_OMP_PRAGMA.
Extend the definitions:
// define used types
typedef FragmentStore<> TStore;
typedef Value<TStore::TAnnotationStore>::Type TAnnotation;
typedef TAnnotation::TId TId;
typedef TAnnotation::TPos TPos;
typedef IntervalAndCargo<TPos, TId> TInterval;
typedef IntervalTree<TPos, TId> TIntervalTree;
//
// 5. Count reads per gene
//
void countReadsPerGene(String<unsigned> & readsPerGene, String<TIntervalTree> const & intervalTrees, TStore const & store)
{
// INSERT YOUR CODE HERE ...
//
}
Extend the main function:
String<TIntervalTree> intervalTrees;
and
extractGeneIntervals(intervals, store);
constructIntervalTrees(intervalTrees, intervals);
Hint
resize(readsPerGene, length(store.annotationStore), 0);
Make sure that you search with findIntervals where query_begin < query_end holds, as opposed to read alignments where beginPos > endPos is possible.
Hint
The result of a range query is a string of annotation ids, passed to findIntervals by reference:
String<TId> result;
Reuse the result string across multiple queries of the same thread (with OpenMP, declare it private(result)).
Solution
//
// 5. Count reads per gene
//
void countReadsPerGene(String<unsigned> & readsPerGene, String<TIntervalTree> const & intervalTrees, TStore const & store)
{
String<TId> result;
int numAlignments = length(store.alignedReadStore);
// iterate over the aligned reads and query with their begin and end positions
SEQAN_OMP_PRAGMA(parallel for private(result))
for (int i = 0; i < numAlignments; ++i)
{
TAlignedRead const & ar = store.alignedReadStore[i];
TPos queryBegin = _min(ar.beginPos, ar.endPos);
TPos queryEnd = _max(ar.beginPos, ar.endPos);
findIntervals(intervalTrees[ar.contigId], queryBegin, queryEnd, result);
// increase the read counter of each overlapping gene, addressed by the annotation id stored as cargo
for (unsigned j = 0; j < length(result); ++j)
{
SEQAN_OMP_PRAGMA(atomic)
readsPerGene[result[j]] += 1;
}
}
}
### Output RPKM Values
In the final step, we want to output the gene expression levels in a normalized measure. We therefore use RPKM values, i.e. the number of reads per kilobase of exon model per million mapped reads (1). One advantage of RPKM values is their independence of the sequencing throughput (normalized by total mapped reads); another is that they allow comparing the expression of short and long transcripts (normalized by exon length).
The exon length of an mRNA is the sum of the lengths of all its exons. As a gene may have multiple mRNAs, we will simply use the maximum of their exon lengths.
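As a worked example of the RPKM formula (the numbers are invented for illustration): suppose a gene has a maximal exon length of 2,000 bp, is overlapped by 500 reads, and the run produced 40 million mapped reads in total. Then

```latex
\mathrm{RPKM}
  = \frac{\text{reads per gene}}{\left(\text{exon length}/10^{3}\right)\left(\text{total mapped reads}/10^{6}\right)}
  = \frac{500}{2 \times 40}
  = 6.25 .
```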
Your final assignment is to output the RPKM value for genes with a read counter > 0. To compute the exon length of the gene (maximal exon length of all mRNA) use an AnnotationTree Iterator and iterate over all mRNA (children of the gene) and all exons (children of mRNA). For the number of total mapped reads simply use the number of alignments in the alignedReadStore. Output the gene names and their RPKM values separated by tabs as follows:
#gene name RPKM value
ENSMUSG00000053211 5932.12
ENSMUSG00000069053 10540.1
ENSMUSG00000056673 12271.3
ENSMUSG00000069049 10742.2
ENSMUSG00000091749 7287.66
ENSMUSG00000068457 37162.8
ENSMUSG00000069045 13675
ENSMUSG00000069044 6380.36
ENSMUSG00000077793 2088.62
ENSMUSG00000000103 7704.74
ENSMUSG00000091571 10965.2
ENSMUSG00000069036 127128
ENSMUSG00000090405 10965.2
ENSMUSG00000090652 35271.2
ENSMUSG00000052831 68211.2
ENSMUSG00000069031 37564.2
ENSMUSG00000071960 34984
ENSMUSG00000091987 37056.3
ENSMUSG00000090600 2310.18
Download and decompress the attached mouse annotation ([raw-attachment:Mus_musculus.NCBIM37.61.gtf.zip Mus_musculus.NCBIM37.61.gtf.zip]) and the alignment file of RNA-Seq reads aligned to chromosome Y ([raw-attachment:sim40mio_onlyY.sam.zip sim40mio_onlyY.sam.zip]). Test your program and compare your output with the output above.
#### Assignment 5
Type
Application
Objective
Use the code template below (click more...). Implement the function outputGeneCoverage that outputs for each expressed gene the gene name and the expression level as RPKM as tab-separated values.
//
// 6. Output RPKM values
//
void outputGeneCoverage(String<unsigned> const & readsPerGene, TStore const & store)
{
// INSERT YOUR CODE HERE ...
//
}
Extend the main function:
extractGeneIntervals(intervals, store);
constructIntervalTrees(intervalTrees, intervals);
Hint
To compute the maximal exon length use three nested loops: (1) enumerate all genes, (2) enumerate all mRNA of the gene, and (3) enumerate all exons of the mRNA and sum up their lengths.
Hint
Remember that exons are not the only children of mRNA.
Solution
//
// 6. Output RPKM values
//
void outputGeneCoverage(String<unsigned> const & readsPerGene, TStore const & store)
{
// output abundances for covered genes
Iterator<TStore const, AnnotationTree<> >::Type transIt = begin(store, AnnotationTree<>());
Iterator<TStore const, AnnotationTree<> >::Type exonIt;
std::cout << "#gene name\tRPKM value" << std::endl;
for (unsigned j = 0; j < length(readsPerGene); ++j)
{
if (readsPerGene[j] == 0)
continue;
unsigned mRNALengthMax = 0;
goTo(transIt, j);
// determine maximal mRNA length (which we use as gene length)
SEQAN_ASSERT_NOT(isLeaf(transIt));
goDown(transIt);
do
{
exonIt = nodeDown(transIt);
unsigned mRNALength = 0;
// determine mRNA length, sum up the lengths of its exons
do
{
if (getAnnotation(exonIt).typeId == store.ANNO_EXON)
mRNALength += abs((int)getAnnotation(exonIt).beginPos - (int)getAnnotation(exonIt).endPos);
}
while (goRight(exonIt));
if (mRNALengthMax < mRNALength)
mRNALengthMax = mRNALength;
}
while (goRight(transIt));
// RPKM is number of reads mapped to a gene divided by its gene length in kbps
// and divided by millions of total mapped reads
std::cout << store.annotationNameStore[j] << '\t'
<< (1e9 * readsPerGene[j]) / ((double)mRNALengthMax * length(store.alignedReadStore)) << std::endl;
}
}
|
# Monotone functions and Borel sets
I'm studying measure theory and two question came to my mind:
1. If $f:\mathbb{R}\to\mathbb{R}$ is monotone and $B\subseteq\mathbb{R}$ is borel, is the image $f(B)$ borel?
2. If $f:\mathbb{R}\to\mathbb{R}$ is a monotone function (say, non-decreasing), does there exist a sequence of continuous functions $f_n:\mathbb{R}\to\mathbb{R}$ converging pointwise to $f$?
Here's the motivation for those questions: Let $(X,M)$ and $(Y,N)$ be measure spaces.
1. It's known that if $\mu$ is a measure on $(X,M)$ and $f:X\to Y$ is measurable, then we have the pushforward measure $f_*\mu(A)=\mu(f^{-1}(A))$ on $(Y,N)$. What if we were to define a "pullback measure"? Given a function $f:X\to Y$ such that $f(M)\subseteq N$ (i.e. $f$ maps measurable sets to measurable sets) and a measure $\nu$ on $N$, the natural formula would be $f^*\nu(A)=\nu(f(A))$. So we ask whether there is a good supply of functions which map measurable sets to measurable sets in $\mathbb{R}$, and monotone functions seem like good candidates (for strictly monotone, continuous functions, the result is valid and there are several answers on the web).
2. If this were true, maybe we could use some convergence argument to solve the problem above.
• Your attempted definition does not work: $f^*\nu$ is not additive in general. – user138530 Oct 4 '14 at 5:10
• @ChristianRemling I see that. In that paragraph, I'm simply explaining how I got to question 1. – Questioner Oct 4 '14 at 5:19
First note that if $f$ is monotone, it is Borel. (The sets $(a, \infty)$ generate the Borel $\sigma$-algebra, and $f^{-1}((a, \infty))$ is Borel for each $a$ because it is of the form $(b, \infty)$ or $[b, \infty)$ (for increasing functions) or $(-\infty, b)$ or $(-\infty, b]$ (for decreasing functions).)
Now for each $y$, $f^{-1}(\{y\})$ is either empty, a point, or a nontrivial interval. Let $C$ be the set of all $y$ such that $f^{-1}(y)$ is a nontrivial interval. Since each interval contains a rational, $C$ is countable. Let $D = f^{-1}(C)$; note that $D$ is Borel.
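A concrete illustration of this decomposition (my own example, not from the answer): take $f(x) = \lfloor x \rfloor$. Then

```latex
f^{-1}(\{n\}) = [n, n+1) \quad (n \in \mathbb{Z}), \qquad
C = \mathbb{Z}, \qquad
D = f^{-1}(C) = \mathbb{R}.
```

Here every Borel $B$ satisfies $f(B) = f(B \cap D) \subseteq \mathbb{Z}$, which is countable and hence Borel; the injectivity argument is only needed on $B \setminus D$, which is empty for this $f$.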
If $B$ is an arbitrary Borel set, we have $f(B) = f(B \cap D) \cup f(B \setminus D)$. Now $f(B \cap D) \subset C$, hence it is countable and hence Borel. So it suffices to show that $f(B \setminus D)$ is Borel.
On $D^c$, and hence on $B \setminus D$, $f$ is injective. Now it is a theorem of descriptive set theory that an injective Borel function on a Borel subset of a Polish space has a Borel image. (See for instance Theorem 4.5.4 of Srivastava, A Course on Borel Sets.) But $B \setminus D$ is Borel, so $f(B \setminus D)$ is also Borel and we are done.
|
# A theory of multineuronal dimensionality, dynamics and measurement
We recently discussed this paper by Gao et al. from Ganguli lab. They present a theory of neural dimensionality and sufficiency conditions for accurate recovery of neural trajectories, providing a much-needed theoretical perspective from which to judge a majority of systems neuroscience studies that rely on dimensionality reduction. Their results also provide a long overdue mathematical justification for drawing conclusions about entire neural systems based on the activity of a small number of neurons. I felt the paper was well written, and the mathematical arguments used in the proofs were pretty engaging — I don’t remember the last time I enjoyed reading supplementary material quite like this. Here’s a brief summary and some additional thoughts on the paper.
Linear dimensionality reduction techniques are widely used in neuroscience to study how behaviourally-relevant variables are represented in neurons. The general approach goes like this – (i) apply dimensionality reduction, e.g. PCA, on the trial-averaged activity of a population of $M$ neurons to identify a $P$-dimensional subspace ($P \ll M$) capturing a sufficient fraction of neural activity, and (ii) examine how neural dynamics evolve within this subspace to (hopefully) gain insights about neural computation. This recipe has largely been successful (ignoring failures that generally go unpublished): the reduced dimensionality of neural datasets is often quite small and the corresponding low-dimensional dynamical portraits are usually interpretable. However, neuroscientists observe only a tiny fraction of the complete neural population. So could the success of dimensionality reduction be an artefact of severe subsampling? This is precisely the question that Gao et al. attempt to answer in their paper.
They first develop a theory that describes how neural dimensionality (defined below) is bounded by the task design and some easy-to-measure properties of neurons. Then they adapt the mathematical theory of random projection to the neuroscience setting and obtain the amount of geometric distortion in the neural trajectories introduced by subsampling, or equivalently, the minimum number of neurons one has to measure in order to achieve an arbitrarily small distortion in a real experiment. Throughout this post, I use the term neural dimensionality in the same sense that the authors use in the paper: the dimension of the smallest affine subspace that contains a large (~80 – 90%) fraction of the neural trajectories. Note that this notion of dimensionality differs from the intrinsic dimensionality of the neural manifold, which is usually much smaller.
To derive an analytical expression for dimensionality, the authors note that there is an inherent biological limit to how fast the neural trajectory can evolve as a function of the task parameters. Concretely, consider the response of a population of visual neurons to an oriented bar. As you change the orientation from 0 to $\pi$, the activity of the neural population will likely change too. If $\vartheta$ denotes the minimum change in orientation required to induce an appreciable change in the population activity (i.e. the width of the autocorrelation in the population activity pattern), then the population will be able to explore roughly $\pi/\vartheta$ linear dimensions. Of course, the scale of autocorrelation will differ across brain areas (presumably increases as one goes from the retina to higher visual areas), so the neural dimensionality would depend on the properties of the population being sampled, not just on the task design. Similar reasoning applies to other task parameters such as time (yes, they consider time as a task parameter because, after all, neural activity is variable in time). If you wait for time period $T$, the dimensionality will be roughly equal to $T/\tau$ where $\tau$ is now the width of temporal autocorrelation. For the general case of $K$ different task parameters, they prove that neural dimensionality $D$ is ultimately bounded by (even if you record from millions of neurons):
$\displaystyle \LARGE D \le C\frac{\prod_{k=1}^{K}{L_k}}{\prod_{k=1}^{K}{\lambda_k}} \qquad \qquad (1)$
where $L_k$ is the range of the $k^{th}$ task parameter, $\lambda_k$ is the corresponding autocorrelation length and $C$ is an $O(1)$ constant which they prove is close to 1. The numerator and denominator depend on task design and smoothness of neural dynamics respectively, so they label the term on the right-hand side neural task complexity (NTC). This terminology was a source of confusion among some of us as it appears to downplay the fundamental role of the neural circuit properties in restricting the dimensionality, but its intended meaning is pretty clear if you read the paper.
To derive the NTC bound, the authors assume that the neural response is stationary in the task parameters and that the joint autocorrelation function factorises as a product of the individual task parameters’ autocorrelation functions, and then show that the above bound becomes weak when these assumptions do not hold for the particular population being studied. The proof was also facilitated in part by a clever choice of the definition of dimensionality: the ‘participation ratio’ $={\left (\sum_i \mu_i \right )^2}/{\left (\sum_i \mu_i^2 \right )}$, where $\mu_i$ are the eigenvalues of the neuronal covariance matrix, instead of the more common but analytically cumbersome measure based on the ‘fraction $x$ of variance explained’, $= \min\left\{ D : \left( \sum_{i=1}^{D} \mu_i \right) \big/ \left( \sum_i \mu_i \right) \geq x \right\}$; they demonstrate that their choice is reasonable.
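To make the two dimensionality measures concrete, here is a small numeric sketch (the class name and the eigenvalue spectrum are made up for illustration): it computes the participation ratio and the ‘fraction $x$ of variance explained’ dimensionality for the same hypothetical eigenvalues.

```java
import java.util.Arrays;

public class Dimensionality {
    // participation ratio: (sum mu_i)^2 / (sum mu_i^2)
    static double participationRatio(double[] mu) {
        double s = 0, s2 = 0;
        for (double m : mu) { s += m; s2 += m * m; }
        return s * s / s2;
    }

    // smallest D whose top-D eigenvalues explain at least fraction x of the variance
    static int fractionVarianceDim(double[] mu, double x) {
        double[] sorted = mu.clone();
        Arrays.sort(sorted);                 // ascending order
        double total = 0;
        for (double m : sorted) total += m;
        double acc = 0;
        int d = 0;
        for (int i = sorted.length - 1; i >= 0; i--) {  // take largest eigenvalues first
            acc += sorted[i];
            d++;
            if (acc / total >= x) break;
        }
        return d;
    }

    public static void main(String[] args) {
        double[] mu = {4.0, 1.0, 1.0, 1.0, 1.0};        // hypothetical spectrum
        System.out.println(participationRatio(mu));      // (8^2)/20 = 3.2
        System.out.println(fractionVarianceDim(mu, 0.8));
    }
}
```

For the spectrum {4, 1, 1, 1, 1} the participation ratio is $64/20 = 3.2$, while explaining 80% of the variance needs 4 components; the two measures agree in order of magnitude but are not identical.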
Much of the discussion in our journal club centred on whether equation (1) is just circular reasoning, and whether we really gain any new insight from this theory. This view was somewhat understandable because the authors introduce the paper by promising to present a theory that explains the origin of the simplicity betrayed by the low dimensionality of neural recordings… only to show us that it emerges from the specific way in which neural populations respond (smooth dynamics $\approx$ large denominator) to specific tasks (low complexity $\approx$ small numerator). Although this result may seem qualitatively trivial, the strength of their work lies in making our intuitions precise and packaging them in the form of a compact theorem. Moreover, as shown later in the paper, knowing this bound on dimensionality can be practically helpful in determining how many neurons to record. Before discussing that aspect, I’d like to briefly dwell a little bit on a potentially interesting corollary and a possible extension of the above theorem.
Based on the above theorem, one can identify three regimes of dimensionality for a recording size of $M$ neurons:
(i) $D\approx M;\ D\ll NTC$
(ii) $D\approx NTC;\ D\ll M$
(iii) $D\ll M;\ D\ll NTC$
The first two regimes are pretty straightforward to interpret. (i) implies that you might not have sampled enough neurons, while (ii) means that the task was not complex enough to elicit richer dynamics. The authors call (iii) the most interesting and say ‘Then, and only then, can one say that the dimensionality of neural state space dynamics is constrained by neural circuit properties above and beyond the constraints imposed by the task and smoothness of dynamics alone’. What could those properties be? Here, it is worth noting that their theory takes the speed of neural dynamics into account, but not the direction. Recurrent connections, for example, might prevent the neural trajectory from wandering in certain directions thereby constraining the dimensionality. Such constraints may in fact lead to nonstationary and/or unfactorisable neuronal covariance, violating the conditions that are necessary for dimensionality to approach NTC. Although this is not explicitly discussed, they simulate a non-normal network to demonstrate that its dimensionality is reduced by recurrent amplification. So I guess it must be possible to derive a stronger theorem with a tighter bound on neural dimensionality by incorporating the influence of the strength and structure of connections between neurons.
NTC is a bound on the size of the linear subspace within which neural activity is mostly confined. But even if NTC is small, it is not clear whether we can accurately estimate the neural trajectory within this subspace simply by recording $M$ neurons such that $M\gg NTC$. After all, $M$ is still only a tiny fraction of the total number of neurons in the population $N$. To explore this, the authors use the theory of random projection and show that it is possible to achieve some desired level of fractional error $\epsilon$ in estimating the neural trajectory by ensuring:
$\displaystyle M(\epsilon)=K\left[O(\log NTC)\ +\ O(\log N)\ +\ O(1)\right]\ \epsilon^{-2} \qquad \qquad (2)$
where $K$ is the number of task parameters. This means that the demands on the size of the neural recording grow only linearly in the number of task parameters and logarithmically (!!) in both NTC and $N$. Equation (2) holds as long as the recorded sample is statistically homogeneous with the rest of the neurons, a restriction that is guaranteed for most higher brain areas provided the sampling is unbiased i.e. the experimenter does not cherry-pick which neurons to record/analyse. The authors encourage us to use their theorems to obtain back-of-the-envelope estimates of recording size and to guide experimental design. This is easier said than done, especially when studying a new brain area or when designing a completely new task. Nevertheless, their work is likely to push the status quo in neuroscience experiments by encouraging experimentalists to move boldly towards more complex tasks without radically revising their approach to neural recordings.
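Ignoring the unknown $O(1)$ prefactors in equation (2) (set to 1 purely for illustration; the paper's actual constants differ), such a back-of-the-envelope estimate looks like this:

```java
public class RecordingSize {
    // rough M(eps) from equation (2), with all O(1) constants set to 1
    static double neuronsNeeded(int K, double ntc, double n, double eps) {
        return K * (Math.log(ntc) + Math.log(n) + 1.0) / (eps * eps);
    }

    public static void main(String[] args) {
        // hypothetical numbers: 2 task parameters, NTC = 100, N = 10^8 neurons, 10% error
        System.out.printf("%.0f neurons%n", neuronsNeeded(2, 100, 1e8, 0.1));
        // a few thousand neurons suffice despite N = 10^8, thanks to the log dependence
    }
}
```

With these made-up numbers the estimate comes out to a few thousand neurons, illustrating the logarithmic dependence on both NTC and $N$.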
|
# 242. Valid Anagram
This question is similar to question 205 (Isomorphic Strings), and we can likewise use the characters' ASCII values, via a counting array, to solve it.
Solution1
public boolean isAnagram(String s, String t) {
//different length,just return false
if (s.length() != t.length()) {
return false;
}
//26 letters
int[] ch = new int[26];
//traversal s and t
for (int i = 0; i < s.length(); i++) {
ch[s.charAt(i) - 'a']++;
ch[t.charAt(i) - 'a']--;
}
//traversal the arrays
for (int i = 0; i < 26; i++) {
if (ch[i] != 0) {
return false;
}
}
return true;
}
But this question also asks what to do if the input strings contain Unicode characters. In that case we have to use a HashMap, because a HashMap can count every distinct character in the string.
Solution2
public boolean isAnagram(String s, String t) {
//different length,just return false
if (s.length() != t.length()) {
return false;
}
//character is the charAt,integer is the number of the character
HashMap<Character, Integer> hashMap = new HashMap<>();
for (int i = 0; i < s.length(); i++) {
char ch = s.charAt(i);
//if is the first time find the character,so use the defaultValue 0 and plus 1
hashMap.put(ch, hashMap.getOrDefault(ch, 0) + 1);
}
for (int i = 0; i < t.length(); i++) {
char ch = t.charAt(i);
//and minus 1
hashMap.put(ch, hashMap.getOrDefault(ch, 0) - 1);
//it means they are different string,just return false
if (hashMap.get(ch) < 0) {
return false;
}
}
return true;
}
# 257. Binary Tree Paths
This question is very similar to question 144 (Binary Tree Preorder Traversal): it also traverses the tree. You only need to record the nodes found on the way down and detect when you reach a leaf node.
So we use two Java classes: an ArrayList to store all complete paths, and a LinkedList to store the current path.
Solution
/**
* store all paths
*/
ArrayList<String> result = new ArrayList<String>();
/**
* store the current path
*/
LinkedList<String> path = new LinkedList<String>();
public List<String> binaryTreePaths(TreeNode root) {
traverse(root);
return result;
}
/**
* just like Binary Tree Preorder Traversal
*/
private void traverse(TreeNode node) {
//if node is null,just return
if (node == null) {
return;
}
//String.valueOf makes the value a String,add it into the current path
path.add(String.valueOf(node.val));
//reached a leaf node,record the complete path
if (node.left == null && node.right == null) {
result.add(String.join("->", path));
}
if (node.left != null) {
traverse(node.left);
}
if (node.right != null) {
traverse(node.right);
}
//Delete and return the last element
path.removeLast();
}
# 258. Add Digits
This problem has two solutions. The first is simple math: repeatedly sum the digits, updating num each time a pass over its digits finishes.
Solution1
public int addDigits(int num) {
int result = 0;
while (num > 0) {
//get every digit
result += num % 10;
num /= 10;
//if num is 38,so result is 11,we need to update the num and the result
if (num == 0 && result > 9) {
num = result;
result = 0;
}
}
return result;
}
The second solution is very clever, because the follow-up asks us to avoid the loop and solve it in $O(1)$ time. We all know that in decimal addition, 9 plus 1 carries to 10. So here is an example.
• $18 = 10 + 8$. If num = 18, the result is 9
• $18 = 9 + 1 + 8 = 9 + 9$
• $38 = 10 + 10 + 10 + 8$. If num = 38, the result is 2
• $38 = 9 + 1 + 9 + 1 + 9 + 1 + 8 = 9 + 9 + 9 + 9 + 2$
So we found the pattern
• If the number is divisible by 9 (no remainder), the result is 9
• Otherwise, the result is the remainder after dividing by 9
Solution2
public int addDigits(int num) {
if (num == 0) {
return 0;
}
if (num % 9 == 0) {
return 9;
} else {
return num % 9;
}
}
# 263. Ugly Number
This problem is a very easy math problem. As we all know, if n > 0 and n is an ugly number, then $n=2^a\times3^b\times5^c$; if a, b, and c are all 0, then n is 1. Because multiplication is commutative, it does not matter which factor we divide out first. If a remainder other than 0 is left over after removing all factors of 2, 3, and 5, it's not an ugly number.
Solution
public boolean isUgly(int n) {
//n<=0 are all false
if (n <= 0) {
return false;
}
//n will become 1 at last
while (n > 1) {
if (n % 2 == 0) {
n /= 2;
continue;
}
if (n % 3 == 0) {
n /= 3;
continue;
}
if (n % 5 == 0) {
n /= 5;
continue;
}
//it's not ugly number
if (n % 2 != 0 && n % 3 != 0 && n % 5 != 0) {
return false;
}
}
return true;
}
# 268. Missing Number
This problem is an easy math problem. One constraint in the problem is that all numbers in nums are unique (the range [0, n] with one number missing). So we can use Gauss summation: from the length of nums we can calculate the expected sum of all the elements, then subtract each element; what remains is the answer.
Solution
public int missingNumber(int[] nums) {
//calculate the expected sum 0 + 1 + ... + n via Gauss summation
int n = (1 + nums.length) * nums.length / 2;
//subtract all the elements
for (int num : nums) {
n -= num;
}
return n;
}
# 278. First Bad Version
The test point of this question is binary search. If you want to understand the binary search method carefully, you can read my blog LeetCode EasyTopic Notes2. We just need to determine whether an element is a bad version, and apply binary search.
Solution
public int firstBadVersion(int n) {
int left = 1;
while (left < n) {
int mid = left + (n - left) / 2;
if (!isBadVersion(mid)) {
left = mid + 1;
} else {
n = mid;
}
}
return n;
}
# 283. Move Zeroes
This problem has two easy solutions. It requires us to do the job in-place, without making a copy of the array.
Solution1
We can traverse the array twice. Beginning from the head, whenever an element isn't 0, move it toward the head. The non-zero elements then all sit at the head, and we set all tail elements to 0.
public void moveZeroes(int[] nums) {
int n = 0;
//let [0,n) hold all the non-zero elements
for (int i = 0; i < nums.length; i++) {
if (nums[i] != 0) {
nums[n] = nums[i];
n++;
}
}
//[n,nums.length) are all 0
for (int i = n; i < nums.length; i++) {
nums[i] = 0;
}
}
Solution2
We can solve it like a quicksort partition, with 0 as the demarcation point: whenever an element is not 0, exchange it with the element at the boundary of the non-zero prefix.
public void moveZeroes(int[] nums) {
int n = 0;
for (int i = 0; i < nums.length; i++) {
//while the element is not 0,exchange with the previous element
if (nums[i] != 0) {
int temp = nums[i];
nums[i] = nums[n];
nums[n++] = temp;
}
}
}
# 290. Word Pattern
This question is a harder version of question 205. To solve it you should first understand 205; you can read my notes about 205 in LeetCode EasyTopic Notes6.
For this question we also use a HashMap and a HashSet, and the main idea is similar to 205. We iterate over the pattern and verify in turn whether each letter has appeared. If it has appeared but the records on the two sides differ, just return false; otherwise update the record.
Solution
public boolean wordPattern(String pattern, String s) {
//such as <a,dog> <c,cat>
HashMap<Character, String> pMap = new HashMap<>();
//such as <dog> <cat>
HashSet<String> sSet = new HashSet<>();
//Split the string on spaces into an array,such as array[0] is dog,array[1] is cat
String res[] = s.split(" ");
//if the lengths are not equal,just return false
if (res.length != pattern.length()) {
return false;
}
//traversal the pattern
for (int i = 0; i < pattern.length(); i++) {
//if the letter has appeared
if (pMap.containsKey(pattern.charAt(i))) {
//compare whether the current array element is the same as the recorded one
if (!res[i].equals(pMap.get(pattern.charAt(i)))) {
return false;
}
} else {
//the letter appears for the first time,so try to add the array element to the set
//if the element was already added for another letter,just return false
if (!sSet.add(res[i])) {
return false;
}
//HashMap put the new pair,such as <s,ship>
pMap.put(pattern.charAt(i), res[i]);
}
}
return true;
}
# 292. Nim Game
This is a very interesting question. When I first read it, I thought it called for a recursive solution.
I did some reasoning.
• Person A: on his turn he can take 1, 2, or 3 stones
• Person B: on his turn he can take 1, 2, or 3 stones
After A has taken his stones, the problem reduces to B facing n-1, n-2, or n-3 stones. Once n <= 3, the next person to move will win. So I wrote this code first.
if (n <= 3) {
return true;
}
And if the number is 5, person A can choose how many to take. He should take exactly 1, so that n becomes 4; then no matter how many person B takes, person A will always win.
So person B is guaranteed to win when, no matter how many person A takes, the position B faces is still a winning one; in that case we must return false. Conversely, A wins whenever at least one of his moves leaves B in a losing position. From this you can write the whole recursive solution.
Solution1
public boolean canWinNim(int n) {
//n<=3,it will win,so return true
if (n <= 3) {
return true;
} else {
//if n==4,no matter how many A takes,B will always win.A wins exactly when not all of n-1,n-2,n-3 are wins for the opponent
return !(canWinNim(n - 1) && canWinNim(n - 2) && canWinNim(n - 3));
}
}
But the recursion will exceed the time limit, because n can be as large as $2^{31}-1$ (the constraint is 1<=n<=2^31-1). So I changed the code.
We can find a pattern
• n=1 A take 1,A win
• n=2 A take 2,A win
• n=3 A take 3,A win
• n=4
• A take 1,B take 3,B win
• A take 2,B take 2,B win
• A take 3,B take 1,B win
• n=5
• A take 1,n=4,A always win
So this question is a classic game problem. The code is very easy and interesting.
Solution2
public boolean canWinNim(int n) {
return n % 4 != 0;
}
Last modification:October 6th, 2021 at 09:38 pm
|
# Would spinning salt emit radiation?
It wouldn't need to be salt.
Basically I was initially thinking about a mechanical transmitter, essentially just taking two equal opposite charges and fixing them to the opposite ends of a pole. Then you spin the pole around its center (like a baton twirler) and it will emit some (mostly dipole) radiation.
Take an identical setup and fix the centers of the two poles together so we have a cross and spin that in the plane of the cross (picture a tire iron), and we should have a decent quadrupole moment and thus some quadrupole radiation (supposing you could spin it fast enough).
This made me consider iterations of this until you have a ring of alternating charges side by side spinning in a circle. Naturally I thought of table salt (some ionic crystal or another). So classically I get that it should radiate, but since it's a bound quantum state I'm not sure? Also, I imagine the lowest surviving multipole moment would be of ridiculously high order for such a setup (probably making the radiation exceedingly weak and hard to detect?).
Sorry no pictures or equations, I'm tired
Anyway, for those interested I was initially considering how the moment of inertia of our initial pole would end up being dependent upon rotation speed.
• My feeling is that (1) the radiation is too weak for detection and (2) each dipole of the salt is not spinning coherently; the EM waves probably destructively interfere with one another. – K_inverse Dec 4 '18 at 12:16
• This is not an answer since I don't know the answer, but have you tried estimating how fast it should fall off? If there are $10^{23}$ ions then you're going to have radiation going something like $(a/r)^{10^{23}}$ where $a$ is the lattice spacing, I think. – jacob1729 Dec 4 '18 at 12:20
• @jacob1729 I was thinking I could model it as a bunch of superimposed dipole antennas placed at small angles to one another. From that perspective certain frequencies (ie rotation speeds), might either reinforce or partly cancel different charge spacings I was thinking (I'll have to check it out). – R. Rankin Dec 4 '18 at 12:25
• To put it simpler: no, salt is electrically neutral. – my2cts Dec 4 '18 at 12:25
• @my2cts and so is the initial baton; by that logic that wouldn't radiate either, but that definitely shouldn't be the case. The vanishing of the monopole plays no part in whether it radiates. But that's the interesting thing. If it doesn't radiate, what's the length scale where it stops? – R. Rankin Dec 4 '18 at 12:30
First, note that you are using a classical approximation for both the composition of a salt and the nature of electromagnetic radiation. This approximation breaks down both at very small length scales (where the distribution of the electron clouds within the salt becomes important) and at very low intensities of radiation (where the discrete nature of electromagnetic radiation becomes important).
Also, note that even in the classical approximation, you cannot consider the radiation of an individual charge in isolation from its environment. In the radiation zone (namely, at distances much larger than both the spacing between charges and the wavelength derived from the frequency of oscillations), the contribution of every charge in the sample is important (since they're all at nearly the same distance from a point in the radiation zone), and the total radiation is the sum of the oscillations in the field from all of the charges.
If you pick a charge in the middle of a salt, you can find an opposite neighboring charge, which means you have an electric dipole (and, in particular, the monopole moment for this distribution is zero). For this dipole, you can find a neighboring dipole pointing in the opposite direction, which means you have an electric quadrupole (and, in particular, the monopole and dipole moments for this distribution are zero). For this quadrupole, you can find a neighboring oppositely-oriented quadrupole, which means you have an electric octopole (and in particular, the monopole, dipole, and quadrupole moments for this distribution are zero). You can continue this process until you reach the end of the salt crystal. For a salt crystal consisting of $$N$$ ions, you may have a nonzero electric $$2^{\lfloor \log_2 N\rfloor}$$-pole moment, while all lower moments are zero.
It turns out that in the radiation zone, the radiation from an electric $$2^\ell$$-pole is suppressed by a factor of $$1/(1+2\ell)!!$$ relative to an electric monopole (source: https://en.wikipedia.org/wiki/Multipole_radiation). For a macroscopic crystal with $$N=10^{24}$$ ions, this corresponds to $$\ell=79$$, which means that the salt may have a nonzero $$2^{79}$$-pole moment, whose radiation is suppressed by a factor of $$3/159!!\approx10^{-141}$$ relative to your dipole. This means the radiation is certainly undetectable and would break the classical approximation even if it was detectable.
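The $$3/159!!\approx10^{-141}$$ estimate is easy to reproduce numerically. Here is a short sketch (mine, not part of the original answer) using the identity $$(2k+1)!! = (2k+1)!/(2^k\,k!)$$ so the double factorial can be evaluated in log space:

```python
import math

def log10_odd_double_factorial(n):
    """log10 of n!! for odd n, via (2k+1)!! = (2k+1)! / (2^k * k!)."""
    k = (n - 1) // 2
    ln_df = math.lgamma(n + 1) - k * math.log(2.0) - math.lgamma(k + 1)
    return ln_df / math.log(10.0)

# A 2^ell-pole is suppressed by 1/(2*ell + 1)!!; the dipole (ell = 1) by 1/3!! = 1/3.
ell = 79  # from N = 10**24 ions: ell = floor(log2(N))
log10_ratio = math.log10(3) - log10_odd_double_factorial(2 * ell + 1)
print(round(log10_ratio))  # about -141
```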
• That's kind of what I thought, also as @jacob1729 mentioned the lattice parameter would be featured to a large power. Thank you. – R. Rankin Dec 5 '18 at 0:12
• Regarding the breakdown of classical radiation formula in small charges, what does that imply about a single electron (or just a small group) in a cyclotron. Does it still emit cyclotron radiation? – R. Rankin Dec 5 '18 at 0:24
• @R.Rankin The lattice parameter is not featured to a large power at all. The lattice parameter is nowhere in the above argument whatsoever, except for setting the length scale where the radiation zone starts and being large enough that the classical approximation for the salt's composition still holds. The suppression is purely a function of the number of ions present in the crystal. If you somehow slightly shrank or lengthened the lattice spacing in the crystal, the suppression in the radiation zone would be the same. – probably_someone Dec 5 '18 at 0:45
• @R.Rankin The argument that jacob1729 mentioned holds only for static fields, not for radiation. Contributions to radiation can only die off as $1/r^2$ with distance (as any higher power in inverse distance doesn't carry any energy to infinity, and lower powers are not possible). The suppression occurs in the coefficients that multiply the various multipole contributions to the radiation, and it is these coefficients that are suppressed by $1/(1+2\ell)!!$, where $\ell$ is purely a function of the number of ions (as their arrangement is set by the crystal structure). – probably_someone Dec 5 '18 at 0:49
• @R.Rankin The breakdown in the classical approximation does not occur for small charges, nor did I say that it did. The reason that it occurs for small length scales, specifically in a salt crystal, is that the positive and negative ions are actually composed of a central nucleus surrounded by a delocalized electron cloud, which must be treated using quantum mechanics. Single electrons are point particles, so there is no internal length scale that matters except the one set by the frequency of the oscillations. – probably_someone Dec 5 '18 at 0:55
In order to radiate, a system must have an oscillating dipole (or higher multipole) moment. Salt does not consist of discrete molecules, but rather of positive and negative ions arranged so that the moments cancel out very accurately in macroscopic crystals. Some other substances (e.g., quartz) do consist of oriented dipoles, and you might think that crystals would present bound surface charges and dipole moments analogous to permanent magnetic moments. (You could call them electrets.) However, the surface charges attract free charges that neutralize them, at least when in equilibrium. Such substances turn out to be piezoelectric instead.
• But microscopically, there are charges, which are accelerating. Why should not they radiate? – Archisman Panigrahi Dec 4 '18 at 12:57
• @ArchismanPanigrahi For the same reason that an ordinary electric dipole ceases to radiate as the dipole moment $\vec{p}\to 0$ (i.e. when you place one charge essentially directly on top of the other). The radiation at macroscopic distances is not determined by whether charges are accelerating, but rather, what the moments of the charge distribution are doing, and the moments cancel quite nicely on a macroscopic scale. – probably_someone Dec 4 '18 at 13:13
|
# In the given graph, the difference between the x-coordinates of P and Q is equivalent to -
1. Power
2. Work
3. Potential energy
4. All of the above
Option 2 : Work
## Detailed Solution
The correct answer is option 2) i.e. Work
CONCEPT:
• Work-energy theorem: The work-energy theorem states that the net work done by the forces on an object equals the change in its kinetic energy.
Work done, $$W = Δ KE = \frac{1}{2}mv^2 - \frac{1}{2}mu^2$$
Where m is the mass of the object, v is the final velocity of the object and u is the initial velocity of the object.
EXPLANATION:
• The x-axis of the given graph has values of kinetic energy.
• The difference in x-coordinates of P and Q is $$KE_Q - KE_P$$, i.e. the change in kinetic energy.
• From the work-energy theorem, we know that the change in kinetic energy equals net work done.
• Hence, the difference between the x-coordinates of P and Q is work.
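As a quick numerical check of the theorem (a generic sketch, not part of the original solution): a 2 kg object accelerating from 3 m/s to 5 m/s requires $$W = \frac{1}{2}(2)(5^2) - \frac{1}{2}(2)(3^2) = 16$$ J of net work:

```python
def work_done(m, u, v):
    # Work-energy theorem: W = ΔKE = ½·m·v² − ½·m·u²
    return 0.5 * m * v ** 2 - 0.5 * m * u ** 2

print(work_done(2.0, 3.0, 5.0))  # 16.0 (joules)
```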
|
Mar 5 2014
# RFduino without RFduino code
The RFduino was a successful Kickstarter project. Thanks to the fact that it basically is not much more than the nRF51822 chip by Nordic, its form factor is very small. Additional buttons, LEDs, sensors, etc. can be added via boards that precisely fit its connectors.
The advantage of the RFduino is that, on top of the nRF51822, it provides FCC and CE certification, has a well-thought-out antenna design, and has many boards that function as shields. It is created by the company RF Digital under the module name RFD51822 and later the RFD22102. Note that there are other third-party Bluetooth Low Energy vendors listed on the Nordic site that have these advantages as well. Besides that, the MBDT40 from Raytac Corp, for example, has all 31 GPIO pins available, while the RFduino only has 7 GPIO pins.
Anyway, interestingly, the RFduino code is not open-source, or at least most of it is not. This is awkward, given that the guys from RF Digital started the website to indicate their involvement. The RFduino first came with a simple means to upload a binary, namely the same tool used with most other Arduino boards: __avrdude__. Regrettably, after learning that changes to the open-source avrdude code must be made open-source as well, the developer behind RFduino took all his sites offline (GitHub as well as the forums) and came back online with a proprietary upload tool. A very strange move. The bootloader that resides on the RFduino was made proprietary as well; it is not even available in binary form. Interestingly, I saw some wrapper code between the Arduino code and the SoftDevice libraries from Nordic. If anything, only those header files would need to be kept private. On the Nordic forums the employees do not know of any reason why the flash tool or the bootloader has to be proprietary. It seems to run counter to any business sense: the more ways the RFD22102 boards can be programmed, the more of them will be sold.
## Programming it ourselves…
There is a lot of information on programming the nRF51822 on the Nordic website and the forums. Most of it, however, can only be obtained by buying a development or evaluation kit. The development kit comes with a so-called J-Link programmer from Segger, the J-Link LITE CortexM to be precise. Connecting it to the RFduino is not hard. In the following picture you can see how a little breadboard is enough. Here I just took a 9-pin FTSH Samtec connector we had lying around from a previous project (FireSwarm, a swarm of flying robots to find a dune fire). And there is no color coding whatsoever here!
The J-Link comes with a connector with 9 pins: one pin (pin 7) of the 10 positions is removed to give some asymmetry, very convenient! The pin layout is like this:
VTref 1 * * 2 SWDIO / TMS
GND 3 * * 4 SWCLK / TCK
GND 5 * * 6 SWO / TDO
-- 7 o * 8 TDI
NC 9 * * 10 nReset
So, power the RFduino from an external source at 3.3V. The 3V and GND pins are nicely indicated on the RFduino. And then there are only three pins you have to connect. VTref measures whether the RFduino actually has enough power and must be connected to the 3V pin. The two other pins to connect are the SWDIO and the SWCLK pins. SWDIO is connected to the RESET pin on the RFduino, SWCLK to the FACTORY pin.
Now, if you download the code at github you will get a project with a Makefile that calls scripts in the scripts directory. Most of the code is thanks to Christopher Mason, only the adaptations to support RFduino are mine. To program with the J-Link, you will need the JLinkExe binary and for debugging the JLinkGDBServer binary. You can download them from segger.com.
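For reference, flashing through JLinkExe is usually driven by a J-Link Commander script. The file name, binary name, and flash offset below are my assumptions (the application start address depends on the SoftDevice version, so check the Nordic documentation), not values taken from the repository; treat this purely as a sketch:

```
// flash.jlink -- run with: JLinkExe -device nrf51822 -if swd -speed 4000 -CommanderScript flash.jlink
r                        // reset and halt the target
loadbin app.bin 0x14000  // write the application behind the SoftDevice (offset is SoftDevice-dependent)
r                        // reset again
g                        // start execution
q                        // quit JLinkExe
```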
The current code requires a lot more love, but the beginning is there. In this movie you can see how the LED on the RGB RFduino shield reacts to the signal strength of an Android smartphone.
Note, that if you use this code and flash the RFduino, there are two things you will have to keep in mind. First, the SoftDevice, in this case the S110, is proprietary. It comes with the development kit (of 100 bucks) from Nordic. You will not be able to use Bluetooth without it. I would not recommend starting to program your RFduinos without buying it. Second, you won’t be able to get back to the standard RFduino software. This would require the RFduino people to make certain information public, especially where it expects the SoftDevice and how it interfaces with it. This is not the same as providing open-source software, but also this information is not available. So, consider this a one-way direction. :-)
Now we are in full control of our RFduinos. We can create our own Bluetooth characteristics, services, etc. We know which timers are used. We can take full advantage of the system that ARM uses to have peripheral devices communicate with each other without using the CPU, for example. Besides the well-known interrupts, they use entities like “events” and “tasks” to do this, pretty neat.
For us, we’d like to experiment with the new SoftDevice, the S120. In contrast to the S110, this SoftDevice allows mixing central and observer roles. This means it becomes possible to develop wireless-sensor-network types of functionality. What is also really interesting is its support for wireless charging. More can be read in Nordic’s press release.
So, why use the RFduino at all? Its advantages are still there: certification, many extension boards, and a nice antenna design. We would like to concentrate on very rapid prototyping of services, such as a Lost & Found service, rather than spending too much time on the electronics itself.
## Crownstone
Make sure you take a look at our Crownstone offering. This is directly based on the nRF51822 and open-source for real. :-) So, this uses the code at github for BlueNet as indicated above. If you want to have more details on how to program the different SoftDevice versions from Nordic etc., feel free to file an issue there. Also, look around if you want to get more information on Bluetooth Low-Energy in general, as for example in this blog post about Linux and BLE or on the iBeacon-type of device we built (with respect to software!) for WOTS. To be clear, the services on top of the Crownstone that require a larger machine learning and artificial intelligence component will not be open-source. If you think we do not communicate that properly, feel free to suggest improvements in wording!
|
# How can I evaluate the congruency of an AKS primality test?
Despite the fact that primality testing is a mathematical issue, it plays a part in the security of many cryptosystems such as RSA. I was trying to understand how it works until I came to the following congruence:
$(X+a)^p \equiv X^p + a \pmod{X^r - 1,\; p}$
The above reduces evaluating the initial congruence $(X+a)^p \equiv X^p + a \pmod{p}$ to polynomials with fewer coefficients (degree less than $r$). How do we evaluate the above one?
This is never "evaluated" as such. The above equation must hold for certain $a$ for the number to be prime; otherwise it's composite.
See the algorithm on Wikipedia to see exactly for which $a$ the equation is tested.
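To make this concrete, here is a small sketch (mine, not from the question or answer) that evaluates $(X+a)^n \bmod (X^r - 1,\, n)$ by square-and-multiply on coefficient arrays of length $r$, then compares against $X^{n \bmod r} + a$:

```python
def polymul(A, B, r, n):
    # Multiply two polynomials (coefficient lists of length r) mod (X^r - 1, n);
    # reducing mod X^r - 1 just wraps exponents around with (i + j) % r.
    C = [0] * r
    for i, ai in enumerate(A):
        if ai:
            for j, bj in enumerate(B):
                if bj:
                    k = (i + j) % r
                    C[k] = (C[k] + ai * bj) % n
    return C

def aks_congruence_holds(n, r, a):
    # Left side: (X + a)^n mod (X^r - 1, n), by square-and-multiply.
    base = [0] * r
    base[0] = a % n
    base[1 % r] = (base[1 % r] + 1) % n  # the polynomial X + a
    result = [0] * r
    result[0] = 1 % n                    # the polynomial 1
    e = n
    while e:
        if e & 1:
            result = polymul(result, base, r, n)
        base = polymul(base, base, r, n)
        e >>= 1
    # Right side: X^(n mod r) + a, with coefficients reduced mod n.
    target = [0] * r
    target[0] = a % n
    target[n % r] = (target[n % r] + 1) % n
    return result == target

print(aks_congruence_holds(13, 5, 2))  # True: 13 is prime
print(aks_congruence_holds(12, 5, 2))  # False: 12 is composite
```

Note that a single $a$ passing does not prove primality; the AKS algorithm checks the congruence for a specific range of $a$ values.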
|
# Calc6.25
Discuss the convergence or divergence of the series $\sum _{{n=1}}^{{\infty }}{\frac {n+1}{n\times 3^{{n+1}}}}$.
Let us compare this series with the geometric series $\sum _{{n=1}}^{{\infty }}\left({\frac {1}{3}}\right)^{n}$.
$\lim _{{n\rightarrow \infty }}{\frac {{\frac {1}{3^{n}}}}{{\frac {n+1}{n3^{{n+1}}}}}}=\lim _{{n\rightarrow \infty }}{\frac {3n}{n+1}}=3\,$
Since this limit is finite and positive, and the comparison series is a convergent geometric series, the given series converges by the limit comparison test.
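A quick numerical sanity check (added here, not part of the original solution): splitting the general term as $\frac{1}{3^{n+1}} + \frac{1}{n\,3^{n+1}}$ gives the closed-form sum $\frac{1}{6} + \frac{1}{3}\ln\frac{3}{2} \approx 0.3018$, which the partial sums approach quickly:

```python
import math

def term(n):
    return (n + 1) / (n * 3 ** (n + 1))

partial = sum(term(n) for n in range(1, 60))
closed_form = 1 / 6 + math.log(1.5) / 3  # from splitting the term into two known series
print(round(partial, 6), round(closed_form, 6))  # both ≈ 0.301822
```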
|
# Tutorial¶
## The ChiantiPy Approach¶
Python is a modern, object-oriented programming language. It provides a number of features that are useful to the programmer and the end user, such as classes with methods (function-like) and attributes (data), as well as plain functions, among other things. ChiantiPy has been constructed so that the primary means to calculate the spectral properties of ions and groups of ions is by way of Python classes.
More detailed information can be found in the API Reference.
### ChiantiPy Classes¶
There are 7 basic classes that are provided by ChiantiPy.
ion
this class is very useful in itself and is the basic unit employed by all of the other classes
continuum
for calculating the free-free (bremstrahlung), free-bound (radiative recombination) continuum as well as the radiative loss rates due to these processes.
bunch
allows the user to specify a bunch of ions and to calculate the radiative properties of the selected ions as a group. The ions can be specified by a list of individual ions, a list of elements, or by a minimum elemental abundance. The properties of each ion are available, as with members of the ion class. Among other things, the ratios of lines of different ions and elements can be calculated and then displayed.
spectrum
the spectrum class calculates the intensities of the lines and continuum and then convolves the complete spectrum with a filter, such as a Gaussian of specified width
there are actually 3 spectrum classes. Two of these allow the use of multiple cpu cores to speed the calculation. The basic spectrum class does not use multiple processes and is therefore compatible with most Python environments
mspectrum
mspectrum duplicates the calculations of the spectrum class but employs the Python multiprocessing package in order to use multiple cpu cores to calculate the spectrum. This class can be used in a basic Python shell, in a Python script, or in an IPython terminal. It cannot be used in either the Jupyter qtconsole or the Jupyter notebook.
ipymspectrum
this class employs the IPython ipyparallel module to provide access to multiple cpu cores. It can only be used in the Jupyter qtconsole and the Jupyter notebook.
ioneq
this class allows the user to load and plot the ionization equilibria of a specific element. It can read the ionization equilibrium files in the CHIANTI $XUVTOP/ioneq directory. Different ionization equilibria can be plotted against each other. It is also possible to calculate the ionization equilibria of an individual element using the ionization and recombination rates in the CHIANTI database. The results of this calculation can also be plotted against existing calculations in the CHIANTI $XUVTOP/ioneq directory.
### ChiantiPy Classes, Methods and Attributes¶
Each of the ChiantiPy classes listed above has a number of methods for calculating various properties. The results of these calculations are stored as attributes of the class that has been instantiated (created). In Python, all objects (which includes everything in Python) support introspection, so all methods and attributes can be discovered and used. The IPython terminal and the Jupyter qtconsole both provide their own means of easily displaying the methods and attributes.
Some methods in each class are more useful to the user than others. For example, the populate() method is demonstrated below. However, it is generally not necessary for the user to call the populate() method directly. Methods that need the ion population will check whether the Population attribute is available and, if not, use the populate() method to create it.
The methods that are most likely of interest to users are listed below. All of the available methods are presented and documented in the API section of the ChiantiPy documentation.
ion
popPlot() method
plots the level populations of the top (most highly populated) levels as a function of temperature and/or density.
gofnt() method
an interactive method that plots the most intense lines of an ion in a given wavelength range (wvlRange) and allows the user to select a line, or several lines that will be summed, and then plots the GofT function for the selected lines and saves these values in the Gofnt dictionary as an attribute of the ion object.
emiss() method
calculates the spectral line emissivities of the ion and saves these in the Emiss dictionary as an attribute of the ion object.
intensity() method
calculates the intensity of the specified ion as a function of temperature and density. These properties are saved in the Intensity dictionary, available as an attribute of the ion.
intensityList() method
lists the spectral line intensities in a given wavelength range (wvlRange) in an interactive terminal or notebook
intensityPlot() method
plots the spectral line intensities in a given wavelength range (wvlRange) for the most intense lines.
intensityRatio() method
an interactive method that plots the most intense lines of an ion in a given wavelength range (wvlRange) and allows the user to select a pair of lines, or pairs of lines to be summed, and then plots the intensity ratio as a function of temperature and/or density. The ratio is saved in the IntensityRatio dictionary as an attribute of the ion.
spectrum() method
calculates the spectrum of the ion as a function of wavelength. The spectral line intensities are passed through a selectable filter to simulate the spectrometer line profile. The spectrum is saved in the Spectrum dictionary as an attribute of the ion.
ionizRate() method
calculates the ionization rate coefficient as a function of temperature. The rate coefficient is saved in the IonizRate dictionary as an attribute. Uses the methods diRate() and eaRate() to first calculate the direct and excitation-ionization (ea) rate coefficients and sums them.
recombRate() method
calculates the recombination rate coefficient as a function of temperature. The rate coefficient is saved in the RecombRate dictionary as an attribute. Uses the methods rrRate() and drRate() to first calculate the radiative recombination and dielectronic recombination rate coefficients and sums them.
ioneq
reads a selected, existing ionization equilibrium calculation for a given element and saves it as a numpy array Ioneq as an attribute of the object.
calculate() method
calculates the ionization equilibrium of a selected element from the CHIANTI ionization and recombination rates for a specified temperature(s) and saves it as a numpy array Ioneq as an attribute of the object.
plot() method
plots the loaded or calculated ionization equilibrium. Various parameters can be specified to plot only those aspects that are desired. Can also plot an additional existing ionization equilibrium for comparison
bunch
the init method calculates the spectral line intensities for the selection of ions and saves the information in the Intensity dictionary as an attribute. It does not calculate the continuum.
beyond the init method, the bunch class inherits all of the following methods that are described under the ion class above
intensityList()
intensityPlot()
intensityRatio()
in addition, it inherits the following methods that are described under the spectrum class below
convolve()
lineSpectrumPlot()
spectrumPlot()
spectrum
the init method calculates the spectral line intensities and the continuum due to the free-free (bremstrahlung), free-bound (radiative recombination), and two-photon processes. The line intensities are convolved using the convolve() method (below). The sum is saved in the Spectrum dictionary as an attribute.
beyond the init method, the spectrum class also inherits the same methods as the bunch class, including intensityList(), intensityPlot(), and intensityRatio().
convolve()
convolves the line spectrum with specified filter from ChiantiPy.tools.filters using a specified width.
lineSpectrumPlot()
plots the convolved line spectrum as a function wavelength
spectrumPlot()
plots the spectrum calculated by the init method. The summed (integrated) spectrum can be plotted or the spectrum for a specific temperature can be plotted.
mspectrum
the mspectrum behaves in the same way as the spectrum class except that it invokes the Python multiprocessing module so that the calculations are made using a specified number of cpu cores. mspectrum can not be used in the Jupyter qtconsole or notebook.
ipymspectrum
the ipymspectrum behaves in the same way as the spectrum class except that it invokes the IPython ipyparallel module so that the calculations are made using a specified number of cpu cores. ipymspectrum can only be used in the IPython terminal or the Jupyter qtconsole or notebook.
## The ion class, basic properties¶
Bring up a Python session, or better yet, an IPython session
import ChiantiPy.core as ch
fe14 = ch.ion('fe_14')
The fe14 object is instantiated with a number of methods and data. Methods start with lowercase letters and attributes start with uppercase letters. It is best not to simply import ion as there is a method with the same name in matplotlib. A few examples:
fe14.IonStr
>> 'fe_14'
fe14.Spectroscopic
>> 'Fe XIV'
CHIANTI and spectroscopic notation for the ion
fe14.Z
>> 26
fe14.Ion
>> 14
nuclear charge Z and the ionization stage (in spectroscopic notation) for the ion
fe14.Ip
>> 392.16196
this is the ionization potential in electron volts.
fe14.FIP
>> 7.9023801573028294
this is the first ionization potential (FIP) in electron volts - the ionization potential of the neutral (Fe I).
fe14.Abundance
>> 0.00012589265
fe14.AbundanceName
>> 'sun_photospheric_1998_grevesse'
this is the abundance of iron relative to hydrogen for the specified elemental abundance set. For the ion class, the abundance can be specified by the abundance keyword argument or the abundanceName keyword argument. Otherwise, the abundance is taken from the default abundance set. The specified defaults can be examined by
fe14.Defaults
>> {'abundfile': 'sun_photospheric_1998_grevesse', 'flux': 'energy', 'ioneqfile': 'chianti', 'wavelength': 'angstrom'}
the defaults can be specified by the user in the ~/.chiantirc/chiantirc file. One is included in the distribution but it must be placed in ~/.chiantirc for it to be read. If it is not found, a set of coded default values is used.
fe14.Elvlc.keys()
>> ['ecmth', 'term', 'ref', 'pretty', 'spd', 'ecm', 'j', 'l', 'erydth', 'conf', 'lvl', 'spin', 'eryd', 'mult']
fe14.Elvlc is a dictionary that describes the energy levels of the Fe XIV ion. The key ‘ecm’ provides the energies, relative to the ground level, in inverse cm. The ‘ref’ key provides the references in the scientific literature where the data were provided.
fe14.Elvlc['ref']
>> ['%filename: fe_14.elvlc',
%observed energy levels: Churilov S.S., Levashov V.E., 1993, Physica Scripta 48, 425,
%observed energy levels: Redfors A., Litzen U., 1989, J.Opt.Soc.Am.B 6, #8, 1447,
%theoretical energy levels: Storey P.J., Mason H.E., Young P.R., 2000, A&ASS 141, 28,
%comment,
Only level 16 does not have an observed energy. I have placed in,
the third energy column a recommended value for the energy value of,
this level, based on the theoretical and observed splittings of the,
4F levels. It is this energy value which is used to compute the,
wavelengths of transitions involving level 16 given in the .wgfa,
file.,
%produced as part of the Arcetri/Cambridge/GMU/NRL 'CHIANTI' atomic data base collaboration,
%,
% P.R.Young Feb 99']
If the fe14 ion object had been instantiated (created) with a temperature and an electron density, then many more attributes can be calculated. For example, if the populate() method is used, it creates a dictionary attribute, Population. One thing to remember with Python is that capitalization matters.
import numpy as np
t = 10.**(5.8 + 0.05*np.arange(21))
dens = 1.e+9
fe14 = ch.ion('fe_14', temperature=t, eDensity=dens)
fe14.populate()
fe14.Population.keys()
>>['ci', 'protonDensity', 'popmat', 'eDensity', 'rec', 'population', 'temperature']
fe14.Population['population'].shape
>>(21, 739)
'%10.2e'%(fe14.Temperature[10])
>> ' 2.00e+06'
fe14.Population['population'][10,:5]
>>array([ 8.71775703e-01, 1.27867444e-01, 4.91230626e-09, 4.29120495e-08, 1.35517895e-08])
gives the populations of the first 5 of the 739 levels of Fe XIV at a temperature of 2.00e+06 K
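The session values can be cross-checked in plain Python (no ChiantiPy needed): the shape (21, 739) and the printed temperature at index 10 imply a grid of 21 points spaced by 0.05 dex starting at 10**5.8 K, so that index 10 is 10**6.3 ≈ 2.00e+06 K:

```python
# Rebuild the 21-point temperature grid from the session without numpy
t = [10 ** (5.8 + 0.05 * i) for i in range(21)]
print('%10.2e' % t[10])  # prints "  2.00e+06"
```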
to be continued
|
## Introduction
Due to the curved shape and membrane binding of Bin/Amphiphysin/Rvs (BAR) domains, proteins containing such domains have the interesting ability to reshape membranes. Furthermore, because of their elongated shape, they can align along a preferred direction, adopting a nematic organization that impinges anisotropic curvature on the membrane. In equilibrium and for high concentrations, this leads to the generation of membrane tubes with high curvatures, comparable to BAR intrinsic curvatures (of the order of 10 nm). For instance, incubation of small vesicles with a high concentration of BAR proteins leads to highly curved tubes covered by a dense protein scaffold where the elongated molecules are nematically arranged1,2. In a different system, GUVs with sufficiently high bound protein density rapidly expel thin protein-rich tubes in a tension-dependent manner3. On thin membrane tubes pulled out of giant unilamellar vesicles (GUVs), BAR proteins can also change the radius of the tube and the force required to hold it4. Beyond this well-known paradigm of thin tubes in equilibrium, in many physiological situations, BAR proteins dynamically interact with pre-existing curved membrane templates. Such templates can include for instance invaginations caused by nanoscale topographical features on the cell substrate5, mechanical folds6,7, or endocytic structures8,9. Due to their affinity for curved membranes, BAR proteins are thus bound to sense and reshape such templates in ways that are important in physiological processes10 like endocytosis11, the build-up of caveolar structures12,13, the maintenance of cell polarity14, and the modulation of actin polymerization15. However, how BAR proteins reshape membranes with initial curvatures that can be well below their intrinsic values, with which dynamics, how this depends on initial membrane shape, and what are the implications, remains unknown.
To address this issue, we developed a versatile experimental system combined with theoretical and computational modeling to study the dynamic reshaping of cellular-like membrane structures of a broad range of shapes and sizes. In commonly used systems such as tube pulling assays10 or curved substrates16,17, imposed curvature creates tensed curved structures. In cells, such tensed structures (created either from nanoscale curved topographies or from actin pulling on the membrane) can for instance recruit N-BAR proteins and enhance endocytosis5 or trigger the recruitment of effectors related to actin polymerization15,18. However, both in vitro and in cells tensed structures prevent extensive shape remodeling. In contrast, here we use a different physiologically relevant signal in the form of stretch and release cycles. In our system, we create curved membrane features off a supported lipid bilayer (SLB) by applying a successive lateral stretch and compression. As previously shown in SLBs6 and in cells7, this leads to the storage of excess membrane area in free-standing, low tension, easily reshaped protrusions of tubular or spherical shape. In contrast with tubes pulled out of GUVs, where a tip force and tension are required to stabilize their shape, in our system tubes are stabilized osmotically without a pulling force. To assess the effect of tension, we also control osmolarity to generate tensed spherical caps off the SLB6. These protrusions emerging from a flat SLB can serve as model system for membrane templates such as endocytic buds, or osmotically/mechanically-induced structures.
## Results
### Experimental and theoretical framework
Experimentally, we used the liposome deposition method to form a fluorescently labeled SLB on top of a thin extensible polydimethylsiloxane (PDMS) membrane. To this end, an electron microscopy grid was deposited on top of the PDMS membrane before plasma cleaning, which activated only the uncovered PDMS areas19. An easily identifiable hexagonal pattern was obtained (Fig. 1a), with a fluid SLB formed inside the hexagon (Supplementary Fig. 1) while a lipid monolayer was formed outside. The membrane was placed inside a stretching device previously described7 and mounted on a spinning disk confocal microscope (Fig. 1a). At initial state, the fluid bilayer contained brighter signals coming from non-fused liposomes (Supplementary Fig. 2a). This patterned SLB (pSLB) was then uniformly and isotropically stretched for 120 s (until 5–8% strain), slowly enough to allow liposome incorporation in the fluid bilayer, thereby ensuring membrane integrity (as happens in a cellular membrane through lipid reserve incorporation7). After 120 s, stretch was slowly released during 300 s to a completely relaxed state, and lateral compression led to the formation of highly curved lipid structure in the shape of either lipid buds or lipid tubes (Fig. 1b, Supplementary Fig. 2a and Supplementary Movies 1 and 2). We note that our system is diffraction-limited and not amenable to electron microscopy, and we could thus not measure tube diameter.
As a BAR protein, we used the commonly used model of Amphiphysin4,20,21,22,23, an N-BAR protein binding lipid bilayers of positive curvature (invaginations). We assessed protein activity by measuring the diameter of Amphiphysin-reshaped tubes via transmission electron microscopy in the sucrose loading vesicle assay24. Consistent with the literature, we found diameters of ~25 nm (Supplementary Fig. 2b). We used a pSLB composed of negatively charged lipids necessary for Amphiphysin binding25, including 1,2-dioleoyl-sn-glycero-3-phosphate (DOPA) for which Amphiphysin has a specific affinity26. In experiments, we injected fluorescently labeled Amphiphysin in the bulk solution on top of the previously described compressed pSLB (Fig. 1b), and monitored the fluorescence signal from both the pSLB and Amphiphysin (in different channels). Once injected, the protein clearly bound the curved lipid buds and tubes (Fig. 1c, left) and, after further adsorption from the bulk, it started to reshape them into geometrically heterogeneous structures with coexistence of small spherical and tubular features (tube-sphere complexes, Fig. 1c, right). We then carried out several controls. First, we monitored the tubes in absence of protein injection. In this case, a fraction of tubes spontaneously detached, and non-detached tubes did not undergo a progressive elongation as observed in presence of Amphiphysin. Instead, tubes were stable for some minutes and then progressively shortened, widened, and transformed into a structure of spherical shape, presumably due to enclosed fluid and/or excess membrane reorganization within the system (Fig. 1d and Supplementary Movie 3). Eventually, these spherical structures became immobile on top of the pSLB after 10–20 min. Second, we performed the same experiment by injecting fluorescent Neutravidin instead of Amphiphysin. 
Neutravidin did not specifically bind to the tubes, and the same tube-to-bud relaxation was observed as in the absence of injection (Supplementary Fig. 2c and Supplementary Movie 4). Third, we injected Neutravidin and monitored its binding to a biotinylated pSLB. We observed tube-to-bud relaxation as in the control, but some longer-lived tubes immobilized in the plane of the bilayer, providing a clear visualization of their cylindrical shape (Supplementary Fig. 2d and Supplementary Movie 5). Tube-to-bilayer attachment is likely due to the tetrameric character of Neutravidin, which can therefore crosslink tubes to the surrounding flat bilayer. Besides this effect, we observed a similar process of relaxation from tubes to buds, clearly distinct from the tube-sphere complexes occurring in the presence of Amphiphysin. Finally, we monitored the effect of Amphiphysin on non-stretched membranes, which were therefore devoid of pre-existing membrane structures. In this case, membrane reshaping only occurred if Amphiphysin concentration was increased above 5 μM in the bulk, and merely consisted in the formation of bright/dark spots in the membrane, likely reflecting membrane tearing (Supplementary Fig. 2e, f and Supplementary Movies 6 and 7).
To understand the physical mechanisms underlying our observations, we developed a theoretical framework describing the dynamics of lipid tubes and buds with low coverage (since protein is injected once the structures are formed) and low curvature (since Amphiphysin reshapes them into markedly thinner structures) upon exposure to BAR proteins. Theoretically, various computational studies using coarse-grained simulations of elongated and curved objects moving on a deformable membrane have suggested the self-organization of regions with high anisotropic (cylindrical) curvature, high protein coverage, and strong nematic order27,28,29. None of these works, however, predicted or observed the tube-sphere complexes that appear in our experiments (Fig. 1c). To address this, we first focused on our recently developed mean-field density functional theory30 for the free energy $$F_{\mathrm{prot}}$$ of an ensemble of curved proteins on a membrane as a function of protein area coverage $$\phi$$, orientational order as given by a nematic order parameter S, and membrane curvature. This theory accounts for the entropic and steric interactions between proteins and for their bending elasticity, focusing on the scaffolding effect. We discuss the mapping between the different mechanisms coupling protein coverage to membrane curvature for Amphiphysin (scaffolding, effect of insertions, bulky disordered domains) and detail our model in Supplementary Note 1, where we specifically quantify the role of crowding of disordered domains by adapting a model for the coupling between such domains and curvature31 to the present context, finding that the effect is small.
### Reshaping occurs through an isotropic-to-nematic transition
On flat membranes and for elliptical particles of the size and aspect ratio of Amphiphysin, the theory recovers a classical entropically controlled discontinuous isotropic-to-nematic transition, during which the system abruptly changes from low to high order as protein coverage increases above $$\phi \approx 0.5$$, in agreement with previous results in 3D32 (Supplementary Fig. 3a1). We then examined the protein free-energy landscape on curved surfaces, where the elastic curvature energy of proteins depends on their intrinsic curvature and on the curvature of the surface along the protein long direction (Fig. 2a). On spherical surfaces and according to the theory, this energy landscape coincides with that of the flat membrane with a bias proportional to $$\phi$$ times the bending energy of proteins on the curved surface. Thus, the minimum-energy path as density increases (red dots in Fig. 2b–e), and hence the abrupt isotropic-to-nematic transition, persists regardless of sphere radius (Fig. 2b, c); note that on a complete sphere the nematic phase necessarily involves defects27. On cylindrical surfaces, however, curvature is anisotropic and the energy landscape is fundamentally modified according to our theory, as proteins can lower their free energy by orienting along a direction of favorable curvature. The competition between protein bending and entropy results in a continuous isotropic-to-nematic transition (Fig. 2d) and a significant degree of orientational order even at low coverage when the tube curvature is comparable to that of the protein (Fig. 2e). The model thus predicts how the nematic ordering of curved and elongated proteins on the membrane depends on coverage, curvature, and curvature anisotropy.
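The discontinuous nature of this transition can be illustrated with a minimal Landau-type toy model (not the density functional of ref. 30; the free-energy coefficients below are illustrative assumptions chosen so that the jump falls near $$\phi \approx 0.5$$):

```python
# Toy Landau free energy f(S) = A(phi) S^2 - B S^3 + C S^4 for the nematic
# order parameter S. The cubic term makes the isotropic-to-nematic transition
# discontinuous; A(phi) = 0.75 - phi is an illustrative choice that places
# the transition at phi = 0.5, where A = B^2 / (4C).
B, C = 1.0, 1.0

def order_parameter(phi, n=2001):
    """Globally minimizing S on a grid over [0, 1]."""
    A = 0.75 - phi
    best_s, best_f = 0.0, 0.0
    for i in range(n):
        s = i / (n - 1)
        f = A * s**2 - B * s**3 + C * s**4
        if f < best_f:
            best_s, best_f = s, f
    return best_s

for phi in (0.40, 0.49, 0.51, 0.60):
    print(phi, round(order_parameter(phi), 3))
```

Below the transition the global minimum sits at S = 0 (isotropic); just above it, S jumps discontinuously to a finite value, mirroring the abrupt ordering described in the text.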
We then studied whether the model predicted the experimentally observed coexistence of thin tubes (which according to the theory should have higher coverage and order) and larger spheres (which should have lower coverage and isotropic organization). We examined the energy landscape along the minimizing paths (red dots) for spheres and tubes of varying radius (Fig. 2f). Since the slope of these curves is the chemical potential of proteins on the membrane, which tends to equilibrate with the fixed chemical potential of dissolved proteins in the medium, points of chemical coexistence are characterized by a common slope (red circles). This figure shows the many, largely non-unique combinations of geometry and membrane coverage compatible with coexistence in chemical equilibrium between higher-coverage nematic phases on cylinders and lower-coverage isotropic phases on spheres, supporting the plausibility of such coexistence in the dynamical structures.
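The common-slope construction described above can be sketched numerically with toy free-energy curves; the quadratic forms and their parameters are illustrative assumptions, not the functional used in the paper:

```python
# Toy common-tangent construction for protein coexistence between a
# low-coverage isotropic phase on spheres and a high-coverage nematic
# phase on tubes. At coexistence both phases share the slope mu (the
# chemical potential) and the same tangent intercept f - mu * phi.
def f_sphere(p):  # toy free energy vs coverage on a sphere
    return (p - 0.2) ** 2

def f_tube(p):    # toy free energy vs coverage on a tube
    return (p - 0.7) ** 2 + 0.05

def coexistence():
    best = None
    for i in range(20001):
        mu = -1.0 + 2.0 * i / 20000   # candidate common slope
        p1 = 0.2 + mu / 2             # coverage where f_sphere' = mu
        p2 = 0.7 + mu / 2             # coverage where f_tube' = mu
        gap = abs((f_sphere(p1) - mu * p1) - (f_tube(p2) - mu * p2))
        if best is None or gap < best[0]:
            best = (gap, mu, p1, p2)
    return best[1:]

mu, phi_sphere, phi_tube = coexistence()
print(round(mu, 3), round(phi_sphere, 3), round(phi_tube, 3))
```

For these toy curves the construction yields a lower coverage on the sphere branch than on the tube branch, in the spirit of the red circles in Fig. 2f.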
Shape, however, is also a dynamical variable and the selection of protein organization and shape requires the two-way interplay between the chemical-free energy $${F}_{{{{{{\rm{prot}}}}}}}$$ and the elastic free energy of the membrane $${F}_{{{{{{\rm{mem}}}}}}}$$. To account for this and for the out-of-equilibrium nature of our experiments, we self-consistently coupled a parametrization of the mean-field energy density functional theory used above with a continuum model for lipid membrane reshaping and hydrodynamics33,34,35. The combined model accounts for both energies, $${F}_{{{{{{\rm{prot}}}}}}}+{F}_{{{{{{\rm{mem}}}}}}}$$, for the dynamics of protein adsorption from a bulk reservoir, for the diffusion of proteins on the surface, and for the membrane dissipation associated with shape changes31 (see Supplementary Note 1, for a discussion of the model, its implementation and its parameters). Focusing on a single membrane protrusion in mechanical equilibrium (tubular or spherical, Fig. 2g) off a supported bilayer circular patch in the absence of proteins, this model predicts the dynamics of membrane shape, $$\phi$$, and S following a sudden increase of dissolved protein concentration in the medium (Fig. 2h). In all simulations, we fixed the chemical potential of proteins in the medium to account for the dissolved protein reservoir. Once the system is driven out-of-equilibrium, we must choose a mechanical ensemble controlling the ability of the simulated system to exchange membrane area and enclosed volume with its surroundings during the dynamics. To cover the experimental conditions, we considered a reference mechanical ensemble, used everywhere unless explicitly stated, and tested the robustness of our results by further considering a broad range of ensembles with varying ease of membrane and volume exchange. For membrane exchange, we interpolated between fixed tension (allowing for membrane exchange) and fixed projected membrane area (no membrane exchange). 
For water exchange, we considered adhesion potentials with different strengths and a fixed pressure difference ensemble. Soft potentials allow for changes in the distance between the adhered part of the membrane and the substrate, and hence for enclosed volume exchange, as does the fixed pressure ensemble, whereas protrusion volume was nearly fixed by considering very stiff adhesion potentials (see Supplementary Note 1).
### Dynamic reshaping of buds and tubes
We then compared model predictions with the experimental setup, by monitoring the reshaping of the buds or tubes formed after pSLB compression and upon subsequent injection of Amphiphysin at various nominal bulk concentrations. First, we examined how the mechanically formed buds were reshaped in time as Amphiphysin binding occurred. In both our experiments and simulations, we systematically observed the growth of a thin tube emerging from the base of the bud, connecting the bud to the supported bilayer (Fig. 3a, and Supplementary Movies 8, 9, and 10). Such bud elongation from its neck also occurred upon exposure to 0.3 μM of non-fluorescently labeled Amphiphysin (Supplementary Fig. 4a and Supplementary Movie 11). According to our model, this elongation is due to a dynamic and progressive transition between two coexisting states of the membrane-protein system, one with low protein coverage, isotropic organization, and low curvature (spherical and flat parts) and another with high coverage, high nematic order, and high anisotropic curvature (thin elongating necks), Figs. 2h, 3a, and Supplementary Movie 8. The model predicts that the curvature of thin tubes is comparable to the intrinsic curvature of proteins, on the order of (15 nm)−1 (Supplementary Fig. 2b). This transition is driven by the lower bending energy of proteins at the thin neck, which outweighs both the entropic penalty of local protein enrichment and nematic organization (Fig. 2f) and the higher membrane curvature energy of a tube relative to a larger vesicle. Our simulations also showed that protein delivery to the enriched region overwhelmingly occurs by adsorption from the bulk rather than by diffusion from neighboring membrane regions (Supplementary Note 1). Our model does not account for thermal fluctuations of shape, which have been shown to play an important role in the reshaping by BAR proteins of initially planar membranes36. Here, reshaping is instead directed by membrane templates, particularly membrane necks.
In this case, estimated protein coverage fluctuations resulting from thermal fluctuations of tubes were small (Supplementary Movie 9, Supplementary Note 1) and could be neglected.
Shape transformations in vesicles are strongly influenced by the relation between area and enclosed volume, quantified by the non-dimensional reduced volume37. We thus wondered about changes in membrane area and enclosed volume in our evolving protrusions. The membrane used for tube elongation may come from the vesicle or the surrounding membrane; in experiments, we clearly observed tube elongation at the expense of vesicle area, indicated by a decrease of the vesicle diameter as the thin tube grows (as quantified experimentally in Supplementary Fig. 4b, left), until the diffraction limit precluded reliable measurements. This suggests that membrane area is transferred from the vesicle to the elongating tube. Compared with a bud, an elongated tube with the same area has a much lower enclosed volume, so this transformation requires significant enclosed volume exchange. This explains why simulations with our reference ensemble did not lead to significant vesicle shrinking during tube elongation (Fig. 3a and Supplementary Movie 8), whereas simulations with an ensemble fixing the membrane area of the protrusion and enabling easy water exchange replicated this aspect of the membrane reshaping (Supplementary Fig. 4b, right, and Supplementary Movie 12).
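The area-volume argument can be made concrete with the reduced volume, i.e., the enclosed volume divided by the volume of the sphere of equal area (equal to 1 for a sphere). The sizes below are illustrative choices of the same order as the experimental structures, not measured values:

```python
import math

def reduced_volume(area, volume):
    """v = volume / volume of the sphere with the same area (v = 1 for a sphere)."""
    r_a = math.sqrt(area / (4 * math.pi))
    return volume / ((4 / 3) * math.pi * r_a ** 3)

# Bud: sphere of radius 500 nm (a typical bud size in the experiments).
R = 500.0
area = 4 * math.pi * R ** 2
v_bud = reduced_volume(area, (4 / 3) * math.pi * R ** 3)  # = 1.0

# Tube with the same membrane area but radius 15 nm (end caps neglected):
r = 15.0
L = area / (2 * math.pi * r)                  # tube length at equal area
v_tube = reduced_volume(area, math.pi * r ** 2 * L)
print(round(v_bud, 3), round(v_tube, 3))
```

At equal membrane area the thin tube encloses only a few percent of the sphere's volume (for a cylinder, v = 3r/2R), which is why bud-to-tube reshaping requires substantial water exchange.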
Then, we considered the reshaping dynamics of tubes, which were frequently formed upon compression. In this case, we systematically found, both in experiments and simulations, that tube reshaping was initiated by the formation of a sequence of pearls (Fig. 3b and Supplementary Movies 13–15). Tube pearling also occurred upon exposure to 0.3 μM of non-fluorescently labeled Amphiphysin (Supplementary Fig. 4c and Supplementary Movie 16). Subsequently, the pearled tubes transformed into pearls connected by thin tubes. This configuration was stable for a long time (though such reshaped tubes collapsed on themselves, likely due to the related loss of tension3,38; this phase is best observed in Supplementary Movie 14). Necks progressively elongated, eventually making the pearls disappear and transforming the structure into a long thin tube.
To understand this process, we noticed that in our simulations pearling occurred at low and nearly uniform protein coverage with nearly isotropic organization (Figs. 2d, 3b), and hence this transformation can be ascribed to a previously described pearling instability in the presence of sufficiently large isotropic and uniform spontaneous curvature31,39,40. Indeed, pearling in simulations occurred even without nematic order (Supplementary Fig. 4d). Although this first step is independent of molecular alignment, it triggers nematic order, as pearling generates several thin necks along the tube. These necks nucleate regions of high anisotropic curvature, high coverage, and nematic order, which coexist with low-curvature, low-coverage spheres with isotropic molecular arrangement, and drive subsequent tube elongation between the pearls. If the protein chemical potential is high enough, the necks progressively elongate into tubes while the spheres progressively disappear. We witness here a striking process in which a wide tube (relative to the N-BAR dimer intrinsic curvature) is reshaped by Amphiphysin at low coverage through an isotropic rearrangement of the proteins, giving rise to nucleation points of thin tubular necks that promote further protein enrichment and nematic ordering.
### Physical parameters governing membrane reshaping
We then addressed the time-scale of the reshaping process. For vesicles exposed to Amphiphysin, we measured tube elongation rates between 20 and 75 nm/s at 0.25 and 0.35 μM nominal concentrations, and from 365 to 550 nm/s at 0.5 μM. These rates were about two orders of magnitude smaller than those predicted by our simulations (Supplementary Fig. 4b). To address these large differences in time-scale between simulations and experiments, we turned to the dynamics of the injection process, not accounted for in our model. Since our protein injection procedure was systematic (Supplementary Fig. 5a), we modeled protein delivery through bulk diffusion in the medium from the injection point to the close vicinity of the SLB. We estimated the time evolution of the protein concentration to which the SLB is effectively exposed by solving a diffusion equation, readily providing a mapping between time and concentration at the SLB depending on the bulk nominal concentration (Supplementary Figs. 5a and 6a). According to this analysis, diffusion in the bulk is the slowest process as compared to adsorption, membrane diffusion, and membrane mechanical reshaping (see Supplementary Note 1), and thus it controls reshaping and quantitatively explains the very different times at which a given state is observed for different nominal concentrations (Supplementary Figs. 5c and 6c). It also explains why, in our experiments, reshaping occurred faster at higher bulk nominal concentrations (for buds, Supplementary Fig. 5b, c and Supplementary Movies 9 and 10; for tubes, Supplementary Fig. 6b, c and Supplementary Movies 13 and 14). When we performed simulations as a succession of quasi-equilibrium states at increasing concentrations up to the nominal concentration (Supplementary Figs. 5d and 6d), we found the same reshaping mechanisms as in our dynamical simulations of buds and tubes where the nominal concentration was applied instantaneously (Fig. 3a, b).
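A minimal sketch of such a time-concentration mapping, assuming an instantaneous point release of protein and free 3D diffusion (the actual geometry and boundary conditions are treated in Supplementary Note 1; the diffusivity, distance, and dose used here are order-of-magnitude assumptions, not the experimental values):

```python
import math

def concentration(t, distance, dose, D):
    """3D point-source solution of the diffusion equation:
    c(r, t) = dose / (4*pi*D*t)^(3/2) * exp(-r^2 / (4*D*t))."""
    if t <= 0:
        return 0.0
    return dose / (4 * math.pi * D * t) ** 1.5 * math.exp(
        -distance ** 2 / (4 * D * t))

# Assumed numbers: D = 50 um^2/s for a small protein in water, and an
# injection point 500 um away from the bilayer.
D, h = 50.0, 500.0
for t in (10, 100, 1000, 10000):
    print(t, concentration(t, h, dose=1.0, D=D))
```

The concentration at the bilayer rises over a time set by the diffusive scale (peaking at t = h²/6D for this solution) and then decays, so the time at which a given effective concentration is reached depends strongly on the injected dose, consistent with bulk diffusion acting as the rate-limiting step.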
More specifically, for buds, both approaches exhibited the nucleation and elongation of a tubular nematic and protein-rich phase at the bud neck, and for thick tubes they both proceeded by a pearling instability at low-coverage low-order protein states followed by nucleation and growth of tubular nematic and protein-rich phases at the necks between the pearls. Hence, we can interpret our experimental observations as quasi-equilibrium states at a given dissolved protein chemical potential in the close vicinity of the SLB.
We further tested the robustness of the reshaping mechanisms identified above by varying the mechanical ensemble governing exchange of membrane area and enclosed fluid volume in protrusions as they deform (see Supplementary Note 1). Supplementary Fig. 7 displays snapshots of the shape of the buds or tubes at the onset of reshaping or after further elongation, at similar bulk concentrations to allow for comparison, for a selection of the tested ensembles. We found that, although the thresholds for reshaping and the reshaping progression at a given bulk concentration slightly depended on the mechanical constraints, the fundamental reshaping mechanisms were consistent irrespective of the mechanical ensemble. These consisted of nucleation (either at pre-existing necks for buds, or at necks generated by the pearling transition in tubes) and elongation of highly curved tubes with high coverage and nematic order, coexisting with low-curvature, low-coverage, and isotropically organized regions. The only exception, not observed in our experiments, was the reshaping of tubes in the specific condition of no membrane exchange and very easy volume exchange at fixed pressure across the membrane, in which case the high-curvature, high-coverage nematic tubular state was reached by progressive elongation and thinning (Supplementary Fig. 8).
We also varied the size of the protrusions to capture the heterogeneity in sizes and shapes obtained experimentally, with bud diameters ranging from 0.5 to 1.5 µm and tube lengths from 2 to 5 µm at a diameter of 600 nm, as discussed in Supplementary Note 1, section 5. We consistently found that the fundamental features of the previously described reshaping process were independent of protrusion dimensions (Supplementary Fig. 7, Supplementary Movies 17 and 18).
Finally, as the model predicts coverage to be the key parameter controlling reshaping, we explored the coverage at which the onset of tubulation for buds, or pearling for tubes, occurred. Experimentally, we plotted the protein binding curves on the buds or on the tubes obtained at different bulk nominal concentrations, by displaying the mean intensity of the protein fluorescence on buds over time (Fig. 3c, d). The onset of reshaping occurred faster at higher concentrations; however, it started at a comparable intensity level regardless of the nominal bulk concentration. Then, we developed a protocol to estimate protein coverage from fluorescence levels. We performed a classical calibration of protein fluorescence versus coverage41, and included a geometrical correction taking as a reference the fluorescence from the lipid channel (see Supplementary Fig. 9a). This enabled us to correct for the increase of fluorescence due to integrating fluorescence from a 3D structure when taking 2D images. It also corrects for the loss of signal arising because we focus not exactly on the bilayer plane but slightly above it, to better resolve the 3D geometry of the moving templates. We fitted the data with an exponential curve, and averaged the coverage values obtained at bud elongation and tube pearling. As a result, initiation of bud elongation or tube pearling occurred at coverages of 0.44 ± 0.097 and 0.34 ± 0.08, respectively (Supplementary Fig. 9b). Then, we analyzed the corresponding theoretical predictions for the onset of reshaping as a function of the mechanical ensemble and size (Supplementary Fig. 9c). Theoretical predictions mildly depended on the mechanical ensemble and size, but generally matched experimental results well for bud elongation. In the case of tube pearling they were slightly lower, likely due to the experimental difficulty of precisely capturing the onset of pearling (in contrast to bud elongation, which is a much more obvious event).
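The fitting and calibration arithmetic can be sketched as follows, assuming a single-exponential binding curve and a linear intensity-to-coverage calibration factor; both the synthetic data and the calibration constant below are illustrative assumptions, not the measured values of Supplementary Fig. 9:

```python
import math

# Assumed functional form: I(t) = I_inf * (1 - exp(-t / tau)), fitted by
# linearizing ln(1 - I/I_inf) = -t/tau, with I_inf taken as the plateau.
def fit_tau(times, intensities, i_inf):
    xs = list(times)
    ys = [math.log(1 - i / i_inf) for i in intensities]
    # least-squares slope of a line through the origin
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return -1.0 / slope

CAL = 0.5 / 1000.0   # assumed coverage per intensity unit (placeholder)

tau_true, i_inf = 120.0, 1000.0
times = [10, 30, 60, 90, 150, 240]
data = [i_inf * (1 - math.exp(-t / tau_true)) for t in times]
tau = fit_tau(times, data, i_inf)
onset_intensity = 700.0          # e.g. mean intensity when tubulation starts
print(round(tau, 1), CAL * onset_intensity)
```

On noiseless synthetic data the linearized fit recovers the time constant exactly; the calibration factor then converts the onset intensity into an estimated coverage, mirroring how the 0.44 and 0.34 onset coverages were obtained.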
Besides coverage, another hallmark of this isotropic-nematic coexistence is protein enrichment on the tube relative to the vesicle. We estimated this experimentally (Supplementary Fig. 10a) and measured an approximately two-fold higher protein concentration in tubular versus bud regions (Fig. 3e). For a wide range of bud diameters and protein concentrations, our simulations predicted a comparable enrichment (Fig. 3e and Supplementary Fig. 10b). Our finding is in good agreement with a related study4 in which enrichments ranging from 1.8 to 5 were reported on tubes pulled from a giant vesicle.
Taken together, our results show the complete path through which low-curvature spherical or tubular templates exposed to BAR proteins evolve toward uniformly thin and protein-rich nematic tubes. This process involves a non-homogeneous, mechanochemical transition between a low-curvature, low-coverage spherical state with isotropic molecular organization (phase I) and a high-curvature, high-coverage nematic tubular state (phase II). Indeed, at low to moderate protein coverage, heterogeneous intermediates form, exhibiting mixtures of phases I and II. In fact, phase II nucleates in a phase I matrix, and the phase boundaries propagate until a homogeneous phase II is reached at full protein coverage. This occurs in a dynamic process in which curvature sensing and generation are integrated within the same framework.
### Reshaping is hindered by high tension
As an additional case, we explored the behavior of the system when shape changes are not allowed. To this end, we generated shallow spherical cap protrusions, which develop when hypo-osmotic shocks are applied both in vitro and in cells6,7. In this case, the cap templates adopt a spherical shape and are pressurized, unlike the buds previously obtained by bilayer compression. The membrane therefore needs to accommodate a significant excess volume of liquid with little excess membrane area, leading to a structure under significant tension6 in which shape changes are very difficult. Accordingly, shallow spherical caps formed by a hypo-osmotic shock in our experimental system were not visibly reshaped by Amphiphysin even at significant concentrations (Fig. 3f and Supplementary Movie 19). Moreover, the caps tore and collapsed upon exposure to higher Amphiphysin concentrations (Fig. 3f and Supplementary Movie 20). Our model predicts that upon exposure of such shallow caps to BAR proteins, shape changes are negligible. The model does not explicitly describe tearing but predicts membrane tension. As protein concentration increases, tension in the membrane sharply increases, potentially leading to membrane tearing21,42 (Fig. 3f).
### Cell compression triggers membrane tubulation by Amphiphysin
Beyond the specifics of the reshaping process, an important conclusion from this study is that the mechanical generation of membrane structures acts as a catalyst of membrane reshaping by BAR-domain proteins. Indeed, compressed membranes exhibited a wide range of reshaping behaviors (Fig. 3), whereas non-mechanically stimulated membranes exposed to the same Amphiphysin concentration did not reshape in any clear way (Supplementary Fig. 2e and Supplementary Movie 6). This suggests the interesting possibility that cells could harness the mechanically induced formation of membrane invaginations7 to trigger BAR-mediated responses, thereby enabling mechanosensing mechanisms. To explore this possibility, we cultured dermal fibroblasts (DF) and overexpressed GFP-Amphiphysin, which is well known to trigger spontaneous membrane tubulation20. Then, we stretched and subsequently compressed the cells using a previously described protocol7. Upon compression, cells formed dot-like membrane folds termed “reservoirs” (Fig. 4a), analogous to the membrane structures observed in vitro in Figs. 1 and 3. Amphiphysin-containing membrane tubes formed before, during, and after stretch. However, their number decreased during the stretch phase, likely due to increased membrane tension (Fig. 4c). Upon release of the stretch, tube formation strongly increased, reaching values well above the initial non-stretched condition (Fig. 4c and Supplementary Movie 21). Further, tubes formed upon de-stretch nucleated close to reservoir locations (Fig. 4b). We measured the elongation rates of these tubes, which ranged from 200 to 350 nm/s, comparable with the elongation rates found in the pSLB experiments for bud elongation.
Though Amphiphysin overexpression presumably leads to concentrations above physiological levels, these results clearly show that mechanical compression of cells can stimulate not only BAR-protein recruitment (and possible ensuing signaling cascades) but also BAR-mediated membrane tubulation, abruptly affecting plasma membrane shape.
## Discussion
Curvature sensing and membrane reshaping properties of BAR proteins have been extensively studied on highly curved tubes (up to 100 nm in diameter), mostly in equilibrium4,43,44,45, but the dynamics of the process and its dependence on the initial template were unexplored. In this work, we present a charged synthetic lipid bilayer system that stores sufficient bilayer area and allows for significant bilayer stretch. Upon stretch release, lipids accommodated in curved structures generate templates for bilayer reshaping upon Amphiphysin binding. Our system is complementary to other in vitro systems, which typically consider either tubes pulled under tension3,4 or free-standing liposomes or tubes uncoupled from any external lipid reservoir46,47. Instead, our system generates a heterogeneous population of shapes at low tension, curvature, and concentration, a highly relevant scenario in cells, and evaluates BAR protein reshaping dynamically. Our accompanying model provides a mechanistic explanation of this dynamic reshaping, which constitutes a very rich process with many intermediate steps, including non-homogeneous phase separation between isotropic and nematic phases, and with major reshaping processes occurring at low coverage and curvature. This behavior emerges naturally from the fundamental physics of membrane mechanics and its mechanochemical interactions with curved proteins, generating a non-trivial feedback between membrane mechanical stimulation and subsequent response. Beyond the physics of the process, such feedback could potentially be used in the many cellular processes involving membrane reshaping. Indeed, the physiological role of BAR proteins at low concentration is mostly studied in the context of BAR protein sensing of highly curved structures, but many cell studies have shown BAR proteins acting on lower-curvature structures48, where reshaping is expected to occur out of equilibrium.
This study provides a mechanistic framework to understand how BAR protein remodeling in such a context may occur. This may be relevant in well-studied processes such as endocytosis, but also in emerging roles of BAR proteins in maintenance of cell polarity14, response to osmotic changes49, or build-up of caveolar structures12,13. Our results also open the door to unexplored scenarios involving reshaping of low tension, low-curvature membranes obtained under mechanical constraints.
## Methods
### Protein expression and purification
The plasmid containing full-length human Amphiphysin 1 (FL-hAMPH), pGEX-Amphiphysin1, was a kind gift from Prof. De Camilli, Yale University. The plasmid codes for FL-hAMPH preceded by a Glutathione S-Transferase tag (GST-Tag) and a cleavage site recognized by PreScission protease. The plasmid was transformed into Escherichia coli RosettaTM (DE3) pLysS cells (Novagen). Selected colonies were grown in Luria broth supplemented with 25 μg/ml chloramphenicol and 25 μg/ml kanamycin at 37 °C until an OD between 0.6 and 0.8 was reached. Protein expression was induced with 1 mM isopropyl-β-D-thiogalactopyranoside (IPTG) overnight at 25 °C. Cells were pelleted for 30 min at 3,315 × g, and the pellet was resuspended in lysis buffer (10 mM phosphate buffered saline pH 7.3, supplemented with cOmplete protease inhibitor, EDTA free (Roche) and 1 mM phenylmethylsulfonyl fluoride (Sigma)). Cells were lysed (5 pulses of 30 s sonication with 30 s rest), incubated for 20 min on ice with 5 μg/ml DNase, and centrifuged at 75,000 × g for 45 min. The supernatant was collected and incubated with 2 column volumes (for 20 mL supernatant) of Glutathione Sepharose 4B (GE Healthcare) for 1 h 30 min on a rotating wheel. The beads were subsequently washed with phosphate buffered saline, pH 7.3, before exchanging to the cleavage buffer (50 mM Tris-base, 150 mM NaCl, 1 mM EDTA, 1 mM DTT, pH 7.0). Sixty units of PreScission protease (BioRad Laboratories) were added to the beads, and cleavage of the GST-Tag was allowed to proceed for 1 h at room temperature followed by an overnight incubation at 4 °C on a rotating wheel. The flow-through, containing cleaved Amphiphysin, was recovered and further purified by size exclusion chromatography on a Superdex 75 26/60 in 10 mM PBS, pH 7.5, 1 mM DTT. Two fractions were obtained, both containing Amphiphysin according to the SDS-PAGE gel; the second fraction, of smaller size, was taken and concentrated for further use.
The purity and identity of the product were established by HPLC and mass spectrometry (BioSuite pPhenyl 1000RPC 2.0 × 75 mm coupled to a LCT-Premier Waters from GE Healthcare). Neutravidin was from Thermofisher. Proteins (Amphiphysin and Neutravidin) were coupled to an Alexa Fluor® 488 TFP ester according to the manufacturer's protocol, and the resulting protein-Alexa 488 was concentrated again. Absorbance was measured in a Nanodrop at 280 nm to obtain protein concentration and at 488 nm to obtain fluorophore concentration. This gave an average of 3 fluorophores per Amphiphysin dimer and 1 per Neutravidin protein. Amphiphysin was frozen and kept at −80 °C, and experiments were performed with freshly thawed samples. Protein integrity was verified by SDS-PAGE of the thawed samples.
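The degree-of-labeling arithmetic behind the two-wavelength Nanodrop reading can be sketched as follows; the dye constants are the commonly quoted Alexa Fluor 488 values, while the protein extinction coefficient and the absorbance readings are placeholder assumptions, not the measured values for Amphiphysin:

```python
# Degree of labeling (fluorophores per protein) from A280 and A488.
# eps_dye ~ 73,000 M^-1 cm^-1 and cf280 ~ 0.11 are the usual Alexa Fluor 488
# constants; eps_protein here is an arbitrary placeholder.
def degree_of_labeling(a280, a488, eps_protein,
                       eps_dye=73000.0, cf280=0.11):
    # subtract the dye's contribution at 280 nm before computing protein conc.
    protein_conc = (a280 - cf280 * a488) / eps_protein  # M, 1 cm path
    dye_conc = a488 / eps_dye                            # M
    return dye_conc / protein_conc

print(round(degree_of_labeling(0.5, 0.8, eps_protein=40000.0), 2))
```

The same two readings therefore yield both the protein concentration and the average number of fluorophores per protein reported in the text.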
### Preparation of stretchable membranes
Stretchable polydimethylsiloxane (Sylgard Silicone Elastomer Kit, Dow Corning) membranes were prepared as previously described7. Briefly, a mix of 10:1 base to crosslinker ratio was spun for 1 min at 500 rpm and cured at 65 °C overnight on plastic supports. Once polymerized, membranes were peeled off and assembled onto a metal ring that can subsequently be assembled in the stretch device.
### Patterned supported lipid bilayer (pSLB) formation on PDMS membrane
pSLBs were prepared by combining 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC), 1,2-dioleoyl-sn-glycero-3-phospho(1′-rac-glycerol) (sodium salt) (DOPS), 1,2-dioleoyl-sn-glycero-3-phosphate (sodium salt) (DOPA), and 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-(lissamine rhodamine B sulfonyl) (ammonium salt) (LissRhod-DPPE). 1.25 mg of total lipids in a DOPC:DOPS:DOPA 3:2:1 proportion, with 0.5% mol LissRhod-DPPE, were dissolved in chloroform. Addition of 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE) as an alternative to DOPA in the pSLB did not allow fluid bilayer formation. For control experiments, biotinylated pSLBs were prepared with 1.25 mg DOPC with 0.5% mol LissRhod-DPPE and 5% mol 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-(cap biotinyl) (sodium salt) (16:0 Biotinyl Cap PE). We consistently used the same acyl chains in the lipid composition in order to minimize asymmetry between the two leaflets. The solvent was evaporated for a minimum of 4 h. The lipid film was immediately hydrated with 750 μL of PBS, pH 7.5 (final concentration of 1.6 mg/ml) at room temperature. After gentle vortexing, a solution of giant multilamellar vesicles was obtained. Large unilamellar vesicles (LUVs) were prepared by mechanical extrusion using the Avanti extruder set. The lipid suspension was extruded repeatedly (15 times) through a polycarbonate membrane (Whatman® Nuclepore™ Track-Etched Membranes, diam. 19 mm, pore size 0.05 μm). The mean diameter of the LUVs was verified by Dynamic Light Scattering (Zetasizer Nanoseries S, Malvern Instruments). LUVs were always prepared fresh the day before the experiment.
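Converting the 3:2:1 molar ratio with 0.5 mol% dye into per-lipid masses for a 1.25 mg batch follows from simple mole-fraction arithmetic; the molecular weights below are approximate catalog values (assumptions to be checked against the supplier's datasheets):

```python
# Per-lipid masses for a 1.25 mg DOPC:DOPS:DOPA 3:2:1 mix with 0.5 mol%
# LissRhod-DPPE. MW values (g/mol) are approximate catalog figures.
MW = {"DOPC": 786.1, "DOPS": 810.0, "DOPA": 723.0, "LissRhod-DPPE": 1301.7}
TOTAL_MG = 1.25

mole_frac = {"LissRhod-DPPE": 0.005}      # 0.5 mol% dye
for name, parts in (("DOPC", 3), ("DOPS", 2), ("DOPA", 1)):
    mole_frac[name] = 0.995 * parts / 6   # remaining 99.5 mol%, split 3:2:1

# mass fraction of each lipid = (mole fraction * MW) / mean MW
mean_mw = sum(mole_frac[k] * MW[k] for k in MW)
masses = {k: TOTAL_MG * mole_frac[k] * MW[k] / mean_mw for k in MW}
for k, m in masses.items():
    print(f"{k}: {m:.3f} mg")
```

The per-lipid masses sum back to the 1.25 mg total, with DOPC dominating as expected from the 3:2:1 ratio.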
To prepare the pSLB, a TEM grid (G200H-Cu, Aname) was placed in the middle of the PDMS membrane ring. The membrane was subsequently plasma cleaned in a Harrick oxygen plasma cleaner using the following parameters: constant flow of oxygen between 0.4 and 0.6 mbar, high power, and exposure time between 15 and 60 s. A small ring of 6 mm inner diameter was simultaneously plasma cleaned and bonded around the TEM grid. Then, the TEM grid was removed and the liposome solution was deposited and confined inside the thin bonded ring, with subsequent incubation for 1 h at room temperature. Excess LUVs were then extensively washed away with PBS buffer, pH 7.5. The membrane was then mounted in the stretching device and placed in the microscope.
### FRAP of the pSLB
Patterned supported lipid bilayers (pSLBs) were obtained as described above on PDMS membranes, and the ring-containing membranes were mounted under an upright epifluorescence microscope (Nikon Ni, with Hamamatsu Orca Flash 4.0, v2). Images of pSLBs, obtained with either 15 or 30 s plasma cleaning, were acquired with a 60x water dipping objective (NIR Apo 60X/WD 2.8, Nikon) and an Orca R2 camera. A small linear region of the pSLB was photobleached by repeatedly scanning and focusing 180 fs pulses generated by a fiber laser (FemtoPower, Fianium) with central wavelength at 1064 nm at 20 MHz. A set of galvo mirrors (Thorlabs) and a telescope before the port of the microscope allowed positioning and moving (oscillations at 400 Hz) the diffraction-limited spot at the desired place on the bilayer. Once bleached, fluorescence recovery was monitored for 5 min. Time-lapse imaging during pSLB photobleaching and recovery was done with home-made software (LabVIEW 2011). Recovery of the intensity of the bleached lines was plotted either for the full line, or by separating the line into a left and a right area to assess whether the recovery was symmetric (Supplementary Fig. 1).
### Sucrose-loaded assay and negative-stain transmission electron microscopy
Sucrose-loaded vesicles were prepared as previously described in the literature24, using a mixture of DOPC, DOPS, and DOPE lipids in a 1:2:1 ratio. Lipids were evaporated and subsequently rehydrated with PBS buffer pH 7.5, containing 0.3 M sucrose. A solution of 0.6 mM lipids of vesicles was incubated for 20 min with 40 μM of Amphiphysin (non-fluorescent) at 37 °C. The solution was incubated on a copper grid (G200H-Cu + Formvar, Aname), previously activated (with 5 min UV) and subsequently stained with 2% neutral phosphotungstic acid. Grids were imaged in a JEOL 1010 80 kV TEM microscope, and recorded with the AnalySIS software.
### Mechanical/osmotic stimulation of the pSLB, protein injection, and live imaging
Membrane-containing rings were mounted in the stretch system as previously described7. Images of cells and pSLBs were acquired with a 60x objective (NIR Apo 60X/WD 2.8, Nikon) in an inverted microscope (Nikon Eclipse Ti) with a spinning disk confocal unit (CSU-W1, Yokogawa) and a Zyla sCMOS camera (Andor), using the Micromanager software. The bilayer was stretched slowly for 120 s and the strain, obtained through the measurement of the hexagon extension, was between 5 and 8%. After 120 s of stretch, the bilayer was slowly released for 300 s. At release and upon tube appearance, images were acquired every second in two different channels, collecting each fluorophore emission signal. Given the 3D structure of the tubes and buds, manual focusing enabled imaging of these lipid templates over time, slightly above the bilayer plane. Three microliters of an Amphiphysin or Neutravidin stock solution (of a concentration depending on the desired end concentration, but always in the same buffer as the one covering the pSLB to avoid any osmotic perturbation) were gently micro-injected into the buffer droplet hydrating the pSLB. End concentrations ranged from 50 nM to 5 μM. In some instances, the non-fluorescent protein was used to reach high concentrations. For the controls of tube behavior in the absence of protein, no injection was performed. To modify osmolarity, the pSLB was exposed to medium mixed with de-ionized water, and after pressurized cap formation, protein was injected in the same conditions as above. Osmolarity was adjusted to that of the buffer hydrating the pSLB.
### Supported lipid bilayer (SLB) formation on glass coverslips
SLBs on glass coverslips, used for the calibration in quantitative fluorescence microscopy, were obtained as previously described50. Glass coverslips were cleaned by immersion in a 5:1:1 solution of H2O:NH4OH:H2O2 at 65 °C for 20 min and were dried under a stream of N2 gas. GMVs were obtained as previously described but with different lipid mixtures. To obtain SLBs with 0.1 to 0.5% of protein-like fluorophores, two LUV stock solutions were prepared, either DOPC only, or DOPC with 0.5% 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine-N-(TopFluor® AF488) (ammonium salt). Lipid films were rehydrated in 150 mM NaCl and 10 mM Tris, pH 7.4, to a final concentration of 3 mg/mL. GMVs were extruded as previously described to obtain LUVs. Small PDMS rings of 6 mm diameter were bonded as described before using plasma cleaning of both substrates, forming a small chamber on top of the coverslip. Coverslips were activated by cleaning with oxygen plasma (Harrick) in constant flow mode (pressure 0.6 mbar, high power) for 20 min. The two LUV stock solutions were diluted in fusion buffer (300 mM NaCl, 10 mM Tris, 10 mM MgCl2) to 0.5 mg/mL solutions at different ratios to obtain a set of solutions from 0 to 0.5% TopFluor-AF488. SLBs of the different fluorophore ratios were obtained by incubating the diluted solutions in the glass coverslip chambers, immediately after the plasma cleaning process, for 1 h at room temperature. Liposomes were extensively rinsed with the fusion buffer and subsequently with milli-Q water.
### Imaging of the SLBs, liposome, and protein solutions on glass for quantitative fluorescence microscopy
SLBs on glass were imaged under the same conditions as the pSLBs on PDMS. For the AF-488 enriched SLBs, the exposure time and laser power were the same as for the protein channel. For the LissRhod-DPPE enriched SLBs, parameters were the same as for the lipid channel. Background for the AF-488 enriched SLB was obtained by focusing on a LissRhod-DPPE enriched bilayer and recording an image in the 488 nm channel. The opposite was done for the LissRhod-DPPE enriched background. Fluorescence images of protein solutions at different concentrations, from 0 to 0.75 μM, and of LissRhod-DPPE enriched LUV solutions (from 0 to 0.1%) were recorded with the same settings as for the pSLB protein channel.
### Cell culture and transfection
Normal Human Dermal Fibroblasts derived from an adult donor (NHDF-Ad, Lonza, CC-2511) were cultured using Dulbecco’s modified Eagle medium (DMEM, Thermofisher Scientific, 41965-039) supplemented with 10% FBS (Thermofisher Scientific, 10270-106), 1% Insulin-Transferrin-Selenium (Thermofisher Scientific, 41400045) and 1% penicillin-streptomycin (Thermofisher Scientific, 10378-016). Cell cultures were routinely checked for mycoplasma. CO2-independent media was prepared by using CO2-independent DMEM (Thermofisher Scientific, 18045-054) supplemented with 10% FBS, 1% penicillin-streptomycin, 1.5% HEPES 1 M, and 2% L-Glutamine (Thermofisher Scientific, 25030-024). One day before experiments, cells were co-transfected with the membrane-targeting plasmid peGFP-mem and the Amph1-pmCherryN1. Transfection was performed using the Neon transfection device according to the manufacturer’s instructions (Invitrogen). peGFP-mem was a kind gift from Pr. F. Tebar and contained the N‐terminal amino acids of GAP‐4351, which carry a signal for post‐translational palmitoylation of cysteines 3 and 4 that targets the fusion protein to the cellular membrane, coupled to a monomeric eGFP fluorescent protein. Amph1-pmCherryN1 was a kind gift of Pr. De Camilli and contained the full-length Amphiphysin 1 coupled to a mCherry fluorophore.
### Mechanical stimulation of the cells and live imaging
Cell mechanical stimulation was done as previously described7. Briefly, a 150 μL droplet of a 10 μg/mL fibronectin solution (Sigma) was deposited in the center of the membrane mounted in the ring. After overnight incubation at 4 °C, the fibronectin solution was rinsed, cells were seeded on the fibronectin-coated membranes and allowed to attach for 30–90 min. Then, ring-containing membranes were mounted in the stretch system previously described7. Cell images were acquired with a 60x water dipping objective (NIR Apo 60X/WD 2.8, Nikon) and an Orca Flash 4.0 camera (Hamamatsu), in an upright epifluorescence microscope with the Metamorph software. Cells were always imaged in two different channels collecting each fluorophore emission signal, every 3 s. They were imaged for 2 min at rest, 3 min in the 6% stretched state (nominal stretch of the PDMS substrate), and 3 min during the release of the stretch.
### Quantifications
#### Diameter of tubes expelled by Amphiphysin from sucrose-loaded vesicles using TEM images
The diameter of the lipid tube reshaped by Amphiphysin was measured using the TEM images from the sucrose-loaded assay. Diameters at one or two places of tubes expelled from the vesicles were measured manually on seven different high-magnification images (×60k) of two independent experiments. The mean diameter was computed from these measurements.
#### Binding curves of the protein to the buds and tubes
Stacks of the acquired images were prepared in Fiji. A stack containing a single lipid object (tube or bud) was isolated from the timelapse stacks obtained in the protein channel, as well as a stack of a small area of the pSLB close to the object. Objects were automatically thresholded in CellProfiler and their mean fluorescence intensity was extracted. After background correction, the fluorescence intensity was plotted over time for each object.
#### Protein enrichment on the reshaped tube
The raw intensities of the elongated tubes were measured as explained above (in the tube diameter section), for both lipid and protein channels, at the same timepoint. The raw intensity of the bud in both channels was also measured assuming a spherical shape. We define the tube versus bud enrichment in both channels by the ratio between the mean intensities of the tube and bud. Mean intensities are calculated by dividing the raw intensities by the area of the tube or bud, which is the same in both channels. In the case of the lipid image, no enrichment is assumed. We thus normalize the protein enrichment value with that of the lipid which makes our measurement independent of geometry. See also Supplementary Fig. 9a.
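This normalization can be sketched in a few lines. The function and variable names below are ours, and the values in the usage example are invented for illustration; the idea is only that the lipid-channel enrichment captures the geometric contribution, so dividing by it isolates true protein enrichment:

```python
def enrichment(raw_tube, raw_bud, area_tube, area_bud):
    """Tube-versus-bud enrichment for one channel: ratio of mean intensities."""
    return (raw_tube / area_tube) / (raw_bud / area_bud)

def normalized_enrichment(protein_tube, protein_bud,
                          lipid_tube, lipid_bud,
                          area_tube, area_bud):
    """Protein enrichment divided by lipid enrichment.

    Since no lipid enrichment is assumed between tube and bud, the lipid
    ratio carries only geometry, and dividing by it makes the protein
    value geometry-independent.
    """
    e_protein = enrichment(protein_tube, protein_bud, area_tube, area_bud)
    e_lipid = enrichment(lipid_tube, lipid_bud, area_tube, area_bud)
    return e_protein / e_lipid

# Hypothetical example: lipid means scale with area (no lipid enrichment),
# protein mean intensity on the tube is twice that on the bud.
result = normalized_enrichment(protein_tube=200.0, protein_bud=200.0,
                               lipid_tube=100.0, lipid_bud=200.0,
                               area_tube=10.0, area_bud=20.0)
```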
#### Estimation of protein coverage
To estimate the coverage of tubes and buds with Amphiphysin, we first prepared flat membrane bilayers containing 0.5% Liss Rhodamine fluorophore, and measured their average fluorescence intensity per unit area. Then, tubes or buds in experiments were identified as described in the “binding curve method”, and their average fluorescence intensity in the lipid channel was also calculated. The ratio between both values gives a geometrical correction factor: given the 3D shapes of tubes and buds, this accounts for loss of signal if not all fluorescence is collected in the confocal slice, or gain of signal from integration of fluorescence over the 3D object.
Then, we prepared flat membrane bilayers, but labeled with the same fluorophore used for Amphiphysin, AF-488. By measuring fluorescence intensities as a function of AF-488 concentration, we obtained a calibration curve between the fluorescence signal and fluorophore concentration, as previously described41,50. Finally, we measured the average fluorescence intensity of tubes and buds in the Amphiphysin channel, and used the calibration curve and the geometrical correction factor to estimate an Amphiphysin dimer concentration (accounting for the number of fluorophores per dimer). After assuming a dimer area of 58 nm2 (the same area as in our simulations, close to the one classically used4), we finally obtain a coverage estimate (see also Supplementary Fig. 7a) of the protein on the tube or bud at each timepoint. We then fit the exponential curve $\mathrm{Coverage} = C_{\max}(1 - e^{-kt})$ to the experimental evolution with time of each analyzed structure, and take the coverage value at which reshaping begins. Finally, we calculated the mean and standard deviation of all points.
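The exponential fit above can be sketched as follows. This is an illustration only: the parameter values are made up, and a simple grid search (with the closed-form optimal C_max for each trial k) stands in for whatever fitting routine was actually used:

```python
import numpy as np

def coverage_model(t, c_max, k):
    # Coverage = C_max * (1 - exp(-k t))
    return c_max * (1.0 - np.exp(-k * t))

def fit_coverage(t, y, k_grid):
    """Least-squares fit of the saturation model.

    For each trial k, the optimal C_max has a closed form (linear in the
    basis 1 - exp(-k t)); keep the (C_max, k) pair with smallest residual.
    """
    best = None
    for k in k_grid:
        m = 1.0 - np.exp(-k * t)
        c_max = np.dot(y, m) / np.dot(m, m)
        resid = np.sum((y - c_max * m) ** 2)
        if best is None or resid < best[0]:
            best = (resid, c_max, k)
    return best[1], best[2]

# Synthetic binding curve with assumed (not measured) parameters
t = np.linspace(0.0, 200.0, 50)          # time, s
y = coverage_model(t, 0.8, 0.05)         # C_max = 0.8, k = 0.05 s^-1

c_max_fit, k_fit = fit_coverage(t, y, np.linspace(0.01, 0.1, 91))
```

On this noiseless synthetic curve the grid search recovers the generating parameters; on real, noisy data one would use a proper nonlinear least-squares routine.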
#### Quantification of Amphiphysin tubulation in the cell experiments
In movies of Amphiphysin over-expressing cells, time slots of 90 s before, during, and after stretch were analyzed. The number of tubulations appearing in each one of the slots was manually counted having as reference the timepoint of formation of the structure. The graph and statistics were generated using the Graphpad prism software.
#### Quantification of the elongation rate
For tubes elongating either from buds in vitro or in the cellular plasma membrane, the elongation rate was obtained by plotting the length of the tube (increasing with time) at different time points. The slope of a linear fit to these data directly gives the elongation rate.
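As a sketch of this procedure (the time and length values below are invented for illustration; a degree-1 polynomial fit returns slope and intercept):

```python
import numpy as np

# Tube length vs. time; the elongation rate is the slope of a linear fit.
time_s = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # s (illustrative)
length_um = np.array([1.0, 3.0, 5.0, 7.0, 9.0])    # µm (illustrative)

rate, intercept = np.polyfit(time_s, length_um, 1)  # rate in µm/s
```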
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
|
# Math Help - Deriving logarithmic error equation
1. ## Deriving logarithmic error equation
Hi guys, wonder if any of you can help me clarify something. Got a deadline coming up and have been asked to explain an equation I've used in my work ... I've got a derivation from the source I got the equation from, but I don't understand one of the steps ... here goes ....
$m\pm \sigma_m = constant - 2.5\log(S\pm N)$
$m\pm \sigma_m = constant - 2.5\log\left[S\left(1\pm \frac{N}{S}\right)\right]$
$m\pm \sigma_m = constant - 2.5\log S - 2.5\log\left(1+ \frac{N}{S}\right)$
Therefore
$\sigma_m = \pm 2.5\log \left(1 + \frac{1}{S/N}\right)$
I'm just not sure what's happening with the $\pm$ disappearing, changing places etc, so if anyone could show me how you get from line 2 to 3 and 3 to 4 I'd be very grateful!
Cheers,
Rai
2. Originally Posted by Rai
From line 2 to line 3,
The negative N/S branch may have been dropped because N could be greater than S: if N > S, then N/S > 1, so (1 − N/S) is negative, and the logarithm of a negative quantity is undefined.
From line 3 to line 4,
The ± reappearing outside the logarithm in line 4 comes from the ± attached to σ_m in line 3; in line 4, σ_m itself is taken as positive.
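A quick numerical check (not part of the original derivation) shows why keeping only the '+' branch is a good approximation when the signal is much stronger than the noise:

```python
import math

def mag_error(snr):
    # sigma_m = 2.5 * log10(1 + 1/(S/N)), as in line 4 of the derivation
    return 2.5 * math.log10(1.0 + 1.0 / snr)

# Compare the '+' and '-' branches of line 2 for a strong signal, S/N = 100:
snr = 100.0
plus_branch = 2.5 * math.log10(1.0 + 1.0 / snr)
minus_branch = -2.5 * math.log10(1.0 - 1.0 / snr)
# Both come out near 0.011 mag, so the single '+' form loses very little.
```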
3. Thanks a lot for that. Of course, the reason for the $\pm$ reappearing in the 4th line is obvious now you've pointed it out! Hmmm N isn't likely to be larger than S in many cases (the symbols stand for noise and signal and in the context the equation is used in, the signal measured is usually much larger than the noise measurement). I've got a horrible feeling it's just been 'fudged' ... I shall have to have another think ...
Thanks again!
|
# All Questions
16,089 questions
5 views
### ELPA list is missing markdown-mode according to list-packages
I'm trying to install markdown-mode using instructions on https://jblevins.org/projects/markdown-mode/, according to which I put into my init.el file (require 'package) (add-to-list 'package-archives ...
10 views
Have recently upgraded to Ubuntu 18.04 which required an emacs reinstall of version 24 Now I've noticed that my symmetrically encrypted org mode file (file.org.gpg) no longer requires me to enter the ...
19 views
### How do I get byte-compilation warnings about undefined variables?
The following code works fine with M-x eval-buffer, and byte-compiles without any warnings. (eval-when-compile (defconst demo-one 1)) (defvar demo-some-var (foo bar ,demo-one)) However, if I ...
16 views
### org-insert-link sometimes produces “wrong type argument: stringp, nil” after prolonged use of emacs
It usually happens after a couple of days of use that C-c C-l / org-insert-link inside .org files does not allow me to insert new links anymore. What I want to do is interactively insert link ...
19 views
### How can I get flush left text in org-indent-mode?
I open an org file in emacs and it displays thus: I enable word wrap with M-x visual-line-mode and get: But I prefer the extra indenting of headlines provided by org indent mode, so then do M-x org-...
12 views
### Outline minor mode + mouse
XEmacs had outl-mouse that used outline-minor-mode plus a few pixmaps and mouse keybindings to let old lazy guys to hide and show (in my use case) sections and subsections etc of my AUCTeX buffers, ...
233 views
### How to modify a string without altering its text properties
If I propertize a string and save it to a variable, how can I change the string within that variable without altering its text properties? For example: (setq myvar (propertize "testing" ...
22 views
Is there a way of calling (end-of-buffer) that excludes trailing empty lines?
31 views
### Directed Acyclic Graphs in Org mode, and Cloned Nodes
By default, Org-mode headers form trees. That is, every header can have an arbitrary number of sub-tasks, and this relationship is recursive to an arbitrary depth. However, a task in practice can be a ...
27 views
### How Do I Create a Popup Window?
I would like to create a (read-only) popup window with some text in it, such that the user can close it by pressing 'q'. You know - the kind you see when you try compiling and get errors. Any ...
21 views
### Use Helm for Org Refile Completion
I want to use helm for org-refile completion when determining which heading to refile under. Here are some excerpts from my initialization script. (setq org-refile-targets '(("~/Documents/GTD/Gtd....
10 views
### Org-mode tell paragraph position in subtree
Is there a quick way to show where a paragraph is sitting in the document structure hierarchy? * Chapter 1 ... * Chapter 5 ** Section 4 ... paragraph X I would like to tell that paragraph X is ...
29 views
### org-mode numbered list across headings
I have a list of items. * Heading 1 1. Item 2. Item * Heading 2 1. Item 2. Item 3. Item Is there an easy way to make the numbers run across the headings like this? * Heading 1 1. Item 2. Item * ...
59 views
### How can I quit from multiple-cursor mode by ESC
I want to bind ESC to quit from multiple-cursor mode. This code doesn't work: (define-key mc/keymap (kbd "<ESC>") 'mc/keyboard-quit) How can I do it?
22 views
### How to use recover-session and dired-omit-mode?
I have enabled dired-omit-mode globally with (add-hook 'dired-mode-hook #'dired-omit-mode). When using recover-session, I don't see any files. How to fix this? Creating a .dir-locals.el containing ((...
13 views
### After setting bookmark: Wrong type argument: listp, "~/.spacemacs
So I did just M-x bookmark-save in my original .spacemacs file, which lead to the case that when I booted Spacemacs, I get the following error: Upgrading bookmark format from 0 to 1... mapcar: Wrong ...
12 views
### Synchronised scrolling in Evil mode
Is it possible to do synchronized scrolling of two windows in Evil mode, similar to what :set scrollbind does in Vim? sroll-all-mode doesn't quite achieve this. For example, it doesn't scroll on G or ...
31 views
### Any differences between M-x command and (call-interactively #'command)?
Q: are there any differences between M-x command and (call-interactively #'command)? I have an obscure bug I'm trying to hunt down, and can't even begin to think of how to do it. the summary version ...
23 views
### How to use a org-capture template expansion several times in a captured note
I am trying to create a org-capture template that re-uses some text given with the ‘%^{PROMPT}’ expansion along several points of the created note. I have looked in this link and in stack overflow ...
16 views
### Align statement breaks on rhs with equal sign
Is there a way to have rhs statement breaks align with the equal sign on the lhs line? e.g. some_value = a * b + c / some_other_value * x; becoming some_value = a * b + c / ...
13 views
### Custom indentation for macros in C mode
I'm working in a codebase with an extensive use of function macros and I'm having some difficulty finding the correct way to set indentations with them. Specifically, we pass a lot of statement lists {...
49 views
### How replace either of two words using a regexp to find them?
Text How are you? I'm just fine. I need to add some suffix to the next words: are and fine. I try this: \(are\)\|\(fine\) but it doesn't help.
20 views
### Mapping new command involving <escape> clear all predefined <escape> commands
I have the command fill-paragraph mapped to key <escape> q, it is mapped by default in emacs. Now I want to map the opposite command, unfill-paragraph, to key <escape> p. However, when ...
14 views
### Speedbar opens a normal text buffer, not dired like
I'm trying to use Speed bar when coding C++. My problem is, when I type M-x speedbar it opens an already opened code file instead. Here you can see the effect:
14 views
### Missing color support (for exa) in eshell
I just started using eshell. But the color support for some commands seems to be missing. For example I like to use the command exa, because of its nice colors. But in eshell it is all black.
15 views
### Org-ref not communicating with Tex-live?
Org-ref cannot tell that I have biblatex-caspervector installed. Is this because of some packages that I missed?
28 views
### org-mode internal link works only in some cases
I have an .org document with the following structure. * Headline 1 This is some text * Headline 2 :Custom_id: h2 This is heading number two ** Sub heading 1 <<target>> ** Sub heading ...
34 views
### Is it possible to apply a function to every region between two “marker” lines?
I have a .tex file that looks something like this: \$ cat foo.tex \begin{document} \begin{question} lots of multiline text with several paragraphs of information \end{question} other text \begin{...
19 views
### mark-whole-buffer followed by kill-ring-save not working correctly
I'm using Emacs 26.1 on MacOS. Instead of copying the entire buffer I get a warning: Saved text until "text where is stopped saving with whitespace seemingly to the end of the buffer " I'm ...
29 views
### How to enable the superword minor mode globally?
I am just wondering how to enable the superword-mode globally? I tried to enable it following the answers found here: How to enable ido-mode forever? , using: (require 'superword-mode) (...
42 views
### How to change window splitting behavior in Magit?
emacs: 26.2 spacemacs: develop (19c429e) magit: 20190609.1424 Some time ago (e.g. 20190222.1746), magit's window management used to be different from how it currently is. For example, magit-status ...
11 views
### Org Agenda is looking for agenda file names inside of my todo file
When I try to open a org-agenda view, I am prompted with emacs claiming that the first line of my todo list is not a valid file, with options to remove it from the agenda list, or abort. Removing it ...
9 views
### cmake-ide in projects with only make
What are my options to mimic cmake-ide in the projects that are not built with CMake? The features that I want the most are find-definition and autocomplete. For the compilation related features, ...
21 views
### Org-Mode not exporting all child subheading
While exporting following sample org file content to html, final output skips headings from 3rd depth in table of content. After reading org documentation, and forum questions, I tried following, ...
35 views
### Huge buggy padding in line-number buffer
A picture: As you can see there is a huge waste of space on the left side of the line number section. When I increase the font size the padding even increases and does not decrease when I decrease ...
19 views
### How to go to matching pattern after org-occur?
After typing C-c / / and a regex, org shows a sparse tree containing matches of the pattern. How do I move point through these matches, in a way similar to C-s?
17 views
### BibTeX Line Breaks in fields
I'd like to have linebreaks within fields of a BibTeX entry that are preserved even after I format the entry with C-q. Currently, what happens in my BibTeX buffer is this. I produce an entry with a ...
26 views
### org-ref “Unbalanced parentheses”, 20306, 18248 error
I am new to org-mode and I am learning to use it to replace markdown for academic writing. M-x org-ref gave me this error: org-ref "Unbalanced parentheses", 20306, 18248 error What should I do ...
14 views
### Evil: How to add keybinding for changing buffers?
With Evil mode, :bn shows the next buffer, and :bp shows the previous buffer. I wish to define a custom keybinding where <Space>n shows the next buffer, and <Space>p shows the previous ...
15 views
### Fontconfig warning when start emacs [on hold]
Upon invoking emacs, it report enormous font errors Fontconfig warning: "/etc/fonts/fonts.conf", line 5: unknown element "its:rules" Fontconfig warning: "/etc/fonts/fonts.conf", line 6: unknown ...
19 views
### Wrap a block of html with one or multiple tags
Often my point is at a paragraph <p>The quick brown fox jumped over the fence</p> <br> Example and I want to wrap it like this <div class="row"> <div class="columns">...
24 views
### Can Org-mode link to a Babel block?
I would like to have a link that runs a Babel source-code block. Does Org-mode support running something from the library of babel in a link, or a source block via a file+blockname reference? A ...
21 views
### How do I keep special modes from overriding my keybindings?
I usually have multiple windows open and switch between them often, so I put this in my init file: (global-set-key (kbd "C-o") 'other-window) ; Save a keystroke when switching windows (global-set-key ...
27 views
### vc-root-dir is not a valid command name
Trying to use the vc-root-dir function on Emacs gives the error message vc-root-dir is not a valid command name On inspecting vc.elc as found by locate-library the function appears to exist as a ...
24 views
### How to chain isearch-forward-symbol-at-point and query-replace-regexp into a single keybinding?
I'm looking to create a keybinding that will perform the following shortcut: call isearch-forward-symbol-at-point call query-replace-regexp It's basically a shortcut for the following key combo: M-s ...
70 views
### What happens if I have different versions of Emacs using the same directory
Currently I have two versions of Emacs (system installed by apt-get: emacs26 and my own compiled version emacs27). They both use the same .emacs.d directory, and I wonder what happens when the packages are ...
77 views
### Is it possible to shuffle paragraphs?
I'd like to be able to quickly "shuffle" the order in which all paragraphs in a region occur. Can this be easily accomplished in emacs? For example, consider the following region. Hello world. This ...
11 views
### How to specify a pdf viewer for ess mode
I have set the variable ess-pdf-viewer-pref to "evince" But when I export any Rnw file using M-n e, instead of displaying the pdf in evince it is displayed inside Emacs. How can I stop Emacs ...
22 views
### Breaking up minified html
Is it possible to do this with vanilla emacs? If not what changes do I need to make. Original test.html <a id="try_redacted" target="_blank" href="http://redacted.com/game/2d/?try=1" style="right:...
36 views
### How can I automatically swap frame buffers if the buffer is already open?
I've seen the https://www.emacswiki.org/emacs/buffer-move.el script, but I'm trying to something a bit different. Suppose I have a frame with a buffer from file A.h in it and A.cpp in another frame ...
|
# R S Aggarwal Solutions for Class 10 Maths Chapter 17 Volume and Surface Area of Solids
Get RS Aggarwal Solutions for Class 10 Chapter 17 Volume and Surface Area of Solids here. We provide the solution for all exercise problems to help students in preparing for their exams. Class 10 Chapter 17 is based on the volume and surface area of various solids such as a cube, cuboid, cylinder, sphere and their combinations. This is one of the important and interesting concepts in the class 10 syllabus. The subject experts at BYJU’S have devised detailed solutions for the students to understand the concepts easily. Students can download R S Aggarwal Maths Class 10 Solutions now for free and start practising.
## Download PDF of R S Aggarwal Solutions for Class 10 Chapter 17 Volume and Surface Area of Solids
### Access Solutions of Maths R S Aggarwal Chapter 17 – Volume and Surface Area of Solids
Get detailed solutions for all the questions listed under the below exercises:
Exercise 17 A Solutions
Exercise 17 B Solutions
Exercise 17 C Solutions
Exercise 17 D Solutions
## Exercise 17A
Question 1: Two cubes each of volume 27 cm3 are joined end to end to form a solid. Find the surface area of the resulting cuboid.
Solution:
Volume of cube = 27 cm3 (given)
Let ‘a’ cm be the length of each side of the cube.
We know, volume of cube = (side)3 cubic units
⇨ 27 = a3
or a = 3 cm
When two cubes are joined cuboid is formed.
Find dimensions of cuboid:
Length = l = 2a = 2 × 3 cm = 6 cm
Breadth = b = a = 3 cm
Height = h = a = 3 cm
Now,
Surface area of cuboid = 2(lb+bh+lh)
= 2(6 × 3 + 3 × 3 + 6 × 3)
= 2(18 + 9 + 18)
= 90
Surface area of resulting cuboid is 90 cm2
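The arithmetic can be checked with a short script (the variable names are ours):

```python
# Two cubes of volume 27 cm^3 joined end to end form a cuboid.
a = round(27 ** (1 / 3))            # side of each cube, cm (27 = a^3)
l, b, h = 2 * a, a, a               # cuboid dimensions after joining
surface_area = 2 * (l * b + b * h + l * h)   # cm^2
```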
Question 2: The volume of a hemisphere is 2425(1/2) cm3. Find its curved surface area.
Solution:
Volume of a hemisphere = 2425(1/2) = 4851/2 cm3
We know, Volume of a hemisphere = (2/3)πr³
4851/2 = 2/3 × 22/7 × r³
r³ = (4851 × 21)/88 = 1157.625
r = 10.5
So, radius of hemisphere is 10.5 cm.
Curved Surface Area of hemisphere = 2πr2
= 2 × 22/7 × (10.5)2
= 693
Curved surface area of hemisphere is 693 cm2.
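The same computation as a quick check, using π = 22/7 as in the text:

```python
PI = 22 / 7
volume = 4851 / 2                          # 2425 1/2 cm^3
r = (volume * 3 / (2 * PI)) ** (1 / 3)     # from V = (2/3) pi r^3
csa = 2 * PI * r ** 2                      # curved surface area, cm^2
```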
Question 3: If the total surface area of a solid hemisphere is 462 cm2, then find its volume.
Solution:
Total surface area of a solid hemisphere = 462 cm2
We know, Total surface area of a solid hemisphere = 3πr²
462 = 3 × 22/7 × r²
r² = 3234/66 = 49
or r = 7
Now, volume of solid hemisphere = (2/3)πr³
= 2/3 × 22/7 × 7 × 7 × 7
= 2156/3
= 718.67
So, volume of solid hemisphere is 718.67 cm3
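Checking the same steps numerically, with π = 22/7:

```python
PI = 22 / 7
tsa = 462.0                       # total surface area of solid hemisphere, cm^2
r = (tsa / (3 * PI)) ** 0.5       # from TSA = 3 pi r^2
volume = (2 / 3) * PI * r ** 3    # cm^3
```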
Question 4: (i) A 5-m-wide cloth is used to make a conical tent of base diameter 14 m and height 24 m. Find the cost of cloth used at the rate of Rs.25 per metre.
Solution:
Given:
Width of cloth used = 5 m
Diameter of conical tent = 14 m
Let r be the radius of the conical tent.
Radius = r = 14/2 m = 7 m
Height = h = 24 m
Let slant height of conical tent = l m
So, l = √(r² + h²) = √(7² + 24²)
l = 25 m
Curved Surface area of conical tent = πrl
= 22/7 × 7 × 25 m2
= 550 m2
This is the area of cloth required to make the conical tent, i.e. 550 m2.
Now,
Area of cloth required = Length of cloth used x width of cloth
or
Length of cloth used = Area of cloth required ÷ width of cloth
= 550/5 m
= 110 m
Length of cloth used = 110 m
Given, Cost of cloth used = Rs 25 per meter
Total Cost of cloth required to make a conical tent = 110 × Rs 25
= Rs 2750
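The slant height, cloth area and cost can be verified in a few lines:

```python
PI = 22 / 7
r, h = 7.0, 24.0                    # radius and height of the tent, m
l = (r ** 2 + h ** 2) ** 0.5        # slant height, m
csa = PI * r * l                    # curved surface area = cloth needed, m^2
cloth_len = csa / 5.0               # cloth is 5 m wide
cost = cloth_len * 25.0             # Rs 25 per metre
```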
(ii) The radius and height of a right circular cone are in the ratio 5:12. If its volume is 314 cubic cm. find the total surface area [Take π = 22/7]
Solution:
Radius and height of a right circular cone are in the ratio of 5:12.
Let the radius and height be 5x, 12x respectively.
Volume of cone = 314 cm³. (given)
We know, Volume of cone = 1/3 πr2 h
1/3 πr2 h = 314
1/3 x 3.14 x (5x)2 x 12x = 314
314 x3 = 314
or x = 1
This implies,
Radius = 5 cm and Height = 12 cm
Slant height(l) = √(h² + r²)
√12² + 5²
l = √144 + 25
l = √169
or l = 13 cm.
We know, Total surface area = πr(l + r)
= 3.14 x 5 x (13+5)
= 282.6 cm2
Question 5: If the volumes of two cones are in the ratio of 1:4 and their diameters are in the ratio of 4:5, then find the ratio of their heights.
Solution:
Let V1 be the volume of first cone and V2 be the volume of second cone.
Then, V1:V2 = 1:4 (Given) ……(1)
Let d1 be the diameter of first cone and d2 be the diameter of second cone.
Then d1:d2 = 4:5 (given) …(2)
Let h1 be the height of first cone and h2 be the height of second cone.
We know that, volume of cone = V = 1/3 π (d/2)2 h = 1/12 π d2 h
So, V1 : V2 = d12 h1 : d22 h2
1/4 = (4/5)2 × (h1/h2) (Using equations (1) and (2))
or h1/h2 = (1/4) × (25/16) = 25/64
Therefore, ratio of heights of the two cones is 25:64.
Question 6: The slant height of a conical mountain is 2.5 km and the area of its base is 1.54 km2. Find the height of the mountain.
Solution:
Given:
Slant height of conical mountain = 2.5 km
Area of its base = 1.54 km2
Let the radius of base be ‘r’ km, height of the mountain is ‘h’ km and slant height be ‘l’ km
Area of base = πr2
1.54 = πr2
1.54 = 22/7 r2
or r = 0.7 km
We know that, l2 = r2 + h2
(2.5)2 = (0.7)2 + h2
6.25 – 0.49 = h2
or h = 2.4 km
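Question 6 can be checked numerically with a small Python sketch (illustrative only; π = 22/7 as in the text):

```python
import math

# Conical mountain: slant height 2.5 km, base area 1.54 km^2.
l = 2.5
base_area = 1.54
r = math.sqrt(base_area * 7 / 22)    # from base_area = (22/7) * r^2
h = math.sqrt(l ** 2 - r ** 2)       # Pythagoras: l^2 = r^2 + h^2
print(r, h)                          # ≈ 0.7 km and 2.4 km
```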
Question 7: The sum of the radius of the base and the height of a solid cylinder is 37 metres. If the total surface area of the cylinder be 1628 sq metres, find its volume.
Solution:
Let the radius of the base be r and the height be h.
The sum of the radius of the base and the height of a solid cylinder is 37 metres.
⇨ r + h = 37 m …(1)
Total surface area of the cylinder = 2πr(r + h)
2 × 22/7 × r × 37 = 1628
or r = 7
From (1): 7 + h = 37
h = 30
Again,
Volume of the cylinder = πr2 h
= 22/7 × 7 × 7 × 30
= 4620
Volume of the cylinder is 4620 m3.
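The numbers in Question 7 fall out directly once r + h = 37 is substituted into the surface-area formula; a Python sketch (illustrative only, π = 22/7):

```python
PI = 22 / 7

# r + h = 37 and 2*pi*r*(r + h) = 1628 together give r directly.
total_area = 1628
r = total_area / (2 * PI * 37)       # ≈ 7
h = 37 - r                           # ≈ 30
V = PI * r ** 2 * h                  # volume of the cylinder
print(r, h, V)                       # ≈ 7, 30, 4620
```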
Question 8: The surface area of a sphere is 2464 cm2. If its radius be doubled, then what will be the surface area of the new sphere?
Solution:
Surface area of a sphere = 2464 cm2 (Given)
4π r2 = 2464
Now,
Surface area of a sphere with radius 2r = 4π (2r)2
= 4 x 4π r2
= 4 x 2464
= 9856 cm2
Question 9: A military tent of height 8.25 m is in the form of a right circular cylinder of base diameter 30 m and height 5.5 m surmounted by a right circular cone of same base radius. Find the length of canvas used in making the tent, if the breadth of the canvas is 1.5 m.
Solution:
A military tent is made as a combination of right circular cylinder and right circular cone on top.
Given :
Total Height of tent = h = 8.25 m
Base diameter of tent = 30 m, then
Base radius of tent = r = 30/2 m = 15 m
Height of right circular cylinder = 5.5 m
Base radius of cone = 15 m
Let slant height of cone = l m
Now,
Curved surface area of right circular cylindrical part of tent = 2πrh
and
Height of conical part = total height of tent – height of cylindrical part
Height of cone = 8.25 – 5.5 = 2.75 m
l2 = h2 + r2
= 2.752 + 152
= 232.5625
or l = 15.25 m
Curved surface area of conical part of the tent = πrl
Total surface area of the tent = Curved surface area of cylindrical part + curved surface area of conical part
Total surface area of tent = 2πrh + πrl
= πr(2h + l)
= 22/7 × 15 × (2 × 5.5 + 15.25)
= 1237.5
Total surface area of tent is 1237.5 m2
Breadth of canvas used = 1.5 m
Length of canvas used x breadth of canvas used = Total surface area of tent
Length of canvas used = 1237.5/1.5 = 825
Length of canvas used is 825 m.
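The combined cylinder-plus-cone arithmetic of Question 9 can be sketched as follows (Python for illustration only; π = 22/7):

```python
import math

# Military tent: cylinder r = 15 m, h = 5.5 m, cone on top, total 8.25 m.
PI = 22 / 7
r, h_cyl, h_total = 15.0, 5.5, 8.25
h_cone = h_total - h_cyl             # 2.75 m
l = math.hypot(h_cone, r)            # slant height = 15.25 m
area = PI * r * (2 * h_cyl + l)      # 2*pi*r*h + pi*r*l = pi*r*(2h + l)
length = area / 1.5                  # canvas breadth is 1.5 m
print(area, length)                  # ≈ 1237.5 m^2 and 825 m
```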
Question 10: A tent is in the shape of a right circular cylinder up to a height of 3 m and conical above it. The total height of the tent is 13.5 m and the radius of its base is 14 m. Find the cost of cloth required to make the tent at the rate of Rs.80 per square metre. [take π = 22/7]
Solution:
The tent is made as a combination of right circular cylinder and right circular cone on top.
Height of cylindrical part of the tent = h = 3 m
Radius of its base = r = 14 m
Total height of the tent = 13.5 m
Curved surface area of cylindrical part of tent = 2πrh
= 2 × 22/7 × 14 × 3
= 264 m2
Height of conical part of the tent = total height of tent – height of cylindrical part
Height of conical part of the tent = 13.5 – 3 m =10.5 m
Let the slant height of the conical part be l.
l2 = h2 + r2 (where h = 10.5 m is the height of the conical part)
= 110.25 + 196
= 306.25
or l = 17.5 m
Curved surface area of conical part of tent = πrl
= 22/7 × 14 × 17.5
= 770
Total surface area of tent = Curved surface area of cylindrical part of tent + Curved surface area of conical part of tent.
Total Surface area of tent = 264 + 770
= 1034 m2
Cloth required = Total Surface area of tent = 1034 m2
Cost of cloth = Rs 80/m2 (given)
Total cost of cloth required = Total surface area of tent × Cost of cloth
= 1034 × Rs. 80
= Rs. 82720
Cost of cloth required to make the tent is Rs. 82720
## Exercise 17B
Question 1: A solid metallic cuboid of dimensions 9m× 8m×2m is melted and recast into solid cubes of edge 2m. Find the number of cubes so formed.
Solution:
A solid metallic cuboid of dimensions 9m× 8m×2m (given)
Length (l) = 9 m
Breadth (b) = 8 m
Height (h) = 2 m
Edge of a cube (a) = 2 m
Let n be the number of required cubes.
To find: Value of n
Number of cubes = (volume of cuboid)/(volume of each cube)
= lbh/a3
= (9x8x2)/(2x2x2)
= 18
Therefore, 18 cubes are required.
Question 2: A cone of height 20 cm and radius of base 5 cm is made up of modelling clay. A child reshapes it in the form of a sphere. Find the diameter of the sphere.
Solution:
Radius of the cone = r = 5cm and
Height of the cone = h = 20cm
Let the radius of the sphere = R
As per given statement,
Volume of sphere = Volume of cone
4/3 πR3 = 1/3πr2h
4R3 = 5 × 5 × 20
R = 5 cm
Diameter of the sphere = 2R = 2 x 5 = 10 cm
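Because π cancels when the two volumes are equated, Question 2 reduces to integer arithmetic; a Python sketch (illustrative only):

```python
# Clay cone r = 5 cm, h = 20 cm reshaped into a sphere of radius R.
# (4/3)*pi*R^3 = (1/3)*pi*r^2*h  =>  4*R^3 = r^2*h
r, h = 5, 20
R_cubed = r * r * h // 4             # = 125
R = round(R_cubed ** (1 / 3))        # integer cube root = 5
assert R ** 3 == R_cubed             # confirm it is exact
print(R, 2 * R)                      # radius 5 cm, diameter 10 cm
```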
Question 3: Metallic spheres of radii 6 cm, 8 cm and 10 cm respectively are melted to form a single solid sphere. Find the radius of the resulting sphere.
Solution:
Radius of 1st sphere = r1 = 6 cm
Radius of 2nd sphere =r2 = 8 cm and
Radius of third sphere = r3 = 10 cm
Let radius of the resulting sphere = R
Now,
Volume of resulting sphere = Volume of three metallic spheres
4/3 πR3 = 4/3π (r13 + r23 + r33)
= 4/3π (63 + 83 + 103)
R3 = 1728
or R = 12 cm
Radius of the resulting sphere is 12cm.
Question 4: A solid metal cone with base radius of 12 cm and height 24 cm is melted to form solid spherical balls of diameter 6 cm each. Find the number of balls thus formed.
Solution:
Let the number of balls formed are n
As per statement,
Volume of metal cone = Total volume of n spherical balls
Volume of cone = n(Volume of any spherical ball)
1/3 π r2 h = n (4/3 π r3)
122 x 24 = n x 108
or n = 32
Therefore, 32 spherical balls can be formed.
Question 5: The radii of internal and external surfaces of a hollow spherical shell are 3 cm and 5 cm respectively. It is melted and recast into a solid cylinder of diameter 14 cm. Find the height of the cylinder.
Solution:
Let r1 and r2 be the internal and external base radii of spherical shell.
r1 = 3 cm, and r2 = 5 cm
Base radius of solid cylinder, r = 7 cm
Let the height of the cylinder = h
As per given statement:
The hollow spherical shell is melted into a solid cylinder, so
Volume of solid cylinder = Volume of spherical shell
πr2h = 4/3 π(r23 – r13)
⇨ 49h = 4/3(125 – 27)
or h = 8/3 cm
Question 6: The internal and external diameters of a hollow hemispherical shell are 6 cm and 10 cm, respectively. It is melted and recast into a solid cone of base diameter 14 cm. Find the height of the cone so formed.
Solution:
Internal radius of hemisphere = 3 cm and external radius of hemisphere = 5 cm
Diameter of cone = 14 cm
Radius of cone = 7 cm
Now,
Volume of the hollow hemisphere = Volume of the cone
Volume of cone = 1/3 π r2 h
Volume of the hollow hemisphere = 2/3 π(R3 – r3)
2/3 π(R3 – r3) = 1/3 π r2 h
2/3 x 22/7 (53 – 33) = 1/3 x 22/7 x 72 x h
2 x 98 = 49h
or h = 4 cm
Height of the cone formed is 4 cm.
Question 7: A copper rod of diameter 2 cm and length 10 cm is drawn into a wire of uniform thickness and length 10 m. Find the thickness of the wire.
Solution:
Recall units: 1 m = 100 cm and 1 cm = 10 mm
Diameter of the copper Rod = 2cm
Radius of the copper rod = r= 1 cm
Length of the Rod = h = 10 cm
Length of the wire = h = 10 m = 1000cm
Let suppose the radius of the wire = R
From the statement: Volume of the rod = volume of the wire
πr2h = πR2H
1 × 1 × 10 = 1000 × R2
R2 = 1/100
or R = 1/10 cm = 0.1 cm
Diameter of the wire = 2R = 2 × 0.1 = 0.2 cm
Therefore, the thickness of the wire is 0.2 cm or 2 mm.
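Question 7 is again just conservation of volume; a Python sketch (illustrative only):

```python
# Copper rod r = 1 cm, h = 10 cm drawn into a wire of length 10 m = 1000 cm.
# Volume is conserved: pi*r^2*h = pi*R^2*H  =>  R^2 = r^2*h / H
r, h = 1.0, 10.0
H = 1000.0
R_sq = r * r * h / H                 # = 0.01
R = R_sq ** 0.5                      # wire radius = 0.1 cm
thickness = 2 * R                    # diameter = 0.2 cm = 2 mm
print(R, thickness)
```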
Question 8: A hemispherical bowl of internal diameter 30 cm contains some liquid. This liquid is to be poured into cylindrical bottles of diameter 5 cm and height 6 cm each. Find the number of bottles necessary to empty the bowl.
Solution:
Internal diameter of hemispherical bowl = 30 cm
Volume = 2/3 πr3
= 2/3 π (15)3
= 2250 π cm3
Diameter of the cylindrical bottle = 5 cm
Volume of one cylindrical bottle = πr2h
= π x (2.5)2 x 6
= 37.5π cm3
Now,
Amount of water in n bottles = Amount of water in bowl
n × 37.5π = 2,250π
or n = 60
So, 60 numbers of bottles are required to empty the bowl.
Question 9: A solid metallic sphere of diameter 21 cm is melted and recast into a number of smaller cones, each of diameter 3.5 cm and height 3 cm. Find the number of cones so formed.
Solution:
Diameter of sphere = 21 cm
Height of the cone = 3 cm
Radius of the cone = 7/4 cm
Volume of the sphere = 4/3 πr3 = 4/3 x π x (21/2)3 cm3
Volume of cone = 1/3 πr2 h = 1/3 π (7/4)2 x 3 cm3
Let n be the number of cone formed, then
n = (Volume of the sphere) / (Volume of cone)
n = (4/3 x π x (21/2)3) / (1/3 π (7/4)2 x 3)
= 504
Number of cones formed = 504
Question 10: A spherical cannon ball, 28 cm in diameter is melted and recast into a right circular conical mould, base of which is 35 cm in diameter. Find the height of the cone.
Solution:
Radius of cannon ball = 14 cm
Volume of cannon ball = 4/3 π r3 = 4/3 π (14)3
Radius of Cone = 35/2 cm
Volume of Cone = 1/3 πr2 h = 1/3 π(35/2)2 h
Let h be the height of the cone.
From statement:
Volume of cannon ball = Volume of Cone
⇨ 4/3 π (14)3 = 1/3 π(35/2)2 h
h = 35.84
Therefore, height of the cone is 35.84 cm.
## Exercise 17C
Question 1: A drinking glass is in the shape of a frustum of a cone of height 14 cm. The diameters of its two circular ends are 16 cm and 12 cm. Find the capacity of the glass.
Solution:
Diameter of lower circular end of glass = 12 cm
Radius of lower circular end = r = 12/2 = 6 cm
Diameter of upper circular end of glass = 16 cm
Radius of upper circular end = R = 16/2 = 8 cm
Height of glass = h = 14 cm
Capacity of glass = 1/3 πh(R2 + r2 + Rr)
= 1/3 x 22/7 x 14(82 + 62 + 8×6)
= 44/3(64 + 36+48)
= 2170.67
Capacity of glass is 2170.67 cm3
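The frustum-volume formula used in Question 1 can be checked exactly with fractions (Python for illustration only; π = 22/7 as in the text):

```python
from fractions import Fraction

PI = Fraction(22, 7)
R, r, h = 8, 6, 14                   # radii of the two ends and height, cm
V = PI * h * (R * R + r * r + R * r) / 3   # (1/3)*pi*h*(R^2 + r^2 + Rr)
print(float(V))                      # ≈ 2170.67 cm^3 (exactly 6512/3)
```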
Question 2: The radii of the circular ends of a solid frustum of a cone are 18 cm and 12 cm and its height is 8 cm. Find its total surface area. [Use π= 3.14]
Solution:
Height of frustum = h = 8 cm
Radius of lower circular end = r = 12 cm
Radius of upper circular end = R = 18 cm
Let l = slant height
Slant height = l2 = (R-r)2 + h2 cm
= 36 + 64
= 100
or l = 10 cm
We know, Total surface area of frustum = πr2 + πR2 + π(R + r)l cm2
= π × 122 + π × 182 + π × (18 + 12) × 10 cm2
= 3.14(144 + 324 + 300)
= 2411.52
Total surface area of frustum is 2411.52 cm2
Question 3: A metallic bucket, open at the top, of height 24 cm is in the form of the frustum of a cone, the radii of whose lower and upper circular ends are 7 cm and 14 cm, respectively. Find
(i) the volume of water which can completely fill the bucket;
(ii) the area of the metal sheet used to make the bucket.
Solution:
Radius of lower circular end = r = 7 cm
Radius of upper circular end = R = 14 cm
Height of bucket = h = 24 cm
(i) Volume of water which can completely fill the bucket = 1/3 πh(R2 + r2 + Rr)
= 1/3 × 22/7 × 24 × (142 + 72 + 14 × 7)
= 1/3 × 22/7 × 24 × 343
= 8624
Volume of water which can completely fill the bucket is 8624 cm3.
Slant height, l2 = (R – r)2 + h2 = 72 + 242 = 625, so l = 25 cm
(ii) Curved surface area = πl(R + r)
= 22/7 x 25 x (14+7)
= 1650 cm2
Area of the base of bucket = πr2
(consider lower base of bucket)
= 22/7 x 7 x 7
= 154 cm2
Area of metal sheet used to make the bucket = curved surface area + Area of the base
= 1650 + 154
= 1804
Area of metal sheet used to make the bucket is 1804 cm2
Question 4: A container, open at the top, is in the form of a frustum of a cone of height 24 cm with radii of its lower and upper circular ends as 8 cm and 20 cm, respectively. Find the cost of milk which can completely fill the container at the rate of Rs. 21 per litre.
Solution:
Radius of lower circular end = r = 8 cm
Radius of upper circular end = R = 20 cm
Cost of 1 litre milk = Rs. 21
Height of frustum container = h = 24 cm
Volume of frustum of cone = 1/3π h(R2 + r2 + Rr) cm3
Volume of milk completely fill the container = Volume of frustum of cone
= 1/3 π x 24(202 + 82 + 20×8)
= 15689.14 cm3
= 15.68914 litres
Since 1 litre is 1000 cm3
Cost of 1 liter milk = Rs. 21
Cost of 15.68914 liter milk = Rs. 21 x 15.68914 = Rs. 329.47
Question 5: A container made of a metal sheet open at the top is of the form of a frustum of cone, whose height is 16 cm and the radii of its lower and upper circular edges are 8 cm and 20 cm respectively. Find
(i)the cost of metal sheet used to make the container if it costs Rs. 10 per 100 cm2.
(ii) the cost of milk at the rate of Rs. 35 per liter which can fill it completely.
Solution:
Radius of lower end = r = 8 cm
Radius of upper end = R = 20 cm
Height of container frustum = h = 16 cm
Cost of 100 cm2 metal sheet = Rs 10
So, Cost of 1 cm2 metal sheet = 10/100 = Rs 0.1
Let l be the slant height.
l2 = (R-r)2 + h2
= (20-8)2 + 162
= 256 + 144
= 400
or l= 20cm
Surface area of frustum of the cone = πr2 + π(R + r)l cm2
= π[(20 + 8) × 20 + 82]
= π[560 + 64]
= 624 × 22/7
= 1961.14 cm2
(i) Find cost of metal sheet used to make the container if it costs Rs. 10 per 100 cm2
Cost of metal sheet per 100 cm2 = Rs. 10
Cost of metal for 1961.14 cm2 = (1961.14 × 10)/100 = Rs. 196.11 (approx)
(ii) Find the volume of frustum:
Volume of frustum = 1/3 πh(r2 + R2 + rR)
= 1/3 x 22/7 x 16(82 + 202 + 8×20)
= 1/3 x 22/7 x 16(64 + 400 + 160)
= 10459.43 cm3
= 10.459 litres
Using
1000 cm3 = 1 litre
1 cm3 = 1/1000 litre
Cost of 1 litre milk = Rs. 35
Cost of 10.459 litre milk = Rs. 35 × 10.459 = Rs. 366 (approx)
Question 6: The radii of the circular ends of a solid frustum of a cone are 33 cm and 27 cm, and its slant height is 10 cm. Find its capacity and total surface area. [take π = 22/7]
Solution:
R = 33 cm, r = 27 cm and slant height l = 10 cm
Height, h2 = l2 – (R – r)2 = 100 – 36 = 64, so h = 8 cm
Capacity = 1/3 πh(R2 + r2 + Rr)
= 1/3 × 22/7 × 8 × (332 + 272 + 33 × 27)
= 1/3 × 22/7 × 8 × 2709
= 22704
Capacity of the frustum is 22704 cm3.
Total surface area = π[R2 + r2 + (R + r)l]
= 22/7 × (1089 + 729 + 600)
= 7599.43
Total surface area is 7599.43 cm2 (approx).
Question 7.
A bucket is in the form of a frustum of a cone. Its depth is 15 cm and the diameters of the top and the bottom are 56 cm and 42 cm respectively. Find how many litres of water the bucket can hold. [take π = 22/7]
Solution:
Depth of the bucket = height of frustum = h = 15 cm
Diameter of top of bucket = 56 cm
Radius of top = R = 56/2 = 28 cm
Diameter of bottom of bucket = 42 cm
Radius of bottom = r = 42/2 = 21 cm
Volume of frustum of cone = 1/3π h(R2 + r2 + Rr) cm3
= 1/3 x 22/7 x 15(282 + 212 + 28×21)
= 22 x 5 x 259
= 28490
Since Volume of water bucket can hold = volume of bucket which is in form of frustum
So, volume of water bucket can hold is 28490 cm3 or = 28.49 litres
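Question 7's capacity computation, sketched in Python (illustrative only; exact fractions keep π = 22/7 rounding-free):

```python
from fractions import Fraction

# Bucket as a frustum: top radius 28 cm, bottom radius 21 cm, depth 15 cm.
PI = Fraction(22, 7)
R, r, h = 28, 21, 15
V = PI * h * (R * R + r * r + R * r) / 3   # volume in cm^3
litres = float(V) / 1000                   # 1000 cm^3 = 1 litre
print(float(V), litres)                    # 28490.0 28.49
```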
Question 8: A bucket made up of a metal sheet is in the form of a frustum of a cone of height 16 cm and radii of its lower and upper ends are 8 cm and 20 cm respectively. Find the cost of the bucket if the cost of metal sheet used is Rs. 15 per 100 cm2. [use π = 3.14]
Solution:
Radius of lower circular end = r = 8 cm
Radius of upper circular end = R = 20 cm
Height of container frustum = h = 16 cm
Cost of 100 cm2 metal sheet = rs. 15
Cost of 1 cm2 metal sheet = 15/100 = Rs. 0.15
Slant height, l2 = (R-r)2 + h2
= (20-8)2 + 162
= 144 + 256
= 400
or l = 20 cm
Now,
Area of metal sheet used = (Total surface area of frustum)- (Area of upper circle) …(1)
Area of upper circle = πR2
Total surface area of frustum = πr2 + πR2 + π(R + r)l cm2
(1)⇨
Area of metal sheet used = (πr2 + πR2 + π(R + r)l) – πR2
= πr2 + π(R + r)l
= π(82 + (20 + 8)20)
= 1959.36 cm2
Again,
Cost of 1959.36 cm2 metal sheet = 1959.36 × cost of 1 cm2 metal sheet
= 1959.36 × Rs. 0.15
= Rs.293.904
Question 9: A bucket made up of a metal sheet is in the form of frustum of a cone. Its depth is 24 cm and the diameters of the top and bottom are 30 cm and 10 cm respectively. Find the cost of milk which can completely fill the bucket at the rate of Rs. 20 per litre and the cost of metal sheet used if it costs Rs. 10 per 100 cm2. [use π = 3.14]
Solution:
Let r = 5 cm, R = 15 cm and h = 24 cm
Volume of frustum = 1/3 πh(R2 + r2 + Rr)
= 1/3 × 3.14 × 24 × (152 + 52 + 15 × 5)
= 3.14 × 8 × 325
= 8164
Volume is 8164 cm3 = 8.164 litres
Cost of milk = Rs. 20 × 8.164 = Rs. 163.28
Slant height, l2 = (R – r)2 + h2 = 100 + 576 = 676, so l = 26 cm
Area of metal sheet used = π[(R + r)l + r2]
= 3.14 × [(15 + 5) × 26 + 52]
= 3.14 × 545
= 1711.3 cm2
Cost of metal sheet = Rs. (10/100) × 1711.3 = Rs. 171.13
Question 10: A container in the shape of a frustum of a cone having diameters of its two circular faces as 35 cm and 30 cm and vertical height 14 cm, is completely filled with oil. If each cm3 of oil has mass 1.2 g, then find the cost of oil in the container if it costs Rs. 40 per kg.
Solution:
Diameter of top of container = 35 cm
Radius of top = R = 35/2 = 17.5 cm
Diameter of bottom of container = 30 cm
Radius of bottom = r = 30/2 = 15 cm
1 cm3 of oil = 1.2g of oil, so, Cost of 1 kg oil = Rs. 40
Height of frustum = h = 14 cm
Volume of frustum of cone = 1/3π h(R2 + r2 + Rr) cm3
Volume of oil in container = 1/3 x 22/7 x 14(17.52 + 152 + 17.5 x 15)
= 22/3 × 2 × 793.75
=34925/3
Volume of oil in container = 11641.667 cm3 or 11641.667 × 1.2 g = 13970.0004 g or 13.970 kg
(As 1000 g = 1 kg)
Cost of 13.970 kg oil = Rs. 40 × 13.970 = Rs. 558.80 (approx)
## Exercise 17D
Question 1: A river 1.5 m deep and 36 m wide is flowing at the rate of 3.5 km/hr. Find the amount of water (in cubic metres) that runs into the sea per minute.
Solution:
Depth of the river = 1.5 m
Width of the river = 36 m
Using units:
1 hour = 60 minutes
1 km = 1000 m
Flow rate of river = 3.5 km/hr
= (3.5 × 1000)/60 m/min
= 350/6 m/min
The amount of water that runs into sea per minute =
350/6 × 1.5 × 36 = 350 × 1.5 × 6 = 3150
The amount of water that runs into sea per minute is 3150 m3/min.
Question 2: The volume of a cube is 729 cm3. Find its surface area.
Solution:
Volume of a cube is 729 cm3 (given)
we know, Volume of the cube = (side)3
(side)3 = 729
or Side = 9
Each side measure of a cube is 9 cm.
Now,
Total surface area of the cube = 6(side)2
= 6 × 92
= 6 × 81
= 486
Total surface area of the cube is 486 cm2.
Question 3: How many cubes of 10 cm edge can be put in a cubical box of 1 m edge?
Solution:
Let ‘a’ be the edge of the cubical box, so a = 1 m = 100 cm
Volume of cube of 100 cm edge = (side)3
= (100)3
= 1000000 cm3
Volume of cubes of 10cm edge = 103 = 1000 cm3
Now,
Number of required cubes = (Volume of cube of 100 cm edge)/(Volume of cubes of 10cm edge)
= 1000000/1000
= 1000
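Question 3 is a pure ratio of volumes once both edges are in the same unit; a one-step Python sketch (illustrative only):

```python
# Cubical box of edge 1 m = 100 cm filled with cubes of edge 10 cm.
box_edge = 100
small_edge = 10
n = box_edge ** 3 // small_edge ** 3  # ratio of the two volumes
print(n)                              # 1000
```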
Question 4: Three cubes of iron whose edges are 6 cm, 8 cm and 10 cm, respectively are melted and formed into a single cube. Find the edge of the new cube formed.
Solution:
Edge of the first cube = 6 cm
Volume of the first cube = (side)3 = (6)3 cm3
Edge of the second cube = 8 cm
Volume of the second cube = (side)3 = (8)3 cm3
Edge of the third cube = 10 cm
Volume of the third cube = (side)3 = (10)3 cm3
Let “a” be side edge of the new formed cube.
Volume of the formed cube = Volume of the First cube + Volume of the Second cube + Volume of the Third Cube
a3 = 63 + 83 + 103
= 216 + 512 + 1000
= 1728
or a = 12
Edge of the new cube is 12 cm.
Question 5: Five identical cubes, each of edge 5 cm, are placed adjacent to each other. Find the volume of the resulting cuboid.
Solution:
Edge of the given Cube = 5 cm
When 5 identical cubes are placed adjacent to each other, the length of cuboid formed is 5 x 5 = 25 cm
Now, Volume of the resulting cuboid = lbh
= 25 x 5 x 5
= 625 cm3
Question 6: The volumes of two cubes are in the ratio 8 : 27. Find the ratio of their surface areas.
Solution:
Ratio of volumes of the two cubes = 8:27
Let volumes of the two cubes be 8x and 27x.
Let a and b be the sides of the first cube and second cube respectively.
Volume of cube = (side)3
8x = a3 and 27x = b3
So a3 : b3 = 8 : 27, which gives a : b = 2 : 3
Surface area of cube = 6(side)2
Ratio of surface areas = 6a2 : 6b2 = a2 : b2 = 22 : 32 = 4 : 9
The required ratio is 4:9.
Question 7: The volume of a right circular cylinder with its height equal to the radius is 25 1/7 cm3. Find the height of the cylinder.
Solution:
Volume of the right circular Cylinder = 176/7 cm3
Height of the right circular cylinder = Radius of the right circular cylinder
⇨ h = r
We know, Volume of the right circular Cylinder = πr2h
πr2h = 176/7
22/7 × h2 × h = 176/7
h3 = 8
or h = 2 cm
Height of the cylinder is 2 cm.
Question 8: The ratio between the radius of the base and the height of a cylinder is 2 : 3. If the volume of the cylinder is 12936 cm3, then find the radius of the base of the cylinder.
Solution:
Volume of the cylinder = 12936 cm3
Ratio of the base and the height of a cylinder is 2:3
Let 2x be the radius and 3x be the height.
Volume of the cylinder = 12936 cm3
πr2h = 12936
22/7 x (2x)2(3x) = 12936
x3 = 343
or x = 7
Radius of the base is 14 cm.
Question 9: The radii of two cylinders are in the ratio of 2 : 3 and their heights are in the ratio of 5 : 3. Find the ratio of their volumes.
Solution:
The radii of two cylinders are in the ratio of 2 : 3 and their heights are in the ratio of 5 : 3.
Let 2r and 3r be the radii and 5h and 3h be the heights.
Ratio of volumes = π(2r)2(5h) : π(3r)2(3h)
= (4 × 5)/(9 × 3)
= 20/27
Ratio of their volumes is 20:27
Question 10: 66 cubic cm of silver is drawn into a wire 1 mm in diameter. Calculate the length of the wire in metres.
Solution:
Diameter of the wire = 1 mm
Radius of wire = r = 1/2 mm = 0.5 mm = 0.05 cm
Let the length of the wire be h
Volume of the wire = 66 cm3
πr2h = 66
22/7 x 0.05 × 0.05 × h = 66
h = (66x7x400)/22
h = 8400
Length of the wire is 8400 cm or 84 m.
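Question 10 again rearranges the cylinder-volume formula; a Python sketch (illustrative only; π = 22/7 as in the text):

```python
# 66 cm^3 of silver drawn into a wire of diameter 1 mm (radius 0.05 cm).
PI = 22 / 7
r = 0.05                              # wire radius in cm
h = 66 / (PI * r * r)                 # from V = pi*r^2*h
print(h, h / 100)                     # ≈ 8400 cm, i.e. 84 m
```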
## R S Aggarwal Solutions for Class 10 Maths Chapter 17 Volume and Surface Area of Solids
In this chapter students will study important concepts on Volume and Surface Area of Solids as listed below:
• Volume and Surface Area of Solids Introduction
Solids can be defined as objects having definite shape and size.
• Volume and Surface Area of Cuboid
• Volume and Surface Area of Cube
• Volume and Surface Area of the cylinder and hollow cylinder
• Volume and Surface Area of cone
• Volume and Surface Area of sphere and hemisphere
• Volume and Surface Area of a combination of solids
• Volume and Surface Area of a frustum of a cone
• Conversion of solid from one shape to another and mixed problems
### Key Features of RS Aggarwal Solutions for Class 10 Maths Chapter 17 Volume and Surface Area of Solids
1. R S Aggarwal solutions is a set of problems based on Volume and surface area of solid figures – sphere, cube, rectangular solid with and without top, cylinder, and cone.
2. The combination of diagrams and corresponding formulas will help students gain a thorough understanding of these concepts.
3. Easy for quick revision.
4. Helps students to solve complex problems at their own pace.
# Construct tensor with complicated symmetries
I have a tensor with following symmetries
Clear[G]
G[i_, j_, k_, l_] := G[3 - i, 3 - j, 3 - k, 3 - l]
G[i_, j_, k_, l_] := G[j, i, l, k]
G[i_, j_, k_, l_] := Conjugate[G[k, l, i, j]]
where $$i,\,j,\,k,\,l\in\{1,\,2\}$$. It is fully specified by
G[1, 1, 1, 1] = r[1];
G[1, 1, 2, 2] = r[2];
G[1, 2, 1, 2] = r[3];
G[1, 2, 2, 1] = r[4];
G[1, 1, 1, 2] = z;
where $$r\in\mathbb{R}$$ and $$z\in\mathbb{C}$$. I would like to specify the whole tensor by writing
Array[G, {2, 2, 2, 2}]
This is obviously not possible due to infinite recursion. One can type all the elements manually. But is there an automatic way? Relation 2 can be specified with the help of SymmetrizedArray, but I do not know how to specify 1 and 3.
It's just a matter of ordering your definitions appropriately. Start by defining the simple symmetries:
Clear[G];
G[i_, j_, k_, l_] /; ! OrderedQ[{{i, j, k, l}, {j, i, l, k}}] := G[j, i, l, k];
G[i_, j_, k_, l_] /; ! OrderedQ[{{i, j, k, l}, {k, l, i, j}}] := Conjugate[G[k, l, i, j]];
Here are your (minimal) specification of the elements:
G[1, 1, 1, 1] = r[1];
G[1, 1, 2, 2] = r[2];
G[1, 2, 1, 2] = r[3];
G[1, 2, 2, 1] = r[4];
G[1, 1, 1, 2] = z;
Finally, for the cases that do not match, subtract all indices from 3:
G[i_, j_, k_, l_] := G[3 - i, 3 - j, 3 - k, 3 - l]
With these definitions in that order, you get what you are looking for:
Array[G, {2, 2, 2, 2}]
(*
{{{{r[1], z}, {z, r[2]}},
{{Conjugate[z], r[3]}, {r[4], Conjugate[z]}}},
{{{Conjugate[z], r[4]}, {r[3], Conjugate[z]}},
{{Conjugate[r[2]], z}, {z, r[1]}}}}
*)
• Nice, however, r[2] is real and in the final result there should not appear Conjugate[r[2]]. Dec 30, 2021 at 8:50
The problem of recursion happens because the order of evaluation in Mathematica is inherently depth-first. Below is my very very hacky solution – some sort of a random-walk evaluation order. Obviously, one could (and someone surely did it already) write a proper breadth-first evaluation, but I think this random walk will do just fine for small enough tensors.
Clear[G];
ind = {{1, 1, 1, 1}, {1, 1, 2, 2}, {1, 2, 1, 2},
{1, 2, 2, 1}, {1, 1, 1, 2}};
r /: Conjugate[r[a_]] = r[a];
G[1, 1, 1, 1] = r[1];
G[1, 1, 2, 2] = r[2];
G[1, 2, 1, 2] = r[3];
G[1, 2, 2, 1] = r[4];
G[1, 1, 1, 2] = z;
G[i_, j_, k_, l_] := G[3 - i, 3 - j, 3 - k, 3 - l] /; {3 - i, 3 - j, 3 - k, 3 - l} \[Element] ind
G[i_, j_, k_, l_] := G[j, i, l, k] /; {j, i, l, k} \[Element] ind
G[i_, j_, k_, l_] := Conjugate[G[k, l, i, j]] /; {k, l, i, j} \[Element] ind
G[i_, j_, k_, l_] := ReleaseHold@RandomChoice[{
Hold@G[3 - i, 3 - j, 3 - k, 3 - l],
Hold@G[j, i, l, k],
Hold@Conjugate[G[k, l, i, j]]
}]
Array[G, {2, 2, 2, 2}] // Refine[#, r[_] \[Element] Reals] &
(* {{{{r[1], z}, {z, r[2]}}, {{Conjugate[z], r[3]}, {r[4],
Conjugate[z]}}}, {{{Conjugate[z], r[4]}, {r[3],
Conjugate[z]}}, {{r[2], z}, {z, r[1]}}}} *)
• Nice, however, r[2] is real and in the final result there should not appear Conjugate[r[2]]. Dec 30, 2021 at 8:50
• @yarchik, oh, right, I fixed it. Dec 30, 2021 at 10:02
• This is one possibility. However, I believe that the final result could be obtained without invoking the assumption of being real, by means of the two first rules. This brings me to the point: rule with Conjugate should only be used when two others bring no simplification. Dec 30, 2021 at 10:13
# Random Samples
Science 13 May 2005:
Vol. 308, Issue 5724, pp. 948
1. # Cultivating the Third Eye
The zoology laboratory of Dungar College in the small town of Bikaner in the Indian state of Rajasthan has some strange inmates: more than 50 three-eyed frogs. The amphibians have confirmed a long-held suspicion of developmental biologists that pineal glands retain the ability to respond to light and even to form into eyes.
Zoologist Om Prakash Jangir and his colleagues earlier found that if they removed tadpoles' eyes and raised the animals in a medium enriched with vitamin A, a new eye developed within 10 days over the site of the pineal gland. The researchers then transplanted tadpole pineal glands between the eyes of month-old frogs. With the help of some vitamin A, most of the amphibians developed third eyes within 15 days, the scientists report in the May issue of the Indian Journal of Experimental Biology.
“In lower vertebrates, the pineal organ had a visual role which got lost during evolution. Our experiments show that this vestigial organ can be activated in vertebrates,” says Jangir. Both the eyes and the pineal organ depend on similar developmental signals in the embryo and express the same homeobox gene, he says. Ramesh Ramachandra Bhonde of the National Center for Cell Science in Pune calls the achievement “an important milestone” that contributes to the value of the pineal gland as a model in studies of both evolution and development.
2. # Mating for Autism?
If cases of autism are on the increase, as some believe, here's one provocative explanation: Blame the rise on marriages between like-minded people, whom psychologist Simon Baron-Cohen of Cambridge University in the U.K. calls “systemizers.”
Baron-Cohen argues that autism and related conditions like Asperger's are manifestations of what he calls the “extreme male brain”: one with weak social skills and a strong tendency to “systemize,” or think according to rules and laws. In a study of 1000 U.K. families, he has reported that the fathers as well as the grandfathers of children with autism spectrum conditions are more likely to work in professions such as engineering. And the mothers are also likely to be systemizers “with male-typical interests,” he says.
Baron-Cohen, whose theory is in press at the journal Progress in Neuropsycho-pharmacology and Biological Psychiatry, says he and colleagues are performing genetic studies, collecting subjects, and conducting population surveys in systemizer-heavy areas, such as Silicon Valley, to test the idea that techies marrying each other is raising autism rates.
Some balk at the idea. Psychologist Elizabeth Spelke of Massachusetts Institute of Technology says there's no good evidence for an “inborn, male predisposition for systemizing.” But psychiatrist Herbert Schreier of Children's Hospital in Oakland, California, believes the intermarriage of techies “probably does account for why you have pockets of high autism around Stanford and MIT.” Drawing on his own practice, he adds that fathers of children with learning disabilities have a disproportionate tendency to be engineers or computer scientists.
3. # Counting by Gates
Microsoft Chair Bill Gates may know how to add up the profits of his software giant, but his math is a bit shaky on the labor power front. Microsoft has many vacant positions because “there just aren't as many [U.S.] graduates with a computer science background,” Gates lamented last month at a forum on innovation and education at the Library of Congress in Washington, D.C. The shortage of grads “creates a dilemma for us, in terms of how we get our work done,” noted the world's richest man.
But the data tell a different story. A newly published survey by the Computing Research Association (CRA) of top university departments shows that the number of U.S. bachelor's degrees awarded in computer science rose by 85% from 1998 to 2004; a similar rise has occurred in doctoral programs since 1999 (see graph, above). The annual number of new undergrad majors has admittedly fallen off since the dot.com bust in 2000, notes CRA's Jay Vegso. “But these numbers have always been cyclical,” he says. “I don't see any reason to panic.”
4. # Egyptian Beauty
Last week, archaeologists in Cairo unveiled a well-preserved, newly discovered 2300-year-old mummy—which Egyptian Antiquities chief Zahi Hawass says “may be the most beautiful mummy ever found in Egypt.” The unidentified figure has a golden mask and is covered in brilliantly colored images of gods and goddesses as well as illustrations of the mummification process. It was found 2 months ago in the Saqqara pyramids complex, 20 kilometers south of Cairo, in the necropolis of King Teti. Scientists plan to do computed tomography studies of the mummy before it goes on display.
Norway's Nobels. Sweden's Nobel Prizes just got a little neighborly competition. Last week, Norwegian-born philanthropist Fred Kavli announced three $1 million prizes for research in astrophysics, neuroscience, and nanotechnology. The Norwegian Academy of Science and Letters will make the biennial selections beginning in 2008. The fields were chosen because they are ripe for important breakthroughs, says David Auston, president of the Kavli Foundation, which supports 10 research foundations worldwide (Science, 21 January, p. 340), and may be revised over the years. Kavli, who made his fortune selling sensors for automobiles and aircraft, wants the new prizes to recognize “more daring” discoveries than the Nobels. Auston says: “If a major development occurred in the last 5 to 10 years, we want to acknowledge that.”
6. # Deaths
Hazard-meister. After a career spent helping protect fellow Filipinos from natural hazards, the former head of the Philippine Institute of Volcanology and Seismology has fallen victim to a more mundane hazard of his profession. On 28 April, Raymundo Punongbayan, 68, died in a helicopter crash that also claimed the lives of four staffers from the institute that he elevated to international prominence. The group was returning from a landslide hazard survey. In 1991, Punongbayan won acclaim for an effort that moved 80,000 people from harm's way before Mount Pinatubo erupted. “Ray brought Philippine natural-hazard efforts into the modern world,” says volcanologist Christopher Newhall of the U.S. Geological Survey in Seattle, Washington. “He was a good scientist and a masterful public relations person and politician.”
Integration pioneer. Social psychologist Kenneth Clark, whose work on the negative effects of school segregation was instrumental in the historic 1954 Supreme Court ruling outlawing the practice, died on 1 May at his home in Hastings-on-Hudson, New York. He was 90.
Clark belonged to a pioneering generation of African-American scholars. He was the first to earn a doctorate in psychology from Columbia University (his wife and collaborator Mamie Phipps Clark was second); the first to become tenured in the City College system of New York; the first elected to the New York State Board of Regents; and the first black to be president of the American Psychological Association (APA). Clark and his wife are remembered for a famous study using black and white dolls that showed that children in a segregated school thought the black dolls were bad. As APA president, he created a Board of Social and Ethical Responsibility that brought problems of social justice within psychology into greater prominence. “Clark was a deeply compassionate person committed to racial equality,” says psychologist George Albee, a former president of the association. “He was a quiet, scholarly man but persistent and unwavering.”

# Pioneers

Housekeeping fellowships. Nobelist Christiane Nüsslein-Volhard says she is tired of watching young women scientists struggle to balance family and career. So the developmental geneticist has launched an initiative to help pay for household help. The Christiane Nüsslein-Volhard Foundation will give roughly \$600 a month to a handful of top-notch, early-career scientists who are mothers. Nüsslein-Volhard, director of the Max Planck Institute for Developmental Biology in Tübingen, Germany, says the money is not primarily intended for daycare but to pay someone to help with cleaning and cooking.
It's maddening “when a top woman scientist can't make it to a seminar because she has to go home and do the laundry,” says Maria Leptin, a developmental biologist at the University of Cologne in Germany and a member of the foundation, which has so far raised over \$500,000. “She should concentrate on what is most important—doing her work and spending time with her family—and nothing else.” The first awards, lasting 1 to 3 years, will be made later this year and then annually.
# Sidelines
Loud and clear. Physicists can filibuster, too. That was the message from string theorist Ed Witten, who read from a particle physics textbook during a mock filibuster at Princeton University begun late last month to protest a Republican threat to eliminate the filibuster by changing the rules of the U.S. Senate. The student-organized protest was held outside a campus building named for its donors, the family of Senate Majority Leader Bill Frist (R-TN).
Listening to physics—Nobelist Frank Wilczek joined the protest—was a welcome change from a steady diet of novels and phone books offered up by other speakers, says Princeton sophomore Asheesh Siddique, one of the organizers: “We thought it was very cool.”
|
# How do I send a quantum circuit to IBM for execution?
I have, let's say, the following quantum circuit:
The QASM code of this quantum scheme has the following form:
OPENQASM 2.0;
include "qelib1.inc";
qreg q[5];
creg c[5];
reset q[0];
reset q[1];
reset q[2];
reset q[3];
reset q[4];
x q[0];
x q[1];
x q[2];
x q[3];
x q[4];
measure q[0] -> c[0];
measure q[1] -> c[1];
measure q[2] -> c[2];
measure q[3] -> c[3];
measure q[4] -> c[4];
I want to send the QASM code to IBM and get the result. In our case, I want to get [1, 1, 1, 1, 1].
I don't understand which endpoints I should send my requests to (probably POST requests) and what I should pass in the message body (in the JSON object).
I was able to execute a GET request to log in and get my access token (access ID). But I don't know what to do next.
There is no complete documentation for the API. Here is what is available now.
First step (logging in to get an access token):
HTTP request method: POST
JSON request:
{
  "apiToken": "YOUR_API_TOKEN"
}
JSON response:
{
  "id": "YOUR_ACCESS_ID_TOKEN",
  "created": "YOUR_ACCESS_ID_TOKEN_CREATIONDATE",
  "userId": "YOUR_USER_ID"
}
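For this first step, the request body can be assembled with Python's standard library. The endpoint URL below is an assumption (the exact path comes from the API documentation linked in the question), so treat this as a sketch rather than a verified call:

```python
import json
from urllib import request

# Hypothetical login endpoint -- check the linked API documentation for the real path.
LOGIN_URL = "https://api.quantum-computing.ibm.com/api/users/loginWithToken"

def build_login_request(api_token):
    # The body mirrors the JSON request shown above: {"apiToken": "..."}.
    body = json.dumps({"apiToken": api_token}).encode("utf-8")
    return request.Request(
        LOGIN_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_login_request("YOUR_API_TOKEN")
# urllib.request.urlopen(req) would then return the JSON response shown above.
```

The "id" field of that response is the access token used to authorize the job-submission calls in the second step.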
Second step: I can't figure out which requests to send next to execute my QASM code and get the result [1, 1, 1, 1, 1].
• Is using Qiskit an option? If yes sending jobs is a piece of cake :) Jun 2 at 15:43
• @Cryoris This is really easy to do with Qiskit. But in this study, I need to use the IBM API, which I gave a link to in the question text. Jun 2 at 15:49
The only API documentation available is the one you linked to. Its documentation page describes the steps needed to submit a job (although not in great detail). However, you'll first need to convert your QASM string to a Qobj, which is the format the API accepts. Assuming Qiskit is allowed for this part, you can first use QuantumCircuit.from_qasm_str() to convert the string to a QuantumCircuit, then use assemble() to convert it to a Qobj. If you can't use Qiskit, you'll have to construct the Qobj manually (or using another package that's allowed), following its schema.
1. POST to the /Network/{hubName}/Groups/{groupName}/Projects/{projectName}/Jobs endpoint to create a remote job
3. POST to /Network/{hubName}/Groups/{groupName}/Projects/{projectName}/Jobs/{jobId}/jobDataUploaded to finish the job submit.
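If Qiskit cannot be used, the Qobj has to be written by hand against its schema. The dictionary below is a rough sketch of such a payload for the 5-qubit circuit in the question; the field names follow the published Qobj schema, but the exact required structure should be checked against the schema itself rather than taken from here:

```python
# Hand-written, Qobj-style payload for the reset + X + measure circuit above.
# This is a sketch: validate it against the official Qobj schema before submitting.
n = 5
instructions = (
    [{"name": "reset", "qubits": [q]} for q in range(n)]
    + [{"name": "x", "qubits": [q]} for q in range(n)]
    + [{"name": "measure", "qubits": [q], "memory": [q]} for q in range(n)]
)

qobj = {
    "qobj_id": "example-qobj",      # arbitrary identifier
    "type": "QASM",
    "schema_version": "1.3.0",
    "header": {},
    "config": {"shots": 1024, "memory_slots": n},
    "experiments": [{
        "header": {"n_qubits": n, "memory_slots": n},
        "config": {},
        "instructions": instructions,
    }],
}
```

With every qubit flipped from |0⟩ by an X gate, each shot should read out as the [1, 1, 1, 1, 1] result the question asks for.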
|
Email: [email protected]
ISSN 2562-2854 (print)
ISSN 2562-2862 (online)
# Category: 2019, Vol. 1, No. 1
### From the Editors
Maoan Han and Junling Ma
The publishers, Zhejiang Normal University, China, and Pacific Edilite Academic Inc, Canada, and the editorial board, are very proud to announce the inauguration of the Journal of Nonlinear Modeling and Analysis.
Mathematical modeling is a thriving field of research, with applications to engineering, computer science, physics, chemistry, earth science and geography, biology, medicine, public health, economics, management sciences, and many others. In fact, the list is too long to enumerate here. The research on modeling relies on, and drives the research of, almost all fields of mathematics and statistics, such as nonlinear analysis, differential equations and dynamical systems, optimization, operations research, probability theory, graph theory, combinatorics, topology, experimental design, data analysis, parameter estimation, and many others. As we know, models widely used in different applications may bear striking similarities, and can be studied with similar theories and methods.
There have been numerous journals on mathematical modeling and analysis, ranging from very theoretical to very applied aspects. However, there is a strong need for an easy-to-access journal for modelers and mathematicians to present their new models, powerful methods, and innovative analysis, so that researchers from different fields in both pure and applied mathematics can inspire and learn from each other. This journal aims to provide such a platform.
The editorial board welcomes quality contributions in original research, reviews, and communications. We invite the readers, authors, and reviewers to work closely with the editors, to help us fulfill our vision.
Sincerely,
Maoan Han and Junling Ma, Editors in Chief
On behalf of the editorial board
### Geometric properties and exact travelling wave solutions for the generalized Burger-Fisher equation and the Sharma-Tasso-Olver equation
Jibin Li
In this paper, we study the dynamical behavior and exact parametric representations of the traveling wave solutions for the generalized Burger-Fisher equation and the Sharma-Tasso-Olver equation. Under different parametric conditions, exact monotonic and non-monotonic kink wave solutions, two-peak solitary wave solutions, periodic wave solutions, as well as unbounded traveling wave solutions are obtained.
### Periodic solutions of the Duffing differential equation revisited via the averaging theory
Rebiha Benterki and Jaume Llibre
We use three different results of the averaging theory of first order for studying the existence of new periodic solutions in the two Duffing differential equations $\ddot y+ a \sin y= b \sin t$ and $\ddot y+a y-c y^3=b\sin t$, where $a$, $b$ and $c$ are real parameters.
### Numerical method for homoclinic and heteroclinic orbits of neuron models
Bo Deng
A twisted heteroclinic cycle was proved to exist more than twenty-five years ago for the reaction-diffusion FitzHugh-Nagumo equations in their traveling wave moving frame. The result implies the existence of infinitely many traveling front waves and infinitely many traveling back waves for the system. However, efforts to numerically render the twisted cycle were not fruitful, for the main reason that such orbits are structurally unstable. Presented here is a bisectional search method for the primary types of traveling wave solutions for the type of bistable reaction-diffusion systems the FitzHugh-Nagumo equations represent. The algorithm converges at a geometric rate and the wave speed can in principle be approximated to significant precision. The method is then applied to a recently obtained axon model, with the conclusion that the twisted heteroclinic cycle may be more of a theoretical artifact.
### Nonexistence of nonconstant positive steady states of a diffusive predator-prey model with fear effect
Shanshan Chen, Zonghao Liu and Junping Shi
In this paper, we investigate a diffusive predator-prey model with fear effect. It is shown that, for the linear predator functional response case, the positive constant steady state is globally asymptotically stable if it exists. On the other hand, for the Holling type II predator functional response case, it is proved that there exist no nonconstant positive steady states for large conversion rate. Our results limit the parameters range where complex spatiotemporal pattern formation can occur.
### Dynamics of a predator-prey model with delay and fear effect
Weiwei Gao and Binxiang Dai
Recent manipulations on vertebrates showed that the fear of predators, induced in prey that perceive predation risk, can greatly reduce the prey's reproduction. It is also known that predator-prey systems with fear effect exhibit very rich dynamics. On the other hand, incorporating time delay into predator-prey models can also induce instability and oscillations via Hopf bifurcation. In this paper, we are interested in studying the combined effects of the fear effect and time delay on the dynamics of the classic Lotka-Volterra predator-prey model. It is shown that the time delay can cause the stable equilibrium to become unstable, while the fear effect has a stabilizing effect on the equilibrium. In particular, the model loses stability when the delay varies and then regains its stability when the fear effect is stronger. At last, by using the normal form theory and center manifold argument, we derive explicit formulas which determine the stability and direction of periodic solutions bifurcating from Hopf bifurcation. Numerical simulations are carried out to illustrate the mathematical conclusions.
### Bifurcation of a modified Leslie-Gower system with discrete and distributed delays
Zhongkai Guo, Haifeng Huo, Qiuyan Ren and Hong Xiang
A modified Leslie-Gower predator-prey system with discrete and distributed delays is introduced. By analyzing the associated characteristic equation, stability and local Hopf bifurcation of the model are studied. It is found that the positive equilibrium is asymptotically stable when $\tau$ is less than a critical value and unstable when $\tau$ is greater than this critical value and the system can also undergo Hopf bifurcation at the positive equilibrium when $\tau$ crosses this critical value. Furthermore, using the normal form theory and center manifold theorem, the formulae for determining the direction of periodic solutions bifurcating from positive equilibrium are derived. Some numerical simulations are also carried out to illustrate our results.
### The complete biorthogonal expansion theorem and its application to a class of rectangular plate equations
Jianbo Zhu and Xianlong Fu
In this paper, we first establish the separable Hamiltonian system of rectangular cantilever thin plate bending problems by choosing proper dual vectors. Then, using the characteristics of the off-diagonal infinite-dimensional Hamiltonian operator matrix, we derive the biorthogonal relationships of the eigenfunction systems, and based on them we further obtain the complete biorthogonal expansion theorem. Finally, applying this theorem, we obtain the general solutions of rectangular cantilever thin plate bending problems with two opposite edges slidingly supported.
|
Write a config.yml, write a CalculiX case input file, and run an adapted CalculiX executable.
## Layout of the YAML configuration file
The layout of the YAML configuration file, which should be named config.yml (default name), is explained by means of an example for an FSI simulation:
participants:
  Calculix:
    interfaces:
    - nodes-mesh: Calculix_Mesh
      patch: interface
      read-data: [Forces]
      write-data: [DisplacementDeltas]
precice-config-file: ../precice-config.xml
The adapter allows using several participants in one simulation (e.g. several instances of Calculix if several solid objects are taken into account). The name of the participant “Calculix” must match the specification of the participant on the command line when running the executable of “CCX” with the adapter being used (this is described later). Also, the name must be the same as the one used in the preCICE configuration file precice-config.xml.
One participant may have several FSI interfaces. Note that each interface specification starts with a dash.
For FSI simulations the mesh type of an interface is always “nodes-mesh”, i.e. the mesh is defined node-wise, not element-wise. The name of this mesh, “Calculix_Mesh”, must match the mesh name given in the preCICE configuration file.
For defining which nodes of the CalculiX domain belong to the FSI interface, a node set needs to be defined in the CalculiX input files. The name of this node set must match the name of the patch (here: “interface”).
In the current FSI example, the adapter reads forces from preCICE and feeds displacement deltas (not absolute displacements, but the change of the displacements relative to the last time step) to preCICE. This is defined with the keywords “read-data” and “write-data”, respectively. The names (here: “Forces” and “DisplacementDeltas”) again need to match the specifications in the preCICE configuration file. In the current example, the coupled fluid solver expects displacement deltas instead of displacements. However, the adapter is capable of writing either type. Just use “write-data: [Displacements]” for absolute displacements rather than relative changes being transferred in each time step. Valid readData keywords in CalculiX are:
* Forces
* Displacements
* Temperature
* Heat-Flux
* Sink-Temperature
* Heat-Transfer-Coefficient
Valid writeData keywords are:
* Forces
* Displacements
* DisplacementDeltas
* Temperature
* Heat-Flux
* Sink-Temperature
* Heat-Transfer-Coefficient
From CalculiX version 2.15, additional writeData keywords are available:
* Positions
* Velocities
Note that the square brackets imply that several read- and write-data types can be used on a single interface. This is not needed for FSI simulations (but it is for CHT simulations). Lastly, the “precice-config-file” needs to be specified, including its location. In this example, the file is called precice-config.xml and is located one directory above the folder in which the YAML configuration file lies.
## CalculiX case input file
CalculiX is designed to be compatible with the Abaqus file format. Here is an example of a CalculiX input file:
*INCLUDE, INPUT=all.msh
*INCLUDE, INPUT=fix1.nam
*INCLUDE, INPUT=fix2.nam
*INCLUDE, INPUT=fix3.nam
*INCLUDE, INPUT=interface.nam
*MATERIAL, Name=EL
*ELASTIC
100000000, 0.3
*DENSITY
10000.0
*SOLID SECTION, Elset=Eall, Material=EL
*STEP, NLGEOM, INC=1000000
*DYNAMIC
0.01, 5.0
*BOUNDARY
Nfix1, 3, 3, 0
Nfix2, 1, 1, 0
Nfix2, 3, 3, 0
Nfix3, 1, 3, 0
*CLOAD
Ninterface, 1, 0.0
Ninterface, 2, 0.0
Ninterface, 3, 0.0
*NODE FILE
U
*EL FILE
S, E
*END STEP
The adapter internally uses the CalculiX data format for point forces to apply the FSI forces at the coupling interface. This data structure is only initialized for those nodes which are loaded at the beginning of a CalculiX analysis step via the input file. Thus, it is necessary to load all nodes of the node set that defines the FSI interface in CalculiX in each spatial direction. Referring to the above example, the nodes of the set “interface” are loaded via the “*CLOAD” keyword (note that in CalculiX a node set name always begins with an “N” followed by the actual name of the set, here “interface”). However, the values of these initial forces can (and should) be set to zero, such that the simulation result is not affected.
CalculiX CCX offers both a geometrically linear as well as a geometrically non-linear solver. Both are coupled via the adapter. The keyword “NLGEOM” (as shown in the example) needs to be included in the CalculiX case input file in order to select the geometrically non-linear solver. It is also automatically triggered if material non-linearities are included in the analysis. In case the keyword “NLGEOM” does not appear in the CalculiX case input file and the chosen materials are linear, the geometrically linear CalculiX solver is used. In any case, for FSI simulations via preCICE the keyword “DYNAMIC” (enabling a dynamic computation) must appear in the CalculiX input file.
More input files that you may find in the CalculiX tutorial cases:
• <name>.inp: The main case configuration file. Through this, several other files are included.
• <name>.msh: The mesh file.
• <name>.flm: Films
• <name>.nam: Names, e.g. indices of boundary nodes
• <name>.sur: Surfaces
• <name>.dfl: DFlux
## Running the adapted calculiX executable
Running the adapted executable is pretty similar to running the original CalculiX CCX solver. The syntax is as follows:
ccx_preCICE -i [CalculiX input file] -precice-participant [participant name]
For example:
ccx_preCICE -i flap -precice-participant Calculix
The input file for this example would be flap.inp. Note that the suffix “.inp” needs to be omitted on the command line. The flag “-precice-participant” triggers the usage of the preCICE adapter. If the flag is not used, the original unmodified solver of CCX is executed. Therefore, the new executable “ccx_preCICE” can be used both for coupled preCICE simulations and CalculiX-only runs. Note that as mentioned above, the participant name used on the command line must match the name given in the YAML configuration file and the preCICE configuration file.
### Supported elements
The preCICE CalculiX adapter supports solid and shell elements. It can be used with both linear and quadratic tetrahedral (C3D4 and C3D10) and hexahedral (C3D8 and C3D20) elements. For shell elements, currently the triangular S3 and S6 elements are supported. Note that nearest-projection mapping is restricted to tetrahedral elements. If a quasi 2D-3D case is set up (a single element in the out-of-plane direction), only linear elements are supported.
### Nearest-projection mapping
In order to use nearest-projection mapping, a few additional changes are required. The first is that the interface surface file (.sur) must be added to the CalculiX input file. An example of the addition to the input file is given below:
*INCLUDE, INPUT=all.msh
*INCLUDE, INPUT=fix1.nam
*INCLUDE, INPUT=fix2.nam
*INCLUDE, INPUT=fix3.nam
*INCLUDE, INPUT=interface.nam
*INCLUDE, INPUT=interface.sur
*MATERIAL, Name=EL
This surface file is generated during the mesh generation process. The second addition is to the config.yml. In order for the adapter to know that the surface mesh must be read, the line
- nodes-mesh
must be changed to
- nodes-mesh-with-connectivity
Note that an error will only occur if nodes-mesh-with-connectivity is specified without a .sur file. The calculix-adapter with nearest-projection mapping only supports tetrahedral elements (C3D4 and C3D10) as preCICE only works with surface triangles for nearest-projection mapping.
|
Are there strictly dominated strategies?
Two players simultaneously announce a prime number less than 20. Denoting by $$p_{i}$$ the number announced by player $$i$$, the payoffs are:

- If $$p_{1}+p_{2}<14$$, each player receives as payment the value of $$p_{i}+1$$.
- If $$p_{1}+p_{2} \geq 14$$ and $$p_{i}<p_{j}$$, player $$i$$ receives $$p_{j}$$ and player $$j$$ receives $$20-p_{i}$$.
- If $$p_{1}+p_{2} \geq 14$$ and $$p_{i}=p_{j}$$, each player receives a payoff of $$p_{i}$$.
Q1: Is my game in normal form correct?
$$\begin{array}{|c|c|c|c|} \hline & 2 & 3&5&7&11&13&17&19 \\ \hline 2&3,3& 3,4&3,6 & 3,8&3,12&13,18&17,18&19,18\\ \hline 3 & 4,3 & 4,4& 4,6&4,8&11,17&13,17&17,17&19,17\\ \hline 5& 6,3 & 6,4& 6,6&6,8&11,15&13,15&17,15&19,15\\ \hline 7& 8,3 & 8,4& 8,6&7,7&11,13&13,13&17,13&19,13\\ \hline 11& 12,3 & 17,11& 15,11&13,11&11,11&13,9&17,9&19,9\\ \hline 13& 18,13 & 17,13& 15,13&13,13&9,13&13,13&17,7&19,7\\ \hline 17& 18,17 & 17,17& 15,17&13,17&9,17&7,17&17,17&19,3\\ \hline 19& 18,19 & 17,19& 15,19&13,19&9,19&7,19&3,19&19,19\\ \hline \end{array}$$
Q2: Are there strictly dominated strategies? From the table, I think not. Is that correct?
Thanks!
• The game looks correct (I only checked a few cells though). Based on the matrix, there is no strictly dominated strategy, mainly because of the last row/column. – Herr K. May 8 '19 at 15:26
I agree with Herr K., the payoff matrix looks right. Also, there are no strictly dominated strategies, because a strictly dominated strategy cannot be a best response for any possible belief, yet if a player believes that the other player is choosing 19, then every strategy (both pure and mixed) is a best response.
Q1
Your table seems to be correct. Here is a quick Python implementation for generating the payoffs:
def payoff_calculator(x, y):
    # Returns (player 1 payoff, player 2 payoff) when player 1 announces x and player 2 announces y.
    if x + y < 14:
        return (x + 1, y + 1)
    if x == y:
        return (x, y)
    return (y, 20 - x) if x < y else (20 - y, x)

primes = [2, 3, 5, 7, 11, 13, 17, 19]
payoffs = [[payoff_calculator(i, j) for j in primes] for i in primes]
for row in payoffs:
    print(*row)
Result (rows indexed by player 1's announcement, matching the table):
(3, 3) (3, 4) (3, 6) (3, 8) (3, 12) (13, 18) (17, 18) (19, 18)
(4, 3) (4, 4) (4, 6) (4, 8) (11, 17) (13, 17) (17, 17) (19, 17)
(6, 3) (6, 4) (6, 6) (6, 8) (11, 15) (13, 15) (17, 15) (19, 15)
(8, 3) (8, 4) (8, 6) (7, 7) (11, 13) (13, 13) (17, 13) (19, 13)
(12, 3) (17, 11) (15, 11) (13, 11) (11, 11) (13, 9) (17, 9) (19, 9)
(18, 13) (17, 13) (15, 13) (13, 13) (9, 13) (13, 13) (17, 7) (19, 7)
(18, 17) (17, 17) (15, 17) (13, 17) (9, 17) (7, 17) (17, 17) (19, 3)
(18, 19) (17, 19) (15, 19) (13, 19) (9, 19) (7, 19) (3, 19) (19, 19)
Q2
You are right, there are no strictly dominated strategies here. This is because each action is a best response to some opponent action. For example, 2 is a best response to opponent moves 13, 17, and 19, 3 is a best response to 11, 13, 17, and 19, and so on. Similarly, there is no strictly dominant strategy. For example, 19 is a best response to 2, 3, 5, 7, and 19, but not to 11, 13, or 17.
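The no-domination claim is also easy to verify mechanically. The sketch below re-implements the payoff rule and checks every ordered pair of strategies (by symmetry of the game, checking player 1 suffices):

```python
def payoff(x, y):
    # (player 1 payoff, player 2 payoff) when player 1 announces x and player 2 announces y.
    if x + y < 14:
        return (x + 1, y + 1)
    if x == y:
        return (x, y)
    return (y, 20 - x) if x < y else (20 - y, x)

primes = [2, 3, 5, 7, 11, 13, 17, 19]

def strictly_dominates(a, b):
    # True if announcing a pays strictly more than announcing b against every opponent move.
    return all(payoff(a, y)[0] > payoff(b, y)[0] for y in primes)

dominated = [b for b in primes
             if any(strictly_dominates(a, b) for a in primes if a != b)]
print(dominated)  # -> []
```

The empty list confirms the argument above: since every strategy yields 19 against an opponent playing 19, no strategy can be strictly worse than another everywhere.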
|
# Research
My research is focused on variable selection methods in functional regression models.
“What is functional data?”
A bunch of curves.
“Huh?”
The major difference is that your usual flavor of statistics is about dealing with data points; with functional data, you now have data curves instead.
You’ve seen a scatter plot before, yes? In a scatter plot, each dot is basically an instantaneous snapshot of each individual data point. Let’s say you’re looking at weight vs. age. In this instance, every subject only has one data point for those things; e.g., at this very moment, I only have one weight and one age, and at this very moment, you only have one weight and one age.
Still with me? Good.
Imagine tracking something that continuously changes with time, space, frequency, or some other continuum. Each dot is still an instantaneous snapshot, but each subject has their own bunch of dots instead of just one. In our example above, we could look at weight vs. age over time—rather than just our weight and age right now. Now, since we’re tracking things over time, the individual points on their own aren’t necessarily interesting, but they can ideally be considered as having arisen from an underlying smooth curve or function of time, and that is what we are interested in. That’s functional data.
As a simple example, the equivalent of the iris or mtcars data set for illustrating functional data is the average daily temperature at 35 weather stations across Canada,1 shown in the plot below (keep me far away from any place named “Uranium City”):
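For a toy version of this idea (pure Python, made-up subjects rather than real weather stations), each subject gets its own smooth function of time, and the observed dots are just snapshots of that function on a shared grid:

```python
import math

# A shared observation grid on [0, 10] -- the "continuum" (here: time).
times = [t / 10 for t in range(101)]

def subject_curve(amplitude, phase):
    # One subject's underlying smooth function, observed at discrete snapshots.
    return [amplitude * math.sin(t + phase) for t in times]

# Three subjects, three curves: this collection of curves is the functional data.
subjects = {f"subject_{i}": subject_curve(1 + 0.2 * i, 0.5 * i) for i in range(3)}
```

In functional data analysis, the objects of study are the three curves themselves, not the 101 individual dots per subject.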
## Coursework
Below is a list (roughly ordered by recency) of pretty much every class I took in graduate school.2 No guarantees on me remembering anything prior to… last week, probably.
Just kidding.
Or maybe not.
You’ve been warned.
• Measurement Error & Statistical Inference
• Nonparametric Regression & Smoothing
• Big Data: A Statistical Perspective
• Applied Nonparametric Statistics
• Causal Inference
• Breakthroughs in Statistics
• Bayesian Inference & Analysis
• Computation for Statistical Research
• Applied Multivariate Statistical Analysis
• Applied Spatial Statistics
• Categorical Data Analysis
• Statistical Consulting
• Applied Least Squares
• Design of Experiments
• Theory of Sampling Applied to Survey Design
• Applied Longitudinal Data Analysis
• Experimental Statistics for Biological Sciences
• Linear Models & Variance Components
• Statistical Theory I-II
1. This is a subset of the CanadianWeather data set available in the fda package in R.↩︎
2. NC State has since changed around the course numbers and names, in case anybody went to look it up, couldn’t find it, and thinks I just made up stuff.↩︎
|
# Computing left derived functors from acyclic complexes (not resolutions!)
I am reading a paper where the following trick is used:
To compute the left derived functors $L_{i}FM$ of a right-exact functor $F$ on an object $M$ in a certain abelian category, the authors construct a complex (not a resolution!) of acyclic objects, ending in $M$, say $A_{\bullet} \to M \to 0$, such that the homology of this complex is acyclic, and this homology gets killed by $F$. Thus, they claim, the left-derived functors can be computed from this complex.
Why does this claim follow? It seems like it should be easy enough, but I can't seem to wrap my head around it.
-
Compare with a projective resolution $P_\bullet\to M\to 0$. By projectivity, we obtain (from the identity $M\to M$) a complex morphism $P_\bullet\to A_\bullet$, which induces $F(P_\bullet)\to F(A_\bullet)$. With a bit of diagram chasing you should find that $H_\bullet(F(P_\bullet))$ is the same as $H_\bullet(F(A_\bullet))$.
A bit more explicitly: We can build a resolution of complexes $$\begin{matrix} &\downarrow && \downarrow&&\downarrow\\ 0\leftarrow &A_2&\leftarrow&P_{2,1}&\leftarrow&P_{2,2}&\leftarrow\\ &\downarrow && \downarrow&&\downarrow\\ 0\leftarrow &A_1&\leftarrow&P_{1,1}&\leftarrow&P_{1,2}&\leftarrow\\ &\downarrow && \downarrow&&\downarrow\\ 0\leftarrow &M&\leftarrow &P_{0,1}&\leftarrow&P_{0,2}&\leftarrow\\ &\downarrow && \downarrow&&\downarrow\\ &0&&0&&0 \end{matrix}$$ i.e. the $P_{i,j}$ are projective and all rows are exact. The downarrows are found recursively using projectivity so that all squares commute: If all down maps are called $f$ and all left maps $g$, then $f\circ g\colon P_{i,j}\to P_{i-1,j-1}$ maps to the image of $g\colon P_{i-1,j}\to P_{i-1,j-1}$ because $g\circ(f\circ g)=f\circ g\circ g=0$, hence $f\circ g$ factors through $P_{i-1,j}$, thus giving the next $f\colon P_{i,j}\to P_{i-1,j}$. We can apply $F$ and take direct sums across diagonals, i.e. let $B_k=\bigoplus_{i+j=k} FP_{i,j}$. Then $d:=(-1)^if+g$ makes this a complex. What interests us here is that we can walk from the lower row to the left column by diagram chasing, thus finding that $H_\bullet(F(P_{0,\bullet}))=H_\bullet(F(A_\bullet))$. Indeed: Start with $x_0\in FP_{0,k}$ with $Fg(x_0)=0$. Then we find $y_1\in FP_{1,k}$ with $Ff(y_1)=x_0$. Since $Ff(Fg(y_1))=Fg(Ff(y_1))=0$, we find $y_2\in FP_{2,k-1}$ with $Ff(y_2)=Fg(y_1)$, and so on until we end up with a cycle in $A_k$. Convince yourself that the choices involved don't make a difference in the end (i.e. up to boundaries). Also, the chase can be performed just as well from the left column to the bottom row ...
There's also a small typo: $H_{\bullet}(F(A_{\bullet}))$. – Bart Rutgers Dec 30 '12 at 10:38
|
Algebra Level 3
Find the number of solution(s) of $$x$$ satisfying $$\log_{4}(x - 1) = \log_{2}(x - 3)$$.
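Not part of the original problem, but as a sanity check on the setup: using $$\log_{4}(x-1)=\tfrac{1}{2}\log_{2}(x-1)$$, the equation reduces to $$x-1=(x-3)^{2}$$, i.e. $$x^{2}-7x+10=0$$ with roots $$x=2$$ and $$x=5$$; only $$x=5$$ satisfies the domain requirement $$x>3$$. A quick numeric confirmation:

```python
import math

# Verify that x = 5 solves log_4(x - 1) = log_2(x - 3); x = 2 is outside the domain x > 3.
x = 5
lhs = math.log(x - 1, 4)   # log_4(4) = 1
rhs = math.log(x - 3, 2)   # log_2(2) = 1
```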
×
|
# Tag Info
5
Typically, this situation is handled by:

1. Using an "event function" or something to that effect to stop the integration when $P_{1} = P_{2}$. LSODA (the backend for scipy.integrate.odeint) does not have this capability, but other integrators, such as CVODE (part of SUNDIALS), do.
2. Reformulating the right-hand side so it's continuously differentiable when $P_{1} = P_{2}$.

5

Take a look at active subspaces, e.g., Active Subspace Methods in Theory and Practice: http://epubs.siam.org/doi/abs/10.1137/130916138 And a PDF here: http://inside.mines.edu/~pconstan/docs/constantine-asm.pdf I have a SIAM book (Active Subspaces: Emerging Ideas for Dimension Reduction in Parameter Studies) coming out in March. Suppose $f$ maps $\mathbb{...
2
That is known as 3D reconstruction. I don't know if there is anything better in the bookstores these days but Multiple-view geometry by Hartley and Zisserman is a good textbook on the subject.
1
As $J_{2} = -21$ meV is more dominant than $J_{1} = 2.3$ meV, the system is antiferromagnetic in nature. The expected ground state energy per molecule (NiO) is $-42$ meV. When the state reaches equilibrium, two of the nearest neighbors are alike and two are opposite, cancelling the energy contributions of each other. The energy contribution is from the second ...
1
Hint: The value $\max(x_1,x_2)$ is the smallest value $t$ for which $t\geq x_1$ and $t\geq x_2$.
1
If you need to terminate the ODE solver at some point (P1==P2) you can use odespy. The solve method of the solvers accepts a terminate function that accepts the state u, the time t and the integration step step_no and returns a boolean. I used it today to stop the integration when any of the state elements is too low: u,t = solver.solve(linspace(0,100,1000),...
Only top voted, non community-wiki answers of a minimum length are eligible
|
# Creating a timeline with chronology
I am trying to create a simple timeline with years underneath the line and some explaining text above each year. I understand that chronology is useful for this.
My problem is that when trying out examples from related questions (e.g. here or here), I always get the error "! Missing number, treated as zero."
A MWE (from the first related question linked to above):
\documentclass{article}
\usepackage{chronology}
\begin{document}
\begin{chronology}[3]{2011}{2016}{3ex}{\textwidth}
\end{chronology}
\end{document}
Which results in:
! Missing number, treated as zero.
}
l.4 ...chronology}[3]{2011}{2016}{3ex}{\textwidth}
?
I have tried changing some of the values or removing them altogether, but this doesn't seem to help.
Has anyone encountered this problem or does anyone know of a solution? Chronology seems to be exactly what I need.
• @LaRiFaRi That produced a tiny timeline without the years. Mar 18, 2015 at 12:33
Removing the 3ex produces the following output without error:
\documentclass{article}
\usepackage{chronology}
\begin{document}
\begin{chronology}[3]{2011}{2016}{\textwidth}
\end{chronology}
\end{document}
• That does it! I had tried leaving the curly braces around "3ex" empty, but hadn't tried you solution. Mar 18, 2015 at 12:35
• It is still strange that the code in my MWE does compile for others... Mar 18, 2015 at 12:37
• My only guess is that the package has been updated without changing the version info. It is very short on documentation... Mar 18, 2015 at 13:40
The package has been changed after Oct 14 '11 (edit: or even Apr 18 '13), as the counters and lengths have been moved outside the environment definition. I do not know why CTAN says otherwise. Maybe both things (Werner's post and Levi's fix) happened at more or less the same time.
However, the package's current code does not include the fixed definition of the chronology environment, which can also be found here.
Using Werner's and Gonzalo's code, you get the image of the other post you have linked to.
% arara: pdflatex
\documentclass{article}
\usepackage{chronology}
\renewenvironment{chronology}[5][6]{%
\setcounter{step}{#1}%
\setcounter{yearstart}{#2}\setcounter{yearstop}{#3}%
\setcounter{deltayears}{\theyearstop-\theyearstart}%
\setlength{\unit}{#4}%
\setlength{\timelinewidth}{#5}%
\pgfmathsetcounter{stepstart}%
{\theyearstart+\thestep-mod(\theyearstart,\thestep)}%
\pgfmathsetcounter{stepstop}{\theyearstop-mod(\theyearstop,\thestep)}%
\begin{lrbox}{\timelinebox}%
\begin{tikzpicture}[baseline={(current bounding box.north)}]%
\draw [|->] (0,0) -- (\thedeltayears*\unit+\unit, 0);%
\foreach \x in {1,...,\thedeltayears}%
\draw[xshift=\x*\unit] (0,-.1\unit) -- (0,.1\unit);%
\foreach \x in {\thestepstart,\thestep,...,\thestepstop}{%
\pgfmathsetlength\xstop{(\x-\theyearstart)*\unit}%
\draw[xshift=\xstop] (0,-.3\unit) -- (0,.3\unit);%
\node at (\xstop,0) [below=.2\unit] {\x};}%
}
{%
\end{tikzpicture}%
\end{lrbox}%
\raisebox{2ex}{\resizebox{\timelinewidth}{!}{\usebox{\timelinebox}}}}%
\begin{document}
\noindent
\begin{chronology}[3]{2011}{2016}{3ex}{\textwidth}
\end{chronology}
\end{document}
• Thank you. That worked in the example, but curiously not when I started adding /events. I still got it to work with the answer I accepted above, but with different problems. I've asked a follow-up question. Mar 18, 2015 at 14:53
• What should I do if I want something like: 100 --- 150 --- 200 ---- 250 --- 300 -----350 --- ... Feb 22, 2021 at 13:22
|
# Igloo Matrix BSDF
## Description
Example of igloo matrix BSDF
This BSDF model uses tabulated values over an igloo-shaped patch subdivision of the hemisphere. Incident and scattered vectors use the same basis.
This coordinate system is a generalization of the Klems angle basis used in Radiance and the Window/Optics software, described in BSDFs, Matrices and Phases by Andy McNeil, LBNL.
Note
Room daylighting simulation with a sun redirector film
Patch geometry: The hemisphere is first split along theta (the angle to the surface normal) with any user-defined subdivision. For instance, the Klems basis uses 0°, 5°, 15°, 25°, 35°, 45°, 55°, 65°, 75° and 90°. These boundary values delimit ring-shaped bands on the hemisphere.
Each band is then split evenly along the phi axis, with a user-defined number of subdivisions for each ring. For instance, the Klems basis uses:
• 0° to 5°: 1 patch
• 5° to 15°: 8 patches
• 15° to 25°: 16 patches
• 25° to 35°: 20 patches
• 35° to 45°: 24 patches
• 45° to 55°: 24 patches
• 55° to 65°: 24 patches
• 65° to 75°: 16 patches
• 75° to 90°: 12 patches
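As a sanity check on the Klems subdivision above, the per-band counts sum to 145 patches per hemisphere (a small illustrative sketch; the numbers are the ones listed above):

```python
theta = [0, 5, 15, 25, 35, 45, 55, 65, 75, 90]  # band boundaries in degrees
numphi = [1, 8, 16, 20, 24, 24, 24, 16, 12]     # phi subdivisions per band

assert len(numphi) == len(theta) - 1  # one phi count per theta band
npatches = sum(numphi)
print(npatches)  # 145
```

This is why the matrices in the XML example further down hold 145*145 values per channel.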
Patch numbering: For the incident vector, each patch is numbered starting from 0, in theta then phi order. For the scattered vector, the numbers are rotated by 180° around the normal direction. This allows the same patch number to be used for the scattered and incident vectors in the case of specular reflection.
BTDF: The transmission data uses the same basis, obtained by symmetry with respect to the surface plane.
The following figure shows the subdivisions and numbering for the Klems angle basis:
The igloo patch numbering convention for incident vector (left) and scattered vector (right)
BSDF values: The BSDF value for each pair of incident and scattered patches is given as a matrix entry, with column indices matching incident patches and row indices matching scattered patches. This matrix has the following properties:
• A fully specular BRDF or BTDF will correspond to a diagonal matrix, thanks to the scattered patch numbering rotation explained above.
• BSDF reciprocity (the value is kept unchanged by flipping the incident and scattered directions) means that non-diagonal coefficients are equal in pairs. However, due to the scattered patch numbering rotation explained above, this is not a simple matrix transposition, as it would be if the patch numbering were the same for incident and scattered directions.
A complete BSDF consists of three matrices: one for transmission (BTDF) and one for reflection on each side (front and back BRDFs). However, it is not necessary to provide all of them:
• A missing BTDF matrix results in a non-transparent material
• If a single BRDF matrix is supplied, the reflection is symmetric
• If no BRDF matrix is supplied, the material has zero reflection
Example of BRDF data representation for the incident direction 87
Channels: The BSDF coefficients may be RGB, spectral values, or any set of channels having a primary spectrum. These primaries are represented by the spectrum children nodes. For instance:
• An achromatic (gray) BSDF will have a single spectrum child primary. The spectrum will be uniform, with a value of 1
• An RGB BSDF will have three primaries, each set to the R, G or B primary spectrum using preset spectra.
• A spectral BSDF will, for instance, use a set of square spectra as primaries.
## Children Nodes
This node is a spectrum list. It may have a variable number of spectrum children.
• user-defined: First primary
• user-defined: Second primary
• user-defined: ...
## Ocean XML 4.0 example
Matrix encoding
Each matrix is encoded as a list of coefficients. The number of coefficients is: nchannels * npatches * npatches.
For the ith channel, the jth scattered patch, and the kth incident patch, the coefficient index is:
i + j * nchannels + k * nchannels * npatches
Or said differently, the coefficients are sorted by incident patch, then by scattered patch, then by channel.
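That indexing rule can be sketched in Python (nchannels and npatches are illustrative values; 145 is the Klems patch count):

```python
nchannels, npatches = 1, 145

def coeff_index(i, j, k):
    """Flat index of channel i, scattered patch j, incident patch k."""
    return i + j * nchannels + k * nchannels * npatches

print(coeff_index(0, 0, 0))  # 0: first coefficient
print(coeff_index(0, 1, 0))  # 1: next scattered patch
print(coeff_index(0, 0, 1))  # 145: next incident patch starts a new column
```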
<bsdf type="igloomatrix" name="bsdf">
<spectrum type="uniform" name="" id="0" value="1"/>
<flist name="theta">
0 5 15 25 35 45 55 65 75 90
</flist>
<ilist name="numphi">
1 8 16 20 24 24 24 16 12
</ilist>
<flist name="btdf">
36.5314 0 0 0 0 <!-- Actual data skipped - 145*145 values -->
</flist>
<flist name="brdf_front">
3.69662 0 0 0 0 <!-- Actual data skipped - 145*145 values -->
</flist>
<flist name="brdf_back">
4.26844 0 0 0 0 <!-- Actual data skipped - 145*145 values -->
</flist>
</bsdf>
Note : If reflectivity is the same on both sides, a single BRDF may be provided with:
<flist name="brdf">
4.26844 0 0 0 0 <!-- Actual data skipped - 145*145 values -->
</flist>
|
# Localization and p-adic completion of Integers coincide?
I want to know if $$\mathbb{Z}_{(p)}$$ (the localization at a prime ideal) and $$\mathbb{Z}_p$$ (the ring of $$p$$-adic integers, i.e. the completion) are isomorphic. It seems true, but I don't know how to prove it. Does this hold for every PID?
Thanks.
• $\mathbb{Z}_{(p)}$ is dense in $\mathbb{Z}_p$ and in both $(p^n)$ are the only ideals – reuns Nov 24 '18 at 2:40
## 2 Answers
No, $$\mathbb{Z}_p$$ is much larger than $$\mathbb{Z}_{(p)}$$. Indeed, $$\mathbb{Z}_p$$ is uncountable, since it has an element $$\sum a_np^n$$ for any sequence of coefficients $$a_n\in\{0,1,\dots,p-1\}$$. On the other hand, $$\mathbb{Z}_{(p)}$$ is a subring of $$\mathbb{Q}$$ (the rationals with denominator not divisible by $$p$$), so it is countable.
$$\mathbb{Z}_{(p)}$$ is a proper subring of $$\mathbb{Z}_{p}$$, and the latter is a complete discrete valuation ring, while $$\mathbb{Z}_{(p)}$$ is not complete (though it is still a DVR with respect to the same valuation). However, if you take the completion of $$\mathbb{Z}_{(p)}$$ with respect to the $$p$$-adic norm, you get $$\mathbb{Z}_{p}$$.
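The completion statement can be made concrete with a small computation: in $$\mathbb{Z}_p$$ the geometric series $$1+p+p^2+\cdots$$ converges to $$1/(1-p)$$, meaning its partial sums agree with the inverse of $$1-p$$ modulo ever-higher powers of $$p$$ (an illustrative sketch; the three-argument pow for modular inverses needs Python 3.8+):

```python
p = 5
for k in range(1, 8):
    partial = sum(p**n for n in range(k)) % p**k   # partial sum of the series
    inverse = pow(1 - p, -1, p**k)                 # inverse of 1-p in Z/p^k Z
    assert partial == inverse
print("1 + p + p^2 + ... converges p-adically to 1/(1-p)")
```

Here $$1/(1-p)$$ happens to lie in $$\mathbb{Z}_{(p)}$$; a $$p$$-adic integer whose digit sequence is not eventually periodic lies in $$\mathbb{Z}_p$$ but not in $$\mathbb{Z}_{(p)}$$.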
|
# C# LeetCode Practice #830: Positions of Large Groups
In a string S of lowercase letters, these letters form consecutive groups of the same character.
For example, a string like S = “abbxxxxzyy” has the groups “a”, “bb”, “xxxx”, “z” and “yy”.
Call a group large if it has 3 or more characters. We would like the starting and ending positions of every large group.
The final answer should be in lexicographic order.
Input: “abbxxxxzzy”
Output: [[3,6]]
Explanation: “xxxx” is the single large group with starting 3 and ending positions 6.
Input: “abc”
Output: []
Explanation: We have “a”,”b” and “c” but no large group.
Input: “abcdddeeeeaabbbcd”
Output: [[3,5],[6,9],[12,14]]
Note: 1 <= S.length <= 1000
```csharp
public class Program {
public static void Main(string[] args) {
string S = string.Empty;
S = "abbxxxxzzy";
var res = LargeGroupPositions(S);
ShowArray(res);
S = "abcdddeeeeaabbbcd";
res = LargeGroupPositions2(S);
ShowArray(res);
}
private static void ShowArray(IList<IList<int>> array) {
foreach(var list in array) {
foreach(var index in list) {
Console.Write($"{index} ");
}
}
Console.WriteLine();
}
private static IList<IList<int>> LargeGroupPositions(string S) {
var result = new List<IList<int>>();
var last = '\0';
var startIndex = -1;
var endIndex = -1;
for(var i = 0; i < S.Length; i++) {
if(S[i] != last || i == S.Length - 1) {
endIndex = i - 1;
if(i == S.Length - 1 && S[i] == last) endIndex = i;
if(endIndex - startIndex + 1 >= 3) {
var item = new List<int> { startIndex, endIndex };
result.Add(item);
}
startIndex = i;
}
last = S[i];
}
return result;
}
private static IList<IList<int>> LargeGroupPositions2(string S) {
var result = new List<IList<int>>();
for(var i = 0; i < S.Length; i++) {
var next = i + 1;
var dic = new Dictionary<int, int>();
while(next < S.Length && S[next] == S[i]) {
if(next - i >= 2) {
dic[i] = next;
}
next++;
}
if(dic.TryGetValue(i, out int value)) {
result.Add(new int[] { i, value });
i = next - 1;
//Changing the loop counter inside the loop body is bad practice; remember this
//Do not imitate this unless it is really necessary
}
}
return result;
}
}
```
```
3 6
3 5 6 9 12 14
```
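For comparison, the same grouping idea can be sketched in a few lines of Python with itertools.groupby (an alternative illustration, not a translation of the C# code above):

```python
from itertools import groupby

def large_group_positions(s):
    result, i = [], 0
    for _, group in groupby(s):          # runs of equal characters
        n = len(list(group))
        if n >= 3:
            result.append([i, i + n - 1])
        i += n
    return result

print(large_group_positions("abbxxxxzzy"))         # [[3, 6]]
print(large_group_positions("abcdddeeeeaabbbcd"))  # [[3, 5], [6, 9], [12, 14]]
```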
|
# Is aspect ratio efficiency related to the momentum added to the air?
Many years ago I was taught that for generating lift or propulsion, lower losses were associated with imparting less momentum change to a larger volume of air, and higher losses were seen when imparting larger momentum changes to a smaller volume of air - and that the losses were due to friction (i.e., the air's viscosity).
The propulsive example of this is the commonly-cited turbojet-versus-high bypass turbofan engine comparison, and I had been under the impression that the airfoil example was high aspect ratio versus low aspect ratio wing designs. However, having just learned that this is in fact incorrect (it has to do with induced drag instead), is there in fact any connection at all between the imparted momentum change/volume of air argument and airfoil aspect ratio?
You're almost there. The difference lies in the kinetic energy required to move a certain mass of air, not in the air's viscosity. Copying freely from one of my earlier answers:
The force to keep the object aloft is $$F=m_{object}g$$ The force generated by downwards momentum transfer is $$F=\dot{m}v$$ with $\dot{m}$ indicating the mass flow (kilograms per second) of the air (not the mass of the object). The energy flow (power) required to impart this momentum on the airflow is $$P=\frac{1}{2}\dot{m}v^2$$ Here we can draw an important conclusion: the power requirement can be made arbitrarily small by increasing the mass flow and decreasing the downwards velocity.
The power requirement expresses itself in the form of induced drag. By affecting a large mass of air (a long wingspan), you reduce the power requirement, which manifests itself in the form of lower induced drag.
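A quick numeric sketch of this conclusion (with illustrative numbers): combining $F=\dot{m}v$ and $P=\frac{1}{2}\dot{m}v^2$ gives $P = F^2/(2\dot{m})$ for a fixed lift force, so ten times the mass flow needs one tenth the power.

```python
def induced_power(force, mdot):
    v = force / mdot          # downwash velocity needed for this mass flow
    return 0.5 * mdot * v**2  # equivalently force**2 / (2 * mdot)

F = 10000.0  # N, weight to support (illustrative)
print(induced_power(F, 100.0))   # 500000.0 W for a small affected mass flow
print(induced_power(F, 1000.0))  # 50000.0 W with 10x the mass flow
```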
• this is most excellent, thanks for the clear explanation. – niels nielsen Sep 4 '18 at 20:31
|
Geometry
# Sine Rule - Ambiguous Case
How many distinct triangles are there such that $a=6, c=15, \angle A=30^\circ?$
Note: $$a, b$$ and $$c$$ are the lengths of the sides opposite the vertices $$A,$$ $$B$$ and $$C,$$ respectively.
How many distinct triangles are there such that $a=20, c=16, \angle A=30^\circ?$
Note: $$a, b$$ and $$c$$ are the lengths of the sides opposite the vertices $$A,$$ $$B$$ and $$C,$$ respectively.
How many distinct triangles are there such that $\angle A=30^\circ, a=9, b=9\sqrt{3} ?$
Note: $$a, b$$ and $$c$$ are the lengths of the sides opposite the vertices $$A,$$ $$B$$ and $$C,$$ respectively.
In triangle $$ABC$$, $$a=13\sqrt{3}$$, $$b=13$$, $$\angle B=30^{\circ}$$, and $$\angle C$$ is acute. What is the value of $$\angle C$$ (in degrees)?
Details and assumptions
$$a$$, $$b$$ and $$c$$ are the lengths of the sides opposite to the vertices $$A$$, $$B$$ and $$C$$, respectively. An acute angle is an angle strictly less than $$90^\circ$$.
How many distinct triangles are there such that $a=35, b=56, \angle A=30^\circ?$
Note: $$a, b$$ and $$c$$ are the lengths of the sides opposite the vertices $$A,$$ $$B$$ and $$C,$$ respectively.
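The counting behind problems like these can be sketched with the law of sines: given $$a$$, $$c$$ and $$\angle A$$, compute $$\sin C = c\sin A/a$$ and check which of the two candidate angles yields a valid triangle (an illustrative helper, not part of the original problems):

```python
import math

def count_triangles(a, c, A_deg, eps=1e-9):
    """Number of distinct triangles with sides a, c and angle A opposite a."""
    s = c * math.sin(math.radians(A_deg)) / a
    if s > 1 + eps:
        return 0                       # no angle C with this sine
    if abs(s - 1) <= eps:
        return 1                       # C is a right angle: one triangle
    C1 = math.degrees(math.asin(s))    # acute candidate
    count = 1 if A_deg + C1 < 180 else 0
    if A_deg + (180 - C1) < 180:       # obtuse candidate
        count += 1
    return count

print(count_triangles(6, 15, 30))                # 0
print(count_triangles(20, 16, 30))               # 1
print(count_triangles(9, 9 * math.sqrt(3), 30))  # 2
```

The same helper applies with $$b$$ in place of $$c$$, as in the third problem.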
|
## Zeros of Continuous-time Linear Periodic Systems
Zeros of continuous-time linear periodic systems are defined and their properties investigated. Under the assumption that the system has uniform relative degree, the zero-dynamics of the system is characterized and a closed-form expression of the blocking inputs is derived. This leads to the definition of zeros as unobservable characteristic exponents of a suitably defined periodic pair. The zeros of periodic linear systems satisfy blocking properties that generalize the well-known time-invariant case. Finally, an efficient computational scheme is provided that essentially amounts to solving an eigenvalue problem.
Published in:
Automatica, 34, 12, 1651-1655
Year:
1998
|
# Chapter 05.06: Extrapolation is a Bad Idea
## Learning Objectives
After successful completion of this lesson, you should be able to:
1) enumerate why using extrapolation can be a bad idea.
2) show through an example why extrapolation can be a bad idea.
## Description
This conversation illustrates the pitfall of using extrapolation.
(Due to certain reasons, this student wishes to remain anonymous.)
This takes place in Summer Session B – July 2001
Student: “Hey, Dr. Kaw! Look at this cool new cell phone I just got!”
Kaw: “That’s nice. It better not ring in my class or it’s mine.”
Student: “What would you think about getting stock in this company?”
Kaw: “What company is that?”
Student: “WorldCom! They’re the world’s leading global data and internet company.”
Kaw: “So?”
Student: “They’ve just closed the deal today to merge with Intermedia Communications, based right here in Tampa!”
Kaw: “Yeah, and …?”
Student: “The stock’s booming! It’s at \$14.11 per share and promised to go only one way—up! We’ll be millionaires if we invest now!”
Kaw: “You might not want to assume their stock will keep rising … besides, I’m skeptical of their success. I don’t want you putting yourself in financial ‘jeopardy!’ over some silly extrapolation. Take a look at these NASDAQ composite numbers (Table 1).”
Student: “That’s only up to two years ago …”
Kaw: “That’s right. Looking at this data, don’t you think you should’ve invested back then?”
Student: “Well, didn’t the composite drop after that?”
Kaw: “Right again, but look what you would’ve hoped for if you had depended on that trend continuing (Figure 1).”
Student: “So you’re saying that …?”
Kaw: “You should seldom depend on extrapolation as a source of approximation! Just take a look at how wrong you would have been (Table 2).”
Table 1. End of year NASDAQ composite data
| End of year | NASDAQ |
|---|---|
| 1 | 751.96 |
| 2 | 1052.13 |
| 3 | 1291.03 |
| 4 | 1570.35 |
| 5 | 2192.69 |
| 6 | 4069.31 |
Note: The range of years in Table 1 is actually 1994 (Year 1) to 1999 (Year 6). Numbers start from 1 to avoid round-off errors and near singularities in the matrix calculations.
Figure 1 Data from 1994 to 1999 extrapolated to yield results for 2000 and 2001 using polynomial extrapolation.
Table 2 Absolute relative true error of polynomial interpolation.
| End of Year | Actual | Fifth-order polynomial interpolation | Absolute relative true error |
|---|---|---|---|
| 2000 | 2471 | 9128 | 269.47% |
| 2001 | 1950 | 20720 | 962.36% |
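Table 2's fifth-order extrapolation can be reproduced with a short pure-Python evaluation of the unique degree-5 interpolant through Table 1's six points (a sketch for illustration, using the Lagrange form):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the unique interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

years = [1, 2, 3, 4, 5, 6]  # 1994..1999
nasdaq = [751.96, 1052.13, 1291.03, 1570.35, 2192.69, 4069.31]
print(round(lagrange_eval(years, nasdaq, 7)))  # 9128  (actual 2000 close: 2471)
print(round(lagrange_eval(years, nasdaq, 8)))  # 20720 (actual 2001 close: 1950)
```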
Student: “Now wait a sec! I wouldn’t have been quite that wrong. What if I had used cubic splines instead of a fifth-order interpolant?”
Kaw: “Let’s find out.”
Figure 2 Data from 1994 to 1999 extrapolated to yield results for 2000 and 2001 using cubic spline interpolation.
Table 3 Absolute relative true error of cubic spline interpolation
| End of Year | Actual | Cubic spline interpolation | Absolute relative true error |
|---|---|---|---|
| 2000 | 2471 | 5945.9 | 140.63% |
| 2001 | 1950 | 5947.4 | 204.99% |
Student: “There you go. That didn’t take so long (Figure 2 and Table 3).”
Kaw: “Well, let’s think about what this data means. If you had gone ahead and invested, thinking your projected yield would follow the spline, you would have only been 205% (Table 3) wrong, as opposed to being 962% (Table 2) wrong by following the polynomial. That’s not so bad, is it?”
Student: “Okay, you’ve got a point. Maybe I’ll hold off on being an investor and just use the cell phone.”
Kaw: “You’ve got a point, too—you’re brighter than you look … that is if you turn off the phone before coming to class.”
* * * * *
<One year later … July 2002>
Student: “Hey, Dr. Kaw! Whatcha got for me today?”
Kaw: “The Computational Methods students just took their interpolation test today, so here you go. <hands stack of tests to student> Time to grade them!”
Student: <Grunt!> “That’s a lot of paper! Boy, interpolation … learned that a while ago.”
Kaw: “You haven’t forgotten my lesson to you about not extrapolating, have you?”
Student: “Of course not! Haven’t you seen the news? WorldCom just closed down 93%, from 83¢ on June 25 to 6¢ per share! They’ve had to recalculate their earnings, so your skepticism really must’ve spread. Did you have an ‘in’ on what was going on?”
Kaw: “Oh, of course not. I’m just an ignorant numerical methods professor.”
|
Not to be confused with J. H. C. Whitehead, who also worked in algebraic topology.
## Selected writings
Introducing the J-homomorphism:
• George Whitehead, On the homotopy groups of spheres and rotation groups, Annals of Mathematics. Second Series 43 (4): 634–640 (1942) (jstor:1968956)
Computing the second stable homotopy group of spheres:
• George Whitehead, The $(n+2)$nd Homotopy Group of the $n$-Sphere, Annals of Mathematics Second Series, Vol. 52, No. 2 (Sep., 1950), pp. 245-247 (jstor:1969466)
category: people
Last revised on December 21, 2020 at 10:47:09. See the history of this page for a list of all contributions to it.
|
# function property name
by muballitmitte
Tags: function, metric, operator
P: 3 Is there a special name for functions that satisfy the inequality $$d(x,T^2x) \le d(x,Tx)$$ with $$d$$ being a metric? Does any of you know?
|
# nLab twisted Pontrjagin theorem
Contents
### Context
#### Cobordism theory
Concepts of cobordism theory
flavors of bordism homology theories/cobordism cohomology theories, their representing Thom spectra and cobordism rings:
bordism theory$\;$M(B,f) (B-bordism):
relative bordism theories:
algebraic:
# Contents
## Idea
A twisted Pontrjagin theorem should generalize the Pontrjagin theorem from plain homotopy theory/cobordism theory to parametrized homotopy theory/twisted cobordism theory:
Where the plain Pontrjagin theorem identifies the Cohomotopy of a differentiable manifold with its cobordism classes of normally framed submanifolds, the twisted Pontrjagin theorem should identify twisted Cohomotopy with cobordism classes of normally twisted-framed submanifolds (Cruickshank 03, Lemma 5.2).
## References
### Pontrjagin-Thom construction
#### Pontrjagin’s construction
##### General
The Pontryagin theorem, i.e. the unstable and framed version of the Pontrjagin-Thom construction, identifying cobordism classes of normally framed submanifolds with their Cohomotopy charge in unstable Borsuk-Spanier Cohomotopy sets, is due to:
(both available in English translation in Gamkrelidze 86),
as presented more comprehensively in:
The Pontrjagin theorem must have been known to Pontrjagin at least by 1936, when he announced the computation of the second stem of homotopy groups of spheres:
• Lev Pontrjagin, Sur les transformations des sphères en sphères (pdf) in: Comptes Rendus du Congrès International des Mathématiques – Oslo 1936 (pdf)
Review:
Discussion of the early history:
##### Twisted/equivariant generalizations
The (fairly straightforward) generalization of the Pontrjagin theorem to the twisted Pontrjagin theorem, identifying twisted Cohomotopy with cobordism classes of normally twisted-framed submanifolds, is made explicit in:
A general equivariant Pontrjagin theorem – relating equivariant Cohomotopy to normal equivariant framed submanifolds – remains elusive, but on free G-manifolds it is again straightforward (and reduces to the twisted Pontrjagin theorem on the quotient space), made explicit in:
• James Cruickshank, Thm. 5.0.6, Cor. 6.0.13 in: Twisted Cobordism and its Relationship to Equivariant Homotopy Theory, 1999 (pdf, pdf)
##### In negative codimension
In negative codimension, the Cohomotopy charge map from the Pontrjagin theorem gives the May-Segal theorem, now identifying Cohomotopy cocycle spaces with configuration spaces of points:
• Peter May, The geometry of iterated loop spaces, Springer 1972 (pdf)
• Graeme Segal, Configuration-spaces and iterated loop-spaces, Invent. Math. 21 (1973), 213–221. MR 0331377 (pdf)
Generalization of these constructions and results is due to
• Dusa McDuff, Configuration spaces of positive and negative particles, Topology Volume 14, Issue 1, March 1975, Pages 91-107 (doi:10.1016/0040-9383(75)90038-5)
• Carl-Friedrich Bödigheimer, Stable splittings of mapping spaces, Algebraic topology. Springer 1987. 174-187 (pdf, pdf)
#### Thom’s construction
Thom's theorem i.e. the unstable and oriented version of the Pontrjagin-Thom construction, identifying cobordism classes of normally oriented submanifolds with homotopy classes of maps to the universal special orthogonal Thom space $M SO(n)$, is due to:
Textbook accounts:
#### Lashof’s construction
The joint generalization of Pontryagin 38a, 55 (framing structure) and Thom 54 (orientation structure) to any family of tangential structures (“(B,f)-structure”) is first made explicit in
and the general statement that has come to be known as Pontryagin-Thom isomorphism (identifying the stable cobordism classes of normally (B,f)-structure submanifolds with homotopy classes of maps to the Thom spectrum Mf) is Lashof 63, Theorem C.
Textbook accounts:
Lecture notes:
• John Francis, Topology of manifolds course notes (2010) (web), Lecture 3: Thom’s theorem (pdf), Lecture 4 Transversality (notes by I. Bobkova) (pdf)
• Cary Malkiewich, Section 3 of: Unoriented cobordism and $M O$, 2011 (pdf)
• Tom Weston, Part I of An introduction to cobordism theory (pdf)
|
# The open sets in the Zariski topology are the complements of finite sets
What does "that is, on maximal ideals of $$k[x]$$" mean? Just before that remark it was said that we should work in the Zariski topology on $$A^1(k)$$, so the remark following is confusing. What exactly is being asked?
Assuming that the question is "show that the open subsets of $$A^1(k)$$ are the complements of finite sets": an open subset of $$A^1(k)$$ is of the form $$A^1(k)\setminus Z(I)$$ for an ideal $$I\subset k[x]$$. It is indeed the complement of the finite set $$Z(I)$$: since $$k$$ (hence $$k[x]$$) is Noetherian, $$I$$ is finitely generated, and every generator has only finitely many roots. Is that reasoning correct?
Conversely, suppose $$S$$ is a finite subset of $$A^1(k)$$. I need to show that $$A^1(k)\setminus S$$ is open. So I need to show that $$S$$ is an algebraic set, right? Shall I just consider the ideal generated by a polynomial whose roots are exactly the elements of $$S$$ (e.g. $$\prod_{s\in S}(x-s)$$)? This should prove that $$S$$ is the zero set of that ideal, so $$A^1(k)\setminus S$$ is open. But where do we need that $$k$$ is algebraically closed?
For the last question, should I prove it for all $$n$$, or does it suffice to prove it for $$n=2$$? For $$n=2$$, this answer suggests looking at $$x=y$$, but I don't see why this is a counterexample: the zero set of $$x-y$$ is closed in the Zariski topology, and it's also closed, say, in $$\mathbb R^2$$ (the product topology coincides with the Euclidean topology, and the line contains all its limit points, hence is closed).
• When $k$ is algebraically closed, $\mathbf A^n(k)$ can be thought of in two ways: first, as the cartesian product $k^n$, and second, as the set of maximal ideals of the ring $R = k[t_1, ... , t_n]$. Explicitly, if $(a_1, ... , a_n) \in k^n$, then this identifies with ideal generated by the elements $t_1 - a_1, ... , t_n - a_n$. This is one way of stating the Nullstellensatz. – D_S Jan 20 '19 at 4:29
• "since k (hence k[x]) is Noetherian, I is finitely generated, and every generator has only finitely many roots. Is that reasoning correct?" You're not wrong, but $k[x]$ is even a principal ideal domain, so $I$ would be generated by a single polynomial $f$ which has finitely many roots. – D_S Jan 20 '19 at 4:34
• @D_S So is the problem asking then to prove that the open sets in the space of maximal ideals are the complements of finite sets? (With the topology on the set of maximal ideals being the subset topology of the Zariski topology on the set of prime ideals.) Or is the problem asking what I tried to prove in the question? – user437309 Jan 20 '19 at 14:38
• I can't tell which definition of $\mathbf A^n(k)$ they are using, $k^n$ or the maximal ideals, because I don't know what book this is – D_S Jan 20 '19 at 16:33
• @D_S It's Eisenbud's "Commutative Algebra ..." (p.54). Earlier in this exercise he introduced the topology on the prime spectrum. But he explicitly said that this is a topology on $\operatorname{Spec}(R)$, whereas in the exercise he is talking about the topology on $A^1(k)$. – user437309 Jan 20 '19 at 16:41
Assume $$k$$ is algebraically closed. Let's use the definition that $$\mathbf A^1(k)$$ consists of all maximal ideals of the ring $$k[t]$$, with the induced topology from $$\operatorname{Spec} k[t]$$. Since $$k$$ is algebraically closed, every maximal ideal is of the form $$\mathfrak m = (t - a)$$ for a unique $$a \in k$$. It follows from the definition of the Zariski topology that the closed sets in $$\mathbf A^1(k)$$ are those of the form
$$Z(I) = \{ \mathfrak m \in \mathbf A^1(k) : I \subseteq \mathfrak m \}$$
where $$I$$ is any ideal of $$k[t]$$. Since $$k[t]$$ is a principal ideal domain, $$I$$ is generated by a polynomial $$f(t) = (t - a_1)^{m_1} \cdots (t - a_n)^{m_n}$$ for $$a_i \in k$$. For a maximal ideal $$\mathfrak m = (t - a)$$ of $$k[t]$$, check that $$I \subseteq \mathfrak m$$ if and only if $$a$$ is one of the $$a_i$$. It follows that
$$Z(I) = \{ (t - a_1), ... , (t - a_n) \}$$
and is in particular a finite set.
• Thanks! It turned out to be very easy. Regarding the last question of the problem, is it still asking about the set of maximal ideals in $k[t]$, or is it now asking about the "usual" $A^n(k)$ since it's emphasized there that $A^n(k)=k^n$? – user437309 Jan 20 '19 at 18:47
• I would assume the $k^n$ definition, but they are basically equivalent, since whenever $A$ and $B$ are finitely generated $k$-algebras over $k$ algebraically closed, you can identify (as sets, not as topological spaces) $$\operatorname{m-Spec}(A \otimes_k B) = \operatorname{m-Spec} A \times \operatorname{m-Spec} B$$ and $$k[t_1, ... , t_n] \otimes_k [t_1, ... , t_m] = k[t_1, ... , t_{n+m}]$$ – D_S Jan 20 '19 at 18:49
• The empty set must be open too. But it is the complement of a finite set iff $MaxSpec(k[t])$ is finite. Is it always the case? Is this case covered by your argument? – user437309 Jan 20 '19 at 21:59
• Take $I$ to be the zero ideal, then $Z(I)$ will be the whole space. – D_S Jan 21 '19 at 1:10
• That's true but we have to show that the open sets are the complemets of finite sets (equivalently, the closed sets are the finite sets), but the fact that $Z(I)$ is the whole space doesn't say that $Z(I)$ is finite. – user437309 Jan 21 '19 at 1:13
|
# Macquarie University Department of Mathematics
## Workshop on Categorical Methods in Algebra, Geometry and Mathematical Physics
### Satellite to the StreetFest conference in honour of Ross Street's sixtieth birthday
#### July 18-21 2005, Australian National University, Canberra
Thu, 21 July: 16:40 - 17:20
##### The periodic table of $n$-categories: low-dimensional results
###### Cheng, Eugenia (University of Chicago)
We examine the periodic table of weak $n$-categories for the low-dimensional cases. It is widely understood that degenerate categories give rise to monoids, doubly degenerate bicategories to commutative monoids, and degenerate bicategories to monoidal categories; however, to understand the situation fully we should examine the totalities of such structures. Categories naturally form a 2-category {\bfseries Cat}, so we can take the full sub-2-category of this whose 0-cells are the degenerate categories. On the other hand monoids naturally form a category, but we can regard this as a discrete 2-category to make the comparison. We show that this construction does not yield a biequivalence; to get an equivalence we must ignore the natural transformations and consider only the {\it category} of degenerate categories.
A similar situation occurs for degenerate bicategories. The tricategory of such does not yield an equivalence with monoidal categories; we must consider only the categories of such structures.
For doubly degenerate bicategories the situation is more subtle. The tricategory of such is not naturally triequivalent to the category of commutative monoids (regarded as a tricategory). However, in this case considering just the categories does not give an equivalence either; to get an equivalence we must consider the {\it bicategory} of doubly degenerate bicategories.
We conclude with some remarks about how the above cases might generalise for degenerate, doubly degenerate and triply degenerate tricategories, and for $n$-fold degenerate $n$-categories.
Typeset PDF of this abstract.
|
### EPIM FOR THERMAL CONSOLIDATION PROBLEMS OF SATURATED POROUS MEDIA SUBJECTED TO A DECAYING HEAT SOURCE
Wang Lujun1,2, Ai Zhiyong1
1. Department of Geotechnical Engineering, Key Laboratory of Geotechnical and Underground Engineering of Ministry of Education, Tongji University, Shanghai 200092, China;
2. Institute of Geotechnical Engineering, Key Laboratory of Soft Soils and Geoenvironmental Engineering of Ministry of Education, Zhejiang University, Hangzhou 310058, China
• Received:2016-09-28 Revised:2017-01-04 Online:2017-03-15 Published:2017-03-21
Abstract:
The thermal consolidation of saturated porous media subjected to a heat source is an important subject in civil engineering and energy engineering. Owing to the complexity of the problem, existing studies usually treat the porous media as homogeneous and isotropic and assume a heat source of constant strength. In engineering practice, however, natural saturated porous media usually show obvious layered characteristics and the heat source decays with time. Therefore, the extended precise integration method (EPIM) is presented in this study to investigate the thermal consolidation problems of layered saturated porous media subjected to a decaying heat source. The partial differential equations are reduced to ordinary ones by means of integral transform techniques. By combining the adjacent layer elements and considering the boundary conditions, the EPIM solutions of the problems in the transformed domain are deduced. With the aid of the corresponding numerical integral inversion, the temperatures, excess pore pressures and vertical displacements in the physical domain are obtained. A numerical example with the corresponding calculation program is performed to compare with existing results, which confirms the applicability and validity of the presented method for the thermal consolidation problems of layered saturated porous media. Finally, numerical examples are carried out to analyse the influence of the heat source's half-life and buried depth, as well as the stratification of the medium, on the thermal consolidation behaviour.
Numerical results show that the decay period of the heat source has a significant influence on the peak values and peak times of temperature and excess pore pressure: the longer the decay period, the greater the peak values and the later the peaks occur. Burial depth has an obvious influence on the evolution of excess pore pressure and vertical displacement: the vertical displacements on the two sides of a deeply buried heat source evolve symmetrically with time, whereas no such symmetry appears for a shallow heat source. The stratification of the saturated porous media also has a prominent effect on the thermal consolidation.
Key words:
heat source|saturated porous media|thermal consolidation|extended precise integration method
CLC Number:
# Simple Heart Rate Monitor Using Reflective Sensor
So I’m sure all of you have seen one of these at least once, most likely on a visit to a hospital: this device is called a pulse oximeter. It measures your heart rate and the oxygen level in your blood. It works by having two LEDs on one side of the finger, usually red and infrared, and on the other side a phototransistor or light-dependent resistor (LDR). As blood pulses through your finger, the amount of light reaching the sensor changes and the output spikes because of this.
You could use either the red or the IR LED for this, but if you want to calculate blood oxygen levels you need both. Oxygenated blood lets more red light through and deoxygenated blood lets more IR light through, so by alternating the LEDs and comparing the outputs you can calculate the oxygen level. In this project, though, I’m just going to measure the heart rate.
I wanted to make a simple heart rate monitor, so I used cheap components that I had lying around; it definitely isn’t the best heart rate monitor around, but it works. From a bit of googling I found a few other people with the same idea, one being Scott Harden, who built a device a few years ago that could also be used to record an ECG. His post about building it can be found here, and he has lots of other great projects, so give it a visit. I used his circuit as a reference for mine; I didn’t have the exact same value for each component but chose ones that were close. My circuit is shown below:
The phototransistor I’m using is part of the TCRT5000 reflective sensor, which is very cheap (around €1.10 for ten on eBay) and also includes an IR LED, so it had everything I needed. Let me explain the parts of the circuit above. First, at the top, is an operational amplifier (op amp) that I’m using as a virtual ground (VG). The reason for this is so we can power the circuit from a single supply: to get the full range out of an op amp you would normally supply it with both a negative and a positive voltage, but with a virtual ground we can use just one supply. For example, with a 12V supply, the VG op amp outputs 6V (VCC/2) because of the voltage divider, and with respect to this ground the supply rails are seen as +6V and -6V. If the op amp weren’t there and we just used the voltage divider, the voltage wouldn’t remain constant, as the resistance of the bottom half of the divider would change when connected in parallel with other resistances in the circuit. Now onto the reflective sensor: as I said, I’m using the TCRT5000, with a 1k resistor to limit the current through the IR LED and a 10k resistor as the load in the voltage divider for the phototransistor. If you’re using a different supply voltage you’d have to change the 1k to something appropriate for that voltage, and if you want to change the sensitivity of the transistor, change the 10k resistor; this value worked well for me.
Now onto the second op amp, which is a differentiator: as the name says, it differentiates the input to the amp, and differentiation also acts as a high-pass filter. This removes the DC component from the input and amplifies the changing voltage. The equation for a differentiator is as follows:
$Vout=R*C*\frac{dVin}{dt}$
R is the resistance R5, C is the capacitance C2, dVin is the change in input voltage and dt is the change in time. The next part of the circuit is the low-pass filter, which is mainly used to remove the 50-60Hz noise caused by the mains in your house. By changing the value of the 10k potentiometer you change the cut-off frequency of the filter, given by the equation:
$f_c=\frac{1}{2*\pi*R*C}$
You want to adjust the pot enough to remove the noise while keeping the majority of the input signal. I put a hook I could clip onto at this part of the circuit so I could see how much of the signal I was filtering; this can be seen in the picture of the final project near the end. The next op amp is a buffer, or unity-gain amplifier. It separates the left side of the circuit from the right, which would otherwise cause problems between the differentiator and integrator amps; there is no change between the input and output of this amp. The final part of the circuit is the integrator which, again as the name says, integrates the input; integration works as a low-pass filter as well. The equations for it are as follows:
$DC Voltage Gain=-\frac{R2}{R1}$
$AC Voltage Gain = -\frac{R2}{R1}*\frac{1}{1+2*\pi*f*R2*C}$
$f_c = \frac{1}{2*\pi*R2*C}$
R2 is the resistor between the output and input. I didn’t have a 2k potentiometer, which would work better here, so I used the 1k as the closest I had; having this potentiometer does change the cut-off frequency, but not by too much. I added some ‘hooks’ I could clip onto easily for measuring voltages at different parts of the circuit and for the input voltages; you could also use terminal blocks. To restrict some of the light coming in from the sides of the sensor I put some white tape around it. For testing, build it on a breadboard so it’s easier to troubleshoot or change things, then move to a more permanent solution. That’s it, circuit finished: now just power it, place your finger on the sensor and measure the output on an oscilloscope. If, like me, you don’t have an oscilloscope, you’ll have to improvise. Below is a makeshift oscilloscope using an Arduino Nano:
So the analog-to-digital converter (ADC) of the Nano has a range of 0-5V with 10-bit resolution (values 0-1023), while our output signal has a range of -6V to +6V, so we can’t connect it directly to the analog input. First the signal passes through a capacitor to filter out the DC, leaving a signal centred on zero. After that it goes through a potentiometer, which is used to reduce the voltage if the signal has a large swing, and the final potentiometer is connected to the 5V of the Nano to create a DC offset so we can see the negative parts of the signal. That’s all of that circuit done; just write a simple sketch to read in the values and print them over serial. Arduino has a great feature called “Serial Plotter” which graphs whatever is printed over serial. It can be used as a simple oscilloscope, but I didn’t find a way to customise it, such as displaying extra information like the beats per minute (BPM) or setting a fixed scale on the axes, so I decided to write a Processing sketch which plots the data coming in over serial, calculates and displays the BPM of the user and allows saving of the data. A video of me demonstrating the device and code can be seen below:
As I mentioned in the video, I couldn’t get the Processing sketch to plot the graph in real time, even though the Arduino was capable of sending the data in real time at ten times the speed; if anyone knows how I would go about getting it to plot in real time, feel free to comment below. In the end I’m happy with how it turned out: I was able to measure my pulse and automatically calculate my BPM to a degree, so in my mind it was a success. I might come back to it and measure my O2 levels or make some improvements in the future.
Below is the code and a few other things:
Calculation for BPM:
Samples to Time @ 100Hz:
$t = Samples*0.01$
Time between peaks(seconds per beat):
$\Delta t = t2-t1$
Beats per second(BPS):
$BPS = \frac{1}{\Delta t}$
Beats per minute(BPM):
$BPM=60*BPS$
//
// Simple_Arduino_Ocscilloscope.ino
// This code reads the analog value on pin A0
// at 100Hz and converts the raw reading to a voltage.
// This voltage is sent over Serial to a processing sketch
// Created By Ronan Byrne, https://roboroblog.wordpress.com/2016/09/05/simple-heart-rate-monitor/
// Last Updated 06/09/2016
//
// Include timer library which can be found
// here http://www.doctormonk.com/search?q=timer
#include "Timer.h"
Timer t; // Create Timer object
const int ch1 = A0; // Analog input pin
float volt; // Voltage from A0
void setup() {
Serial.begin(250000); // Baudrate of 250000
pinMode(ch1, INPUT); // Set A0 as an input
Serial.println('B'); // Print 'B' to the PC
// To plot on the Serial Plotter, comment out 'establishContact()'
establishContact(); // Wait for response from PC
t.every(10, takeReading); // Take a reading every 10 ms (100 Hz)
}
void loop() {
t.update();
}
void takeReading() {
volt = (analogRead(ch1) / 1023.0) * 5.0; // Convert 10-bit reading to volts
Serial.println(volt); // Print this value over Serial
}
void establishContact() {
// Send "B" until a response is heard
while (Serial.available() <= 0) {
Serial.println("B");
delay(300);
}
}
//
// Heart_Rate_Monitor.pde
// This code reads a heart rate signal over serial
// and graphs this data as well as calculating the
// beats per minute(BPM). The user can also record
// the data by clicking on the record button or by pressing
// the spacebar.
// Created By Ronan Byrne, https://roboroblog.wordpress.com/2016/09/05/simple-heart-rate-monitor/
// Last Updated 06/09/2016
//
import processing.serial.*;
import javax.swing.*;
Serial myPort; // Create Serial Object
// Define Variables
String portid, val;
float V, Vmax, Vmin, Vold, Vold2, Vthresh, t1, t2,
X, Y, oldX, oldY;
boolean port, saved, firstContact, record;
int i, beatMax, beatMin, beats, t, sample,
id;
Table readings; // Table for recorded samples
void setup() {
size(displayWidth, 600);// define screen size(X,Y)
// Look through the serial list for your COM port (your COM port may not be COM3)
printArray(Serial.list());
// Loop through list until your COM port is found
while (port == false) {
for ( i=0; i<Serial.list().length; i++) {
if (Serial.list()[i].equals("COM3") == true) {
portid = Serial.list()[i];
port = true;
}
}
if (portid == null) {// Alert the user that COM3 isn't connected
JOptionPane.showMessageDialog(null, "COM port not connected!!!");
delay(1000);
}
}
// Initialize the serial port and set the baudrate to 250000
myPort = new Serial(this, portid, 250000);
myPort.bufferUntil('\n');
// Set the voltage threshold at 3 V to detect beats
// You may need to change this value depending on the peak to peak
Vthresh = 3.0;
oldX = 0;
oldY = 550;
background(255);
record =false;
firstContact = false;
// Set the max and mins so that they'll be updated straight away
Vmax =0;
Vmin = 200;
beatMax = int(Vmax);
beatMin = int(Vmin);
// Create the table used for recording
readings = new Table();
readings.addColumn("id");
readings.addColumn("V");
}
void draw() {
if (myPort.available()>0) { // Wait until something is sent over serial
processData(); // Process data reads in the serial data
// Convert the value into a range of 0-500 pixels, changing the '550'
// value will change where 0V will be on the screen
Y=550-map(V, 0.0, 5.0, 0.0, 500);
// Set the X axis to 0 to 3s(300 samples @100Hz)
X = map(sample, 0.0, 300.0, 0.0, float(width));
if (X > width) { // If we go off the screen reset to the left of the screen
sample = 0;
X = 0.0;
oldX = -1.0;
background(255);
}
stroke(0);
line(oldX, oldY, X, Y); // draw the line for the current sample
// Create white box to cover previous BPM
fill(255);
stroke(255);
rectMode(CENTER);
rect(width/2, 40, 120, 30);
// Write new BPM over white box
String bpm = str(beats) + " BPM";
textAlign(CENTER);
stroke(0);
fill(0);
textSize(15);
text(bpm, width/2, 40);
oldX= X;
oldY = Y;
// Record Button
if (record ==false) { // Small Red Circle Within White Circle
stroke(0);
fill(255);
ellipse(displayWidth-50, 550, 50, 50);
stroke(255, 0, 0);
fill(255, 0, 0);
ellipse(displayWidth-50, 550, 10, 10);
} else { // White Square Within Red Circle
stroke(255, 0, 0);
fill(255, 0, 0);
ellipse(displayWidth-50, 550, 50, 50);
rectMode(CENTER);
stroke(255);
fill(255);
rect(displayWidth-50, 550, 20, 20);
}
if (keyPressed == true && key ==' ') {// If the space bar is pressed, record/stop recording (' ' stands for space bar in this instance)
if (record==false) {
println("Recording");
record = true;
delay(100); // delay to stop bouncing
} else {
// When recording is stopped, save the data to a .csv file named by the date and time
String date = hour()+"_"+minute()+"_"+second()+"_"+day()+month()+year();
String Dir = sketchPath("Heart_Beat_Monitor/"+date);
println(Dir+".csv"); // Print directory to be saved to
saveTable(readings, Dir+".csv"); // Save table as .csv
delay(100);
record = false;
id = 0; // Reset id value
}
}
}
}
void processData() {
// Make sure our data isn't empty before continuing
if (val != null) {
// Trim whitespace and formatting characters (like carriage return)
val = trim(val);
if (firstContact == false) { // Make contact with nano
firstContact = true;
myPort.clear();
myPort.write("A");
} else if (val.length() > 2) { // Check that the value is more than two characters
V =float(val);
heartBeat(); // Calculate Heart Rate
sample++;
t++;
// If recording, save values
if (record == true) {
id++;
TableRow row = readings.addRow(); // Store this sample in the table
row.setInt("id", id);
row.setFloat("V", V);
}
}
}
}
void heartBeat() {
// Find max and min voltages
if (V > Vmax) Vmax = V;
if (V < Vmin) Vmin = V;
// Calculating bpm
if (V > Vthresh) { // If the voltage is above the threshold
// and the current sample is smaller than the previous and the previous is greater than
// the sample before that (i.e. a peak)
if (V< Vold && Vold > Vold2) {
if (t1 == 0)t1 = t; // set time for first peak
else {
t1 = t2;
t2 = t;
// Clamp bpm so we dont get any very large or very small heart rates
if ( 130 > round(6000/(t2-t1))&& 40 < round(6000/(t2-t1))) {
// (t2-t1)*0.01 = s/beat
// 1/(s/beat) = beats/s
// 60*beats/s = BPM, below is the simplified version of that
beats = int(round(6000/(t2-t1)));
}
// Record max and min BPM
if (beats > beatMax) beatMax = beats;
if (beats < beatMin) beatMin = beats;
}
}
Vold2 = Vold; // Set old old value
Vold = V; // Set old value
}
}
void mousePressed() {
if (mouseButton == LEFT) {
// Toggle record if the left mouse button is clicked within the circle
// Find distance from center of circle
float disX = displayWidth-50 - mouseX;
float disY = 550 - mouseY;
println(disX + " " + disY);
if (sqrt(sq(disX)+sq(disY)) < 25) // check if distance is within radius
{
record =!record;
if (record == false) {
// When recording is stopped, save the data to a .csv file named by the date and time
// No debouncing necessary for mouse pressed
String date = hour()+"_"+minute()+"_"+second()+"_"+day()+month()+year();
String Dir = sketchPath("Heart_Rate_Monitor/"+date);
println(Dir+".csv"); // Print directory to be saved to
saveTable(readings, Dir+".csv"); // Save table as .csv
record = false;
id = 0; // reset id value
} else println("Recording");
}
}
}
Here we have collected our favourite sets, from the Mandelbrot set to the Mahut-Isner set!
A set is a collection of items. Sprinkled throughout Issue 04 of Chalkdust were some of our favourite sets. Here we have collected them together and we’d really love to hear yours. You can write about them at the bottom of this post!
### Mandelbrot set (Andrea Bertozzi)
During our interview with Andrea Bertozzi she shared with us why the Mandelbrot set is her favourite set: “My research group developed an efficient algorithm for tracking the boundaries of sets on different scales. I thought we could apply it to fractals so we demonstrated it on the Mandelbrot set.”
The figure above shows the Mandelbrot set computed on a 100 $\times$ 100 grid, with the grid refined all the way up to 1000 $\times$ 1000 pixels.
### Cantor set (Belgin Seymenoğlu)
If you want to construct my favourite set, start with the interval $[0, 1]$. Next, remove the open middle third interval. This gives you two line segments: $[0, 1/3]$ and $[2/3, 1]$. Again, delete the middle third for each remaining interval (which leaves you with four new intervals). Now repeat the final step ad infinitum.
Once you’re done, you’re left with the Cantor set (also called the Cantor comb). But what does the Cantor set look like? Infinitely many discrete points? An infinite collection of line segments? The answer is a bit of both, and that’s because the dimension of my favourite set is neither zero nor one: it’s $\ln2/\ln3\approx 0.63093$. The other wonderful feature of this set is that it’s a fractal, so if you zoom in on a tiny portion, you get the Cantor set again!
### The empty set (Rob Beckett)
My favourite set is the empty set ($\varnothing$). It is the only set that contains no elements. The empty set is a subset of every set, but the only subset of the empty set is itself.
The empty set is not nothing, but is a set that contains nothing. If I had a bag with multicoloured counters in, these are the elements of a non-empty set. On the other hand, if I had a bag with no counters in, there are no elements in the set and this is an example of the empty set.
### Numbers (Matthew Wright)
My favourite set is:
$$\hspace{-2mm}\{\{\},\{\{\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\}\},\{\{\},\{\{\}\}\}\},\{\{\},\{\{\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\}\},\{\{\},\{\{\}\}\}\}\}$$
or in other words, the number 5. In set theory, numbers are constructed as follows:
Zero is defined to be the empty set: $0 = \{\} = \varnothing$; one is the set containing the empty set: $1 = \{\{\}\}=\{\varnothing\}$; and the number $n$ is defined as the set containing all the previous numbers: $n = \{0,1,\ldots,n-1\}.$
As you can see, this definition causes things to get complicated quite quickly!
### $\aleph_1$ (Matthew Scroggs)
If you could count forever, you would reach infinity. You might be surprised to learn that there are bigger things than this infinity. My favourite set, $\aleph_1$ (aleph one), is the smallest set that is bigger than the counting infinity.
Cantor’s diagonal argument (if you’ve not heard of it, Google it!) can be used to show that there are more real numbers than natural numbers, and therefore that one infinite thing is bigger than another.
But $\aleph_1$ is even weirder than this. It is not known whether or not $\aleph_1$ and the real numbers are the same size. In fact, this is not just unknown, but it cannot be proven either way using the standard axioms of set theory! (The suggestion that they are the same size is called the continuum hypothesis.) So my favourite set is bigger than the smallest infinity—but we can’t work out by how much.
### Mahut-Isner set (Mattheas Recht)
My favourite set is the final set in the match between Nicolas Mahut and John Isner at Wimbledon in 2010. The match lasted over 11 hours, more than eight of them taken up by the final set, which finished 70–68 to Isner.
[Pictures 1 – adapted from Flickr.com –S5003248 by Voo de MarCC-BY 2.0; other pictures by Chalkdust]
# Agriculture
Started Agriculture.
1.
On the blog, nad has been talking about the area of land required to produce enough food. I have been collecting some info about this, as it applies to the UK. I found the answers surprising.
My vague idea was that if you became vegan, protein would be quite a problem. I thought you would end up eating a lot of legumes (beans, peas, lentils) because other vegetables either had too little protein, or a poor balance of amino acids. I also thought that the high-protein legumes required a warmer climate than the UK, so that getting enough protein in the UK might be tricky.
I am using nutrition data from http://ndb.nal.usda.gov/ndb/foods. All figures are per 100g.
Potatoes, microwaved, cooked in skin, flesh and skin, without salt
Energy 105kcal
Protein 2.44g
Fat 0.10g
So first surprise: if you got all your calories from potatoes, you'd get a bit too much protein. The recommended ratio is 2000kcal to 40g protein. Potato protein is also high quality.
Oats
Energy 389kcal
Protein 16.89g
Fat 6.90g
Protein overdose! (and again it is high quality). Need some fat.
Canola (rapeseed oil)
Energy 884kcal
Protein 0.00g
Fat 100.00g
It was harder to get info for yields. I used:
• Potatoes 42t/ha, figure from ukagriculture
• Oats 3.5t/ha, figure from Wikipedia
• Rapeseed oil 1.28t/ha, figure from Wikipedia, which says "Every ton of rapeseed yields about 400 kg of oil", and ukagriculture, which says the UK yield is about 3.2t/ha
(Another estimate is rapeseed oil 1200lt/ha from http://journeytoforever.org/biodiesel_yield.html.)
100m2 of potatoes yields 0.01*42000/365 = 1.15 kg/day
150m2 of oats yields 0.015*3500/365 = 0.144 kg/day
250m2 of rapeseed yields 0.025*1280/365 = 0.087 kg/day
providing
Energy 11.5*105 + 1.44*389 + 0.87*884 = 2540kcal/day
Protein 11.5*2.44 + 1.44*16.89 + 0.87*0 = 52g/day
Fat 11.5*0.10 + 1.44*6.90 + 0.87*100 = 98g/day
from 500m2.
These are a little above the recommended daily allowances. Obviously some vitamins and minerals are missing, but I'm surprised that porridge and chips is such a good basis!
It turns out fat is the biggest problem, which is not the impression you get from a supermarket.
Another surprise was that a UK wholefood catalogue which has a dozen kinds of beans most of which are hard to grow in the UK does not list broad beans (fava beans) which grow well in the UK.
2.
edited June 2014
Graham, incidentally I had just edited the project page and asked about the protein.... I am not sure though whether all proteins are the same. I can only speak for myself: I feel differently energized with different protein sources. I feel more energized after eating fish or organic chicken (I don't eat much meat, so I really feel a difference) than after eating any vegetable. Of the veggies I find soybeans (in the form of tofu) the most energising; I don't feel much energized after eating potatoes, but rather tired (even though I usually eat the potatoes with the skin... we had learned in school (Bavarian cooking class in an all-girls school) that most of the minerals, vitamins etc. of a potato are in the skin).
3.
I guess this is part of the explanation of your experience - the carbohydrates you get with veggie protein making you sleepy, cancelling any energised feeling.
4.
I guess this is part of the explanation of your experience - the carbohydrates you get with veggie protein making you sleepy, cancelling any energised feeling.
yes probably, but I guess the proteins in the potato come together with the carbs.
5.
edited June 2014
Graham - I hope you put this data on the wiki! You're right - it's quite surprising.
6.
I've added the data to the wiki, and organised the page a bit.
7.
These are a little above the recommended daily allowances. Obviously some vitamins and minerals are missing, but I’m surprised that porridge and chips is such a good basis!
I bet the vitamins and minerals are not so easy to get.
8.
When a kind of agricultural revolution started among the Ancient Pueblo Peoples in the American southwest, and people started eating lots of corn, populations skyrocketed but skeletons show damage from mineral deficiencies.
9.
Graham wrote:
It turns out fat is the biggest problem, which is not the impression you get from a supermarket.
Sorry I haven't got time to find the refs but a recent BBC One Horizon demonstrated tests showing that fat on its own or carbs on their own are not a problem.
The killer foods are doughnuts and deep-pan pizza with the endorphin-producing 50/50 fat/sugar mix.
10.
The killer foods are doughnuts and deep-pan pizza with the endorphin-producing 50/50 fat/sugar mix.
I'm just going into the garden to eat some ice-cream.
11.
+1