# How to horizontally align rotated text in a table?
I am trying to horizontally align text that has been rotated in a table. I want the rotated "Lorem Ipsum" and the rotated "Lorem Ipsum & Lorem Ipsum" text to be horizontally centered in the cell. When I adjust the width of each column manually, the alignment is no longer centered. I need to modify the width of each column so that my table will fit in the width of a single column.
I tried using the \multirow command from Table rotated text alignment but I was only able to modify the vertical alignment.
Here is what my table looks like:
Here is the code (apologies if I included some unnecessary packages):
\documentclass{article}
\usepackage{array}
\usepackage{tabularx}
\usepackage{rotating}
\usepackage{lipsum}
\usepackage{multirow}
\begin{document}
\newcommand\RotText[1]{\fontsize{9}{9}\selectfont \rotatebox[origin=c]{90}{\parbox{2.6cm}{\centering#1}}}
\newcolumntype{G}{>{\centering\arraybackslash}m{.0625cm}}
\newcolumntype{U}{>{\centering\arraybackslash}m{.375cm}}
{\centering
\begin{center}\begin{table}[ht]\caption{Lorem Ipsum Table}
\footnotesize
\centering
\begin{tabular}{|c|G|U|U|U|U|G|G|G|U|}
\hline
& \multicolumn{9}{c|}{Lorem Ipsum} \\
\cline{2-10}
Instruction & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} &
\RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} \\
\hline
Lorem Ipsum &
& & X & & & & & & \\
\hline
\end{tabular}
\end{table}
\end{center}
}
\lipsum
\end{document}
Apologies, I posted the old code. I have since fixed it. – sphere Mar 19 '14 at 16:08
REVISED SOLUTION
One of the problems with your MWE (and my earlier tweak of it) was that it did not account for the space allocated by default between columns, given by the length \tabcolsep. If a narrow table is desired, the first thing to do is turn that off, with \setlength\tabcolsep{0pt}. Then no \vspace tweaks are required, and the problem reduces to determining the column width that satisfies your requirement.
Here, I strove to make the columns as narrow as possible, again, with no tweaking. Note, I saved a copy of \tabcolsep into \svtabcolsep, if I need to reinstate it later.
\documentclass{article}
\usepackage{array}
\usepackage{tabularx}
\usepackage{rotating}
\usepackage{lipsum}
\usepackage{multirow}
\begin{document}
\let\svtabcolsep\tabcolsep
\setlength\tabcolsep{0pt}
\newcommand\RotText[1]{\fontsize{9}{9}\selectfont
\rotatebox[origin=c]{90}{\parbox{2.6cm}{%
\centering#1}}}
\newcolumntype{G}{>{\centering\arraybackslash}m{.35cm}}
\newcolumntype{U}{>{\centering\arraybackslash}m{.62cm}}
{\centering
\begin{center}\begin{table}[ht]\caption{Lorem Ipsum Table}
\footnotesize
\centering
\begin{tabular}{|c|G|U|U|U|U|G|G|G|U|}
\hline
& \multicolumn{9}{c|}{Lorem Ipsum} \\
\cline{2-10}
Instruction & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} &
\RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} \\
\hline
Lorem Ipsum &
& & X & & & & & & \\
\hline
\end{tabular}
\end{table}
\end{center}
}
\lipsum
\end{document}
Note you could even dispense with the G and U column types, making them instead c, and using a small, finite value of \tabcolsep to achieve your goal.
\let\svtabcolsep\tabcolsep
\setlength\tabcolsep{.3pt}
\newcolumntype{G}{c}
\newcolumntype{U}{c}
Not to be picky, but the rotated "Lorem Ipsum"s still aren't centered. – sphere Mar 19 '14 at 16:35
.0625cm wide table columns are distinctly odd, but you are suggesting changing them to be 0cm wide? – David Carlisle Mar 19 '14 at 16:40
Thanks looks good. I'm not a huge fan of "hacks" with magic numbers. I wish latex packages would have more automatic handling of centering. – sphere Mar 19 '14 at 16:44
@DavidCarlisle I am not a tabular whiz, so in this case, rather than redoing it from scratch, I started with the OP's code, and "tweaked." Tweaking is under-rated. I was doing it long before that Miley whats-her-name girl. – Steven B. Segletes Mar 19 '14 at 16:45
@sphere latex will centre automatically with no problems, but not if you ask it to centre visible text in a column that is only half a millimetre wide; it can't centre it, so it sticks out one side. – David Carlisle Mar 19 '14 at 16:55
Your box was far wider than the specified width of the column, so centring could not work. Also, don't put the table inside a center environment: it will float away, leaving spurious vertical space from the center display with nothing in it.
\documentclass{article}
\usepackage{array}
\usepackage{tabularx}
\usepackage{rotating}
\usepackage{lipsum}
\usepackage{multirow}
\begin{document}
\newcommand\RotText[1]{\rotatebox[origin=c]{90}{\parbox{2.6cm}{\centering#1}}}
\newcolumntype{G}{>{\centering\arraybackslash}m{.6cm}}
\newcolumntype{U}{>{\centering\arraybackslash}m{.6cm}}
\setlength\extrarowheight{3pt}
\begin{table}[ht]\caption{Lorem Ipsum Table}
\footnotesize
\centering
\begin{tabular}{|c|G|U|U|U|U|G|G|G|U|}
\hline
& \multicolumn{9}{c|}{Lorem Ipsum} \\
\cline{2-10}
Instruction & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} &
\RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} \\
\hline
Lorem Ipsum &
& & X & & & & & & \\
\hline
\end{tabular}
\end{table}
\lipsum
\end{document}
Thank you for your help, however this doesn't fix my problem. The columns are too wide now. You modified the newcolumntype and when I go back to adjust it, the same problem occurs. – sphere Mar 19 '14 at 16:21
@sphere well make them smaller (but not as small as you had it, which doesn't leave room for visible text at all; .0625cm is really very narrow). Latex warns on the command line for every overfull box, so just reduce the font size till it fits. – David Carlisle Mar 19 '14 at 16:39
If you really want to pack them in tight...
\documentclass{article}
\usepackage{array}
\usepackage{tabularx}
\usepackage{rotating}
\usepackage{lipsum}
\usepackage{multirow}
\begin{document}
\newcommand\RotText[1]{\fontsize{9}{9}\selectfont \rotatebox[origin=c]{90}{\parbox{2.6cm}{\centering#1}}}
\newcolumntype{C}{@{\hspace{2pt}}c@{\hspace{1pt}}}
\begin{table}[ht]\caption{Lorem Ipsum Table}
\footnotesize
\centering
\begin{tabular}{|c|C|C|C|C|C|C|C|C|C|}
\hline
& \multicolumn{9}{c|}{Lorem Ipsum} \\
\cline{2-10}
Instruction & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} &
\RotText{Lorem Ipsum \& Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum} & \RotText{Lorem Ipsum \& Lorem Ipsum} \\
\hline
Lorem Ipsum &
& & X & & & & & & \\
\hline
\end{tabular}
\end{table}
\lipsum
\end{document}
|
## Precalculus: Mathematics for Calculus, 7th Edition
(a) $\frac{3+a}{3}=\frac{1}{3}(3+a)=1+\frac{a}{3}$, hence $\frac{3+a}{3}=1+\frac{a}{3}$. (b) When $x=2$, $\frac{2}{4+x}=\frac{2}{6}=\frac{1}{3}$; however, $\frac{1}{2}+\frac{2}{x}=\frac{1}{2}+\frac{2}{2}=\frac{1}{2}+1=\frac{3}{2}$. Hence $\frac{1}{3}\ne \frac{3}{2}$, so LHS $\ne$ RHS, and therefore $\frac{2}{4+x}\ne \frac{1}{2}+\frac{2}{x}$.
|
## Discovery Coincidence
Some of you are wondering what I’m working on while on retreat. Well, actually there’s a nice coincidence here. I’m working on the graphic book that you may have heard me talk about a bit. “The Project” as I sometimes call it. I’ve been doing things on various aspects of it, such as reworking the description of it for various people to look at, writing new bits, and spending a bit of time pulling together various bits of the prototype story I used to start all of this back in 2010. The prototype bits have all of my experimentation and development of style and technique all over them, and so there are pages that needed a bit of rework (to say the least). So, on Monday, […] Click to continue reading this post
## Discovery Clarification
[Update: Over the months following the announcement, doubt was cast over exactly what BICEP2 saw, and now it seems that the signal announced by BICEP2 is consistent with polarisation produced by galactic dust. See here.]
I’m actually in hiding and silence for a week. It is Spring Break and I have locked myself away in a seaside town to do some writing, as I did last year. But I must break my silence for a little while. Why? Well there’s been a really great announcement in physics today and while being very happy that it is getting a lot of press attention – and it should since the result is very important and exciting – I’ve been stunned by how confusingly it has been reported in several news reports. So I thought I’d say a few things that might help.
But first, let me acknowledge that there’s a ton of coverage out there and so I don’t need to point to any press articles. I will just point to the press release of the BICEP2 collaboration (yes, that’s what they’re called) here, and urge you once you’ve read that to follow the link within to the wealth of data (images, text, graphs, diagrams) that they provide. It’s fantastically comprehensive, so knock yourself out. The paper is here.
I keep hearing reports saying things like “Scientists have proved the Big Bang”. No. The Big Bang, while an exciting and important result for modern cosmology, is very old news. (You can tell since there’s even a TV comedy named after it.) This is not really about the Big Bang. This is about Inflation, the mechanism that made the universe expand rapidly from super-tiny scales to more macroscopic scales in fractions of a second. (I’ll say more about the super-tiny below).
I also hear (slightly more nuanced) reports about this being the first confirmation of Inflation. That's a point we can argue about, but I'd say that's not true either. We've had other strong clues that Inflation is correct. One of the key things that pops out of inflation is that it flattens out the curvature of the universe a lot, and the various observations that have been made about the Cosmic Microwave Background over the years (the CMB is the radiation left over from when the universe was very young, about 380,000 years old; remember the universe is just under 14 billion years old!) have shown us that the universe is remarkably flat. Another previous exciting result in modern cosmology. Today's result isn't the first evidence.
So what is today’s exciting news about then? The clue to the correct […] Click to continue reading this post
## Collecting the Cosmos
Don’t forget that on the USC campus on Friday at 4:00pm, we’ll be kicking off the Collecting the Cosmos event! It will be in the Doheny library, and there’ll be a presentation and discussion first, and then a special opening reception for the exhibition. Be sure to get yourself on the waiting list since there’s some chance that you’ll get in even if you have not RSVPed yet. (The image is from the Visions and Voices event site, and includes parts of the artworks – by artists Victor Raphael and Clayton Spada – to be included in the exhibition, so come along and see.) The event description says, in part: […] Click to continue reading this post
## Big History is Coming!
You’ll recall that I was in New York a short while ago to film some promotional material for a new TV series. It is called Big History, and it will be on History Channel’s H2 channel (and eventually on various international channels, but I’ve no idea which – similar ones to where you find the other show I’ve mentioned a bit, The Universe, I expect).
Rather than be primarily about astronomical and cosmological things, the show will focus each week on one of a list of specific items that have affected our history, and take the long view about that item. How long a view? The longest possible! So take something like Salt, and examine its role in civilization and culture, bringing in historians, anthropologists, etc… and physical scientists to trace that object back to its roots in the early universe… (the big bang, the cores of stars, etc.) Update: For you Breaking Bad fans, note that it'll be narrated by Bryan Cranston, by the way.
Here’s one of the promo videos:
## Weinberg on Physics Now
I just spotted (a bit late) that Steven Weinberg (one of the giants of my field) has written a piece in the New York Review of Books entitled “Physics: What We Do and Don’t Know”. I recommend it. He talks about astronomy, cosmology, particle physics, and by casting his eye over the arc of their recent (intertwined) histories of ideas, experiments and discoveries, tries to put the Standard Models of particle physics and of cosmology into perspective.
The article is […] Click to continue reading this post
## TED Youth Talk – Hidden Structures of the Universe
You might recall that last year I gave a talk at TED Youth, in their second year of short TED talks aimed at younger audiences. You’ll recall (see e.g. here and here) I made a special set of slides for it, composed from hundreds of my drawings to make it all in graphic novel style, and somehow trying to do (in 7 minutes!!) what the TED people wanted.
They wanted an explanation of string theory, but when I learned that I was the only person in the event talking about physics, I kind of insisted that (in a year when we’d discovered the Higgs boson especially!) I talk more broadly about the broader quest to understand what the world is made of, leaving a brief mention of string theory at the end as one of the possible next steps being worked on. Well, they’ve now edited it all together and made it into one of the lessons on the TED Ed site, and so you can look at it. Show it to friends, young and old, and remember that it is ok if you don’t get everything that is said… it is meant to invite you to find out more on your own. Also, as you see fit, use the pause button, scroll back, etc… to get the most out of the narrative.
I’m reasonably pleased with the outcome, except for one thing. WHY am I rocking […] Click to continue reading this post
## Known Unknowns Decreased a Bit
Well, the day is here. The Planck collaboration has announced a huge amount of results for the consumption of the scientific community and the media today. The Planck satellite looks with unprecedented precision at the very earliest radiation (“cosmic microwave background radiation”, CMB) from the universe when it was very young (a wee, cute 380,000 years old) and helps us deduce many things about what the universe was like then, and what it is like now. Here’s one of the representations of the universe using the new sky mapping Planck did (image courtesy ESA/Planck):
There’s a ton of data, and a raft of papers with analysis and conclusions. And there’s a very nice press release. I recommend looking at it. It is here, and the papers are here. The title of the press release is “Planck reveals an almost perfect Universe”, and some of the excitement is in the “almost” part. A number of anomalies that were hinted at by the previous explorer of the CMB, WMAP, seem to have been confirmed by Planck, and so there are some important things to be understood in order to figure out the origin of the anomalies (if they ultimately turn out to be real physics and not data artefacts). [Update: Andrew Jaffe has two nice posts I recommend. One on the science, and the other on the PR. Jester also has a nice post on the science from a particle physicist’s perspective.]
What is the title of my post referring to? Well, the refined measurements have allowed us to update some of the vital statistics of the universe. First, it is a bit older than previous measurements have indicated. The age is now measured as 13.82 billion years. (I’m already updating pages in the draft of my book…) Second, the proportion of ingredients […] Click to continue reading this post
## Heaven’s Parameters
Oh… I forgot to get around to letting you know the result of designing the universe required in a previous post. The result is that it is a radiation (“light”) filled universe with positive cosmological constant $$\Lambda$$ (and so space wants to expand due to negative pressure, much like ours seems to be doing). The radiation density wants the thing to collapse. There’s a balance between the two, and it turns out that balance occurs when the two densities (radiation and vacuum energy) are equal. This is only possible when there is positive curvature for the universe (so, not like ours), as you can see from the Friedmann equation if you were that way inclined. So the universe is a 3-sphere, and if you work it out, the radius of this 3-sphere turns out to be $$a=\left(\frac{3}{2\Lambda}\right)^{1/2}$$. The temperature of the radiation is then computed using the usual Stefan-Boltzmann relation.
The equality of densities turns out to result from the fact that the effective potential of the equation is at a maximum, and so this universe turns out to be unstable… It is a radiation-filled version of Einstein’s matter-filled static universe, which is also unstable. It is larger than Einstein’s by a factor of $$\sqrt{3/2}$$.
Einstein was said to have arrived at his static universe on the grounds of what he thought was observationally clear – the universe was unchanging (on large scales). Hubble […] Click to continue reading this post
## Project Heaven
This is an extra homework that some students of the General Relativity class did to make up for one that did not count earlier in the semester. While writing it, I realized that this universe is in fact, Heaven! You know, we become beings of light, and live forever, etc…
I thought it would be fun to share its final form:
“You work in the design section of the company that manufactures universes. (This is […] Click to continue reading this post
## The 2011 Nobel Prize in Physics
Ok. So who was surprised by this one? My hand is not up… is yours? (That’s a screen shot from the Nobel Prize site to the left. More here. Cheeky of me, but it’s early in the morning and I’ve got to pack, shower, and cycle like mad to the subway to get to my train to Santa Barbara, so time is of the essence.)
I was pretty sure that this would be the prize sometime very soon, although I’ll not say that I knew it would be this year’s for sure. It is well deserved, since this was a genuinely major change in how everyone in the field thinks about the universe, and we’re still trying to get to grips with it today. The acceleration of the universe that they […] Click to continue reading this post
## Multiverse Musings
As you may know already there’ll be a new NOVA series on PBS in the Fall, based on one of Brian Greene’s books, The Fabric of the Cosmos. Last Fall I did a shoot with them for my role in it (I’ve no idea how much they will use), and I learned a short while ago that they’ll be using some of it on the NOVA website too. They extracted some parts of the on-camera interview segments I did concerning the idea of multiple universes and transcribed them into something you can read online. Have a look here. I touch on the idea in a fragmented way, mostly being led by the questions I was asked, but it’s a fun topic to chat about, and may lead you in interesting directions should you wish to learn more, so have a look.
A word on the picture they are using (er…see above left). It seems to be one that the […] Click to continue reading this post
## Heretic…?
We had a really interesting discussion of the quantum physics of de Sitter spacetime yesterday here in Aspen, starting with a review of the behaviour of scalar fields in such a background, led by Don Marolf, and then, after lunch, an open-ended discussion led by Steve Shenker. This is all quite difficult, and is of course quite relevant, since a piece of de Sitter is relevant to discussions of inflation, which seems (from cosmological observations) to have been a dominant phase of the very early universe. As the most symmetric space with positive cosmological constant, de Sitter may also be relevant to the universe today, since dark energy (first recognized after 1998’s observations of the universe’s accelerating expansion) may well be accounted for by a positive cosmological constant.
So we need to understand this type of spacetime really well… and it seems that we don’t. Now there’ve been a lot of people looking at all this and doing really excellent work, and they understand various issues really well – I am not one of them, as I’ve not worked on this in any detail as yet. Do look at the papers of Marolf, and of Shenker, and collaborators, and references therein, and catch up with what’s been going on in your own way. For what it is worth, the sense that I get is that we’re trying to solve very difficult issues of how to interpret various quantum features of the spacetime and getting a lot of puzzles by trying to make it look a lot like things we’ve done before.
Now, we may solve all these puzzles…. but my current take on this all is that we’re […] Click to continue reading this post
## Passing Star People
You might not know the name Maurice Murphy, but I am certain that you are likely to know – and maybe even be very familiar with – his work. His is the principal trumpet playing the lead themes in very many films with music by John Williams. I have for a long time been very impressed with how so many of those themes trip so easily off the tongue (physical or mental) and seem to fit together so well (just hum the Star Wars theme, and then follow it by the Superman theme, then the Indiana Jones theme, and so on). A lot of this is due to the fact that Williams (like most good composers) is a master at recycling and modifying, creating a cluster of much loved (deservedly) themes that accompany some of our favourite movie-going memories, but I now think that the other reason is that you’re hearing them all played by the same voice! That voice is the playing of Maurice Murphy, the truly wonderful trumpeter who Williams would specifically request to play the lead on recordings of his film music. Murphy died recently, and you can dig a bit more about him and explore what I’ve been telling you further by going to the London Symphony Orchestra’s site devoted to him […] Click to continue reading this post
## Minority Report
This is a quick update on the school. I’ve been trying to give the students some of the core concepts they need to help them understand what string theory is, how it works, and what you can do with it. Here’s the really odd thing about all this (and an explanation of the post title): While this is a school on Quantum Gravity, after talking with the students for a while one learns that in most cases the little they’ve heard about string theory is often essentially over 20 years out of date and almost always totally skewed to the negative, to the extent that many of them are under the impression that string theory has nothing to do with quantum gravity at all! It is totally bizarre, and I suspect it is largely a result of things that are said and passed around within their research community. So there […] Click to continue reading this post
## Planck Matters
You can read a bit about the work of my colleague Elena Pierpaoli and her postdocs and students in this article in one of USC’s in-house publications. It focuses on the Planck observatory (image right from NASA/ESA), which we’ve discussed here before. (Recall the launch?) There’s a lot of exciting physics about the very young universe to be discovered as more data from the mission get gathered and analyzed.
Enjoy the article!
|
# What is the eccentricity of a conic section? How can you classify conic sections by eccentricity? How does eccentricity change the shape of ellipses and hyperbolas?
hosentak
Step 1: Any conic section can be defined as the locus of points whose distances to a point (the focus) and a line (the directrix) are in a constant ratio. That ratio is called the eccentricity of the conic section, often denoted e.
Step 2: The eccentricity of an ellipse which is not a circle is greater than zero but less than 1. If $e=1$ then it is a parabola. If $e>1$ then it is a hyperbola [the eccentricity of a hyperbola can be any real number greater than 1, with no upper bound; the eccentricity of a rectangular hyperbola is $\sqrt{2}$]. If $e=0$ then it is a circle.
Step 3: If an ellipse is close to circular it has an eccentricity close to zero. If an ellipse has an eccentricity close to one it has a high degree of ovalness. The eccentricity of a conic section tells us how close it is to being in the shape of a circle: the farther the eccentricity is from 0, the less the shape looks like a circle. An ellipse looks like a compressed circle; when compared with the other two conic shapes, it most closely resembles a circle. Similar reasoning deduces that a parabola would be next closest, and a hyperbola the farthest from a circle in shape.
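For quick reference, the classification and the standard formulas can be written compactly (these are standard facts, added here for convenience rather than taken from the answer above): a conic with focus F and directrix D consists of the points P with $\frac{PF}{PD}=e$, and then $e=0$ gives a circle, $0<e<1$ an ellipse, $e=1$ a parabola, and $e>1$ a hyperbola. For central conics with semi-axes $a$ and $b$, $e_{\text{ellipse}}=\sqrt{1-\frac{b^2}{a^2}}$ and $e_{\text{hyperbola}}=\sqrt{1+\frac{b^2}{a^2}}$; as $b\to a$ the ellipse's eccentricity tends to $0$ and the shape tends to a circle, which makes the "closeness to a circle" remark in Step 3 quantitative.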
|
# How would I swap a character when a player touches a collider?
I wrote a simple script to attempt to do this. I created an empty game object, applied the script to it, and dragged both of my sprites into the script. I receive no errors, but nothing happens when my player enters the collider, and the "Calm" version of the sprite is still active.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class JetAnglerSwap : MonoBehaviour {

    public GameObject CalmAngler;
    public GameObject AngryAngler;

    // Use this for initialization
    void Start () {
    }

    void OnTriggerEnter2d(Collider2D other)
    {
        if (other.CompareTag("Player"))
        {
            SwapAnglers();
        }
    }

    void SwapAnglers()
    {
        CalmAngler.gameObject.SetActive(false);
        AngryAngler.gameObject.SetActive(true);
    }
}
• To diagnose this, we'll need to see how you've set up the two colliding objects, their colliders, tags, Rigidbody if applicable. – DMGregory Dec 10 '18 at 12:13
• I got it figured out. I'm going to update my post but I'm not with my pc at the moment. – Mark Gregg Dec 10 '18 at 22:34
• Just as an FYI, you're allowed to answer your own questions; so please post the solution as an answer, not as an edit, so people can easily tell the difference between problem and solution in the future. – Stephan Dec 11 '18 at 16:35
Well, my workaround here was to create an empty game object called AggroArea and apply this script to it, along with a Box Collider 2D the size of my water area. When the player jumps in, it immediately swaps from one of my public enemy sprites to the other. It does this by counting the player's contacts with the water's collider. I have the option to make it do something in the OffState function, but as of now I don't need OffState to do anything.
using UnityEngine;

public class JetAnglerSwap : MonoBehaviour {

    int playerCount = 0;
    public GameObject CalmAngler;
    public GameObject AngryAngler;

    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.tag == "Player")
        {
            playerCount++;
        }
        else
        {
            playerCount = 0;
        }
    }

    void Update()
    {
        if (playerCount == 2)
        {
            // default on state
            OnState();
        }
        else
        {
            // default off state
            OffState();
        }
    }

    private void OnState()
    {
        CalmAngler.gameObject.SetActive(false);
        AngryAngler.gameObject.SetActive(true);
    }

    private void OffState()
    {
    }
}
• You might not want to use the Update function for this though; You could just check the value of playercount and call OnState() or OffState() accordingly ;) – user115399 Dec 13 '18 at 6:00
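Following up on that comment, here is a minimal event-driven sketch (untested; the class name is hypothetical, and it reuses the field names from the script above, assuming the swap only ever needs to happen once, on entry):

using UnityEngine;

public class JetAnglerSwapEventDriven : MonoBehaviour {

    public GameObject CalmAngler;
    public GameObject AngryAngler;

    // React once, at the moment the player enters the trigger,
    // instead of polling a counter in Update() every frame.
    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.CompareTag("Player"))
        {
            CalmAngler.SetActive(false);
            AngryAngler.SetActive(true);
        }
    }
}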
|
# Problem with torque, angular momentum and forces
1. Jun 19, 2017
### Granger
• Homework Help Template added by Mentor
1. The problem statement, all variables and given/known data
I have the following problem to solve:
A 1.8m board is placed in a truck with one end resting against a block secured to the floor and the other one leaning against a vertical partition. The angle the Determine the maximum allowable acceleration of the truck if the board is to remain in the position shown.
If you put this problem on google you can find an image (if it helps). The truck moves from left to right.
3. The attempt at a solution
So I first thought that both the velocity and the acceleration of the board are directed to the right.
The forces acting on the body are its weight, and the normal reactions that the vertical partition and the block exert on the body (which are equal).
Then putting this on equations:
x direction: $$ma_x=N\cos\theta-N$$
and y direction: $$0=N\sin\theta -mg$$
We have 2 equations and 3 unknowns (N, a and m).
We need a 3rd equation which is
$\frac{d}{dt}L_{system}=\sum(\tau_{net})$
(these are supposed to be vectors)
And so if we choose the bottom block as reference point to gives us angular momentum and torques we have (and this is the equation I'm not sure about)
$$-m\frac{l}{2}a\sin \theta= lN\sin(105^\circ) - \frac{l}{2}mg\sin(165^\circ)$$
(the plus and minus sign appear because of the direction of torque and the direction of angular momentum are given by the right hand rule for cross products).
This leading me to a system of 3 equations.
However, if I try to solve this system (for example, isolating the mass in eq. (1) and substituting into eq. (2)), I end up with N=0 and therefore m=0, which is absurd. Can someone help me figure out this problem?
Thanks!
Last edited by a moderator: Jun 19, 2017
2. Jun 19, 2017
### Granger
Anybody?
3. Jun 19, 2017
### haruspex
Your description of the set up is garbled and rather unclear. If you found an image on the net, please post the link.
4. Jun 19, 2017
### Granger
Yes you're right, here it is:
5. Jun 19, 2017
### haruspex
Why would they be equal?
6. Jun 19, 2017
### scottdave
In your initial problem statement, some info got left out " The angle the Determine the maximum " What is the angle? Where did you get the 105 and 165 degrees in your sin(105) and sin(165) ?
7. Jun 19, 2017
### scottdave
What if you approached it like this: pretend the forward partition is not there, and find the necessary acceleration in order to keep the board at the desired angle.
If the board is hovering in relation to the truck, then the center of mass of the board is accelerating at the same rate as the truck.
This is essentially the condition at just slightly faster acceleration, when the board just starts to lift away from the partition.
8. Jun 19, 2017
### Granger
I thought they were equal because the board was fixed (so they equilibrate each other). Thinking more about it, this is probably wrong because of all the 3 forces (weight, normal reaction and force of the truck).
You're right, I forgot to write the angle. It should say: The angle the board makes with the base of the truck. Determine the maximum allowable acceleration of the truck if the board is to remain in the position shown.
I didn't quite understand your approach, sorry. Can you elaborate just a bit more, please?
9. Jun 19, 2017
### haruspex
Not sure what you mean by "force of the truck".
The weight and the normal reaction from the vertical surface are two forces, yes.
I interpret the question as saying the bottom of the board rests on the floor of the truck (a vertical normal force) and against a block that is also on the floor (providing a second, horizontal normal force). You can combine those as a single force if you prefer, but its line of action need not be at the same angle as the board. Better to treat them as two separate forces.
And these forces will not be in balance - there is an acceleration.
10. Jun 19, 2017
### Granger
Oh ok! I thought it was just a single normal reaction.
I think I'm having trouble relating all the forces. I'm simply not understanding the boundary condition and how to relate the torques.
11. Jun 19, 2017
### haruspex
Assign a unique label to each force.
Write the ΣF=ma equation for the horizontal direction. (The vertical direction is trivial.)
Take moments about the board's mass centre and write the Στ=Iα equation (sum of torques = moment of inertia x angular acceleration).
Note that because there is a linear acceleration you must use the mass centre as the axis for torque balance!
12. Jun 20, 2017
### Granger
Hi! I was finally able to solve the problem.
The equation that relates the sum of torques and the derivative of the angular momentum is enough to calculate the acceleration.
What I was doing wrong was considering the torque of the normal reaction at B, which is zero in the boundary condition of the board falling (it loses contact at B).
Thank you all for your contributions to help me understand the problem!
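For readers following along, here is a sketch of the torque balance haruspex describes, under one reading of the geometry: $\theta$ is the angle the board makes with the truck floor, and at the maximum acceleration the board is about to lose contact at the top, so the partition's normal force is zero. Then only the floor's normal force N and the block's horizontal force F act, both at the bottom end:
$$F = ma, \qquad N = mg$$
Taking moments about the mass centre (both forces act a distance $l/2$ from it, and the board has no angular acceleration):
$$F\,\frac{l}{2}\sin\theta - N\,\frac{l}{2}\cos\theta = 0 \quad\Rightarrow\quad a_{max} = \frac{g}{\tan\theta}$$
This is only a sketch; the signs and which contact is lost first depend on the actual figure.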
|
# Stereo Display Regions¶
A StereoDisplayRegion is a special kind of DisplayRegion that contains two views internally: one view for the left eye, and a different view for the right eye. If you have a special 3-D display device, then Panda can use it to deliver each view to the appropriate eye.
Alternatively, you can also simply create the two required views independently, one DisplayRegion for the left eye, and a separate DisplayRegion for the right eye. However, creating a single StereoDisplayRegion for both eyes at the same time is often more convenient.
When you call window.makeDisplayRegion(), it will implicitly return a StereoDisplayRegion instead of a regular DisplayRegion if your window or buffer indicates that it supports stereo output (that is, if window.isStereo() returns true). There are four ways that you can have a graphics output that supports stereo output:
(1) You have a special 3-D display device and the drivers to support it, and you put framebuffer-stereo 1 in your Config.prc file. This tells Panda to activate the OpenGL interfaces to enable the 3-D hardware.
(2) You put red-blue-stereo 1 in your Config.prc file. This tells Panda to render the two different eyes in two different colors, so that the traditional red-blue (or red-cyan) glasses, for instance for 3-D comic books, can be used to view the scene in 3-D. Color is distorted, so it is best if your scene relies on unsaturated color palettes. Shades of gray work particularly well.
(3) You put side-by-side-stereo 1 in your Config.prc file. This is similar to red-blue-stereo, above, but the two views are rendered side-by-side in the same window. This is useful for developing stereo applications, so you can see each view easily; it may also be useful for environments such as head-mounted displays where the output spans two different displays, and each display represents a different eye.
(4) As of Panda3D 1.9.0, you may create a stereo off-screen buffer without special hardware support, assuming the card supports using multiple render targets (most modern cards do), by setting the stereo flag in the FrameBufferProperties object. Panda3D will automatically designate one of the draw buffers to contain the stereo view for the other eye. When binding a texture to the color attachment for render-to-texture, Panda3D will automatically initialize it as a multiview texture containing both left and right views. This is only supported in OpenGL as of writing.
## Using a StereoDisplayRegion¶
A StereoDisplayRegion actually consists of two ordinary DisplayRegions, created automatically. If you need to, you can access them individually with sdr.getLeftEye() or sdr.getRightEye().
Both the left and the right eye DisplayRegions actually share the same Camera object. The thing that makes the view different for the left and the right eyes is the stereo channel setting, which you can set via dr.setStereoChannel(). (You can change this setting on any DisplayRegion you like; it doesn’t have to be a special StereoDisplayRegion. The only thing that a StereoDisplayRegion does is it manages the internal left and right DisplayRegions automatically, but there’s no reason you need to use a StereoDisplayRegion if you want to manage them yourself.)
You can set a DisplayRegion’s stereo channel to one of Lens.SC_left, Lens.SC_right, or Lens.SC_mono. The default for a non-stereo DisplayRegion is Lens.SC_mono, which means the normal view from the center of the camera. If you set it to either left or right, then the point of view is slid automatically to the left or right, respectively, according to the stereo lens parameters.
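Putting the pieces above together, a minimal sketch in Python might look like this (assuming the standard ShowBase setup, and using the names given above; note that the Config.prc variable has to be set before the window is opened):

from panda3d.core import loadPrcFileData, Lens
loadPrcFileData("", "red-blue-stereo 1")  # make the main window stereo-capable

from direct.showbase.ShowBase import ShowBase
base = ShowBase()

# On a stereo window, makeDisplayRegion() returns a StereoDisplayRegion.
sdr = base.win.makeDisplayRegion()
sdr.setCamera(base.cam)  # both eyes share this camera by default

left = sdr.getLeftEye()
right = sdr.getRightEye()

# The same setting can also be applied by hand to ordinary DisplayRegions:
left.setStereoChannel(Lens.SC_left)
right.setStereoChannel(Lens.SC_right)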
Setting the stereo channel to left or right also resets the texture view offset associated with the DisplayRegion: the default tex view offset is 0 for the left eye, and 1 for the right eye. This allows dual-view stereo textures to render properly in the DisplayRegion, so that the left view is visible in the left eye and the right view in the right eye. See Stereo/Multiview Textures for more about this feature.
The lens parameters can be controlled via Lens.setInterocularDistance() and Lens.setConvergenceDistance(), or by the equivalent Config.prc settings default-iod and default-converge. Refer to the following illustration:
In this image, the camera indicated with “C” is the center view, the normal view from the center of the camera view in the case of Lens.SC_mono. “L” and “R” represent the left and right points of view for the same camera, which will be used in the case of Lens.SC_left or Lens.SC_right. The distance between these two eyes, line “a” on the image, is the interocular distance, which should be in the same units as the scene you are viewing.
The gray lines on the image represent the direction the camera appears to be facing into the scene. Both the left and the right eyes converge together at one point, which is the convergence distance. This distance is represented by line “b” on the image. Generally, the objects that are this distance away will appear to be in the screen plane. Objects that are closer than the convergence distance will appear to float in front of the screen, while objects that are further than the convergence distance will appear to be inside the screen.
Note that the default stereo frustums that Panda creates are off-axis frustums, not toe-in frustums. That is, both the left and the right eyes are still pointing in the precise same direction as the center camera, but the frustum is distorted a bit to make objects converge approximately at the requested distance. This is generally regarded as producing a superior stereo effect over the more naive toe-in approach, in which the left and right eyes are simply tilted towards each other to provide the required convergence.
If you require a different stereo frustum–for instance, if you wish to use toe-in stereo, or some other kind of stereo frustum of your choosing–you may simply set each DisplayRegion to use its own camera (instead of both sharing the same camera), and assign the particular frustum you wish to each eye.
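Continuing the sketch above, that per-eye arrangement might look like this (hypothetical node names; each eye gets its own Camera, so each can be given whatever frustum you choose):

from panda3d.core import Camera, PerspectiveLens

# One lens per eye; configure each frustum however you like (e.g. toe-in).
left_lens = PerspectiveLens()
right_lens = PerspectiveLens()

left_cam = base.camera.attachNewNode(Camera("left_eye", left_lens))
right_cam = base.camera.attachNewNode(Camera("right_eye", right_lens))

sdr.getLeftEye().setCamera(left_cam)
sdr.getRightEye().setCamera(right_cam)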
Note
Prior to Panda3D 1.9.0, the convergence was being calculated incorrectly. It has since been corrected. To restore the legacy behavior you can set the stereo-lens-old-convergence variable to true.
|
# Math Help - almost discontinuous function
1. ## almost discontinuous function
Define f: R -> R by f(x) = 5x if x is rational and f(x) = x^2 + 6 if x is irrational. Prove that f is discontinuous at 1 and continuous at 2. Are there any other points besides 2 at which f is continuous?
2. Originally Posted by slowcurv99
Define f: R -> R by f(x) = 5x if x is rational and f(x) = x^2 + 6 if x is irrational. Prove that f is discontinuous at 1 and continuous at 2. Are there any other points besides 2 at which f is continuous?
In any neighbourhood of 1 there are rational and irrational points, so there are points arbitrarily close to 1 where f(x) is arbitrarily close to 5, and other points arbitrarily close where f(x) is close to 7, so f must be discontinuous at x=1. So, if you want, you can find a sequence x_n of rational points converging to 1, and for this sequence f(x_n) converges to 5; similarly you can find a sequence of irrational points y_n which converges to 1, and for this sequence f(y_n) converges to 7, thus proving that f is not continuous at x=1.
All sequences x_n converging to 2 have f(x_n) converging to 10, so f(x) is continuous at x=2.
Similarly, the function is continuous wherever 5x = x^2 + 6, that is, wherever x^2 - 5x + 6 = (x-2)(x-3) = 0, so it is continuous at x=2 and x=3.
The above should be convertible into whatever form you use to demonstrate
continuity without too much difficulty.
RonL
|
# The Communication Complexity of Approximate Set Packing and Covering
## Presentation on theme: "The Communication Complexity of Approximate Set Packing and Covering"— Presentation transcript:
The Communication Complexity of Approximate Set Packing and Covering
Noam Nisan Speaker: Shahar Dobzinski
Communication Complexity
n players, computationally unlimited. Each player i holds some private input Ai. The goal is to compute some function f(A1,…,An). We count only the number of bits transmitted between the players. Worst case analysis.
Communication Complexity – Equality
2 players (Alice and Bob). Input: Alice holds a string A ∈ {0,1}^n, Bob holds a string B ∈ {0,1}^n. Question: is A=B? How many bits are required? Upper Bound? Lower Bound?
Equality Lower Bound Denote an instance by (A,B).
Lemma: For each T ≠ T' ∈ {0,1}^n, the sequence of bits for (T,T) is different than the sequence of bits for (T',T'). The answer for both (T,T) and (T',T') is YES. Proof: Suppose that there are T,T' such that the sequences are identical.
Equality Lower Bound – cont.
What happens when the instance is (T,T')? Alice sends the first bit: the same bit in (T,T') as in (T,T). Bob then sends the same bit for T as for T'. Same goes for Alice in the next round. Corollary: the sequence of bits is the same for (T,T') and for (T,T). But (T,T') is a NO instance and (T,T) is a YES instance - a contradiction.
Equality Lower Bound We proved that for each T ≠ T' ∈ {0,1}^n, the sequence of bits for (T,T) is different than the sequence of bits for (T',T'). There are 2^n different such sequences. log(2^n) = n is a lower bound for the number of bits needed.
Combinatorial Auctions
n bidders, a set M={1,…,m} of items for sale. Each bidder has a valuation function v_i : 2^M -> R^+. Standard assumptions: Normalized: v(∅)=0. Monotonicity: v(T) ≥ v(S) for S ⊆ T. Goal: a partition of M, S1,…,Sn, such that Σ_i v_i(S_i) is maximized. We will call Σ_i v_i(S_i) the total social welfare.
Combinatorial Auctions – cont.
Problem: the input is “exponential” - we are interested in algorithms that are polynomial in n and m. Two approaches: bidding languages (example: single-minded bidders) and communication complexity.
Upper Bound Give all items to the bidder i that maximizes v_i(M).
Proposition: this is an n-approximation to the optimal total social welfare. Proof: denote the optimal allocation by O1,…,On. Then n · max_i v_i(M) ≥ Σ_{i=1}^n v_i(M) ≥ Σ_i v_i(O_i) = OPT.
Lower Bound – 2 Bidders Theorem: For any e>0 any (2-e)-approximation to the total social welfare requires exponential communication. Two bidders with valuations v1 and v2. The valuations will have the following form: v(S) = |S|<m/2 0/1 |S|=m/2 |S|>m/2 Denote by vc the “dual” of v: vc(Sc) = |S|<m/2 1-v(S) |S|=m/2 |S|>m/2 For every allocation M=SSc, v(S)+vc(Sc)=1.
Main Lemma Lemma: Let v1 and v2 be two different valuations. The sequence of bits for (v1,v1c) is different than the sequence of bits for (v2,v2c). Proof: Suppose the sequences are identical. Then the sequence of bits for (v1,v2c) is the same too (same reasoning as before). The allocation produced for (v1,v1c), (v2,v2c), (v1,v2c), (v2,v1c) is the same.
Main Lemma – cont. There is a bundle T, |T| = m/2, such that v1(T) ≠ v2(T). WLOG v1(T)=1 and v2(T)=0. Thus v2c(Tc)=1, and the optimal solution for (v1,v2c) is 2. The protocol generated an optimal allocation (S,Sc), so v1(S)+v2c(Sc)=2. But (v1(S)+v1c(Sc)) + (v2(S)+v2c(Sc)) = 1+1 = 2, so v1c(Sc)+v2(S)=0. A contradiction to the optimality of the protocol.
The Lower Bound – cont. If v1 ≠ v2 then the sequence of bits for (v1,v1c) is different than the sequence of bits for (v2,v2c). The number of different valuations is 2^(m choose m/2). Since for each (v,vc) we have a different sequence of bits, the communication complexity is at least log(2^(m choose m/2)) = (m choose m/2) = exp(m).
Corollaries Optimal solution requires exponential communication.
A (2-ε)-approximation of the total social welfare requires exponential communication; tight for 2 bidders. Unconditional lower bound, even if P=NP.
Lower Bound – General Number of Bidders
Theorem: Any approximation of the optimal total social welfare to a factor better than min(n, m^{1/2-ε}), for any ε>0, requires exponential communication. This lower bound holds not only for deterministic communication, but also in the randomized and non-deterministic settings.
Approximate Disjointness
n players, each holds a string of length t. The string of player i specifies a subset A_i ⊆ {1,…,t}. The goal is to distinguish between the following two extreme cases: NO: ∩_i A_i ≠ ∅; YES: for every i≠j, A_i ∩ A_j = ∅.
Approximate Disjointness – cont.
Theorem: Approximate disjointness requires communication complexity at least Ω(t/n^4). This lower bound also holds in the randomized and non-deterministic settings. (Alon-Matias-Szegedi) Theorem: Approximate disjointness requires communication complexity at least Ω(t/n). (Radhakrishnan-Srinivasan)
Proof (Approx. Disj.) – Equality Matrix
(Matrix with rows A ∈ {0,1}^3 and columns B ∈ {0,1}^3: Y on the diagonal where A = B, N everywhere else.)
Proof (Approx. Disj.) – Another Example for Matrix
(A second matrix over the same rows and columns, with another pattern of Y and N entries, used as a further example.)
Proof (Approx. Disj.) – Rectangles
Definition: a (combinatorial) rectangle is a cartesian product R1×…×Rn, where each R_i is a set of possible inputs of player i. Definition: a monochromatic rectangle is a rectangle which doesn’t contain both YES instances and NO instances. Lemma: the log of the number of monochromatic rectangles needed to cover all instances is a lower bound for the communication complexity. (We proved a special case before.)
Proof – Approximate Disjointness
There are (n+1)^t YES instances (instances where for every i≠j, A_i ∩ A_j = ∅): a YES instance is a partition of the t items between (n+1) “players” (the n players plus “unallocated”). Lemma: any rectangle which does not contain a NO instance can contain at most n^t YES instances. Corollary: there are at least (1+1/n)^t monochromatic rectangles. Corollary: the communication complexity of approximate disjointness is at least log((1+1/n)^t) = t·log(1+1/n) = Ω(t/n).
Proof – Approximate Disjointness
Lemma: any rectangle which does not contain a NO instance can contain at most n^t YES instances. Reminder: a NO instance is one with ∩_i A_i ≠ ∅. Proof: Fix such a rectangle R. For each item j there must be a player i who never gets j anywhere in R; otherwise, we could combine inputs to form a NO instance. Upper bound on the number of YES instances: each item can only be allocated among the remaining (n-1) players or left “unallocated” – n^t in total.
The Combinatorial Auction
We will prove that it requires exponential communication to distinguish between the case the total social welfare is 1 and the case that it is n. We will reduce from the approximate-disjointness with strings of size t (to be determined later).
The Partitions Set We will use a set of partitions F = {P^s | s = 1…t}. Each P^s is a partition P^s_1,…,P^s_n of M. A set of partitions F has the pairwise intersection property if for every choice of i≠j and every s_i≠s_j, P^{s_i}_i ∩ P^{s_j}_j ≠ ∅; i.e., every two parts from different partitions intersect. Example (m=9, n=3): P^1: {1,2,3}, {4,5,6}, {7,8,9}; P^2: {1,4,7}, {2,5,8}, {6,9,3}; P^3: {2,5,8}, {3,6,9}, {1,4,7}.
Existence of the partitions set
Lemma: Such a set F exists with |F| = t = e^{m/2n^2}/n^2. Proof: using the probabilistic method. For each partition, place each element independently at random in one part of the partition. Fix i≠j, s_i≠s_j, and an item: Pr[the item is in both P^{s_i}_i and P^{s_j}_j] = 1/n^2. The probability that the two parts do not intersect: Pr[P^{s_i}_i ∩ P^{s_j}_j = ∅] = (1 - 1/n^2)^m ≤ e^{-m/n^2}.
Existence – cont. Previous slide: Pr[P^{s_i}_i ∩ P^{s_j}_j = ∅] ≤ e^{-m/n^2}.
We have at most n^2·t^2 choices of indices. Using the union bound: Pr[some pair of parts doesn't intersect] ≤ n^2·t^2·e^{-m/n^2}. Choose t = e^{m/2n^2}/n^2 (i.e., exponential in m/n^2). Then Pr[some pair of parts doesn't intersect] < 1, so Pr[all pairs of parts intersect] > 0, and such a set exists.
The Reduction We reduce the approximate disjointness problem to a combinatorial auction (m items, n bidders). Each player i who got A_i as input constructs the collection B_i = {P^s_i | s ∈ A_i}. Define the valuations as: V_i(S) = 1 if there exists T ∈ B_i with T ⊆ S, and 0 otherwise. Example: suppose A_1 = 101; then the first bidder values at 1 all bundles which contain {1,2,3} or {2,5,8}, and the rest of the bundles at 0. (Recall P^1: {1,2,3}, {4,5,6}, {7,8,9}; P^2: {1,4,7}, {2,5,8}, {6,9,3}; P^3: {2,5,8}, {3,6,9}, {1,4,7}.)
The Reduction – cont. NO instance (∩_i A_i ≠ ∅): there is some k ∈ ∩_i A_i. Assign P^k_i to bidder i, and the total social welfare is n. YES instance (for every i≠j, A_i ∩ A_j = ∅): the total social welfare is at most 1, since any two parts from different partitions intersect. Corollary: It requires exponential communication to distinguish between the case where the total social welfare is 1 and the case where it is n.
Remarks We used strings of size t = e^{m/2n^2}/n^2, thus the communication complexity is Ω(e^{m/2n^2 - 5·log(n)}). If n < m^{1/2-ε}, the communication complexity is exponential. Corollary: For any ε>0, an m^{1/2-ε}-approximation requires exponential communication. An m^{1/2}-approximation algorithm exists.
Set Cover A universe of size |M|=m.
n players, each holds a collection A_i ⊆ 2^M. Goal: find the minimum-cardinality set cover. Upper bound: the greedy algorithm is a ln(m)-approximation. Lower bound: a reduction from approximate disjointness.
Lower Bound 2 players (Alice and Bob).
Alice holds a collection A ⊆ 2^M, and Bob holds a collection B ⊆ 2^M. We will prove that it requires exponential communication to distinguish between the case where 2 sets are needed to cover M and the case where at least r+1 sets are needed (for r = log(m) - O(log log(m))). We will require the following class of subsets of M:
The r-Covering Class A class C={(S1,S1c),…,(St,Stc)} has the r-Covering property if every collection of at most r sets, which does not contain a set and its complementary, does not cover all M.
Existence Lemma: For any given r ≤ log(m) - O(log log(m)), there is a class C with t = e^{m/(r·2^r)}. Proof: probabilistic construction. Put each element of the universe in each set S_j independently with probability ½. For a random collection of r sets, the probability that a single element is in their union is 1 - 2^{-r}, so the probability that their union is all of M is (1 - 2^{-r})^m ≤ e^{-m/2^r}. There are at most (2t choose r) collections, so we need (2t choose r)·e^{-m/2^r} < 1. We can choose t = e^{m/(r·2^r)}.
The Reduction We reduce from the approximate disjointness problem with strings of size t. Alice constructs the collection D = {S_i | A_i = 1}, and Bob constructs the collection E = {S_i^c | B_i = 1}. NO instance (A ∩ B ≠ ∅): there is some k ∈ A ∩ B; Alice holds S_k and Bob holds S_k^c, and these two sets cover the universe. YES instance (A ∩ B = ∅): no set appears together with its complement, so by the r-covering property at least r+1 sets are needed to cover the universe. Corollary: It requires exponential communication to distinguish between the case where 2 sets cover the universe and the case where at least r+1 sets are needed.
|
# How Old?
##### Stage: 2 Challenge Level:
Cherri, Saxon, Mel and Paul are friends. They are all different ages: $5, 6, 7$ and $8$ years old.
Can you find out the age of each friend?
Use the grid below to help you keep track of your answers as you follow these clues.
Saxon's age is an even number.
Mel and Paul's ages added together are double Saxon's age.
Mel's age is half of Cherri and Saxon's ages added together.
Mel and Paul's ages are both odd numbers.
Cherri is the oldest.
|
# Are $x=-\frac{m}{n}$ and $-x=\frac{m}{n}$ the same?
I was wondering: is
$x=-\frac{m}{n}$
the same as
$-x=\frac{m}{n}$
The question popped into my mind when I had
$x=-\frac{11}{14}$ or $-x=\frac{11}{14}$
as an answer to one of my equations. Whether I got $x$ or $-x$ depended only on which side I put the numbers and $x$'s in my calculations.
• When solving an equation, you're usually expected to denote the value of $x$, not the value of $-x$. – barak manos Jan 14 '16 at 10:22
• If you want to quote the value of $x$ then you would write $x = -11/14$ (which is negative). By writing $-x = 11/14$ then you are saying that (the different number) $-x$ equals $11/14$ (which is positive). – Winther Jan 14 '16 at 10:27
• Try Googling "multiplicative reflexive axiom". Reason as follows $$x = - \frac{m}{n}\implies cx= c(-\frac{m}{n})$$ Now let $c = -1$ – John Joy Jan 14 '16 at 14:32
## 2 Answers
In both cases the sign of $x$ is the same. If $\frac{m}{n}>0$ Then $$x = -\frac{m}{n}<0 \Rightarrow x<0$$ In the second equation $-x = \frac{m}{n}$ we also have that $x<0$ since $$-x = \frac{m}{n} \Rightarrow x = -\frac{m}{n}$$ The same can be done when $\frac{m}{n}<0$.
They are the same in the sense that those equations are equivalent, i.e. they have the same solutions.
|
# Changeset 1400 for Deliverables
Ignore:
Timestamp:
Oct 18, 2011, 12:14:02 PM (8 years ago)
Message:
more added on parameters
File:
1 edited
Unmodified
Added
Removed
• ## Deliverables/D4.2-4.3/reports/D4-3.tex
r1399 We mentioned in the Deliverable D4.2 report that all joint languages are parameterised by a number of types, which are later specialised to each distinct intermediate language. As this parameterisation process is also dependent on design decisions in the language semantics, we have so far held off summarising the role of each parameter. We now summarise what each parameter is. We begin the abstraction process with the \texttt{params\_\_} record. This holds the types of the representations of the different register varieties in the intermediate languages: \begin{lstlisting} record params__: Type[1] ≝ }. \end{lstlisting} We summarise what these types mean: We summarise what these types mean, and how they are used in both the semantics and the translation process: \begin{center} \begin{tabular*}{\textwidth}{p{4cm}p{11cm}} \texttt{acc\_a\_reg} & The type of the accumulator A register. In some languages this is implemented as the hardware accumulator, whereas in others this is a pseudoregister.\\ \texttt{acc\_b\_reg} & Similar to the accumulator A field, but for the processor's auxiliary accumulator, B. \\ \texttt{dpl\_reg} & \\ \texttt{dph\_reg} & \\ \texttt{pair\_reg} & \\ \texttt{generic\_reg} & \\ \texttt{dpl\_reg} & The type of the representation of the low eight bit register of the MCS-51's single 16 bit register, DPL. Can be either a pseudoregister or the hardware DPL register. \\ \texttt{dph\_reg} & Similar to the DPL register but for the eight high bits of the 16-bit register. \\ \texttt{pair\_reg} & Various different `move' instructions have been merged into a single move instruction in the joint language. A value can either be moved to or from the accumulator in some languages, or moved to and from an arbitrary pseudoregister in others. This type encodes how we should move data around the registers and accumulators. \\ \texttt{generic\_reg} & The representation of generic registers (i.e. those that are not devoted to a specific task). \\ \texttt{call\_args} & \\ \texttt{call\_dest} & \\ \texttt{extend\_statements} & \texttt{extend\_statements} & Instructions that are specific to a particular intermediate language, and which cannot be abstracted into the joint language. \end{tabular*} \end{center}
Note: See TracChangeset for help on using the changeset viewer.
|
# Google Music and Apple Lossless
From the start, I’ve had some trouble managing my music collection between Apple’s ecosystem and Google’s ecosystem. The initial version of Google’s Music Manager (used to upload songs from iTunes to Google Play Music) did not support ALAC, though it did support FLAC. However, save for some plugins that often have issues, iTunes doesn’t support FLAC.
The easiest solution at the time seemed to be converting all my FLAC media to ALAC, hosting it on iTunes, and hoping one day Google Music Manager would support ALAC. And it did, some time later.
However, a niggle. Google Music Manager only supports 16-bit depth ALAC, not 24-bit. This seems to be a rather arbitrary limitation, as ALAC only supports two bit depths of 16-bit and 24-bit, but I suppose one should be grateful ALAC is supported at all. It may also be some licensing limitation, or a cross-platform compatibility issue with the encoders available on Windows and OS X.
A few of my tracks (including the excellent FTL soundtrack by Ben Prunty) were provided as 24-bit FLAC and thus transparently converted by every converter I had handy (Max on OSX and dbPowerAmp on Windows) to 24-bit ALAC with no option to downconvert.
I thought I might use afconvert, OSX's built-in audio conversion binary that leverages CoreAudio. However, it was not clear how to force 16-bit rather than 24-bit depth for ALAC.
Thanks to this obscure posting on Apple’s mailing lists, there is an undocumented (as far as I could find) encoder flag for ALAC that forces 16-bit depth, which is exactly what I needed.
There are a few commands that are useful to know. afinfo will let you know if your file has 24 or 16-bit depth, in a command similar to this:
afinfo Ben\ Prunty\ Music\ -\ FTL\ -\ 02\ MilkyWay\ \(Explore\).m4a
And this would give you output like:
----
File: afinfo
Fail: AudioFileOpenURL failed
File: Ben Prunty Music - FTL - 02 MilkyWay (Explore).m4a
File type ID: m4af
Num Tracks: 1
----
Data format: 2 ch, 44100 Hz, 'alac' (0x00000001) from 16-bit source, 4096 frames/packet
Channel layout: Stereo (L R)
estimated duration: 160.370363 sec
audio bytes: 13984607
audio packets: 1727
bit rate: 697471 bits per second
packet size upper bound: 10333
maximum packet size: 10333
audio data file offset: 12288
optimized
audio 7072333 valid frames + 0 priming + 1459 remainder = 7073792
source bit depth: I16
----
The final line, source bit depth: I16, indicates this file has an integer source bit depth of 16 (as opposed to floating-point, which I don’t believe the ALAC format supports, though others may). If it were 24-bit, it would look like source bit depth: I24.
Using afconvert, you can convert the file this way:
afconvert -d alac/1 song.m4a song.m4a
This will read your file (in this example song.m4a) and overwrite it with an ALAC-encoded file with 16-bit source depth (you can, of course, change the second instance of song.m4a to song_16.m4a to avoid overwriting your original file). The /1 encoder flag is what triggers this. I did find one piece of documentation that refers to the availability of encoder flags, but not which flags are accepted. It’s likely mentioned somewhere in CoreAudio’s docs, or possibly in the header files elsewhere in CoreAudio.
Combine this command with find and you can convert all the files in your current subdirectory recursively.
find ./ -name '*.m4a' -exec afconvert -d alac/1 {} {} \;
Be careful though with this, as it will do this conversion for all .m4a files, regardless of their bit depth. Also, it will do so serially, which is a bit of a waste. Two options:
1. Pipe the output of find to something like parallel (GNU Parallel, which you’ll likely have to add to your OSX machine via Homebrew or MacPorts). This will kick off an encoder per CPU core on your machine, which should make the process much faster. I’ve not tested this, but it should look something like: find ./ -name '*.m4a' | parallel afconvert -d alac/1 {} {}.
2. Run find ./ -name '*.m4a' -exec afinfo {} \; -exec echo {} \; and pipe the output of that command to grep, looking for the line “source bit depth: I24” with the flag -A 2 (print the two lines following each match along with the result). This should give you lines that contain the source bit depth, the line with dashes that follows, and the echo of the filename that we wrote out in the -exec part of find. Pipe to grep again, looking for lines with m4a, and you should get a nice list of all the files with 24-bit depth. Pipe that to xargs or parallel, which will execute afconvert on each line of output. Something that (vaguely) should look like this (again, this is untested!): find ./ -name '*.m4a' -exec afinfo {} \; -exec echo {} \; | grep -A 2 "source bit depth: I24" | grep m4a | parallel afconvert -d alac/1 {} {}. It seems really complicated, but take the command apart at every pipe and you can see what each part is doing.
|
# How to change the maximum number of forked processes per user in Linux
Whenever I login to shell, I get this error
-bash: fork: Resource temporarily unavailable
-bash-3.2$

I can't seem to execute any command which uses fork(). I tried ulimit -u as it doesn't use fork, and it returned 35. Somehow my max process limit is set to 35. I want to increase that, but I don't know where to make that change.

-

## migrated from stackoverflow.com Mar 2 '13 at 6:34

This question came from our site for professional and enthusiast programmers.

## 5 Answers

If you would like to change the limit only for that shell, you could run:

sudo ulimit -u 1000

If you want to make a more permanent change, you need to edit either /etc/limits.conf or /etc/security/limits.conf (depending on your Linux distro) and add the following lines:

username hard nproc 1000

Substitute username with the actual user name. Instead of username a group name can also be used if you prefix it with an @. If you use * it would be the default for all users. Examples:

myuser hard nproc 1000
@mygroup hard nproc 3000
* hard nproc 500

-

i tried this sudo ulimit -u 1000 and it again says fork not available. do i need to execute that as root – user1146320 Mar 2 '13 at 2:24

Ahh, sudo again requires to fork a process - dang! I guess the only other way is to modify the limits.conf by directly logging in as root. – Tuxdude Mar 2 '13 at 2:28

my that file is empty, all lines are comments only – user1146320 Mar 2 '13 at 2:33

Yes, then add a new line in that file for the user with the values you're interested in. – Tuxdude Mar 2 '13 at 2:34

i tried that and its not working, do i need to restart something to make that work – user1146320 Mar 2 '13 at 4:46

This can be changed in /etc/security/limits.conf. Look for lines of the form:

username hard nproc 25
@groupname hard nproc 100

These lines limit the user username to 25 processes and users in group groupname to 100 processes. You will need root permissions on the machine though.

-

my that file is empty, is there any other system who can limit the process – user1146320 Mar 2 '13 at 2:32

Here are some ideas: if your limits.conf is empty, run grep -l ulimit /etc/* $HOME/.* 2> /dev/null to check if someone has set a ulimit somewhere, and remove it.
After editing limits.conf, all you have to do is to logout and login again to take effect.
To gain a process, use exec. Try for instance exec sudo su to become root.
-
It should be enough to (re)set the ulimit, no need to change configuration (let alone system-wide configuration under /etc). And 35 processes should be plenty, something is wrong with the login process of OP. – vonbrand Mar 2 '13 at 11:21
@vonbrand - absolutely. You should post this as the answer with possible things to check. +1 – jim mcnamara Mar 2 '13 at 11:36
i had two files, /etc/profile and /etc/bashrc; this was the line: ulimit -n 100 -u 35 -m 200000 -d 200000 -s 8192 -c 200000 -v unlimited 2>/dev/null – user825904 Mar 2 '13 at 11:43
It should be enough to (re)set the ulimit, no need to change configuration (let alone system-wide configuration under /etc). And 35 processes should be plenty, something is wrong with the login process of OP.
In a terminal run ps -au, that should show all processes running as you, check the list (or post it here) to see if something strange is going on.
-
When I look at my process list, I have over 40 processes, started by the gnome environment alone. Not counting the processes I started, like firefox, emacs, bash, ... So, a ulimit of 35 isn't really that much nowadays, if you use a graphical environment. – Olaf Dietsche Mar 2 '13 at 16:00
@OlafDietsche, Fedora 18 Gnome 3 here; I'm running Firefox, 2 xterms (one with 6 tabs) and an assortment of applets. 17 processes in all. – vonbrand Mar 2 '13 at 16:04
Then Fedora is setup a lot leaner than Ubuntu (my system) by default, or you have trimmed your environment before. – Olaf Dietsche Mar 2 '13 at 16:13
@OlafDietsche, no trimming here. – vonbrand Mar 2 '13 at 16:20
As others already mentioned look at limits.conf. When you login into Gnome, KDE or any other GUI, you have likely more than 35 processes running already.
Logout from the GUI and switch to a VT with Ctrl-Alt-F1, for example, and login without a GUI.
Now you should be able to look into /etc/security/limits.conf. If it is empty or all commented out, you can look, if there's something in the directory /etc/security/limits.d, which has reduced the ulimit.
On the console, you should also be able to start additional processes for editing or adjusting limits.conf or files in limits.d.
-
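A footnote on what these settings actually control: ulimit -u and the nproc lines in limits.conf set the kernel resource limit RLIMIT_NPROC, and when fork() fails because that limit is reached it returns -1 with errno set to EAGAIN — which bash reports as the "Resource temporarily unavailable" message above. A process can inspect the limit directly; here is a minimal sketch using standard POSIX calls (illustrative code, not taken from this thread):

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    /* RLIMIT_NPROC: the maximum number of processes this user
       may have running at once (what ulimit -u reports). */
    if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft nproc limit: %llu\n", (unsigned long long)rl.rlim_cur);
    printf("hard nproc limit: %llu\n", (unsigned long long)rl.rlim_max);

    /* An unprivileged process may lower either limit, or raise the
       soft limit up to the hard limit via setrlimit(); raising the
       hard limit requires root (CAP_SYS_RESOURCE on Linux). */
    return 0;
}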
|
# openmp example c
OpenMP does most of the work for you: write #pragma omp parallel in front of a statement or block and it will be run in parallel by a team of threads. In C/C++/Fortran, parallel programming can be achieved using OpenMP; in this article we will learn how to create a parallel Hello World program with it.

OpenMP (Open Multi-Processing) is an Application Program Interface (API) that supports Fortran and C/C++. It is designed for multi-processor (or multi-core), shared-memory machines; the underlying architecture can be UMA or NUMA. OpenMP consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior. In C/C++, OpenMP uses #pragmas. Because OpenMP is built into the compiler, no external libraries need to be installed in order to compile this code. Nowadays OpenMP is supported by the most popular C/C++ compilers: gcc, icc, and the PGI compiler; OpenMP maintains a list of compilers that support it and the supported versions. (OpenMP SIMD, introduced in the OpenMP 4.0 standard, additionally targets making vector-friendly loops; see SIMD Extension to C++ OpenMP in Visual Studio.)

An OpenMP program has sections that are sequential and sections that are parallel. In general, an OpenMP program starts with a sequential section in which it sets up the environment, initializes the variables, and so on. When run, an OpenMP program will use one thread in the sequential sections and several threads in the parallel sections. The main thread is the master thread; the parts of the code marked to run in parallel cause slave threads to fork. Each thread executes the parallelized section of the code independently. When a thread finishes, it joins the master; when all threads have finished, the master continues with the code following the parallel section.

The pragma omp parallel is used to fork additional threads to carry out the work enclosed in the construct in parallel. It basically says: "I want the following statement/block to be executed by multiple threads at the same time." Depending on the current CPU specification (number of cores) and a few other things (process usage), some number of threads will be generated to run the statement block in parallel; after the block, all threads are joined.

Steps to create a parallel program: include the OpenMP header omp.h along with the standard header files, mark the parallel regions with #pragma omp parallel, and compile with an OpenMP-aware compiler.

If you want to specify explicitly the number of threads that execute the parallel block, use the num_threads() clause. Note that the argument doesn't have to be constant; you can pass almost any expression, including variables. For example, #pragma omp parallel num_threads(6) guarantees six threads are generated. To get the number of total running threads in the parallel block, you can use the function omp_get_num_threads(); each thread can obtain its own ID with omp_get_thread_num(), provided in the header file omp.h.

Variables defined inside the parallel block are separate copies between threads. If you move data outside the block, threads may interleave their modifications of it, and its value is no longer guaranteed to be unique per thread. You can fix this with locks, mutexes, or critical sections; the easiest method is the omp critical directive, which makes sure only one thread executes the critical section at a time. You can also tell the compiler which variables should be private, so each thread has its own copy, using the private clause, e.g. #pragma omp parallel private(data, id, total) num_threads(6).

Note that you need a compiler which supports OpenMP to run these examples. Compiling needs only an extra flag, e.g. gcc -fopenmp hello_openmp.c -o hello_openmp, after which you run the generated executable hello_openmp. Depending on OS CPU task scheduling, you may see the printed messages interleave each other from run to run.
|
# 2.2.15 mcopy
## Contents
Right-click on the matrix object, select Copy to...
## Brief Information
Copy data and attributes from one matrix to another
## Command Line Usage
1. mcopy im:=mat(1) om:=mat(2);
2. mcopy fullcopy:=0;
## Variables
| Display Name | Variable Name | I/O and Type | Default Value | Description |
|---|---|---|---|---|
| Input Matrix | im | Input, MatrixObject | <active> | Specifies the input matrix object. |
| Output Matrix | om | Output, MatrixObject | [<input>]<new> | Specifies the output matrix object. See the syntax here. |
| Copy Attributes | fullcopy | Input, int | 1 | Specifies whether to copy the attributes of the input matrix to the output matrix. If this variable is set to 0, only the data will be copied. Option list: 0 = false, 1 = true. |
| X Coordinate | x | Input, int | 0 | Specifies the x coordinate of the upper-left point of the region the user is interested in. Only used with the Region of Interest tools. |
| Y Coordinate | y | Input, int | 0 | Specifies the y coordinate of the upper-left point of the region the user is interested in. Only used with the Region of Interest tools. |
| ROI Width | w | Input, int | 0 | Specifies the width of the region the user is interested in. Only used with the Region of Interest tools. |
| ROI Height | h | Input, int | 0 | Specifies the height of the region the user is interested in. Only used with the Region of Interest tools. |
| Copy Formula | formula | Input, int | 0 | Specifies whether to copy the formula of the input matrix to the output matrix. Option list: 0 = false, 1 = true. |
## Description
This X-Function can be used to copy a matrix to another matrix. With the fullcopy variable, you can choose whether or not to copy the attributes/properties of the source matrix.
Keywords: sub range
|
# To The Moon!
Algebra Level 2
If you have a piece of paper that is $0.1\text{ mm}$ thick, then how many times will you have to fold it in half in order for it to become tall enough to reach the moon?
Note: The distance from the earth to the moon is $384400\text{ km}$.
Round the value you compute to the nearest whole number before entering it in the answer box.
|
Message ID: 11016 Entry time: Thu Feb 12 19:18:49 2015 In reply to: 11012 Reply to this: 11020
Author: Jenne Type: Update Category: LSC Subject: New Locking Paradigm - Loop-gebra
EDIT, JCD, 17Feb2015: Updated loop diagram and calculation: http://131.215.115.52:8080/40m/11043
Okay, Koji and I talked (after he talked to Rana), and I re-looked at the original cartoon from when Rana and I were thinking about this the other day.
The original idea was to be able to actuate on the MC frequency (using REFL as the sensor), without affecting the ALS loop. Since actuating on the MC will move the PSL frequency around, we need to tell the ALS error signal how much the PSL moved in order to subtract away this effect. (In reality, it doesn't matter if we're actuating on the MC or the ETMs, but it's easier for me to think about this way around). This means that we want to be able to actuate from point 10 in the diagram, and not feel anything at point 4 in the diagram (diagram from http://131.215.115.52:8080/40m/11011)
This is the same as saying that we wanted the green trace in http://131.215.115.52:8080/40m/11009 to be zero.
So. What is the total TF from 10 to 4?
${\rm TF}_{\rm (10 \ to \ 4)} = \frac{D_{\rm cpl} + {\color{DarkRed} A_{\rm refl}} {\color{DarkGreen} P_{\rm als}}}{1-{\color{DarkRed} A_{\rm refl} G_{\rm refl} S_{\rm refl} P_{\rm refl}} - {\color{DarkGreen} A_{\rm als} G_{\rm als} S_{\rm als}} ({\color{DarkGreen} P_{\rm als}} + D_{\rm cpl} {\color{DarkRed} G_{\rm refl} P_{\rm refl} S_{\rm refl}})}$
So, to set this equal to zero (ALS is immune to any REFL loop actuation), we need $D_{\rm cpl} = - {\color{DarkRed} A_{\rm refl}} {\color{DarkGreen} P_{\rm als}}$.
Next up, we want to see what this means for the closed loop gain of the whole system. For simplicity, let's let $H_* = A_* G_* S_* P_*$, where * can be either REFL or ALS.
Recall that the closed loop gain of the system (from point 1 to point 2) is
${\rm TF}_{\rm (1 \ to \ 2)} = \frac{1}{1-{\color{DarkRed} A_{\rm refl} G_{\rm refl} S_{\rm refl} P_{\rm refl}} - {\color{DarkGreen} A_{\rm als} G_{\rm als} S_{\rm als}} ({\color{DarkGreen} P_{\rm als}} + D_{\rm cpl} {\color{DarkRed} G_{\rm refl} P_{\rm refl} S_{\rm refl}})}$ , so if we let $D_{\rm cpl} = - {\color{DarkRed} A_{\rm refl}} {\color{DarkGreen} P_{\rm als}}$ and simplify, we get
${\rm TF}_{\rm (1 \ to \ 2)} = \frac{1}{1-{\color{DarkRed} H_{\rm refl}} - {\color{DarkGreen} H_{\rm als}} + {\color{DarkRed} H_{\rm refl}}{\color{DarkGreen} H_{\rm als}}}$
This seems a little scary, in that maybe we have to be careful about keeping the system stable. Hmmmm. Note to self: more brain energy here.
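One observation that takes some of the scariness away (simple algebra, easy to check): the denominator factorizes,

$1-{\color{DarkRed} H_{\rm refl}} - {\color{DarkGreen} H_{\rm als}} + {\color{DarkRed} H_{\rm refl}}{\color{DarkGreen} H_{\rm als}} = (1-{\color{DarkRed} H_{\rm refl}})(1-{\color{DarkGreen} H_{\rm als}})$

so with the decoupling in place the combined closed loop gain is just the product of the two individual closed loop gains, and each loop's stability can be assessed on its own.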
Also, this means that I cannot explain why the filter wasn't working last night, with the guess of a complex pole pair at 1Hz for the MC actuator. The ALS plant has a cavity pole at ~80kHz, so for our purposes is totally flat. The only other thing that comes to mind is the delays that exist because the ALS signals have to hop from computer to computer. But, as Rana points out, this isn't really all that much phase delay below 100Hz where we want the cancellation to be awesome.
I propose that we just measure and vectfit the transfer function that we need, since that seems less time consuming than iteratively tweaking and checking.
Also, I just now looked at the wiki, and the MC2 suspension resonance for pos is at 0.97Hz, although I don't suspect that that will have changed anything significantly above a few Hz. Maybe it makes the cancellation right near 1Hz a little worse, but not well above the resonance.
|
# Running external program from command line
This command works on Windows:
ReadList["!" <> "echo hello", String]
Presumably the "!" invokes the Windows command shell (verified as $SystemShell). However, I tried the above code in msys.bat, which is a Unix-type shell on Windows, and it did not work. How can I do the above operation if it is not a Windows command shell? Do I need to replace "!" or change $SystemShell somewhere?
I have tried RunProcess (or ReadList) as:
path = "C:\\MinGW\\msys\\1.0\\msys.bat";
RunProcess[path, "StandardOutput", "echo Hello world
exit
"]
This opens msys.bat in a new window, and Mathematica keeps running.
• Perhaps reference.wolfram.com/language/ref/RunProcess.html can be useful. – ilian May 14 '15 at 7:45
• @ilian I tried RunProcess[], it didn't work; edited question. – kamuli May 14 '15 at 16:41
• I know that RunProcess only runs executables, not batch files (which must be interpreted by the windows shell). ReadList may have the same problem. – 2012rcampion May 14 '15 at 16:56
• Maybe the process executed could be cmd /c msys.bat or similar? – ilian May 14 '15 at 16:59
• I'm not very familiar with MinGW, the parent project of MSYS, but my understanding is that msys.bat is only a wrapper for starting a bash shell, but it's not a command interpreter itself, so you can't really pass it shell commands to it. – MarcoB May 14 '15 at 18:42
One approach could be to bypass msys.bat (which I think is primarily concerned with setting up the interactive console) and start the shell executable directly:
In[1]:= shell = "C:\\MinGW\\msys\\1.0\\bin\\bash.exe";
|
# Confusing integral
• April 20th 2010, 10:21 AM
piglet
Confusing integral
Show $\int_{C}\frac{1}{z-2}\,dz = 2\pi i$ where $C$ is a circle of radius $1$ centred at $2$
using a parametrization.
I can show this using Cauchy's Integral formula i.e.
$\int_{C}\frac{f(z)}{z-z_0}\,dz = 2\pi i\, f(z_0)$ but I don't think this is using a parametrization?
Any ideas?
• April 20th 2010, 10:34 AM
chisigma
Setting $s=z-2$ the integral becomes...
$\int_{\gamma} \frac{ds}{s}$ (1)
... where $\gamma$ is now the unit circle centered at $s=0$. Now you set $s=e^{i\cdot \omega}$ and solve the integral in $\omega$...
Kind regards
$\chi$ $\sigma$
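Carrying out that last step (a quick sketch of the computation the reply leaves to the reader): with $s = e^{i\omega}$ we have $ds = i e^{i\omega}\,d\omega$, so

$\int_{\gamma} \frac{ds}{s} = \int_{0}^{2\pi} \frac{i e^{i\omega}}{e^{i\omega}}\,d\omega = i\int_{0}^{2\pi} d\omega = 2\pi i$

which is exactly the claimed value, this time obtained by parametrization rather than by Cauchy's integral formula.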
|
I have a problem that I can’t solve with my very basic linear algebra skills - posting here in the hope there is someone more informed out there!
I am trying to model some measurements of N_{rxn} chemical reactions that depend deterministically on properties of their N_{cpd} reactants through the relationship y = S^{T}\theta, where
• S is an N_{cpd} by N_{rxn} stoichiometric matrix specifying the proportions in which the reactions create and destroy reactants
• \theta is a length N_{cpd} vector of unknown reactant properties
(the measurements are of Gibbs energy changes of reaction and the reactant properties are their Gibbs energy changes of formation).
There is prior information about \theta and I’m assuming that the measurement error is known in advance.
Here is a naive representation of the model I’d like to fit in the form of a Stan program:
data {
int<lower=1> N_rxn;
int<lower=1> N_cpd;
matrix[N_cpd, N_rxn] S;
vector[N_rxn] y;
vector<lower=0>[N_rxn] error_scale;
vector[N_cpd] prior_loc_theta;
vector<lower=0>[N_cpd] prior_scale_theta;
}
parameters {
vector[N_cpd] theta;
}
model {
target += normal_lpdf(theta | prior_loc_theta, prior_scale_theta);
target += normal_lpdf(y | S' * theta, error_scale);
}
generated quantities {
vector[N_rxn] yrep;
{
vector[N_rxn] yhat = S' * theta;
for (n in 1:N_rxn){
yrep[n] = normal_rng(yhat[n], error_scale[n]);
}
}
}
This model doesn’t work very well in cases where the measured reactions don’t identify all the compound properties, which is typical. For example, a stoichiometric matrix might look like this:
[[0, 0, 0, -1],
[0, 0, 0, -1],
[0, 0, 0, 2],
[0, 0, -1, 0],
[0, -1, -1, 0],
[1, 1, 2, 0]]
In this setup reaction 1 (the leftmost column) creates compound 6 with no compound being destroyed. Reaction 2 creates compound 6 and destroys compound 5, etc.
The problem is that measuring these reactions only gives information about the absolute \theta values of compounds 4 to 6. For \theta_1, ...\theta_3 the measurements only give information about their relative values. This can be seen by looking at the reduced row echelon form of S^T:
[[1, 1, -2, 0, 0, 0],
[0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1]]
From this matrix we can see that the measurement y_1 depends on \theta_1 + \theta_2 - 2\theta_3. I think this is a case of what Michael Betancourt @betanalpha calls additive degeneracy.
I tried using the reduced row echelon form of S^T to make the following alternative Stan model which seems to give the same posterior predictive distribution as the one above, but uses fewer leapfrog steps:
data {
int<lower=1> N_rxn;
int<lower=1> N_cpd;
matrix[N_cpd, N_rxn] S;
matrix[N_rxn, N_cpd] S_T_rref;
vector[N_rxn] y;
vector<lower=0>[N_rxn] error_scale;
vector[N_cpd] prior_loc_theta;
vector<lower=0>[N_cpd] prior_scale_theta;
}
transformed data {
vector[N_rxn] prior_loc_theta_free = S_T_rref * prior_loc_theta;
vector[N_rxn] prior_scale_theta_free = fabs(S_T_rref) * prior_scale_theta;
}
parameters {
vector[N_rxn] theta_free;
}
model {
vector[N_rxn] yhat = S' * (theta_free' * S_T_rref)';
target += normal_lpdf(theta_free | prior_loc_theta_free, prior_scale_theta_free);
target += normal_lpdf(y | yhat, error_scale);
}
generated quantities {
vector[N_rxn] yrep;
{
vector[N_rxn] yhat = S' * (theta_free' * S_T_rref)';
for (n in 1:N_rxn){
yrep[n] = normal_rng(yhat[n], error_scale[n]);
}
}
}
I’ve made a quick python script that runs both models with some representative data here.
My questions are:
• Does what I’ve done so far make sense? Are my models really equivalent mathematically/informationally?
• If not, is there a standard way of dealing with this kind of problem?
• If so, how can I recover the vector \theta? I think the required information is there but I’m confused about how to combine the relative information in theta_free with the pre-experimental information about the absolute theta parameters.
I don’t have time to carefully look at the linear algebra but the basic idea is correct. See also https://betanalpha.github.io/assets/case_studies/qr_regression.html for a demonstration of a similar problem, including the translation of a prior density in the nominal parameterization to the reduced one.
Ultimately it helps to recognize that you’re reparameterizing the entire model, not just the one likelihood function, so you need to propagate the change everywhere including through the prior density with the requisite Jacobian and what not.
The reduced form works out parameters that are more directly informed by the available data that should facilitate the fit, at least provided that the transformed prior model isn’t awkward. Otherwise the main options are a more informative prior model – even more information on just a few parameters can help fight the degeneracy – or incorporating complementary measurements that identify how some of the reactions work on their own.
2 Likes
Thanks very much for the helpful comments and link! I think I’m now pretty clear about the right general approach and have got an encouraging result in my simple simulation study. I’m still not 100% sure so I thought I’d write out everything I did.
I defined a reparameterisation with a matrix R which is the N_{cpd} \times N_{cpd} identity matrix but with rows from the reduced row echelon form of S^T where rref(S^T) has leading ones. In this case the matrix is
R = \left[\begin{matrix}1 & 1 & -2 & 0 & 0 & 0\\0 & 1 & 0 & 0 & 0 & 0\\0 & 0 & 1 & 0 & 0 & 0\\0 & 0 & 0 & 1 & 0 & 0\\0 & 0 & 0 & 0 & 1 & 0\\0 & 0 & 0 & 0 & 0 & 1\end{matrix}\right]
with rows 1, 4, 5 and 6 taken from rref(S^T) because its leading ones are at these columns. The reparameterisation is
\gamma = R\theta
and theta can be recovered with
\theta = R^{-1}\gamma
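As a sanity check, easy to verify by hand for this example: R is unit upper triangular, hence invertible, and the reparameterisation gives

\gamma_1 = \theta_1 + \theta_2 - 2\theta_3, \qquad \gamma_k = \theta_k \text{ for } k = 2, \dots, 6

so the measurements directly inform \gamma_1 and \gamma_4, \dots, \gamma_6 (the rows taken from rref(S^T)), while \gamma_2 and \gamma_3 are constrained only by the prior.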
I made the following Stan model to test the reparameterisation:
data {
int<lower=1> N_rxn;
int<lower=1> N_cpd;
matrix[N_cpd, N_rxn] S;
matrix[N_cpd, N_cpd] R; // matrix defining a reparameterisation
vector[N_rxn] y;
vector<lower=0>[N_rxn] error_scale;
vector[N_cpd] prior_loc_theta;
vector<lower=0>[N_cpd] prior_scale_theta;
}
parameters {
vector[N_cpd] gamma;
}
transformed parameters {
vector[N_cpd] theta = R \ gamma;
}
model {
target += normal_lpdf(theta | prior_loc_theta, prior_scale_theta);
// no jacobian as gamma -> theta is a linear transformation
target += normal_lpdf(y | S' * theta, error_scale);
}
generated quantities {
vector[N_rxn] yrep;
vector[N_rxn] log_lik;
{
vector[N_rxn] yhat = S' * theta;
for (n in 1:N_rxn){
yrep[n] = normal_rng(yhat[n], error_scale[n]);
log_lik[n] = normal_lpdf(y[n] | yhat[n], error_scale[n]);
}
}
}
It seems to work ok in the test case I put in. From this graph it looks like the marginal posteriors are about the same with the new parameterisation as the naive one:
This graph shows that Stan took much fewer leapfrog steps when sampling from the reparameterised model:
I still need to try the method out in a more complicated case and I also want to include more information about the compounds (two compounds with similar chemical structures likely have similar formation energy) but for now I’m very happy - thanks again!
2 Likes
Here’s a quick update after some more work, and some more questions. I updated the github repository with code implementing a new model, some simulation studies and a draft report and presentation.
To recap, the general problem is that there is a stoichiometric matrix S specifying the substrates and products of lots of reactions, a vector y of measurements for each reaction’s Gibbs energy change in standard conditions and a vector \theta of unknown formation energies for each compound, which determine the reaction energy changes according to S^T\cdot\theta = \hat{y}. There is also some extra information: the compounds decompose into chemical groups, and each compound’s formation energy is approximately the sum of those of its component groups. I didn’t include this extra information in the example above.
I’d like to use the following Stan program to represent all this information:
data {
int<lower=1> N_rxn;
int<lower=1> N_cpd;
int<lower=1> N_grp;
matrix[N_cpd, N_rxn] S;
matrix[N_cpd, N_grp] G;
vector[N_rxn] y;
vector<lower=0>[N_rxn] sigma;
vector[N_cpd] prior_theta[2];
vector[N_grp] prior_gamma[2];
real prior_tau[2];
}
parameters {
real<lower=0> tau;
vector[N_cpd] theta;
vector[N_grp] gamma;
}
model {
target += normal_lpdf(theta | prior_theta[1], prior_theta[2]);
target += normal_lpdf(gamma | prior_gamma[1], prior_gamma[2]);
target += normal_lpdf(tau | prior_tau[1], prior_tau[2]);
target += normal_lpdf(theta | G * gamma, tau); // approximate group additivity
target += normal_lpdf(y | S' * theta, sigma);
}
The difficulty is that the measurements are of linear combinations of unknowns. Since the measurement error is small compared to the prior sds for theta, the marginal posteriors tend to be wide, whereas the posteriors for linear combinations of components of theta representing compounds in the same reaction are narrow.
To mitigate the degeneracy I made a method for deriving the formation energies from auxiliary parameters, borrowing from this 1991 biology paper. To get the auxiliaries from the formation energies I multiply by a matrix consisting of the reduced row echelon form of S^T augmented with rows from the N_{cpd}\times N_{cpd} identity matrix. Formation energies can be obtained from auxiliaries by multiplying by this matrix’s inverse. The idea is that the posteriors for auxiliary parameters corresponding to rows from rref(S^T) will be determined by the measurements and be narrow, whereas the other auxiliary parameters will be determined by the priors.
Here’s a Stan program implementing the reparameterised model:
data {
int<lower=1> N_rxn;
int<lower=1> N_cpd;
int<lower=1> N_grp;
matrix[N_cpd, N_rxn] S;
matrix[N_cpd, N_grp] G;
matrix[N_cpd, N_cpd] R_inv;
matrix[N_grp, N_grp] RG_inv;
vector[N_rxn] y;
vector<lower=0>[N_rxn] sigma;
vector[N_grp] prior_gamma[2];
vector[N_cpd] prior_theta[2];
real prior_tau[2];
}
parameters {
real<lower=0> tau; // controls group additivity accuracy
vector[N_cpd] eta_cpd;
vector[N_grp] eta_grp;
}
transformed parameters {
vector[N_cpd] theta = R_inv * eta_cpd;
vector[N_grp] gamma = RG_inv * eta_grp;
}
model {
target += normal_lpdf(theta | prior_theta[1], prior_theta[2]);
target += normal_lpdf(gamma | prior_gamma[1], prior_gamma[2]);
target += normal_lpdf(theta | G * gamma, tau);
target += normal_lpdf(tau | prior_tau[1], prior_tau[2]);
target += normal_lpdf(y | S' * theta, sigma);
}
The reparameterisation seems to reduce the number of leapfrog steps in two simulation studies I made that are in the linked repo, but the results weren’t as dramatic as I expected. Here is a graph of cumulative leapfrog steps in a small artificial simulation study:
Here is a similar plot from a larger simulation study with real stoichiometries from the equilibrator project:
In particular, in the big problem (~650 reactions and compounds) the number of leapfrog steps was lower for the reparameterised model by about 33% but the total sampling time was about the same.
I have some specific questions about the approach I took:
• Is it plausible that the extra matrix multiplication in the reparameterised model makes the cost per leapfrog step so much higher?
• Is there a better way to express the priors and multilevel structure? I would like to express both in a non-centered kind of way to see if this is more efficient, but I struggled to do both this and the reparameterisation.
More generally, I’d like to know if anyone knows any other context where this problem comes up (i.e. Bayesian modelling of precise measurements of linear combinations of unknowns with relatively imprecise prior information). It seems like quite a general problem and my solution comes from the same field, so I think it’s possible I’m missing out on a lot of prior art.
1 Like
|
# The Inner Product on Rn and Cn
Recall from the Inner Products and Inner Product Spaces page that an inner product space is a linear space $X$ with function $\langle \cdot, \cdot \rangle : X \times X \to \mathbb{C}$ (or $\mathbb{R}$) such that:
• 1. $\langle x, x \rangle \geq 0$ and $\langle x, x \rangle = 0$ if and only if $x = 0$.
• 2. $\langle x, y \rangle = \overline{\langle y, x \rangle}$ for all $x, y \in X$.
• 3. $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$ and $\langle \lambda x, y \rangle = \lambda \langle x, y \rangle$ for all $x, y, z \in X$ and all $\lambda \in \mathbb{C}$
We will now examine a familiar inner product on $\mathbb{R}^n$ ($\mathbb{C}^n$) - the dot product.
Definition: The Dot Product or Euclidean Inner Product on $\mathbb{R}^n$ is defined for all $\vec{x} = (x_1, x_2, ..., x_n), \vec{y} = (y_1, y_2, ..., y_n) \in \mathbb{R}^n$ by $\displaystyle{\langle \vec{x}, \vec{y} \rangle = \sum_{k=1}^{n} x_ky_k}$. (On $\mathbb{C}^n$ the second factor is conjugated, $\displaystyle{\langle \vec{x}, \vec{y} \rangle = \sum_{k=1}^{n} x_k\overline{y_k}}$, so that property 2 holds.)
For example:
(1)
\begin{align} \quad \langle (1, 3, 5), (4, -2, 4) \rangle = (1)(4) + (3)(-2) + (5)(4) = 4 - 6 + 20 = 18 \end{align}
It is easy to verify that the dot product is indeed an inner product on $\mathbb{R}^n$ and so $\mathbb{R}^n$ with the dot product is an inner product space.
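For instance, property 2 (with real entries the conjugate is vacuous) and the first part of property 3 follow directly from commutativity and distributivity in $\mathbb{R}$:

\begin{align} \quad \langle \vec{x}, \vec{y} \rangle = \sum_{k=1}^{n} x_k y_k = \sum_{k=1}^{n} y_k x_k = \langle \vec{y}, \vec{x} \rangle \end{align}

\begin{align} \quad \langle \vec{x} + \vec{y}, \vec{z} \rangle = \sum_{k=1}^{n} (x_k + y_k) z_k = \sum_{k=1}^{n} x_k z_k + \sum_{k=1}^{n} y_k z_k = \langle \vec{x}, \vec{z} \rangle + \langle \vec{y}, \vec{z} \rangle \end{align}

and property 1 holds because $\langle \vec{x}, \vec{x} \rangle = \sum_{k=1}^{n} x_k^2 \geq 0$, with equality exactly when every $x_k = 0$.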
|
# 2.4: Graphing the Basic Functions
Skills to Develop
• Define and graph seven basic functions.
• Define and graph piecewise functions.
• Evaluate piecewise defined functions.
• Define the greatest integer function.
## Basic Functions
In this section we graph seven basic functions that will be used throughout this course. Each function is graphed by plotting points. Remember that $$f (x) = y$$ and thus $$f (x)$$ and $$y$$ can be used interchangeably.
Any function of the form $$f (x) = c$$, where $$c$$ is any real number, is called a constant function43. Constant functions are linear and can be written $$f (x) = 0x + c$$. In this form, it is clear that the slope is $$0$$ and the $$y$$-intercept is $$(0, c)$$. Evaluating any value for $$x$$, such as $$x = 2$$, will result in $$c$$.
Figure 2.4.1
The graph of a constant function is a horizontal line. The domain consists of all real numbers $$ℝ$$ and the range consists of the single value $$\{c\}$$.
We next define the identity function44 $$f (x) = x$$. Evaluating any value for $$x$$ will result in that same value. For example, $$f (0) = 0$$ and $$f (2) = 2$$. The identity function is linear, $$f (x) = 1x + 0$$, with slope $$m = 1$$ and $$y$$-intercept $$(0, 0)$$.
Figure 2.4.2
The domain and range both consist of all real numbers.
The squaring function45, defined by $$f (x) = x^{2}$$, is the function obtained by squaring the values in the domain. For example, $$f (2) = (2)^{2} = 4$$ and $$f (−2) = (−2)^{2} = 4$$. The result of squaring nonzero values in the domain will always be positive.
Figure 2.4.3
The resulting curved graph is called a parabola46. The domain consists of all real numbers $$ℝ$$ and the range consists of all $$y$$-values greater than or equal to zero $$[0, ∞)$$.
The cubing function47, defined by $$f (x) = x^{3}$$, raises all of the values in the domain to the third power. The results can be either positive, zero, or negative. For example, $$f (1) = (1)^{3} = 1, f (0) = (0)^{3} = 0$$, and $$f (−1) = (−1)^{3} = −1$$.
Figure 2.4.4
The domain and range both consist of all real numbers $$ℝ$$.
Note that the constant, identity, squaring, and cubing functions are all examples of basic polynomial functions. The next three basic functions are not polynomials.
The absolute value function48, defined by $$f (x) = |x|$$, is a function where the output represents the distance to the origin on a number line. The result of evaluating the absolute value function for any nonzero value of $$x$$ will always be positive. For example, $$f (−2) = |−2| = 2$$ and $$f (2) = |2| = 2$$.
Figure 2.4.5
The domain of the absolute value function consists of all real numbers $$ℝ$$ and the range consists of all $$y$$-values greater than or equal to zero $$[0, ∞)$$.
The square root function49, defined by $$f (x) = \sqrt{x}$$, is not defined as a real number for negative $$x$$-values. Therefore, the smallest value in the domain is zero. For example, $$f (0) = \sqrt{0}= 0$$ and $$f (4) = \sqrt{4}= 2$$.
Figure 2.4.6
The domain and range both consist of real numbers greater than or equal to zero $$[0, ∞)$$.
The reciprocal function50, defined by $$f (x) = \frac{1}{x}$$, is a rational function with one restriction on the domain, namely $$x ≠ 0$$. The reciprocal of an $$x$$-value very close to zero is very large. For example,
\begin{aligned} f ( 1 / 10 ) &= \frac { 1 } { \left( \frac { 1 } { 10 } \right) } = 1 \cdot \frac { 10 } { 1 } = 10 \\ f ( 1 / 100 ) &= \frac { 1 } { \left( \frac { 1 } { 100 } \right) } = 1 \cdot \frac { 100 } { 1 } = 100 \\ f ( 1 / 1,000 )& = \frac { 1 } { \left( \frac { 1 } { 1,000 } \right) } = 1 \cdot \frac { 1,000 } { 1 } = 1,000 \end{aligned}
In other words, as the $$x$$-values approach zero their reciprocals will tend toward either positive or negative infinity. This describes a vertical asymptote51 at the $$y$$-axis. Furthermore, where the $$x$$-values are very large the result of the reciprocal function is very small.
\begin{aligned} f ( 10 ) & = \frac { 1 } { 10 } = 0.1 \\ f ( 100 ) & = \frac { 1 } { 100 } = 0.01 \\ f ( 1000 ) & = \frac { 1 } { 1,000 } = 0.001 \end{aligned}
In other words, as the $$x$$-values become very large the resulting $$y$$-values tend toward zero. This describes a horizontal asymptote52 at the $$x$$-axis. After plotting a number of points the general shape of the reciprocal function can be determined.
Figure 2.4.7
Both the domain and range of the reciprocal function consist of all real numbers except $$0$$, which can be expressed using interval notation as follows: $$(−∞, 0) ∪ (0, ∞)$$.
In summary, the basic polynomial functions are:
Figure 2.4.8
The basic nonpolynomial functions are:
Figure 2.4.9
## Piecewise Defined Functions
A piecewise function53, or split function54, is a function whose definition changes depending on the value in the domain. For example, we can write the absolute value function $$f(x) = |x|$$ as a piecewise function:
$$f ( x ) = | x | = \left\{ \begin{array} { c l } { x } & { \text { if } x \geq 0 } \\ { - x } & { \text { if } x < 0 } \end{array} \right.$$
In this case, the definition used depends on the sign of the $$x$$-value. If the $$x$$-value is nonnegative, $$x ≥ 0$$, then the function is defined by $$f(x) = x$$. And if the $$x$$-value is negative, $$x < 0$$, then the function is defined by $$f(x) = −x$$.
Figure 2.4.10
Following is the graph of the two pieces on the same rectangular coordinate plane:
Figure 2.4.11
Example $$\PageIndex{1}$$:
Graph: $$g ( x ) = \left\{ \begin{array} { c c c } { x ^ { 2 } } & { \text { if } } & { x < 0 } \\ { \sqrt { x } } & { \text { if } } & { x \geq 0 } \end{array} \right.$$.
Solution:
In this case, we graph the squaring function over negative $$x$$-values and the square root function over positive $$x$$-values.
Figure 2.4.12
Notice the open dot used at the origin for the squaring function and the closed dot used for the square root function. This was determined by the inequality that defines the domain of each piece of the function. The entire function consists of each piece graphed on the same coordinate plane.
Figure 2.4.13
When evaluating, the value in the domain determines the appropriate definition to use.
Example $$\PageIndex{2}$$:
Given the function $$h ( t ) = \left\{ \begin{array} { l l } { 7 t + 3 } & { \text { if } t < 0 } \\ { - 16 t ^ { 2 } + 32 t } & { \text { if } t \geq 0 } \end{array} \right.$$, find $$h(−5), h(0),$$ and $$h(3)$$.
Solution:
Use $$h(t) = 7t + 3$$ where $$t$$ is negative, as indicated by $$t < 0$$.
\begin{aligned} h ( t ) & = 7 t + 3 \\ h ( \color{Cerulean}{- 5}\color{Black}{ )} & = 7 ( \color{Cerulean}{- 5}\color{Black}{)} + 3 \\ & = - 35 + 3 \\ & = - 32 \end{aligned}
Where $$t$$ is greater than or equal to zero, use $$h(t) = −16t^{2} + 32t$$.
\begin{aligned} h ( \color{Cerulean}{0}\color{Black}{ )} & = - 16 ( \color{Cerulean}{0}\color{Black}{ )} ^ { 2 } + 32 ( \color{Cerulean}{0}\color{Black}{ )} & \quad h ( \color{Cerulean}{3}\color{Black}{ )} & = - 16 ( \color{Cerulean}{3}\color{Black}{ )} ^ { 2 } + 32 ( \color{Cerulean}{3}\color{Black}{ )} \\ & = 0 + 0 & & = - 144 + 96 \\ & = 0 & & = - 48 \end{aligned}
$$h(−5) = −32, h(0) = 0,$$ and $$h(3) = −48$$
Exercise $$\PageIndex{1}$$
Graph: $$f ( x ) = \left\{ \begin{array} { l l } { \frac { 2 } { 3 } x + 1 } & { \text { if } x < 0 } \\ { x ^ { 2 } } & { \text { if } x \geq 0 } \end{array} \right.$$.
Figure 2.4.14
The definition of a function may be different over multiple intervals in the domain.
Example $$\PageIndex{3}$$:
Graph: $$f ( x ) = \left\{ \begin{array} { l l } { x ^ { 3 } } & { \text { if } x < 0 } \\ { x } & { \text { if } 0 \leq x \leq 4 } \\ { 6 } & { \text { if } x > 4 } \end{array} \right.$$.
Solution:
In this case, graph the cubing function over the interval $$(−∞,0)$$. Graph the identity function over the interval $$[0,4]$$. Finally, graph the constant function $$f(x)=6$$ over the interval $$(4,∞)$$. And because $$f(x)=6$$ where $$x>4$$, we use an open dot at the point $$(4,6)$$. Where $$x=4$$, we use $$f(x)=x$$ and thus $$(4,4)$$ is a point on the graph as indicated by a closed dot.
Figure 2.4.15
The greatest integer function55, denoted $$f(x) = \left[\!\![x]\!\!\right]$$, assigns the greatest integer less than or equal to any real number in its domain. For example,
\begin{aligned} f ( 2.7 ) & = \left[\!\![2.7]\!\!\right] = 2 \\ f ( \pi ) & = \left[\!\![\pi]\!\!\right] = 3 \\ f ( 0.23 ) & = \left[\!\![0.23]\!\!\right] = 0 \\ f ( - 3.5 ) & = \left[\!\![-3.5]\!\!\right] = - 4 \end{aligned}
This function associates any real number with the greatest integer less than or equal to it and should not be confused with rounding off.
Example $$\PageIndex{4}$$:
Graph: $$f(x) = \left[\!\![x]\!\!\right]$$.
Solution:
If $$x$$ is any real number, then $$y = \left[\!\![x]\!\!\right]$$ is the greatest integer less than or equal to $$x$$.
\begin{aligned} \vdots\\- 1 \leq x < 0 & \color{Cerulean}{\Rightarrow}\color{Black}{ y} = \left[\!\![x]\!\!\right] = -1 \\ 0 \leq x < 1 & \color{Cerulean}{\Rightarrow} \color{Black}{y} = \left[\!\![x]\!\!\right] = 0 \\ 1 \leq x < 2 & \color{Cerulean}{\Rightarrow}\color{Black}{ y} = \left[\!\![x]\!\!\right] = 1 \\ & \vdots \end{aligned}
Using this, we obtain the following graph.
Figure 2.4.16
The domain of the greatest integer function consists of all real numbers $$\mathbb{R}$$ and the range consists of the set of integers $$\mathbb{Z}$$. This function is often called the floor function56 and has many applications in computer science.
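For instance, most programming languages provide this function directly; a quick check (shown here in R, purely as an illustrative aside) distinguishes it from rounding off:
floor(c(2.7, 0.23, -3.5))   # greatest integer <= x
# [1]  2  0 -4
round(c(2.7, 0.23, -3.5))   # rounding off gives a different result at 2.7
# [1]  3  0 -4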
## Key Takeaways
• Plot points to determine the general shape of the basic functions. The shape, as well as the domain and range, of each should be memorized.
• The basic polynomial functions are: $$f(x) = c, f(x) = x , f(x) = x^{2}$$, and $$f(x) = x^{3}$$.
• The basic nonpolynomial functions are: $$f(x) = |x|, f(x) = \sqrt{x}$$, and $$f(x) = \frac{1}{x}$$.
• A function whose definition changes depending on the value in the domain is called a piecewise function. The value in the domain determines the appropriate definition to use.
Exercise $$\PageIndex{2}$$
Match the graph to the function definition.
Figure 2.4.17
Figure 2.4.18
Figure 2.4.19
Figure 2.4.20
Figure 2.4.21
Figure 2.4.22
1. $$f(x) = x$$
2. $$f(x) = x^{2}$$
3. $$f(x) = x^{3}$$
4. $$f(x) = |x|$$
5. $$f(x) = \sqrt{x}$$
6. $$f(x) = \frac{1}{x}$$
1. $$b$$
3. $$c$$
5. $$a$$
Exercise $$\PageIndex{3}$$
Evaluate.
1. $$f(x) = x$$; find $$f(−10), f(0)$$, and $$f(a)$$.
2. $$f(x) = x^{2}$$; find $$f(−10), f(0)$$, and $$f(a)$$.
3. $$f(x) = x^{3}$$; find $$f(−10), f(0)$$, and $$f(a)$$.
4. $$f(x) = |x|$$; find $$f(−10), f(0)$$, and $$f(a)$$.
5. $$f(x) = \sqrt{x}$$; find $$f(25), f(0)$$, and $$f(a)$$ where $$a ≥ 0$$.
6. $$f(x) = \frac{1}{x}$$; find $$f(−10), f (\frac{1}{5})$$, and $$f(a)$$ where $$a ≠ 0$$.
7. $$f(x) = 5$$; find $$f(−10), f(0)$$, and $$f(a)$$.
8. $$f(x) = −12$$; find $$f(−12), f(0)$$, and $$f(a)$$.
9. Graph $$f(x) = 5$$ and state its domain and range.
10. Graph $$f(x) = −9$$ and state its domain and range.
1. $$f ( - 10 ) = - 10 , f ( 0 ) = 0 , f ( a ) = a$$
3. $$f ( - 10 ) = - 1,000 , f ( 0 ) = 0 , f ( a ) = a ^ { 3 }$$
5. $$f ( 25 ) = 5 , f ( 0 ) = 0 , f ( a ) = \sqrt { a }$$
7. $$f ( - 10 ) = 5 , f ( 0 ) = 5 , f ( a ) = 5$$
9. Domain: $$\mathbb{R}$$; range $$\{5\}$$
Figure 2.4.23
Exercise $$\PageIndex{4}$$
Cube root function.
1. Find points on the graph of the function defined by $$f ( x ) = \sqrt [ 3 ] { x }$$ with $$x$$-values in the set $$\{−8, −1, 0, 1, 8\}$$.
2. Find points on the graph of the function defined by $$f ( x ) = \sqrt [ 3 ] { x }$$ with $$x$$-values in the set $$\{−3, −2, 1, 2, 3\}$$. Use a calculator and round off to the nearest tenth.
3. Graph the cube root function defined by $$f ( x ) = \sqrt [ 3 ] { x }$$ by plotting the points found in the previous two exercises.
4. Determine the domain and range of the cube root function.
1. $$\{ ( - 8 , - 2 ) , ( - 1 , - 1 ) , ( 0,0 ) , ( 1,1 ) , ( 8,2 ) \}$$
3.
Figure 2.4.24
Exercise $$\PageIndex{5}$$
Find the ordered pair that specifies the point $$P$$.
1.
Figure 2.4.25
2.
Figure 2.4.26
3.
Figure 2.4.27
4.
Figure 2.4.28
1. $$\left( \frac { 3 } { 2 } , \frac { 27 } { 8 } \right)$$
3. $$\left( - \frac { 5 } { 2 } , - \frac { 5 } { 2 } \right)$$
Exercise $$\PageIndex{6}$$
Graph the piecewise functions.
1. $$g ( x ) = \left\{ \begin{array} { l l } { 2 } & { \text { if } x < 0 } \\ { x } & { \text { if } x \geq 0 } \end{array} \right.$$
2. $$g ( x ) = \left\{ \begin{array} { l l } { x ^ { 2 } } & { \text { if } x < 0 } \\ { 3 } & { \text { if } x \geq 0 } \end{array} \right.$$
3. $$h ( x ) = \left\{ \begin{array} { l l } { x } & { \text { if } x < 0 } \\ { \sqrt { x } } & { \text { if } x \geq 0 } \end{array} \right.$$
4. $$h ( x ) = \left\{ \begin{array} { l } { | x | \text { if } x < 0 } \\ { x ^ { 3 } \text { if } x \geq 0 } \end{array} \right.$$
5. $$f ( x ) = \left\{ \begin{array} { l } { | x | \text { if } x < 2 } \\ { 4 \text { if } x \geq 2 } \end{array} \right.$$
6. $$f ( x ) = \left\{ \begin{array} { l l } { x } & { \text { if } x < 1 } \\ { \sqrt { x } } & { \text { if } x \geq 1 } \end{array} \right.$$
7. $$g ( x ) = \left\{ \begin{array} { l l } { x ^ { 2 } \text { if } x \leq - 1 } \\ { x \quad \text { if } x > - 1 } \end{array} \right.$$
8. $$g ( x ) = \left\{ \begin{array} { l } { - 3 \text { if } x \leq - 1 } \\ { x ^ { 3 } \text { if } x > - 1 } \end{array} \right.$$
9. $$h ( x ) = \left\{ \begin{array} { l } { 0 \text { if } x \leq 0 } \\ { \frac { 1 } { x } \text { if } x > 0 } \end{array} \right.$$
10. $$h ( x ) = \left\{ \begin{array} { l } { \frac { 1 } { x } \text { if } x < 0 } \\ { x ^ { 2 } \text { if } x \geq 0 } \end{array} \right.$$
11. $$f ( x ) = \left\{ \begin{array} { l l } { x ^ { 2 } } & { \text { if } x < 0 } \\ { x } & { \text { if } 0 \leq x < 2 } \\ { - 2 } & { \text { if } x \geq 2 } \end{array} \right.$$
12. $$f ( x ) = \left\{ \begin{array} { l l } { x } & { \text { if } x < - 1 } \\ { x ^ { 3 } } & { \text { if } - 1 \leq x < 1 } \\ { 3 } & { \text { if } x \geq 1 } \end{array} \right.$$
13. $$g ( x ) = \left\{ \begin{array} { l l } { 5 } & { \text { if } x < - 2 } \\ { x ^ { 2 } } & { \text { if } - 2 \leq x < 2 } \\ { x } & { \text { if } x \geq 2 } \end{array} \right.$$
14. $$g ( x ) = \left\{ \begin{array} { l l } { x } & { \text { if } x < - 3 } \\ { | x | } & { \text { if } - 3 \leq x < 1 } \\ { \sqrt { x } } & { \text { if } x \geq 1 } \end{array} \right.$$
15. $$h ( x ) = \left\{ \begin{array} { l } { \frac { 1 } { x } \text { if } x < 0 } \\ { x ^ { 2 } \text { if } 0 \leq x < 2 } \\ { 4 \text { if } x \geq 2 } \end{array} \right.$$
16. $$h ( x ) = \left\{ \begin{array} { l } { 0 \text { if } x < 0 } \\ { x ^ { 3 } \text { if } 0 < x \leq 2 } \\ { 8 \text { if } x > 2 } \end{array} \right.$$
17. $$f ( x ) = \left[\!\![x+0.5]\!\!\right]$$
18. $$f(x) = \left[\!\![x]\!\!\right] +1$$
19. $$f(x) = \left[\!\![0.5x]\!\!\right]$$
20. $$f(x) = 2\left[\!\![x]\!\!\right]$$
1.
Figure 2.4.29
3.
Figure 2.4.50
5.
Figure 2.4.51
7.
Figure 2.4.52
9.
Figure 2.4.53
11.
Figure 2.4.54
13.
Figure 2.4.55
15.
Figure 2.4.56
17.
Figure 2.4.57
19.
Figure 2.4.58
Exercise $$\PageIndex{7}$$
Evaluate.
1. $$f ( x ) = \left\{ \begin{array} { l l } { x ^ { 2 } } & { \text { if } x \leq 0 } \\ { x + 2 } & { \text { if } x > 0 } \end{array} \right.$$
Find $$f(-5), f(0)$$, and $$f(3)$$.
2. $$f ( x ) = \left\{ \begin{array} { l l } { x ^ { 3 } } & { \text { if } x < 0 } \\ { 2 x - 1 } & { \text { if } x \geq 0 } \end{array} \right.$$
Find $$f(−3), f(0)$$, and $$f(2)$$.
3. $$g ( x ) = \left\{ \begin{array} { l l } { 5 x - 2 } & { \text { if } x < 1 } \\ { \sqrt { x } } & { \text { if } x \geq 1 } \end{array} \right.$$
Find $$g(−1), g(1)$$, and $$g(4)$$.
4. $$g ( x ) = \left\{ \begin{array} { l } { x ^ { 3 } \text { if } x \leq - 2 } \\ { | x | \text { if } x > - 2 } \end{array} \right.$$
Find $$g(−3), g(−2)$$, and $$g(−1)$$.
5. $$h ( x ) = \left\{ \begin{array} { l l } { - 5 } & { \text { if } x < 0 } \\ { 2 x - 3 } & { \text { if } 0 \leq x < 2 } \\ { x ^ { 2 } } & { \text { if } x \geq 2 } \end{array} \right.$$
Find $$h(−2), h(0)$$, and $$h(4)$$.
6. $$h ( x ) = \left\{ \begin{array} { l } { - 3 x \text { if } x \leq 0 } \\ { x ^ { 3 } \text { if } 0 < x \leq 4 } \\ { \sqrt { x } \text { if } x > 4 } \end{array} \right.$$
Find $$h(−5), h(4)$$, and $$h(25)$$.
7. $$f ( x ) = \left[\!\![x-0.5]\!\!\right]$$
Find $$f(−2), f(0)$$, and $$f(3)$$.
8. $$f ( x ) = \left[\!\![2x]\!\!\right] + 1$$
Find $$f(−1.2), f(0.4)$$, and $$f(2.6)$$.
1. $$f (−5) = 25, f(0) = 0$$, and $$f(3) = 5$$
3. $$g(−1) = −7, g(1) = 1$$, and $$g(4) = 2$$
5. $$h(−2) = −5, h(0) = −3$$, and $$h(4) = 16$$
7. $$f(−2) = −3, f(0) = −1$$, and $$f(3) = 2$$
Exercise $$\PageIndex{8}$$
Evaluate given the graph of $$f$$.
1. Find $$f(-4), f(-2)$$, and $$f(0)$$.
Figure 2.4.59
2. Find $$f(−3), f(0)$$, and $$f(1)$$.
Figure 2.4.60
3. Find $$f(0), f(2)$$, and $$f(4)$$.
Figure 2.4.61
4. Find $$f(−5), f(−2)$$, and $$f(2)$$.
Figure 2.4.62
5. Find $$f(−3), f(−2)$$, and $$f(2)$$.
Figure 2.4.63
6. Find $$f(−3), f(0)$$, and $$f(4)$$.
Figure 2.4.64
7. Find $$f(−2), f(0)$$, and $$f(2)$$.
Figure 2.4.65
8. Find $$f(−3), f(1)$$, and $$f(2)$$.
Figure 2.4.66
9. The value of an automobile in dollars is given in terms of the number of years since it was purchased new in $$1975$$:
Figure 2.4.67
(1) Determine the value of the automobile in the year $$1980$$.
(2) In what year is the automobile valued at $$\$9,000$$?
10. The cost per unit in dollars of custom lamps depends on the number of units produced according to the following graph:
Figure 2.4.68
(1) What is the cost per unit if $$250$$ custom lamps are produced?
(2) What level of production minimizes the cost per unit?
11. An automobile salesperson earns a commission based on total sales each month $$x$$ according to the function:
$$f ( x ) = \left\{ \begin{array} { l l } { 0.03 x } & { \text { if } 0 \leq x < \$ 20,000 } \\ { 0.05 x } & { \text { if } \$ 20,000 \leq x < \$ 50,000 } \\ { 0.07 x } & { \text { if } x \geq \$ 50,000 } \end{array} \right.$$
(1) If the salesperson’s total sales for the month are $$\$35,500$$, what is her commission according to the function?
(2) To reach the next level in the commission structure, how much more in sales will she need?
12. A rental boat costs $$\$32$$ for one hour, and each additional hour or partial hour costs $$\$8$$. Graph the cost of the rental boat and determine the cost to rent the boat for $$4 \frac{1}{2}$$ hours.
1. $$f(−4) = 1, f(−2) = 1$$, and $$f(0) = 0$$
3. $$f(0) = 0, f(2) = 8$$, and $$f(4) = 0$$
5. $$f(−3) = 5, f(−2) = 4$$, and $$f(2) = 2$$
7. $$f(−2) = −1, f(0) = 0$$, and $$f(2) = 1$$
9. (1) $$\$3,000$$; (2) $$2005$$
11. (1) $$\$1,775$$; (2) $$\$14,500$$
Exercise $$\PageIndex{9}$$
1. Explain to a beginning algebra student what an asymptote is.
2. Research and discuss the difference between the floor and ceiling functions. What applications can you find that use these functions?
## Footnotes
43Any function of the form $$f(x) = c$$ where $$c$$ is a real number.
44The linear function defined by $$f(x) = x$$.
45The quadratic function defined by $$f(x) = x^{2}$$.
46The curved graph formed by the squaring function.
47The cubic function defined by $$f(x) = x^{3}$$.
48The function defined by $$f(x) = |x|$$.
49The function defined by $$f(x) = \sqrt{x}$$.
50The function defined by $$f(x) = \frac{1}{x}$$.
51A vertical line to which a graph becomes infinitely close.
52A horizontal line to which a graph becomes infinitely close where the $$x$$-values tend toward $$±∞$$.
53A function whose definition changes depending on the values in the domain.
54A term used when referring to a piecewise function.
55The function that assigns any real number $$x$$ to the greatest integer less than or equal to $$x$$ denoted $$f(x) = \left[\!\![x]\!\!\right]$$.
56A term used when referring to the greatest integer function.
|
# The unit interval can not be made into a topological group.
The statement is: The unit interval can never be made into a topological group under any multiplication.
$\textbf{HINT:}$ For $G$ to be a topological group, for every two elements $x,y \in G$ there must exist a homeomorphism $h : G \rightarrow G$ such that $h(x) = y$. As far as I know, the homeomorphism will be the right translation by $x^{-1}y$.
But I am stuck in how to use this hint to prove the above statement.
The statement
For all $x,y \in X$ there is a homeomorphism $h : X \to X$ such that $h(x) = y$
is the definition of "$X$ is a homogeneous space". It is the formalisation of the idea that all points of $X$ "look/behave the same", topologically. Indeed translations show that a topological group is always homogeneous.
But for $X = [0,1]$ we can prove that $X$ is not homogeneous.
Take $x = 0$ and $y = \frac{1}{2}$.
If $h: [0,1] \to [0,1]$ were a homeomorphism with $h(0) = \frac{1}{2}$, then the restriction $h: [0,1]\setminus \{0\} \to [0,1]\setminus \{\frac{1}{2}\}$ would also be a homeomorphism. But $(0,1]$ is connected and $[0,\frac{1}{2}) \cup (\frac{1}{2}, 1]$ is not, so we have a contradiction, and $h$ cannot exist.
As $[0,1]$ is not homogeneous, it cannot be made into a topological group.
Hint: in a topological group, if removal of a point $x$ disconnects the underlying topological space, then so does removal of $xy$ for any $y$.
• I have just started studying Topological groups and I didn't get you. – Sumit Mittal Feb 24 '18 at 12:40
• $[0, 1] \setminus \{1\} = [0, 1)$ is connected while $[0, 1] \setminus \{1/2\} = [0, 1/2) \cup (1/2, 1]$ is not. Hence there is no homeomorphism of $[0, 1]$ to $[0, 1]$ mapping $1$ to $1/2$. Now use the HINT in your question. – Rob Arthan Feb 24 '18 at 15:23
|
# Local components of modular forms#
If $$f$$ is a (new, cuspidal, normalised) modular eigenform, then one can associate to $$f$$ an automorphic representation $$\pi_f$$ of the group $$\operatorname{GL}_2(\mathbf{A})$$ (where $$\mathbf{A}$$ is the adele ring of $$\QQ$$). This object factors as a restricted tensor product of components $$\pi_{f, v}$$ for each place of $$\QQ$$. These are infinite-dimensional representations, but they are specified by a finite amount of data, and this module provides functions which determine a description of the local factor $$\pi_{f, p}$$ at a finite prime $$p$$.
The functions in this module are based on the algorithms described in [LW2012].
AUTHORS:
• David Loeffler
• Jared Weinstein
class sage.modular.local_comp.local_comp.ImprimitiveLocalComponent(newform, prime, twist_factor, min_twist, chi)#
A smooth representation which is not of minimal level among its character twists. Internally, this is stored as a pair consisting of a minimal local component and a character to twist by.
characters()#
Return the pair of characters (either of $$\QQ_p^*$$ or of some quadratic extension) corresponding to this representation.
EXAMPLES:
sage: f = [f for f in Newforms(63, 4, names='a') if f[2] == 1][0]
sage: f.local_component(3).characters()
[
Character of Q_3*, of level 1, mapping 2 |--> -1, 3 |--> d,
Character of Q_3*, of level 1, mapping 2 |--> -1, 3 |--> -d - 2
]
check_tempered()#
Check that this representation is quasi-tempered, i.e. $$\pi \otimes |\det|^{j/2}$$ is tempered. It is well known that local components of modular forms are always tempered, so this serves as a useful check on our computations.
EXAMPLES:
sage: f = [f for f in Newforms(63, 4, names='a') if f[2] == 1][0]
sage: f.local_component(3).check_tempered()
is_primitive()#
Return True if this local component is primitive (has minimal level among its character twists).
EXAMPLES:
sage: Newform("45a").local_component(3).is_primitive()
False
minimal_twist()#
Return a twist of this local component which has the minimal possible conductor.
EXAMPLES:
sage: Pi = Newform("75b").local_component(5)
sage: Pi.minimal_twist()
Smooth representation of GL_2(Q_5) with conductor 5^1
species()#
The species of this local component, which is either ‘Principal Series’, ‘Special’ or ‘Supercuspidal’.
EXAMPLES:
sage: Pi = Newform("45a").local_component(3)
sage: Pi.species()
'Special'
twisting_character()#
Return the character giving the minimal twist of this representation.
EXAMPLES:
sage: Pi = Newform("45a").local_component(3)
sage: Pi.twisting_character()
Dirichlet character modulo 3 of conductor 3 mapping 2 |--> -1
sage.modular.local_comp.local_comp.LocalComponent(f, p, twist_factor=None)#
Calculate the local component at the prime $$p$$ of the automorphic representation attached to the newform $$f$$.
INPUT:
• f (Newform) a newform of weight $$k \ge 2$$
• p (integer) a prime
• twist_factor (integer) an integer congruent to $$k$$ modulo 2 (default: $$k - 2$$)
Note
The argument twist_factor determines the choice of normalisation: if it is set to $$j \in \ZZ$$, then the central character of $$\pi_{f, \ell}$$ maps $$\ell$$ to $$\ell^j \varepsilon(\ell)$$ for almost all $$\ell$$, where $$\varepsilon$$ is the Nebentypus character of $$f$$.
In the analytic theory it is conventional to take $$j = 0$$ (the “Langlands normalisation”), so the representation $$\pi_f$$ is unitary; however, this is inconvenient for $$k$$ odd, since in this case one needs to choose a square root of $$p$$ and thus the map $$f \to \pi_{f}$$ is not Galois-equivariant. Hence we use, by default, the “Hecke normalisation” given by $$j = k - 2$$. This is also the most natural normalisation from the perspective of modular symbols.
We also adopt a slightly unusual definition of the principal series: we define $$\pi(\chi_1, \chi_2)$$ to be the induction from the Borel subgroup of the character of the maximal torus $$\begin{pmatrix} a & \\ & b \end{pmatrix} \mapsto \chi_1(a) \chi_2(b) |a|$$, so its central character is $$z \mapsto \chi_1(z) \chi_2(z) |z|$$. Thus $$\chi_1 \chi_2$$ is the restriction to $$\QQ_p^\times$$ of the unique character of the idèle class group mapping $$\ell$$ to $$\ell^{k-1} \varepsilon(\ell)$$ for almost all $$\ell$$. This has the property that the set $$\{\chi_1, \chi_2\}$$ also depends Galois-equivariantly on $$f$$.
EXAMPLES:
sage: Pi = LocalComponent(Newform('49a'), 7); Pi
Smooth representation of GL_2(Q_7) with conductor 7^2
sage: Pi.central_character()
Character of Q_7*, of level 0, mapping 7 |--> 1
sage: Pi.species()
'Supercuspidal'
sage: Pi.characters()
[
Character of unramified extension Q_7(s)* (s^2 + 6*s + 3 = 0), of level 1, mapping s |--> -d, 7 |--> 1,
Character of unramified extension Q_7(s)* (s^2 + 6*s + 3 = 0), of level 1, mapping s |--> d, 7 |--> 1
]
class sage.modular.local_comp.local_comp.LocalComponentBase(newform, prime, twist_factor)#
Base class for local components of newforms. Not to be directly instantiated; use the LocalComponent() constructor function.
central_character()#
Return the central character of this representation. This is the restriction to $$\QQ_p^\times$$ of the unique smooth character $$\omega$$ of $$\mathbf{A}^\times / \QQ^\times$$ such that $$\omega(\varpi_\ell) = \ell^j \varepsilon(\ell)$$ for all primes $$\ell \nmid Np$$, where $$\varpi_\ell$$ is a uniformiser at $$\ell$$, $$\varepsilon$$ is the Nebentypus character of the newform $$f$$, and $$j$$ is the twist factor (see the documentation for LocalComponent()).
EXAMPLES:
sage: LocalComponent(Newform('27a'), 3).central_character()
Character of Q_3*, of level 0, mapping 3 |--> 1
sage: LocalComponent(Newforms(Gamma1(5), 5, names='c')[0], 5).central_character()
Character of Q_5*, of level 1, mapping 2 |--> c0 + 1, 5 |--> 125
sage: LocalComponent(Newforms(DirichletGroup(24)([1, -1,-1]), 3, names='a')[0], 2).central_character()
Character of Q_2*, of level 3, mapping 7 |--> 1, 5 |--> -1, 2 |--> -2
check_tempered()#
Check that this representation is quasi-tempered, i.e. $$\pi \otimes |\det|^{j/2}$$ is tempered. It is well known that local components of modular forms are always tempered, so this serves as a useful check on our computations.
EXAMPLES:
sage: from sage.modular.local_comp.local_comp import LocalComponentBase
sage: LocalComponentBase(Newform('50a'), 3, 0).check_tempered()
Traceback (most recent call last):
...
NotImplementedError: <abstract method check_tempered at ...>
coefficient_field()#
The field $$K$$ over which this representation is defined. This is the field generated by the Hecke eigenvalues of the corresponding newform (over whatever base ring the newform is created).
EXAMPLES:
sage: LocalComponent(Newforms(50)[0], 3).coefficient_field()
Rational Field
sage: LocalComponent(Newforms(Gamma1(10), 3, base_ring=QQbar)[0], 5).coefficient_field()
Algebraic Field
sage: LocalComponent(Newforms(DirichletGroup(5).0, 7,names='c')[0], 5).coefficient_field()
Number Field in c0 with defining polynomial x^2 + (5*zeta4 + 5)*x - 88*zeta4 over its base field
conductor()#
The smallest $$r$$ such that this representation has a nonzero vector fixed by the subgroup $$\begin{pmatrix} * & * \\ 0 & 1\end{pmatrix} \pmod{p^r}$$. This is equal to the power of $$p$$ dividing the level of the corresponding newform.
EXAMPLES:
sage: LocalComponent(Newform('50a'), 5).conductor()
2
newform()#
The newform of which this is a local component.
EXAMPLES:
sage: LocalComponent(Newform('50a'), 5).newform()
q - q^2 + q^3 + q^4 + O(q^6)
prime()#
The prime at which this is a local component.
EXAMPLES:
sage: LocalComponent(Newform('50a'), 5).prime()
5
species()#
The species of this local component, which is either ‘Principal Series’, ‘Special’ or ‘Supercuspidal’.
EXAMPLES:
sage: from sage.modular.local_comp.local_comp import LocalComponentBase
sage: LocalComponentBase(Newform('50a'), 3, 0).species()
Traceback (most recent call last):
...
NotImplementedError: <abstract method species at ...>
twist_factor()#
The unique $$j$$ such that $$\begin{pmatrix} p & 0 \\ 0 & p\end{pmatrix}$$ acts as multiplication by $$p^j$$ times a root of unity.
There are various conventions for this; see the documentation of the LocalComponent() constructor function for more information.
The twist factor should have the same parity as the weight of the form, since otherwise the map sending $$f$$ to its local component won’t be Galois equivariant.
EXAMPLES:
sage: LocalComponent(Newforms(50)[0], 3).twist_factor()
0
sage: LocalComponent(Newforms(50)[0], 3, twist_factor=173).twist_factor()
173
class sage.modular.local_comp.local_comp.PrimitiveLocalComponent(newform, prime, twist_factor)#
Base class for primitive (twist-minimal) local components.
is_primitive()#
Return True if this local component is primitive (has minimal level among its character twists).
EXAMPLES:
sage: Newform("50a").local_component(5).is_primitive()
True
minimal_twist()#
Return a twist of this local component which has the minimal possible conductor.
EXAMPLES:
sage: Pi = Newform("50a").local_component(5)
sage: Pi.minimal_twist() == Pi
True
class sage.modular.local_comp.local_comp.PrimitivePrincipalSeries(newform, prime, twist_factor)#
A ramified principal series of the form $$\pi(\chi_1, \chi_2)$$ where $$\chi_1$$ is unramified but $$\chi_2$$ is not.
EXAMPLES:
sage: Pi = LocalComponent(Newforms(Gamma1(13), 2, names='a')[0], 13)
sage: type(Pi)
<class 'sage.modular.local_comp.local_comp.PrimitivePrincipalSeries'>
sage: TestSuite(Pi).run()
characters()#
Return the two characters $$(\chi_1, \chi_2)$$ such that the local component $$\pi_{f, p}$$ is the induction of the character $$\chi_1 \times \chi_2$$ of the Borel subgroup.
EXAMPLES:
sage: LocalComponent(Newforms(Gamma1(13), 2, names='a')[0], 13).characters()
[
Character of Q_13*, of level 0, mapping 13 |--> 3*a0 + 2,
Character of Q_13*, of level 1, mapping 2 |--> a0 + 2, 13 |--> -3*a0 - 7
]
class sage.modular.local_comp.local_comp.PrimitiveSpecial(newform, prime, twist_factor)#
A primitive special representation: that is, the Steinberg representation twisted by an unramified character. All such representations have conductor 1.
EXAMPLES:
sage: Pi = LocalComponent(Newform('37a'), 37)
sage: Pi.species()
'Special'
sage: Pi.conductor()
1
sage: type(Pi)
<class 'sage.modular.local_comp.local_comp.PrimitiveSpecial'>
sage: TestSuite(Pi).run()
characters()#
Return the defining characters of this representation. In this case, it will return the unique unramified character $$\chi$$ of $$\QQ_p^\times$$ such that this representation is equal to $$\mathrm{St} \otimes \chi$$, where $$\mathrm{St}$$ is the Steinberg representation (defined as the quotient of the parabolic induction of the trivial character by its trivial subrepresentation).
EXAMPLES:
Our first example is the newform corresponding to an elliptic curve of conductor $$37$$. This is the nontrivial quadratic twist of Steinberg, corresponding to the fact that the elliptic curve has non-split multiplicative reduction at 37:
sage: LocalComponent(Newform('37a'), 37).characters()
[Character of Q_37*, of level 0, mapping 37 |--> -1]
We try an example in odd weight, where the central character isn’t trivial:
sage: Pi = LocalComponent(Newforms(DirichletGroup(21)([-1, 1]), 3, names='j')[0], 7); Pi.characters()
[Character of Q_7*, of level 0, mapping 7 |--> -1/2*j0^2 - 7/2]
sage: Pi.characters()[0] ^2 == Pi.central_character()
True
An example using a non-standard twist factor:
sage: Pi = LocalComponent(Newforms(DirichletGroup(21)([-1, 1]), 3, names='j')[0], 7, twist_factor=3); Pi.characters()
[Character of Q_7*, of level 0, mapping 7 |--> -7/2*j0^2 - 49/2]
sage: Pi.characters()[0]^2 == Pi.central_character()
True
check_tempered()#
Check that this representation is tempered (after twisting by $$|\det|^{j/2}$$ where $$j$$ is the twist factor). Since local components of modular forms are always tempered, this is a useful check on our calculations.
EXAMPLES:
sage: Pi = LocalComponent(Newforms(DirichletGroup(21)([-1, 1]), 3, names='j')[0], 7)
sage: Pi.check_tempered()
species()#
The species of this local component, which is either ‘Principal Series’, ‘Special’ or ‘Supercuspidal’.
EXAMPLES:
sage: LocalComponent(Newform('37a'), 37).species()
'Special'
class sage.modular.local_comp.local_comp.PrimitiveSupercuspidal(newform, prime, twist_factor)#
A primitive supercuspidal representation.
Except for some exceptional cases when $$p = 2$$ which we do not implement here, such representations are parametrized by smooth characters of tamely ramified quadratic extensions of $$\QQ_p$$.
EXAMPLES:
sage: f = Newform("50a")
sage: Pi = LocalComponent(f, 5)
sage: type(Pi)
<class 'sage.modular.local_comp.local_comp.PrimitiveSupercuspidal'>
sage: Pi.species()
'Supercuspidal'
sage: TestSuite(Pi).run()
characters()#
Return the two conjugate characters of $$K^\times$$, where $$K$$ is some quadratic extension of $$\QQ_p$$, defining this representation. An error will be raised in some 2-adic cases, since not all 2-adic supercuspidal representations arise in this way.
EXAMPLES:
The first example from [LW2012]:
sage: f = Newform('50a')
sage: Pi = LocalComponent(f, 5)
sage: chars = Pi.characters(); chars
[
Character of unramified extension Q_5(s)* (s^2 + 4*s + 2 = 0), of level 1, mapping s |--> -d - 1, 5 |--> 1,
Character of unramified extension Q_5(s)* (s^2 + 4*s + 2 = 0), of level 1, mapping s |--> d, 5 |--> 1
]
sage: chars[0].base_ring()
Number Field in d with defining polynomial x^2 + x + 1
These characters are interchanged by the Frobenius automorphism of $$\GF{25}$$:
sage: chars[0] == chars[1]**5
True
A more complicated example (higher weight and nontrivial central character):
sage: f = Newforms(GammaH(25, [6]), 3, names='j')[0]; f
q + j0*q^2 + 1/3*j0^3*q^3 - 1/3*j0^2*q^4 + O(q^6)
sage: Pi = LocalComponent(f, 5)
sage: Pi.characters()
[
Character of unramified extension Q_5(s)* (s^2 + 4*s + 2 = 0), of level 1, mapping s |--> 1/3*j0^2*d - 1/3*j0^3, 5 |--> 5,
Character of unramified extension Q_5(s)* (s^2 + 4*s + 2 = 0), of level 1, mapping s |--> -1/3*j0^2*d, 5 |--> 5
]
sage: Pi.characters()[0].base_ring()
Number Field in d with defining polynomial x^2 - j0*x + 1/3*j0^2 over its base field
Warning
The above output isn’t actually the same as in Example 2 of [LW2012], due to an error in the published paper (correction pending) – the published paper has the inverses of the above characters.
A higher level example:
sage: f = Newform('81a', names='j'); f
q + j0*q^2 + q^4 - j0*q^5 + O(q^6)
sage: LocalComponent(f, 3).characters() # long time (12s on sage.math, 2012)
[
Character of unramified extension Q_3(s)* (s^2 + 2*s + 2 = 0), of level 2, mapping -2*s |--> -2*d + j0, 4 |--> 1, 3*s + 1 |--> -j0*d + 1, 3 |--> 1,
Character of unramified extension Q_3(s)* (s^2 + 2*s + 2 = 0), of level 2, mapping -2*s |--> 2*d - j0, 4 |--> 1, 3*s + 1 |--> j0*d - 2, 3 |--> 1
]
Some ramified examples:
sage: Newform('27a').local_component(3).characters()
[
Character of ramified extension Q_3(s)* (s^2 - 6 = 0), of level 2, mapping 2 |--> 1, s + 1 |--> -d, s |--> -1,
Character of ramified extension Q_3(s)* (s^2 - 6 = 0), of level 2, mapping 2 |--> 1, s + 1 |--> d - 1, s |--> -1
]
sage: LocalComponent(Newform('54a'), 3, twist_factor=4).characters()
[
Character of ramified extension Q_3(s)* (s^2 - 3 = 0), of level 2, mapping 2 |--> 1, s + 1 |--> -1/9*d, s |--> -9,
Character of ramified extension Q_3(s)* (s^2 - 3 = 0), of level 2, mapping 2 |--> 1, s + 1 |--> 1/9*d - 1, s |--> -9
]
sage: Newform('24a').local_component(2).characters()
Traceback (most recent call last):
...
ValueError: Totally ramified 2-adic representations are not classified by characters
Examples where $$K^\times / \QQ_p^\times$$ is not topologically cyclic (which complicates the computations greatly):
sage: Newforms(DirichletGroup(64, QQ).1, 2, names='a')[0].local_component(2).characters() # long time, random
[
Character of unramified extension Q_2(s)* (s^2 + s + 1 = 0), of level 3, mapping s |--> 1, 2*s + 1 |--> 1/2*a0, 4*s + 1 |--> 1, -1 |--> 1, 2 |--> 1,
Character of unramified extension Q_2(s)* (s^2 + s + 1 = 0), of level 3, mapping s |--> 1, 2*s + 1 |--> 1/2*a0, 4*s + 1 |--> -1, -1 |--> 1, 2 |--> 1
]
sage: Newform('243a',names='a').local_component(3).characters() # long time
[
Character of ramified extension Q_3(s)* (s^2 - 6 = 0), of level 4, mapping -2*s - 1 |--> -d - 1, 4 |--> 1, 3*s + 1 |--> -d - 1, s |--> 1,
Character of ramified extension Q_3(s)* (s^2 - 6 = 0), of level 4, mapping -2*s - 1 |--> d, 4 |--> 1, 3*s + 1 |--> d, s |--> 1
]
check_tempered()#
Check that this representation is tempered (after twisting by $$|\det|^{j/2}$$ where $$j$$ is the twist factor). Since local components of modular forms are always tempered, this is a useful check on our calculations.
Since the computation of the characters attached to this representation is not implemented in the odd-conductor case, a NotImplementedError will be raised for such representations.
EXAMPLES:
sage: LocalComponent(Newform("50a"), 5).check_tempered()
sage: LocalComponent(Newform("27a"), 3).check_tempered()
species()#
The species of this local component, which is either ‘Principal Series’, ‘Special’ or ‘Supercuspidal’.
EXAMPLES:
sage: LocalComponent(Newform('49a'), 7).species()
'Supercuspidal'
type_space()#
Return a TypeSpace object describing the (homological) type space of this newform, which we know is dual to the type space of the local component.
EXAMPLES:
sage: LocalComponent(Newform('49a'), 7).type_space()
6-dimensional type space at prime 7 of form q + q^2 - q^4 + O(q^6)
class sage.modular.local_comp.local_comp.PrincipalSeries(newform, prime, twist_factor)#
A principal series representation. This is an abstract base class, not to be instantiated directly; see the subclasses UnramifiedPrincipalSeries and PrimitivePrincipalSeries.
characters()#
Return the two characters $$(\chi_1, \chi_2)$$ such that this representation $$\pi_{f, p}$$ is equal to the principal series $$\pi(\chi_1, \chi_2)$$.
EXAMPLES:
sage: from sage.modular.local_comp.local_comp import PrincipalSeries
sage: PrincipalSeries(Newform('50a'), 3, 0).characters()
Traceback (most recent call last):
...
NotImplementedError: <abstract method characters at ...>
check_tempered()#
Check that this representation is tempered (after twisting by $$|\det|^{j/2}$$), i.e. that $$|\chi_1(p)| = |\chi_2(p)| = p^{(j + 1)/2}$$. This follows from the Ramanujan–Petersson conjecture, as proved by Deligne.
EXAMPLES:
sage: LocalComponent(Newform('49a'), 3).check_tempered()
species()#
The species of this local component, which is either ‘Principal Series’, ‘Special’ or ‘Supercuspidal’.
EXAMPLES:
sage: LocalComponent(Newform('50a'), 3).species()
'Principal Series'
class sage.modular.local_comp.local_comp.UnramifiedPrincipalSeries(newform, prime, twist_factor)#
An unramified principal series representation of $${\rm GL}_2(\QQ_p)$$ (corresponding to a form whose level is not divisible by $$p$$).
EXAMPLES:
sage: Pi = LocalComponent(Newform('50a'), 3)
sage: Pi.conductor()
0
sage: type(Pi)
<class 'sage.modular.local_comp.local_comp.UnramifiedPrincipalSeries'>
sage: TestSuite(Pi).run()
characters()#
Return the two characters $$(\chi_1, \chi_2)$$ such that this representation $$\pi_{f, p}$$ is equal to the principal series $$\pi(\chi_1, \chi_2)$$. These are the unramified characters mapping $$p$$ to the roots of the Satake polynomial, so in most cases (but not always) they will be defined over an extension of the coefficient field of self.
EXAMPLES:
sage: LocalComponent(Newform('11a'), 17).characters()
[
Character of Q_17*, of level 0, mapping 17 |--> d,
Character of Q_17*, of level 0, mapping 17 |--> -d - 2
]
sage: LocalComponent(Newforms(Gamma1(5), 6, names='a')[1], 3).characters()
[
Character of Q_3*, of level 0, mapping 3 |--> -3/2*a1 + 12,
Character of Q_3*, of level 0, mapping 3 |--> -3/2*a1 - 12
]
satake_polynomial()#
Return the Satake polynomial of this representation, i.e. the polynomial whose roots are $$\chi_1(p), \chi_2(p)$$ where this representation is $$\pi(\chi_1, \chi_2)$$. Concretely, this is the polynomial
$X^2 - p^{(j - k + 2)/2} a_p(f) X + p^{j + 1} \varepsilon(p).$
An error will be raised if $$j \ne k \bmod 2$$.
EXAMPLES:
sage: LocalComponent(Newform('11a'), 17).satake_polynomial()
X^2 + 2*X + 17
sage: LocalComponent(Newform('11a'), 17, twist_factor = -2).satake_polynomial()
X^2 + 2/17*X + 1/17
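Both outputs are consistent with the formula above: the first gives $$a_{17}(f) = -2$$ (with $$k = 2$$, $$j = 0$$ and trivial $$\varepsilon$$), and passing $$j = -2$$ rescales the middle coefficient by $$17^{(j - k + 2)/2} = 17^{-1}$$ and the constant term to $$17^{j+1} = 17^{-1}$$, which is exactly the second output.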
|
# HELP THIS MATH QUESTION IS DUE: The number (√2+√3)³ can be written in the form a√2+b√3+c√6, where a, b, and c are integers. What is a+b+c?
The number $(\sqrt{2}+\sqrt{3})^3$ can be written in the form $a\sqrt{2} + b\sqrt{3} + c\sqrt{6}$, where $a$, $b$, and $c$ are integers. What is $a+b+c$?
The number $$(\sqrt{2}+\sqrt{3})^3$$ can be written in the form $$a\sqrt{2} + b\sqrt{3} + c\sqrt{6}$$, where $$a$$, $$b$$, and $$c$$ are integers. What is $$a+b+c$$
The number (√2+√3)³ can be written in the form a√2+b√3+c√6, where a, b, and c are integers. What is a+b+c?
i put three forms of the same question
THX EVERYBODY!
Dec 8, 2018
#1
The number (√2+√3)³ can be written in the form a√2+b√3+c√6, where a, b, and c are integers. What is a+b+c?
$$(\sqrt2+\sqrt3)^3=(\sqrt3)^3+3(\sqrt3)^2(\sqrt2)+3(\sqrt3)(\sqrt2)^2+(\sqrt2)^3\\$$
Now expand it out and get the answer for yourself. If you have new problems with this question then let us know.
Dec 8, 2018
#3
I got $$11\sqrt2+9\sqrt3$$, but how would I turn this into the form of $$a\sqrt{2} + b\sqrt{3} + c\sqrt{6}$$?
THX FOR EVERYBODY'S HELP!
qpwoei Dec 9, 2018
#5
I got a different answer from you, but I did not get any $$\sqrt6$$, so I suppose $$c$$ is 0.
Melody Dec 9, 2018
#2
I can clearly read the second one. Thanks!
Hint: Expand the cube of the binomial.
Dec 8, 2018
#4
Yes thanks for posting clearly.
Here is the version I would have liked best
$$\text{The number }\;(\sqrt{2}+\sqrt{3})^3 \text{ can be written in the form }\\a\sqrt{2} + b\sqrt{3} + c\sqrt{6}, \text{ where } a, b, \text{ and } c \text{ are integers. What is } a+b+c\text{?}$$
I like this version best because it is the easiest to copy and to work on.
Dec 9, 2018
#6
I'll post a solution:
We just expand the cube of the binomial, so we get $$\left(\sqrt{2}\right)^3+3\left(\sqrt{2}\right)^2\sqrt{3}+3\sqrt{2}\left(\sqrt{3}\right)^2+\left(\sqrt{3}\right)^3$$
We just simplify this and add like terms, to reach, $$11\sqrt{2}+9\sqrt{3}.$$
From here, we know that $$a=11, b=9, c=0$$ , thus $$11+9+0=\boxed{20}.$$
Dec 9, 2018
edited by tertre Dec 9, 2018
#8
$$\textrm{OH! I didn't think of c being 0. Tricky! (P.S. a=11, b=9, and c=0, not b=6)}$$
qpwoei Dec 9, 2018
#7
$$\textrm{OK, got it! So, I still don't understand how I would turn }11\sqrt2+9\sqrt3\textrm{ into the form of }a\sqrt{2} + b\sqrt{3} + c\sqrt{6}.$$
EDIT: nevermind, I got it. THANK YOU TO EVERYBODY FOR HELPING ME WITH THIS PROBLEM!
Dec 9, 2018
edited by Guest Dec 9, 2018
|
# Lower bound for the effective mass of the Polaron
## MATHPHYS ANALYSIS SEMINAR
Date: June 2, 2022 | 4:15 pm – 5:15 pm
Speaker: Steffen Polzer, University of Geneva
Location: Mondi 2 (I01.01.008), Central Building
Language: English
The Fröhlich Polaron describes the slow movement of an electron in a polar crystal. A long open problem is the asymptotics of the effective mass of the electron as the coupling parameter $\alpha$ tends to infinity. While it has been conjectured by Landau and Pekar that the effective mass grows with the fourth power of the coupling parameter, so far it had only been shown by Lieb and Seiringer that the effective mass diverges in the strong coupling limit. I will present recent work where we give a first quantitative lower bound on the effective mass of the Polaron and show that the divergence is at least as fast as $\alpha^{2/5}$ times some constant. For the proof we apply the representation of the path measure of the Polaron in terms of random collections of intervals that has recently been introduced by Mukherjee and Varadhan. Joint work with Volker Betz.
Contact:
Birgit Oosthuizen-Noczil
Email:
[email protected]
|
## Thinking Mathematically (6th Edition)
The contrapositive of $p\rightarrow q$ is $\sim q\rightarrow \sim p$. The converse of $p\rightarrow q$ is $q \rightarrow p$. The inverse of $p\rightarrow q$ is $\sim p\rightarrow \sim q$. Hence here the converse is: If I am in the South, then I am in Atlanta. The inverse: If I am not in Atlanta, then I am not in the South. The contrapositive: If I am not in the South, then I am not in Atlanta.
|
# Almost Done!
Fingers crossed this will be the last post of the Hydra Chronicles, a.k.a. “Everything You Didn’t Want To Know About Group Statistics But Let Me Tell You Anyway”. Well, it’s more of a salvage job of tidbits from earlier posts that ended up on the cutting room floor that I couldn’t quite bring myself to trash.
One of our early “discoveries” about group statistics is that if we can compute the group sum quickly, we can use it to compute many other group statistics quickly as well. In this post we’ll go over a few of the other group sum strategies I tested out before I settled on the cumsum based approach we used to “beat” data.table.
# rowsum
I did mention this one in passing, but it bears re-examining: base::rowsum is a remarkable creature in the R ecosystem. As far as I know, it is the only base R function that computes group statistics on unequal group sizes in statically compiled code. Despite this it is arcane, especially when compared with its popular cousin base::rowSums.
rowsum and rowSums are near indistinguishable on name alone, but they do quite different things. rowsum collapses rows together by group, leaving column count unchanged, whereas rowSums collapses all columns leaving row count unchanged:
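Here is a toy contrast (tiny matrix invented for illustration):
m <- matrix(1:6, nrow=3)   # 3 rows, 2 columns
g2 <- c('a', 'b', 'a')     # one group label per row
rowsum(m, g2)              # one row per group, column count unchanged
#   [,1] [,2]
# a    4   10
# b    2    5
rowSums(m)                 # one sum per row, columns collapsed
# [1] 5 7 9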
In the single column/vector case, rowsum(x, grp) computes the sum of x by the groups in grp. Typically this operation is done with tapply(x, grp, sum) (or vapply(split(x, grp), sum, 0)):
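A toy example of the equivalence (values invented for illustration):
xx <- c(1, 2, 3, 4)
gg <- c('a', 'b', 'a', 'b')
tapply(xx, gg, sum)
# a b
# 4 6
c(rowsum(xx, gg))          # same group sums, computed in compiled code
# [1] 4 6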
As illustrated above, tapply must split the vector by group, explicitly call the R-level sum on each group, and simplify the result to a vector. rowsum can just compute the group sums directly in statically compiled code. This makes it substantially faster, although obviously limits it to computing sums.
Let’s look at some examples and timings with our beaten-to-death 10MM row, ~1MM group data set, and our timing function sys.time. We’ll order the data first as that is fastest even when including the time to order:
sys.time({
o <- order(grp)
go <- grp[o]
xo <- x[o]
})
user system elapsed
0.690 0.003 0.696
tapply for reference:
sys.time(gsum.0 <- tapply(xo, go, sum))
user system elapsed
2.273 0.105 2.383
And now rowsum:
sys.time(gsum.1 <- rowsum(xo, go))
user system elapsed
0.507 0.038 0.546
all.equal(c(gsum.0), c(gsum.1), check.attributes=FALSE)
[1] TRUE
This is a ~4-5x speedup for the summing step, or a ~3x speedup if we include the time to order. data.table is faster for this task.
All data.table timings are single threaded (setDTthreads(1)).
The slow step for data.table is also the ordering, but we don’t have a good way to break that out.
# colSums
base::rowSums, the aforementioned better-known cousin of rowsum, also computes group statistics with statically compiled code. Well, it kind of does if you consider matrix rows to be groups. base::colSums does the same except for columns. Suppose we have three equal sized ordered groups:
(G <- rep(1:3, each=2))
[1] 1 1 2 2 3 3
And values that belong to them:
set.seed(1)
(a <- runif(6))
[1] 0.266 0.372 0.573 0.908 0.202 0.898
We can compute the group sums using colSums by wrapping our vector into a matrix with as many columns as there are groups. Since R internally stores matrices as vectors in column-major order, this is a natural operation (and also why we use colSums instead of rowSums):
(a.mx <- matrix(a, ncol=length(unique(G))))
[,1] [,2] [,3]
[1,] 0.266 0.573 0.202
[2,] 0.372 0.908 0.898
colSums(a.mx)
[1] 0.638 1.481 1.100
This is equivalent to:
c(rowsum(a, G))
[1] 0.638 1.481 1.100
We run into problems as soon as we have uneven group lengths, but there is a workaround. The idea is to use clever indexing to embed the values associated with each group into columns of a matrix. We illustrate this process with a vector with 95 elements in ten groups. For display purposes we wrap the vector column-wise every ten elements, and designate the groups by a color. The values of the vector are represented by the tile heights:
You can also view the flipbook as a video if you prefer.
The embedding step warrants additional explanation. The trick is to generate a vector that maps the positions from our irregular vector into the regular matrix. There are several ways we can do this, but the one that we’ll use today takes advantage of the underlying vector nature of matrices. In particular, we will index into our matrices as if they were vectors, e.g.:
(b <- 1:4)
[1] 1 2 3 4
(b.mx <- matrix(b, nrow=2))
[,1] [,2]
[1,] 1 3
[2,] 2 4
b[3]
[1] 3
b.mx[1, 2] # matrix indexing
[1] 3
b.mx[3] # vector indexing on matrix
[1] 3
Let’s look at our 95 data points before and after embedding, showing the indices in vector format for both our ordered vector (left) and the target matrix (right):
The indices corresponding to each group diverge after the first group due to the unused elements of the embedding matrix, shown in grey above. What we’re looking for is a fast way to generate the indices in the colored cells in the matrix on the right. In other words, we want to generate the id1 (id.embed in the animation) vector below. For clarity we only show it for the first three groups:
We can emphasize the relationship between these by looking at the element by element difference in each index vector, e.g. using diff:
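For concreteness, a group vector with the same run lengths as the illustration (matching the rle output below), and the index diffs it produces:
g <- rep(1:10, times=c(8, 7, 7, 6, 8, 13, 14, 7, 11, 14))
id.ord <- seq_along(g)    # indices into the ordered vector
head(diff(id.ord), 15)    # always 1
# [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
# diff of the embedding indices id1 (derived below) instead jumps at each
# group transition: 1 1 1 1 1 1 1 7 1 1 1 1 1 1 8 ...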
The indices always increment by one, except at group transitions for the embedding indices as shown in green above. There they increment by $1 + pad$. “$pad$” is how much empty space there is between the end of the group and the end of the column it is embedded in. The name of the game is to compute “$pad$”, which thankfully we can easily do by using the output of rle:
g.rle <- rle(sort(g))
g.rle
Run Length Encoding
lengths: int [1:10] 8 7 7 6 8 13 14 7 11 14
values : int [1:10] 1 2 3 4 5 6 7 8 9 10
rle gives us the length of runs of repeated values, which in our sorted group vector gives us the size of each group. Padding is then the difference between each group’s size and that of the largest group:
g.max <- max(g.rle[['lengths']]) # largest group
(pad <- g.max - g.rle[['lengths']])
[1] 6 7 7 8 6 1 0 7 3 0
To compute the embedding vector we start with a vector of the differences, which as a baseline are all 1:
id0 <- rep(1L, length(g))
We then add the padding at each group transition. Conveniently, each transition sits one element past the end of the previous group, so we can add the padding at the positions following the cumulative sums of the group lengths (dropping the last group, which has no transition after it):
n.g <- length(pad)
id0[cumsum(g.rle[['lengths']])[-n.g] + 1L] <- pad[-n.g] + 1L
head(id0, 22) # first three groups
[1] 1 1 1 1 1 1 1 1 7 1 1 1 1 1 1 8 1 1 1 1 1 1
You’ll notice this is essentially the same thing as diff(id1) from earlier. We reproduce id1 by applying cumsum:
head(cumsum(id0), 22)
[1] 1 2 3 4 5 6 7 8 15 16 17 18 19 20 21 29 30 31 32 33 34 35
A distinguishing feature of these manipulations other than possibly inducing death-by-boredom is that they are all in fast vectorized code. This gives us another reasonably fast group sum function. We split it up into a function that calculates the embedding indices and one that does the embedding and sum, for reasons that will become obvious later. Assuming sorted inputs:
sys.time({
emb <- og_embed_dat(go) # compute embedding indices
og_sum_col(xo, emb) # use them to compute colSums
})
user system elapsed
0.502 0.195 0.699
Most of the run time is actually the embedding index calculation:
sys.time(emb <- og_embed_dat(go))
user system elapsed
0.369 0.141 0.510
One drawback is the obvious wasted memory taken up by the padding in the embedding matrix. This could become problematically large if a small number of groups are much larger than the rest. It may be possible to mitigate this by breaking up the data into batches by group size.
Overall this is a little slower than rowsum for the simple group sum, but as we’ll see shortly there are benefits to this approach.
# Pedal To The Metal
For the sake of an absolute benchmark I wrote a C version of rowsum, og_sum_C, that takes advantage of group-ordered data to compute the group sums and counts. Once the data is sorted, this takes virtually no time:
sys.time(og_sum_C(xo, go))
user system elapsed
0.039 0.001 0.041
The only way to make this substantially faster is to make the sorting faster, or come up with an algorithm that can do the group sums without sorting and somehow do it faster than what we have here. Both of these are beyond my reach.
Let’s compare against all the different methods, including the original cumsum based group sum (cumsum-1), and the precision corrected two pass version (cumsum-2):
We’re actually able to beat data.table with our custom C code, although that is only possible because data.table contributed its fast radix sort to R, and data.table requires more complex code to be able to run a broader set of statistics.
The pattern to notice is that for several of the methods the time spent doing the actual summing is small. For example, for colSums most of the time is ordering and computing the run length encoding / embedding indices (rle*). This is important because those parts can be re-used if the grouping is the same across several calculations. It doesn’t help for single variable group sums, but if we have more variables or more complex statistics it becomes a factor.
Let’s see how helpful re-using the group-based data is with the calculation of the slope of a bivariate regression:
$\frac{\sum(x_i - \bar{x})(y_i - \bar{y})}{\sum(x_i - \bar{x})^{2}}$
The calculation requires four group sums, two to compute $\bar{x}$ and $\bar{y}$, and another two shown explicitly with the $\sum$ symbol. At the same time we only need to compute the ordering and the rle / embedding once because the grouping is the same across all calculations. You can see this clearly in the colSums based group slope calculation in the appendix.
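As a rough sketch of how the re-use works, using the og_embed_dat and og_sum_col helpers from the appendix (not necessarily the exact function behind the timings):
og_slope_col <- function(xo, yo, go) {
  emb <- og_embed_dat(go)            # group data computed once...
  n <- emb[['rle']][['lengths']]     # ...and re-used for all four group sums
  ux <- og_sum_col(xo, emb) / n      # group means of x
  uy <- og_sum_col(yo, emb) / n      # group means of y
  ux.e <- rep(ux, n)                 # recycle means back to input length
  uy.e <- rep(uy, n)
  og_sum_col((xo - ux.e) * (yo - uy.e), emb) /
    og_sum_col((xo - ux.e)^2, emb)
}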
When we compare all the previous implementations of the group slope calculation, against the new rowsum and colSums implementations the advantage of re-using the group information becomes apparent:
Even though rowsum was the fastest group sum implementation, it is the slowest of the base options outside of split/vapply because none of the computation components other than the re-ordering can be re-used. colSums does pretty well and has the advantage of not suffering from the precision issues of cumsum-1, and additionally of naturally handling NA and non-finite values. cumsum-2 might be the best bet of the “base” solutions as it is only slightly slower than the colSums method, but should scale better if there are some groups that are much larger than most.
data.table gets pretty close to the C implementation, but only in the reformulated form which comes with challenging precision issues.
# That’s All Folks
Phew, finally done. I never imagined how out of hand some silly benchmarks against data.table would get. I learned way more than I bargained for, and come away from the whole thing with a renewed admiration for what R does: it can often provide near-statically-compiled performance in an interpreted language right out of the box. It’s not always easy to achieve this, but in many cases it is possible with a little thought and care.
This post is the last post of the Hydra Chronicles post series.
# Appendix
## Acknowledgments
These are post-specific acknowledgments. This website owes many additional thanks to generous people and organizations that have made it possible.
## Session Info
sessionInfo()
R version 3.6.0 (2019-04-26)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Mojave 10.14.6
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
loaded via a namespace (and not attached):
[1] rgl_0.100.19 Rcpp_1.0.1 prettyunits_1.0.2
[4] png_0.1-7 ps_1.3.0 assertthat_0.2.1
[7] rprojroot_1.3-2 digest_0.6.19 foreach_1.4.4
[10] mime_0.7 R6_2.4.0 imager_0.41.2
[13] tiff_0.1-5 plyr_1.8.4 backports_1.1.4
[16] evaluate_0.14 blogdown_0.12 pillar_1.4.1
[19] rlang_0.4.0 progress_1.2.2 lazyeval_0.2.2
[22] miniUI_0.1.1.1 rstudioapi_0.10 callr_3.2.0
[25] bmp_0.3 rmarkdown_1.12 desc_1.2.0
[28] devtools_2.0.2 webshot_0.5.1 servr_0.13
[31] stringr_1.4.0 htmlwidgets_1.3 igraph_1.2.4.1
[34] munsell_0.5.0 shiny_1.3.2 compiler_3.6.0
[37] httpuv_1.5.1 xfun_0.8 pkgconfig_2.0.2
[43] tidyselect_0.2.5 tibble_2.1.3 bookdown_0.10
[46] codetools_0.2-16 crayon_1.3.4 dplyr_0.8.3
[49] withr_2.1.2 later_0.8.0 grid_3.6.0
[52] xtable_1.8-4 jsonlite_1.6 gtable_0.3.0
[55] magrittr_1.5 scales_1.0.0 cli_1.1.0
[58] stringi_1.4.3 farver_1.1.0 fs_1.3.1
[61] promises_1.0.1 remotes_2.0.4 doParallel_1.0.14
[64] testthat_2.1.1 iterators_1.0.10 tools_3.6.0
[67] manipulateWidget_0.10.0 glue_1.3.1 tweenr_1.0.1
[70] purrr_0.3.2 hms_0.4.2 crosstalk_1.0.0
[76] parallel_3.6.0 colorspace_1.4-1 sessioninfo_1.1.1
[79] memoise_1.1.0 knitr_1.23 usethis_1.5.0
## Data
RNGversion("3.5.2"); set.seed(42)
n <- 1e7
n.grp <- 1e6
grp <- sample(n.grp, n, replace=TRUE)
noise <- rep(c(.001, -.001), n/2) # more on this later
x <- runif(n) + noise
y <- runif(n) + noise # we'll use this later
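The ordered vectors used in the timings above (xo, go, and later yo) are not defined in this section; presumably they are the inputs re-ordered by group, as is done inside g_slope_col:
o <- order(grp)
go <- grp[o]
xo <- x[o]
yo <- y[o]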
## Functions
### sys.time
# Run system.time reps times and return the timing closest to the median
# timing that is faster than the median.
sys.time <- function(exp, reps=11) {
res <- matrix(0, reps, 5)
time.call <- quote(system.time({NULL}))
time.call[[2]][[2]] <- substitute(exp)
gc()
for(i in seq_len(reps)) {
res[i,] <- eval(time.call, parent.frame())
}
structure(res, class='proc_time2')
}
print.proc_time2 <- function(x, ...) {
print(
structure(
x[order(x[,3]),][floor(nrow(x)/2),],
names=c("user.self", "sys.self", "elapsed", "user.child", "sys.child"),
class='proc_time'
) ) }
### og_sum_col
The og_ prefix stands for “Ordered Group”.
Compute indices to embed group ordered data into a regular matrix:
og_embed_dat <- function(go) {
## compute run length encoding
g.rle <- rle(go)
g.lens <- g.rle[['lengths']]
max.g <- max(g.lens)
g.lens <- g.lens[-length(g.lens)]
g.pad <- max.g - g.lens + 1L
## compute embedding indices: steps of 1 within each group, plus a jump
## of g.pad at the start of each new group to skip the column padding
id0 <- rep(1L, length(go))
id0[cumsum(g.lens) + 1L] <- g.pad
id1 <- cumsum(id0)
list(idx=id1, rle=g.rle)
}
And use colSums to compute the group sums:
og_sum_col <- function(xo, embed_dat, na.rm=FALSE) {
## group sizes
rle.len <- embed_dat[['rle']][['lengths']]
## allocate embedding matrix
res <- matrix(0, ncol=length(rle.len), nrow=max(rle.len))
## copy data using embedding indices
res[embed_dat[['idx']]] <- xo
setNames(colSums(res, na.rm=na.rm), embed_dat[['rle']][['values']])
}
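As a quick sanity check (toy values made up for illustration), the embedding places each group in its own column of the matrix, so the column sums are the group sums and should agree with rowsum:
go <- c(1L, 1L, 2L, 3L, 3L, 3L) # group-ordered group ids
xo <- c(1, 2, 3, 4, 5, 6) # matching values
emb <- og_embed_dat(go)
og_sum_col(xo, emb)
## 1 2 3
## 3 3 15
stopifnot(all.equal(unname(og_sum_col(xo, emb)), c(rowsum(xo, go))))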
### og_sum_C
Similar to rowsum, except it requires ordered input, and it returns group sizes as an attribute. Group sizes allow us to either compute means or recycle the result statistic back to the input length.
This is a limited, lightly tested implementation that only works for double x values and relies completely on the native code to handle NA/Infinite values. It will ignore the dimensions of matrices, and has undefined behavior if any group has more elements than INT_MAX.
Inputs must be ordered in increasing order by group, with the NA group, if it exists, last. The NA group will be treated as a single group (i.e. NA==NA is TRUE).
og_sum_C <- function(x, group) {
stopifnot(
typeof(x) == 'double', is.integer(group), length(x) == length(group)
)
tmp <- .og_sum_C(x, group)
res <- setNames(tmp[[1]], tmp[[2]])
attr(res, 'grp.size') <- tmp[[3]]
res
}
.og_sum_C <- inline::cfunction(
sig=c(x='numeric', g='integer'),
body="
R_xlen_t len, i, len_u = 1;
SEXP res, res_x, res_g, res_n;
int *gi = INTEGER(g);
double *xi = REAL(x);
len = XLENGTH(g);
if(len != XLENGTH(x)) error(\"Unequal Length Vectors\");
res = PROTECT(allocVector(VECSXP, 3));
if(len > 1) {
// count uniques
for(i = 1; i < len; ++i) {
if(gi[i - 1] != gi[i]) {
++len_u;
} }
// allocate and record uniques
res_x = PROTECT(allocVector(REALSXP, len_u));
res_g = PROTECT(allocVector(INTSXP, len_u));
res_n = PROTECT(allocVector(INTSXP, len_u));
double *res_xi = REAL(res_x);
int *res_gi = INTEGER(res_g);
int *res_ni = INTEGER(res_n);
R_xlen_t j = 0;
R_xlen_t prev_n = 0;
res_xi[0] = 0;
for(i = 1; i < len; ++i) {
res_xi[j] += xi[i - 1];
if(gi[i - 1] == gi[i]) {
continue;
} else if (gi[i - 1] < gi[i]){
res_gi[j] = gi[i - 1];
res_ni[j] = i - prev_n; // this could overflow int; undefined?
prev_n = i;
++j;
res_xi[j] = 0;
} else error(\"Decreasing group order found at index %d\", i + 1);
}
res_xi[j] += xi[i - 1];
res_gi[j] = gi[i - 1];
res_ni[j] = i - prev_n;
SET_VECTOR_ELT(res, 0, res_x);
SET_VECTOR_ELT(res, 1, res_g);
SET_VECTOR_ELT(res, 2, res_n);
UNPROTECT(3);
} else {
// Don't seem to need to duplicate x/g
SET_VECTOR_ELT(res, 0, x);
SET_VECTOR_ELT(res, 1, g);
SET_VECTOR_ELT(res, 2, PROTECT(allocVector(REALSXP, 0)));
UNPROTECT(1);
}
UNPROTECT(1);
return res;
")
### g_slope_col
Compute the slope using the colSums based group sum. Notice how we compute the embedding indices once, and re-use them for all four group sums.
g_slope_col <- function(x, y, group) {
## order
o <- order(group)
go <- group[o]
xo <- x[o]
yo <- y[o]
## compute group means for x/y
emb <- og_embed_dat(go) # << Embedding
lens <- emb[['rle']][['lengths']]
ux <- og_sum_col(xo, emb)/lens # << Group Sum #1
uy <- og_sum_col(yo, emb)/lens # << Group Sum #2
## recycle means to input vector length and
## compute (x - mean(x)) and (y - mean(y))
gi <- rep(seq_along(ux), lens)
x_ux <- xo - ux[gi]
y_uy <- yo - uy[gi]
## Slope calculation
gs.cs <-
og_sum_col(x_ux * y_uy, emb) / # << Group Sum #3
og_sum_col(x_ux ^ 2, emb) # << Group Sum #4
setNames(gs.cs, emb[['rle']][['values']])
}
sys.time(g_slope_col(x, y, grp))
user system elapsed
2.268 0.497 2.765
### g_slope_C
This is a very lightly tested all C implementation of the group slope.
g_slope_C <- function(x, y, group) {
stopifnot(
typeof(x) == 'double', is.integer(group), length(x) == length(group),
typeof(y) == 'double', length(x) == length(y)
)
o <- order(group)
tmp <- .g_slope_C(x[o], y[o], group[o])
res <- setNames(tmp[[1]], tmp[[2]])
res
}
.g_slope_C <- inline::cfunction(
sig=c(x='numeric', y='numeric', g='integer'),
body="
R_xlen_t len, i, len_u = 1;
SEXP res, res_x, res_g, res_y;
int *gi = INTEGER(g);
double *xi = REAL(x);
double *yi = REAL(y);
len = XLENGTH(g);
if(len != XLENGTH(x)) error(\"Unequal Length Vectors\");
res = PROTECT(allocVector(VECSXP, 2));
if(len > 1) {
// First pass compute unique groups
for(i = 1; i < len; ++i) {
if(gi[i - 1] != gi[i]) {
++len_u;
} }
// allocate and record uniques
res_x = PROTECT(allocVector(REALSXP, len_u));
res_y = PROTECT(allocVector(REALSXP, len_u));
res_g = PROTECT(allocVector(INTSXP, len_u));
double *res_xi = REAL(res_x);
double *res_yi = REAL(res_y);
int *res_gi = INTEGER(res_g);
R_xlen_t j = 0;
R_xlen_t prev_i = 0, n;
// Second pass compute means
double xac, yac;
yac = xac = 0;
for(i = 1; i < len; ++i) {
xac += xi[i - 1];
yac += yi[i - 1];
if(gi[i - 1] == gi[i]) {
continue;
} else if (gi[i - 1] < gi[i]){
n = i - prev_i;
res_xi[j] = xac / n;
res_yi[j] = yac / n;
res_gi[j] = gi[i - 1];
prev_i = i;
yac = xac = 0;
++j;
} else error(\"Decreasing group order found at index %d\", i + 1);
}
xac += xi[i - 1];
yac += yi[i - 1];
n = i - prev_i;
res_xi[j] = xac / n;
res_yi[j] = yac / n;
res_gi[j] = gi[i - 1];
// third pass compute slopes
double xtmp, ytmp;
yac = xac = xtmp = ytmp = 0;
j = 0;
for(i = 1; i < len; i++) {
xtmp = xi[i - 1] - res_xi[j];
ytmp = yi[i - 1] - res_yi[j];
xac += xtmp * xtmp;
yac += ytmp * xtmp;
if(gi[i - 1] == gi[i]) {
continue;
} else {
res_xi[j] = yac / xac;
yac = xac = 0;
++j;
}
}
xtmp = xi[i - 1] - res_xi[j];
ytmp = yi[i - 1] - res_yi[j];
xac += xtmp * xtmp;
yac += ytmp * xtmp;
res_xi[j] = yac / xac;
SET_VECTOR_ELT(res, 0, res_x);
SET_VECTOR_ELT(res, 1, res_g);
UNPROTECT(3);
} else {
// Don't seem to need to duplicate x/g. Note res only has two slots
// here (slopes and groups), so unlike in .og_sum_C there is no third
// element to fill
SET_VECTOR_ELT(res, 0, x);
SET_VECTOR_ELT(res, 1, g);
}
UNPROTECT(1);
return res;
")
1. tapply calls lapply internally, not sapply, but the semantics of the default simplify=TRUE case are equivalent. Additionally, the simplification isn’t done with c as suggested by the diagram, but in this case with scalar return values from sum it’s also semantically equivalent.
2. I use single threaded timings as those are more stable, and it allows apples-to-apples comparisons. While it is a definite advantage that data.table comes with its own built-in parallelization, enabling its use means the benchmarks are only relevant for R processes that are themselves not parallelized.
3. rowsum returns a matrix even in the one column case so we drop the dimensions with c to make the comparison to colSums more obvious.
4. We could have the function sort the inputs itself, but doing it this way allows us to compare to the other functions for which we pre-sort, and to re-use the ordering data when summarizing multiple variables.
5. I implemented a version to test feasibility and it had comparable performance.
6. With group-ordered data we can detect group transitions any time the value of the group vector changes. We can use that as a trigger to transfer the accumulator value to the result vector and reset it. We did something similar with our re-implementation of unique for sorted data.
7. sum time is essentially all time that is neither ordering, rle, or embedding index calculation, so for some functions it includes steps other than just summing.
8. Original single pass cumsum based group sum.
9. cumsum based group sum with second precision correcting pass. In this case we use the second pass for every one of the four group sums.
10. Note we’ll really need to use the more complex and probably slightly slower version that handles NAs and non-finite values.
11. I consider ~3x close given that typically differences between statically compiled and interpreted code are more on the order of ~10-100x.
12. I used rayshader primarily to compute the shadows on the texture of the 3D plots. Obviously the idea of the 3D ggplot comes from rayshader too, but since I do not know how to control it precisely enough to merge the frames from its 3D plots into those of gganimate I ended up using my own half-baked 3D implementation. I’ve recorded the code for the curious, but be prepared to recoil in horror. For an explanation of how the 3D rendering works see the Stereoscopic post.
Brodie Gaslam is a hobbyist programmer based on the US East Coast.
|
Element-wise Operators - Maple Help
Element-wise Operators
Description
• The element-wise operators in Maple are:
+~ addition or unary plus (prefix)
-~ subtraction or unary minus (prefix)
*~ multiplication
/~ division
^~ exponentiation
mod~ modulo
!~ factorial (unary postfix)
<~ less than
<=~ less than or equal
>~ greater than
>=~ greater than or equal
=~ equal
<>~ not equal
@~ composition
@@~ repeated composition
||~ concatenation operator
.~ non-commutative multiplication
::~ type operator
and~ logical and
or~ logical or
xor~ exclusive or
implies~ implication
not~ logical not (unary prefix)
union~ set union
subset~ subset
intersect~ set intersection
minus~ set difference
in~ set or list membership
& ~ neutral operator
&name ~ neutral operator (unary prefix)
funct~ general element-wise operator (unary postfix)
• An element-wise operation allows you to distribute the operation over the elements of a list, set, table, Array, Matrix, or Vector. The syntax for this is to use a tilde (~) after the given operator or function name.
> <1,2,3> *~ <4,5,6>;
$\left[\begin{array}{r}{4}\\ {10}\\ {18}\end{array}\right]$ (1)
> sin~(<1,2,3>);
$\left[\begin{array}{c}{\mathrm{sin}}{}\left({1}\right)\\ {\mathrm{sin}}{}\left({2}\right)\\ {\mathrm{sin}}{}\left({3}\right)\end{array}\right]$ (2)
• Dimensioned container types: list, set, Array, Matrix, and Vector can be intermixed in a given operation as long as they are the same size. A table can only appear once in an argument list and can only be mixed with non-containers. For the purpose of element-wise operations, records are not considered container objects.
> [true,true,false,false] xor~ <true,false,true,false>;
$\left[\begin{array}{c}{\mathrm{false}}\\ {\mathrm{true}}\\ {\mathrm{true}}\\ {\mathrm{false}}\end{array}\right]$ (3)
• Non-containers are treated as single elements repeated sufficiently often to match the size of the other containers in the argument sequence.
> <1,2,3> ^~ 2;
$\left[\begin{array}{r}{1}\\ {4}\\ {9}\end{array}\right]$ (4)
• Unlike map and zip, which usually apply to the operands of their given arguments, element-wise operations treat non-container data types as single elements. For example, f~(a=b) evaluates to f(a=b), whereas map(f,a=b) breaks apart the equation, applying f to each side of the equation and resulting in f(a)=f(b).
• It is never an error to use a single-element non-container in any part of an element-wise expression. If an error occurs it happens as a result of applying the base operator to one subset of the overall operation. In other words, when applying ^~, the only error that will ever be raised will come from ^, with the exception of mismatched container sizes.
• The returned data structure will match the type of the given container. A call involving only lists will return a list. When mixed container types are present the return type will be determined according to the following precedence: rtable, list, set. A call involving rtables and arrays(deprecated) will result in an rtable. A call involving arrays and lists will result in an array.
> f~([a1,a2],<b1,b2>,c);
$\left[\begin{array}{c}{f}{}\left({\mathrm{a1}}{,}{\mathrm{b1}}{,}{c}\right)\\ {f}{}\left({\mathrm{a2}}{,}{\mathrm{b2}}{,}{c}\right)\end{array}\right]$ (5)
• Lists and sets are always considered to be 1-dimensional. That is, a list-of-lists is not inferred as a 2-D object.
> [[1,2],[3,4]] +~ 2;
$\left[\left[{1}{,}{2}\right]{+}{2}{,}\left[{3}{,}{4}\right]{+}{2}\right]$ (6)
• Expression sequences are also treated as container types when they appear on either side of an element-wise operator. Due to the normal flattening rules expression sequences cannot normally be used with functional notation as they are interpreted as multiple arguments. The exception to this rule can be found when following the internal representation outlined for overloading element-wise operations. Element-wise operators can be overloaded by binding the ~ function. The ~ function gets called with the operation in square brackets as an index to the function name. In order to distinguish element-wise operator calls with an expression sequence on either side of the operator, the arguments are separated by a special fence token, ` $` (space-dollarsign). The statement a +~ (b,c) is recast as ~[+](a, $, b, c). Similarly, f~(a,b, $, 1, 2) is interpreted as an element-wise function call with two expression sequence arguments, (a,b) and (1,2), resulting in the sequence f(a,1),f(b,2).
• Because ~ is allowed at the beginning of a name in Maple, white-space is required in some situations to make the meaning of a statement unambiguous. Consider the expression (a*~b) -- this means (a *~ b), not a * (~b). Using parentheses or adding a space after the multiplication symbol is required to use the name ~b (tilde-b) in an expression like this.
• The neutral operators &*~ and &+~ are currently valid, so use of element-wise &* and &+ requires a space between the neutral operator and the tilde (i.e., (a &* ~ b) is element-wise &*, and (a &*~ b) is a use of the neutral operator &*~. This distinction is not something that can be enforced by the parser -- be careful when using element-wise &* and &+.)
• When a non-operator expression involving tilde is not part of a function call, a symbolic representation is returned. The underlying representation of f~ is ~[f]. These expressions retain their element-wise properties while being passed into other functions. For example, consider the difference between map(f,[[a,b]]) and map(f~,[[a,b]]). In the first case f is applied to the sublist, [a,b], and in the second case f is applied in an element-wise fashion to the same sublist.
Examples of Element-wise Operators
> A := <1,2,3;4,5,6;7,8,9>;
${A}{:=}\left[\begin{array}{rrr}{1}& {2}& {3}\\ {4}& {5}& {6}\\ {7}& {8}& {9}\end{array}\right]$ (7)
> B := LinearAlgebra:-IdentityMatrix(3);
${B}{:=}\left[\begin{array}{rrr}{1}& {0}& {0}\\ {0}& {1}& {0}\\ {0}& {0}& {1}\end{array}\right]$ (8)
> A . B;
$\left[\begin{array}{rrr}{1}& {2}& {3}\\ {4}& {5}& {6}\\ {7}& {8}& {9}\end{array}\right]$ (9)
> A .~ B;
$\left[\begin{array}{rrr}{1}& {0}& {0}\\ {0}& {5}& {0}\\ {0}& {0}& {9}\end{array}\right]$ (10)
> sin~(A);
$\left[\begin{array}{ccc}{\mathrm{sin}}{}\left({1}\right)& {\mathrm{sin}}{}\left({2}\right)& {\mathrm{sin}}{}\left({3}\right)\\ {\mathrm{sin}}{}\left({4}\right)& {\mathrm{sin}}{}\left({5}\right)& {\mathrm{sin}}{}\left({6}\right)\\ {\mathrm{sin}}{}\left({7}\right)& {\mathrm{sin}}{}\left({8}\right)& {\mathrm{sin}}{}\left({9}\right)\end{array}\right]$ (11)
> -~ A;
$\left[\begin{array}{rrr}{-}{1}& {-}{2}& {-}{3}\\ {-}{4}& {-}{5}& {-}{6}\\ {-}{7}& {-}{8}& {-}{9}\end{array}\right]$ (12)
> A !~;
$\left[\begin{array}{rrr}{1}& {2}& {6}\\ {24}& {120}& {720}\\ {5040}& {40320}& {362880}\end{array}\right]$ (13)
> myproc := proc(x) x^2; end:
> myproc~(A);
$\left[\begin{array}{rrr}{1}& {4}& {9}\\ {16}& {25}& {36}\\ {49}& {64}& {81}\end{array}\right]$ (14)
> A mod~ 3;
$\left[\begin{array}{rrr}{1}& {2}& {0}\\ {1}& {2}& {0}\\ {1}& {2}& {0}\end{array}\right]$ (15)
> A >~ 3;
$\left[\begin{array}{ccc}{3}{<}{1}& {3}{<}{2}& {3}{<}{3}\\ {3}{<}{4}& {3}{<}{5}& {3}{<}{6}\\ {3}{<}{7}& {3}{<}{8}& {3}{<}{9}\end{array}\right]$ (16)
> evalb~(A >~ 3);
$\left[\begin{array}{ccc}{\mathrm{false}}& {\mathrm{false}}& {\mathrm{false}}\\ {\mathrm{true}}& {\mathrm{true}}& {\mathrm{true}}\\ {\mathrm{true}}& {\mathrm{true}}& {\mathrm{true}}\end{array}\right]$ (17)
> (evalb@`>`)~(A,3);
$\left[\begin{array}{ccc}{\mathrm{false}}& {\mathrm{false}}& {\mathrm{false}}\\ {\mathrm{true}}& {\mathrm{true}}& {\mathrm{true}}\\ {\mathrm{true}}& {\mathrm{true}}& {\mathrm{true}}\end{array}\right]$ (18)
> evalhf~(A >~ 3);
$\left[\begin{array}{ccc}{0.}& {0.}& {0.}\\ {1.}& {1.}& {1.}\\ {1.}& {1.}& {1.}\end{array}\right]$ (19)
> A ::~ integer;
$\left[\begin{array}{ccc}{1}{::}{\mathrm{integer}}& {2}{::}{\mathrm{integer}}& {3}{::}{\mathrm{integer}}\\ {4}{::}{\mathrm{integer}}& {5}{::}{\mathrm{integer}}& {6}{::}{\mathrm{integer}}\\ {7}{::}{\mathrm{integer}}& {8}{::}{\mathrm{integer}}& {9}{::}{\mathrm{integer}}\end{array}\right]$ (20)
> [true,true,false,false] and~ <true,false,true,false>;
$\left[\begin{array}{c}{\mathrm{true}}\\ {\mathrm{false}}\\ {\mathrm{false}}\\ {\mathrm{false}}\end{array}\right]$ (21)
> [{1,2},{3,4}] subset~ [{1,3,4,5},{1,3,4,5}];
$\left[{\mathrm{false}}{,}{\mathrm{true}}\right]$ (22)
> A @~ b;
$\left[\begin{array}{ccc}{1}{@}{b}& {2}{@}{b}& {3}{@}{b}\\ {4}{@}{b}& {5}{@}{b}& {6}{@}{b}\\ {7}{@}{b}& {8}{@}{b}& {9}{@}{b}\end{array}\right]$ (23)
> b @~ A;
$\left[\begin{array}{ccc}{b}{@}{1}& {b}{@}{2}& {b}{@}{3}\\ {b}{@}{4}& {b}{@}{5}& {b}{@}{6}\\ {b}{@}{7}& {b}{@}{8}& {b}{@}{9}\end{array}\right]$ (24)
> L := ["a","b"];
${L}{:=}\left[{"a"}{,}{"b"}\right]$ (25)
> L ||~ 1;
$\left[{"a1"}{,}{"b1"}\right]$ (26)
> map(f, [[a,b]]);
$\left[{f}{}\left(\left[{a}{,}{b}\right]\right)\right]$ (27)
> map(f~, [[a,b]]);
$\left[\left[{f}{}\left({a}\right){,}{f}{}\left({b}\right)\right]\right]$ (28)
|
# Math Competition Question Hong Kong 2
Note: This is from the Po Leung Kuk 2012 paper. The previous one I posted was from the Po Leung Kuk 2011.
Here's another question for everyone:
There is a figure made out of two rows of identical squares: three squares in the top row and four in the bottom, so that the first three squares in the bottom row form a $2\times 3$ rectangular block of squares with the squares of the first row. (If anyone knows how to insert a diagram, please do so.) A line, $l$, passes through the top side of the second square on the top row at point $E$ and passes through the bottom side of the third square from the bottom row at point $F$. Line $l$ cuts the figure into 2 equal pieces.
Let the top left corner of the figure be point $A$ and the bottom left be point $C$. If $AE+CF=91$, what is the area of an individual square?
I have no idea on how to approach this problem. A hint on how to start the problem is good enough. Also, if anyone knows how to create a diagram, please put it in the comments.
Thanks!
• Is the figure perfectly symmetric about the line between the 2nd and 3rd squares on the bottom row? – JimmyK4542 Jun 11 '14 at 2:44
• nope. Both rows start from the same column. – user148697 Jun 11 '14 at 2:46
• @jonnytan999 I guess the hardest part of this question is -- (1) guessing what the figure looks like from your description; and (2) the meaning of your 'EQUAL' [are the two equal pieces EQUAL in AREA or EQUAL in shape?] – Mick Jun 11 '14 at 15:18
• can you tell me how to insert the diagram, please... – user148697 Jun 12 '14 at 5:03
Let $x$ be the sidelength of a square. Then the area of the entire figure is $7x^2$.
One half of the figure is quadrilateral $ACFE$, which is a trapezoid with height $2x$ and bases $AE$ and $CF$.
Can you figure out the area of this trapezoid in terms of $x$? Then, you should be able to figure out what value of $x$ makes this trapezoid have an area of half the area of the figure.
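Carrying the hint through (filling in the arithmetic the answer leaves as an exercise): the trapezoid has area $\frac{1}{2}(AE+CF)(2x) = 91x$, and setting this equal to half the total area gives $91x = \frac{7x^2}{2}$, so $x = 26$ and the area of an individual square is $x^2 = 676$.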
|
# Definition talk:Product Space (Topology)/Two Factor Spaces
## Product Topology
Let $\TT$ be the product topology on $X$. Since $\TT$ is the topology generated by $\SS = \set {\map {\pr_i^{-1} } U: i \in I, U \in \vartheta_i}$ isn't it also the topology that renders all projections continuous (linear) functionals? Therefore, $\TT$ is the product topology on $X$. What one needs to prove is that the functions:
$\pi_1(x)=x_1$
and
$\pi_2(x)=x_2$
where $x\in X$ is $x=(x_1,x_2)$ with $x_1\in X_1$ and $x_2\in X_2$,
are continuous with respect to the described topology with basis $\BB$. However, I don't like very much this definition because firstly it refers only to cartesian products between two topological spaces (while you can have Cartesian products made up of arbitrarily many such spaces) and because most textbooks provide the definition that the product topology is the topology that renders all projections continuous (i.e. exactly this definition). For example S. Axler and K.A. Ribet, "A Taste of Topology", Springer Editions, Berlin 2005, ISBN: 0-387-25790-X. I suggest that we modify the definition of Product topology and have just a link to the product topology.
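(For reference, a compact restatement of the two definitions under discussion, in the notation used above: the topology generated by the sub-basis $\SS = \set {\map {\pr_i^{-1} } U: i \in \set {1, 2}, U \in \vartheta_i}$, versus the topology with basis $\BB = \set {U_1 \times U_2: U_1 \in \vartheta_1, U_2 \in \vartheta_2}$; the point at issue is that these generate the same topology in the two-factor case.)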
While I agree with most of what you say, some remarks are in order. First of all, the projections aren't functionals in the functional analytic sense of the word (which is the only one known to me), they are operators. Linearity subsumes an additive structure, which isn't given. Lastly, please be aware that the definition of an arbitrary Cartesian product needs the Axiom of Choice to render it nonempty. --Lord_Farin 12:15, 30 November 2011 (CST)
## Refactor
The refactor comment suggests creating theorems for the statements:
The product topology $\tau$ is the same as the box topology for $S_1 \times S_2$.
It is also the same as the product topology for $S_1 \times S_2$, which follows from Box Topology on Finite Product Space is Product Topology.
which seemed to me to be unnecessary since Box Topology on Finite Product Space is Product Topology already does this. Instead it seemed more appropriate to create the page with two definitions and state that the equivalence of the definitions was given by Box Topology on Finite Product Space is Product Topology.
This is what I have done on this page Definition:Product Space (Topology)
If this is a suitable alternative, let me know and I'll put these pages in place. --Leigh.Samphier (talk) 04:05, 17 December 2019 (EST)
I would say that there do not need to be two definitions. Instead I would make the connection between product space and Product topology more specific, intimately connected. At present, they are unjustly disjoint.
Maybe the most pure solution would be to transclude both Product topology and box topology onto a general, almost disambiguation-style page that highlights the similarities and differences.
By appropriately distinguishing the finite and infinite cases, we could provide proper guidance of intuition while still remaining precise in the infinite case. Some examples would really finish the deal.
What do you say? — Lord_Farin (talk) 13:26, 17 December 2019 (EST)
A worthy aim but challenging, and needs someone who knows their way around. Well volunteered, Leigh! :-) --prime mover (talk) 14:39, 17 December 2019 (EST)
That has me thinking. I'm not convinced that disambiguation is required, so I may not be the person to refactor this.
In my opinion when someone searches for Product Space or Product Topology they want the initial topology with respect to the projections because this is the categorical product whether they know this or not. They are very unlikely to want the box topology.
Someone encountering the Product Space for the first time may find the initial topology with respect to the projections definition daunting and the box definition on a finite set of topologies would be more easily understood and all that is required. But irrespective of the definition it is the initial topology that is being looked for. The box topology is only of interest as an aside to the general definition of the product topology on an infinite cartesian product to emphasise and contrast with the initial topology definition.
If I had a blank slate, I would create two pages:
(1) Product Space/Product Topology
(2) Box topology
there would be no Tychonoff Topology page, this would just be an 'Also known as' on the Product Space page.
The Product Space page would have a general definition and a finite definition. The general definition would have two definitions:
(1) The initial topology with respect to the projections
(2) The set of cartesian products of open sets where only finitely many open sets are not the complete space.
For the finite definition, the box definition then becomes a special case of the second definition of the general definition.
Both pages could then have a note that states that the two definitions on the finite Cartesian product of spaces define the same topology.
But I don't have a blank slate. The current state is that there are two definitions and 3 pages. And I'm not sure what is trying to be achieved by that. The pages Definition:Product Topology and Definition:Product Space (Topology) are duplicates as they both define the topology and the space with the topology. So either they should be merged or keep the two pages but have Product Topology be the topology only and Product Space be the space with the Product Topology only.
Lots to be thought about. --Leigh.Samphier (talk) 05:49, 18 December 2019 (EST)
All things considered, I agree with you that there should not be 3 pages. The Tychonoff topology should stay because we prefer historical naming, product space makes sense because it is the product. Things can be kept simple because we don't talk about the "box space".
To keep things accessible, I would keep the distinction between the binary and general product as separate subpages.
Do we agree enough for you to draft a suggestion based on our discussion? — Lord_Farin (talk) 13:34, 18 December 2019 (EST)
I think I have enough to have another attempt. --Leigh.Samphier (talk) 18:24, 18 December 2019 (EST)
## Refactor - Take 2
I have made a second attempt to rework the pages Definition:Product Space (Topology), Definition:Product Topology and Definition:Box Topology.
My proposed new versions are:
Definition:Product Space (Topology)
Definition:Product Topology
Definition:Box Topology
respectively.
On Definition:Product Space (Topology) I have defined the product space as the Cartesian product with the Product topology. No mention is made of the box topology. The theorem Natural Basis of Product Topology of Finite Product expresses the basis for the topology on the product space in more familiar terms.
The pages Definition:Product Topology and Definition:Box Topology have notes added to walk the reader from the definition of the Product topology as an initial topology to more familiar definitions in terms of basis of products of open sets. I have also clarified what the Product topology gives in terms of the Categorical Product of topological spaces and that the box topology does not give us.
With the proposed changes above, the page Product Topology is Topology probably needs to be reworked. I would suggest that the page be renamed to something like Product of Open Sets is Basis for Topology on Cartesian Product and the theorem reworded as such. A second proof could be added to state that it is a direct consequence of Natural Basis of Product Topology of Finite Product and retain the proof from W A Sutherland.
The page Projection from Product Topology is Continuous would need to be reworked to simply state that this follows by definition as the proof for the general case does: Projection from Product Topology is Continuous/General Result. Although if these pages are merged with Projection from Product Topology is Open into "Projection from Product Topology is Open and Continuous" then the theorems would have a little more substance.
Otherwise, I haven't seen anything else that is significantly impacted by the proposed changes. But I'm still looking.
But it's time for some feedback before going too much further. --Leigh.Samphier (talk) 02:18, 29 December 2019 (EST)
I'm going to leave that to someone else, as my own headspace is getting too small. --prime mover (talk) 06:38, 29 December 2019 (EST)
I hope to attend to it later today. — Lord_Farin (talk) 04:12, 30 December 2019 (EST)
Ok, so I have a few remarks:
• The first two sentences under "Note" say not much more than a link under Also see would.
• I would relegate the statement of the exact nature of natural basis and subbasis to the respective subpages and/or subsume it in the Also see section as well.
• The relation to the box topology is nice. I am however not sure how to best structure the pages so that the reference to it from both Product and box topology are natural (subpages are not very suitable here I feel). The links to the "may not" results can then also reside on this subpage. @Prime.mover, do you have an idea how to structure this? Surely there are precedents.
Maybe we could make an overview page titled "Relation between Product and Box Topology" or some such?
Split the "Also see" into subsections. (There's a precedent -- Barto started this convention when he was focusing on raising the importance of this section). Put one section as "Box Topology" and in it list (and briefly describe -- the usual one-liner) what the various pages contain. (We may even want to put that "also see" into its own transcluded subpage -- again, there are precedents for this, can't immediately place one, but it's something we've done.)
As it stands, the section in question works well enough as it is. That is, taking Definition:Product Topology as an example: "Also see" starts at "Note" (which is renamed "Also see") and ends where the existing "Also see" is, which only contains the existing links anyway.
See how it looks when it's done. We can tinker with it then.
BEWARE SPELLO: "Box Topology may not be Coursest Topology such that Projections are Continuous" -- that is "Coarsest". --prime mover (talk) 17:09, 30 December 2019 (EST)
• Please assess to the best of your ability the impact of these changes on any page referring to any impacted page before effecting the change. You don't have to list it exhaustively for me, but I've always found it convenient to make a list of pages to be updated/restated.
Overall it looks nice and structured. Thanks for your efforts! — Lord_Farin (talk) 09:35, 30 December 2019 (EST)
I'll take all that on board and try and create a common page that can be transcluded by both the Product Topology page and the Box Topology page. I did try this but it wasn't working for me at the time.
Regarding impacted pages I will certainly do this. The biggest change is on the page Definition:Product Space (Topology) where it is now defined in terms of the Product Topology (like the general case) rather than the basis of cartesian products of open sets of the factor spaces. I do immediately invoke the theorem Natural Basis of Product Topology of Finite Product to define a basis for the Product topology. This is so that any link to the page that assumes that the product topology is defined by the basis of cartesian products of open sets of the factor spaces is still free to do so. Is this Ok? Or does an invocation to Natural Basis of Product Topology of Finite Product need to be made by pages linking to the page and assuming the basis? --Leigh.Samphier (talk) 16:42, 30 December 2019 (EST)
Thanks for taking this task on. As I've always said, it's easy to just write up pages containing mathematics, but it's surprisingly difficult to structure it. But then it takes a huge effort to make a task look effortless. --prime mover (talk) 17:12, 30 December 2019 (EST)
I think the latest versions are closer to what has been suggested. The pages that I have altered or added are:
--Leigh.Samphier (talk) 00:50, 31 December 2019 (EST)
I like the structure of the page "Relation between Product and Box Topology". I've put a mergeto note on the Universal Property page linked there as it is just the statement of the categorical product and should be subsumed in there (a redirect can remain).
I am however not so sure it should be the "Also see" section (NB: note the capitalisation of "Also see") because this can create unexpected effects (e.g. Box topology-specific references under the Product page). Instead I would prominently link to the "Relation" page in the Also see section of Product and Box topology.
Besides that there might be minor stylistic points but as Prime.mover suggested they can be dealt with after the shift has taken place. Good job! — Lord_Farin (talk) 07:43, 31 December 2019 (EST)
Ok, I've now created Universal Property of Product of Topological Spaces and Definition:Product Space of Topological Spaces as redirects. I've dropped the tranclusion of the page Relation between Product and Box Topology from both Definition:Box Topology and Definition:Product Topology and replaced with a prominent link.
The pages to be moved and added are now:
The following pages list the further changes that I believe need to be made to align with the changes above.
• "Pages Impacted: Box Topology"
• "Pages Impacted: Product Topology"
• "Pages Impacted: Product Space"
I've had enough for the first day of 2020. --Leigh.Samphier (talk) 01:31, 1 January 2020 (EST)
I have now completed the changes. --Leigh.Samphier (talk) 22:37, 4 January 2020 (EST)
Superb job. --prime mover (talk) 01:20, 5 January 2020 (EST)
## Further work
I have embarked on the task of merging Definition:Product Topology and Definition:Product Space (Topology) into one actual entity.
This is so as to make it completely clear that they are one and the same thing conceptually: the one is the topological space induced by the other.
Thus the main definition will be as the coarsest topology on the cartesian product of an arbitrary family of sets.
The 2-factor instance of this will be entered as a subpage. This is as opposed to on the Definition:Product Space (Topology) page where the general form is as a subpage of the 2-factor instance.
In this way it will be easier to emphasise the definition of the natural basis and natural sub-basis, which may also be on the way to being refactored.
This may take a day or two, and I have a day job and a social outlet taking up most of my time tomorrow, so don't expect this to be complete much before the weekend. --prime mover (talk) 22:39, 8 December 2020 (UTC)
I think what you are proposing is a good idea. It appears to be more fashionable to refer to the coarsest topology on the cartesian product as the product topology and the resulting space as the product space. This aligns well with this space being the categorical product, and hence why product topology is more fashionable. It would be good to retain a historical note page for the Tychonoff Topology that recognised this as the same thing as the product topology and transcluded the definition of the product topology. More people will be searching for the product topology these days than the Tychonoff topology, but at the same time you don't want to miss any search for Tychonoff --Leigh.Samphier (talk) 00:00, 9 December 2020 (UTC)
Major renaming under way. Some of the above will become inaccurate as I change the names of the pages themselves with a view to avoiding redlinks as and when some of the confusing renames are cleaned up. The aim is that much of the above discussion will be removed and the resulting work summarised. --prime mover (talk) 07:26, 9 December 2020 (UTC)
Right, I think I've hammered it into the shape I envisaged.
Having done so, I'm not sure now whether it's best this way -- we need to evaluate whether it's best:
a) making the general case the main page, with a subpage of "2 factor space"
b) making the main page the 2-factor space and the subpage the "General case" (which is the way it was before).
Thoughts on which way round is best? The main disadvantage of the first case is that the page names becomes unwieldy -- although most of the results should relate to the general case so this may not be all that much of a problem.
I also note there may also be a need to document the general product space for a general finite cartesian product, as this is a significant case which is explored in its own right and merits its own page.
Apologies for having taken so long to get round to it, I have finally got my head round understanding the nature of product spaces, it was like a light going on. --prime mover (talk) 08:59, 10 December 2020 (UTC)
|
###### Example 3.3.4
Solve for $$x$$ in $$y=mx+b\text{.}$$ (This is a line's equation in slope-intercept form.)
In the equation $$y=mx+b\text{,}$$ we see that $$x$$ is multiplied by $$m$$ and then $$b$$ is added to that. Our first step will be to isolate $$mx\text{,}$$ which we'll do by subtracting $$b$$ from each side of the equation:
\begin{align*} y\amp=m\attention{x}+b\\ y\subtractright{b}\amp=m\attention{x}+b\subtractright{b}\\ y-b\amp=m\attention{x} \end{align*}
Now that we have $$mx$$ on its own, we'll note that $$x$$ is multiplied by $$m\text{.}$$ To “undo” this, we'll need to divide each side of the equation by $$m\text{:}$$
\begin{align*} \divideunder{y-b}{m}\amp=\divideunder{m\attention{x}}{m}\\ \frac{y-b}{m}\amp=\attention{x}\\ x\amp=\frac{y-b}{m} \end{align*}
|
# Quark Matter 2017
5-11 February 2017
Hyatt Regency Chicago
America/Chicago timezone
## Non-prompt $D^{0}$-meson production in Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV in STAR
Not scheduled
2h 30m
Hyatt Regency Chicago
#### Hyatt Regency Chicago
151 East Wacker Drive Chicago, Illinois, USA, 60601
Board: H20
Poster
### Speaker
Xiaolong Chen (USTC/LBNL)
### Description
Heavy flavor quarks ($c$, $b$) are produced dominantly by the interactions of the initial incoming partons, and thus experience the entire evolution of the hot and dense medium created in high-energy nuclear collisions. Systematic investigations of charm and bottom hadron production in heavy-ion collisions will shed light on the parton energy loss in the Quark-Gluon Plasma (QGP), which can help constrain the transport parameters of the QGP medium.
In this poster, we will present the first measurement of non-prompt $D^{0}$-meson production from bottom hadron decays using the Heavy Flavor Tracker (HFT) in Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV by the STAR experiment. Distributions of the Distance of Closest Approach (DCA) for reconstructed $D^{0}$-mesons are studied, and fitted with the template distributions for the prompt and non-prompt $D^{0}$-mesons obtained from Monte Carlo simulations. Fractions of non-prompt $D^{0}$-mesons are extracted in the transverse momentum region 3 $< p_{T} <$ 8 GeV/c. The results are compared to model calculations and physics implications on the bottom production will be discussed.
Collaboration STAR Open Heavy Flavors
### Primary author
Xiaolong Chen (USTC/LBNL)
|
News: Currently the LaTeX and hidden solutions on this blog do not work on Google Reader.
Email me if you have suggestions on how to improve this blog!
## Wednesday, 31 August 2011
### Math GRE - #30
What is $\int_{-3}^3{|x+1|\,dx}?$
1. 0
2. 5
3. 10
4. 15
5. 20
Solution :
This is a very basic question. The usual method of dealing with absolute values in integrals is to split the integral into a sum of two integrals and remove the absolute values. Since $x+1$ is negative on $[-3, -1)$ and positive on $[-1, 3]\,\,\,$ , we can split the integral into: $\int_{-3}^3{|x+1|\,dx}=-\int_{-3}^{-1}{(x+1)\,dx}+\int_{-1}^3{(x+1)\,dx}=10.$ Another way of doing this problem is to imagine the graph of $|x+1|$ and realize that the integral is the sum of the areas of two 45-45-90 triangles (one with base 2 and height 2, the other with base 4 and height 4). Simply add the areas of the triangles up and we're done.
Bonus question: Given that $c$ is a constant, what is $\int_{-\infty}^{\infty}{e^{-|x+c|}\,dx}?$
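For the bonus, note that the constant $c$ merely translates the graph, so substituting $u=x+c$ gives $\int_{-\infty}^{\infty}{e^{-|x+c|}\,dx}=\int_{-\infty}^{\infty}{e^{-|u|}\,du}=2\int_{0}^{\infty}{e^{-u}\,du}=2.$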
This webpage is LaTeX enabled. To type in-line formulae, type your stuff between two '\$'. To type centred formulae, type '$' at the beginning of your formula and '$' at the end.
|
# 22.7 Magnetic force on a current-carrying conductor (Page 2/2)
## Section summary
• The magnetic force on current-carrying conductors is given by
$F=\text{IlB}\phantom{\rule{0.25em}{0ex}}\text{sin}\phantom{\rule{0.25em}{0ex}}\mathrm{\theta ,}$
where $I$ is the current, $l$ is the length of a straight conductor in a uniform magnetic field $B$ , and $\theta$ is the angle between $I$ and $B$ . The force follows RHR-1 with the thumb in the direction of $I$ .
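As a quick numeric illustration (values chosen for this example, not taken from the exercises below): a straight 0.500-m wire carrying 10.0 A perpendicular to a 0.200-T field experiences
$F = IlB\,\text{sin}\,\theta = \left(10.0\phantom{\rule{0.25em}{0ex}}\text{A}\right)\left(0.500\phantom{\rule{0.25em}{0ex}}\text{m}\right)\left(0.200\phantom{\rule{0.25em}{0ex}}\text{T}\right)\,\text{sin}\,\text{90º} = 1.00\phantom{\rule{0.25em}{0ex}}\text{N}.$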
## Conceptual questions
Draw a sketch of the situation in [link] showing the direction of electrons carrying the current, and use RHR-1 to verify the direction of the force on the wire.
Verify that the direction of the force in an MHD drive, such as that in [link] , does not depend on the sign of the charges carrying the current across the fluid.
Why would a magnetohydrodynamic drive work better in ocean water than in fresh water? Also, why would superconducting magnets be desirable?
Which is more likely to interfere with compass readings, AC current in your refrigerator or DC current when you start your car? Explain.
## Problems&Exercises
What is the direction of the magnetic force on the current in each of the six cases in [link] ?
(a) west (left)
(b) into page
(c) north (up)
(d) no force
(e) east (right)
(f) south (down)
What is the direction of a current that experiences the magnetic force shown in each of the three cases in [link] , assuming the current runs perpendicular to $B$ ?
What is the direction of the magnetic field that produces the magnetic force shown on the currents in each of the three cases in [link] , assuming $\mathbf{\text{B}}$ is perpendicular to $\mathbf{\text{I}}$ ?
(a) into page
(b) west (left)
(c) out of page
(a) What is the force per meter on a lightning bolt at the equator that carries 20,000 A perpendicular to the Earth’s $3\text{.}\text{00}×{\text{10}}^{-5}\text{-T}$ field? (b) What is the direction of the force if the current is straight up and the Earth’s field direction is due north, parallel to the ground?
(a) A DC power line for a light-rail system carries 1000 A at an angle of $\text{30.0º}$ to the Earth’s $\text{5.00}×{\text{10}}^{-5}\phantom{\rule{0.25em}{0ex}}\text{-T}$ field. What is the force on a 100-m section of this line? (b) Discuss practical concerns this presents, if any.
(a) 2.50 N
(b) This is about half a pound of force per 100 m of wire, which is much less than the weight of the wire itself. Therefore, it does not cause any special concerns.
What force is exerted on the water in an MHD drive utilizing a 25.0-cm-diameter tube, if 100-A current is passed across the tube that is perpendicular to a 2.00-T magnetic field? (The relatively small size of this force indicates the need for very large currents and magnetic fields to make practical MHD drives.)
A wire carrying a 30.0-A current passes between the poles of a strong magnet that is perpendicular to its field and experiences a 2.16-N force on the 4.00 cm of wire in the field. What is the average field strength?
1.80 T
(a) A 0.750-m-long section of cable carrying current to a car starter motor makes an angle of $\text{60º}$ with the Earth’s $5\text{.}\text{50}×{\text{10}}^{-5}\phantom{\rule{0.25em}{0ex}}\text{T}$ field. What is the current when the wire experiences a force of $\text{7.00}×{\text{10}}^{-3}\phantom{\rule{0.25em}{0ex}}N$ ? (b) If you run the wire between the poles of a strong horseshoe magnet, subjecting 5.00 cm of it to a 1.75-T field, what force is exerted on this segment of wire?
(a) What is the angle between a wire carrying an 8.00-A current and the 1.20-T field it is in if 50.0 cm of the wire experiences a magnetic force of 2.40 N? (b) What is the force on the wire if it is rotated to make an angle of $\text{90º}$ with the field?
(a) $\text{30º}$
(b) 4.80 N
The force on the rectangular loop of wire in the magnetic field in [link] can be used to measure field strength. The field is uniform, and the plane of the loop is perpendicular to the field. (a) What is the direction of the magnetic force on the loop? Justify the claim that the forces on the sides of the loop are equal and opposite, independent of how much of the loop is in the field and do not affect the net force on the loop. (b) If a current of 5.00 A is used, what is the force per tesla on the 20.0-cm-wide loop?
|
# Circle A has a radius of 2 and a center at (5 ,1 ). Circle B has a radius of 1 and a center at (3 ,2 ). If circle B is translated by <-2 ,6 >, does it overlap circle A? If not, what is the minimum distance between points on both circles?
Feb 15, 2018
The circle B does not overlap circle A.
The minimum distance between points on both circles is $\sqrt{65} - 3 \approx 5.062$ units

#### Explanation:

Circle A:
radius ${r}_{1} = 2$
center $A \equiv \left(5 , 1\right)$
Circle B:
radius ${r}_{2} = 1$
center $B \equiv \left(3 , 2\right)$
translation of B $B ' \equiv < - 2 , 6 >$
new center $\left(3 - 2 , 2 + 6\right) = \left(1 , 8\right)$
Distance between center of A and new center of B' is
$\sqrt{{\left(1 - 5\right)}^{2} + {\left(8 - 1\right)}^{2}} = \sqrt{{4}^{2} + {7}^{2}} = \sqrt{65} \approx 8.062$
$A B ' = \sqrt{65} \approx 8.062$
radius of A ${r}_{1} = 2$
radius of B ${r}_{2} = 1$
Sum of the radii ${r}_{1} + {r}_{2} = 2 + 1 = 3$
Sum of the radii $<$ distance between the centers A and B'.
Hence, they do not overlap.
The difference between the distance between the centers and the sum of the radii
represents the shortest distance between the two circles:
separation $= \sqrt{65} - 3 \approx 8.062 - 3 = 5.062$
The two circles are separated by approximately $5.062$ units
Feb 15, 2018
$\text{no overlap; min. distance } \approx 5.062$
#### Explanation:
$\text{what we have to do here is compare the distance } (d)$
$\text{between the centres to the sum of radii}$
• $\text{if sum of radii} > d \text{ then circles overlap}$
• $\text{if sum of radii} < d \text{ then no overlap}$
$\text{before calculating d we require to find the centre of}$
$\text{B under the given translation}$
$\text{under the translation } < - 2 , 6 >$
$\left(3 , 2\right) \to \left(3 - 2 , 2 + 6\right) \to \left(1 , 8\right) \leftarrow \textcolor{red}{\text{new centre of B}}$
$\text{to calculate d use the "color(blue)"distance formula}$
• $d = \sqrt{{\left({x}_{2} - {x}_{1}\right)}^{2} + {\left({y}_{2} - {y}_{1}\right)}^{2}}$
$\text{let "(x_1,y_1)=(5,1)" and } \left({x}_{2} , {y}_{2}\right) = \left(1 , 8\right)$
$d = \sqrt{{\left(1 - 5\right)}^{2} + {\left(8 - 1\right)}^{2}} = \sqrt{16 + 49} = \sqrt{65} \approx 8.062$
$\text{sum of radii } = 2 + 1 = 3$
$\text{since sum of radii} < d \text{, there is no overlap}$
$\text{min. distance "=d-" sum of radii}$
$\textcolor{w h i t e}{\times \times \times \times \times} = 8.062 - 3 = 5.062$
graph{((x-5)^2+(y-1)^2-4)((x-1)^2+(y-8)^2-1)=0 [-20, 20, -10, 10]}
|
# Learnings from State Farm Distracted Driver dataset on Kaggle
I recently came across Fast.AI, an online deep learning course. Being described as a no-BS, practical course, I decided to give it a try. I'm on my 3rd lecture and have learned some nice concepts like finetuning, activation functions and some cool linux tricks too.
As recommended in the course, I decided to try my hand at the State Farm Distracted Driver competition on Kaggle. This is basically an image classification competition with 10 categories. Each category has images of drivers, all shot from the same angle. The drivers in different categories are disctrated in different manners. This is what we need to identify. Each image is 480 x 640 in size.
I tried to use the VGG16 model that had been used in the course for the Dogs vs Cats dataset on the distracted drivers dataset.
### Input image sizes
The VGG16 model uses 224 x 224 sized images as input. This confused me: how was the Dogs vs Cats dataset, which had random sized images, trained on this model, and how should I proceed with mine? It turns out that keras has the function flow_from_directory() which is used to read images from a specified directory. This function also takes a target_size tuple parameter, which allows us to specify the required image size, and Keras takes care of resizing the input images.
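A minimal sketch of what this looks like (the directory path and batch size here are assumptions for illustration):

```python
from keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator()
# Keras resizes each image on the fly, so the 480 x 640 originals
# come out as the 224 x 224 inputs VGG16 expects.
train_batches = gen.flow_from_directory(
    'data/train',            # assumed directory layout
    target_size=(224, 224),  # VGG16 input size
    batch_size=64,
    class_mode='categorical',
)
```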
### Predictions of all images in test set is the same class
After training my model for some epochs, I was able to achieve a peak accuracy of ~70%. However, when I got to predicting the images in my test set, they were all being predicted as the same class! Turns out I was feeding train and test images to the model in different manners. The model was trained with images with values on a scale of 0-255, but I was predicting on test images on a scale of 0-1. A silly oversight!
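One way to avoid this class of bug is to route both training and test images through generators configured identically, so they receive the same preprocessing (a sketch under the same assumed layout; the key point is that any rescale factor must match between the two):

```python
from keras.preprocessing.image import ImageDataGenerator

# Whatever scaling was used in training (here: raw 0-255 pixels, i.e.
# no rescale argument) must also be used at prediction time.
gen = ImageDataGenerator()
test_batches = gen.flow_from_directory(
    'data/test',
    target_size=(224, 224),
    batch_size=64,
    class_mode=None,  # test images have no labels
    shuffle=False,    # keep order so predictions line up with filenames
)
# preds = model.predict_generator(test_batches, steps=len(test_batches))
```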
### Using a “sample” directory really helps!
If all the dataset data is in the "data/" directory, create a new folder "data/sample" with the same structure as the dataset. Have your train, val and test in the sample folder too, just with a smaller number of images. This folder really helps with quick testing of the code.
So these are some amateur mistakes I made, along with how to solve them. I hope these help you if you're ever stuck in the same spot as I was!
Comment below if you have any questions :)
|
# M60-03
Math Revolution GMAT Instructor (11 Jun 2018, 02:26)
The cost, $$c$$, and the revenue, $$r$$, are related via the equation $$c = ar + b$$. By how much is the cost increased if the revenue is increased by $10?

(1) $$a = 0.5$$
(2) $$b = 5$$
"Free Resources-30 day online access & Diagnostic Test"
"Unlimited Access to over 120 free video lessons - try it yourself"
Math Revolution GMAT Instructor (11 Jun 2018, 02:26)
Official Solution:
Forget conventional ways of solving math questions. For DS problems, the VA (Variable Approach) method is the quickest and easiest way to find the answer without actually solving the problem. Remember that equal numbers of variables and independent equations ensure a solution.
The first step of the VA (Variable Approach) method is to modify the original condition and the question, and then recheck the question.
The equation $$c = ar + b$$ tells us that the cost increases by $$a$$ dollars when the revenue increases by 1 dollar. It follows that the cost increases by $$10a$$ when the revenue increases by 10 dollars. So, the question is asking for the value of $$a$$. The answer is A.

Answer: A
"Free Resources-30 day online access & Diagnostic Test"
"Unlimited Access to over 120 free video lessons - try it yourself"
Intern
Joined: 14 Jun 2018
Posts: 47
Location: India
### Show Tags
13 Sep 2018, 05:02
MathRevolution wrote:
The cost, $$c$$, and the revenue, $$r$$, are related via the equation $$c = ar + b$$. By how much is the cost increased if the revenue is increased by $10?
(1) $$a=0.5$$
(2) $$b=5$$

Hello,
Is there any simpler approach to this question? It would be of great help in understanding. Thank you.

Intern
Joined: 02 Jul 2018
Posts: 4
Re: M60-03
29 Sep 2018, 17:50
Given equation: C1 = ar + b.
When revenue increases by $10: C2 = a(r+10) + b = ar + 10a + b.
We need the cost increase: C2 − C1 = 10a. Statement (1) gives exactly that.

Intern
Joined: 08 Nov 2018
Posts: 3
Concentration: Accounting, Finance
GMAT 1: 380 Q32 V12
Re M60-03
13 Nov 2018, 01:24
I think the explanation isn't clear enough, please elaborate.

Intern
Joined: 15 Jan 2019
Posts: 3
M60-03
15 Jan 2019, 11:31
Given equation: C1 = ar + b. When revenue increases by $10: C2 = a(r+10) + b = ar + 10a + b.
We need the cost increase: C2 − C1 = 10a.
Statement (1) gives the value of a, while Statement (2) gives the value of b. Since C2 − C1 = 10a, we want the value of a; Statement (1) supplies it: a = 0.5, so C2 − C1 = 0.5 × 10 = 5. Statement (2), which gives only b, is of no use.
Intern
Joined: 18 Nov 2014
Posts: 12
18 Jan 2019, 02:11
I think this is a high-quality question, but the solution is not clear.
|
# Math Help - [SOLVED] Euler Expression
1. ## [SOLVED] Euler Expression
Using Euler’s formula, express $\sin 2\alpha$ and $\cos 2\alpha$ in terms of $\sin\alpha$ and $\cos\alpha$.
I know that Euler's formula is :
$e^{i\alpha} = \cos\alpha + i\sin\alpha$
So I have come to the conclusion that:
$\cos 2\alpha = \mathrm{Re}[e^{2i\alpha}]$ and $\sin 2\alpha = \mathrm{Im}[e^{2i\alpha}]$
Is this correct and to answer this question do I leave it in this form?
2. Hi
You must express $\sin 2\alpha$ and $\cos 2\alpha$ in terms of $\sin\alpha$ and $\cos\alpha$
$\cos 2\alpha = \mathrm{Re}[e^{2i\alpha}]$ and $e^{2i\alpha} = \left(e^{i\alpha}\right)^2 = (\cos\alpha + i\sin\alpha)^2$
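Carrying the expansion one step further (a step the thread leaves implicit):

$(\cos\alpha + i\sin\alpha)^2 = \cos^2\alpha - \sin^2\alpha + 2i\,\sin\alpha\cos\alpha$

Matching real and imaginary parts gives $\cos 2\alpha = \cos^2\alpha - \sin^2\alpha$ and $\sin 2\alpha = 2\sin\alpha\cos\alpha$.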
3. Originally Posted by ronaldo_07
Using Euler’s formula, express $\sin 2\alpha$ and $\cos 2\alpha$ in terms of $\sin\alpha$ and $\cos\alpha$.
I know that Euler's formula is :
$e^{i\alpha} = \cos\alpha + i\sin\alpha$
So I have come to the conclusion that:
$\cos 2\alpha = \mathrm{Re}[e^{2i\alpha}]$ and $\sin 2\alpha = \mathrm{Im}[e^{2i\alpha}]$
Is this correct and to answer this question do I leave it in this form?
I think you need to use these Euler formulas: $\sin \alpha = \frac{e^{i\alpha} - e^{-i\alpha}}{2i}$ and $\cos \alpha = \frac{e^{i\alpha} + e^{-i\alpha}}{2}$.
|
# Sommerfeld results & van Hove singularities
According to Sommerfeld, the derivative of the density of states $g'(\varepsilon)$ appears in several thermodynamic quantities. Will this also be the case if one uses the correct dispersion relation of a crystal? If so, don't we encounter a divergence in these quantities when we pass a van Hove singularity (where $g'(\varepsilon)\rightarrow \infty$)?
For the logarithmic and weaker singularities that van Hove singularities give rise to in two and three dimensional crystals, $g'(\epsilon)$ will be discontinuous at the critical frequency of any van Hove singularity, so yes, you are quite right. Which quantities did you have in mind? – Chay Paterson Jun 27 '13 at 18:14
|
## Iterating the logistic map: limsup of nonperiodic orbits
Last time we found that when a sequence with ${x_1\in (0,1)}$ and ${x_{n+1} = 4x_n(1-x_n)}$ does not become periodic, its upper limit ${\limsup x_n}$ must be at least ${\approx 0.925}$. This time we’ll see that ${\limsup x_n}$ can be as low as ${(2+\sqrt{3})/4\approx 0.933}$ and determine for which ${x_1}$ it is equal to 1.
The quadratic polynomial ${f(x)=4x(1-x)}$ maps the interval ${[0,1]}$ onto itself. Since the linear function ${g(x) = 1-2x}$ maps ${[0,1]}$ onto ${[-1,1]}$, it follows that the composition ${h=g\circ f\circ g^{-1}}$ maps ${[-1,1]}$ onto ${[-1,1]}$. This composition is easy to compute: ${h(x) = 2x^2-1 }$.
We want to know whether the iteration of ${f}$, starting from ${x_1}$, produces numbers arbitrarily close to ${1}$. Since ${f\circ f \circ \cdots \circ f = g^{-1}\circ h \circ h \circ \cdots \circ h\circ g}$ the goal is equivalent to finding whether the iteration of ${h}$, starting from ${g(x_1)}$, produces numbers arbitrarily close to ${g(1) = -1}$. To shorten formulas, let’s write ${h_n}$ for the ${n}$th iterate of ${h}$, for example, ${h_3 = h\circ h\circ h}$.
So far we traded one quadratic polynomial ${f}$ for another, ${h}$. But ${h}$ satisfies a nice identity: ${h(\cos t)=2\cos^2 t-1 = \cos(2t)}$, hence ${h_n(\cos t) = \cos (2^n t)}$ for all ${n\in\mathbb N}$. It’s convenient to introduce ${\alpha = \frac{1}{\pi}\cos^{-1}(1-2x_1)}$, so that ${ h_n(g(x_1)) = h_n(\cos 2\pi \alpha ) = \cos(2^n\cdot 2\pi \alpha) }$.
The problem becomes to determine whether the numbers ${2^n\cdot 2\pi \alpha}$ come arbitrarily close to ${\pi}$, modulo an integer multiple of ${2\pi}$. Dividing by ${2\pi}$ rephrases this as: does the fractional part of ${2^n \alpha}$ come arbitrarily close to ${1/2}$?
A number that is close to ${1/2}$ has the binary expansion beginning either with ${0.01111111\dots}$ or with ${0.10000000\dots}$. Since the binary expansion of ${2^n\alpha}$ is just the binary expansion of ${\alpha}$ shifted ${n}$ digits to the left, the property ${\limsup x_n=1}$ is equivalent to the following: for every ${k\in\mathbb N}$ the binary expansion of ${\alpha}$ has infinitely many groups of the form “1 followed by k zeros” or “0 followed by k ones”.
A periodic expansion cannot have the above property; thus, ${\alpha}$ must be irrational. The property described above can then be simplified to “irrational and has arbitrarily long runs of the same digit”, since a long run of ${0}$s will be preceded by a ${1}$, and vice versa.
For example, combining the pairs 01 and 10 in some non-periodic way, we get an irrational number ${\alpha}$ such that the fractional part of ${2^n\alpha}$ does not get any closer to 1/2 than ${0.01\overline{10}_2 = 5/12}$ or ${0.10\overline{01}_2 = 7/12}$. Hence, ${\cos 2^n 2\pi \alpha \ge -\sqrt{3}/2}$, which leads to the upper bound ${x_n\le (2+\sqrt{3})/4\approx 0.933}$ for the sequence with the starting value ${x_1=(1-\cos\pi\alpha)/2}$.
Let us summarize the above observations about ${\limsup x_n}$.
Theorem: ${\limsup x_n=1}$ if and only if (A) the number ${\alpha = \frac{1}{\pi}\cos^{-1}(1-2x_1)}$ is irrational, and (B) the binary expansion of ${\alpha}$ has arbitrarily long runs of the same digit.
Intuitively, one expects that a number that satisfies (A) will also satisfy (B) unless it was constructed specifically to fail (B). But to verify that (B) holds for a given number is not an easy task.
As a bonus, let’s prove that for every rational number ${y\in (-1,1)}$, except 0, 1/2 and -1/2, the number ${\alpha = \frac{1}{\pi}\cos^{-1}y}$ is irrational. This will imply, in particular, that ${x_1=1/3}$ yields a non-periodic sequence. The proof follows a post by Robert Israel and requires a lemma (which could be replaced with an appeal to Chebyshev polynomials, but the lemma keeps things self-contained).
Lemma. For every ${n\in \mathbb N}$ there exists a monic polynomial ${P_n}$ with integer coefficients such that ${P_n(2 \cos t) = 2\cos nt }$ for all ${t}$.
Proof. Induction, the base case ${n=1}$ being ${P_1(x)=x}$. Assuming the result for integers ${\le n}$, we have ${2 \cos (n+1)t = e^{i(n+1)t} + e^{-i(n+1)t} }$ ${ = (e^{int} + e^{-int})(e^{it} + e^{-it}) - (e^{i(n-1)t} + e^{-i(n-1)t}) }$ ${ = P_n(2 \cos t) (2\cos t) - P_{n-1}(2\cos t) }$
which is a monic polynomial of ${2\cos t}$. ${\Box}$
Suppose that there exists ${n}$ such that ${n\alpha \in\mathbb Z}$. Then ${2\cos(\pi n\alpha)=\pm 2}$. By the lemma, this implies ${P_n(2\cos(\pi \alpha)) =\pm 2}$, that is ${P_n(2y)=\pm 2}$. Since ${2y}$ is a rational root of a monic polynomial with integer coefficients, the Rational Root Theorem implies that it is an integer. ${\Box}$
## A limsup exercise: iterating the logistic map
Define the sequence ${\{x_n\}}$ as follows: ${x_1=1/3}$ and ${x_{n+1} = 4x_n(1-x_n)}$ for ${n=1,2,\dots}$. What can we say about its behavior as ${n\rightarrow\infty}$?
The logistic map ${f(x)=4x(1-x)}$ leaves the interval [0,1] invariant (as a set), so ${0\le x_n\le 1}$ for all ${n}$. There are two fixed points: 0 and 3/4.
Can ${x_n}$ ever be 0? If ${n}$ is the first index this happens, then ${x_{n-1}}$ must be ${1}$. Working backwards, we find ${x_{n-2}=1/2}$, and ${x_{n-3}\in \{1/2 \pm \sqrt{2}/4\}}$. But this is impossible since all elements of the sequence are rational. Similarly, if ${n}$ is the first index when ${x_n = 3/4}$, then ${x_{n-1}=1/4}$ and ${x_{n-2}\in \{1/2\pm \sqrt{3}/4\}}$, a contradiction again. Thus, the sequence never stabilizes.
If ${x_n}$ had a limit, it would have to be one of the two fixed points. But both are repelling: ${f'(x) = 4 - 8x}$, so ${|f'(0)|=4>1 }$ and ${|f'(3/4)| = 2 > 1}$. This means that a small nonzero distance to a fixed point will increase under iteration. The only way to converge to a repelling fixed point is to hit it directly, but we already know this does not happen. So the sequence ${\{x_n\}}$ does not converge.
But we can still consider its upper and lower limits. Let’s try to estimate ${S = \limsup x_n}$ from below. Since ${f(x)\ge x}$ for ${x\in [0,3/4]}$, the sequence ${\{x_n\}}$ increases as long as ${x_n\le 3/4}$. Since we know it doesn’t have a limit, it must eventually break this pattern, and therefore exceed 3/4. Thus, ${S\ge 3/4}$.
This can be improved. The second iterate ${f_2(x)=f(f(x))}$ satisfies ${f_2(x)\ge x}$ for ${x}$ between ${3/4}$ and ${a = (5+\sqrt{5})/8 \approx 0.9}$. So, once ${x_n>3/4}$ (which, by above, happens infinitely often), the subsequence ${x_n, x_{n+2}, x_{n+4},\dots}$ increases until it reaches ${a}$. Hence ${S\ge a}$.
The bound ${\limsup x_n\ge a}$ is best possible if the only information about ${x_1}$ is that the sequence ${x_n}$ does not converge. Indeed, ${a}$ is a periodic point of ${f}$, with the corresponding iteration sequence ${\{(5+ (-1)^n\sqrt{5})/8\}}$.
Further improvement is possible if we recall that our sequence is rational and hence cannot hit ${a}$ exactly. By doubling the number of iterations (so that the iterate still fixes ${a}$ but now has positive derivative there) we arrive at the fourth iterate ${f_4}$. Then ${f_4(x)\ge x}$ for ${a\le x\le b}$, where ${b }$ is the next root of ${f_4(x)-x}$ after ${a}$, approximately ${0.925}$. Hence ${S\ge b}$.
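A quick numerical cross-check of ${a}$ and ${b}$ (my own sketch, not part of the original post):

import numpy as np

def f(x):
    return 4 * x * (1 - x)

def f4(x):
    # fourth iterate of the logistic map
    for _ in range(4):
        x = f(x)
    return x

a = (5 + 5**0.5) / 8                              # period-2 point, ~0.9045
xs = np.linspace(a + 1e-9, 0.99, 1_000_000)
b = xs[np.argmax(f4(xs) - xs < 0)]                # first grid point with f4(x) < x
print(a, b)                                       # ~0.90451..., b ~ 0.925...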
This is a colorful illustration of the estimation process (made with Sage): we are covering the line ${y=x}$ with the iterates of ${f}$, so that each subsequent one rises above the line the moment the previous one falls. This improves the lower bound on ${S}$ from 0.75 to 0.9 to 0.92.
Although this process can be continued, the gains diminish so rapidly that it seems unlikely one can get to 1 in this way. In fact, one cannot because we are not using any properties of ${x_1}$ other than “the sequence ${x_n}$ is not periodic.” And it’s not true that ${\limsup x_n = 1}$ for every non-periodic orbit of ${f}$. Let’s return to this later.
## From boring to puzzling in 30 iterative steps
The function ${f(x)=2\cos x }$ may be nice and important as a part of trigonometric basis, but there is nothing exciting in its graph:
Let’s look at its iterations ${f^n=f\circ f\circ \dots \circ f}$ where ${n }$ is the number of iterations, not an exponent. Here is the graph of ${f^{14}}$:
A rectangular pattern is already visible above; further iterations only make it stronger. For example, ${f^{30} }$:
It may be impossible to see on the graph, but the rectangles are slightly apart from one another (though of course they are connected by the graph of continuous function). This is easier to see on the histogram of the values ${f^{n}(0) }$ for ${n=0,\dots, 10000 }$, which contains two small gaps in addition to a large one:
What goes on here? The range of ${f}$ on ${[-2,2]}$, as well as the range of any of its iterates, is of course connected: it is the closed interval ${[f^{2}(0),f(0)] = [2 \cos 2, 2]}$. But the second iterate ${f^2=f\circ f}$ also has two invariant subintervals, marked here by horizontal lines:
Namely, they are ${I_1=[f^{2}(0), f^{4}(0)]}$ and ${I_2=[f^{3}(0),2]}$. It is easy to see that ${f(I_1)=I_2}$ and ${f(I_2)=I_1}$. The gap between ${I_1}$ and ${I_2}$ contains the repelling fixed point of ${f}$, approximately ${x=1.03}$. Every orbit except for the fixed point itself is pushed away from this point and is eventually trapped in the cycle between ${I_1}$ and ${I_2}$.
But there is more. A closer look at the fourth iterate reveals smaller invariant subintervals of ${f^4}$. Here is what it does on ${I_2}$:
Here the gap contains a repelling fixed point of ${f^2}$, approximately ${1.8}$. The invariant subintervals of ${I_2}$ are ${I_{21}=[f^{3}(0), f^{7}(0)]}$ and ${I_{22}=[f^9(0), 2]}$. Also, ${I_1}$ contains invariant subintervals ${I_{11}=[f^{2}(0), f^{6}(0)]}$ and ${I_{12}=[f^8(0), f^4(0)]}$. These are the projections of the rectangles in the graph of ${f^{30}}$ onto the vertical axes.
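All of these endpoints are easy to compute numerically (my sketch, not from the original post) by following the orbit of 0 under ${f(x)=2\cos x}$:

import math

orbit = [0.0]
for _ in range(9):
    orbit.append(2 * math.cos(orbit[-1]))

# I1 = [f^2(0), f^4(0)], I2 = [f^3(0), 2]; the four sub-intervals above
# are bracketed by the values f^2(0), ..., f^9(0).
for k, v in enumerate(orbit):
    print(f"f^{k}(0) = {v:.4f}")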
No more such splitting occurs. The histogram of the values of iterates of ${f}$ indeed consists of four disjoint intervals. Can one get a Cantor-type set in this way, starting from some boring function?
## Fun with TI-83: billions upon billions of cosines
Okay, maybe not billions. But by taking cosines repeatedly, one can find the solution of the equation ${\cos x = x}$ with high precision in under a minute.
Step 1: Enter any number, for example 0, and press Enter.
Step 2: Enter cos(Ans) and press Enter
Step 3: Keep pushing Enter. (Unfortunately, press-and-hold-to-repeat does not work on TI-83). This will repeatedly execute the command cos(Ans).
After a few iterations, the numbers begin to settle down:
and eventually stabilize at 0.7390851332
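Off-calculator, the same experiment takes a moment; a minimal sketch:

import math

x = 0.0
for _ in range(60):
    x = math.cos(x)   # repeatedly execute cos(Ans)
print(x)              # converges to 0.7390851332...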
Explanation: the graph of cosine meets the line ${y = x}$ at one point: this is a unique fixed point of the function ${f(x)=\cos x}$.
Since the derivative ${f'(x)=-\sin x}$ at the fixed point is less than 1 in absolute value, the fixed point is attracting.
Now try the same with the equation ${10 \cos x =x}$.
This time, the numbers flat out refuse to converge:
Explanation: the graph of ${f(x)=10\cos x}$ meets the line ${y = x}$ at seven points: thus, this function has seven fixed points.
And it so happens that ${|f'(x)|>1}$ at each of those fixed points. This makes them repelling. The sequence has nowhere to converge, because every candidate for the limit pushes it away. All that’s left to it is to jump chaotically around the interval ${[-10,10]}$. Here are the first 1024 terms, plotted with OpenOffice:
Clearly, the distribution of the sequence is not uniform. I divided the interval ${[-10,10]}$ into subintervals of length ${0.05}$ and counted the number of terms falling into each.
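The same counting is easy to reproduce (my sketch, not the original OpenOffice computation):

import math
from collections import Counter

x, xs = 0.0, []
for _ in range(10_000):
    x = 10 * math.cos(x)
    xs.append(x)

# bin [-10, 10] into sub-intervals of length 0.05 and count the terms
bins = Counter(math.floor((v + 10) / 0.05) for v in xs)
print(sorted(bins.items())[:10])   # clearly non-uniform counts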
What is going on here? Stay tuned.
|
1. ## Complex-Residue Theorem
How can I calculate the residue at 0 of the next function:
$f(z)=\frac{1}{z^2 \sin z}$ ??
Hope you'll be able to help me
Thanks a lot!
2. By calculating the residue "at infinity" of $\frac{1}{f(z)}= z^2 sin(z)$.
That is, write sin(z) as a Taylor's series, then multiply by $z^2$. That, of course, will be a power series with only positive powers. Now invert the powers getting a Laurent series with only negative powers. The residue is the coefficient of the $z^{-1}$ term.
3. What HallsofIvy said. I thought I had another approach, but on reflection I don't think it would be correct.
4. Originally Posted by HallsofIvy
By calculating the residue "at infinity" of $\frac{1}{f(z)}= z^2 sin(z)$.
That is, write sin(z) as a Taylor's series, then multiply by $z^2$. That, of course, will be a power series with only positive powers. Now invert the powers getting a Laurent series with only negative powers. The residue is the coefficient of the $z^{-1}$ term.
Here is my try:
$z^2 sin(z)=z^2 (z-\frac{z^3}{3!} + \frac{z^5}{5!} -...)=$
$z^3 - \frac{z^5}{3!} +\frac{z^7}{5!} -...$
Hence:
$\frac{1}{z^2 sin(z) } = \frac{1}{z^3 - \frac{z^5}{3!} +\frac{z^7}{5!} -...} =$
$= \frac{1}{\sum_{n=3}^{\infty}\frac{z^n}{(n-2)!} }$
and this is where I got stuck in my calculation... We can't just write it with negative powers, because then the whole sum is in negative powers...
Hope you'll be able to correct my misunderstanding and help me continue...
Thanks a lot !
5. $f(z)= \frac{z}{\sin z} = a_{0} + a_{1}\cdot z + a_{2}\cdot z^{2} + \dots$ (1)
... from the well know result...
$\frac{\sin z}{z} = \sum_{n=0}^{\infty} (-1)^{n}\cdot \frac{z^{2n}}{(2n+1)!} = 1 - \frac{z^{2}}{6} + \frac{z^{4}}{120} - \dots$ (2)
The most 'spontaneous' approach is to impose...
$(a_{0} + a_{1}\cdot z + a_{2}\cdot z^{2} + \dots) \cdot (1 - \frac{z^{2}}{6} + \frac{z^{4}}{120} - \dots)= 1$ (3)
... and that permits us to compute the $a_{n}$ in recursive way...
$a_{0}=1$
$a_{1} =0$
$a_{2} = \frac{a_{0}}{6} = \frac{1}{6}$
$a_{3} =0$
$a_{4} = \frac{1}{36} - \frac{1}{120} = \frac{7}{360}$
... and so on...
Now we can write...
$\frac{z}{\sin z} = 1 + \frac{z^{2}}{6} + \frac{7}{360}\cdot z^{4} + \dots$ (4)
... from which it follows...
$\frac{1}{z^{2}\cdot \sin z} = z^{-3} + \frac{z^{-1}}{6} + \frac{7}{360}\cdot z + \dots$ (5)
... so that the residue of $\frac{1}{z^{2}\cdot \sin z}$ in $z=0$ is $\frac{1}{6}$...
Kind regards
$\chi$ $\sigma$
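As a quick cross-check of this coefficient (my own addition, not part of the thread), sympy computes the same residue:

import sympy as sp

z = sp.symbols('z')
f = 1 / (z**2 * sp.sin(z))
print(sp.residue(f, z, 0))   # -> 1/6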
6. Actually I didn't understand your last result:
$\frac{z}{\sin z} = 1 + \frac{z^{2}}{6} + \frac{7}{360}\cdot z^{4} + \dots$
It means that:
$\frac{\frac{1}{z^2}}{\sin z} = 1 + \frac{(\frac{1}{z^2})^{2}}{6} + \frac{7}{360}\left(\frac{1}{z^2}\right)^{4} + \dots$
Which means the residue is exactly 0, isn't it?
Thanks!
7. Originally Posted by WannaBe
Actually I didn't understand your last result:
$\frac{z}{\sin z} = 1 + \frac{z^{2}}{6} + \frac{7}{360}\cdot z^{4} + \dots$
It means that:
$\frac{\frac{1}{z^2}}{\sin z} = 1 + \frac{(\frac{1}{z^2})^{2}}{6} + \frac{7}{360}\left(\frac{1}{z^2}\right)^{4} + \dots$
Which means the residue is exactly 0, isn't it?
Thanks!
If You start from $f(z) = \frac{z}{\sin z}$ , then is...
$\frac{1}{z^{2}\cdot \sin z} = \frac{f(z)}{z^{3}}$ (1)
Your Laurent expression is interesting, but the correct expression is...
$\frac{1}{z^{2}\cdot \sin \frac{1}{z^{2}}} = 1 + \frac{1}{6}\cdot z^{-4} + \frac{7}{360}\cdot z^{-8} + \dots$ (2)
The function represented in (2) has in $z=0$ an essential singularity, that is not the case for $\frac{1}{z^{2}\cdot \sin z}$...
Kind regards
$\chi$ $\sigma$
8. Thanks... Sorry for my mistake... Completely understandable!
|
Just because you can serve Forms 2.0 JavaScript enhancements from a Rich Text area on the form — I showed how to do that in an earlier post — doesn’t mean you should!
If you use this method, you need to be sure — much more sure than usual — that your code doesn’t bug out. If you tend to code optimistically, instead of defensively, you shouldn’t do it in a Rich Text.
Let’s say this Forms 2.0 JS had been running on the page itself:
<script>
MktoForms2.whenReady(function (form) {
  document.querySelector("#gem-details").open = true;
});
</script>
This particular code finds the <details> element with a known id, then reveals it as the form header. But the code could be doing anything — I just took a simple example from a recent Community thread.
The expected final output is like this:
But even if there isn’t a <details> with that id on the page, the form still renders and works, just without its header text (“Fill out the form below.”) That one little script errors out, since querySelector() returns null (which you’d see in the F12 console) but there aren’t any other ill effects.
Now you think to yourself, “It’s just a couple of lines, that’s annoying to remember to put on the page. Besides, it depends on the form being there, so why don’t I put it in the form itself?”
So you move that same code[1] into a Rich Text area on the form:
(You don’t have to listen for whenReady anymore, because the form must be present if you’re, uh, inside the form at the time.)
So that seems simpler. And yep, it works fine, if the <details> is there.
But suppose somebody on the web team changed the id of the <details> without considering the consequences:
<details id="jewel-details"><summary />Fill out the form below</details>
Now your form is gonna look like this:
To put it mildly, that form ain’t showing up. Let’s trace why.
First, check the F12 Console and you’ll see the direct cause. document.querySelector() returns null, so you obviously can’t set the open property on it:
Wait a minute, though. Didn’t I say in my last post that independent <script> tags never affect each other? Shouldn’t the <script> inside the Rich Text have its own Execution Context, so even if it errors out, it doesn’t cause catastrophic side effects?
Hold that thought. ☺
## Surprises herein
Let’s inspect the HTML:
OK, that’s kind of strange, right? We see the <script> in a Rich Text container (mktoHtmlText), but it’s inside another <form> element at the base of the page. Not the <form> element we placed in the body.
And that secondary <form> is actually thrown offscreen (via position: absolute, top, and left styles), while the standard <form> remains empty.
But here’s the thing: that other <form> is always there, even on a good day! You just didn’t notice it unless you needed to do deep DOM inspection. Here’s how the form(s) look normally:
## The secondary <form> is for dynamically calculating static dimensions (if that’s not too confusing)
In normal operation, the main <form>, obviously, is where the visible elements are rendered.
The secondary <form> is empty 99.9999% of the time, because it’s only used to measure the dimensions of prospective elements before injecting them into the visible area.
As you probably know (but may not have thought deeply about) Forms 2.0 mixes static height and width values with responsive CSS dimensions. In order to calculate the former, it needs to render elements offscreen and see what they’ll look like. Then it moves them into the main <form> when all the calculations are done. (This approach makes sense, by the way, if you need to know things like “What’s the widest child element?” without unbearable flicker.)
So now, at least, you know why there’s another form. But why does the measuring form stop in its tracks just because there’s a TypeError thrown by a <script> that appears in one Rich Text area? Why doesn’t the error get logged to the console in the background, while other elements continue to be measured & moved?
## Another factoid about <script> tags
It comes down to another thing you probably haven’t had a chance to learn (unlike us old-timers) about the <script> element.
Let’s see if I can keep this simple to start: code is not executed if you just insert an inline <script>, as HTML, using JavaScript.
In other words, this will insert a <script> tag, yes:
document.querySelector("#someWrapper").innerHTML = "<script>alert('Hi!');</script>";
But it’ll be a <script> in every way but the one that counts!
You can see it in F12 Dev Tools, where it looks like any other <script>. You can find it in the DOM using querySelector. It’s a member of the document.scripts collection, and it’s an instance of HTMLScriptElement. But the code doesn’t run!
Weird, eh? Well, such is browser life.
## Executing the inexecutable
So there are 2 ways around this, which JS frameworks have been choosing between since time immemorial:
1. Read the inner code of the <script> as a string and immediately eval() it.
2. Create a new script using document.createElement("script"), set its text property to the code string, and inject it into the DOM.
Marketo’s Forms 2.0 library chooses the former, which is marginally more compatible (the latter doesn’t work in IE 8, which absolutely no one cares about today but which was still a concern when Marketo first debuted).
Other than compatibility, the 2 methods are generally considered equivalent. Method 1 is a little faster, not that you’d notice as we’re talking nanoseconds.
But there’s something else, a major difference that’s usually ignored: with Method 1, the first error from eval() stops execution, while with Method 2, scripts can’t crash each other.
And that, friends, is why having bad code inside a Rich Text is far more fatal than you expect. Rather than injecting independent <script> tags, the code is eval’d inside the same loop that builds the form. So any error is a global error.
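Here, by the way, is a minimal sketch of Method 2 (my illustration, not Marketo's internals; the injected one-liner is just the earlier example): a script injected this way runs on its own, so an error inside it can't abort the caller's loop.

<script>
  var code = "document.querySelector('#gem-details').open = true;";
  var s = document.createElement("script");
  s.text = code;                 // set the code as the script's text
  document.body.appendChild(s);  // executes on insertion, independently of the caller
</script>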
## What to do?
An obvious solution is to pull your code out of the Rich Text and put it back on the page, which is to my mind more manageable and debuggable anyway (really, who wants to debug eval’d code?). You’ll still need to find errors, but they may not be fatal to the core functionality/visibility of the form.
Or you can make your code more resilient, even if you otherwise consider it non-critical. In this case, check if the element exists:
<script>
  var details = document.querySelector("#gem-details");
  if (details) {
    details.open = true;
  }
</script>
[1] Note the <![CDATA[ wrapper is not related to the problem, that’s just something Marketo does for backward compatibility.
|
Given: $\quad f(x)=x^2+3 x-4$
1.2.1 Solve for $x$ if $f(x)=0$
1.2.2 Solve for $x$ if $f(x)<0$
1.2.3 Determine the values of $x$ for which $f^{\prime}(x) \geq 0$
1.2.1 The equation $f(x)=x^2+3x-4=0$ has two solutions, which can be found using the quadratic formula:
$x = \frac{-3 \pm \sqrt{3^2-4\cdot 1\cdot (-4)}}{2\cdot 1} = \frac{-3 \pm \sqrt{9+16}}{2} = \frac{-3 \pm \sqrt{25}}{2} = \frac{-3 \pm 5}{2}$
Therefore, the solutions to the original equation are $x=\frac{-3+5}{2}=1$ and $x=\frac{-3-5}{2}=-4$.
1.2.2 The inequality $f(x)<0$ can be rewritten as $x^2+3x-4<0$, which means the value of $x^2+3x-4$ must be negative. Factoring gives $(x+4)(x-1)<0$. To determine the values of $x$ for which this holds, we can plot the graph of the function $y=x^2+3x-4$ and find the values of $x$ where the graph is below the x-axis.
[GRAPH]
From the graph, we can see that the function $y=x^2+3x-4$ is negative for the values of $x$ between the roots $-4$ and $1$. Therefore, the solution to the inequality is $-4<x<1$.
1.2.3 The derivative of the function $f(x) = x^2+3x-4$ is $f'(x) = 2x+3$. The values of $x$ for which $f'(x) \ge 0$ are the values of $x$ for which the slope of the graph of $f$ is non-negative: $2x+3 \ge 0$, i.e. $x \ge -\frac{3}{2}$.
by Bronze Status (5,549 points)
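A quick symbolic check of all three parts (my addition, not part of the original answer):

import sympy as sp

x = sp.symbols('x', real=True)
f = x**2 + 3*x - 4
print(sp.solve(sp.Eq(f, 0), x))                               # [-4, 1]
print(sp.solve_univariate_inequality(f < 0, x))               # -4 < x < 1
print(sp.solve_univariate_inequality(sp.diff(f, x) >= 0, x))  # x >= -3/2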
|
## Mean Absolute Deviation for ungrouped data
Use this calculator to find the mean absolute deviation for ungrouped (raw) data.
Mean Absolute Deviation Calculator
Enter the X values (separated by commas)
Results
Number of Obs. (n):
Sample Mean : ($\overline{x}$)
Let $x_i, i=1,2, \cdots , n$ be $n$ observations then the mean of $X$ is denoted by $\overline{x}$ and is given by \begin{aligned} \overline{x}& =\frac{1}{n}\sum_{i=1}^{n}x_i. \end{aligned}
The mean absolute deviation about mean is denoted by MAD and is given by \begin{aligned} MAD & =\frac{1}{n}\sum_{i=1}^{n}|x_i -\overline{x}| \end{aligned}
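A direct implementation of these two formulas (my sketch; the sample numbers are made up):

def mean_absolute_deviation(xs):
    n = len(xs)
    xbar = sum(xs) / n                        # sample mean
    return sum(abs(x - xbar) for x in xs) / n

print(mean_absolute_deviation([10, 12, 15, 18, 20]))  # 3.2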
|
## Friday, July 06, 2012
### Higgs bets: I won $500, Gordy Kane won$100 from Stephen Hawking
Update: I just received my $500. Thanks to the trustworthy losing party! ;-) Now I must tidy up my living room. I foolishly made a bet that they would produce *a* Higgs boson but I forgot that they could produce many of them... If you didn't watch the July 4th, 2012 Higgs talks, here they are, recorded to be replayed. Just hours after yesterday's discovery of the Higgs boson at CERN, my counterparty contacted me by e-mail, conceded defeat in our 2007 bet, and requested the relevant contacts or banking numbers to pay $500.
So I sent the data and I haven't heard from him again after that so far, but I hope it will get fixed because the story about the bet is already a topic in the Czech media. ;-) (The interview will be visible to the public on Monday, to avoid low traffic in the two post-Independence-Day Czech national holidays.)
In that interview, I sketch the situation with the Higgs, research in HEP physics, its meaning, expected Nobel prize winners, emerging anomalies etc. I also say that a part of the $500 will be used to sustain the $100 funds needed for the case that I lose the SUSY bet against Adam Falkowski of Resonaances. A more interesting outcome would be if the LHC does find SUSY; in that case, Adam Falkowski will owe me $10,000: a more explosive bet, you know. ;-) But there was an even more interesting bet. I knew that Stephen Hawking made a bet against the Higgs boson (what was he thinking?) but I didn't know – or I forgot – who the other party was. AFP and hundreds of other media outlets demystified this mystery: it was none other than TRF guest blogger Gordon Kane. Congratulations, Prof Kane! (See dozens of articles with his name on TRF.) Sadly for Gordon Kane, his counterparty couldn't afford more than those $100 – he only receives occasional money from professorships, tens of millions of sold copies of books, and similar modest sources. Moreover, Stephen Hawking had to buy lots of things, such as an encyclopedia for John Preskill, for his previous lost bets, such as one in which he argued that the information really gets lost in black holes. Needless to say, the scalp of a very famous physicist is probably vastly more valuable than the actual bounty in this case – even though Gordon Kane isn't the exclusive owner of this scalp anymore. ;-)
Has Martinus Veltman lost a bet? Given his anti-Higgs rhetoric in the past, he would deserve to pay millions. According to his latest talk in Lindau, he reconciled himself with the existence of the Higgs but he says it's a bad news because it's "closing the door". Go to 30:00 of the video or so.
1. Dear Lubos,
maybe your unknown counter party was too hasty;-).
The production rate for gamma pairs has been from the beginning twice the predicted one in both ATLAS and CMS so that the interpretation as statistical fluke does not seem plausible anymore. If this continues to be the case, the interpretation as standard model Higgs does not make sense. The interpretation as Higgs like SUSY particle is even more suspicious since we all are well aware of the sad fate of the standard SUSY.
"Higgs" is not a synonym for a "new particle" as I have been forced repeatedly to emphasize.
2. John F. Hultquist, Jul 5, 2012, 7:50:00 AM
$500 !! Party time.
3. Dear Matti, I personally guess it's more likely that the diphoton excess will go away - it's just 2 sigma now or so - but even if it won't, it wouldn't impact our bet. I didn't make a bet that one would prove that the Standard Model is the exact theory. Indeed, everyone knows that as a supersymmetry champion, I don't really find it too likely. Instead, the key claim by my other party was that "no Higgs particle would be found by 2015" and the status of this claim has been decided. It is "A Higgs particle has already been discovered."
4. Félicitations Lubos ! Both for the bet and the article in the Czech newspaper (you're a star, kinda ;-)
5. Thanks, John! ;-)
6. Merci, Shannon. A ministar. ;-)
7. "Nature acts, Men argue" --Voltaire. I like the comment by Dr Alex Filippenko/Berkeley (supernova astronomer, collaborator with Saul Perlmutter), sitting on Mauna Kea: "It's like the Universe has a consciousness [humans as end product of Big Bang & Evolution]"
8. Dear Lumo, this was a very nice interview you gave, I like it a lot :-) It is very cool that Gordy Kane won a bet against Stephen Hawking, LOL :-D
9. Congratulations - good news, but no real surprise, apart for J. Ramsden and Prof. Strassler! Convincing evidence from the di-photon channel, but as far as I understand rather quiet along the other expected decay pathways - is that correct? All the best, Zbynek
10. Dear Zbyněk, right, no surprise for most people who were "in". And not really. The diphoton channel seems amplified but all the other channels seem to be pretty much exactly matching the expectations. They weren't expected to yield strong significant signals at this point and they didn't. The ditau channel may seem suppressed but this is even less significant than the diphoton excess. You know, if you compare 5 or more channels, it's more likely than not, due to basic laws of statistics, that at least one of the channels will deviate by more than 1 sigma from expectations, and not much more than that is seen, except for the 2-sigma-or-so excess in the diphoton. The deviations from the SM generally shrunk since Dec 2011, especially for ATLAS.
11. Congratulations, Lubos! So great to read this news and have the world excited about nature, instead of the silly rants about so many far less significant and interesting topics that the world obsesses over. Thanks for all your fantastic coverage on Higgs and related subjects.
12. Cool to see you here, Ann, and thanks for your happy words. ;-)
13. What would be problematic if the Higgs boson were a spin-2 particle? After all, this could maybe "unify" gravity (seen as an interaction between particles by the intermediary of gravitons) and the origin of masses (interaction of particles with some bosonic field, here a spin-2 one, which has a non-null VEV).
14. Dear Trimok, if something has spin 2 and if it mediates gravity, then it is a graviton, not a Higgs boson. It also follows that its (graviton's) interactions with individual elementary particles are so weak that they're undetectable. Just calculate the probability that a Z-boson (produced in decay products of the Higgs at CERN) emits strong enough gravitational waves so they are absorbed and influence another Z-boson. The chance is of course zero for all practical purposes. We can't ever detect an ordinary graviton at colliders. (We could detect gravitons moving in extra dimensions if those were large or warped enough.) Gravitons are quanta of the gravitational fields or waves, and those are about the spacetime geometry. So your question is like asking: would it be problematic if a banana had 2 wings and if it were a mosquito? It could also explain insects and unify zoology and botany. Well, mosquitos and bananas are completely different things, much like Higgses and gravitons.
15. How does the Higgs field give black holes mass?
16. Lubos, you should invest the $500 exclusively for SnorgTees via your website! Congrats from Texas. Rob M
17. It doesn't. It only gives masses to some elementary particles - charged leptons, quarks, W-bosons, Z-bosons, and itself. Messages about its being the Godly master of the whole Universe are exaggerated.
At this level, it's really a particle just like any other that interacts with some other particles but not all of them and that becomes equally irrelevant for objects such as black holes which are purely gravitational objects, curvature in spacetime geometry.
Even protons and neutrons get most of their mass from effects that have nothing to do with the Higgs. Also, superpartners if they existed would have most of their mass from non-Higgs sources, from SUSY breaking, not electroweak symmetry breaking. Only the latter is due to the Higgs field. See
http://physics.stackexchange.com/questions/31343/does-dark-matter-interact-with-higgs-field/31345#31345
18. Do we know if the measured value of the Higgs boson leads to a stable or an unstable (or metastable) vacuum, or does the value fall within the "uncertain" range?
(I was thinking of the interesting paper by Sidney Coleman and Frank De Luccia--http://prd.aps.org/abstract/PRD/v21/i12/p3305_1 (which, of course, does not mention the Higgs :))
19. Dear Gordon, there are various uncertainties - LHC measured, ATLAS/CMS difference, theoretical errors in the calculations, errors in the other measured quantities that the theoretical calculation depends upon etc. - and the ranges differ from paper to another paper.
Most of the oldest papers would put 125 GeV into the straight "unstable" interval. Some of them would have it as intermediate. The newest paper, as far as I know, puts 125, or at least 126, GeV in the "metastable" region. The value of the Higgs field is confined in a shallow local minimum, but there's a deeper minimum very far away. However, one can't get there without tunneling, and the tunneling rate is such that the lifetime of the Universe is longer than the observed age of the Universe.
But if that's true, the Universe could disappear at any moment because the Higgs suddenly jumps to the true minimum. Well, it would jump at one point of space first, and this "disease" - Higgs' love for the new, totally different value of the field - would spread to the rest of space at (almost) the speed of light.
21. We must first name the problem; it's the global starving of the Higgs boson. To save the world, we have to make it heavier. The Higgs boson has been starving because the colliders were producing and distracting W-bosons and Z-bosons so that the Higgs boson couldn't eat them. I am afraid we must probably stop colliders that distract the gauge bosons. ;-)
22. Lubos,
Would you be able to comment on this paper?
http://www.worldnpa.org/pdf/abstracts/abstracts_153.pdf
It seems that Coulomb's law does not stand.
What experiment can prove Coulomb's law?
Thanks,
Jano
23. Nice picture ;-) That's exactly how I see the Higgs field.
Apologies, Jano, it doesn't look like a serious paper if it assaults Coulomb's law itself and makes unsubstantiated links to gravity, and I am busy.
The experiments proving Coulomb's law were first performed in 1785 by ... surprise ... Charles Augustin de Coulomb and it involves torsion balances. It's been tested heavily after him, too. It works.
http://en.wikipedia.org/wiki/Coulomb%27s_law
25. Dear Shannon, it's Higgs' field in Edinburgh after he built a house via a mortgage on it and after he returned from Scandinavia to speed up the repayments. ;-)
26. Boss, how about letting me in on a cut of the SUSY bet? Put me in for 20 bucks worth.
27. Thanks Lubos.
Does the Higgs field penetrate the event horizon?
Would objects falling into the fossil/static gravity field still have mass once they cross the EH or would they become massless during the time they spend falling to the singularity?
Once the Higgs field gives a particle mass can it be taken away?
28. Dear Jitter, the Higgs field is omnipresent, so in our Universe it is a property of space, much like the space's geometry itself, and it can't be taken away. It can't be taken away "after" it gives someone mass - it always does. It can't be made to disappear beneath the event horizon, either: nothing macroscopically detectable occurs at the event horizon at all. The black hole interior is a region of space just like any other, so the same physical processes occur there, including the existence of the Higgs field and the Higgs mechanism that affects the masses of particles.
|
#### Vim
Maintainer(s):
Software Author(s):
• Bram Moolenaar
• Vim Community
### Some Checks Have Failed or Are Not Yet Complete
Not All Tests Have Passed
Validation Testing Passed
Verification Testing Failed
Scan Testing Pending
This package was rejected on 02 Nov 2022. The reviewer chocolatey-ops has listed the following reason(s):
#### chocolatey-community (maintainer) on 28 Sep 2022 12:15:48 +00:00:
User 'chocolatey-community' (maintainer) submitted package.
#### chocolatey-ops (reviewer) on 28 Sep 2022 12:52:29 +00:00:
vim has passed automated validation. It may have or may still fail other checks like testing (verification).
NOTE: No required changes that the validator checks have been flagged! It is appreciated if you fix other items, but only Requirements will hold up a package version from approval. A human review could still turn up issues a computer may not easily find.
##### Guidelines
Guidelines are strong suggestions that improve the quality of a package version. These are considered something to fix for next time to increase the quality of the package. Over time Guidelines can become Requirements. A package version can be approved without addressing Guideline comments but will reduce the quality of the package.
• There are more than 3 automation scripts in this package. This is not recommended as it increases the complexity of the package. More...
##### Notes
Notes typically flag things for both you and the reviewer to go over. Sometimes this is the use of things that may or may not be necessary given the constraints of what you are trying to do and/or are harder for automation to flag for other reasons. Items found in Notes might be Requirements depending on the context. A package version can be approved without addressing Note comments.
• Binary files (.exe, .msi, .zip) have been included. The reviewer will ensure the maintainers have distribution rights. More...
#### chocolatey-ops (reviewer) on 28 Sep 2022 13:04:15 +00:00:
vim has failed automated package testing (verification).
The package status will be changed and will be waiting on your next actions.
• NEW! We have a test environment for you to replicate the testing we do. This can be used at any time to test packages! See https://github.com/chocolatey-community/chocolatey-test-environment
• If you see the verifier needs to rerun testing against the package without resubmitting (a issue in the test results), you can do that on the package page in the review section.
• If the verifier is incompatible with the package, please log in and leave a review comment if the package needs to bypass testing (e.g. package installs specific drivers).
• Automated testing can also fail when a package is not completely silent or has pop ups (AutoHotKey can assist - a great example is the VeraCrypt package).
• A package that cannot be made completely unattended should have the notSilent tag. Note that this must be approved by moderators.
#### chocolatey-ops (reviewer) on 18 Oct 2022 12:53:46 +00:00:
We've found vim v9.0.0612 in a submitted status and waiting for your next actions. It has had no updates for 20 or more days since a reviewer has asked for corrections. Please note that if there is no response or fix of the package within 15 days of this message, this package version will automatically be closed (rejected) due to being stale.
Take action:
• Resubmit fixes for this version.
• If the package version is failing automated checks, you can self-reject the package.
If your package is failing automated testing, you can use the chocolatey test environment to manually run the verification and determine what may need to be fixed.
Note: We don't like to see packages automatically rejected. It doesn't mean that we don't value your contributions, just that we can not continue to hold packages versions in a waiting status that have possibly been abandoned. If you don't believe you will be able to fix up this version of the package within 15 days, we strongly urge you to log in to the site and respond to the review comments until you are able to.
#### chocolatey-ops (reviewer) on 02 Nov 2022 12:54:00 +00:00:
Unfortunately there has not been progress to move vim v9.0.0612 towards an approved status within 15 days after the last review message, so we need to close (reject) the package version at this time. If you want to pick this version up and move it towards approval in the future, use the contact site admins link on the package page and we can move it back into a submitted status so you can submit updates.
Status Change - Changed status of package from 'submitted' to 'rejected'.
Description
Vim is a highly configurable text editor built to enable efficient text editing. It is an improved version of the vi editor distributed with most UNIX systems.
Vim is often called a programmer's editor, and so useful for programming that many consider it an entire IDE. It's not just for programmers, though. Vim is perfect for all kinds of text editing, from composing email to editing configuration files.
## Features
• Vim: Vim terminal(CLI) application can be used from Powershell and Command Prompt.
• GVim: The GUI version of Vim provides full featured Windows GUI application experience.
• Terminal Integration: Batch files are created to provide vim, gvim, evim, view, gview, vimdiff, gvimdiff and vimtutor command on terminal use.
• Shell Integration: Vim is added in Open with ... context menu. And by default Edit with Vim context menu is created to open files whose extensions are associated with other applications.
• /InstallDir - Override the installation directory. By default, the software is installed in $ChocolateyToolsLocation; its default value is C:\tools. You can include spaces. See the example below.
• /RestartExplorer - Restart Explorer to unlock GVimExt.dll used for the Edit with Vim context menu feature.
• /NoDefaultVimrc - Don't create a default _vimrc file.
• /NoContextmenu - Don't create the Edit with Vim context menu.
• /NoDesktopShortcuts - Don't create shortcuts on the desktop.

Example: choco install vim --params "'/NoDesktopShortcuts /InstallDir:C:\path\to\your dir'"

## Notes

• This package uses the ZIP build to install, to provide installation parameters.
• All compilation of the software is automated and performed on Appveyor. The building status is open.
• This package provides an official build. The similar package vim-tux is from a well-known unofficial vim building project. Unlike vim-tux, this package can take some installation parameters.
• See https://github.com/vim/vim-win32-installer for more information.
• If the package is out of date please check Version History for the latest submitted version.

If you have a question, please ask it in Chocolatey Community Package Discussions or raise an issue on the Chocolatey Community Packages Repository if you have problems with the package. Disqus comments will generally not be responded to.

tools\chocolateybeforemodify.ps1

$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$installDir = Get-Content "$toolsDir\installDir"
$shortversion = '90'
try {
    # Are the dlls locked?
    Remove-Item "$installDir\vim\vim$shortversion\GvimExt32\gvimext.dll", "$installDir\vim\vim$shortversion\GvimExt64\gvimext.dll" -ErrorAction Stop
} catch {
    # Restart explorer to unlock the dlls
    Write-Debug 'Restarting explorer.'
    Get-Process explorer | Stop-Process -Force
}

tools\chocolateyinstall.ps1

$ErrorActionPreference = 'Stop'
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$shortversion = '90'
$pp = Get-PackageParameters
. $toolsDir\helpers.ps1
$installDir = Get-InstallDir
$packageArgs = @{
    packageName   = $env:ChocolateyPackageName
    unzipLocation = $installDir
    file          = "$toolsDir\gvim_9.0.0612_x86.zip"
    file64        = "$toolsDir\gvim_9.0.0612_x64.zip"
}
$installArgs = @{
    statement = Get-Statement
    exeToRun  = "$installDir\vim\vim$shortversion\install.exe"
}
'$installDir', ($installDir | Out-String), '$packageArgs', ($packageArgs | Out-String), '$installArgs', ($installArgs | Out-String) | ForEach-Object { Write-Debug $_ }
Install-ChocolateyZipPackage @packageArgs | Write-Debug
Set-Content -Path "$toolsDir\installDir" -Value $installDir
tools\chocolateyuninstall.ps1
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$installDir = Get-Content "$toolsDir\installDir"
$shortversion = '90'
$statement = '-nsis'
$exeToRun = "$installDir\vim\vim$shortversion\uninstall.exe"
# From vim-tux.install. Make input.
Set-Content -Path "$env:TEMP\vimuninstallinput" -Value 'y'
Start-Process -FilePath $exeToRun -ArgumentList $statement -RedirectStandardInput "$env:TEMP\vimuninstallinput" -Wait -WindowStyle Hidden
Remove-Item "$env:TEMP\vimuninstallinput"
Remove-Item "$installDir\vim" -Recurse -Force
tools\gvim_9.0.0612_x64.zip
md5: 1DC737675EE547AE5132F485280A7694 | sha1: A59AB838C7A7AE8B61942557E466CC167EBE0989 | sha256: A16F25840ED6FCF88B4CBFDDC1D48A4543A65726B46CF439A0FF6CC3BB8008BA | sha512: 85E64151106C09A7B7E4750223D361D3C6D1847F807BA6A74EEA6CEBD49AEA330C222A8C7635E05CA1BEA09579769DE92C06A45BE2D6E3A06414EAE1E68223C1
tools\gvim_9.0.0612_x86.zip
md5: 8100527A9623521994040BB1C990E683 | sha1: 994321101E58352B41109056ADC28D02F2C12D49 | sha256: 565B68D50F4A06FA4E16DE0AE66258BD202D75AB4DAE13FC452304EC8DEF502B | sha512: 8FAE7A1BA8D65F48EB345BBA1DC25295562DF7CB6BD9978DD7CF2554DB686F98C83CAADA07216B39E23A42272653263D2A60010C13F397822CA85949B8F6F756
tools\helpers.ps1
function Get-InstallDir()
{
    if ($pp['InstallDir']) {
        Write-Debug '/InstallDir found.'
        return $pp['InstallDir']
    }
    return Get-ToolsLocation
}

function Get-Statement()
{
    $options = '-create-batfiles vim gvim evim view gview vimdiff gvimdiff vimtutor -install-openwith -add-start-menu'
    $createvimrc = '-create-vimrc -vimrc-remap no -vimrc-behave default -vimrc-compat all'
    $installpopup = '-install-popup'
    $installicons = '-install-icons'
    if ($pp['RestartExplorer'] -eq 'true') {
        Write-Debug '/RestartExplorer found.'
        Get-Process explorer | Stop-Process -Force
    }
    if ($pp['NoDefaultVimrc'] -eq 'true') {
        Write-Debug '/NoDefaultVimrc found.'
        $createvimrc = ''
    }
    if ($pp['NoContextmenu'] -eq 'true') {
        $installpopup = ''
    }
    if ($pp['NoDesktopShortcuts'] -eq 'true') {
        Write-Debug '/NoDesktopShortcuts found.'
        $installicons = ''
    }
    return $options, $createvimrc, $installpopup, $installicons -join ' '
}

# Replace old ver dir with symlink
# Use mklink because New-Item -ItemType SymbolicLink doesn't work in test-env
# Use rmdir because Powershell cannot unlink directory symlink
function Create-SymbolicLink()
{
    Get-ChildItem -Path "$installDir\vim" -Exclude "vim$shortversion" -Attributes Directory+!ReparsePoint |
        ForEach-Object { Remove-Item $_ -Recurse ; New-Item -Path $_ -ItemType Directory }
    Get-ChildItem -Path "$installDir\vim" -Exclude "vim$shortversion" -Attributes Directory |
        ForEach-Object { $_.Name } |
        ForEach-Object { cmd /c rmdir "$installDir\vim\$_" ; cmd /c mklink /d "$installDir\vim\$_" "$installDir\vim\vim$shortversion" }
}
From: https://vimhelp.org/uganda.txt.html
I) There are no restrictions on distributing unmodified copies of Vim except
that they must include this license text. You can also distribute
unmodified parts of Vim, likewise unrestricted except that they must
include this license text. You are also allowed to include executables
that you made from the unmodified Vim sources, plus your own usage
examples and Vim scripts.
II) It is allowed to distribute a modified (or extended) version of Vim,
including executables and/or source code, when the following four
conditions are met:
1) This license text must be included unmodified.
2) The modified Vim must be distributed in one of the following five ways:
a) If you make changes to Vim yourself, you must clearly describe in
the distribution how to contact you. When the maintainer asks you
(in any way) for a copy of the modified Vim you distributed, you
must make your changes, including source code, available to the
maintainer without fee. The maintainer reserves the right to
include your changes in the official version of Vim. What the
maintainer will do with your changes and under what license they
will be distributed is negotiable. If there has been no negotiation
then this license, or a later version, also applies to your changes.
The current maintainer is Bram Moolenaar <[email protected]>. If this
changes it will be announced in appropriate places (most likely
vim.sf.net, www.vim.org and/or comp.editors). When it is completely
impossible to contact the maintainer, the obligation to send him
your changes ceases. Once the maintainer has confirmed that he has
received your changes they will not have to be sent again.
b) If you have received a modified Vim that was distributed as
mentioned under a) you are allowed to further distribute it
unmodified, as mentioned at I). If you make additional changes the
text under a) applies to those changes.
c) Provide all the changes, including source code, with every copy of
the modified Vim you distribute. This may be done in the form of a
context diff. You can choose what license to use for new code you
add. The changes and their license must not restrict others from
making their own changes to the official version of Vim.
d) When you have a modified Vim which includes changes as mentioned
under c), you can distribute it without the source code for the
changes if the following three conditions are met:
- The license that applies to the changes permits you to distribute
the changes to the Vim maintainer without fee or restriction, and
permits the Vim maintainer to include the changes in the official
version of Vim without fee or restriction.
- You keep the changes for at least three years after last
distributing the corresponding modified Vim. When the maintainer
or someone who you distributed the modified Vim to asks you (in
any way) for the changes within this period, you must make them
available to him.
- You clearly describe in the distribution how to contact you. This
contact information must remain valid for at least three years
after last distributing the corresponding modified Vim, or as long
as possible.
e) When the GNU General Public License (GPL) applies to the changes,
you can distribute the modified Vim under the GNU GPL version 2 or
any later version.
3) A message must be added, at least in the output of the ":version"
command and in the intro screen, such that the user of the modified Vim
is able to see that it was modified. When distributing as mentioned
under 2)e) adding the message is only required for as far as this does
not conflict with the license used for the changes.
4) The contact information as required under 2)a) and 2)d) must not be
removed or changed, except that the person himself can make
corrections.
III) If you distribute a modified version of Vim, you are encouraged to use
the Vim license for your changes and make them available to the
maintainer, including the source code. The preferred way to do this is
by e-mail or by uploading the files to a server and e-mailing the URL.
If the number of changes is small (e.g., a modified Makefile) e-mailing a
context diff will do. The e-mail address to be used is
<[email protected]>
IV) It is not allowed to remove this license from the distribution of the Vim
sources, parts of it or from a modified version. You may use this
license for previous Vim releases instead of the license that they
came with, at your option.

legal\VERIFICATION.txt
VERIFICATION
Verification is intended to assist the Chocolatey moderators and community
in verifying that this package's contents are trustworthy.
The embedded software has been downloaded from GitHub and can be verified like this:
2. You can use one of the following methods to obtain the SHA256 checksum:
- Use powershell function 'Get-FileHash'
- Use Chocolatey utility 'checksum.exe'
checksum32: 565B68D50F4A06FA4E16DE0AE66258BD202D75AB4DAE13FC452304EC8DEF502B
checksum64: A16F25840ED6FCF88B4CBFDDC1D48A4543A65726B46CF439A0FF6CC3BB8008BA
Version | Downloads | Last Updated | Status
Vim 9.0.1024 | 1720 | Wednesday, December 7, 2022 | Approved
Vim 9.0.1004 | 2769 | Monday, December 5, 2022 | Approved
Vim 9.0.1000 | 957 | Sunday, December 4, 2022 | Approved
Vim 9.0.0995 | 540 | Saturday, December 3, 2022 | Approved
Vim 9.0.0984 | 1051 | Friday, December 2, 2022 | Approved
Vim 9.0.0978 | 1697 | Thursday, December 1, 2022 | Approved
Vim 9.0.0975 | 1398 | Wednesday, November 30, 2022 | Approved
Vim 9.0.0969 | 1609 | Tuesday, November 29, 2022 | Approved
Vim 9.0.0962 | 1597 | Monday, November 28, 2022 | Approved
This package has no dependencies.
Discussion for the Vim Package
|
2 editions of Sequential tests for exponential populations and poisson processes found in the catalog.
Sequential tests for exponential populations and poisson processes
Gus W. Haggstrom
# Sequential tests for exponential populations and poisson processes
## by Gus W. Haggstrom
Written in English
Subjects:
• Harmonic functions.
Edition Notes
Bibliography: p. 13.
The Physical Object
Statement: Gus W. Haggstrom.
Series: The Rand paper series ; P-6336
Pagination: 22 p.
Number of Pages: 22
ID Numbers: Open Library OL16472094M
This video explains these two distributions and the relationship between them. Unlike traditional books presenting stochastic processes in an academic way, this book includes concrete applications that students will find interesting such as gambling, finance, physics, signal processing, statistics, fractals, and biology. Written with an important illustrated guide in the beginning, it contains many illustrations, photos and pictures, along with several website links.
Example: Let X = the amount of time (in minutes) a postal clerk spends with his or her customer. The time is known to have an exponential distribution with the average amount of time equal to four minutes. X is a continuous random variable since time is measured. It is given that μ = 4 minutes. To do any calculations, you must know m, the decay parameter: m = 1/μ.
• Sequential Testing • Pass-Fail Testing • Exponential Distribution • Weibull Distribution • Randomization of Load Cycles • Reliability Growth • Reliability Growth Process • Reliability Growth Models • Summary
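As a quick worked example in that setup (my own arithmetic, not from the book): with $\mu = 4$, the decay parameter is $m = 1/4$, so

$P(X < 5) = 1 - e^{-5m} = 1 - e^{-5/4} \approx 0.7135,$

i.e. about a 71% chance the clerk spends under five minutes with a customer.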
How to derive the property of Poisson processes that the time until the first arrival, or the time between any two arrivals, has an exponential pdf. For the Poisson, take the mean of your data. That will be the mean ($\lambda$) of the Poisson that you generate. Compare the generated values of the Poisson distribution to the values of your actual data. Usually "compare" means finding the distance between the distributions.
You might also like
Woodworking machines.
Knowledge of illness in a Sepik society
Threedimensional Planning in External Photon Radiotherapy
From the mountain
Peshwa Bajirao I & Maratha expansion
Petroleum for National Defense
Selectric composer type styles portfolio.
Black people and the media
The Legend of the Nibelungenlied
Employment and status of teachers in state schools in the European Community.
Butterworths Construction Law Manual
Dear friends at home
Middle Kingdom art in ancient Egypt, 2300-1590 B.C.
Wire industry yearbook.
Reducing residential crime and fear
### Sequential tests for exponential populations and Poisson processes by Gus W. Haggstrom
Additional Physical Format: Online version: Haggstrom, Gus W. Sequential tests for exponential populations and Poisson processes.
Santa Monica, Calif.: Rand. Sequential probability ratio tests of hypotheses about the mean of a negative exponential distribution are closely related to SPRTs of hypotheses about the parameter of a Poisson process.
In both cases exact, but computationally intractable, formulas exist for the operating characteristics and average sample size functions of the tests.
Special problems of sequential testing for compound Poisson processes have been studied by Peskir and Shiryaev () and Gapeev ().
Peskir and Shiryaev () solved the problem when the Poisson process is simple. For a compound Poisson process whose marks are exponentially distributed with mean the same as their arrival rate, Gapeev () derived, under the same Bayes risk, optimal sequential tests for two simple hypotheses. The Poisson Process. A previous post shows that a sub-family of the gamma distribution that includes the exponential distribution is derived from a Poisson process.
This post gives another discussion on the Poisson process to draw out the intimate connection between the exponential distribution and the Poisson process.
The exponential distribution is one of the most significant and widely used distributions in statistical practice. It possesses several important statistical properties, and yet exhibits great mathematical tractability. This volume provides a systematic and comprehensive synthesis of the diverse literature on the theory and applications of the exponential distribution.
Sequential testing for a simple Poisson process. Peskir and Shiryaev solved the sequential testing problem of two simple hypotheses about the unknown arrival rate $\lambda$ of a simple Poisson process $X$; namely, $\nu_0(\cdot) = \nu_1(\cdot) = \delta_{\{1\}}(\cdot)$, and the problem becomes $H_0: \lambda = \lambda_0$ and $H_1: \lambda = \lambda_1$.
Their method is different from ours. This book also discusses ergodic-type chains with finite and countable state-spaces and describes some results on birth and death processes that are of a non-ergodic type.
The final chapter deals with inference procedures for stochastic processes through sequential procedures. This book is a valuable resource for graduate students.
Special problems of sequential testing for compound Poisson processes have been studied by Peskir and Shiryaev () and Gapeev (). Peskir and Shiryaev () solved the problem when the Poisson process $X$ is simple. Equivalently, the mark distribution $\nu(\cdot)$ is known (i.e., $\nu_0(\cdot) \equiv \nu_1(\cdot)$).
Section 3 reviews sequential GLR tests and other sequential tests that have been applied to test vaccine safety. A key ingredient in our proposed GLR tests for vaccine safety, given in Section 4, is the exponential family representation of the rare event sequence under the commonly assumed model of Poisson arrivals of adverse events.
Exponential Distribution. If the Poisson distribution deals with the number of occurrences in a fixed period of time, the exponential distribution deals with the time between occurrences. The time-dependent Poisson process with the intensity function $\lambda(t) = e^{\alpha+\beta t}$ was considered systematically by Cox and Lewis (), who discussed the statistical test of the hypothesis $\beta = 0$.
In this paper, the estimation and the hypothesis testing problems of the population parameters are considered. This only accounts for situations in which you know that a poisson process is at work. But you'd need to prove the existence of the poisson distribution AND the existence of an exponential pdf to show that a poisson process is a suitable model.
Zero-inflated Poisson. One well-known zero-inflated model is Diane Lambert's zero-inflated Poisson model, which concerns a random event containing excess zero-count data in unit time.
For example, the number of insurance claims within a population for a certain type of risk would be zero-inflated by those people who have not taken out insurance against the risk and thus are unable to claim.
Simulation studies are presented in Section 5 to compare the performance of various sequential testing methods, and Section 6 gives an illustrative example.
He conducts research in sequential analysis and optimal stopping, change-point detection, Bayesian inference, and applications of statistics in epidemiology, clinical trials, semiconductor manufacturing, and other fields.
RELATIONSHIP BETWEEN POISSON PROCESS AND EXPONENTIAL PROBABILITY DISTRIBUTION. If the number of arrivals in a time interval of length t follows a Poisson process, then the corresponding interarrival time follows an exponential distribution. Conversely, if the interarrival times are independently, identically distributed random variables with an exponential probability distribution, then the number of arrivals per unit time follows a Poisson distribution.
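As a quick illustration of this relationship (a minimal NumPy sketch, not from the original text; the rate echoes the postal-clerk example above, where m = 1/μ = 0.25):

import numpy as np

rng = np.random.default_rng(0)
lam = 0.25                                          # decay parameter m = 1/mu
gaps = rng.exponential(scale=1/lam, size=200_000)   # exponential interarrival times
times = np.cumsum(gaps)                             # arrival times of the process
counts = np.bincount(times.astype(int))[:int(times[-1])]  # arrivals per unit window
print(counts.mean(), counts.var())                  # both close to lam, as for a Poisson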
Topics include: Testing for a Nonhomogeneous Poisson Process; Testing the Mean of AR Processes; Testing the Mean in Linear State-Space Models; SPRT: Local Approach; ESS Function; OC Function; Locally Most Powerful Sequential Test; Nuisance Parameters and an Invariant SPRT.
Bayesian Predictive Inference under Sequential Sampling with Selection Bias (B Nandram and D Kim); Tests For and Against Uniform Stochastic Ordering on Multinomial Parameters Based on Φ-Divergences (J Peng); Development and Management of National Health Plans: Health Economics and Statistical Perspectives (P K Sen).
|
# Solve for x in 2x + 20sqrt(x) - 42 = 0?
Apr 14, 2018
#### Explanation:
This may look complicated but can be solved like a quadratic equation if we let $u = \sqrt{x}$
$2 x + 20 \sqrt{x} - 42 = 0$
$2 {u}^{2} + 20 u - 42 = 0$
${u}^{2} + 10 u - 21 = 0$
$u = \frac{- b \pm \sqrt{{b}^{2} - 4 a c}}{2 a}$
$u = \frac{- 10 \pm \sqrt{{10}^{2} - 4 \times 1 \times - 21}}{2 \times 1}$
$u = \frac{- 10 \pm \sqrt{184}}{2}$
$u = \frac{- 10 \pm 2 \sqrt{46}}{2}$
$u = - 5 \pm \sqrt{46}$
Therefore, since $\sqrt{x} \geq 0$ we must take the positive sign:
$\sqrt{x} = \sqrt{46} - 5$
$x = \left(\sqrt{46} - 5\right)^{2} = 71 - 10\sqrt{46} \approx 3.18$
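A quick numeric check of this root (an added Python sketch, not part of the original answer):

from math import sqrt

x = (sqrt(46) - 5) ** 2           # the admissible root, since sqrt(x) >= 0
print(2 * x + 20 * sqrt(x) - 42)  # ~0.0 up to floating-point error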
|
# Facts and Fictions about Anti-deSitter Spacetimes with Local Quantum Matter
Bert Schroer
November 14, 1999
It is natural to analyse the AdS$_{d+1}$-CQFT$_{d}$ correspondence in the context of the conformal-compactification and covering formalism. In this way one obtains additional insight into Rehren's rigorous algebraic holography in connection with the degree-of-freedom issue, which in turn allows one to illustrate the subtle but important differences between the original string theory-based Maldacena conjecture and Rehren's theorem in the setting of an intrinsic, field-coordinatization-free formulation of algebraic QFT. I also discuss another more generic type of holography related to light fronts which seems to be closer to 't Hooft's original ideas on holography. This in turn is naturally connected with the generic concept of "Localization Entropy", a quantum pre-form of Bekenstein's classical black-hole surface entropy.
Keywords:
none
|
# Is there a Martin Gardner's article archive available online?
Martin Gardner was a great recreational mathematics expert and his column "Mathematical Games" is an all-time hit. But is there any archive available online consisting of his articles (preferably in PDF format)? I searched online but found only one or two articles in PDF format.
• I got a lot more by typing "martin gardner pdf" into Google. See Gardner's home site for a more structured experience martin-gardner.org – Conifold May 5 '16 at 20:33
All the "Mathematical Games" columns from Scientific American: you can get them on one CD ...about $51 from Amazon.com, cheaper from the MAA if you are a member. • Are those books contains every article he wrote for Scientific American? – Kushal Bhuyan May 5 '16 at 1:43 • Yes. He published volumes once a year with the 12 columns for that year. But if you want all the Sci. Am. columns, you can get them on CD. Answer edited to link to it. – Gerald Edgar May 5 '16 at 13:04 • Hmmm thanks for the link but in India$51 is a lot of money. Anyways I have found the cd link in library genesis though. – Kushal Bhuyan May 5 '16 at 13:17
|
## Abstract and Applied Analysis
### A New Approach for Linear Eigenvalue Problems and Nonlinear Euler Buckling Problem
#### Abstract
We propose a numerical Taylor's Decomposition method to compute approximate eigenvalues and eigenfunctions for regular Sturm-Liouville eigenvalue problem and nonlinear Euler buckling problem very accurately for relatively large step sizes. For regular Sturm-Liouville problem, the technique is illustrated with three examples and the numerical results show that the approximate eigenvalues are obtained with high-order accuracy without using any correction, and they are compared with the results of other methods. The numerical results of Euler Buckling problem are compared with theoretical aspects, and it is seen that they agree with each other.
#### Article information
Source
Abstr. Appl. Anal., Volume 2012, Special Issue (2012), Article ID 697013, 21 pages.
Dates
First available in Project Euclid: 5 April 2013
https://projecteuclid.org/euclid.aaa/1365174046
Digital Object Identifier
doi:10.1155/2012/697013
Mathematical Reviews number (MathSciNet)
MR2926914
Zentralblatt MATH identifier
1246.65130
#### Citation
Adiyaman, Meltem Evrenosoglu; Somali, Sennur. A New Approach for Linear Eigenvalue Problems and Nonlinear Euler Buckling Problem. Abstr. Appl. Anal. 2012, Special Issue (2012), Article ID 697013, 21 pages. doi:10.1155/2012/697013. https://projecteuclid.org/euclid.aaa/1365174046
|
34. If the two roots of the equation...
Question
# 34. If the two roots of the equation $(a-1)(x^{2}+x+1)^{2}-(a+1)(x^{4}+x^{2}+1)=0$ are real and distinct, then the set of all values of $a$ is: (B) $(-\infty,-2) \cup (2,\infty)$. JEE Main (Online)
JEE/Engineering Exams
Maths
Solution
$(a-1)(x^{2}+x+1)^{2}-(a+1)(x^{4}+x^{2}+1)=0$
$(a-1)\left[(x^{4}+x^{2}+1)+2x(x^{2}+x+1)\right]-(a+1)(x^{4}+x^{2}+1)=0$
$2ax(1+x+x^{2})-2(x^{4}+x^{3}+x^{2}+x^{2}+x+1)=0$
$2ax(1+x+x^{2})-2x^{2}(x^{2}+x+1)-2(x^{2}+x+1)=0$
As $x$ is real, $1+x+x^{2} \neq 0$. So $2ax-2x^{2}-2=0 \Rightarrow x=\frac{a \pm \sqrt{a^{2}-4}}{2}$.
The two roots are real and distinct, so $|a| > 2$, i.e. $a \in (2,\infty) \cup (-\infty,-2)$.
|
# How to write a matrix $\mathcal{M}$ such that $\mathcal{M} \boldsymbol{x}=\boldsymbol{\omega}\times\boldsymbol{x}$? [duplicate]
As is well known, it is possible to use the $$\nabla$$ operator as if it were a vector. Some consider it an abuse of notation, but it works well and is very useful. Well, how is it possible to consider the operator $$\boldsymbol{\omega}\times$$ as a matrix? How does one build a matrix $$\mathcal{M}$$ such that $$\boldsymbol{\omega} \times \boldsymbol{x} = \mathcal{M} \boldsymbol{x}$$?
The present answer is already correct, just let me show how to get the result. Using Einstein convention (i.e. repeated indices are summed) the product $$c = a \times b$$ can be written as
$$c_i = (a \times b)_i =\epsilon_{ijk} \, a_j \, b_k = A_{ik} b_k$$
where $$\epsilon_{ijk}$$ is the Levi-Civita symbol and the matrix $$A$$ is defined by $$A_{ik}=\epsilon_{ijk} \, a_j=-\epsilon_{iks} \, a_s$$. It is easy to see that $$A$$ is skew-symmetric, i.e. $$A_{lm} =-A_{ml}$$. This trick will be useful to study the $$SO(3)$$ group and its associated algebra $$so(3)$$.
$$[\mathbf{a}]_{\times} = \begin{bmatrix} \,\,0 & \!-a_3 & \,\,\,a_2 \\ \,\,\,a_3 & 0 & \!-a_1 \\ \!-a_2 & \,\,a_1 & \,\,0 \end{bmatrix},$$
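A minimal NumPy sketch of this construction (the names skew, omega, and x are illustrative):

import numpy as np

def skew(a):
    # Skew-symmetric matrix [a]_x such that skew(a) @ x == np.cross(a, x)
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

omega = np.array([1.0, 2.0, 3.0])
x = np.array([-0.5, 4.0, 2.5])
assert np.allclose(skew(omega) @ x, np.cross(omega, x))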
|
# Modelling volatility for higher frequency data
I’m doing some academic work on volatility forecasting. I’ve got 1-minute bar data. It is not clear to me what model is best suited for forecasting volatility when higher frequency data is available.
I understand the following families/classes of volatility models exist:
1. (G)ARCH family models
2. Stochastic Volatility
3. Implied Volatility (not applicable because I don’t have options prices)
4. Realised volatility
I was wondering: considering that I have high-frequency data, realised volatility should provide a reasonable approximation. Could I calculate the volatility using realised volatility and then use standard time-series forecasting methods to forecast this (realised-volatility) series?
Quantitative Finance Asked on November 13, 2021
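That two-step approach (daily realised variance from intraday returns, then a simple autoregression such as the HAR-RV model of Corsi) is a common baseline. A minimal sketch, assuming a pandas Series of 1-minute prices with a DatetimeIndex; all names are illustrative:

import numpy as np
import pandas as pd

def daily_realised_variance(prices):
    # Sum of squared 1-minute log returns within each day
    r = np.log(prices).diff().dropna()
    return (r ** 2).groupby(r.index.date).sum()

def har_forecast(rv):
    # HAR-RV: regress next-day RV on daily, weekly (5d), monthly (22d) averages
    df = pd.DataFrame({"d": rv,
                       "w": rv.rolling(5).mean(),
                       "m": rv.rolling(22).mean()}).dropna()
    y = df["d"].shift(-1).dropna().values
    X = np.column_stack([np.ones(len(y)), df[["d", "w", "m"]].values[:-1]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.concatenate(([1.0], df[["d", "w", "m"]].values[-1])) @ beta)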
|
# Intersection of open dense subsets in a complete metric space is nonempty?
The proof of the Baire category theorem in my book assumes that the intersection of countably many open dense subsets of a complete metric space is nonempty. Why can this be assumed? Why is it true?
It's not true if you drop "open" (think $$\Bbb Q$$ and the irrationals $$\Bbb P$$ in $$\Bbb R$$, e.g.: both are dense, yet their intersection is empty). Openness is essential: the Baire category theorem says that in a nonempty complete metric space, the intersection of countably many open dense subsets is itself dense, and in particular nonempty.
|
Electron Configuration and energy levels
1. Dec 8, 2007
Nusc
--------------------------------------------------------------------------------
1. The problem statement, all variables and given/known data
The electrons of an electronically-excited neutral magnesium atom have the configuration 1s^2 2s^2 2p^6 3p 3d. Provide spectroscopic notation for the possible quantum states of the atom as a whole.
2. Relevant equations
3. The attempt at a solution
We neglect 1s^2 2s^2 2p^6 and only focus on 3p and 3d. I don't know what to do because we have two high energy levels to consider. If there were just one, then the problem would be trivial.
2. Dec 8, 2007
Staff: Mentor
Last edited: Dec 8, 2007
3. Dec 8, 2007
Nusc
Okay so n = 3 therefore l = n-1 = 2.
Thus, when l = 2 we have d.
So d is our designator of choice.
4. Dec 8, 2007
Nusc
Was the above correct? Because if I apply the same reasoning to this following problem:
an excited e-state of an aluminium atom is labelled by the config: 1s2 2s2 2p6 3s 3p2. What are the possible e-states for an atom with this config?
Here we have a choice between 3s and 3p2. n = 3 in this case, therefore l = n-1 = 2, which would correspond with a d.
What the heck am I doing?
5. Dec 9, 2007
Kushal
I don't know if I understood your problem correctly;
Mg is 1s2 2s2 2p6 3s2
when it is excited, one electron from 3s will be promoted to an empty 3p orbital.
hence:
*Mg 1s2 2s2 2p6 3s1 3p1
6. Dec 9, 2007
Staff: Mentor
This is a good discussion of electron configuration
http://en.wikipedia.org/wiki/Electron_configuration
http://www.chemicalelements.com/show/electronconfig.html - chart
http://www.webelements.com/
Anyway, I was wondering if one has an example of what is meant by "Provide spectroscopic notation for the possible quantum states of the atom as a whole"
Mg has the ground state electron configuration of 1s2 2s2 2p6 3s2, and Al has the g.s. e-config of 1s2 2s2 2p6 3s2 3p1 as indicated by Kushal.
The problem statement for an excited Mg atom states 1s2 2s2 2p6 3p 3d, so it appears one 3s electron is excited to 3p and one to 3d.
Is the problem asking for the possible quantum states described by n, l, m (or ml), ms? Does one need to provide descriptions for the n=1 and n=2 electrons, or just the outermost (valence electrons)?
See - http://hyperphysics.phy-astr.gsu.edu/hbase/hyde.html#c3
For n, what are the possible values for l, and then for l, what are the possible values for m. Then ms has two possibilities.
l is constrained such that l = 0, 1, . . . n-1,
http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hydcol.html#c2
then m = -l, -l+1, . . . , 0, 1, . . . , l-1, l (there are 2l+1 values)
http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hydazi.html#c2
http://en.wikipedia.org/wiki/Magnetic_quantum_number
7. Dec 9, 2007
Nusc
The problem wants the configuration based on the 2S+1{L}_J labelling.
8. Dec 9, 2007
Nusc
Here's an example: What are the possible ground states of Cl+?
1s2 2s2 2p6 3s2 3p4
We need only consider the 3p4.
S = 1/2 + 1/2 + 1/2 + 1/2 : + indicates direct sum notation
So S = {0,1,2}
Since we only consider the four p electrons, l = 1.
Therefore ml = -1, 0, 1
=> At least one of the electrons must have spin opposite to the others, so at least 2 of the electrons pair up, cancelling their spins. Thus S = 1/2 + 1/2 = {0,1}
So the spin degeneracies are 2S+1 = {1,3}
Then L = 1 + 1 + 1 + 1 = {0,1,2,3,4}
|L-S| <= J <= L+S
This gives us a whole number of values using the 2S+1 {L}_J notation.
Does that help?
9. Dec 9, 2007
Nusc
I think the problem is asking for the possible quantum states described by n, l, m (or ml), ms
10. Dec 9, 2007
Staff: Mentor
That is what I'm thinking.
OK, so in the example of the excited Mg, there is 1 p and 1 d electron. Then, what are the possible states for the p and for the d electron?
11. Dec 9, 2007
Nusc
You mean with a given p state and d state you want the orbital ang mom states?
For p, l = 1,
so ml = -1, 0, 1,
and for d, l = 2,
so ml = -2, -1, 0, 1, 2.
n = 3 in this case.
Is that what you want?
12. Dec 9, 2007
Staff: Mentor
Focus on p, l =1 and d, l = 2.
J runs from |L-S| to L+S.
See this discussion of optical spectra.
13. Dec 10, 2007
nrqed
From that statement I will assume that we are doing L-S coupling (as opposed to j-j coupling).
Each electron has s=1/2. So what are the possible values of the total spin S?
One electron has l=1 (a p state) and the other electron has l=2 (a d state). What are the possible values of the total orbital angular momentum L?
Now, for each combination (S,L) you have (there will be a total of 6 combinations), figure out the possible values of J. Then, in each case, give the result in the notation $$\,^{2S+1} L_J$$. For example, if the total spin is 1, L is 2 and J is 2, you have a $$\,^3 D_2$$ state.
I hope this helps
Last edited: Dec 10, 2007
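For completeness, this enumeration can be done mechanically. A small Python sketch (the function name is illustrative) that couples S in {0,1} with L in {|l1-l2|, ..., l1+l2} and lists the resulting 2S+1{L}_J terms for the 3p 3d configuration:

def term_symbols(l1, l2):
    # LS coupling of two non-equivalent electrons (s1 = s2 = 1/2)
    letters = "SPDFGHIK"
    terms = []
    for S in (0, 1):                                # singlet and triplet
        for L in range(abs(l1 - l2), l1 + l2 + 1):  # total orbital momentum
            for J in range(abs(L - S), L + S + 1):  # |L-S| <= J <= L+S
                terms.append(f"{2*S + 1}{letters[L]}{J}")
    return terms

print(term_symbols(1, 2))  # 1P1, 1D2, 1F3, 3P0, 3P1, 3P2, 3D1, 3D2, 3D3, 3F2, 3F3, 3F4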
|
# Revision history [back]
1. No, Aruco is the best choice in OpenCV.
2. Yes, it is still important. You very much need the camera matrix and distortion.
3. You should. This is the proper way to do it. However, you simply reverse the pose to get the position and orientation of the marker relative to the camera, instead of the camera relative to the marker. The code is pretty easy.
Mat R;
Rodrigues(rvec, R);   // rotation vector -> 3x3 rotation matrix
R = R.t();            // transpose equals inverse for a rotation matrix
tvec = -R*tvec;       // invert the translation
Rodrigues(R, rvec);   // back to a rotation vector
NOTE: the detectPose functions return the transformation from board coordinates to camera coordinates. This reverses that so camera coordinates become board coordinates. I'm pretty sure what you want is already what you get. If you are defining a world system, you will find the transformation from the camera to the world system, and chain them together like any other coordinate transforms.
4. Yes, one of the corners is the origin of the marker, I'm pretty sure it's the top-left of the marker or pattern.
|
Time domain decomposition in final value optimal control of the Maxwell system
ESAIM: Control, Optimisation and Calculus of Variations, Volume 8 (2002), p. 775-799
We consider a boundary optimal control problem for the Maxwell system with a final value cost criterion. We introduce a time domain decomposition procedure for the corresponding optimality system which leads to a sequence of uncoupled optimality systems of local-in-time optimal control problems. In the limit full recovery of the coupling conditions is achieved, and, hence, the local solutions and controls converge to the global ones. The process is inherently parallel and is suitable for real-time control applications.
DOI : https://doi.org/10.1051/cocv:2002042
Classification: 65N55, 49M27, 35Q60
Keywords: Maxwell system, optimal control, domain decomposition
@article{COCV_2002__8__775_0,
author = {Lagnese, John E. and Leugering, G.},
title = {Time domain decomposition in final value optimal control of the Maxwell system},
journal = {ESAIM: Control, Optimisation and Calculus of Variations},
publisher = {EDP-Sciences},
volume = {8},
year = {2002},
pages = {775-799},
doi = {10.1051/cocv:2002042},
zbl = {1063.78029},
mrnumber = {1932973},
language = {en},
url = {http://www.numdam.org/item/COCV_2002__8__775_0}
}
Lagnese, John E.; Leugering, G. Time domain decomposition in final value optimal control of the Maxwell system. ESAIM: Control, Optimisation and Calculus of Variations, Volume 8 (2002) , pp. 775-799. doi : 10.1051/cocv:2002042. http://www.numdam.org/item/COCV_2002__8__775_0/
|
# A car travels from point A to point B. The average speed of
Manager
Joined: 20 Nov 2009
Posts: 163
A car travels from point A to point B. The average speed of [#permalink]
26 Jul 2010, 22:46
A car travels from point A to point B. The average speed of the car is 60 miles/hr and it travels the first half of the trip at a speed of 90 mi/hr. What is the speed of the car in the second half of the trip?
A. 30
B. 45
C. 60
D. 75
E. 90
Math Expert
Joined: 02 Sep 2009
Posts: 42253
26 Jul 2010, 23:08
Let the average speed for the second half of the trip be $$x$$ miles per hour.
Pick some smart number for the distance from A to B: let the distance be 180 miles, so half of the distance will be 90 miles.
$$Average \ speed=\frac{total \ distance}{total \ time}$$ --> $$60=\frac{180}{total \ time}$$;
Total time equals time for the first half plus time for the second half --> $$\frac{90}{90}+\frac{90}{x}=\frac{x+90}{x}$$;
So $$60=\frac{180}{total \ time}$$ --> $$60=\frac{180x}{x+90}$$ --> $$60x+60*90=180x$$ --> $$120x=60*90$$ --> $$x=45$$.
Manager
Joined: 11 Aug 2012
Posts: 124
Schools: HBS '16, Stanford '16
Re: A car travels from point A to point B. The average speed of [#permalink]
16 Nov 2012, 15:15
Thanks Bunuel!
An additional question: What is the logic behind picking numbers in this problem? , How do these numbers lead us to the answer?
Intern
Status: wants to beat the gmat
Joined: 18 Jul 2012
Posts: 20
Location: United States
Re: A car travels from point A to point B. The average speed of [#permalink]
18 Nov 2012, 14:34
danzig wrote:
Thanks Bunuel!
An additional question: What is the logic behind picking numbers in this problem? , How do these numbers lead us to the answer?
180 is the least common multiple of 90 mi/hr and 60 mi/hr. Picking 180 makes calculations easy when you know d = rt
Senior Manager
Status: Prevent and prepare. Not repent and repair!!
Joined: 13 Feb 2010
Posts: 250
Location: India
Concentration: Technology, General Management
GPA: 3.75
WE: Sales (Telecommunications)
Re: A car travels from point A to point B. The average speed of [#permalink]
27 Nov 2012, 03:28
How about weighted average for this problem? The average speed (60) must lie between the two half-speeds, and since the car spends more time at the slower speed, 60 lies closer to the slower speed than to 90. As 90 is 30 above the average, the slower speed must be less than 30 below it, i.e. higher than 30 but lower than 60. The only answer in that range is 45.
Intern
Joined: 17 May 2013
Posts: 47
GMAT Date: 10-23-2013
Re: A car travels from point A to point B. The average speed of [#permalink]
18 Aug 2013, 20:12
Let the distance be D. Half the distance is travelled at an avg speed of 90 m/h. Let x m/h be the avg speed for the second half of the distance.
Given: the avg speed for the entire trip is 60.
(D/2)/90 + (D/2)/x = D/60
D/180 + D/(2x) = D/60
Solving for x, we get 45m/h
Ans: B
Intern
Joined: 03 Apr 2012
Posts: 27
Re: A car travels from point A to point B. The average speed of [#permalink]
19 Aug 2013, 02:53
Average speed = 2·s1·s2/(s1 + s2) (when the distance travelled is equal).
(2 × 90 × s2)/(90 + s2) = 60
180·s2 = 60(90 + s2)
180·s2 - 60·s2 = 5400
s2 = 5400/120 = 45
Current Student
Joined: 06 Sep 2013
Posts: 1972
Concentration: Finance
Re: A car travels from point A to point B. The average speed of [#permalink]
13 Feb 2014, 07:05
1. Use 180 for each half distance, therefore total distance is 360
2. Total distance / Avg Speed = Total Time ---> 360/60 = 6 hrs
3. Time in first leg = 180/90 = 2 hrs, so time in second leg = 6 - 2 = 4 hrs
4. Avg speed in second leg = 180/4 = 45
B
Hope it helps
Cheers
J
SVP
Status: The Best Or Nothing
Joined: 27 Dec 2012
Posts: 1851
Location: India
Concentration: General Management, Technology
WE: Information Technology (Computer Software)
A car travels from point A to point B. The average speed of [#permalink]
26 Jun 2014, 03:27
Let total distance = 2x
Avg Speed = 60
For first half
Distance = x
Speed = 90
For second Half
Distance =x
Speed = s (Assume)
Setting up the equation
$$\frac{x}{90} + \frac{x}{s} = \frac{2x}{60}$$
s = 45
Manager
Joined: 24 Jun 2014
Posts: 60
Location: United States
Concentration: Marketing, General Management
GPA: 3.95
WE: Sales (Consumer Products)
Re: A car travels from point A to point B. The average speed of [#permalink]
26 Jun 2014, 03:45
Very easy: use the formula for equal distances travelled (here each half of the distance).
AVERAGE SPEED = 2AB/(A+B), where A and B are the speeds of the car over the first and second half of the distance respectively.
For three equal distances use AVG SPEED = 3ABC/(AB + BC + CA), where A, B, C are the respective speeds.
In the question A is given; find out B.
Intern
Joined: 08 Apr 2014
Posts: 2
Re: A car travels from point A to point B. The average speed of [#permalink]
06 Sep 2014, 11:49
I am wondering... so "half the trip" refers to distance or to time? If "half the trip" means half of the distance, the answer is 45. If "half the trip" means half of the total time, the answer is 30.
Manager
Joined: 14 Sep 2014
Posts: 94
WE: Engineering (Consulting)
Re: A car travels from point A to point B. The average speed of [#permalink]
17 Sep 2014, 21:11
Avg. Speed = 2S1S2/(S1+S2),
we know S1 and we know Avg. Speed
So it is very easy to calculate S2
Ans. B
Manager
Status: PLAY HARD OR GO HOME
Joined: 25 Feb 2014
Posts: 175
Location: India
Concentration: General Management, Finance
Schools: Mannheim
GMAT 1: 560 Q46 V22
GPA: 3.1
Re: A car travels from point A to point B. The average speed of [#permalink]
03 Nov 2014, 13:12
FORMULA FOR CALCULATING AVERAGE SPEED =
Average speed = $$\frac{2 * S1 * S2}{S1+ S2}$$
let speed for 2nd part be X
$$60 =\frac{2 * 90 * X}{X + 90}$$
$$X + 90 = 3X$$
$$X = 45$$
P.S. = If 3 speeds A, B, C are given (over equal distances), then the formula is $\frac{3ABC}{AB + AC + BC}$
Manager
Status: I am not a product of my circumstances. I am a product of my decisions
Joined: 20 Jan 2013
Posts: 131
Location: India
Concentration: Operations, General Management
GPA: 3.92
WE: Operations (Energy and Utilities)
Re: A car travels from point A to point B. The average speed of [#permalink]
03 Nov 2014, 23:30
vards wrote:
FORMULA FOR CALCULATING AVERAGE SPEED =
Average speed = $$\frac{2 * S1 * S2}{S1+ S2}$$
let speed for 2nd part be X
$$60 =\frac{2 * 90 * X}{X + 90}$$
$$X + 90 = 3X$$
$$X = 45$$
P.S. = If 3 speeds A, B, C are given (over equal distances), then the formula is $\frac{3ABC}{AB + AC + BC}$
I don't think this formula would work always. It worked fine here because both the distances covered were in the ratio 1:1.
Case 1 : If a body travels from Point A to Point B with a speed of S1 and back from Point B to Point A with a speed of S2, then
$$Avg, Speed = 2*S1*S2 / (S1 + S2)$$
Case 2 : If a body covers part of a journey at speed S1 and remaining part of the journey at speed S2 and the distance covered are in the ratio D1:D2, then
$$Avg. Speed = (D1 + D2)*S1*S2 / (D1*S2 + D2*S1)$$
Had the distances been covered in ratios other than 1:1, then Case 1 would not give the right answer. We would have to use Case 2 then.
Please correct me if I'm wrong.
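A quick numeric check of both cases (an added Python sketch; names are illustrative):

def avg_speed(legs):
    # legs: list of (distance, speed) pairs
    total_d = sum(d for d, s in legs)
    total_t = sum(d / s for d, s in legs)
    return total_d / total_t

print(avg_speed([(90, 90), (90, 45)]))   # equal distances at 90 and 45 -> 60.0 (Case 1)
print(avg_speed([(60, 90), (120, 45)]))  # D1:D2 = 1:2 -> 54.0, matching Case 2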
SVP
Status: The Best Or Nothing
Joined: 27 Dec 2012
Posts: 1851
Location: India
Concentration: General Management, Technology
WE: Information Technology (Computer Software)
Re: A car travels from point A to point B. The average speed of [#permalink]
03 Nov 2014, 23:38
siggarusfigs wrote:
I am wondering... so "half the trip" refers to distance or to time? If "half the trip" means half of the distance, the answer is 45. If "half the trip" means half of the total time, the answer is 30.
Half the trip refers to the distance.
Manager
Status: PLAY HARD OR GO HOME
Joined: 25 Feb 2014
Posts: 175
Location: India
Concentration: General Management, Finance
Schools: Mannheim
GMAT 1: 560 Q46 V22
GPA: 3.1
A car travels from point A to point B. The average speed of [#permalink]
04 Nov 2014, 01:05
Yes, you are absolutely correct. I couldn't write the details of the formula as comprehensively as you did. If equal distances are covered at different speeds, then the formulas for 2 speeds and 3 speeds apply as written in my post. If different distances are travelled, then Case 2 in your post applies. +1 kudos to you
Manager
Joined: 18 Aug 2014
Posts: 129
Location: Hong Kong
Schools: Mannheim
Re: A car travels from point A to point B. The average speed of [#permalink]
10 Feb 2015, 10:46
PareshGmat wrote:
$$\frac{x}{90} + \frac{x}{s} = \frac{2x}{60}$$
s = 45
Hello,
could you kindly provide the steps of the equation to the solution s = 45 ?
Manager
Joined: 17 Aug 2015
Posts: 105
Re: A car travels from point A to point B. The average speed of [#permalink]
25 Jun 2016, 21:31
Two ways to solve this problem.
Equating times: the total time taken to travel a distance d is d/60, since 60 is the average speed. For simplicity take d = 60, so each half is 30 miles and the total time is 1 hour:
30/90 + 30/v = 1 => v = 45.
Another way that is even shorter: pick a distance that is a multiple of both 60 and 90, say 180. The total time is 180/60 = 3 hours:
90/90 + 90/v = 3 => 90/v = 2 => v = 45.
|
# Closed Smooth Curve with Curvature 1 is a Circle
I'm trying to show that a closed, smooth plane curve with curvature 1 is a circle.
The Frenet equations are:
\begin{align} t' &= kn \\ n' &= -kt - \tau b \\ b' &= \tau n. \end{align}
Now, I've shown that if $\alpha(t)$ is a plane curve, then $\tau = 0$.
Thus, our equations give:
$$t'' = -t$$ and $$b'=0.$$
From the first equation we have that,
\begin{align} t'' &= -t\\ \implies \alpha'''(s)&=-\alpha'(s)\\ \implies \alpha(s) &= c_1\sin s+c_2\cos s+c_3. \end{align}
It seems like I may be on the right track... though I have not yet used the fact that $\alpha$ is a closed curve. Or is there a different approach I should take?
*Also, I have moved the other direction: I have previously shown that a circle possesses curvature $\frac{1}{r}$ where $r$ is its radius. A circle of radius one, then has curvature 1. This proof doesn't work the other direction.
*For reference, I'm almost exclusively using Do Carmo's Differential Geometry of Curves and Surfaces
Thx!
Update:
No one has answered with anything yet, but here's another idea:
Do Carmo asks that we prove the existence of an Osculating Circle. Here's an idea for that:
Let there be two points $s_0$ and $s_1$ on $\alpha(s)$. Let their corresponding normals intersect at a point which we will define as the center of a circle. Let ${s_0}$ approach ${s_1}$. Now show the limiting position of the intersection of normals becomes the center of an osculating circle (i.e., a circle with a tangent that coincides with the tangent of our curve at $s_1$) and the limiting position of the circle becomes the osculating circle. We may then prove that the radius of this circle is $\frac{1}{k(s)}$. Is this correct, and how may I approach this?
• Once you derived the ansatz for $\alpha$ you just have to compute the curvature in terms of the constants. This will give you a relation between the $c_i$. Then you should be able to directly show it is a circle by constructing a middle point and computing the distance. – Nikolas Kuhn Mar 4 '16 at 10:23
• Showing that the curvature of the curve at a point is the reciprocal of the radius of the osculating circle can be easily done with elementary co-ordinate geometry and elementary calculus. Compute the co-ordinates of the intersection of the normals at $s_0$ and $s_1$ and let $s_0$ approach $s_1$ – DanielWainfleet Mar 4 '16 at 19:15
• Are you sure that the curve is assumed plane to begin with? It seems so obvious then: The conditions $t'=n$, $n'=-t$ and $t(0)=(1,0)$, $n(0)=(0,1)$ enforce $\alpha(s)=\alpha(0)+(\sin s,\cos s)$. – Christian Blatter Mar 6 '16 at 13:15
$\alpha$ has unit speed so that $$n'=-kT=-T$$ where $T=\alpha'$. Hence $$(\alpha +n)'=T+n'=T-T =0.$$ So $\alpha +n=C$ for a constant vector $C$, and $|\alpha -C|=|n|=1$: every point of the curve lies at distance $1$ from $C$, so the closed curve is a circle of radius $1$ centered at $C$.
|
# On injective objects and existence of injective hulls in 𝑄-TOP/(𝑌, 𝜎)
Document Type : Research Paper
Authors
Department of Mathematical Sciences, Indian Institute of Technology, Banaras Hindu University, Varanasi-221005, India.
Abstract
In this paper, motivated by Cagliari and Mantovani, we have obtained a characterization of injective objects (with respect to the class of embeddings in the category Q-TOP of Q-topological spaces) in the comma category Q-TOP/(Y, σ), when (Y, σ) is a stratified Q-topological space, with the help of their T₀-reflection. Further, we have proved that for any Q-topological space (Y, σ), the existence of an injective hull of ((X, τ), f) in the comma category Q-TOP/(Y, σ) is equivalent to the existence of an injective hull of its T₀-reflection ((X̃, τ̃), f̃) in the comma category Q-TOP/(Ỹ, σ̃) (and in the comma category Q-TOP₀/(Ỹ, σ̃), where Q-TOP₀ denotes the category of T₀-Q-topological spaces).
Keywords
Main Subjects
#### References
[1] Adamek, J., Herrlich, H., and Strecker, G., “Abstract and Concrete Categories”, Wiley- Interscience, 1990.
[2] Adamek, J., Herrlich, H., Rosicky, J., and Tholen, W., Weak factorization systems and topological functors, Appl. Categ. Struct. 10 (2002), 237-249.
[3] Adamek, J., Herrlich, H., Rosicky, J., and Tholen, W., Injective hulls are not natural, Appl. Categ. Struct. 48 (2002), 379-388.
[4] Cagliari, F. and Mantovani, S., Injective topological fibre spaces, Topology Appl. 125 (2002), 525-532.
[5] Cagliari, F. and Mantovani, S., 𝑇0-reflection and injective hulls of fibre spaces, Topology Appl. 132 (2003), 129-138.
[6] Cagliari, F. and Mantovani, S., Injective hulls of 𝑇0-topological fibre spaces, Appl. Categ. Struct. 11 (2003), 377-390.
[7] Chang, C.L., Fuzzy topological spaces, J. Math. Anal. Appl. 24 (1968), 182-190.
[8] Goguen, J., The fuzzy Tychonoff theorem, J. Math. Anal. Appl. 43 (1973), 734-742.
[9] Singh, S.K. and Srivastava, A.K., A characterization of the category Q-TOP, Fuzzy Sets Syst. 227 (2013), 46-50.
[10] Singh, S.K. and Srivastava, A.K., On T0-objects in 𝑄-TOP, Ann. Fuzzy Math. Inform. 12 (2016), 597-604.
[11] Solovyov, S.A., Sobriety and spatiality in varieties of algebras, Fuzzy Sets Syst. 159 (2008), 2567-2585.
[12] Tholen, W., Exponentiable monomorphisms, Quaest. Math. 9 (1986), 443-458.
[13] Tholen, W., Essential weak factorization systems, Contrib. Gen. Algebra 13 (2001), 321-333.
[14] Wyler, O., Injective spaces and essential extensions in TOP, Gen. Topology Appl. 7 (1977), 247-249.
|
# Is the following a PRG?
Let $G: \{0, 1\}^n → \{0, 1\}^m$ be a PRG. We construct a function $G': \{0, 1\}^{m + n} → \{0, 1\}^{2m}$ defined as follows
$G'(x || y) = x || (G(y) ⊕ x)$
for all $x ∊ \{0, 1\}^m$ and $y ∊ \{0, 1\}^n$. (The symbol || denotes here the concatenation of binary strings.)
Is $G'$ a PRG?
I believe that it is not, but I could easily be wrong.
I know, given a random string s of length $2m$ (split into equal length strings $s_1$ and $s_2$), that $s_1$ would be equivalent to $x$, and that $s_2 \oplus x$ would be equivalent to $G(y)$. But since $G(y)$ is indistinguishable from $U_m$, I'm having trouble reaching the conclusion of my contradiction.
I'm still relatively new to all of this, so any help would be appreciated.
• How about $G''(xy) = xG(y)$? – fkraiem Oct 20 '17 at 2:01
• Suppose that you are given a distinguisher $A'$ for $G'$. Can you use $A'$ to build a distinguisher $A$ for $G$? – erth Oct 20 '17 at 5:57
I believe that it is not, but I could easily be wrong.
... I'm having trouble reaching the conclusion of my contradiction
The problem is you can't show that contradiction, because there isn't any. As a rule of thumb for homework questions: If it becomes clear that you can't prove your initial assumption, consider that your intuition was wrong.
Here are some pointers how you can show that $G'$ is in fact a PRG, in a proof by contradiction:
• Let's assume $G'$ is not a PRG, and that means a distinguisher $\mathcal{D}$ exists.
• The input for $\mathcal{D}$ is one element either uniform random or of the form $x||(G(y)\oplus x)$, and then it has some non-negligible advantage $\epsilon$.
• We want to build a distinguisher for $G$. So as input we get one element, which is either from a uniform random distribution or from the image of $G$.
• In general, if $a$ is uniform random, then $a \oplus b$ is also uniform random - this is true as long as $b$ is independent of $a$. This includes fixed $b$ as well as $b$ drawn independently from any distribution.
• To clarify that last sentence is of course only true if b is independent from a. – Maeher Oct 20 '17 at 19:59
• @Maeher You're right of course. I've added it to the answer. – tylo Oct 23 '17 at 9:52
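For concreteness, a minimal sketch of the reduction suggested in the pointers above (Python; the names G, D_prime, and m_bytes are illustrative stand-ins for the abstract algorithms in the proof):

import secrets

def A(w, D_prime, m_bytes):
    # Distinguisher for G: input w is either uniform or G(y) for uniform y
    x = secrets.token_bytes(m_bytes)             # uniform x
    masked = bytes(a ^ b for a, b in zip(w, x))  # w XOR x
    # If w = G(y), then x || masked is distributed exactly like G'(x || y);
    # if w is uniform, then (x, w XOR x) is uniform on 2m bits.
    return D_prime(x + masked)

Hence A inherits D_prime's advantage, contradicting the assumption that G is a PRG.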
|
# Bone Fusion in Normal and Pathological Development is Constrained by the Network Architecture of the Human Skull
## Abstract
Craniosynostosis, the premature fusion of cranial bones, affects the correct development of the skull producing morphological malformations in newborns. To assess the susceptibility of each craniofacial articulation to close prematurely, we used a network model of the skull to quantify the link reliability (an index based on stochastic block models and Bayesian inference) of each articulation. We show that, of the 93 human skull articulations at birth, the few articulations that are associated with non-syndromic craniosynostosis conditions have statistically significant lower reliability scores than the others. In a similar way, articulations that close during the normal postnatal development of the skull have also lower reliability scores than those articulations that persist through adult life. These results indicate a relationship between the architecture of the skull and the specific articulations that close during normal development as well as in pathological conditions. Our findings suggest that the topological arrangement of skull bones might act as a structural constraint, predisposing some articulations to closure, both in normal and pathological development, also affecting the long-term evolution of the skull.
## Introduction
Craniofacial articulations are sites of primary bone growth and remodeling; adequate formation and maintenance of these articulations is therefore important for a healthy development of the head and brain. The timely closure of bone articulations is a normal process that takes place during skull development. Craniosynostosis is a pathological condition with an estimated prevalence of about 5 in 10,000 live births1, in which one or more articulations between cranial bones (frontal, parietal, temporal, and occipital) close prematurely, leading to the fusion of these bones. This premature fusion of bones, if not treated surgically, can cause head malformations due to compensatory growth of other joints2, sometimes provoking severe brain damage due to an increase of intracranial pressure3. Craniosynostosis can occur in isolation, as non-syndromic craniosynostosis4, 5, or as part of a variety of congenital disorders, such as Apert and Crouzon syndromes6.
In general, it is not well understood which factors predispose some articulations but not others to close prematurely. It is known that both genetic and non-genetic factors participate in the formation and maintenance of craniofacial articulations through life. The number of genes identified to be carrying mutations associated with craniosynostosis has grown in the last two decades7; for example, more than 60 genes have been shown to carry mutations associated with craniosynostosis7: some of them show specificity for a suture in the context of a syndrome (e.g., ASXL1 and metopic suture in the Bohring-Opitz syndrome), others predispose to more than one type of craniosynostosis (e.g., FGFR2 in coronal, sagittal, and multi-suture synostoses), while most of them are not specifically associated with suture development, but to osteogenesis in general (e.g., ALX4, EFNA4, and TGFBR2). Non-genetic factors are even less specific than genetic ones and include, among many others, bio-mechanical stress, hypoxia, and use of drugs or smoking during pregnancy5, 8,9,10,11.
Here, we address the susceptibility of articulations to close from a theoretical standpoint, by modeling the skull as a network in which nodes and links formalize bones and their articulations at birth (Fig. 1). This network model is thus a mathematical representation of the entire pattern of structural relations (i.e., physical contacts or articulations) among skull bones. Anatomical network models have been used before, for example, to identify developmental constraints in skull evolution12, 13, analyze the evolution of tetrapod disparity in morphospace across phylogeny14, and model the growth of human skull bones15. A recent comparison of network models of craniosynostosis conditions showed that, despite the associated abnormal shape variation, skulls with different types of craniosynostosis share a same general pattern of network modules16.
We infer the susceptibility of craniofacial articulations to close prematurely using the reliability formalism developed for network models17. A common feature of the topology of complex networks such as the skull is that one can identify groups of nodes (bones) that have well-defined patterns of connections (i.e., craniofacial articulations or synarthrosis) with other groups of nodes17. Such formalism allows us to identify connections that are not ‘expected’ to occur in the context of the entire topology of the network. Since the network represents the actual anatomy of the skull at the newborn stage, the biological processes behind the ‘topological unexpectedness’ of some articulations can also be interpreted as the result of the developmental processes that shape the anatomy of the skull; for example, position of ossification centers, growth patterns, and/or presence of functional matrices15, 18. If the architecture of the skull is driving (or influencing) the closure of articulations, we surmise that there is a relationship between the susceptibility of a pair of bones to fuse and the ‘topological unexpectedness’ of their articulation. To quantify such ‘unexpectedness’, we use the link reliability score, that is the probability that a connection exists in the network given the observed (neonatal) topology of the skull17. A low score means that the presence of this articulation is rare, that is, not commonly expected in the given arrangement of bones (see Methods for details on how this is estimated). Importantly, the link reliability formalism has been used in other complex systems to accurately predict missing and spurious interactions in social, neural, and molecular networks17, to predict harmful interactions between pairs of drugs19, and to predict the appearance of conflicts in teams20. Here we use the reliability formalism to investigate whether the topological arrangement of bones predicts which articulations are more susceptible to close in development; in other words, we want to assess if the architecture of the skull acts as an agent that constrains the fusion of bones.
## Methods
### A network model of the skull
We built a network model of the human skull at birth based on anatomical descriptions21 and information on ossification timing and fusion events22. The nodes and links of the network model formalize the bones and articulations of the skull, respectively (Fig. 1). For simplicity, we use bone in a broad sense to refer both to bony elements (e.g., a parietal bone) and well-formed cartilaginous templates of the future bones (e.g., the ethmoid bone). Likewise, we use the term articulation to refer to the cartilaginous (synchondroses) as well as fibrous joints (sutures) of the skull. We are aware that each type of skeletal element and articulation has different biological properties, which might be hard to compare in some contexts. However, our theoretical analysis focuses on a higher level of abstraction, that of topology (i.e., the arrangement of constituent parts), aiming to extract relevant information from the topological structure of the skull alone. Thus, specific properties of nodes (e.g., cellular origins, ossification mechanisms) and of articulations (e.g., contact areas, tensile properties) have not been included in the present model (see ref. 23 for a review of examples of how anatomical network analysis abstractions have successfully been applied in different anatomical contexts).
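As a concrete illustration of this formalization (a toy sketch with a handful of vault bones, not the study's actual 32-bone model), the nodes-and-links encoding can be written down directly, for example with networkx:

```python
import networkx as nx

# Toy neonatal vault: nodes are bones, links are the sutures joining them.
skull = nx.Graph()
skull.add_edges_from([
    ("frontal_L", "frontal_R"),         # metopic suture
    ("frontal_L", "parietal_L"),        # coronal suture (left)
    ("frontal_R", "parietal_R"),        # coronal suture (right)
    ("parietal_L", "parietal_R"),       # sagittal suture
    ("parietal_L", "occipital_plate"),  # lambdoid suture (left)
    ("parietal_R", "occipital_plate"),  # lambdoid suture (right)
])
print(skull.number_of_nodes(), "bones,", skull.number_of_edges(), "articulations")
```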
### Topological Organization of the Neonatal Skull
The topological organization of the skull varies during pre- and postnatal development. We have chosen to work with the skull configuration at birth because it allows a broader comparison between closed and persistent articulations, both in normal and pathological conditions. What follows is a summary of the bones present at birth that we used to build the neonatal skull network model (for details, see refs 21 and 22).
The occipital bone at birth consists of four units: a ventral basilar part, a more dorsal occipital plate, and two lateral parts. Around the fourth year the occipital plate and the lateral parts fuse into one unit. Around the sixth year the basilar part fuses with them as well. During adulthood (about 18–25 years) the occipital bone and the sphenoid bone fuse into a single unit. The frontal bone at birth consists of two halves separated by the metopic suture. Around the eighth year the metopic suture obliterates and the two halves of the frontal fuse into one single bone (although in some individuals the suture endures and left and right frontals are present through life). The premature fusion of the metopic suture is one of the craniosynostosis conditions included in the present study (see Fig. 1). Each temporal bone at birth consists of two parts: the petromastoid and the squama (to which the tympanic ring has united shortly before birth). Around the first year the petromastoid and squama fuse into a single unit. The temporal bone has a tight relationship with two small structures: the ear ossicles (malleus, incus, and stapes) and the styloid process (tympanohyal part and stylohyal part). The former develop partially embedded within the temporal bone, while the latter fuse with it during the first years of development. For simplicity, we have decided not to consider these structures as separate nodes in the network model; instead, we include them within the temporal bone in order to focus on the main skeletal units of the skull. The sphenoid bone at birth consists of three parts: a central body (including the small wings) and two lateral parts or alisphenoids (comprising the great wings and the pterygoid processes). Around the first year the sphenoid body and the alisphenoids fuse together. As already mentioned, the sphenoid and the occipital fuse into a single unit during adulthood. The ethmoid bone is still a cartilaginous template at birth, which will later ossify endochondrally to form the ethmoid bone proper. The maxilla and premaxilla (one of each per side) at birth are still separated by a suture that can persist until well into adulthood. Each zygomatic bone consists of a single skeletal structure at birth, although it can sometimes be divided horizontally into an upper and a lower part. The vomer at birth consists of two lamellae, which fuse together at puberty (although sometimes there are traces of their paired laminar origin). Finally, the lacrimals, nasals, inferior nasal conchae, palatines, and parietals are well-formed skeletal units at birth (although the parietals and palatines will still continue growing for some time after birth). At times, the parietal bone can be divided by a longitudinal suture into an upper and a lower part (as this is a deviation from the more common pattern found in humans, we did not include this phenotype in our network model).
### Estimation of Link Type Probability Using Stochastic Block Models
Stochastic block models are good models to describe the patterns of connections in complex networks. In such models, nodes are assigned to groups and the probability of a link existing between a pair of nodes is given by a matrix that specifies the connectivity rate between nodes belonging to each pair of blocks. For a given network, good stochastic block models are those that group nodes that have a similar pattern of connections; for instance, in our case we could group together the vomer and palatine nodes since both tend to connect to similar nodes (sphenoid, ethmoid, maxilla) and to be disconnected from similar nodes (e.g., parietal, zygomatic, frontal). Within this description, links between pairs of nodes that belong to groups that are densely interconnected are more likely than links between pairs of nodes belonging to groups that are sparsely connected. For instance, in the previous example an articulation between the palatine and the maxilla is much more likely than a suture between the palatine and the parietal. Biologically, the probability that a pair of bones connect depends on the developmental processes that determine the spatial location and growth patterns (direction and speed) of each ossification center, as well as the presence of functional matrices15.
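To illustrate the idea with a hand-made toy example (the bones and groups below are a simplified subset, not the partitions actually sampled in the study), the empirical group-to-group connection rates can be tabulated directly from a partition:

```python
from itertools import combinations

# Toy articulations, stored as unordered pairs.
articulations = {
    ("vomer", "sphenoid"), ("vomer", "ethmoid"), ("vomer", "maxilla"),
    ("palatine", "sphenoid"), ("palatine", "ethmoid"), ("palatine", "maxilla"),
    ("maxilla", "zygomatic"), ("zygomatic", "frontal"), ("frontal", "parietal"),
}
def linked(u, v):
    return (u, v) in articulations or (v, u) in articulations

partition = {               # one candidate grouping P
    "vomer": "midline", "palatine": "midline",
    "maxilla": "face", "zygomatic": "face",
    "frontal": "vault", "parietal": "vault",
    "sphenoid": "base", "ethmoid": "base",
}
groups = sorted(set(partition.values()))
for a, b in combinations(groups, 2):   # cross-group rates only, for brevity
    pairs = [(u, v) for u in partition if partition[u] == a
                    for v in partition if partition[v] == b]
    n1 = sum(linked(u, v) for u, v in pairs)
    print(f"Q[{a},{b}] = {n1}/{len(pairs)}")
```

With this grouping, Q[midline,base] = 4/4 while Q[midline,vault] = 0/4, so a link between the palatine and the sphenoid is far more "expected" than one between the palatine and a parietal.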
To mathematically formalize this intuition, we compute the reliability score, that is, the probability that a link exists given the network of connections we observe (the newborn skull in our case), using stochastic block models as the basis for our inference algorithm. In practice, our algorithm samples the space of partitions of nodes into groups, taking into account how well a given partition manages to classify nodes with similar patterns of connections into the same group. For each of these partitions, each link between a pair of nodes (i, j) has a specific probability. The reliability score of link $${N}_{ij}$$ is then a weighted average of the probabilities of that link over all sampled partitions. Mathematically, we formalize the previous arguments in a Bayesian framework as follows. Given a family of models $$\varepsilon$$, the probability that $${N}_{ij}=1$$ given the observed network $${N}^{O}$$ (that is, the matrix of connections) is17
$$p({N}_{ij}=1|{N}^{O})={\int }_{\varepsilon }dM\,p({N}_{ij}=1|M)\,p(M|{N}^{O}),$$
(1)
where the integral is over all the models M in ensemble $$\varepsilon$$. We can rewrite this equation using Bayes' theorem and obtain17, 24
$$p({N}_{ij}=1|{N}^{O})=\frac{{\int }_{\varepsilon }dM\,p({N}_{ij}=1|M)\,p({N}^{O}|M)p(M)}{{\int }_{\varepsilon }dM\,p({N}^{O}|M)\,p(M)}.$$
(2)
Here, $$p({N}^{O}|M)$$ is the probability of the observed interactions given model M, and $$p(M)$$ is the a priori probability of a model, which we assume to be model-independent, $$p(M)=\mathrm{const}$$. In our approach, we assume that the family of stochastic block models is a good ensemble to describe the connectivity in a complex network (in our case, that of the human skull). Therefore, each model M = (P, Q) is completely determined by a partition P of bones into groups and the group-to-group interaction probability matrix Q. For a given partition P, the matrix element $${Q}_{\alpha \beta }$$ is the probability of an articulation joining a bone in group α with a bone in group β. Thus, if i belongs to group $${\sigma }_{i}$$ and j to group $${\sigma }_{j}$$ we have that24
$$p({N}_{ij}={\rm{1}}|M)={Q}_{{\sigma }_{i}{\sigma }_{j}};$$
(3)
and
$$p({N}^{O}|M)=\prod _{\alpha \le \beta }{Q}_{\alpha \beta }^{{n}_{\alpha \beta }^{{\rm{1}}}}{({\rm{1}}-{Q}_{\alpha \beta })}^{{n}_{\alpha \beta }^{{\rm{0}}}},$$
(4)
where $${n}_{\alpha \beta }^{1}$$ is the number of articulations between bones in groups α and β, and $${n}_{\alpha \beta }^{0}$$ is the number of disconnections between bones in groups α and β.
The integral over all models in $$\varepsilon$$ can be separated into a sum over all possible partitions of the bones into groups, and an integral over all possible values of each $${Q}_{\alpha \beta }$$. Using this together with Equations 2 to 4, and under the assumption of no prior knowledge about the models ($$p(M)=\mathrm{const}$$), we have
$$p({N}_{ij}=1|{N}^{O})=\frac{1}{Z}\sum _{P}{\int }_{{\bf{Q}}}{\bf{d}}{\bf{Q}}\,{Q}_{{\sigma }_{i}{\sigma }_{j}}\prod _{\alpha \le \beta }{Q}_{\alpha \beta }^{{n}_{\alpha \beta }^{1}}{(1-{Q}_{\alpha \beta })}^{{n}_{\alpha \beta }^{0}},$$
(5)
where $$\mathrm{dQ}={\prod }_{{\alpha }\le {\beta }}{dQ}_{{\alpha }{\beta }}$$, the integral is over all values of $${Q}_{\alpha \beta }$$, and Z is the normalizing constant (or partition function). Since the dependence on the $${Q}_{\alpha \beta }$$ factorizes, one can carry out the integral over the $${Q}_{\alpha \beta }$$ analytically. Specifically, using $${\int }_{0}^{1}dq\,{q}^{a}{({\rm{1}}-q)}^{b}=a!\,b!/(a+b+{\rm{1}})!$$ we can express Eq. 5 as
$$p({N}_{ij}={\rm{1}}|{N}^{O})=\frac{{\rm{1}}}{Z}\sum _{P}(\frac{{n}_{{\sigma }_{i}{\sigma }_{j}}^{{\rm{1}}}+{\rm{1}}}{{n}_{{\sigma }_{i}{\sigma }_{j}}+{\rm{2}}})\exp (-H(P)),$$
(6)
where the sum is over all partitions of bones into groups, $${n}_{{\sigma }_{i}{\sigma }_{j}}={n}_{{\sigma }_{i}{\sigma }_{j}}^{1}+{n}_{{\sigma }_{i}{\sigma }_{j}}^{0}$$ is the total number of possible sutures between groups $${\sigma }_{i}$$ and $${\sigma }_{j}$$, and H(P) is a function that depends on the partition only
$$H(P)=\sum _{\alpha \le \beta }\left[\mathrm{ln}({n}_{\alpha \beta }+{\rm{1}})+\,\mathrm{ln}\binom{{n}_{\alpha \beta }}{{n}_{\alpha \beta }^{{\rm{1}}}}\right].$$
(7)
This sum can be estimated using the Metropolis algorithm17, 25 as detailed next.
### Implementation Details
The sum in Equation 6 cannot be computed exactly because the number of possible partitions is combinatorially large, but it can be estimated using the Metropolis algorithm17, 25. This amounts to generating a sequence of partitions in the following way. From the current partition $${P}^{0}$$, select a random bone and move it to a random new group, giving a new partition $${P}^{1}$$. If $$H({P}^{1}) < H({P}^{0})$$, always accept the move; otherwise, accept the move only with probability $${e}^{H({P}^{0})-H({P}^{1})}$$. By doing this, one obtains a sequence of partitions $$\{{P}^{i}\}$$ such that one can approximate the sum in Equation 6 as25
$$p({N}_{ij}={\rm{1}}|{N}^{O})\approx \frac{{\rm{1}}}{S}\sum _{P\in \{{P}^{i}\}}(\frac{{n}_{{\sigma }_{i}{\sigma }_{j}}^{{\rm{1}}}+{\rm{1}}}{{n}_{{\sigma }_{i}{\sigma }_{j}}+{\rm{2}}}),$$
(8)
where S is the number of sampled partitions in {P i}.
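To make the procedure concrete, here is a minimal, illustrative Python sketch of the sampler (Python 3.8+ for math.comb). This is not the authors' published implementation (that is the rgraph code linked below); thermalization checks are omitted, the thinning interval is fixed, H(P) is naively recomputed at every step, and `adj` is assumed to be a dict mapping each bone name to the set of bones it articulates with.

```python
import math
import random

def group_pairs(part, bones, ga, gb):
    # All distinct bone pairs with one member in group ga and one in gb.
    A = [b for b in bones if part[b] == ga]
    if ga == gb:
        return [(u, v) for k, u in enumerate(A) for v in A[k + 1:]]
    B = [b for b in bones if part[b] == gb]
    return [(u, v) for u in A for v in B]

def H(part, bones, adj):
    # H(P) of Equation 7, summed over all (unordered) pairs of groups.
    labels = sorted(set(part.values()))
    h = 0.0
    for i, ga in enumerate(labels):
        for gb in labels[i:]:
            pairs = group_pairs(part, bones, ga, gb)
            n, n1 = len(pairs), sum(1 for u, v in pairs if v in adj[u])
            h += math.log(n + 1) + math.log(math.comb(n, n1))
    return h

def reliability(i, j, bones, adj, steps=20000, thin=50):
    # Estimate p(N_ij = 1 | N^O) via Equation 8 (no thermalization stage).
    part = {b: 0 for b in bones}                 # start: one big group
    h = H(part, bones, adj)
    total, samples = 0.0, 0
    for t in range(steps):
        b = random.choice(bones)
        old = part[b]
        part[b] = random.randrange(len(bones))   # propose a random new group
        h_new = H(part, bones, adj)
        if h_new < h or random.random() < math.exp(h - h_new):
            h = h_new                            # accept the move
        else:
            part[b] = old                        # reject: restore
        if t % thin == 0:                        # crude fixed thinning
            pairs = group_pairs(part, bones, part[i], part[j])
            n1 = sum(1 for u, v in pairs if v in adj[u])
            total += (n1 + 1) / (len(pairs) + 2)
            samples += 1
    return total / samples

# Toy usage: adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
# reliability("a", "c", ["a", "b", "c"], adj) -> score of the absent link a-c
```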
In practice, it is useful to “thin” the sample $$\{{P}^{i}\}$$, that is, to consider only a small fraction of evenly spaced partitions so as to avoid the computational cost of sampling very similar partitions, which provide very little additional information. Moreover, one needs to make sure that sampling starts only when the sampler is “thermalized”, that is, when sampled partitions are drawn from the desired probability distribution (which in our case is given by $${e}^{-H(P)}/Z$$). Our implementation automatically determines a reasonable thinning of the sample, and only starts sampling when certain thermalization conditions are met. Therefore, the whole process is completely unsupervised. The source code of our implementation of the algorithm is publicly available from http://seeslab.info/downloads/network-c-libraries-rgraph/ and http://github.com/seeslab/rgraph.
### Statistical Analysis
We performed independent Mann-Whitney U tests for the following comparisons: (1) articulations affected by non-syndromic craniosynostosis vs. unaffected articulations; (2) articulations normally closed in development vs. articulations that persist in the adult; and (3) articulations that close in craniosynostosis vs. articulations that close during normal development. The effect size of the difference of means between groups, in standard deviations, was estimated using Cohen’s d. The statistical analysis was performed using JASP version 0.7.5.6 (a sketch of equivalent SciPy calls follows the list below).
We tested the null hypothesis of equal distribution between groups against the corresponding alternative hypotheses that:
1. articulations affected by craniosynostosis have lower reliability scores than unaffected articulations (one-sided test);
2. articulations that close during normal development have lower reliability than those that persist in the adult (one-sided test);
3. articulations affected by craniosynostosis have different reliability scores than those that close during normal development (two-sided test).
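The following is a minimal sketch of how these comparisons can be reproduced with SciPy rather than JASP. The score arrays are placeholders with made-up values, included only to show the calls; the actual reliability scores are in the Supplementary Information.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Placeholder reliability scores -- NOT the study's data.
affected   = np.array([0.10, 0.15, 0.22, 0.30, 0.12, 0.18])
unaffected = np.array([0.35, 0.41, 0.28, 0.52, 0.47, 0.39, 0.44])

# One-sided test: are scores of affected articulations lower?
w, p = mannwhitneyu(affected, unaffected, alternative="less")

def cohens_d(a, b):
    # Difference of means expressed in pooled standard deviations.
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

print(f"W = {w}, p = {p:.3f}, d = {cohens_d(affected, unaffected):.2f}")
```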
## Results
The human skull at birth comprises 32 bones and 93 articulations, of which only a small fraction are associated with non-syndromic craniosynostosis conditions. We investigated the relationship between the link reliability score and the susceptibility of an articulation to close during normal development or due to craniosynostosis.
First, we compared the reliability score of those articulations that close during the normal development of the skull to those that persist in the adult. We find that sutures that normally close have slightly but significantly lower reliability scores than those that do not (Mann-Whitney-Wilcoxon: one-sided, W = 368, p-value = 0.047; Cohen’s d = −0.52) (Fig. 2), which is in agreement with our hypothesis that during normal development there is a tendency to close articulations that are topologically rare in the newborn skull.
Next, we compared the reliability score of articulations that close prematurely in craniosynostosis to that of articulations unaffected by this pathological condition (Fig. 2). We found that articulations associated with craniosynostosis have significantly lower reliability scores than unaffected articulations (Mann-Whitney-Wilcoxon: one-sided, W = 98, p-value = 0.006; Cohen’s d = −1.066) (Fig. 2), which shows that articulations associated with craniosynostosis are also unexpected from a topological point of view.
Interestingly, we find that while articulations that close in craniosynostosis conditions tend to have lower reliability scores than those that close during normal development, this difference is not statistically significant at a 5% significance level (Mann-Whitney-Wilcoxon: one-sided, W = 15.5, p-value = 0.087; Cohen’s d = −0.964). While this marginal significance (p-value < 0.1) might be due to the reduced statistical power of two-sample tests in small samples, our finding suggests that, although skull architecture is an important factor in the loss of sutures during both pathological and normal development, there are non-topological factors that further discriminate between normal and pathological loss of sutures. However, this result must be interpreted with caution due to the small sample size of both groups (N = 6 and N = 11, respectively); note that Cohen’s d in fact indicates a difference of means of a magnitude similar to that observed in the previous comparison (see also Fig. 2). Further details of the statistical analysis and score values are available in the Supplementary Information.
## Discussion
Our results suggest that the whole arrangement of craniofacial articulations of the skull might itself act as a structural constraint, making some articulations more susceptible to closure than others. The presence of processes acting at the level of the entire skull (e.g., via bio-mechanical signaling) and predisposing bones to premature fusion has been suggested before in the context of the functional matrix hypothesis26. In addition, we show that reliability scores can pinpoint articulations that are more susceptible to close during both normal and pathological development. A low reliability score identifies articulations that are ‘unexpected’ in the context of the network topology of the skull. Thus, we propose that the very arrangement of bones in the skull predisposes some articulations as targets of pathological conditions.
We are not yet in a position to offer a mechanistic explanation for the relationship reported here, which we believe may be related to the same developmental mechanisms that regulate the compensatory growth of bones after premature synostoses2, 27, 28. However, because articulations that close during normal development also show low reliability scores (i.e., they are unexpected to occur or persist) compared to those articulations that persist in the adult skull, our findings also suggest that such mechanisms might not differ between normal development and pathological conditions. In fact, the signaling pathways in both cases are the same, notably the FGF, TGF-β/BMP, and Wnt pathways, as well as their upstream and downstream targets. Moreover, it is known that polycystins act as mechanosensors, transducing tensile forces on the mesenchymal cells to promote osteogenesis at the cranial sutures via those pathways29. Thus, a possible explanation is that there is a link between the structural constraint caused by the network topology and the signaling pathways promoting osteogenesis via mechanosensors. For example, since tensile forces result from the mutual interaction among the growth fronts of each bone (i.e., the connection), the resulting connectivity pattern of the network must distribute these tensile forces in a very specific way, playing a critical role in whether some sutures, and not others, will close, as the mechanosensors will respond differently by activating or suppressing the osteogenic signaling pathways.
Pathological conditions of the human skull such as craniosynostosis are a medical and social problem that needs special attention from the research community. In addition, they represent medical examples of more general developmental and evolutionary processes found in all tetrapods16, 30. Both aspects, the medical and the biological, can and need to be integrated in order to reach a better understanding that could lead to improved treatments as well as further our knowledge of fundamental evolutionary questions. If, as our results suggest, the system of articulations of skull bones is able to self-regulate or to constrain the formation and maintenance of individual bone articulations, this might also have consequences at an evolutionary scale. In craniosynostosis conditions, the number of bones is reduced due to the early fusion of bones, much in the same way as the net reduction in the number of bones during vertebrate evolution12, 31, 32; as a consequence, it has been postulated that craniosynostosis could be used as an informative model for skull evolution33. Our results suggest that this is not a mere analogy, but that similar constraints would regulate the pattern of bone contacts in the skull, both in development and in evolution.
## References
1. Di Rocco, F., Arnaud, E. & Renier, D. Evolution in the frequency of nonsyndromic craniosynostosis. J. Neurosurg. Pediatr. 4, 21–25 (2009).
2. Delashaw, J. B., Persing, J. A., Broaddus, W. C. & Jane, J. A. Cranial vault growth in craniosynostosis. J. Neurosurg. 70, 159–165 (1989).
3. Inagaki, T. et al. The intracranial pressure of the patients with mild form of craniosynostosis. Childs Nerv. Syst. 23, 1455–1459 (2007).
4. Garza, R. M. & Khosla, R. K. Nonsyndromic craniosynostosis. Seminars in Plastic Surgery 26, 53 (2012).
5. Watkins, S. E., Meyer, R. E., Strauss, R. P. & Aylsworth, A. S. Classification, epidemiology, and genetics of orofacial clefts. Clin. Plast. Surg. 41, 149–163 (2014).
6. Rice, D. In Craniofacial sutures: development, disease, and treatment (ed. Rice, D.) 91–106 (Karger, 2008).
7. Twigg, S. R. & Wilkie, A. O. A genetic-pathophysiological framework for craniosynostosis. Am. J. Hum. Genet. 97, 359–377 (2015).
8. Oppenheimer, A. J., Rhee, S. T., Goldstein, S. A. & Buchman, S. R. Force-induced craniosynostosis in the murine sagittal suture. Plast. Reconstr. Surg. 124, 1840 (2009).
9. Percival, C. & Richtsmeier, J. In Epigenetics: linking genotype and phenotype in development and evolution (eds Hallgrímson, B. & Hall, B. K.) 377–397 (University Press, 2011).
10. Carmichael, S. L. et al. Craniosynostosis and maternal smoking. Birt. Defects Res. A. Clin. Mol. Teratol. 82, 78–85 (2008).
11. Shi, M., Wehby, G. L. & Murray, J. C. Review on genetic variants and maternal smoking in the etiology of oral clefts and other birth defects. Birth Defects Res. Part C Embryo Today Rev. 84, 16–29 (2008).
12. Esteve-Altava, B., Marugán-Lobón, J., Botella, H. & Rasskin-Gutman, D. Structural constraints in the evolution of the tetrapod skull complexity: Williston’s law revisited using network models. Evol. Biol. 40, 209–219 (2013).
13. Esteve-Altava, B., Marugán-Lobón, J., Botella, H. & Rasskin-Gutman, D. Random loss and selective fusion of bones originate morphological complexity trends in tetrapod skull networks. Evol. Biol. 41, 52–61 (2014).
14. Esteve-Altava, B. & Rasskin-Gutman, D. Theoretical morphology of tetrapod skull networks. Comptes Rendus Palevol 13, 41–50 (2014).
15. Esteve-Altava, B. & Rasskin-Gutman, D. Beyond the functional matrix hypothesis: a network null model of human skull growth for the formation of bone articulations. J. Anat. 225, 306–316 (2014).
16. Esteve-Altava, B. & Rasskin-Gutman, D. Evo-Devo insights from pathological networks: exploring craniosynostosis as a developmental mechanism for modularity and complexity in the human skull. J. Anthropol. Sci. 93, 1–15 (2015).
17. Guimerà, R. & Sales-Pardo, M. Missing and spurious interactions and the reconstruction of complex networks. Proc. Natl. Acad. Sci. 106, 22073–22078 (2009).
18. Lieberman, D. E. In Epigenetics: Linking Genotype and Phenotype in Development and Evolution (eds Hallgrimsson, B. & Hall, B. K.) (California University Press, 2011).
19. Guimerà, R. & Sales-Pardo, M. A network inference method for large-scale unsupervised identification of novel drug-drug interactions. PLoS Comput. Biol. 9, e1003374 (2013).
20. Rovira-Asenjo, N., Gumí, T., Sales-Pardo, M. & Guimerà, R. Predicting future conflict between team-members with parameter-free models of social networks. Sci. Rep. 3 (2013).
21. Gray, H. Anatomy of the human body (Lea & Febiger, 1918).
22. Sperber, G. H., Sperber, S. M. & Guttmann, G. D. Craniofacial embryogenetics and development (PMPH-USA, 2010).
23. Rasskin-Gutman, D. & Esteve-Altava, B. Connecting the dots: anatomical network analysis in morphological EvoDevo. Biol. Theory 9, 178–193 (2014).
24. Guimerà, R., Llorente, A., Moro, E. & Sales-Pardo, M. Predicting human preferences using the block structure of complex social networks. PLoS One 7, e44620 (2012).
25. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. & Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087–1092 (1953).
26. Moss, M. Functional anatomy of cranial synostosis. Pediatr. Neurosurg. 1, 22–33 (1975).
27. Morriss-Kay, G. M. & Wilkie, A. O. Growth of the normal skull vault and its alteration in craniosynostosis: insights from human genetics and experimental studies. J. Anat. 207, 637–653 (2005).
28. Lieberman, D. The evolution of the human head (Harvard University Press, 2011).
29. Katsianou, M. A., Adamopoulos, C., Vastardis, H. & Basdra, E. K. Signaling mechanisms implicated in cranial sutures pathophysiology: craniosynostosis. BBA Clin. 6, 165–176 (2016).
30. Esteve-Altava, B., Marugán-Lobón, J., Botella, H. & Rasskin-Gutman, D. Network models in anatomical systems. J. Anthropol. Sci. 89, 175–184 (2011).
31. Gregory, W. K. Williston’s law relating to the evolution of skull bones in the vertebrates. Am. J. Phys. Anthropol. 20, 123–152 (1935).
32. Sidor, C. A. Simplification as a trend in synapsid cranial evolution. Evolution 55, 1419–1442 (2001).
33. Richtsmeier, J. T. et al. Phenotypic integration of neurocranium and brain. J. Exp. Zoolog. B Mol. Dev. Evol. 306, 360–378 (2006).
## Acknowledgements
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under a Marie Skłodowska-Curie grant (654155) to B.E.-A., from the Ministerio de Economía y Competitividad de España (MINECO-FEDER BFU2015-70927-R) to D.R.-G. and B.E.-A., from the Ministerio de Economía y Competitividad de España (FIS2013-47532-C3-P1 and FIS2016-78904-C3-P-1) to R.G. & M.S.-P., and from the James S. McDonnell Foundation to R.G. & M.S.-P.
## Author information
### Contributions
All authors designed the study. B.E.-A. made the network model of the skull and analyzed the results. R.G., M.S.-P., and T.V.-C. analyzed the network model and calculated reliability scores. All authors discussed the results and wrote the manuscript.
### Corresponding author
Correspondence to Marta Sales-Pardo.
## Ethics declarations
### Competing Interests
The authors declare that they have no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Esteve-Altava, B., Vallès-Català, T., Guimerà, R. et al. Bone Fusion in Normal and Pathological Development is Constrained by the Network Architecture of the Human Skull. Sci Rep 7, 3376 (2017). https://doi.org/10.1038/s41598-017-03196-9
|
Finding equations of motion for pendulum on moving cart
November 5, 2018
1 Introduction
This report shows how to determine the equations of motion for a rigid bar pendulum (physical pendulum) on a moving cart as shown in the following diagram using both Newton’s method and the energy (Lagrangian) method. It is useful to solve the same problem when possible using both methods as this will help verify the results.
There are two degrees of freedom: the $$x$$ coordinate and the $$\theta$$ coordinate. Hence we need to find two equations of motion, one for each coordinate.
2 Newton’s Method
The first step is to draw free body diagrams (FBD), one for the cart and one for the physical pendulum, and equate each FBD to the corresponding kinematics diagram in order to write down the equations of motion.
2.1 FBD for cart
The equation of motion along positive $$x$$ is$$-kx-c\dot{x}+F+P_{x}=M\ddot{x} \tag{1}$$ The equation of motion along positive $$y$$ is not needed since the cart does not move in the vertical direction. We see that to find the equation for $$\ddot{x}$$ we just need to determine $$P_{x}$$, since that is the only unknown in (1). $$P_{x}$$ will be found from the physical pendulum equations, as shown below.
2.2 FBD for pendulum
We see now that the equation of motion along positive $$x$$ is$$-P_{x}=m\ddot{x}+m\frac{L}{2}\ddot{\theta }\cos \theta -m\frac{L}{2}\dot{\theta }^{2}\sin \theta \tag{2}$$ This gives us the $$P_{x}$$ we wanted to plug in (1). Equation (1) now becomes\begin{align} -kx-c\dot{x}+F-\left ( m\ddot{x}+m\frac{L}{2}\ddot{\theta }\cos \theta -m\frac{L}{2}\dot{\theta }^{2}\sin \theta \right ) & =M\ddot{x}\nonumber \\ -kx-c\dot{x}+F-m\frac{L}{2}\ddot{\theta }\cos \theta +m\frac{L}{2}\dot{\theta }^{2}\sin \theta & =\ddot{x}\left ( M+m\right ) \nonumber \end{align}
Hence$$\fbox{\ddot{x}\left ( M+m\right ) +c\dot{x}+kx+\frac{mL}{2}\left ( \ddot{\theta }\cos \theta -\dot{\theta }^2\sin \theta \right ) =F} \tag{3}$$ The above is the equation of motion for $$\ddot{x}$$.
To find the equation of motion for $$\ddot{\theta }$$ we take moments about the C.G. of the rigid pendulum, using counterclockwise as positive. This gives\begin{align} P_{y}\frac{L}{2}\sin \theta -P_{x}\frac{L}{2}\cos \theta & =-I_{cg}\ddot{\theta }\nonumber \\ P_{y}\frac{L}{2}\sin \theta -P_{x}\frac{L}{2}\cos \theta & =-\frac{1}{12}mL^{2}\ddot{\theta }\tag{4} \end{align}
We know $$P_{x}$$ from (2). We now need only to find $$P_{y}$$. This is found by resolving forces in the vertical direction for the pendulum free body diagram, giving\begin{align} -P_{y}-mg & =-m\frac{L}{2}\dot{\theta }^{2}\cos \theta -m\frac{L}{2}\ddot{\theta }\sin \theta \nonumber \\ P_{y} & =m\frac{L}{2}\dot{\theta }^{2}\cos \theta +m\frac{L}{2}\ddot{\theta }\sin \theta -mg\tag{5} \end{align}
Plugging (2) and (5) into (4) to eliminate $$P_{x},P_{y}$$, then (4) simplifies to\begin{align*} \left ( m\frac{L}{2}\dot{\theta }^{2}\cos \theta +m\frac{L}{2}\ddot{\theta }\sin \theta -mg\right ) \frac{L}{2}\sin \theta +\left ( m\ddot{x}+m\frac{L}{2}\ddot{\theta }\cos \theta -m\frac{L}{2}\dot{\theta }^{2}\sin \theta \right ) \frac{L}{2}\cos \theta & =-\frac{1}{12}mL^{2}\ddot{\theta }\\ m\frac{L^{2}}{4}\dot{\theta }^{2}\cos \theta \sin \theta +m\frac{L^{2}}{4}\ddot{\theta }\sin ^{2}\theta -mg\frac{L}{2}\sin \theta +m\ddot{x}\frac{L}{2}\cos \theta +m\frac{L^{2}}{4}\ddot{\theta }\cos ^{2}\theta -m\frac{L^{2}}{4}\dot{\theta }^{2}\sin \theta \cos \theta & =-\frac{1}{12}mL^{2}\ddot{\theta }\\ -mg\frac{L}{2}\sin \theta +m\frac{L^{2}}{4}\ddot{\theta }\sin ^{2}\theta +m\ddot{x}\frac{L}{2}\cos \theta +m\frac{L^{2}}{4}\ddot{\theta }\cos ^{2}\theta & =-\frac{1}{12}mL^{2}\ddot{\theta }\\ -mg\frac{L}{2}\sin \theta +m\ddot{x}\frac{L}{2}\cos \theta +m\frac{L^{2}}{4}\ddot{\theta } & =-\frac{1}{12}mL^{2}\ddot{\theta }\\ -g\sin \theta +\ddot{x}\cos \theta & =-\frac{2}{3}L\ddot{\theta } \end{align*}
Therefore$$\fbox{\ddot{\theta }=\frac{3}{2}\left ( \frac{g\sin \theta -\ddot{x}\cos \theta }{L}\right ) }\tag{6}$$ The above is the required equation of motion for $$\ddot{\theta }$$. Equations (3) and (6) are coupled and nonlinear, so they have to be solved numerically; alternatively, a small-angle approximation can be used to simplify the two equations and solve them analytically.
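As a numerical sanity check (added here; all parameter values are arbitrary illustrative choices), equations (3) and (6) can be solved simultaneously for $$\ddot{x}$$ and integrated with SciPy. Substituting (6) into (3) gives $$\ddot{x}\left ( M+m-\frac{3m}{4}\cos ^{2}\theta \right ) =F-c\dot{x}-kx+\frac{mL}{2}\dot{\theta }^{2}\sin \theta -\frac{3mg}{4}\sin \theta \cos \theta$$ which is the expression used below.

```python
import numpy as np
from scipy.integrate import solve_ivp

M, m, L, k, c, g = 2.0, 0.5, 1.0, 10.0, 0.8, 9.81   # arbitrary parameters
F = lambda t: 0.0                                    # no external force

def rhs(t, y):
    x, xdot, th, thdot = y
    s, co = np.sin(th), np.cos(th)
    # xddot from (3) after eliminating thddot with (6):
    num = F(t) - c*xdot - k*x + (m*L/2)*thdot**2*s - (3*m*g/4)*s*co
    xdd = num / (M + m - (3*m/4)*co**2)
    thdd = 1.5 * (g*s - xdd*co) / L                  # equation (6)
    return [xdot, xdd, thdot, thdd]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 0.4, 0.0], max_step=0.01)
print(sol.y[0, -1], sol.y[2, -1])   # cart position and angle at t = 10 s
```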
3 Lagrange method
The first step in using Lagrange method is to make a velocity diagram to each object. This is shown below
From the velocity diagram above we see that the kinetic energy of the system is$$T=\frac{1}{2}M\dot{x}^{2}+\frac{1}{2}mv^{2}+\frac{1}{2}I_{cg}\dot{\theta }^{2} \tag{7}$$ Where $$\frac{1}{2}M\dot{x}^{2}$$ is K.E. of cart due to its linear motion, and $$\frac{1}{2}mv^{2}$$ is K.E. of physical pendulum due to its translation motion of its center of mass, and $$\frac{1}{2}I_{cg}\dot{\theta }^{2}$$ is K.E. of physical pendulum due to its rotational motion. Now we find $$v$$\begin{align*} v^{2} & =v_{x}^{2}+v_{y}^{2}\\ v_{x}^{2} & =\left ( \dot{x}+\frac{L}{2}\dot{\theta }\cos \theta \right ) ^{2}\\ v_{y}^{2} & =\left ( \frac{L}{2}\dot{\theta }\sin \theta \right ) ^{2} \end{align*}
Therefore the K.E. from (7) becomes\begin{align*} T & =\overset{\text{cart K.E.}}{\overbrace{\frac{1}{2}M\dot{x}^{2}}}+\overset{\text{translation K.E. of physical pendulum}}{\overbrace{\frac{1}{2}m\left ( \left ( \dot{x}+\frac{L}{2}\dot{\theta }\cos \theta \right ) ^{2}+\left ( \frac{L}{2}\dot{\theta }\sin \theta \right ) ^{2}\right ) }}+\overset{\text{rotation K.E.}}{\overbrace{\frac{1}{2}\left ( \frac{1}{12}mL^{2}\right ) \dot{\theta }^{2}}}\\ & =\frac{1}{2}M\dot{x}^{2}+\frac{1}{2}m\left ( \dot{x}^{2}+\frac{L^{2}}{4}\dot{\theta }^{2}\cos ^{2}\theta +\dot{x}L\dot{\theta }\cos \theta +\frac{L^{2}}{4}\dot{\theta }^{2}\sin ^{2}\theta \right ) +\frac{1}{24}mL^{2}\dot{\theta }^{2}\\ & =\frac{1}{2}\dot{x}^{2}(M+m)+\frac{1}{2}m\left ( \frac{L^{2}}{4}\dot{\theta }^{2}+\dot{x}L\dot{\theta }\cos \theta \right ) +\frac{1}{24}mL^{2}\dot{\theta }^{2}\\ & =\frac{1}{2}\dot{x}^{2}(M+m)+m\frac{L^{2}}{8}\dot{\theta }^{2}+\frac{1}{2}m\dot{x}L\dot{\theta }\cos \theta +\frac{1}{24}mL^{2}\dot{\theta }^{2}\\ & =\frac{1}{2}\dot{x}^{2}(M+m)+\frac{1}{2}m\dot{x}L\dot{\theta }\cos \theta +\frac{1}{6}mL^{2}\dot{\theta }^{2} \end{align*}
Taking zero potential energy $$V$$ at the horizontal level where the pendulum is attached to the cart, the P.E. comes only from the spring extension and the change in vertical position of the center of mass of the pendulum, which is given by $V=mg\frac{L}{2}\cos \theta +\frac{1}{2}kx^{2}$ Hence the Lagrangian $$\Gamma$$ is \begin{align*} \Gamma & =T-V\\ & =\frac{1}{2}\dot{x}^{2}(M+m)+\frac{1}{2}m\dot{x}L\dot{\theta }\cos \theta +\frac{1}{6}mL^{2}\dot{\theta }^{2}-mg\frac{L}{2}\cos \theta -\frac{1}{2}kx^{2} \end{align*}
There are two degrees of freedom: $$x$$ and $$\theta$$. The generalized force for $$x$$ is $$Q_{x}=F-c\dot{x}$$ and the generalized force for $$\theta$$ is $$Q_{\theta }=0$$. The equations of motion can now be found. For $$x$$ \begin{align*} \frac{d}{dt}\frac{\partial \Gamma }{\partial \dot{x}}-\frac{\partial \Gamma }{\partial x} & =Q_{x}\\ \frac{d}{dt}\left ( \dot{x}(M+m)+\frac{1}{2}mL\dot{\theta }\cos \theta \right ) +kx & =F(t)-c\dot{x}\\ \ddot{x}(M+m)+\frac{1}{2}mL\ddot{\theta }\cos \theta -\frac{1}{2}mL\dot{\theta }^{2}\sin \theta +kx & =F(t)-c\dot{x}\\ \ddot{x}(M+m)+c\dot{x}+kx+\frac{1}{2}mL\ddot{\theta }\cos \theta -\frac{1}{2}mL\dot{\theta }^{2}\sin \theta & =F(t) \end{align*}
Therefore$\fbox{\ddot{x}(M+m)+c\dot{x}+kx+\frac{mL}{2}\left ( \ddot{\theta }\cos \theta -\dot{\theta }^2\sin \theta \right ) =F(t)}$ Which is the same result as Newton method found above in (3). Now we find equation of motion for $$\theta$$ $\frac{d}{dt}\frac{\partial \Gamma }{\partial \dot{\theta }}-\frac{\partial \Gamma }{\partial \theta }=0$ But\begin{align*} \frac{\partial \Gamma }{\partial \dot{\theta }} & =\frac{1}{2}m\dot{x}L\cos \theta +\frac{1}{3}mL^{2}\dot{\theta }\\ \frac{\partial \Gamma }{\partial \theta } & =-\frac{1}{2}m\dot{x}L\dot{\theta }\sin \theta +mg\frac{L}{2}\sin \theta \end{align*}
Hence $$\frac{d}{dt}\frac{\partial \Gamma }{\partial \dot{\theta }}-\frac{\partial \Gamma }{\partial \theta }=0$$ becomes\begin{align*} \frac{d}{dt}\left ( \frac{1}{2}m\dot{x}L\cos \theta +\frac{1}{3}mL^{2}\dot{\theta }\right ) -\left ( -\frac{1}{2}m\dot{x}L\dot{\theta }\sin \theta +mg\frac{L}{2}\sin \theta \right ) & =0\\ \frac{1}{2}mL\ddot{x}\cos \theta -\frac{1}{2}mL\dot{x}\dot{\theta }\sin \theta +\frac{1}{3}\ddot{\theta }mL^{2}+\frac{1}{2}m\dot{x}L\dot{\theta }\sin \theta -mg\frac{L}{2}\sin \theta & =0\\ \frac{1}{2}mL\ddot{x}\cos \theta +\frac{1}{3}\ddot{\theta }mL^{2}-mg\frac{L}{2}\sin \theta & =0\\ \ddot{x}\cos \theta +\frac{2}{3}\ddot{\theta }L-g\sin \theta & =0 \end{align*}
Therefore$\fbox{\ddot{\theta }=\frac{3}{2}\left ( \frac{g\sin \theta -\ddot{x}\cos \theta }{L}\right ) }$ This is the same ODE (6) obtained above by Newton's method.
|
# Basic Concepts
## Record
A record is the smallest unit that can be loaded from and stored into the database.
### Record types
There are several types of records.
#### Document
Documents are the most flexible record available in OrientDB. They are softly typed and are defined by schema classes with defined constraints, but can also be used in schema-less mode. Documents handle fields in a flexible way. A Document can easily be imported and exported in JSON format. Below is an example of a Document in JSON format:
{
"name": "Jay",
"surname": "Miner",
"job": "Developer",
"creations": [
{ "name": "Amiga 1000",
"company": "Commodore Inc."
},
{ "name": "Amiga 500",
"company": "Commodore Inc."
}
]
}
OrientDB Documents support complex relationships. From a programmer's perspective this can be seen as a sort of persistent Map.
#### Flat
Records are strings. No fields are supported, no indexing, no schema.
### RecordID
In OrientDB, each record has an auto assigned Unique ID. The RecordID (or RID) is composed in this way:
#[<cluster>:<position>]
Where:
• cluster is the cluster ID. Positive numbers denote persistent records. Negative numbers denote temporary records, like those used in result sets for queries that use projections.
• position is the absolute position of the record inside a cluster.
NOTE: The prefix character # is mandatory to recognize a RecordID.
The record never loses its identity unless it is deleted; once deleted, its identity is never recycled (except with the "local" storage). You can access a record directly by its RecordID. For this reason you don't need to create a field to serve as a primary key, as in a Relational DBMS.
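For illustration only (a small helper sketched here, not part of OrientDB's API), a RecordID string can be validated and split into its two components; the position is assumed non-negative:

```python
import re

RID_RE = re.compile(r"^#(-?\d+):(\d+)$")  # "#<cluster>:<position>"

def parse_rid(rid: str):
    m = RID_RE.match(rid)
    if m is None:
        raise ValueError(f"not a valid RecordID: {rid!r}")
    cluster, position = int(m.group(1)), int(m.group(2))
    return cluster, position  # cluster < 0 marks a temporary record

print(parse_rid("#5:23"))     # -> (5, 23)
```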
### Record version
Each record maintains its own version number that is incremented at every update. When a record is created, the version is zero. In optimistic transactions the version is checked in order to avoid conflicts at commit time.
## Class
A Class is a concept taken from the Object Oriented paradigm. In OrientDB it defines a type of record. It's the closest concept to a Relational DBMS Table. Classes can be schema-less, schema-full or mixed.
A class can inherit from another, creating a tree of classes. Inheritance means that the sub-class extends the parent one, inheriting all its attributes.
Each class has its own clusters. A class must have at least one cluster defined (its default cluster), but can support multiple ones. In this case by default OrientDB will write new records in the default cluster, but reads will always involve all the defined clusters.
When you create a new class by default a new physical cluster is created with the same name of the class in lowercase.
### Abstract Class
If you know Object-Orientation you already know what an abstract class is.
To create a new abstract class look at SQL Create Class.
Abstract classes are essential to support Object Orientation without the typical spamming of the database with always empty auto-created clusters. NOTE: available since 1.2.0
### When to use class or cluster in queries?
Let's use an example: Let's assume you created a class "Invoice" and 2 clusters "invoice2011" and "invoice2012".
You can now query all the invoices by using the class as target in SQL select:
SELECT FROM Invoice
If you want to filter per year 2012 and you've created a "year" field in Invoice class do:
SELECT FROM Invoice where year = 2012
You may also query specific objects from a single cluster (so, by splitting the Class Invoice in multiple clusters, e.g. one per year, you narrow your candidate objects):
SELECT FROM cluster:invoice2012
This query may be significantly faster because OrientDB can narrow the search to the targeted cluster.
The combination of Classes and Clusters is very powerful and has many use cases.
## Relationships
OrientDB supports two kinds of relationships: referenced and embedded. OrientDB can manage relationships in a schema-full or in schema-less scenario.
### Referenced relationships
Relationships in OrientDB are managed natively without computing costly JOINs as in a Relational DBMS. In fact, OrientDB stores the direct link(s) to the target objects of the relationship. This speeds up the loading of an entire graph of connected objects, as in Graph and Object DBMSs. Example:
customer
Record A -------------> Record B
CLASS=Invoice CLASS=Customer
RID=5:23 RID=10:2
Record A will contain the reference to Record B in the property called "customer". Note that both records are reachable by other records since they have a RecordID.
#### 1-1 and N-1 referenced relationships
This kind of relationship is expressed using the LINK type.
#### 1-N and N-M referenced relationships
These kinds of relationships are expressed using a collection of links, such as:
• LINKSET, an unordered set of links. It doesn't accept duplicates
• LINKMAP, an ordered map of links with String as the key type. Keys don't accept duplicates
### Embedded relationships
Embedded records, instead, are contained inside the record that embeds them. It's a kind of relationship stronger than the reference. It can be represented like the UML Composition relationship. The embedded record will not have an own RecordID, since it can't be directly referenced by other records. It's only accessible through the container record. If the container record is deleted, then the embedded record will be deleted too. Example:
address
Record A <>----------> Record B
RID=5:23 NO RID!
Record A will contain the entire Record B in the property called "address". Record B can be reached only by traversing the container record.
Example:
SELECT FROM account WHERE address.city = 'Rome'
#### 1-1 and N-1 embedded relationships
These kinds of relationships are expressed using the EMBEDDED type.
#### 1-N and N-M embedded relationships
These kinds of relationships are expressed using a collection of embedded records, such as:
• EMBEDDEDLIST, an ordered list of records
• EMBEDDEDSET, an unordered set of records. It doesn't accept duplicates
• EMBEDDEDMAP, an ordered map with String keys and records as values. It doesn't accept duplicate keys
### Inverse relationships
In OrientDB, all Graph Model edges (connections between vertices) are bi-directional. This differs from the Document Model where relationships are always mono-directional, thus requiring the developer to maintain data integrity. In addition, OrientDB automatically maintains the consistency of all bi-directional relationships (aka edges).
## Database
A database is an interface to access the real Storage. The database understands high-level concepts like Queries, Schemas, Metadata, Indexes, etc. OrientDB also provides multiple database types. Take a look at the Database types to learn more about them.
Each server or JVM can handle multiple database instances, but the database name must be UNIQUE, so you can't manage two databases named "customer" in two different directories at the same time. To handle this case, use $ (dollar) as a separator instead of / (slash). OrientDB will bind the entire name, so it will be unique, but at the file-system level it converts $ to /, allowing multiple databases with the same name in different paths. Example:
test$customers -> test/customers
production$customers -> production/customers
The database must be opened as:
test = new ODatabaseDocumentTx("remote:localhost/test$customers");
production = new ODatabaseDocumentTx("remote:localhost/production$customers");
### Database URL
OrientDB has its own URL format:
<engine>:<db-name>
Where:
• db-name is the database name and depends on the engine used (see below)
• engine can be:
| Engine | Description | Example |
| --- | --- | --- |
| plocal | This engine writes to the file system to store data. There is a LOG of changes to restore the storage in case of crash. | plocal:/temp/databases/petshop/petshop |
| memory | Opens a database completely in memory. | memory:petshop |
| remote | The storage will be opened via a remote network connection. It requires an OrientDB Server up and running. In this mode the database is shared among multiple clients. Syntax: remote:<server>:[<port>]/db-name. The port is optional and defaults to 2424. | remote:localhost/petshop |
### Database usage
The database must always be closed once you've finished working with it.
NOTE: OrientDB automatically closes all opened storages when the process dies softly (not by force killing). This is assured if the Operating System allows a graceful shutdown.
|
### Home > CALC > Chapter 5 > Lesson 5.2.4 > Problem5-95
5-95.
When Regit hit his golf ball at the 18th hole, it went straight up in the air!
1. If he hit it with an initial velocity of 144 feet per second, write an equation for the ball's velocity, v(t), at time t. Assume the gravitational constant a(t) = −32 ft/sec2 and that Regit hit the ball while it was on the ground.
2. When was the ball at rest? What is happening at that point in time?
3. What was the maximum height of the ball?
$s(t)=\frac{1}{2}at^{2}+v_{0}t+s_{0}$
Find s(t). Then find v(t).
s(t) = −16t² + 144t + 0
v(t) = −32t + 144
The ball is at rest whenever the velocity is equal to zero. What is happening to the ball at these times?
Remember that the ball has a velocity of zero at the peak of its parabolic flight.
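Working the hints through (a quick check added here, not part of the original hint set): setting the velocity to zero gives the time at the peak, and substituting that time into s(t) gives the maximum height.

$v(t)=-32t+144=0\ \Rightarrow\ t=4.5\text{ s},\qquad s(4.5)=-16(4.5)^{2}+144(4.5)=324\text{ ft}$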
|
Chapter 35
### Venous Anatomy
Veins of the lower extremity (Figure 35–1) consist of superficial and deep systems joined by venous perforators. The greater and lesser saphenous veins are superficial veins; the name "saphenous" is aptly derived from the Greek word for "manifest, clear," or "visible." They contain many valves and show considerable variation in their location and branching points. The greater saphenous vein may be duplicated in up to 10% of patients. Typically, it originates from the superficial arch of the foot and is found anterior to the medial malleolus at the ankle. As it ascends in the calf just beneath the superficial fascia, it is joined by two major tributaries: an anterior vein, which crosses the tibia; and a posterior arch vein, which arises posterior to the medial malleolus beside the posterior tibial artery. The greater saphenous vein then enters the fossa ovalis in the groin to empty into the deep femoral vein.
###### Figure 35–1.
Anatomy of the superficial and perforating veins of the lower extremity. (From Rutherford RB, Cronenwett JL, Gloviczki P: Vascular Surgery. Philadelphia: Saunders, 2000. Reproduced by permission from Elsevier.)
The saphenofemoral junction is marked by four or five prominent branches of the greater saphenous vein: the superficial circumflex iliac vein, the external pudendal vein, the superficial epigastric vein, and the medial and lateral accessory saphenous veins. Another important anatomic landmark is the relationship of the greater saphenous vein to the saphenous branch of the femoral nerve; as it emerges from the popliteal space, the nerve follows a course parallel to the vein. Injury during saphenous vein stripping or saphenous vein harvest for bypass produces neuropathic pain or numbness along the medial calf and foot. The lesser saphenous vein arises from the superficial dorsal venous arch behind the lateral malleolus at the ankle and curves toward the midline of the posterior calf, ascending to join the popliteal vein behind the knee.
Deep veins of the leg parallel the courses of the arteries. Two or three venae comitantes accompany each tibial artery. At the knee, these paired high-capacitance veins merge to form the popliteal vein, which continues proximally as the femoral vein. At the inguinal ligament, the femoral and deep (profunda) femoral veins join medial to the femoral artery to form the common femoral vein. Proximal to the inguinal ligament, the common femoral vein becomes the external iliac vein. In the pelvis, external and internal iliac veins join to form common iliac veins that empty into the inferior vena cava (IVC). The right common iliac vein ascends almost vertically to the IVC while the left common iliac vein takes a more transverse course. For this reason, the left common iliac vein may be compressed between the right common iliac artery and lumbosacral spine, a condition known as May-Thurner (Cockett) syndrome when thrombosis of the left iliac vein occurs.
...
|
# Kinematic problems
The human body can survive a negative acceleration trauma incident if the magnitude of the acceleration is less than 250 m/s^2. If you are in an automobile accident at an initial speed of 96 km/h and are stopped by an airbag that inflates from the dashboard, over what distance must the airbag stop you for you to survive the crash?
So I know that $v_{0} = 96$, $v_{x} = 0$ and $a_{x} = 250$. So is it correct to say $v_{x} = v_{x0} + a_{x}t$ to find the time, or $0 = 96-250t$ and $t = 0.384 sec$? Then you use $x-x_{0} = v_{x0}t + \frac{1}{2}a_{x}t^{2}$ and you get the distance to be $18.432 m$
Is this correct?
Thanks
any ideas?
change 96 km/h to m/s.
I would do the above suggestion and use this equation... it's faster.
$$V_{f}^2=V_{0}^2+2ad$$
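A quick numerical check of this suggestion (added here, not part of the original thread):

```python
v0 = 96 * 1000 / 3600   # 96 km/h converted to m/s (about 26.67 m/s)
a = 250.0               # maximum survivable deceleration, m/s^2
d = v0**2 / (2 * a)     # from v_f^2 = v_0^2 - 2*a*d with v_f = 0
print(f"stopping distance = {d:.2f} m")   # about 1.42 m
```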
A car 3.5 m in length and traveling at a constant speed of 20 m/s is approaching an intersection. The width of the intersection is 20 m. The light turns yellow when the front of the car is 50 m from the beginning of the intersection. If the driver steps on the brake, the car will slow at -4.2 m/s^2. If the driver instead steps on the gas pedal, the car will accelerate at 1.5 m/s^2. The light will be yellow for 3.0 s. Ignore the reaction time of the driver. To avoid being in the intersection while the light is red, should the driver hit the brake pedal or gas pedal?
Could someone give me a general idea of where to start, and a general problem solving strategy?
Thanks
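One way to settle the second question numerically (a sketch added here, assuming the decision is made the instant the light turns yellow and that staying out of the intersection during the red means either stopping before it or clearing it, rear bumper included, within the 3.0 s of yellow):

```python
v, car_len, width, gap, t_yellow = 20.0, 3.5, 20.0, 50.0, 3.0

# Option 1: brake at 4.2 m/s^2 -- distance needed to come to rest.
d_stop = v**2 / (2 * 4.2)
print(f"braking: stops in {d_stop:.1f} m (must be < {gap} m)")      # 47.6 m: ok

# Option 2: accelerate at 1.5 m/s^2 for the 3.0 s of yellow.
d_go = v * t_yellow + 0.5 * 1.5 * t_yellow**2
d_clear = gap + width + car_len   # distance for the rear bumper to clear
print(f"accelerating: covers {d_go:.2f} m, needs {d_clear} m")      # 66.75 < 73.5
```

On these numbers the car can stop in 47.6 m but would travel only 66.75 m of the required 73.5 m, so braking is the safe choice.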
|
2015-04-10
# Evacuation Plan
Flatland government is building a new highway that will be used to transport weapons from its main weapon plant to the frontline in order to support the ongoing military operation against its neighbor country Edgeland. The highway is a straight line and there are n construction teams working at some points on it.
During last days the threat of a nuclear attack from Edgeland has significantly increased. Therefore the construction office has decided to develop an evacuation plan for the construction teams in case of a nuclear attack. There are m shelters located near the constructed highway. This evacuation plan must assign each team to a shelter that it should use in case of an attack.
Each shelter entrance must be securely locked from the inside to prevent any damage to the shelter itself. So, for each shelter there must be some team that goes to this shelter in case of an attack. The office must also supply fuel to each team, so that it can drive to its assigned shelter in case of an attack. The amount of fuel that is needed is proportional to the distance from the team’s location to the assigned shelter. To minimize evacuation costs, the office would like to create a plan that minimizes the total fuel needed.
The input begins with an integer T. The next T blocks each represents a case. The first line of each case contains n – the number of construction teams (1 ≤ n ≤ 4000). The second line contains n integer numbers – the locations of the teams. Each team’s location is a positive integer not exceeding 109, all team locations are different.
The third line of each case contains m – the number of shelters (1 ≤ m ≤ n). The fourth line contains m integer numbers – the locations of the shelters. Each shelter’s location is a positive integer not exceeding 109, all shelter locations are different.
The amount of fuel that needs to be supplied to a team at location x that goes to a shelter at location y is equal to |x – y|.
Sample input:
1
3
1 2 3
2
2 10
Sample output:
8
1 1 2
#include<iostream>
#include<cstdio>
#include<cstring>
#include<cstdlib>
#include<algorithm>
using namespace std;

const long long inf=(1LL)<<60;
// path[i][j]: the j-index of the optimal sub-solution for the first i-1 teams,
// i.e. how many shelters were used before team i was assigned to shelter j.
short path[4010][4010];
long long f[4010];  // f[j]: minimum fuel for the current prefix of teams using the first j shelters
struct node
{
    long long d;    // location on the highway
    int num;        // original input index (1-based)
    int sh;         // shelter assigned to this team (filled during reconstruction)
}x[4010],y[4010];
int n,m;
bool cmpd(node a,node b)
{
    return a.d<b.d;
}
bool cmpnum(node a,node b)
{
    return a.num<b.num;
}
// Walk back through path[][] and record the shelter of each team.
void solve(int i,int j)
{
    if(i!=1) solve(i-1,path[i][j]);
    x[i].sh=y[j].num;
}
int main()
{
    int sec;
    scanf("%d",&sec);
    for(int z=1;z<=sec;z++)
    {
        scanf("%d",&n);
        for(int i=1;i<=n;i++)
        {
            scanf("%I64d",&x[i].d);  // %I64d targets the original Windows judge; use %lld elsewhere
            x[i].num=i;
        }
        scanf("%d",&m);
        for(int i=1;i<=m;i++)
        {
            scanf("%I64d",&y[i].d);
            y[i].num=i;
        }
        // Exchange argument: in an optimal plan, teams and shelters are
        // matched in sorted order of their positions.
        sort(x+1,x+1+n,cmpd);
        sort(y+1,y+1+m,cmpd);
        for(int i=0;i<=max(n,m);i++)
            f[i]=inf;
        f[1]=llabs(x[1].d-y[1].d);
        for(int i=2;i<=n;i++)
            for(int j=min(i,m);j>=1;j--)  // iterate j downwards so f[] can be updated in place
            {
                if(f[j]<f[j-1])           // team i reuses shelter j
                {
                    f[j]=f[j]+llabs(x[i].d-y[j].d);
                    path[i][j]=j;
                }
                else                      // team i is the first to use shelter j
                {
                    f[j]=f[j-1]+llabs(x[i].d-y[j].d);
                    path[i][j]=j-1;
                }
            }
        printf("%I64d\n",f[m]);
        solve(n,m);
        sort(x+1,x+1+n,cmpnum);           // restore input order before printing assignments
        for(int i=1;i<=n-1;i++)
            printf("%d ",x[i].sh);
        printf("%d\n",x[n].sh);
    }
    return 0;
}
|
LightJason - AgentSpeak(L++)
Based on the project Jason by Jomi F. Hübner and Rafael H. Bordini, a Java 9 implementation has been built with parallel execution calls. The version defines an additional AgentSpeak(L) grammar based on AntLR for simulating a multi-agent system with a fuzzy-based logical calculus and grammar features like lambda expressions. Agent execution is based on a mathematical structure that describes an optimizing process by a finite-state machine.
## Base Definitions
### Belief
• Beliefs implicitly describe the current state of the agent
• Beliefs will be updated before the cycle is run (beliefbase uses an update mechanism)
• Beliefs must exist iff an expression is computed (beliefs can exist on the fly)
• Belief addition triggers a plan with the definition +belief
• Belief retraction triggers a plan with the definition -belief
• Belief modification with -+ does not exist anymore
• Variables within a belief literal will be unified before the belief is added to the beliefbase
### Action
• Actions will be run immediately
• Actions can fail (fuzzy-logic false) or succeed (fuzzy-logic true)
• There is no difference between internal and external actions
• Actions with @-prefix will be executed in parallel (each inner action will be run in parallel)
### Plan
• Plans are sequences of actions, rules and/or achievement / test goals
• Plans have an optional context that defines a constraint for execution (the default is fuzzy-logic true, which always matches)
• Plans fail iff the defuzzyfication returns fuzzy-logic false
• Plans return a boolean value which defines failure (fuzzy-logic false) or success (fuzzy-logic true)
• Plans run their items in sequential order by default
• If the plan calls an achievement goal addition, the goal will be added for the next cycle
• An achievement goal deletion does not exist anymore
#### Internals Constants
• The plan has additional constant variables that are added in the context condition (values are calculated before plan execution starts)
• PlanFail stores the number of failed runs and PlanFailRatio the normalized value in [0,1]
• PlanSuccessful stores the number of successful runs and PlanSuccessfulRatio the normalized value in [0,1]
• PlanRuns stores the total number of runs of the plan (failed + successful runs)
#### Fuzziness
• Fuzzy value must be in [0,1]
• Each action in a fuzzy-plan also returns a fuzzy value to define the fuzziness
• The plan or rule result returns fuzzy-logic true / false and the aggregated fuzzy value
### Rule
• Rules are similar to plans without the context condition
• Rules cannot be triggered by a goal, so they must be called from a plan
### Action / Term Annotation
• In LightJason one can specify HOW actions and terms will be executed / unified.
• The concept of action-term-annotations allows one to annotate actions and terms to perform
• unification (>>)
• parallel execution (@), see Variables and lambda expressions.
• ...
• If more than one action-term-annotation needs to be added, they have to be ordered according to the rule: first HOW, then WHAT, e.g. @>> (parallel unification)
• To annotate multiple actions/terms brackets (,) can be used. See the following examples
• Examples
• @>>( foo(X), X > 1 ) && Value > 0.5 (unify foo(X) and X > 1 in parallel and if this results in a true statement check whether Value > 0.5)
• >>foo(X) && X > 1 && Value > 0.5 (unify foo(X), then test the following terms sequentially)
|
# Contraction (operator theory)
contracting operator, contractive operator, compression
A bounded linear mapping $T$ of a Hilbert space $H$ into a Hilbert space $H _ { 1 }$ with $\| T \| \leq 1$. For $H = H _ { 1 }$, a contractive operator $T$ is called completely non-unitary if it is not a unitary operator on any $T$-reducing subspace different from $\{ 0 \}$. Such are, for example, the one-sided shifts (in contrast to the two-sided shifts, which are unitary). Associated with each contractive operator $T$ on $H$ there is a unique orthogonal decomposition, $H = H _ { 0 } \oplus H _ { 1 }$, into $T$-reducing subspaces such that $T _ { 0 } = T | _ { H _ { 0 } }$ is unitary and $T _ { 1 } = T | _ { H _ { 1 } }$ is completely non-unitary. $T = T _ { 0 } \oplus T _ { 1 }$ is called the canonical decomposition of $T$.
A dilation of a given contractive operator acting on $H$ is a bounded operator $B$ acting on some larger Hilbert space $K \supset H$ such that $T ^ { n } = P B ^ { n } | _ { H }$, $n = 1,2 , \dots,$ where $P$ is the orthogonal projection of $K$ onto $H$. Every contractive operator in a Hilbert space $H$ has a unitary dilation $U$ on a space $K \supset H$, which, moreover, is minimal in the sense that $K$ is the closed linear span of $\{ U ^ { n } H \} _ { n = - \infty } ^ { + \infty }$ (the Szökefalvi-Nagy theorem). Minimal unitary dilations and functions of them, defined via spectral theory, allow one to construct a functional calculus for contractive operators. This has been done essentially for bounded analytic functions in the open unit disc $D$ (the Hardy class $H ^ { \infty }$). A completely non-unitary contractive operator $T$ belongs, by definition, to the class $C _ { 0 }$ if there is a function $u \in H ^ { \infty }$, $u ( \lambda ) \not \equiv 0$, such that $u ( T ) = 0$. The class $C _ { 0 }$ is contained in the class $C_{00}$ of contractive operators $T$ for which $T ^ { n } \rightarrow 0$, $T ^ { * n } \rightarrow 0$ as $n \rightarrow \infty$. For every contractive operator of class $C _ { 0 }$ there is the so-called minimal function $m _ { T } ( \lambda )$ (that is, an inner function $u \in H ^ { \infty }$, $| u ( \lambda ) | \leq 1$ in $D$, $| u ( e ^ { i t } ) | = 1$ almost-everywhere on the boundary of $D$) such that $m _ { T } ( T ) = 0$ and $m _ { T } ( \lambda )$ is a divisor of all other inner functions with the same property. The set of zeros of the minimal function $m _ { T } ( \lambda )$ of a contractive operator $T$ in $D$, together with the complement in the unit circle of the union of the arcs along which $m _ { T } ( \lambda )$ can be analytically continued, coincides with the spectrum $\sigma ( T )$. The notion of a minimal function of a contractive operator $T$ of class $C _ { 0 }$ allows one to extend the functional calculus for this class of contractive operators to certain meromorphic functions in $D$.
The theorem on unitary dilations has been obtained not only for individual contractive operators but also for discrete, $\{ T ^ { n } \}$, $n = 0,1 , \ldots,$ and continuous, $\{ T ( s ) \}$, $0 \leq s \leq \infty$, semi-groups of contractive operators.
As for dissipative operators (cf. Dissipative operator), a theory of characteristic operator-valued functions has also been constructed for contractive operators and, on the basis of this, a functional model, which allows one to study the structure of contractive operators and the relations between the spectrum, the minimal function and the characteristic function (see [1]). By the Cayley transformation
\begin{equation*} A = ( I + T ) ( I - T ) ^ { - 1 } , \quad 1 \notin \sigma _ { p } ( T ), \end{equation*}
a contractive operator $T$ is related to a maximal accretive operator $A$, that is, $A$ is such that $i A$ is a maximal dissipative operator. Constructed on this basis is the theory of dissipative extensions $B_0$ of symmetric operators $A _ { 0 }$ (respectively, Phillips dissipative extensions $i B _ { 0 }$ of conservative operators $i A _ { 0 }$).
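Solving the relation above for $T$ (a short computation, using that $A + I$ is invertible for a maximal accretive $A$) gives the inverse transformation:

\begin{equation*} T = ( A - I ) ( A + I ) ^ { - 1 }. \end{equation*}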
The theories of similarity, quasi-similarity and unicellularity have been developed for contractive operators. The theory of contractive operators is closely connected with the prediction theory of stationary stochastic processes and with scattering theory. In particular, the Lax–Phillips scheme [2] can be considered as a continual analogue of the Szökefalvi-Nagy–Foias theory of contractive operators of class $C_{00}$.
#### References
[1] B. Szökefalvi-Nagy, Ch. Foiaş, "Harmonic analysis of operators in Hilbert space", North-Holland (1970) (Translated from French)
[2] P.D. Lax, R.S. Phillips, "Scattering theory", Acad. Press (1967)
A reducing subspace for an operator $T$ is a closed subspace $K$ such that there is a complement $K ^ { \prime }$, i.e. $H = K \oplus K ^ { \prime }$, such that both $K$ and $K ^ { \prime }$ are invariant under $T$, i.e. $T ( K ) \subset K$, $T ( K ^ { \prime } ) \subset K ^ { \prime }$.
|
# Shares and Dividends (ICSE Class 10 Maths, Chapter 3)

To start a company, a large amount of capital is needed, and it is generally not possible for one individual to manage the whole amount. One way to raise the money is from the public: the total amount is divided into equal parts called shares. The persons who buy these shares are called shareholders (or stockholders), and the shares held by an individual are also called stock. The profit gained by the company that is distributed among the shareholders is called the dividend; it is calculated on the face value of a share and expressed as a percentage. Preferred shareholders have a first claim on the dividend; once they have been paid, the remaining profit is distributed among the common shareholders.

Key terms:
• The original value of a share, at which the company sells it to investors and which is printed on the share certificate, is called the Nominal Value (NV), Face Value or Printed Value of the share.
• The price at which the share is bought or sold in the market is called the Market Value (MV); it keeps changing from time to time.
• If the market value is more than the face value, the share is said to be above par (at a premium); if it is the same, at par; if it is less, below par (at a discount).

Useful formulas:
1. Nominal value of shares = No. of shares × nominal value per share
2. Annual dividend on shares = (nominal value of shares × rate of dividend) / 100
3. Amount of money invested = No. of shares × market value per share
4. Sum of money obtained by selling shares = No. of shares × selling rate per share
5. Discount = No. of shares × discount per share
6. Cost of stock = No. of stock × rate per stock
7. Rate of return on investment = (annual dividend on shares × 100) / cost of investment

Worked examples:
• Dividend on 60 shares of ₹20 each at 9%: income per share = (9/100) × 20 = ₹9/5, so the total dividend = 60 × 9/5 = ₹108.
• A man receives ₹1,080 as dividend from 9% ₹20 shares: dividend per share = (9/100) × 20 = ₹1.80, so the number of shares = 1,080 / 1.80 = 600.
• Sharad invests ₹45,000 in 10% ₹100 shares and gets ₹3,000 as dividend: nominal value of shares = (3,000 × 100) / 10 = ₹30,000, so he holds 30,000 / 100 = 300 shares, and the market value of each share = 45,000 / 300 = ₹150.

The exercise solutions referenced here (Selina Concise Mathematics Ex 3A–3C, ML Aggarwal Ex 3, MCQs and Chapter Test, S Chand Ex 3B, Frank Ch. 4) all reduce to the formulas above.
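Since the same two formulas recur in nearly every problem, a short Python sketch may help check answers; the function names are ours, purely illustrative:

```python
def annual_dividend(num_shares, face_value, dividend_rate):
    """Annual income = nominal value of holding x rate of dividend / 100."""
    nominal_value = num_shares * face_value
    return nominal_value * dividend_rate / 100

def shares_from_dividend(total_dividend, face_value, dividend_rate):
    """Invert the formula to recover the number of shares held."""
    dividend_per_share = face_value * dividend_rate / 100
    return total_dividend / dividend_per_share

# 60 shares of Rs 20 each at a 9% dividend -> Rs 108
print(annual_dividend(60, 20, 9))
# Rs 1080 received from 9% Rs 20 shares -> 600 shares
print(shares_from_dividend(1080, 20, 9))
```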
|
# 3.5 Transformation of functions
Page 1 / 21
In this section, you will:
• Graph functions using vertical and horizontal shifts.
• Graph functions using reflections about the $x$-axis and the $y$-axis.
• Determine whether a function is even, odd, or neither from its graph.
• Graph functions using compressions and stretches.
• Combine transformations.
We all know that a flat mirror enables us to see an accurate image of ourselves and whatever is behind us. When we tilt the mirror, the images we see may shift horizontally or vertically. But what happens when we bend a flexible mirror? Like a carnival funhouse mirror, it presents us with a distorted image of ourselves, stretched or compressed horizontally or vertically. In a similar way, we can distort or transform mathematical functions to better adapt them to describing objects or processes in the real world. In this section, we will take a look at several kinds of transformations.
## Graphing functions using vertical and horizontal shifts
Often when given a problem, we try to model the scenario using mathematics in the form of words, tables, graphs, and equations. One method we can employ is to adapt the basic graphs of the toolkit functions to build new models for a given scenario. There are systematic ways to alter functions to construct appropriate models for the problems we are trying to solve.
## Identifying vertical shifts
One simple kind of transformation involves shifting the entire graph of a function up, down, right, or left. The simplest shift is a vertical shift, moving the graph up or down, because this transformation involves adding a positive or negative constant to the function. In other words, we add the same constant to the output value of the function regardless of the input. For a function $g(x)=f(x)+k$, the function $f(x)$ is shifted vertically $k$ units. See [link] for an example.
To help you visualize the concept of a vertical shift, consider that $y=f(x)$. Therefore, $f(x)+k$ is equivalent to $y+k$. Every unit of $y$ is replaced by $y+k$, so the y-value increases or decreases depending on the value of $k$. The result is a shift upward or downward.
## Vertical shift
Given a function $f(x)$, a new function $g(x)=f(x)+k$, where $k$ is a constant, is a vertical shift of the function $f(x)$. All the output values change by $k$ units. If $k$ is positive, the graph will shift up. If $k$ is negative, the graph will shift down.
## Adding a constant to a function
To regulate temperature in a green building, airflow vents near the roof open and close throughout the day. [link] shows the area of open vents $V$ (in square feet) throughout the day in hours after midnight, $t$. During the summer, the facilities manager decides to try to better regulate temperature by increasing the amount of open vents by 20 square feet throughout the day and night. Sketch a graph of this new function.
We can sketch a graph of this new function by adding 20 to each of the output values of the original function. This will have the effect of shifting the graph vertically up, as shown in [link] .
Notice that in [link], for each input value, the output value has increased by 20, so if we call the new function $S(t)$, we could write
$S(t)=V(t)+20$
This notation tells us that, for any value of $t$, $S(t)$ can be found by evaluating the function $V$ at the same input and then adding 20 to the result. This defines $S$ as a transformation of the function $V$, in this case a vertical shift up 20 units. Notice that, with a vertical shift, the input values stay the same and only the output values change. See [link].
| $t$ | 0 | 8 | 10 | 17 | 19 | 24 |
| --- | --- | --- | --- | --- | --- | --- |
| $V(t)$ | 0 | 0 | 220 | 220 | 0 | 0 |
| $S(t)$ | 20 | 20 | 240 | 240 | 20 | 20 |
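A quick numerical check of this shift, using the tabulated values above (a sketch; the dictionary encoding of the table is ours, not the original text's):

```python
# Tabulated vent area V(t) from the example: square feet vs. hours after midnight
V = {0: 0, 8: 0, 10: 220, 17: 220, 19: 0, 24: 0}

# Vertical shift S(t) = V(t) + 20: inputs stay the same, every output grows by 20
S = {t: v + 20 for t, v in V.items()}

print(S)  # {0: 20, 8: 20, 10: 240, 17: 240, 19: 20, 24: 20}
```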
|
## Bibliography entry EGA
author: Dieudonné, Jean and Grothendieck, Alexander
title: Éléments de géométrie algébrique
year: 1961–1967
journal: Inst. Hautes Études Sci. Publ. Math.
volume: 4, 8, 11, 17, 20, 24, 28, 32
@ARTICLE{EGA,
AUTHOR = "Dieudonn{\'e}, Jean and Grothendieck, Alexander",
TITLE = "\'{E}l\'ements de g\'eom\'etrie alg\'ebrique",
JOURNAL = "Inst. Hautes \'Etudes Sci. Publ. Math.",
VOLUME = "4, 8, 11, 17, 20, 24, 28, 32",
YEAR = "1961--1967"
}
This item is referenced in 91 tags:
• in Section 10.37: Normal rings, which cites IV, 5.13.5 and 0, 4.1.4 of EGA
• in Lemma 10.103.7, which cites Chapter 0, Proposition 16.5.4 of EGA
• in Lemma 10.157.4: Serre's criterion for normality, which cites IV, Theorem 5.8.6 of EGA
• in Definition 10.161.1, which cites Chapter 0, Definition 23.1.1 of EGA
• in Lemma 10.161.16: Tate, which cites Theorem 23.1.3 of EGA
• in Section 10.163: Ascending properties, which cites IV, Proposition 6.3.1 of EGA
• in Lemma 10.163.1, which cites IV, Proposition 6.3.1 of EGA
• in Lemma 15.45.3, which cites IV, Theorem 18.6.6 and Proposition 18.8.8 of EGA
• in Section 17.1: Introduction
• in Section 18.1: Introduction
• in Section 26.1: Introduction
• in Lemma 26.6.4, which cites II, Err 1, Prop. 1.8.1 of EGA
• in Section 26.10: Immersions of schemes
• in Lemma 26.22.2: Valuative criterion separatedness, which cites II Proposition 7.2.3 of EGA
• in Section 27.1: Introduction
• in Section 27.8: Proj of a graded ring, which cites II, Section 2 of EGA
• in Section 28.1: Introduction
• in Section 28.7: Normal schemes, which cites 0, 4.1.4 of EGA
• in Section 28.13: Japanese and Nagata schemes, which cites IV Corollary 5.11.4 of EGA
• in Definition 28.26.1, which cites II Definition 4.5.3 of EGA
• in Lemma 28.26.2, which cites II Proposition 4.5.6(i) of EGA
• in Lemma 28.26.5, which cites II Proposition 4.5.6(ii) of EGA
• in Section 29.1: Introduction
• in Section 29.7: Scheme theoretic closure and density, which cites IV, Definition 11.10.2 of EGA
• in Lemma 29.11.3, which cites II, Corollary 1.3.2 of EGA
• in Definition 29.20.1, which cites II Definition 6.2.3 of EGA
• in Theorem 29.22.3: Chevalley's Theorem, which cites IV, Theorem 1.8.4 of EGA
• in Lemma 29.23.5, which cites IV, Corollary 1.10.4 of EGA
• in Lemma 29.25.12, which cites IV, Corollaire 2.3.12 of EGA
• in Lemma 29.28.4, which cites IV Theorem 13.1.3 of EGA
• in Section 29.31: Conormal sheaf of an immersion, which cites IV Definition 16.1.2 of EGA
• in Section 29.35: Unramified morphisms
• in Definition 29.37.1, which cites II Definition 4.6.1 of EGA
• in Lemma 29.37.4, which cites II, Proposition 4.6.3 of EGA
• in Lemma 29.37.5, which cites II Corollary 4.6.6 of EGA
• in Lemma 29.37.6, which cites II Proposition 5.1.6 of EGA
• in Lemma 29.38.2, which cites II, Proposition 4.6.2 of EGA
• in Section 29.40: Quasi-projective morphisms
• in Definition 29.40.1, which cites II, Definition 5.3.1 of EGA
• in Lemma 29.40.7, which cites II, Proposition 5.3.4 (i) of EGA
• in Lemma 29.42.1: Valuative criterion for properness, which cites II Theorem 7.3.8 of EGA
• in Section 29.43: Projective morphisms
• in Section 30.1: Introduction
• in Lemma 30.3.1, which cites II, Theorem 5.2.1 (d') and IV (1.7.17) of EGA
• in Lemma 30.3.2, which cites II, Theorem 5.2.1 of EGA
• in Lemma 30.8.1, which cites III Proposition 2.1.12 of EGA
• in Lemma 30.17.1, which cites III Proposition 2.6.1 of EGA
• in Lemma 30.18.1, which cites II Theorem 5.6.1(a) of EGA
• in Proposition 30.19.1, which cites III Theorem 3.2.1 of EGA
• in Lemma 30.20.1, which cites III Cor 3.3.2 of EGA
• in Section 30.24: Grothendieck's existence theorem, I
• in Theorem 30.27.1: Grothendieck's existence theorem, which cites III Theorem 5.1.5 of EGA
• in Section 31.1: Introduction
• in Section 31.2: Associated points, which cites IV Definition 3.1.1 of EGA
• in Section 31.4: Embedded points, which cites IV Definition 3.1.1 of EGA
• in Section 31.18: Relative effective Cartier divisors, which cites IV, Section 21.15 of EGA
• in Section 32.1: Introduction
• in Lemma 32.4.2, which cites IV, Proposition 8.2.9 of EGA
• in Proposition 32.6.1, which cites IV, Proposition 8.14.2 of EGA
• in Section 32.11: Characterizing affine schemes, which cites II 6.7.1 of EGA
• in Section 33.1: Introduction
• in Lemma 33.7.13, which cites IV Corollary 4.5.13.1(i) of EGA
• in Section 33.20: Algebraic schemes, which cites I Definition 6.4.1 of EGA
• in Lemma 35.14.3, which cites IV, 17.7.5 (i) and (ii) of EGA
• in Section 37.1: Introduction
• in Theorem 37.15.1, which cites IV Theorem 11.3.1 of EGA
• in Section 37.18: Flat modules and relative assassins, which cites IV Proposition 12.1.1.5 of EGA
• in Lemma 37.18.2, which cites IV Proposition 12.1.1.5 of EGA
• in Lemma 37.22.7, which cites IV Corollary 12.1.7(iii) of EGA
• in Lemma 37.23.4, which cites IV Proposition 17.16.1 of EGA
• in Lemma 37.43.3: Zariski's Main Theorem, which cites IV Corollary 18.12.13 of EGA
• in Lemma 37.50.3, which cites IV Corollary 9.6.4 of EGA
• in Lemma 37.53.9, which cites III, Proposition 5.5.1 of EGA
• in Theorem 41.15.2: Une equivalence remarquable de catégories, which cites IV, Theorem 18.1.2 of EGA
• in Lemma 51.3.2, which cites Corollary 5.10.9 of EGA
• in Proposition 51.8.7: Kollár, which cites IV, Proposition 7.2.2 of EGA
• in Section 51.15: Improving coherent modules
• in Section 58.29: Affineness of complement of ramification locus, which cites Chapter IV, Section 21.12 of EGA
• in Theorem 59.45.2, which cites IV Theorem 18.1.2 of EGA
• in Section 75.5: Conormal sheaf of an immersion, which cites IV Definition 16.1.2 of EGA
• in Section 86.1: Introduction
• in Section 86.2: Formal schemes à la EGA
• in Section 86.9: Affine formal algebraic spaces
• in Remark 86.9.8
• in Section 86.14: Completion along a closed subset, which cites Chapter I, Section 10.8 of EGA
• in Remark 86.28.1: Universal property restricted power series, which cites Chapter 0, 7.5.3 of EGA
• in Section 87.1: Introduction
• in Section 98.3: The Hom functor, which cites III, Cor 7.7.8 of EGA
• in Section 109.22: Non-quasi-affine variety with quasi-affine normalization, which cites II Remark 6.6.13 of EGA
• in Section 109.41: A formally étale non-flat ring map, which cites 0, Example 19.10.3(i) of EGA
• in Subsection 111.5.11: Theorem on formal functions and Grothendieck's Existence Theorem, which cites III.4.1.5 of EGA
|
# CAMB¶
Synopsis: Managing the CAMB cosmological code
Authors: Jesus Torrado and Antony Lewis
This module imports and manages the CAMB cosmological code. It requires CAMB 1.1.3 or higher.
Note
If you use this cosmological code, please cite it as:
A. Lewis, A. Challinor, A. Lasenby, Efficient computation of CMB anisotropies in closed FRW (arXiv:astro-ph/9911177)
C. Howlett, A. Lewis, A. Hall, A. Challinor, CMB power spectrum parameter degeneracies in the era of precision cosmology (arXiv:1201.3654)
## Usage¶
If you are using a likelihood that requires some observable from CAMB, simply add CAMB to the theory block.
You can specify any parameter that CAMB understands in the params block:
theory:
camb:
extra_args:
[any param that CAMB understands, for FIXED and PRECISION]
params:
[any param that CAMB understands, fixed, sampled or derived]
If you want to use your own version of CAMB, you need to specify its location with a path option inside the camb block. If you do not specify a path, CAMB will be loaded from the automatic-install packages_path folder, if specified, or otherwise imported as a globally-installed Python package. Cobaya will print at initialisation where it is getting CAMB from.
### Modifying CAMB¶
If you modify CAMB and add new variables, make sure that the variables you create are exposed in the Python interface (instructions here). If you follow those instructions you do not need to make any additional modification in Cobaya.
You can use the model wrapper to test your modification by evaluating observables or getting derived quantities at known points in the parameter space (set debug: True to get more detailed information of what exactly is passed to CAMB).
In your CAMB modification, remember that you can raise a CAMBParamRangeError or a CAMBError whenever the computation of any observable would fail, but you do not expect that observable to be compatible with the data (e.g. at the fringes of the parameter space). Whenever such an error is raised during sampling, the likelihood is assumed to be zero, and the run is not interrupted.
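For instance, a minimal sketch of such a test might look as follows. The parameter values, the path and the requested spectrum are illustrative placeholders; the calls shown (get_model, add_requirements, logposterior, provider.get_Cl) follow the standard model-wrapper API, but consult the cobaya documentation for the authoritative usage.

```python
from cobaya.model import get_model

# Illustrative input: a trivial likelihood plus CAMB at a fixed parameter point.
# The path and all numbers below are placeholders.
info = {
    "likelihood": {"one": None},
    "theory": {"camb": {"path": "/path/to/theories/CAMB"}},
    "params": {"H0": 70, "ombh2": 0.0224, "omch2": 0.12,
               "As": 2.1e-9, "ns": 0.965, "tau": 0.055},
    "debug": True,  # print exactly what is passed to CAMB
}

model = get_model(info)
model.add_requirements({"Cl": {"tt": 2500}})  # request lensed TT up to ell=2500
model.logposterior({})                        # evaluate at the fixed point
cls = model.provider.get_Cl(ell_factor=True)  # l(l+1)C_l/2pi, FIRAS muK^2 units
print(cls["tt"][:5])
```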
## Installation¶
### Pre-requisites¶
cobaya calls CAMB using its Python interface, which requires that you compile CAMB using Intel’s ifort compiler or the GNU gfortran compiler version 6.4 or later. To check whether you have the latter, type gfortran --version in the shell; the first line should look like
GNU Fortran ([your OS version]) [gfortran version] [release date]
Check that [gfortran's version] is at least 6.4. If you get an error instead, you need to install gfortran (contact your local IT service).
CAMB comes with binaries pre-built for Windows, so if you don’t need to modify the CAMB source code, no Fortran compiler is needed.
If you are using Anaconda you can also install a pre-compiled CAMB package from conda forge using
conda install -c conda-forge camb
### Automatic installation¶
If you do not plan to modify CAMB, the easiest way to install it is using the automatic installation script. Just make sure that theory: camb: appears in one of the files passed as arguments to the installation script.
### Manual installation (or using your own version)¶
If you are planning to modify CAMB or use an already modified version, you should not use the automatic installation script. Use the installation method that best adapts to your needs:
• [Recommended for staying up-to-date] To install CAMB locally and keep it up-to-date, clone the CAMB repository in Github in some folder of your choice, say /path/to/theories/CAMB:
$ cd /path/to/theories
$ git clone --recursive https://github.com/cmbant/CAMB.git
$ cd CAMB
$ python setup.py build
To update to the last changes in CAMB (master), run git pull from CAMB’s folder and re-build using the last command. If you do not want to use multiple versions of CAMB, you can also make your local installation available to python generally by installing it using
$ python -m pip install -e /path/to/CAMB
• [Recommended for modifying CAMB] First, fork the CAMB repository in Github (follow these instructions) and then follow the same steps as above, substituting the second one with:
$ git clone --recursive https://[YourGithubUser]@github.com/[YourGithubUser]/CAMB.git
• To use your own version, assuming it’s placed under /path/to/theories/CAMB, just make sure it is compiled (and that the version on top of which you based your modifications is recent enough to have the Python interface implemented).
In the cases above, you must specify the path to your CAMB installation in the input block for CAMB (otherwise a system-wide CAMB may be used instead):
theory:
camb:
path: /path/to/theories/CAMB
Note
In any of these methods, if you intend to switch between different versions or modifications of CAMB, you should not install CAMB as a python package using python setup.py install, as the official instructions suggest.
## camb class¶
class theories.camb.camb(info=mappingproxy({}), name=None, timing=None, packages_path=None, initialize=True, standalone=True)
CAMB cosmological Boltzmann code [Lewis:1999bs, Howlett:2012mh].
initialize()
Importing CAMB from the correct path, if given.
initialize_with_params()
Additional initialization after requirements called and input_params and output_params have been assigned (but provider and assigned requirements not yet set).
get_can_support_params()
Get a list of parameters supported by this component, can be used to support parameters that don’t explicitly appear in the .yaml or class params attribute or are otherwise explicitly supported (e.g. via requirements)
Returns: iterable of names of parameters
get_allow_agnostic()
Whether it is allowed to pass all unassigned input parameters to this component (True) or whether parameters must be explicitly specified (False).
Returns: True or False
must_provide(**requirements)
Specifies the quantities that this Boltzmann code is requested to compute.
Typical requisites in Cosmology (as keywords, case insensitive; a sketch of such a request follows the list below):
• Cl={...}: CMB lensed power spectra, as a dictionary {spectrum:l_max}, where the possible spectra are combinations of “t”, “e”, “b” and “p” (lensing potential). Get with get_Cl().
• [BETA: CAMB only; notation may change!] source_Cl={...}: $$C_\ell$$ of given sources with given windows, e.g.: source_name: {"function": "spline"|"gaussian", [source_args]}; for now, [source_args] follow the notation of CAMBSources. It can also take lmax: [int], limber: True if the Limber approximation is desired, and non_linear: True if non-linear contributions are requested. Get with get_source_Cl().
• Pk_interpolator={...}: Matter power spectrum interpolator in $$(z, k)$$. Takes "z": [list_of_evaluated_redshifts], "k_max": [k_max], "extrap_kmax": [max_k_max_extrapolated], "nonlinear": [True|False], "vars_pairs": [["delta_tot", "delta_tot"], ["Weyl", "Weyl"], [...]]}. Non-linear contributions are included by default. Note that the nonlinear setting determines whether nonlinear corrections are calculated; the get_Pk_interpolator function also has a nonlinear argument to specify if you want the linear or nonlinear spectrum returned (to have both linear and non-linear spectra available request a tuple (False,True) for the nonlinear argument). All k values should be in units of 1/Mpc.
• Pk_grid={...}: similar to Pk_interpolator except that rather than returning a bicubic spline object it returns the raw power spectrum grid as a (k, z, PK) set of arrays.
• sigma_R={...}: RMS linear fluctuation in spheres of radius R at redshifts z. Takes "z": [list_of_evaluated_redshifts], "k_max": [k_max], "vars_pairs": [["delta_tot", "delta_tot"], [...]], "R": [list_of_evaluated_R]. Note that R is in Mpc, not h^{-1} Mpc.
• Hubble={'z': [z_1, ...]}: Hubble rate at the requested redshifts. Get it with get_Hubble().
• angular_diameter_distance={'z': [z_1, ...]}: Physical angular diameter distance to the redshifts requested. Get it with get_angular_diameter_distance().
• comoving_radial_distance={'z': [z_1, ...]}: Comoving radial distance from us to the redshifts requested. Get it with get_comoving_radial_distance().
• sigma8_z={'z': [z_1, ...]}: Amplitude of rms fluctuations $$\sigma_8$$ at the redshifts requested. Get it with get_sigma8().
• fsigma8={'z': [z_1, ...]}: Structure growth rate $$f\sigma_8$$ at the redshifts requested. Get it with get_fsigma8().
• k_max=[...]: Fixes the maximum comoving wavenumber considered.
• Other derived parameters that are not included in the input but whose value the likelihood may need.
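As a sketch of the shapes these requests take in practice, a caller might pass a dictionary like the following, e.g. via the model wrapper's add_requirements(); every number below is a placeholder, not a recommended value:

```python
# Hypothetical requirements dictionary; all values here are placeholders.
requirements = {
    "Cl": {"tt": 2500, "ee": 2500, "te": 2500},
    "Pk_interpolator": {
        "z": [0.0, 0.5, 1.0],
        "k_max": 10.0,  # in 1/Mpc
        "nonlinear": True,
        "vars_pairs": [["delta_tot", "delta_tot"]],
    },
    "Hubble": {"z": [0.0, 0.5, 1.0]},
    "angular_diameter_distance": {"z": [0.5, 1.0]},
}
```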
calculate(state, want_derived=True, **params_values_dict)
Do the actual calculation and store results in state dict
Parameters:
• state – dictionary to store results
• want_derived – whether to set state[‘derived’] derived parameters
• params_values_dict – parameter values
Returns: None or True if success, False for fail
get_Cl(ell_factor=False, units='FIRASmuK2')
Returns a dictionary of lensed CMB power spectra and the lensing potential pp power spectrum.
Set the units with the keyword units=number|'muK2'|'K2'|'FIRASmuK2'|'FIRASK2' (default: ‘FIRASmuK2’ gives FIRAS-calibrated microKelvin^2, except for the lensing potential power spectrum, which is always unitless). Note the muK2 and K2 options use the model’s CMB temperature; experimental data are usually calibrated to the FIRAS measurement which is a fixed temperature. The default FIRASmuK2 takes CMB C_l scaled by 2.7255e6^2 (to get result in muK^2).
If ell_factor=True (default: False), multiplies the spectra by $$\ell(\ell+1)/(2\pi)$$ (or by $$\ell^2(\ell+1)^2/(2\pi)$$ in the case of the lensing potential pp spectrum).
get_sigma8_z(z)
Present day linear theory root-mean-square amplitude of the matter fluctuation spectrum averaged in spheres of radius 8 h^{−1} Mpc.
The redshifts must be a subset of those requested when must_provide() was called.
get_fsigma8(z)
Structure growth rate $$f\sigma_8$$, as defined in eq. 33 of Planck 2015 results. XIII. Cosmological parameters, at the given redshifts.
The redshifts must be a subset of those requested when must_provide() was called.
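For example (same assumed provider object; the redshifts must have been declared in the requirements):

```python
# Sketch: growth quantities at previously requested redshifts.
sigma8 = provider.get_sigma8_z([0.5, 1.0])
fsig8 = provider.get_fsigma8([0.5, 1.0])
```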
get_source_Cl()
Returns a dict of power spectra for the computed sources, with keys being tuples of sources ([source1], [source2]), plus an additional key ell containing the multipoles.
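For instance (the source names here are hypothetical; keys follow the convention just stated):

```python
# Sketch: keys are tuples of the requested source names, plus "ell".
source_cls = provider.get_source_Cl()
ells = source_cls["ell"]
cl_12 = source_cls[("source1", "source2")]  # hypothetical source names
```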
get_CAMBdata()
Get the CAMB result object (must have been requested as a requirement).
Returns: CAMB’s CAMBdata result instance for the current parameters
get_can_provide_params()
Get a list of derived parameters that this component can calculate. The default implementation returns the result based on the params attribute set via the .yaml file or class params (with derived:True for derived parameters).
Returns: iterable of parameter names
get_version()
Get version information for this component.
Returns: string or dict of values or None
get_helper_theories()
Transfer functions are computed separately by camb.transfers; this class then uses those transfer functions to calculate the power spectra (using A_s, n_s, etc.).
|
# Thread: Can someone help me? Percentage question
1. ## Can someone help me? Percentage question
A brief explanation would be much appreciated.
Ok here is the scenario:
Imagine a roulette wheel with 18 red slots and 18 black slots. Each spin of the wheel has a 50% chance of hitting red, and a 50% chance of hitting black. What is the percentage of hitting red over an infinite number of spins?
Thank you in advance
2. Originally Posted by crbrown1
A brief explanation would be much appreciated.
Ok here is the scenario:
Imagine a roulette wheel with 18 red slots and 18 black slots. Each spin of the wheel has a 50% chance of hitting red, and a 50% chance of hitting black. What is the percentage of hitting red over an infinite number of spins?
Thank you in advance
Unless I am misunderstanding you, you want an infinite run of trials and are asking what percentage of them come up red? Then your percentage is 50%. That's how the 50% figure is (theoretically) obtained in the first place.
If you want an infinite number of reds in a row, the probability is 0%.
-Dan
3. ## That's what I thought.
I told my friend that it was virtually 0%, and thus 0% in mathematics. Thanks for making me look good.
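A quick simulation makes both statements concrete (a minimal sketch; the numbers are illustrative). The long-run fraction of reds tends to 50%, while the chance of n reds in a row is (1/2)^n, which goes to 0 as n grows:

```python
# Sketch: long-run fraction of reds vs. probability of a run of reds.
import random

spins = 1_000_000
reds = sum(random.random() < 0.5 for _ in range(spins))
print(f"fraction red after {spins} spins: {reds / spins:.4f}")  # ~0.5

for n in (10, 50, 100):
    print(f"P({n} reds in a row) = {0.5 ** n:.3e}")  # -> 0 as n grows
```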
|
# 2005 AIME II Problems/Problem 11
## Problem
A semicircle with diameter $d$ is contained in a square whose sides have length 8. Given that the maximum value of $d$ is $m - \sqrt{n}$, find $m+n$.
|
# Derivative of e^(x^x) with respect to x
1. Aug 5, 2011
### labinojha
1. The problem statement, all variables and given/known data
Derivative of e^(x^x) with respect to x
2. Relevant equations
3. The attempt at a solution
Computed using WolframAlpha. I have attached the image. Would anyone explain to me the part I have highlighted with the blue box?
2. Aug 5, 2011
### BruceW
imagine that at first you don't realise that both of the variables are x, so you have an expression like this: $u^v$. Then he calculates the total derivative of this expression in a general way, and at the end he replaces u by x and v by x.
Edit: ha, I'm talking about wolfram as a 'he'. Also, there is another way to solve this.
Last edited: Aug 5, 2011
3. Aug 5, 2011
### tiny-tim
hi labinojha!
this is the partial derivative version of the chain rule (https://www.physicsforums.com/library.php?do=view_item&itemid=353)
if $a$ depends on $b_1,\cdots b_n$, and $b_1,\cdots b_n$ depend only on $c$, then:
$$\frac{da}{dc}\ =\ \frac{\partial a}{\partial b_1}\frac{db_1}{dc}\ +\ \cdots\ +\ \frac{\partial a}{\partial b_n}\frac{db_n}{dc}\ =\ (\mathbf{\nabla_b}\,a)\cdot \frac{d\mathbf{b}}{dc}$$
in your case, a is $x^x$, b1 and b2 are u and v,
and so a is a function a(u,v) of two variables, and we need to apply the chain rule to each variable separately
(btw, easier would be to say $x^x = e^{x\ln(x)}$)
Last edited by a moderator: Apr 26, 2017
4. Aug 5, 2011
### labinojha
$$\frac{d}{dx}\,u^{v}\ =\ \frac{\partial\left(u^{v}\right)}{\partial u}\,\frac{du}{dx}\ +\ \frac{\partial\left(u^{v}\right)}{\partial v}\,\frac{dv}{dx}$$
Can I get a reason, or possibly a derivation, for this?
Hi tiny-tim!
http://www.ucl.ac.uk/Mathematics/geomath/level2/pdiff/pd10.html
Last edited: Aug 5, 2011
5. Aug 5, 2011
### labinojha
Hi Bruce!
The other way I used to do it was to take the natural logarithm of both sides (with one side of the equation being y, to treat it as the function) twice in a row and then differentiate both sides.
Is this what you were talking about ? :)
Last edited: Aug 5, 2011
6. Aug 5, 2011
### BruceW
I was talking about the thing tiny-tim said:
$$x^x = e^{x\ln(x)}$$
But yes, the other way you do it is right as well. I guess there are several equivalent ways to do this problem.
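For completeness, this is the computation both hints lead to:
$$\frac{d}{dx}\,x^x\ =\ \frac{d}{dx}\,e^{x\ln(x)}\ =\ e^{x\ln(x)}\left(\ln(x)+1\right)\ =\ x^x\left(\ln(x)+1\right)$$
and therefore
$$\frac{d}{dx}\,e^{x^x}\ =\ e^{x^x}\,\frac{d}{dx}\,x^x\ =\ e^{x^x}\,x^x\left(\ln(x)+1\right).$$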
|
# UVa 10079
## Summary
Given the number of straight cuts you can make on a pizza, find the maximum number of pieces in which the pizza can be divided.
## Explanation
If we make no cuts at all, we still have the whole pizza to eat. Hence ${\displaystyle A(0)=1}$. Each new cut can possibly intersect all the previous cuts. This means that after ${\displaystyle n-1}$ lines have been drawn, the n-th line can intersect each of them, so it has ${\displaystyle n-1}$ intersections and hence creates ${\displaystyle n}$ new regions. The recurrence we get is ${\displaystyle A(n)=A(n-1)+n;\ A(0)=1}$. Solving this recurrence gives ${\displaystyle A(n)=n+(n-1)+(n-2)+\cdots+2+1+A(0)}$ and hence ${\displaystyle A(n)=n(n+1)/2+1}$.
## Implementation
• Use 64-bit integers (long long in C++) to avoid overflow.
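The formula above translates directly into code. Here is a minimal sketch in Python, whose arbitrary-precision integers sidestep the overflow concern (in C++ you would use long long as noted); it assumes the input is a sequence of integers terminated by a negative value, matching the sample below:

```python
# Sketch: maximum number of pizza pieces for n straight cuts,
# using A(n) = n(n+1)/2 + 1 derived above.
import sys

for token in sys.stdin.read().split():
    n = int(token)
    if n < 0:  # a negative value is assumed to terminate the input
        break
    print(n * (n + 1) // 2 + 1)
```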
## Input
```
0
5
10
210000000
-100
```
## Output
```1
16
56
22050000105000001
```
|
## COVER ARTICLE
The YBa2Cu3O7–δ (YBCO) step-edge Josephson junction on MgO substrate has recently been shown to have important applications in making advanced high-transition-temperature (high-TC) superconducting devices such as the high-sensitivity superconducting quantum interference device (SQUID), the superconducting quantum interference filter, and the THz detector. In this paper, we investigate the fabrication and transport properties of the YBCO step-edge junction on MgO substrate. By optimizing the two-stage ion beam etching process, steps on MgO (100) substrates are prepared with an edge angle θ of about 34°. The YBCO step-edge junctions are then fabricated by growing the YBCO thin films with a pulsed laser deposition technique and subsequent traditional photolithography. The resistive transition of the junction shows a typical foot structure which is well described by the Ambegaokar-Halperin theory of thermally-activated phase slippage for overdamped Josephson junctions. The voltage-current curves with temperature dropping down to 77 K exhibit resistively shunted junction behavior, and the Josephson critical current density JC is shown to follow the $(T_{\rm C}-T)^2$ dependence. At 77 K, the JC of the junction reaches 1.4 × 10^5 A/cm^2, significantly higher than the range of 10^3–10^4 A/cm^2 as presented by other investigators for YBCO step-edge junctions on MgO substrate with comparable θ of 35°–45°. This indicates a rather strong Josephson coupling of the junction, and by invoking the results of YBCO bicrystal junctions showing similar values of JC, it is tentatively proposed that the presently fabricated junction might be described as an S-s′-S junction, with s′ denoting the superconducting region of depressed TC in the vicinity of the step edge, or as an S-N-S junction, with N denoting a very thin non-superconducting layer. By incorporating the MgO-based YBCO step-edge junction, a high-TC radio frequency (RF) SQUID is made. The device shows a decent voltage-flux curve and a magnetic flux sensitivity of 250 $\mu\Phi_0/{\rm Hz}^{1/2}$ at 1 kHz and 77 K, comparable to the values reported in the literature. To further improve the RF SQUID performance, efforts could be devoted to optimizing junction parameters such as the junction JC. By using the YBCO step-edge junction on MgO substrate, a high-TC direct current SQUID could also be developed, as reported recently by other investigators, to demonstrate the potential of the MgO-based step-edge junction in making such devices with superior magnetic flux sensitivity.
Gan Zi-Zhao. Acta Physica Sinica, 2021, 70(3): 037401.
## EDITOR'S SUGGESTION
2021, 70 (3): 034206. doi: 10.7498/aps.70.20201683
Widely employed in fundamental research, industrial processing, and biomedicine, femtosecond fiber lasers exhibit many attractive features such as high average power, good heat dissipation, excellent beam quality, and compact footprint. Coherent combining technology can effectively suppress the detrimental nonlinear and thermal effects in the fiber amplifiers, and therefore further increase the output pulse energy and average power of femtosecond fiber lasers. In this article, we mainly discuss different coherent combining techniques in high-power ultrafast Yb-fiber laser systems and the relevant phase-locking methods. We believe that the advent of new coherent combining techniques will further improve the average power and pulse energy of femtosecond fiber laser systems, thereby opening up some new research areas.
## EDITOR'S SUGGESTION
2021, 70 (3): 034204. doi: 10.7498/aps.70.20200550
The development of silicon photonics provides a method of implementing high reliability and high precision for new micro-nano optical functional devices and system-on-chips. The asymmetric Fano resonance phenomenon caused by the mutual coupling of optical resonant cavities is extensively studied. The spectrum of Fano resonance has an asymmetric and sharp slope near the resonance wavelength. The wavelength range for tuning the transmission from zero to one is much narrow in Fano lineshape, therefore improving the figure of merits of power consumption, sensing sensitivity, and extinction ratio. The mechanism can significantly improve silicon-based optical switches, detectors, sensors, and optical non-reciprocal all-optical signal processing. Therefore, the mechanism and method of generating the Fano resonance, the applications of silicon-based photonic technology, and the physical meaning of the Fano formula’s parameters are discussed in detail. It can be concluded that the primary condition for creating the Fano resonance is that the dual-cavity coupling is a weak coupling, and the detuning of resonance frequency of the two cavities partly determines Fano resonance lineshapes. Furthermore, the electromagnetically induced transparency is generated when the frequency detuning is zero. The methods of generating Fano resonance by using different types of devices in silicon photonics (besides the two-dimensional photonic crystals) and the corresponding evolutions of Fano resonance are introduced and categorized, including simple photonic crystal nanobeam, micro-ring resonator cavity without sacrificing the compact footprint, micro-ring resonator coupling with other structures (mainly double micro-ring resonators), adjustable Mach-Zehnder interferometer, and others such as slit waveguide and self-coupling waveguide. Then, we explain the all-optical signal processing based on the Fano resonance phenomenon, and also discuss the differences among the design concepts of Fano resonance in optimizing optical switches, modulators, optical sensing, and optical non-reciprocity. Finally, the future development direction is discussed from the perspective of improving Fano resonance parameters. The topology structure can improve the robustness of the Fano resonance spectrum; the bound states in continuous mode can increase the slope of Fano spectrum; the Fano resonance can expand the bandwidth of resonance spectrum by combining other material systems besides silicon photonics; the multi-mode Fano resonances can enhance the capability of the spectral multiplexing; the reverse design methods can improve the performance of the device. We believe that this review can provide an excellent reference for researchers who are studying the silicon photonic devices.
###### GENERAL
2021, 70 (3): 030301. doi: 10.7498/aps.70.20200972
Quantum communication has the advantages of wide coverage and security, and is currently a hot research topic in the field of communication. In the process of free space quantum communication, quantum signals need transmitting at a certain height above the surface. Various environmental factors in free space, such as snowfall, sandstorms, rainfall, haze and floating dust, will inevitably affect quantum communication performance. However, so far, the influence of snowfall on the performance of quantum channels in free space near the surface has not been investigated. Thus, according to the intensity of snowfall, the snowfall is divided into four levels: light snow (${S_{\rm{1}}}$), medium snow (${S_{\rm{2}}}$), heavy snow (${S_{\rm{3}}}$) and blizzard (${S_{\rm{4}}}$). When the snow is falling in the air, it has an energy absorption effect on the light quantum signal, which is called the extinction effect. The different intensities of snow extinction have different effects on the free space optical quantum signal. In this paper, first, a mathematical model for the extinction effects on the optical quantum signal at different levels of snowfall is presented; then the quantitative relationship between snowfall and free space extinction attenuation, as well as the relationship between snowfall and the channel limit survival function, is established; channel capacities under different snowfall intensities and the quantum bit error rate are also given. Finally, the mathematical models of snowfall intensity, transmission distance and link attenuation, amplitude damping channel capacity, channel survival function and channel error rate are established.
Simulation results show that when the snowfall intensity is 2.1 mm/d (${S_{\rm{1}}}$) and the transmission distance is 2.2 km, the communication link attenuation is 0.0362, the channel capacity is 0.7745, the channel survival function is 0.2329, and the channel error rate is 0.0105. When the snowfall intensity is 3.8 mm/d (${S_{\rm{2}}}$) and the transmission distance is 3.5 km, the communication link attenuation is 0.1326, the channel capacity is 0.4922, the channel survival function is 0.2099, and the channel error rate is 0.019. Thus, different snowfall intensities have different influences on the performance of free space quantum communication. Therefore, in practical applications, the communication parameters should be adjusted adaptively based on the snowfall intensity to improve the reliability of free space quantum communication.
2021, 70 (3): 030401. doi: 10.7498/aps.70.20201286
General Gauss-Bonnet gravity with a cosmological constant allows two anti-de Sitter (AdS) spacetimes to be taken as its vacuum solutions. It is found that there is a critical point in the parameter space where the two AdS vacuums coalesce into one, which is very different from the general Gauss-Bonnet gravity. Susskind’s team proposed a Complexity/Action duality based on AdS/CFT duality, which provides a new method of studying the complexity of black holes. Fan and Liang (Fan Z Y, Liang H Z 2019 Phys. Rev. D 100 086016) gave the formula of the evolution of complexity for general higher derivative gravity, and discussed the complexity evolution of the neutral planar Gauss-Bonnet-AdS black holes in detail by the numerical method. With the method of studying the complexity of general higher derivative gravity proposed by Fan and Liang (2019), we investigate the complexity evolution of critical neutral Gauss-Bonnet-AdS black holes, and compare these results with the results of the general neutral Gauss-Bonnet-AdS black holes, showing that the overall regularities of the evolution of the complexity of these two objects are consistent, and their main difference lies in the dimensionless critical time. As for the five-dimensional critical neutral Gauss-Bonnet-AdS black holes, when the event horizon of the black holes is flat or spherical, the dimensionless critical times of black holes with different sizes are identical, all reaching their minimum values. While in the higher dimensional cases, the differences in dimensionless critical time among spherically symmetric critical neutral Gauss-Bonnet-AdS black holes with different sizes are obviously less than those of general ones. These differences are probably related to the criticality of the neutral Gauss-Bonnet-AdS black holes.
## EDITOR'S SUGGESTION
2021, 70 (3): 030601. doi: 10.7498/aps.70.20201204
Transportable optical clocks have broad applications in scientific research and engineering. Accurate evaluation of systematic uncertainty for the transportable 87Sr optical lattice clock is a prerequisite for the practical realization of the optical clock. Four main frequency shifts of the 87Sr optical lattice clock are measured, i.e. blackbody-radiation (BBR) shift, collision shift, lattice alternating current (AC) Stark shift, and second-order Zeeman shift. Firstly, by measuring the temperature distribution on the surface of the magneto-optical trap cavity and analyzing the influence of different heat sources on the atomic cloud, the BBR shift correction is measured to be 50.4 × 10^–16 with an uncertainty of 5.1 × 10^–17. Secondly, the time-interleaved self-comparison method is used under high and low atom density conditions to evaluate the collision shift of the system. The correction of the collision shift is 4.7 × 10^–16 with an uncertainty of 5.6 × 10^–17. Thirdly, the lattice AC Stark shift is evaluated by the time-interleaved self-comparison method. By measuring the dependence of the lattice AC Stark shift on the wavelength of the lattice light, the magic wavelength is measured to be 368554393(78) MHz. As a result, the lattice AC Stark shift correction is 3.0 × 10^–16 with an uncertainty of 2.2 × 10^–16. Finally, using the time-interleaved self-comparison technology, the second-order Zeeman frequency shift is evaluated by measuring the fluctuation of the difference in center frequency between the $m_{\rm F} = +9/2 \to m_{\rm F} = +9/2$ polarization spectrum and the $m_{\rm F} = -9/2 \to m_{\rm F} = -9/2$ polarization spectrum. The correction of the second-order Zeeman shift is calculated to be 0.7 × 10^–16, and the corresponding uncertainty is 0.2 × 10^–17. Experimental results indicate that the frequency shift correction due to the blackbody radiation is the largest, while the uncertainty caused by the lattice AC Stark effect is the largest among the evaluated shifts. The systematic shift is 58.8 × 10^–16, and the total uncertainty is 2.3 × 10^–16.
In the next work, the magneto-optical trap cavity will be placed in a blackbody-radiation cavity to reduce the blackbody-radiation shift. The uncertainty of the collision shift will be reduced by increasing the beam waist of the lattice and reducing the potential well depth of the lattice, which will reduce the density of atoms. What is more, the light source for the optical lattice after spectral filtering will be measured by an optical frequency comb locked to the hydrogen clock signal to reduce the uncertainty of the lattice AC Stark frequency shift. The systematic uncertainty is expected to be on the order of 10^–17. The evaluation of the systematic uncertainty for the transportable 87Sr optical lattice clock lays the foundation for the practical application.
2021, 70 (3): 030701. doi: 10.7498/aps.70.20201085
The hybrid composite materials are a new type of composite material. Due to their complex microscopic structures, it is very challenging to predict the equivalent thermal conductivities of hybrid composites. In this paper, an innovative hybrid wavelet-based learning method assisted multiscale analysis is developed to predict the effective thermal conductivities of hybrid composite materials with heterogeneous conductivity by the asymptotic homogenization method, wavelet transform method, and machine learning method. This innovative approach mainly includes two parts: off-line multi-scale modeling and on-line machine learning. Firstly, the material database about thermal transfer performance of hybrid composites is established by the asymptotic homogenization method and off-line multi-scale modeling, and then the off-line material database is preprocessed by the wavelet transform method. Secondly, the artificial neural network and support vector regression method are employed to establish the on-line machine learning model for predicting the equivalent heat conduction properties of hybrid composites. Finally, the effectiveness of the proposed hybrid wavelet-based learning method is verified by numerical experiments on the periodic and random hybrid composites. The numerical results show that the hybrid wavelet-based artificial neural network method owns the optimal capability of parameter prediction and anti-noise. Furthermore, it should be emphasized that the hybrid wavelet-based learning method can not only extract the important features of off-line material database for random hybrid composites with high-dimensional large-scale data features, but also significantly reduce the quantity of input data for ensuring the successful on-line supervised learning and improve the training efficiency and anti-noise performance of the machine learning model. The established hybrid wavelet-based learning method in this paper can not only be used to evaluate the equivalent thermal conductivities of hybrid composite materials, but also further extend to the predicting of the equivalent physical and mechanical properties of composite materials.
###### ATOMIC AND MOLECULAR PHYSICS
2021, 70 (3): 033101. doi: 10.7498/aps.70.20201413
Potential energy curves (PECs), permanent dipole moments (PDMs) and transition dipole moments (TDMs) of five Λ-S states of the SeH anion are calculated by the MRCI + Q method with the ACVQZ-DK basis set. The core-valence corrections, Davidson corrections, scalar relativistic corrections, and spin-orbit coupling (SOC) effects are also considered. In the CASSCF step, Se(1s2s2p3s3p) shells are put into the frozen orbitals, which are not optimized.
Six molecular orbitals are chosen as active space, including H(1s) and Se(4s4p5s) shells, and eight electrons are distributed in a (4, 1, 1, 0) active space, which is referred to as CAS (8, 6), and the Se(3d) shell is selected as a closed-shell, which keeps doubly occupation. In the MRCI step, the remaining Se(3d) shell is used for core-valence calculations of SeH anion. The SOC effects are taken into account in the one- and two- electron Breit-Pauli operators.The b3Σ+ state is a repulsive state. Other excited states are bound, and all states possess two potential wells. The ${{\rm{b}}^{{3}}}\Sigma _{{0^ - }}^ +$ and ${{\rm{b}}^3}\Sigma _{{1}}^ +$ both turn into bound states when the SOC effect is considered. All spectroscopic parameters of Λ-S states and Ω states are reported for the first time. The TDMs of the ${{\rm{A}}^{{1}}}{\Pi _{{1}}} \leftrightarrow {{\rm{X}}^{{1}}}\Sigma _{{0^ + }}^ +$, ${{\rm{a}}^{{3}}}{\Pi _{{1}}} \leftrightarrow {{\rm{X}}^1}\Sigma _{{0^ + }}^ +$, ${{\rm{a}}^{{3}}}{\Pi _{{{{0}}^{{ + }}}}} \leftrightarrow {{\rm{X}}^1}\Sigma _{{0^ + }}^ +$, ${{\rm{A}}^{{1}}}{\Pi _{{1}}} \leftrightarrow {{\rm{a}}^{{3}}}{\Pi _{{1}}}$, and ${{\rm{A}}^{{1}}}{\Pi _{{1}}} \leftrightarrow {{\rm{a}}^{{3}}}{\Pi _{{{{0}}^{{ + }}}}}$ transitions are also calculated. The TDMs of the ${{\rm{A}}^{{1}}}{\Pi _{{1}}} \leftrightarrow {{\rm{X}}^{{1}}}\Sigma _{{0^ + }}^ +$ and ${{\rm{a}}^{{3}}}{\Pi _{{1}}} \leftrightarrow {{\rm{X}}^{{1}}}\Sigma _{{0^ + }}^ +$ transitions are large in the Franck-Condon region, which are about –2.05 Debye (D) and 1.45 D at Re. Notably, the TDMs of the ${{\rm{a}}^3}{\Pi _{{{{0}}^{{ + }}}}} \leftrightarrow {{\rm{X}}^1}\Sigma _{{0^ + }}^ +$ transition cannot be ignored. The value of TDM at Re equals –0.15 D.Based on the accurately PECs and PDMs, the values of Franck-Condon factor fυυ, vibrational branching ratio Rυυ and radiative coefficient of the ${{\rm{a}}^{{3}}}{\Pi _{{1}}} \leftrightarrow {{\rm{X}}^{{1}}}\Sigma _{{0^ + }}^ +$, ${{\rm{a}}^{{3}}}{{{\Pi }}_{{{{0}}^{{ + }}}}} \leftrightarrow {{\rm{X}}^{{1}}}{{\Sigma }}_{{0^ + }}^ +$, and ${{\rm{A}}^{{1}}}{\Pi _{{1}}} \leftrightarrow {{\rm{X}}^{{1}}}\Sigma _{{0^ + }}^ +$ transitions are also calculated. Highly diagonally distributed Franck-Condon factor f00 and the values of vibrational branching ratio R00 of the ${{\rm{a}}^{{3}}}{\Pi _{{1}}}(\upsilon ') \leftrightarrow {{\rm{X}}^1}\Sigma _{{0^ + }}^ + (\upsilon '')$, ${{\rm{a}}^{{3}}}{\Pi _{{0^ + }}}(\upsilon ') \leftrightarrow {{\rm{X}}^1}\Sigma _{{0^ + }}^ + (\upsilon '')$, and ${{\rm{A}}^1}{\Pi _1}(\upsilon ') \leftrightarrow {{\rm{X}}^1}\Sigma _{{0^ + }}^ + (\upsilon '')$ transitions are obtained, respectively. Spontaneous radiation lifetimes of the ${{\rm{a}}^3}{\Pi _{{1}}}$, ${{\rm{a}}^3}{\Pi _{{{{0}}^{{ + }}}}}$, and ${{\rm{A}}^1}{\Pi _{{1}}}$ excited states are all short for rapid laser cooling. The influences of intervening states of the ${{\rm{A}}^1}{\Pi _1}(\upsilon ') \leftrightarrow {{\rm{X}}^1}\Sigma _{{0^ + }}^ + (\upsilon '')$ transition can be ignored. The proposed cooling wavelengths using the ${{\rm{a}}^3}{\Pi _{{1}}}(\upsilon ') \leftrightarrow {{\rm{X}}^{{1}}}\Sigma _{{0^ + }}^ + (\upsilon '')$, ${{\rm{a}}^{{3}}}{\Pi _{{0^ + }}}(\upsilon ') \leftrightarrow {{\rm{X}}^1}\Sigma _{{0^ + }}^ + (\upsilon '')$, and ${{\rm{A}}^1}{\Pi _1}(\upsilon ') \leftrightarrow {{\rm{X}}^1}\Sigma _{{0^ + }}^ + (\upsilon '')$ transitions are all in the visible region.
2021, 70 (3): 033102. doi: 10.7498/aps.70.20201364
The point defect of two-dimensional hexagonal boron nitride (hBN) has recently been discovered to achieve single photon emission at room temperature, and it has become a research hotspot. Despite its important fundamental and applied research significance, the origin of the atomic structure of luminescence defects in hBN is still controversial. In this paper, first-principle calculations based on density functional theory are used to study a defect (CN)3VB in the hexagonal boron nitride monolayer (hBN) where three N atoms near the B vacancy are replaced by C atoms. At the B vacancy of hBN, the three N atoms each carry an in-plane dangling bond and the corresponding unpaired electron, and the unpaired electron can be eliminated by C substitution. We systematically study the geometric structure, electronic structure and optical properties of (CN)3VB defects, analyze the thermodynamic stability of defects through the calculation of the atomic structure, formation energy, and charge state of the defect, and analyze the position in the band gap and its atomic orbital contribution of defect state through energy band structure and wave function. We also analyze its optical properties through dielectric function and absorption coefficient, and predict its luminous photon energy. The results show that the defect can change from a symmetric metastable state to an asymmetric ground state structure with three C atoms connected together through atomic structure relaxation. The formation energy of asymmetric (CN)3VB is 7.94 eV, which is 3.72 eV lower than that of symmetric one. The formation of defects introduces some local defect states contributed by defect dangling σ bonds and reconstructed π bonds in hBN. The defects have valence states between –2 and +2, and the thermodynamic transition energy level of asymmetric (CN)3VB is higher than that of symmetric (CN)3VB. In the transition from the metastable state to the ground state, these defect states can redshift the light absorption boundary of hBN, enhance the absorption intensity of visible light by hBN, and cause internal optical transitions. Among them, there is a visible light transition with an energy threshold around 2.58 eV in the asymmetry (CN)3VB defect. Single boron atom vacancy defect and (CN)3VB have optical transitions near infrared and ultraviolet energy, respectively. The present work will help to further understand the composition and optical properties of point defects in hBN, and provide a theoretical basis for experimentally exploring the origin and properties of the atomic structure of light-emitting point defects.
## EDITOR'S SUGGESTION
2021, 70 (3): 033103. doi: 10.7498/aps.70.20201407
Irradiation damage to zirconium alloys (e.g., zirconium niobium (Zr-Nb) alloy) is the key to the design of fission-reactor structural materials and fuel rod cladding materials. Atomic scale computational simulations such as molecular dynamics and first principles are often needed to understand the physical mechanism of irradiation damage. For the simulation of randomly substitutional solid solution, it is necessary to construct large-sized supercells that can reflect the random distribution characteristics of alloy elements. However, it is not suitable to use large-size supercells (such as ≥ 200 atoms) for first principle calculation, due to the large computational cost. Special quasirandom supercells (SQS) are usually used for first principles calculation. The SQS can partly reflect the random distribution characteristics of alloy elements, but it only corresponds to one configuration for specific components, hence whether this model can reflect the statistical average of multiple local configurations in a real randomly substitutional solid solution is still an open question, and needs further studying and verifying. Molecular dynamics (MD) simulation can be carried out on the randomly substitutional solid solution with a larger scale based on random substitution (RSS) method, these supercells include more local configurations. Therefore, the MD studies of Zr-Nb alloy are carried out for the RSS and SQS-extended supercells. The critical size of RSS supercell which can truly reflect the statistical properties of solid solution alloy is determined. Then the lattice constant, formation energy and energy-volume relationship of SQS-extended supercell of Zr-Nb alloy and a series of RSS supercells are calculated and compared. The results show that the lattice constants, the formation energy and energy volume curves of the solid solution obtained by SQS supercell simulation are close to a series of corresponding statistical values of the physical properties of RSS supercells, so the SQS supercells can be used to study the random substitution of solid solution alloys.
###### ELECTROMAGNETISM, OPTICS, ACOUSTICS, HEAT TRANSFER, CLASSICAL MECHANICS, AND FLUID DYNAMICS
2021, 70 (3): 034101. doi: 10.7498/aps.70.20200937
Magnetic dipole theory has been widely and successfully used to explain the leakage magnetic field signals. Because the model parameter such as magnetic dipole density is not easy to quantify, magnetic dipole theory often needs normalizing in application, which is considered to be unsuitable for quantitatively analyzing the magnetic memory signals with the stress effect. In this paper, the theoretical model of magneto-mechanical coupling magnetic dipole is established, which is suitable for analyzing the stress effect on magnetic signals in magnetic memory testing method. Based on the ferromagnetic theory, the equivalent field under the combined action of the applied load and the magnetic field is determined. And then, the magneto-mechanical analytical model is obtained for the isotropic ferromagnetic material under the weak magnetic field based on the first-order magnetization approximation in the weak magnetization state. Under the assumptions of rectangular and V-shaped magnetic charge distribution for the two-dimensional magnetic signal problem, the theoretical analytical models of the magnetic memory signals from the smooth and cracked specimens, and the analytical models of the magnetic memory signal induced by the rectangular and V-shaped surface defect are established. Based on the analytical solution of the proposed magneto-mechanical magnetic dipole theory, the difference in signal between before and after the failure of the specimen, the signal from the rectangular and V-shaped defect, and other influencing factors and laws of the magnetic signal are analyzed in detail. In particular, the influence of stress, environmental magnetic field, defect morphology and size, lift-off effect, specimen size and other factors on magnetic memory signals can be described based on the analytical solution of magneto-mechanical magnetic dipole models proposed in this paper. The proposed analytical model of magneto-mechanical magnetic dipole in this paper is simple and easy to use, and the present research shows that the proposed analytical solution in this paper can explain some basic experimental phenomena and laws in magnetic memory testing experiments. In addition, the precise magneto-mechanical coupling quantitative model combined with the finite element analysis method is still needed for accurately analyzing the magnetic memory signals in experiment.
2021, 70 (3): 034102. doi: 10.7498/aps.70.20201034
Electromagnetic diffusion surface can reduce the radar cross section, thus profiting stealth of targets. Terahertz diffusion surface has a wide prospect in the field of next-generation radar and communication, promising to act as a kind of intelligent smart skin. In this paper, utilizing the excellent tunable properties of graphene in the terahertz band, a hybrid structure of graphene and metal which has inverse phase response of reflecting waves is proposed. The reflection phase switches in the mechanism of resonant modes and can be controlled efficiently by the bias voltage. Meanwhile, unlike metal materials, graphene has a non-negligible loss characteristic, which leads the response amplitudes corresponding to the two different switching states to be inconsistent with each other. According to the interference and superposition principle of electromagnetic field, it is not conducive to eliminating the coherent far-field, leading to an unsatisfactory diffusion result. In this paper, we present a “molecular” structure by secondary combination of the above-mentioned reverse phase element states, and take it as the basic element of the diffusion surface. Finally, we use particle swarm optimization to optimize the arrangement of “molecular” structures. The final diffusion surface consists of a combinatorial design of “molecules” rather than randomly distributed reflection units. In addition, molecules designed artificially have similar amplitude responses but different phase responses, which improves the convergence speed and reduces the computation quantity during algorithm evolution. The method of designing molecular structure, described in this paper, is simple, rapid and widely applicable, which effectively improves the amplitude-to-phase modulation ability of graphene metasurface against electromagnetic waves. When diffuse reflection optimization is applied to most of graphene metasurfaces, the method described in this paper can achieve the results that are the same as or even better than the results after a large number of iterations of traditional particle swarm optimization in the most computation-efficient manner. The results show that the dynamic diffusion surface designed by this method has the advantages of fast convergence speed and small far-field peak.
2021, 70 (3): 034201. doi: 10.7498/aps.70.20201121
Transmission of the subwavelength metal aperture excited by the surface plasmon resonance is much higher than that from the Bethe theory. However, due to the sensitivity of the resonant frequency and the loss of metal in the optical band, it is difficult to achieve broadband and high transmission of the subwavelength metal aperture through surface plasmon resonance. In this article, the broadband and high transmission of the subwavelength metal aperture is realized when Mie-resonance-coupled silicon nanoparticles placed on both sides of the metal aperture are used to replace the surface plasmon resonance. The full wave simulation results show that the bandwidth over which the transmission coefficient of the subwavelength aperture ($r/\lambda = 0.1$) exceeds 90% reaches 65 nm by using Mie-resonance-coupled silicon nanoparticles. Compared with the transmission induced by surface plasmon resonance, the peak value is improved by 1.5 times and the 3 dB bandwidth is widened by 17 times. According to the coupled mode theory, the equivalent circuit model of transmission of the subwavelength metal aperture with Mie-resonance-coupled silicon nanoparticles is established, and the element parameters in the circuit model are inverted under the critical coupling state. Further research shows that the transmission rule of the subwavelength metal aperture with Mie-resonance-coupled silicon nanoparticles can be accurately revealed by changing the coupling coefficient in the equivalent circuit model, and the results are consistent with the full wave electromagnetic simulation results. The mathematical expression of the interaction between light and the Mie-resonance-coupled subwavelength metal aperture is found, which can inspire us to construct certain functional modules in the optical field according to the circuit design method.
2021, 70 (3): 034202. doi: 10.7498/aps.70.20200927
Computed tomography (CT) is an effective tool for three-dimensional (3D) imaging by using optical detectors to capture the two-dimensional (2D) projections of tested parameters from multiple views and realizing 3D reconstruction through various algorithms. However, for practical applications, typically only a few detectors can be applied due to their high expense and the limited optical access of the test environment. The realization of high precision reconstruction with a few projections is of great significance for promoting the development and application of CT technology. The spatial arrangement of the detectors determines the amount of useful information collected by the system, which greatly affects the quality of CT reconstruction. Therefore, in this work we study the optimization method of projection arrangement based on the 3D Mojette transform theory.Mojette transform is a special discrete form of Radon transform, which can realize projection sampling with minimum redundancy and accurate tomographic reconstruction from less projection angles. It provides a new way to realize the CT technology with fewer projections. However, the existing researches mainly focus on the reconstruction theories of 2D Mojette transform, which is used for realizing the 2D slice tomography. In order to realize the real 3D tomographic reconstruction, in this work we establish a mathematical model of 3D Mojette transform, and study its accurate reconstruction condition. The results show that the 3D Mojette transform is a combination of twice 2D Mojette transform in two directions. The accurate reconstruction condition of 3D Mojette transform is that the sum of the absolute values of projection vectors’ components in x, y, and z directions is greater than the number of discrete grids in each direction. The correctness of the mathematical model and the accurate reconstruction condition are verified by numerical simulations.Considering the limitation of the pixels in the practical detectors, the method to determine the optimal arrangement of projection angles is proposed. The results indicate that the optimal arrangement is that all detectors are located in the same horizontal plane around the tested object, where the projection model is reduced to 2D Mojette transform. In this case, the minimum projection angles and pixels are required and the projection angles can be positioned in a smaller spatial range. If the condition cannot be satisfied in practice, projection vectors with smaller |pi| and |qi| should be chosen. This research provides the theoretical basis for establishing the actual CT system.
2021, 70 (3): 034203. doi: 10.7498/aps.70.20201042
In order to deal with the thermal management problem of high-energy high-repetition rate laser amplifiers, the efficient heat removal in water-cooled Nd:YAG active mirror amplifiers is investigated in detail through numerical modeling and experimental analysis. According to the low Reynolds number k-ε turbulence model, a full fluid-solid conjugate heat transfer model is established to give a comprehensive model of flow and thermal characteristics in three dimensions. The thermal distributions obtained from the model are then used to calculate all mechanical stresses in the laser medium and thermally-induced wavefront distortions. In comparison with the standard k-ε turbulence model, the influences of the near-wall treatments of the above model on the process of fluid flow, convection diffusion and heat conduction, and temperature distributions are analyzed. Meanwhile, the effects of coolant flow rate and pump parameter on the flow field characteristics, temperature and wavefront distributions of the YAG disk are also studied. Numerical simulation results reveal that the temperature distribution of the laser medium is closely related to the viscous effect in the solid-liquid boundary layer. Although the heat deposition distribution of the laser medium is symmetrical, the temperature profile is asymmetrical as a result of the increasing water temperature along the water flow. The maximum temperature rise of the disk is at the outlet end, and the position remains almost unchanged. The front-surface temperature distributions and wavefront profiles of Nd:YAG vary nonlinearly with the coolant flow rates, but linearly with the pump parameter. Model predictions show that when the laser amplifier operates at a repetition rate of 50 Hz, the thermal diffusion of the coolant mainly occurs in a range of 100 μm, and the maximum temperature difference of the coolant reaches up to 10.85 ℃. Correspondingly, the maximum temperature variation over the front-surface active region is less than 4 ℃, with an average temperature of 49.62 ℃, which leads to a total peak-to-valley wave front distortion of 7.27λ. The experimentally measured temperature distributions are in reasonable agreement with numerical simulations. The research results are beneficial to designing and optimizing the high-energy, high-repetition rate water-cooled Nd:YAG active mirror amplifiers.
2021, 70 (3): 034205. doi: 10.7498/aps.70.20201135
Laser frequency scanning interferometry, as a non-contact method, has no ranging blind zone and achieves multi-target testing in a single measurement. The beat frequency of a target can be extracted by Fourier transform, and then the distance can be solved. However, due to the limitation of the laser frequency modulation bandwidth, the resolution of targets obtained by Fourier transform is limited to the inherent resolution. To solve this problem, in this paper we propose to use the estimation of signal parameters via rotational invariance technique (ESPRIT) to perform spectrum analysis on the measured signal. In the experiment, a resampling method is adopted to correct the non-linearity of the measured signal beat frequency, and then the ESPRIT algorithm is used to obtain the target distance. The results show that the Fourier transform algorithm cannot distinguish the target signal from the frequencies of an adjacent target, but the ESPRIT algorithm can. The thickness of the measured target is 2.08 mm. This provides ideas for measurements such as locating a damage point in the proximity of a fiber, or measuring the height of a thin step or a small hole.
2021, 70 (3): 034301. doi: 10.7498/aps.70.20201270
A monolayer bend waveguide is designed based on the features of Rayleigh-Bloch (RB) mode wave in one-dimensional diffraction grating. The feasibility that the RB mode wave can transmit along the bend waveguide is demonstrated by the time-domain and frequency-domain finite element method, respectively. The results show that two different modes of transmission wave exist because of employing the circled unit cells. They possess different acoustical energy localization positions. In mode-1, the energy is localized between unit cells. In mode-2, the energy is localized in the center of unit cell, therefore, acoustic wave transmits with nearly no loss. Modulated sinusoidal wave and Gaussian pulse wave are used in the time-domain investigation. Because only RB mode waves can transmit and different modes have different energy distributions, the bend waveguide acts as an acoustic filter for the broadband waves. This study is conducive to the acoustic wave directional transmission, acoustic signal detection and identification.
2021, 70 (3): 034302. doi: 10.7498/aps.70.20201233
The rough sea bottom has a large effect on underwater acoustic propagation and underwater acoustic detection applications. By using the typical shallow water environment from the Yellow Sea, the acoustic propagation characteristics under the condition of both periodic rough sea bottom and strong negative thermocline layer are systematically analyzed by using the parabolic equation model RAM (where RAM stands for range-dependent acoustic model) and ray theory. For a low-frequency and short-range acoustic source, the transmission loss (TL) increases up to about 5–30 dB due to the existence of the periodic rough bottom. Abnormal TLs and pulse arrival structures with different source depths, different periods and heights of the rough bottom are analyzed and summarized. Specifically, when the period of the rough bottom is constant, TL increases with the height of the rough bottom increasing. When the height of the rough bottom is constant, the effect of the rough bottom on the sound propagation becomes smaller with the increase of the period. The mechanism of the TL difference caused by rough bottom is explained by using the ray theory. The incidence and reflection angle of the sound ray on the sea bottom are changed due to the periodic rough bottom, which makes small grazing angles of some of the rays incident at sea bottom become large grazing angles, and the bottom loss increases. On the other hand, the change of the reflection angle increases the number of ray interaction with the sea bottom, causing the reversion propagation. Therefore, the energy of the sound field will attenuate with range increasing. The influence of the periodic rough bottom on the sound pulse propagation is mainly reflected in the energy conversion between sound rays (or normal modes) with different angles, the increasing of energy attenuation of some sound rays with large angles, and the decreasing of multipath structure. The change of the arrival time and relative amplitude of the multipath structure affect the frequency spectrum of the sound field, which will affect the performance of the method based on matching field localization. Most of existing studies focus on the influence of the change in large scale sea bottom topography on the sound field, but there are few studies on small scale periodic sea bottom fluctuations, and the relevant summary of the law of sound propagation is lacking. When sonar is used in the actual shallow water environment, more attention should be paid to the influence of the periodic rough bottom. In addition, the present research results also have important reference significance for the spatial accuracy of surveying and mapping of sea bottom topography.
2021, 70 (3): 034401. doi: 10.7498/aps.70.20201005
Supercritical CO2 can be used as a heat transfer fluid in a solar receiver, especially for a concentrating solar thermal power tower system. Such applications require better understanding of the heat transfer characteristics of supercritical CO2 in the solar receiver tube in a high temperature region. However, most of the existing experimental and numerical studies of the heat transfer characteristics of supercritical CO2 in tubes near the critical temperature region, and the corresponding heat transfer characteristics in the high temperature region are conducted. In this paper, a three-dimensional steady-state numerical simulation with the standard k-ε turbulent model is established by using ANSYS FLUENT for the flow and heat transfer of supercritical CO2 in a heated circular tube with an inner diameter of 6 mm and a length of 500 mm in the high temperature region. The effects of the fluid temperature (823–1023 K), the flow direction (horizontal, downward and upward), the pressure (7.5–9 MPa), the mass flux (200–500 kg·m–2·s–1) and the heat flux (100–800 kW·m–2) on the convection heat transfer coefficient and Nusselt number are discussed. The results show that the convection heat transfer coefficient increases while Nusselt number decreases nearly linearly with fluid temperature increasing. Both fluid direction and pressure have negligible effects on the convection heat transfer coefficient and Nusselt number. Moreover, the convective heat transfer coefficient and Nusselt number are enhanced greatly with the increasing of mass flux and the decreasing of heat flux, which is more obvious at a higher heat flux. The influences of buoyancy and flow acceleration on the heat transfer characteristics are also investigated. The buoyancy effect can be ignored within the present parameter range. However, the flow acceleration induced by the high heat flux significantly deteriorates the heat transfer preformation. Moreover, eight heat transfer correlations of supercritical fluid in tubes are evaluated and compared with the present numerical data. The comparison indicates that the correlations based on the thermal property modification show better performance in the heat transfer prediction in the high temperature region than those based on the dimensionless number modification. And Nusselt number predicted by the best correlation has a mean absolute relative deviation of 8.1% compared with the present numerical results, with all predicted data points located in the deviation bandwidth of ±20%. The present work can provide a theoretical guidance for the optimal design and safe operation of concentrating solar receivers where supercritical CO2 is used as a heat transfer fluid.
###### PHYSICS OF GASES, PLASMAS, AND ELECTRIC DISCHARGES
2021, 70 (3): 035201. doi: 10.7498/aps.70.20200774
Piezoelectric elements are commonly used because of their wide applications in sensors, transducers, and some micro intelligent structures. However, in the fields of aviation, aerospace, and automation, some relevant equipment works in a harsh environment and is susceptible to temperature changes, which can greatly affect its performance. Therefore, the problem of nonlinear waves in piezoelectric circular rods in different temperature fields is studied by modeling and numerical analysis. Firstly, based on the theory of finite deformation, we take an infinite piezoelectric circular rod as the research object and consider the effects of transverse inertia and the equivalent Poisson's ratio under thermoelectric coupling. Using the Hamilton principle and introducing the Euler equation, the longitudinal wave equation of the piezoelectric circular rod is obtained. Secondly, the Jacobi elliptic cosine and sine function expansion methods are used to solve the wave equation of the piezoelectric circular rod, and the solitary wave solution and the exact periodic solution are obtained. It is found that the periodic solution reduces to the solitary wave solution under certain conditions, and it is proved theoretically that solitary waves may propagate stably in a piezoelectric circular rod. Finally, the dispersion curves for different wave velocity ratios, and curves showing the influence of the temperature field on the waveform, amplitude and wave number of the piezoelectric rod, are obtained with Matlab. The numerical results show that the wave velocity decreases with increasing temperature when the wave velocity ratio is held constant. When the temperature is held constant, the amplitude of the solitary wave gradually increases with increasing ratio while the wavelength gradually decreases. In addition, the images show that although temperature changes alter the characteristics of solitary waves, the solitary waves always remain symmetric bell-shaped waves during propagation, reflecting their stability under the combined action of nonlinear and dispersion effects. Therefore, the variation of the temperature field can influence and control some propagation characteristics of solitary waves. Moreover, wave theory has been widely used in the nondestructive testing of structures and in improving information transmission quality owing to this special stability.
###### CONDENSED MATTER: STRUCTURAL, MECHANICAL, AND THERMAL PROPERTIES
2021, 70 (3): 036101. doi: 10.7498/aps.70.20201288
Nanocrystalline rare earth hexaboride Nd1–xEuxB6 powders are successfully synthesized by a simple solid-state reaction under vacuum for the first time. The effect of Eu doping on the crystal structure, grain morphology, microstructure and optical absorption properties of nanocrystalline NdB6 is investigated by X-ray diffraction, scanning electron microscopy (SEM), high resolution transmission electron microscopy (HRTEM) and optical absorption measurements. The results show that all the synthesized samples have a single-phase CsCl-type cubic structure with space group Pm-3m. The SEM results show that the average grain size of the synthesized Nd1–xEuxB6 powders is 50 nm. The HRTEM results show that nanocrystalline Nd1–xEuxB6 has good crystallinity. The optical absorption results show that the absorption valley of nanocrystalline Nd1–xEuxB6 is redshifted from 629 nm to beyond 1000 nm with increasing Eu doping, indicating that the transparency of NdB6 is tunable. Additionally, the X-ray absorption near-edge structure spectra μ(E) around the Nd and Eu L3 edges for nanocrystalline NdB6 and EuB6 show that the total valence of the Nd ion in nanocrystalline NdB6 is estimated to be +3 and the total valence of the Eu ion in nanocrystalline EuB6 is +2. Therefore, Eu doping of NdB6 effectively reduces the number of conduction electrons, which lowers the plasma resonance energy. In order to further qualitatively explain the influence of Eu doping on the optical absorption mechanism, first-principles calculations are used to obtain the band structure, density of states, dielectric function and plasma resonance energy. The calculated results show that the electron bands of NdB6 and EuB6 cross the Fermi level, indicating that they are typical conductors. In addition, the plasmon resonance frequency can be read off from the electron energy loss function. The plasmon resonance energies of NdB6 and EuB6 are 1.98 and 1.04 eV, corresponding to absorption valleys at 626.26 and 1192.31 nm, respectively. This confirms that the first-principles results are in good agreement with the experimental optical absorption valleys. Therefore, as an efficient optical absorption material, nanocrystalline Nd1–xEuxB6 powder can expand the optical application scope of rare earth hexaborides.
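The stated correspondence between plasmon resonance energy and absorption-valley wavelength is just λ = hc/E; a quick check reproduces the quoted wavelengths:

```python
# Check the quoted energy/wavelength correspondence via lambda = h*c/E
# (energies taken from the abstract; constants are CODATA values).
h = 6.62607015e-34      # Planck constant (J s)
c = 2.99792458e8        # speed of light (m s^-1)
eV = 1.602176634e-19    # J per electronvolt

for E_eV, name in [(1.98, "NdB6"), (1.04, "EuB6")]:
    lam_nm = h * c / (E_eV * eV) * 1e9
    print(f"{name}: {E_eV} eV -> {lam_nm:.1f} nm")
# ~626 nm and ~1192 nm, matching the absorption valleys quoted in the text.
```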
2021, 70 (3): 036301. doi: 10.7498/aps.70.20201387
Formamidinium lead triiodide (FAPbI3) perovskite has developed into a promising candidate for solar cells owing to its excellent optoelectronic properties. However, its poor environmental stability is still a critical hurdle for further commercial application. Element doping is an effective method of improving the stability of FAPbI3 materials. It has been reported that the heat and moisture stability of FA1–xCsxPbI3–yBry is greatly improved by co-doping with Cs cations and Br anions. In this study, we perform first-principles calculations to systematically investigate the crystal structures, electronic structures, and optical properties of FA1–xCsxPbI3–yBry. We obtain several stable crystal structures of FA1–xCsxPbI3–yBry (x = 0.125, y = 0—0.6) in the cubic phase for different ratios of Cs cations to Br anions. Analysis of the structures of these mixed-ion perovskites reveals that the lattice parameters decrease linearly with increasing concentration of Cs cations and Br anions, which is consistent with previous experimental results. In this work, the formation energy difference (∆E) is calculated, and our results show that mixing in Cs cations and Br anions can increase the thermodynamic stability compared with pure FAPbI3. FA0.875Cs0.125PbI2.96Br0.04 is found to be the most stable of all the composites investigated. Furthermore, the band gap and the hole and electron effective masses increase with increasing proportion of Br anions, indicating an effective strategy for extending the absorption range of FAPbI3 perovskites into the ultraviolet region of the solar spectrum, thereby affecting the carrier transport mechanism in this material. Density of states (DOS) analysis indicates that the DOS at the valence band edge increases with increasing proportion of Br anions, enhancing transitions between the valence and conduction bands. Finally, the absorption rate, carrier collection efficiency, external quantum efficiency, short-circuit current density, open-circuit voltage and volt-ampere characteristics of the planar perovskite solar cell are analyzed by the equivalent optical admittance method. For the FA1–xCsxPbI3–yBry (x = 0.125, y = 0.04, thickness = 0.5—1.0 μm) solar cell, the short-circuit current density and the open-circuit voltage are estimated to be about 24.7 mA·cm–2 and 1.06 V, respectively. It is demonstrated that co-doping with Cs cations and Br anions can improve the stability of the system without reducing the short-circuit current density, which may provide some theoretical guidance for preparing perovskite solar cells with high efficiency and excellent stability.
2021, 70 (3): 036801. doi: 10.7498/aps.70.20200762
Reduced-activation ferritic/martensitic steel is one of the candidate structural materials for the tritium breeding module in a fusion reactor. In order to keep the permeation of tritium within an acceptable range, a coating with low hydrogen isotope permeability, known as a tritium permeation barrier, is usually prepared on the surface of such structural materials. FeAl/Al2O3 is the first choice of tritium permeation barrier in many countries because of its high permeation reduction factor, corrosion resistance and high-temperature resistance. The surface morphology and microstructure of the Fe-Al infiltrated layer have an important influence on the quality of the Al2O3 coating. In this study, an Al coating is prepared on the surface of CLAM steel by electroplating aluminum from AlCl3-EMIC. The Fe-Al infiltrated layer is then obtained by annealing, which drives diffusion between the Al coating and the substrate. The effects of annealing time and temperature on the microstructure of the Fe-Al infiltrated layer are studied by X-ray diffraction, scanning electron microscopy and energy dispersive spectrometry. The results show that a 20-μm-thick aluminum coating is obtained on the CLAM steel surface by electroplating. The Al coating is uniform and compact, and the size of its surface columnar grains decreases with increasing electroplating current density. Annealing results show that neither holes nor gaps are observed between the Fe-Al infiltrated layer and the substrate. In addition, the infiltrated layer is tightly bound to the substrate, with a thickness ranging from 7 μm to 45 μm depending on the annealing parameters. At the initial stage of annealing, a Cr-enriched Fe-Al alloy is evidently formed. However, this Cr enrichment disappears at higher annealing temperature or longer annealing time due to diffusion. The surface of the infiltrated layer changes from an aluminum-rich phase to an aluminum-poor phase, and its thickness increases as the annealing time or temperature rises. The temperature dependence of the growth rate of the Fe-Al infiltrated layer can be described by the Arrhenius equation, and the activation energy of aluminization on CLAM steel is calculated to be 78.48 kJ/mol. At 640 ℃ and 760 ℃, the growth of the Fe-Al infiltrated layer is controlled by grain-boundary as well as volume diffusion. To obtain a reasonable thickness and microstructure of the Fe-Al alloy layer while keeping the annealing time and temperature as low as possible, the optimal annealing temperature and time are 700 ℃ and 10 h, respectively.
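As a quick illustration of the Arrhenius scaling quoted above (only the activation energy from the abstract is used; the unknown prefactor cancels in the ratio):

```python
import numpy as np

# Growth-rate ratio under k ~ exp(-Ea / (R*T)), comparing the two annealing
# temperatures mentioned in the abstract (640 and 760 degrees C).
R = 8.314            # gas constant (J mol^-1 K^-1)
Ea = 78.48e3         # activation energy (J mol^-1), from the abstract
T1, T2 = 640 + 273.15, 760 + 273.15   # temperatures in K

ratio = np.exp(-Ea / R * (1 / T2 - 1 / T1))
print(f"growth-rate ratio k(760C)/k(640C) = {ratio:.2f}")
# ~3.3x faster at 760 C, illustrating why the layer thickness grows strongly
# with annealing temperature.
```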
###### CONDENSED MATTER: ELECTRONIC STRUCTURE, ELECTRICAL, MAGNETIC, AND OPTICAL PROPERTIES
2021, 70 (3): 037101. doi: 10.7498/aps.70.20201287
The h-LuFeO3 is a narrow-band-gap hexagonal ferrite with good application prospects in the field of ferroelectric photovoltaics. However, the low polarization intensity of h-LuFeO3 makes the recombination rate of photogenerated electrons and holes large, which is not conducive to improving the efficiency of h-LuFeO3-based ferroelectric photovoltaic cells. In order to improve the ferroelectricity and optical absorption of h-LuFeO3, the first-principles method is used to calculate the doping formation energies of an In atom at different positions in h-LuFeO3, and the most stable doping position is determined. The band gap, optical absorption and polarization intensity of h-Lu1–xInxFeO3 (x = 0, 0.167, 0.333, 0.667) are compared. With increasing In doping, the cell of h-Lu1–xInxFeO3 stretches along the c-axis: the lattice-constant ratio c/a increases from 1.94 at x = 0 to 2.04 at x = 0.667 when the In atoms occupy the P1 position. Qualitative calculations based on the Born effective charge show that the ferroelectric polarizations of h-LuFeO3, h-Lu0.833In0.167FeO3, h-Lu0.667In0.333FeO3 and h-Lu0.333In0.667FeO3 along the c-axis are 3.93, 5.91, 7.92, and 11.02 μC·cm–2, respectively. Therefore, as the number of In atoms replacing Lu atoms increases, the c/a ratio of h-Lu1–xInxFeO3 increases, which improves the ferroelectric polarization of the material. Analysis of the densities of states of h-LuFeO3 and h-Lu0.333In0.667FeO3 shows that In doping enhances the Fe-O orbital hybridization in h-Lu0.333In0.667FeO3 and enlarges its optical absorption coefficient over the solar spectral range. In summary, In doping of h-LuFeO3 is an effective way to improve its polarization intensity and optical absorption coefficient, which is of great significance for improving the performance of ferroelectric photovoltaic devices.
2021, 70 (3): 037102. doi: 10.7498/aps.70.20200921
The “bipolar degradation” phenomenon has severely impeded the development of 4H-SiC bipolar devices. Its mechanism is the expansion of Shockley-type stacking faults from basal plane dislocations under electron-hole recombination. To suppress bipolar degradation, not only do the basal plane dislocations in the 4H-SiC drift layer need to be eliminated, but a recombination-enhancing buffer layer is also required to prevent minority-carrier holes from reaching the epilayer/substrate interface, where high-density basal plane dislocation segments exist. In this paper, Ti and N co-doped 4H-SiC buffer layers are grown to further shorten the minority carrier lifetime. Firstly, the dependence of the Ti doping concentration in 4H-SiC epilayers on the TiCl4 flow rate is determined by using single-dilution and double-dilution gas lines. Then the p+ layer and p++ layer of the PiN diode are obtained by aluminum ion implantation at room temperature and 500 ℃, followed by high-temperature activation annealing. Finally, 4H-SiC PiN diodes with a Ti, N co-doped buffer layer are fabricated and stressed at a forward current density of 100 A/cm2 for 10 min. Compared with PiN diodes without a buffer layer or with a buffer layer doped only with a high concentration of nitrogen, the forward-voltage-drop stability of the diodes with a 2-μm-thick Ti, N co-doped buffer layer (Ti: 3.70 × 1015 cm–3 and N: 1.01 × 1019 cm–3) is greatly improved.
## COVER ARTICLE
2021, 70 (3): 037401. doi: 10.7498/aps.70.20201291
The YBa2Cu3O7–δ (YBCO) step-edge Josephson junction on MgO substrate has recently been shown to have important applications in making advanced high-transition temperature (high-TC) superconducting devices such as high-sensitivity superconducting quantum interference device (SQUID), superconducting quantum interference filter, and THz detector. In this paper, we investigate the fabrication and transport properties of YBCO step-edge junction on MgO substrate. By optimizing the two-stage ion beam etching process, steps on MgO (100) substrates are prepared with an edge angle θ of about 34°. The YBCO step-edge junctions are then fabricated by growing the YBCO thin films with a pulsed laser deposition technique and subsequent traditional photolithography. The resistive transition of the junction shows typical foot structure which is well described by the Ambegaokar-Halperin theory of thermally-activated phase slippage for overdamped Josephson junctions. The voltage-current curves with temperature dropping down to 77 K exhibit resistively shunted junction behavior, and the Josephson critical current density JC is shown to follow the $(T_{\rm C}-T)^2$ dependence. At 77 K, the JC of the junction reaches 1.4 × 105 A/cm2, significantly higher than the range of 103–104 A/cm2 as presented by other investigators for YBCO step-edge junctions on MgO substrate with comparable θ of 35°–45°. This indicates a rather strong Josephson coupling of the junction, and by invoking the results of YBCO bicrystal junctions showing similar values of JC, it is tentatively proposed that the presently fabricated junction might be described as an S-s′-S junction with s′ denoting the superconducting region of depressed TC in the vicinity of the step edge or as an S-N-S junction with N denoting a very thin non-superconducting layer. By incorporating the MgO-based YBCO step-edge junction, high-TC radio frequency (RF) SQUID is made. The device shows a decent voltage-flux curve and magnetic flux sensitivity of 250 $\text{μ}\Phi_0/{\rm Hz}^{1/2}$ at 1 kHz and 77 K, comparable to the values reported in the literature.
To further improve the RF SQUID performance, efforts could be devoted to optimizing the junction parameters such as the junction JC. By using the YBCO step-edge junction on MgO substrate, high-TC direct current SQUID could also be developed, as reported recently by other investigators, to demonstrate the potential of MgO-based step-edge junction in making such a kind of device with superior magnetic flux sensitivity.
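For illustration, the quoted $(T_{\rm C}-T)^2$ law can be turned into a small calculator. Note that $T_{\rm C}$ is not given in the abstract, so the 90 K below is an assumed, typical value for YBCO, not a measured one.

```python
# Sketch of the quoted Jc ~ (Tc - T)^2 scaling for the step-edge junction.
Tc = 90.0            # assumed critical temperature (K), NOT from the abstract
Jc77 = 1.4e5         # measured critical current density at 77 K (A cm^-2)

def Jc(T):
    """Critical current density under the (Tc - T)^2 law, pinned at 77 K."""
    return Jc77 * ((Tc - T) / (Tc - 77.0)) ** 2

for T in (77.0, 80.0, 85.0):
    print(f"T = {T:.0f} K: Jc ~ {Jc(T):.2e} A/cm^2")
```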
2021, 70 (3): 037801. doi: 10.7498/aps.70.20201271
As a new family member of two-dimensional materials, black phosphorus has attracted much attention due to its infrared band gap and strongly anisotropic properties, bringing new concepts and applications in different fields. In characterizing black phosphorus, optical and electrical methods are typically used to obtain structural information and fundamental electronic properties. So far, more studies are still needed to understand the physical principles in depth and facilitate applications. In this paper, multilayered black phosphorus flakes are synthesized via mechanical exfoliation from the bulk crystal, and field-effect transistors based on few-layer black phosphorus are fabricated by micro-nano fabrication technology, with four pairs of symmetric electrodes arranged over 0°–360°. We experimentally obtain the characteristics of the Raman modes ${\rm{A}}_{\rm{g}}^{\rm{1}}$, ${\rm B_{2g}}$, and ${\rm{A}}_{\rm{g}}^2$ in the parallel (XX) and vertical (XY) polarization configurations. Furthermore, the angle-dependent source-drain current is measured through a BP field-effect transistor. The Raman spectrum results demonstrate that three characteristic peaks are located at 361, 439 and 467 cm–1 in the range of 200–500 cm–1, corresponding to the vibration modes ${\rm{A}}_{\rm{g}}^{\rm{1}}$, ${\rm B_{2g}},$ and ${\rm{A}}_{\rm{g}}^2$, respectively. The fitted polarization-dependent Raman spectra also show that the intensity of each of the three characteristic peaks varies with a 180° period in the parallel polarization configuration and likewise in the vertical polarization configuration. The maximum Raman intensity of Ag is along the AC direction, while that of B2g is along the ZZ direction. On the other hand, the electric transport curves illustrate that the largest source-drain current is obtained near the 0° (180°) armchair direction. Such results indicate the anisotropy of black phosphorus.
Furthermore, transfer curves at different electrode angles show weak bipolarity of black phosphorus at 45° (225°) and 90° (270°), and p-type behavior at 0° (180°) and 135° (315°). This work is conducive to studying the properties and practical applications of devices based on black phosphorus.
2021, 70 (3): 038101. doi: 10.7498/aps.70.20200975
A broadband and high-efficiency bi-layer metasurface is proposed in this paper. The unit cell of the metasurface is formed by symmetrically etching two cross-shaped metal patches on the two sides of a dielectric plate, with the two patches displaced by half a period along the y-axis. This displacement significantly expands the transmission bandwidth of the bi-layer metasurface. In order to gain physical insight into the bandwidth broadening, a π-type equivalent circuit representing the electromagnetic coupling within the bi-layer metasurface is extracted to investigate the influence of the coupling on the transmission performance. The results show that shifting the metal patches along the y-axis by half a period significantly modifies the coupling impedance (Z12 or Z21) of the bi-layer metasurface, which further changes its electromagnetic coupling. Correspondingly, the impedances Zp and Zs in the π-type circuit change so as to approximately satisfy the resonance condition of the circuit over a broad band, resulting in the bandwidth expansion of the proposed device. Using Pancharatnam-Berry phase theory, we redesign the proposed metasurface unit cell into a broadband orbital angular momentum generator. The simulation and measurement results verify that the bi-layer metasurface can convert a left-hand circularly polarized wave into a right-hand circularly polarized wave carrying orbital angular momentum in the frequency range from 11 GHz to 12.8 GHz, demonstrating the performance of the device.
2021, 70 (3): 038102. doi: 10.7498/aps.70.20201054
Terahertz metamaterial (THz MM) absorbers, as an important type of MM functional device, can not only achieve perfect absorption of incident THz waves, but also act as refractive index sensors that capture and monitor changes in the surrounding environment. Generally, the sensing characteristics of a THz MM absorber can be improved by optimizing the structure of the surface metal resonance unit and changing the material and shape of the dielectric layer. In order to further study the influence of the intermediate dielectric layer on the sensing characteristics of THz MM absorbers, in this paper we implement three THz MM absorbers based on a metallic split-ring resonator array: one with a continuous dielectric layer, one with a discontinuous dielectric layer, and one with a microcavity structure, and conduct an in-depth study of their sensing characteristics and sensing mechanisms. The THz MM absorber with a continuous dielectric layer and a metallic split-ring resonator array can be used as a refractive index sensor to detect analytes with different refractive indexes coated on its surface. However, its refractive-index frequency sensitivity and FOM value show that the detection sensitivity of this sensor is limited and its sensing performance still needs improving. The main reason is that most of the resonant electromagnetic (EM) field of the THz MM absorber is tightly bound in the intermediate dielectric layer, and only the fringe field extending to the surface of the resonant-unit array can interact with the analyte to be measured; the intensity of this part of the field directly determines the sensitivity of the sensor. In order to improve the refractive-index frequency sensitivity, reduce the confinement of the resonant EM field by the intermediate dielectric layer, and enhance the interaction between the resonant EM field and the analyte, a THz MM absorber with a discontinuous dielectric layer is proposed and studied. Compared with the continuous-dielectric-layer absorber, it can be used as a refractive index sensor for higher-sensitivity detection of analytes coated on its surface. To enhance this interaction further and improve the refractive-index frequency sensitivity, a THz MM absorber with a microcavity structure is proposed. For this absorber, the analyte filling the microcavity serves as the intermediate dielectric layer, and when the metallic split-ring resonator array is completely immersed in the analyte, the resonant EM field originally confined in the intermediate dielectric layer and the analyte completely overlap in space. Therefore, compared with the first two THz MM absorbers, the microcavity absorber brings the resonant EM field into tight and full contact with the analyte, thereby greatly improving its sensitivity as a sensor.
The results show that, in order to improve the sensing characteristics of a THz MM absorber, such as the refractive index sensitivity and the maximum detection range, in addition to using materials with relatively low permittivity as the intermediate dielectric layer, the morphology of the intermediate dielectric layer can be changed, thereby reducing its restraint on the resonant field and enhancing the coupling between the resonant field and the analyte. Compared with the conventional THz MM absorber with a continuous dielectric layer, the MM absorbers with a discontinuous dielectric layer or a microcavity structure have superior sensing characteristics, can be applied to the high-sensitivity and rapid detection of analytes, and have broader application prospects in the sensing field.
## EDITOR'S SUGGESTION
2021, 70 (3): 038103. doi: 10.7498/aps.70.20201134
2021, 70 (3): 038401. doi: 10.7498/aps.70.20201336
The relativistic klystron amplifier (RKA) is one of the most efficient sources for amplifying a high-power microwave signal, owing to its intrinsic merits of high power conversion efficiency, high gain and stable operating frequency. However, the transverse dimensions of the RKA decrease dramatically when the operating frequency increases to X band, and the power capacity of the RKA is limited by these transverse dimensions. An X-band multiple-beam relativistic klystron amplifier is proposed to overcome this radiated-power limitation. Each electron beam propagates in a separate drift tube while all beams share the same coaxial interaction cavities, so the transverse dimensions of the multiple-beam relativistic klystron amplifier are freed from the operating-frequency restriction, and a microwave power of over 1 GW is generated in the experiment. For a high-power electron device, the transmission of the electron beam is critical, as it affects the power conversion efficiency of the device. In this paper, we investigate the transmission of intense relativistic multiple electron beams; the number of beams is set to 16. It is found that when the multiple beams are transmitted in the device, each beam rotates around the center of the whole device, causing it to deviate from its drift tube channel. At the same time, each beam rotates about its own axis, and its cross section is deformed and expanded. If the electron beam and drift tube parameters are improperly designed, these two kinds of rotation cause beam loss. A multiple-electron-beam diode structure is optimized by particle-in-cell simulation to reduce beam loss, with the related factors taken into account. Each cathode pole is made of graphite and stainless steel: the cathode head is made of graphite, which has a lower emission threshold, while the cathode base and cathode pole are made of stainless steel, which has a higher emission threshold. The shapes and structures of the cathode pole, cathode head and anode are also optimized to reduce the electric field intensity on the cathode pole and enhance the electric field intensity on the end face of the cathode head. At the same time, the electric field distribution on the cathode head is made uniform to improve the emission uniformity of the electron beams. The simulation results demonstrate that the transmission efficiency of the multiple electron beams can reach 99%. In the experiment, the transmission efficiency is 92% at a beam voltage of 801 kV and a beam current of 9.3 kA.
2021, 70 (3): 038402. doi: 10.7498/aps.70.20201475
Compressed sensing is a revolutionary signal processing technique, which allows the signals of interest to be acquired at a sub-Nyquist rate while still permitting perfect reconstruction of the signals from highly incomplete measurements. As is well known, the construction of the sensing matrix is one of the key technologies for moving compressed sensing from theory to application. Because a Toeplitz sensing matrix supports fast algorithms and corresponds to a discrete convolution operation, it has essential research significance. However, the conventional random Toeplitz sensing matrix, owing to the uncertainty of its elements, is subject to many limitations in practical applications, such as high memory consumption and difficult hardware implementation. To avoid these limitations, we propose a bipolar Toeplitz block-based chaotic sensing matrix (Bi-TpCM) by combining the intrinsic advantages of Toeplitz structure and bipolar chaotic sequences. Firstly, the generation of the bipolar chaotic sequence is introduced and its statistical characteristics are analyzed, showing that the generated bipolar chaotic sequence is an independent and identically distributed Rademacher sequence, which makes it suitable for constructing the sensing matrix. Secondly, the proposed Bi-TpCM is constructed, and it is proved that Bi-TpCM has almost optimal theoretical guarantees in terms of coherence and also satisfies the restricted isometry condition. Finally, the measurement performances on one-dimensional signals and images using the proposed Bi-TpCM are investigated and compared with those of its counterparts, including the random matrix, the random Toeplitz matrix, the real-valued chaotic matrix, and the chaotic circulant sensing matrix. The results show that Bi-TpCM not only performs better on these test signals, but also possesses considerable advantages in terms of memory cost, computational complexity, and hardware realization. In particular, the proposed Bi-TpCM is extremely suitable for compressed sensing measurement of linear time-invariant (LTI) systems with multiple inputs and a single output, such as joint parameter and time-delay estimation for finite impulse response systems. Moreover, the construction framework of the proposed Bi-TpCM can be extended to different chaotic systems, such as the Logistic or Cat chaotic systems, and Hankel-block, additionally stacked block, and partial circulant block sensing matrices can also be derived from it. With these block-based sensing architectures, compressed sensing can be implemented more easily for various compressed measurement problems of LTI systems.
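As an illustration of the construction idea: a chaotic sequence is binarized to ±1 and arranged in Toeplitz form. The Chebyshev map, the sign-based binarization and the column normalization below are illustrative assumptions, not necessarily the paper's exact recipe.

```python
import numpy as np
from scipy.linalg import toeplitz

def chebyshev_sequence(n, x0=0.3, order=4):
    """Iterate the Chebyshev map x_{k+1} = cos(order * arccos(x_k)) on [-1, 1]."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = np.cos(order * np.arccos(x[k - 1]))
    return x

def bipolar_toeplitz(m, n, seed=0.3):
    """m x n Toeplitz matrix with entries sign(chaotic sample) in {-1, +1}."""
    seq = chebyshev_sequence(m + n - 1, x0=seed)
    bits = np.where(seq >= 0.0, 1.0, -1.0)      # bipolar (Rademacher-like)
    # First column uses bits[n-1:]; first row uses bits[n-1::-1].
    T = toeplitz(bits[n - 1:], bits[n - 1::-1])
    return T / np.sqrt(m)                       # column normalization (assumed)

Phi = bipolar_toeplitz(64, 256)
x = np.zeros(256); x[[10, 97, 200]] = [1.0, -0.5, 2.0]   # a 3-sparse test signal
y = Phi @ x                                              # compressed measurements
print(Phi.shape, y.shape)
```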
2021, 70 (3): 038501. doi: 10.7498/aps.70.20201095
With the development of microelectronics and the miniaturization of electronic devices, using molecular materials to construct the various components of electronic circuits has become a likely development trend. Compared with silicon-based semiconductor components, molecular electronic devices have the advantages of small size, high integration, low energy consumption and fast response. In recent years, more and more molecules have been used to design molecular devices such as molecular diodes, molecular switches, molecular field effect transistors and molecular memories. In this paper, sandwich-structure devices based on graphene nanoribbon electrodes are constructed. The first-principles method combining density functional theory with the non-equilibrium Green's function is adopted to design molecular devices with functional characteristics. The effects of redox reactions on the electrical transport properties of the molecular devices are systematically discussed. The main research contents are as follows. The switching characteristics of an anthraquinone molecular device based on graphene electrodes are studied. Zigzag-edge and armchair-edge graphene nanoribbons are selected as electrodes. Considering the two redox forms of the molecule, hydroquinone (HQ) and anthraquinone (AQ), two-electrode molecular junctions are constructed, and the effects of the redox reaction and the electrode structure on the switching characteristics of the anthraquinone molecular device are discussed. It is found that the current in the HQ configuration is significantly larger than that in the AQ configuration, for both the zigzag-edge and the armchair-edge graphene electrodes. That is, under the redox reaction, the anthraquinone molecule shows significant switching characteristics. The switching ratio reaches a maximum of 3125 with the zigzag-edge graphene electrode and a maximum of 1538 with the armchair-edge graphene electrode. In addition, when armchair-edge graphene is used as the electrode, the HQ configuration exhibits obvious negative differential resistance between 0.7 and 0.9 V.
2021, 70 (3): 038701. doi: 10.7498/aps.70.20200640
Confocal laser scanning microscopy (CLSM) is a powerful imaging tool providing high resolution and optical sectioning. In its standard optical configuration, a pair of confocal pinholes is used to reject out-of-focus light. The diffraction-limited resolution can be surpassed by reducing the confocal pinhole size, but this comes at the cost of an extremely low signal-to-noise ratio (SNR). The limited SNR problem can be solved by image scanning microscopy (ISM), in which the single-point detector of a regular point-scanning confocal microscope is substituted with an array detector such as a CCD or CMOS; two-fold super-resolution imaging can then be achieved by pixel reassignment and deconvolution. However, the practical application of ISM is challenging due to its limited image acquisition speed. Here, we present a hybrid microscopy technique, named multifocal refocusing after scanning using helical phase engineering microscopy (MRESCH), which combines double-helix point spread function (DH-PSF) engineering with multifocal structured illumination to dramatically improve the image acquisition speed. In the illumination path, sparse multifocal illumination patterns are generated by a digital micromirror device for parallel acquisition of imaging information. In the detection path, a phase mask is introduced to modulate the conventional PSF into the DH-PSF, which provides volumetric information; meanwhile, we also present a digital refocusing strategy for processing the collected raw data to recover wide-field images of different sample layers. To demonstrate the imaging capabilities of MRESCH, we image mitochondria in live HeLa cells and make a detailed comparison with wide-field microscopy. In contrast to the conventional wide-field approach, MRESCH extends the imaging depth over a range from –1 μm to 1 μm. Next, we image the F-actin of bovine pulmonary artery endothelial cells to characterize the lateral resolution of MRESCH. The results show that MRESCH has better resolving power than conventional wide-field illumination microscopy. Finally, the proposed image scanning microscopy can record three-dimensional specimen information from a single multi-spot two-dimensional scan, which ensures faster data acquisition and a larger field of view than ISM.
2021, 70 (3): 038702. doi: 10.7498/aps.70.20201122
Electrocardiogram (ECG) diagnosis is based on the waveform, duration and amplitude of the characteristic waves, which places high accuracy requirements on ECG signal reconstruction. As an effective nonlinear signal processing method, empirical mode decomposition (EMD) has been widely used for diagnosing and reconstructing the ECG signal, but two problems arise here. One is mode mixing; the other is that the mode components used in reconstruction are identified by experience. Therefore, the reconstruction method is neither adaptive nor universal, and the reconstructed ECG signal loses accuracy. Firstly, we propose an improved EMD method, called integral mean mode decomposition (IMMD). Analysis of 5000 samples of Gaussian white noise shows that IMMD has better multi-resolution analysis ability than EMD and can effectively alleviate mode mixing. Secondly, based on an inherent physical characteristic of the ECG signal, the cardiac cycle (or heart rate, HR), identifying the mode components used in ECG signal reconstruction has practical physical significance. The cardiac cycle manifests itself in an intrinsic mode function (IMF) component in two ways. 1) For the low-order IMFs belonging to the ECG signal, the cardiac cycle acts as amplitude modulation: the envelope of the IMF component has the characteristics of the cardiac cycle, and the frequency corresponding to the maximum amplitude in the spectrum of the envelope equals HR. 2) For the high-order IMFs belonging to the ECG signal, the cardiac cycle acts as frequency modulation: those IMF components have the harmonic characteristics of periodic heartbeats, and the maximum amplitude in the spectrum corresponds to an integer multiple of HR (usually 1–3 times). An IMF component attributed to noise shows neither of these two cardiac-cycle characteristics. Thus the proposed method is adaptive and universal. Forty-seven ECG signals with baseline drift and muscle artifact noise are tested. The results show that the proposed method is more effective than variational mode decomposition (VMD), the Haar wavelet with soft thresholding, ensemble empirical mode decomposition (EEMD) and EMD. Among the 47 correlation coefficients between reconstructed and original ECG signals, the proposed method is better than VMD in 31 cases, better than the Haar wavelet in 33, better than EEMD in 42 and better than EMD in 45. The mean of the 47 correlation coefficients from the proposed method is 0.8904 and the variance is 0.0071, which shows that the proposed method has good performance and stability.
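The selection rule described here is easy to prototype. Below is a hedged sketch on a synthetic signal; the tolerance and the harmonic range are my assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import hilbert

# Classify an IMF as ECG-related if its envelope spectrum peaks at the heart
# rate (amplitude modulation) or its own spectrum peaks at a small integer
# multiple of the heart rate (frequency modulation). Synthetic test data.
fs, hr = 250.0, 1.2          # sample rate (Hz) and heart rate (Hz), ~72 bpm
t = np.arange(0, 20, 1 / fs)
imf = np.sin(2 * np.pi * 12 * t) * (1 + 0.8 * np.sin(2 * np.pi * hr * t))

def peak_freq(sig, fs):
    """Frequency of the largest spectral peak (DC excluded)."""
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return freqs[np.argmax(spec)]

env = np.abs(hilbert(imf))           # amplitude envelope via Hilbert transform
f_env, f_imf = peak_freq(env, fs), peak_freq(imf, fs)
am = np.isclose(f_env, hr, atol=0.1)                      # envelope peak at HR?
fm = any(np.isclose(f_imf, k * hr, atol=0.1) for k in (1, 2, 3))
print(f"envelope peak {f_env:.2f} Hz, IMF peak {f_imf:.2f} Hz -> keep: {am or fm}")
```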
2021, 70 (3): 038801. doi: 10.7498/aps.70.20201219
To avoid the environmental pollution caused by lead, tin-based perovskite solar cells have become a research hotspot in the photovoltaic field. Numerical simulations of tin-based perovskite solar cells with different electron transport layers and hole transport layers are conducted with the solar cell simulation software SCAPS-1D, and the performances of the cells with different carrier transport layers are compared and analyzed. The results show that band alignment between the carrier transport layer and the perovskite layer is critical to cell performance. A higher conduction band or electron quasi-Fermi level of the electron transport layer leads to a higher open-circuit voltage. Similarly, a lower valence band or hole quasi-Fermi level of the hole transport layer also promotes a higher open-circuit voltage. In addition, when the conduction band of the electron transport layer is higher than that of the absorber, a spike barrier is formed at the interface between the electron transport layer and the perovskite layer. Likewise, a spike barrier is formed at the interface between the perovskite layer and the hole transport layer if the valence band of the hole transport layer is lower than that of the absorber. However, if the conduction band of the electron transport layer is lower than that of the absorber, or the valence band of the hole transport layer is higher than that of the absorber, a cliff barrier is formed. Although carrier transport is more hindered by a spike barrier than by a cliff barrier, a cliff barrier lowers the activation energy for carrier recombination below the bandgap of the perovskite layer, so the spike case shows weaker interface recombination and better performance. Compared with other materials, satisfying output parameters are obtained when Cd0.5Zn0.5S and MASnBr3 are adopted as the electron transport layer and the hole transport layer, respectively. The best performances obtained are as follows: Voc = 0.94 V, Jsc = 30.35 mA/cm2, FF = 76.65%, and PCE = 21.55%; thus Cd0.5Zn0.5S and MASnBr3 are suitable carrier-transport-layer materials. This research can help to design high-performance tin-based perovskite solar cells.
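As a consistency check on the quoted parameters via PCE = Voc·Jsc·FF/Pin, assuming the standard AM1.5G input power of 100 mW/cm², which the abstract does not state:

```python
# Consistency check of the quoted cell parameters.
Voc = 0.94          # open-circuit voltage (V)
Jsc = 30.35         # short-circuit current density (mA/cm^2)
FF = 0.7665         # fill factor
Pin = 100.0         # incident power (mW/cm^2), assumed AM1.5G

pce = Voc * Jsc * FF / Pin * 100.0
print(f"PCE = {pce:.2f} %")   # ~21.9 %, close to the reported 21.55 %
```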
|
# Arguments against Reductio ad Absurdum [closed]
Could reductio ad absurdum not be considered a valid proof method? Are there any compelling arguments against it, or in its favor?
I feel like I am assuming some metamathematical hypothesis about my set of axioms that may not be true when I use it, so I always try to convert such proofs to the contrapositive. But I don't know of any arguments, or even a bibliography, on this matter.
Edit: As some have pointed out, this question is somewhat related to (Reductio ad absurdum or the contrapositive?). However, I want to understand what metamathematically justifies the method of reductio, and whether there are arguments against it.
-
## closed as not constructive by Marc Palm, Steven Landsburg, Andy Putman, Simon Thomas, Henry CohnMay 31 '12 at 15:40
You can prove that reduction to the absurd is a valid method of proof in classical first order logic. You want to show that $T\cup\{\lnot\phi\}$ is contradictory if and only if $T\vdash\phi$. One direction is trivial. For the other assume that $T\cup\{\lnot\phi\}$ is contradictory, then by the principle of explosion we have $T\vdash\lnot\phi\to\phi$. It's easy to see that $(\lnot\phi\to\phi)\to\phi$ is a tautology hence $T\vdash\phi$. – Apostolos May 31 '12 at 15:06
Suppose reductio ad absurdum is valid. Then... – Qiaochu Yuan May 31 '12 at 15:36
I had nearly finished writing an answer when the question was closed, so I'll move the answer to three comments. To provide a common context for the methods of contraposition and reductio ad absurdum, I'll assume you're interested in proving an implication $A\to B$ (because only implications have contrapositives). So the contrapositive is $(\neg B)\to(\neg A)$. Reductio ad absurdum would mean (to me) deducing a contradiction (which I'll denote by the usual symbol $\bot$) from the hypothesis $A$ and the negation of the conclusion; so you'd be proving $(A\land\neg B)\to\bot$. – Andreas Blass May 31 '12 at 15:44
The validity of both methods (in classical logic) amounts to the observation that all three of the formulas $[A\to B]$, $[(\neg B)\to(\neg A)]$, and $[(A\land\neg B)\to\bot]$ are tautologically equivalent (as can easily be checked by writing down their truth tables, or more easily by thinking about it for a moment). So, if you've proved any one of the three, the others follow. – Andreas Blass May 31 '12 at 15:44
If you use constructive (intuitionistic) logic instead of classical, then $(\neg B)\to(\neg A)$ and $(A\land\neg B)\to\bot$ are still equivalent, but they are in general weaker than $A\to B$; indeed, they are equivalent to $A\to\neg\neg B$. So, in general, neither of the two methods is constructively justified. If, however, $B$ is itself a negation, say $\neg C$, then everything is OK again, because $\neg\neg\neg C$ and $\neg C$ are equivalent even in constructive logic. – Andreas Blass May 31 '12 at 15:45
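To make the last remark concrete, here is a minimal sketch in Lean 4 (not part of the original thread) of both directions of the triple-negation equivalence; neither proof term invokes any classical axiom:

```lean
-- ¬¬¬C → ¬C: given h : ¬¬¬C and c : C, the term (fun nc => nc c) : ¬¬C
-- feeds h to produce False.
example (C : Prop) : ¬¬¬C → ¬C :=
  fun h c => h (fun nc => nc c)

-- ¬C → ¬¬¬C: given nc : ¬C and nnc : ¬¬C, applying nnc to nc yields False.
example (C : Prop) : ¬C → ¬¬¬C :=
  fun nc nnc => nnc nc
```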
|
# Solving x^2-6 = sqrt(x+6)
1. Feb 24, 2009
### rafehi
1. The problem statement, all variables and given/known data
x^2-6 = sqrt(x+6)
Solve for x.
3. The attempt at a solution
I've tried factorizing, but with no luck. I can get it down to:
x^4 - 12x^2 - x + 30 = 0,
but thinking back to my high school days, I'm sure there's an easier way to solve this question - just can't seem to remember how.
Any help?
2. Feb 24, 2009
### HallsofIvy
Fortunately, you already know that 3 is a solution, by "inspection". Dividing $x^4- 12x^2- x+ 30$ by $x- 3$ gives $x^3+ 3x^2- 3x- 10$ so $x^4- 12x^2- x+ 30= (x- 3)(x^3+ 3x^2- 3x- 10)$. If that cubic has any rational roots, they must evenly divide 10: so must be 1, -1, 2, -2, 5, -5, 10, or -10. Checking those, we find that x= -2 is a root! Dividing $x^3+ 3x^2- 3x- 10$ by x+ 2 gives $x^2+ x- 5$. Now you can use the quadratic formula to find the last two roots.
3. Feb 24, 2009
Er, $$-2$$ is not a solution to the original problem: the left hand side works out as $$-2$$ while the right hand side is $$2$$. I believe $$x = 3$$ is the only solution. HallsofIvy didn't make that part clear, although I believe he meant it.
rafehi, did you find $$x = 3$$ or was it given as the answer? If it was given, and you need to find it, when you have
$$x^4- 12x^2- x+ 30 = 0$$
start by looking for rational zeros. Since the left side has two sign changes in coefficients, it has either two or no positive zeros. Try the integers first - you'll find 3. Divide by $$x - 3$$ and work with the cubic. Look at integer possibilities first again (simply because they are easiest to look for - just plug in the possibilities) - you'll find $$x = -2$$ is a solution to the derived equation. Divide the cubic by $$x + 2$$ to get the quadratic. Now use the quadratic formula to get the other values.
Since the four values come from an equation you obtained by squaring the initial problem, each one must be checked in the original problem, and $$x = 3$$ is the only one that works there.
4. Feb 24, 2009
### rafehi
Thanks, both of you.
The required answer was x=3. My younger brother asked me for help with his homework and my confidence was a bit bruised when after 20 minutes I couldn't solve the problem.
The question was finding the value of x for which f(x) = inverse of f(x). Actually, I'm not sure I've ever been required to solve a quartic in school/1st year uni maths, though should have figured it wouldn't be dissimilar to solving a cubic. Problem being I never even tried to solve the quartic, because I figured it'd be above what was expected of the students (as opposed to just plugging it into a calculator and looking for the intercepts). I thought I remembered a simpler method from high school - clearly, I was wrong. Might have been confusing it with a differentiation question.
Again, thanks to both for your replies.
5. Feb 25, 2009
### Mentallic
While cubics and quartics are out of range for high school students, this doesn't mean they don't appear. There are usually enough rational roots and other helpful info to factor it into a quadratic or so.
x=3 is one of the solutions, but look closely at the solutions for the quadratic $$x^2+x-5=0$$ and test them (quickly testing numerical approximations will do) in the original equation.
6. Feb 25, 2009
### HallsofIvy
You are right that -2 does not satisfy the original equation. I was referring to the polynomial equation ravfehi gave and did not look to see if satified the original equation.
If $x^2-6 = \sqrt{x+6}$, then, squaring both sides, $x^4- 12x^2+ 36= x+ 6$, i.e. $x^4- 12x^2- x+ 30= 0$, is a necessary condition but, since squaring both sides of an equation may introduce new roots, not a sufficient condition. Every root of the square root equation must be a root of the quartic, but roots of the quartic are not necessarily roots of the square root equation. 3 is a root of the quartic that does satisfy the square root equation. -2 is a root of the quartic that does NOT satisfy the square root equation. The other two solutions of the quartic involve irrational square roots and do not satisfy the square root equation. (The square root of an irrational root is algebraic of order 4 while its square is algebraic of order 2, so they cannot be equal. But you can also do a numerical check as Mentallic said.) 3 is the only root of $x^2- 6= \sqrt{x+ 6}$.
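A quick numerical postscript (not part of the original thread): checking all four quartic roots against the original equation actually bears out Mentallic's hint in post 5. Besides $x=3$, the quadratic root $x=(-1-\sqrt{21})/2 \approx -2.7913$ also satisfies $x^2-6=\sqrt{x+6}$, since both sides are positive there.

```python
import numpy as np

# Check every root of x^4 - 12x^2 - x + 30 = 0 against x^2 - 6 = sqrt(x + 6).
roots = np.roots([1, 0, -12, -1, 30]).real    # all four roots are real here
for r in sorted(roots):
    lhs, rhs = r**2 - 6, np.sqrt(r + 6)
    print(f"x = {r:+.6f}: lhs = {lhs:+.6f}, rhs = {rhs:+.6f}, ok = {np.isclose(lhs, rhs)}")
# Two roots survive the sign check: x = 3 and x = (-1 - sqrt(21))/2 = -2.791288;
# x = -2 and x = (-1 + sqrt(21))/2 give a negative left-hand side and are spurious.
```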
|
# HBC - Hyperbolic Betweenness Centrality
#### Definition
BC is defined as a normalized sum, over each pair of vertices, of the fractions of hop-measured shortest paths that pass through the node examined. HBC is a modified version of BC obtained by replacing shortest paths with greedy paths:
$$HBC(v)=\sum_{s,t\in V} \frac{\sigma_t^v(s)}{\sigma_t(s)}$$
where $\sigma_t(s)$ is the number of greedy paths with source node $s$ and destination $t$, and $\sigma_t^v(s)$ is the number of greedy paths with source node $s$ and destination node $t$ that pass via $v\ne s,t$. Note that the greedy paths connecting a source-destination pair need not all have the same length, unlike the hop-measured shortest paths in the definition of BC.
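Since greedy paths, unlike shortest paths, are determined by node coordinates, a small self-contained sketch may help. It assumes deterministic, tie-free greedy forwarding, so each pair has at most one greedy path and the sum simply counts pairs routed through $v$; the graph and coordinates are made up.

```python
import numpy as np
import itertools

def hyp_dist(a, b):
    """Hyperbolic distance between points a = (r1, t1) and b = (r2, t2)."""
    (r1, t1), (r2, t2) = a, b
    dtheta = np.pi - abs(np.pi - abs(t1 - t2) % (2 * np.pi))
    x = np.cosh(r1) * np.cosh(r2) - np.sinh(r1) * np.sinh(r2) * np.cos(dtheta)
    return np.arccosh(max(x, 1.0))

def greedy_path(adj, coords, s, t):
    """Forward to the neighbor closest (hyperbolically) to t; None if stuck."""
    path, cur = [s], s
    while cur != t:
        nxt = min(adj[cur], key=lambda u: hyp_dist(coords[u], coords[t]))
        if hyp_dist(coords[nxt], coords[t]) >= hyp_dist(coords[cur], coords[t]):
            return None                  # greedy routing failed (local minimum)
        path.append(nxt)
        cur = nxt
    return path

def hbc(adj, coords):
    score = {v: 0.0 for v in adj}
    for s, t in itertools.permutations(adj, 2):
        p = greedy_path(adj, coords, s, t)
        if p:
            for v in p[1:-1]:            # interior nodes only (v != s, t)
                score[v] += 1.0
    return score

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
coords = {0: (0.1, 0.0), 1: (1.0, 1.0), 2: (1.0, 5.0), 3: (2.0, 1.2), 4: (2.0, 5.5)}
print(hbc(adj, coords))
```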
|
# Univariate priors for the parameters of a Beta distribution
I need a prior on the parameters of a Beta distribution (i.e. $\alpha$ and $\beta$). I have an external constraint that requires me to use univariate priors, one for $\alpha$ and one for $\beta$.
Ideally I would like to use two univariate priors that together are as close as possible to something like $p(\alpha,\beta)\propto(\alpha+\beta)^{-5/2}$ (for anyone interested in this particular choice, see this thread)
What priors can I use for them?
Any prior on $$\alpha$$ (or $$\beta$$) is admissible as long as it satisfies the requirements of the beta distribution in your parameterization, usually $$\alpha >0$$ and $$\beta >0$$, and as long as it yields a finite posterior. Assuming univariate priors and independence of $$\alpha$$ and $$\beta$$, one option might be the exponential distribution, since it is bounded below by $$0$$. Additionally, it has a mode at $$0$$, meaning that plausible values will tend to be small. Some might find this attractive because they may desire only vague prior information. In this case, your priors are $$p(\alpha)=\lambda_\alpha\exp(-\lambda_\alpha \alpha)$$$$p(\beta)=\lambda_\beta\exp(-\lambda_\beta \beta)$$
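As a rough illustration of this suggestion (not from the answer above), one can compare the product of two independent exponential priors with the target joint prior along the diagonal $\alpha = \beta$; the rate below is an arbitrary assumption that one would tune in practice.

```python
import numpy as np

# Compare Exp(lam) x Exp(lam) with p(alpha, beta) ~ (alpha + beta)^(-5/2)
# along alpha = beta. lam = 0.5 is an arbitrary assumed rate.
lam = 0.5
a = np.linspace(0.1, 20.0, 2000)
da = a[1] - a[0]

exp_prior = (lam * np.exp(-lam * a)) ** 2     # p(alpha) * p(beta) at beta = alpha
target = (2.0 * a) ** -2.5                    # unnormalized target on the diagonal

# Normalize both on this grid so only the shapes are compared.
exp_prior /= exp_prior.sum() * da
target /= target.sum() * da
print(f"max abs shape difference: {np.abs(exp_prior - target).max():.4f}")
```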
|
# Tag Info
## New answers tagged hilbert-spaces
0
From the information given we can calculate: $AB = (AB)^* = B^* A^* = B^* A$. To be able to conclude that $B^*=B$, we would need to be able to invert $A$, which would give us: $B^* = A B A^{-1}$. And then we would need that $B = A B A^{-1}$ for which I have forgotten the name.
0
Using your $g(t) = t - \frac{\pi}{2}$ for $t > 0$ and $g(t) = t + \frac{\pi}{2}$ for $t < 0$, we have $$g(t) - t = -\frac{\pi}{2} \ \text{for}\ t > 0$$ and $$g(t) - t = \frac{\pi}{2} \ \text{for}\ t < 0,$$ so $g(t) - t$ is an odd function. Hence $$\int_{-\pi}^{\pi} s(t)(g(t) - t)\,dt = 0$$ for any $s \in Y$. For any $f \in Y$, we have $\int_{-\pi}^{\pi}f (g -h) ...$
3
Choose $N\in \mathbb{N}$ so that $\sum_{n\geqslant N} \|f_n-e_n\|^2 < 1$. We will show that $(f_k)_{k\geqslant N}$ is a basis for $\{e_k\colon k<N\}^\perp$, which will be enough. Let $f\in \{e_k\colon k<N\}^\perp$ and suppose that $(f,f_{n})=0$ for all $n\geqslant N$. I claim that $f=0$. Assume not. Then $$\|f\|^{2}=\sum_{n\geqslant ...
1
$i(A+1)$ is a symmetric tri-diagonal matrix on each of the subspaces spanned by the even and odd coordinates. These are equivalent to Jacobi matrices (real symmetric with positive off-diagonal elements). It is a classical result that these define essentially self-adjoint operators if the off-diagonal elements grow no more rapidly than $n$. By Theorem 2.7 of ...
1
It means that for any scalar $s\in \mathbb{C}$ and for all vectors $f,g_1,g_2$ we have $\langle f,s g_1+ g_2\rangle=\bar s\cdot \langle f,g_1\rangle+\langle f,g_2\rangle$.
1
If you provide more context regarding where you have met a weight function in Quantum Mechanics, someone might comment about the physical meaning. For motivation, recall that even on a finite dimensional vector space, there are many different possible inner products on a single vector space - not only the standard one. A choice of an inner product on a ...
2
If $\phi_n= \psi_1 + \psi_{n+1}$, then with $$x_n:=\sum_{k=1}^n \frac{1}{n} \phi_k =\sum_{k=1}^n\frac{\psi_1}{n}+\sum_{k=1}^n\frac{\psi_{k+1}}{n}= \psi_1 +\frac{1}{n}\sum^n_{k=1}\psi_{k+1}$$ you get $$\langle x_n - \psi_1, x_n - \psi_1\rangle = 1+\|x_n\|^2 -2 \langle x_n ,\psi_1\rangle = \|x_n\|^2-1.$$ Now ...
0
Let $S=\{x\in\mathcal{H}:\|x\|=1\}$. Since $S$ is closed and bounded, $S$ is weakly compact. But $T$ is compact, so $T$ is weak-norm continuous. Thus $f:H\to\mathbb{R}$ given by $f(x)=\|Tx\|$ is weakly continuous. Since $H$ equipped with the weak topology is Hausdorff and $S$ is weakly compact, $f$ attains a global maximum on $S$. That is, there is ...
0
Regarding your comment about bounded functionals: your answer is not correct. The Hahn-Banach theorem works in the other direction. Given $x\in X$, there exists a bounded functional $f$ with $\|f\|=1$ and $f(x)=\|x\|$. Your argument proves what I claimed. What you need to do on Hilbert spaces is just apply the Riesz representation theorem to obtain a vector $y\in ...
3
We will prove that $\exists x_0\in S(0,1)=\{x\in H:\|x\|=1\}$ with $\|Tx_0\|=\|T\|$. You know that $\|Tx\|\leq \|T\|\,\forall x\in B(0,1)$, so there exists a sequence $\{x_n\}\subset B(0,1)$ with $\|Tx_n\|\to \sup\limits_{x\in B(0,1)} \|Tx\|=:\|T\|$. From this sequence you can choose a weakly convergent subsequence, still denoted by $\{x_n\}$, in $B(0,1)$, say ...
2
If you prove that the image of the closed unit ball of $\mathcal H$ by $T$ is compact in $\mathcal H$, then there exists $x\in \mathcal H$ with $\|x\|=1$ such that $\|Tx\|=\|T\|$. Indeed, since $T(B_\mathcal H)$ is compact, the norm $\|\cdot\|$ attains its maximum on $T(B_\mathcal H)$. Therefore, the set $\{\|Tx\|:\; x\in B_\mathcal H\}$ is closed. This ...
0
In your proof a slight modification is needed. We have $UU^* =U^*U =I$, so $U$ is invertible with $U^{-1}= U^*$. Also, as you have already shown that $\|U\| = 1 = \|U^{-1}\|$, we must have $$\sigma (U) \subset \{\lambda \in \mathbb{C} : |\lambda| \leq 1\}\;\; \text{and}\;\; \sigma (U^{-1}) \subset \{\lambda \in \mathbb{C} : |\lambda| \leq 1\}.$$ Now to ...
0
You don't seem to have set up the approach properly. You should start with the tensor product of the $H_i$'s, i.e. $H=H_1\otimes H_2\otimes \cdots \otimes H_n$. This is a quotient vector space. An expression of the form $\phi_1\otimes \phi_2\otimes \cdots \otimes \phi_n$ represents an equivalence class in $H$. In order for the inner product to be ...
0
We have (by uniqueness) that $u = M_{{\mathrm{sgn}}\,\varphi}$, where $${\mathrm{sgn}}\,\varphi(x) = \begin{cases}0, &\varphi(x) =0,\\ \frac{\varphi(x)}{|\varphi(x)|}, &\varphi(x)\neq 0.\end{cases}$$
0
No. Any operator on a finite dimensional space is compact and in particular, any non-trivial nilpotent operator $T$ is a counterexample.
4
Consider the space $H = L^2([0,1],\mu)$, where $\mu$ is the Lebesgue measure. Define $T$ to be multiplication by the identity function, i.e. $$(Tf)(x) = x\cdot f(x).$$ Since the identity function is bounded, $T$ is bounded ($\lVert T\rVert \leqslant 1$), and since it is real-valued, $T$ is self-adjoint. Clearly $T$ has no eigenvalues, since $$(T - ...
1
It occurred to me that your problem has to do with a creation and annihilation operator, according to \begin{eqnarray*} X &=&U+V \\ a^{\ast } &=&V,\;a=U \end{eqnarray*} see below. Let $\mathcal{H}=l^{2}$ with elements $u=u_{1},u_{2},\cdots$ and let $K$ be defined by \begin{eqnarray*} \mathcal{D}(K) ...
1
Your conjecture is false. It is always the case that $T^{\star}T$ is densely-defined and selfadjoint if $T$ is a closed densely-defined linear operator on a Hilbert space $H$. Let $H=L^2[0,1]$ and let $\mathcal{AC}[0,1]$ be the absolutely continuous functions on $[0,1]$. Define $T=\frac{d}{dt}$ on the domain $$\mathcal{D}(T)=\{ f \in \mathcal{AC}[0,1] ...
3
Well, no answer has been given, so here's one: We know $\{\cos nx:n=0,1,\dots \}\cup \{\sin nx:n=1,2,\dots \}$ is an orthogonal basis of $L^2[-\pi,\pi]$. Suppose $g\in L^2[-\pi,\pi]$ is odd. Then $$\int_{-\pi}^\pi g(x)\cos nx\, dx = 0,\quad n=0,1,\dots$$ Thus such a $g$ can be written uniquely as $$\tag 1 g(x)=\sum_{n=1}^{\infty} b_n \sin nx,$$ the sum ...
2
Even with the correction I gave in the comment, this just isn't true: Consider $H = \Bbb R^2$, $V = \{(r, 0) \mid r \in \Bbb R\}$. Let $P(r,s) = (r - s,0)$. Then $P$ is linear, $P^2 = P$, and $P(H) = V$, but if $x = (2,1)$, $$P(x) = (1,0)$$ while $$\operatorname{argmin}_{y\in V}\langle y - x, y - x \rangle^2 = (2,0)$$ and $$\operatorname{argmin}_{y\in ...
2
Your proposed domain for the adjoint $X^\star$ appears to me to be correct. As defined, $$(Xf,g) = \sum_{j=0}^{\infty}(\sqrt{j+1}f_{j+1}+\sqrt{j}f_{j-1})\overline{g_j}.$$ By definition of adjoint, $g\in\mathcal{D}(X^{\star})$ iff there exists $h \in \ell^2$ such that the following holds for all $f \in \mathcal{D}(X)$: $$ ...
1
By definition, unitaries preserve the norm. So $$\|Cy-CSy\|=\|C(y-Sy)\|=\|y-Sy\|.$$ Thus, $CV=V$.
1
Hint: We define $\varphi : V \to \mathbb R$ by $\varphi(v) := \langle f,v\rangle_{V^*,V}$. Now, we view $V$ as a (dense) subspace of $H$ and, thus, the functional $\varphi$ is defined on this dense subspace and continuous w.r.t. the norm in $H$.
1
You are correct. We can note that our space is isometric to the subspace of $l^2(\mathbb{N})$ comprised of sequences with finitely many non-zero entries, by the isometry $\sum_{n=0}^N a_n z^n\mapsto (a_0, a_1, \ldots, a_N, 0, 0, \ldots)$. (Note that, indeed, $\left\langle \sum_{n=0}^N a_n z^n, \sum_{n=0}^M b_n z^n\right\rangle = ...
2
$X=C[0,1]$ is not complete with respect to the norm $\|f\|=(f,f)^{1/2}=\left(\int_0^1|f(t)|^2\,dt\right)^{1/2}$. The completion of $C[0,1]$ with respect to this norm is $L^2[0,1]$. Every $f \in X$ can be written as $$f = \left[f-\frac{(f,t^2)}{(t^2,t^2)}t^2\right]+\frac{(f,t^2)}{(t^2,t^2)}t^2.$$ The vector in square brackets is orthogonal to $t^2$. So ...
0
A metric space is complete iff every Cauchy sequence converges. If we want to prove that $X_0$ is incomplete, we have the advantage that all of our spaces are subspaces of the larger Hilbert space $L^2[0,1]$. Find a sequence in $X_0$ that converges (in $L^2$) to a discontinuous function. The sequence is Cauchy (that property only depends on the metric, not ...
1
a) The first task here is to interpret the question appropriately. I guess it should be read as, "Show that for every partial isometry $V\in B(H)$ on a finite-dimensional Hilbert space $H$, the restriction $V|_{(\ker V)^{\perp}}$ can be extended to a unitary on $H$." (The only extension of $V$ to $H$ is $V$ itself, and it is certainly not true that every ...
1
Assuming you see how to show it's a semigroup of bounded operators and you're just stuck on the "strongly continuous" part: We need to show that $\|x-S(t)x\|\to0$ as $t\to0$. Say $x=\sum a_ne_n$, where $\sum|a_n|^2<\infty$. Then $$\|x-S(t)x\|^2=\sum|1-e^{\lambda_nt}|^2|a_n|^2.$$ The hypothesis on $\lambda_n$ shows that there exists $c$ with ...
1
The map $$T:L^2\to l^2:f\mapsto\left(a_n=\frac1{\sqrt{2\pi}}\int f(t)\exp(-int)\,dt\right)_{n\in\mathbb Z}$$ sends any square integrable function to the sequence of its Fourier coefficients, so we have $$f(x)=\sum_{n=-\infty}^{+\infty}a_n\exp(inx)\tag{1}$$ for almost every $x$. The fact that $T$ is an isometry is known as Parseval's ...
2
Let $(z_n)$ be a convergent sequence in $L_1 + L_2$. Then we can write $z_n = \underbrace{x_n}_{\in L_1} +\underbrace{y_n}_{\in L_2}$. Since $L_1 \perp L_2$ we have $\| z_n \|^2 = \|x_n \|^2 +\|y_n\|^2$ (Pythagoras). Moreover, $(z_n)$ is Cauchy, and so $$\|x_n -x_m\|^2 +\|y_n -y_m\|^2 = \| z_n -z_m\|^2 \rightarrow 0,$$ implying $x_n\rightarrow x$; ...
2
The weak-* topology on $H^*$ is generated by the functionals $$\{h^* \mapsto h^*(x) \mid x \in H\}.$$ If you pull back this topology to $H$, you get the topology generated by $$\{y \mapsto \Phi(y)(x) \mid x \in H\} = \{y \mapsto (x,y) \mid x \in H\}.$$ But this is the same as the weak topology, which is generated by $$\{y \mapsto (y,x) \mid x \in H\}.$$
1
The operator $A$ is closed. So $Y=\mathcal{D}(A)$ is a Hilbert space under the graph inner product $$(x,y)_A = (x,y)+(Ax,Ay).$$ This is the same as your form norm on $\mathcal{D}(A^{\star}A)$. Your question is equivalent to asking if $\mathcal{D}(A^\star A)$ is dense in $Y$. To prove that $\mathcal{D}(A^{\star}A)$ is dense in the form space, ...
1
There is a post here that can be useful.
1
Note that it is not a priori clear that a map satisfying $T(e^{inx}) = a_n$ exists. In fact, there exist infinitely many different linear maps $T \colon L^2([-\pi,\pi]) \rightarrow \ell_2(\mathbb{Z})$, but only one of those maps will be the map you want, so it is not enough to say "we define $T$ by $T(e^{inx}) = a_n$". In linear algebra, given an (algebraic) ...
1
Fact 1. Every subnormal operator is hyponormal. Fact 2. In finite dimension every hyponormal operator is normal. (Since $\operatorname{tr}(A^*A-AA^*)=0$, and the only positive semidefinite operator having trace zero is the zero operator.) So if you consider any non-normal operator on a finite dimensional Hilbert space, for example any nilpotent matrix, this gives you an example ...
1
You are confused: Let $H$ be a Hilbert space, and let $B=\{u_j\}_{j=1}^\infty$ be a countable orthonormal basis. We know that if a set is a complete orthonormal basis, the set of all finite linear combinations is dense in $H$. It is true. Now, since the set of all finite linear combinations is dense, let $x\in H$; we have ...
0
If $A$ is positive, then the condition $\langle Ax,x\rangle=0$ implies $Ax=0$. Indeed, every positive operator $B$ on a Hilbert space admits a unique positive root $\sqrt B$ that satisfies $(\sqrt B)^2=B$. In your case this yields $$\|\sqrt Ax\|^2=\langle \sqrt{A}x,\sqrt{A}x\rangle=\langle Ax,x\rangle=0.$$ Therefore we have $\sqrt Ax=0$ and finally ...
1
We may assume that $H_0=H$, since $\lVert \langle \cdot,h_n\rangle-\langle \cdot,h_0\rangle\rVert \to 0$ if $h_n\to h_0$, by dominated convergence. Take a function $f\in L^2(\mu)$ such that $\int fg \,\mathrm d\mu=0$ for each $g\in \mathcal C_0$. Decomposing $f$ into positive and negative parts ($f^+$ and $f^-$ respectively), we have for each $h_0\in H$, ...
1
$P_M$ must be the orthogonal projection. That means that $a-P_Ma$ is orthogonal to $M$ and hence for any $x\in M$, we have $\|a-x\|^2=\|a-P_Ma\|^2+\|P_Ma-x\|^2\ge \|a-P_Ma\|^2$ (with equality iff $x=P_Ma$).
2
An approach is to use that every subnormal operator is hyponormal. Then, if we exhibit a non-hyponormal operator, we are finished. Take $H=\ell^2$ and $S$ the right shift. $T=(S^\ast+2S)^2$ is not hyponormal (the proof is straightforward). Thus $T$ is not subnormal.
2
Let $\rho = \sup_{\|x\|=1}\Re (Tx,x)$. Suppose $\rho <\Re\lambda$. Then $\Re(Tx,x)\le \rho(x,x)$ for all $x$, and \begin{align} (\Re\lambda-\rho)\|x\|^2 &\le \Re ((\lambda I-T)x,x) \\ (\Re\lambda-\rho)\|x\|^2 &\le \|(\lambda I-T)x\|\|x\| \\ (\Re\lambda-\rho)\|x\| &\le \|(\lambda I-T)x\|. \end{align} Therefore ...
0
The direct sum $H=E\oplus \mathbb C$ (or $\mathbb R$) defines a hyperplane $E$. This $E$ is a maximal subspace of $H$ and is the kernel of a linear form $f:H\to \mathbb C$ (or $\mathbb R$), i.e. $E=f^{-1}(\{0\})$. A typical case of a dense subspace is given when $f$ is not continuous: in fact $\bar E$ is still a subspace of $H$ (by continuity of the sum and ...
1
Not quite; it means that for any element $x$ in the Hilbert space and for any $\epsilon > 0$, there exists an element $s$ in the span such that $$|x-s| < \epsilon.$$
1
A subset $A$ of a topological (or in particular metric) space $X$ is dense if its closure is $X$. In a metric space (and a Hilbert space is a metric space) this is equivalent to the condition that every element of $X$ is the limit of some convergent sequence with its elements in $A$. The word "span" is already contained in "finite linear combination of ...
1
Yes. Simply note $$\|AB \phi - BA \phi \| \leq \|AB \phi - A_n B \phi\| + \|A_n B\phi - BA\phi\| = \|AB \phi- A_n B \phi\| + \|B A_n \phi - BA\phi\|\leq \|B\phi\|\cdot \|A-A_n\| + \|B\| \|A_n \phi - A\phi\|$$ as $n\to \infty$. Here, I assumed $A_n \to A$ in operator norm. But actually, an easy modification of the argument shows that it suffices to have ...
1
What you do is to define $\phi$ by linearity on $$\mathcal M_\phi=\text{span}\,\{a\in A^+:\ \phi (a)<\infty\}.$$ Then the inequality from your comment shows that $b^*a\in \mathcal M_\phi$ whenever $a,b\in\mathcal N_\phi$.
2
In my experience, the topic of subalgebras of $M_n(\mathbb C)$ is not part of the usual linear algebra curriculum. What you need to understand first is the form that finite-dimensional $C^*$-algebras have.
A finite-dimensional C^*-algebra A is always a finite direct sum$$\bigoplus_{k=1}^m M_{n(k)}(\mathbb C).$The "blocks" can be identified via the ... 1 As you pointed out in your comment,$\hat{A} : \hat{H}\rightarrow\hat{H}$is bounded. Therefore$\hat{A}-\lambda I$is invertible for$|\lambda| > \|\hat{A}\|_{\hat{H}}$, which guarantees that$\mathcal{R}(\hat{A}-\lambda I)=\hat{H}$for such$\lambda$, or$(A-\lambda I)\mathcal{D}(A)=\mathcal{D}(A)$for such$\lambda$. 1 To show that$\{ \alpha_n(x)\beta_m(y) \}_{n,m}$is an orthonormal basis, suppose that$f\in L^2(\mathbb{R}^{p}\times\mathbb{R}^{q})$satisfies$\int_{\mathbb{R}^{p+q}}\alpha_n(x)\beta_m(y)f(x,y)dxdy = 0$for all$n,m$. It is shown that$f=0$a.e.. Because$\alpha_n(x)\beta_m(y)f(x,y) \in L^1(\mathbb{R}^{p+q})$, Fubini's Theorem for complete$\sigma$-finite ... 1 An answer that is equivalent to your friend's answer, but without technically invoking the concept of differentiation on normed linear spaces: For$x\in H$, consider the function$h_x:\mathbb{R}\rightarrow\mathbb{R}$defined by$h_x(t) = g(u+tx)$. Then$h_x$has a minimum at$t=0$, so if$h_x$is differentiable there, then$h_x'(0)=0\$, i.e. if the limit ...
Top 50 recent answers are included
|
# Efficiency of center tapped full wave rectifier
January 1, 2021
A center tapped full wave rectifier uses a center tapped transformer and two diodes (D1 and D2) to convert both half cycles of an AC input into pulsating DC. The center tap is an additional wire connected at the exact middle of the secondary winding; it divides the secondary voltage into two equal parts and serves as the common ground or zero-voltage reference point. The upper and lower halves of the secondary produce voltages of equal magnitude but opposite phase with respect to the tap, and the AC source is connected to the primary winding.

## Working

During the positive half cycle of the input AC signal, terminal A of the secondary becomes positive, terminal B becomes negative, and the center tap sits at zero volts. Diode D1 is then forward biased and conducts, while diode D2 is reverse biased and carries no current. During the negative half cycle the roles reverse: D2 conducts and D1 blocks. In both half cycles the diode currents flow through the load resistor RL in the same direction, so the load current is the sum of the D1 and D2 contributions and direct current is delivered during the entire cycle. Unlike a half wave rectifier, which uses only a single diode and only one half cycle, no part of the input signal is wasted.

## Output, ripple and form factor

The output is not a pure direct current but a pulsating one; the residual AC components are called ripples and are measured by the ripple factor, defined as the ratio of ripple voltage to pure DC voltage. A high ripple factor indicates a heavily pulsating DC signal, while a low one indicates a smooth output. A full wave rectifier has a much lower ripple factor than a half wave rectifier, and its ripple frequency is twice as high, which makes filtering easier; a filter made up of a capacitor and resistor (a capacitor filter) smooths the output into a steadier DC voltage. Each diode contributes an average current of $I_{max}/\pi$, so the total DC load current is $I_{dc} = 2I_{max}/\pi$ and the average DC output voltage is double that of a half wave rectifier. The form factor is

$$K_f = \frac{I_{rms}}{I_{av}} = \frac{I_{max}/\sqrt{2}}{2I_{max}/\pi} = \frac{\pi}{2\sqrt{2}} = 1.11.$$

## Rectifier efficiency

Rectifier efficiency is defined as the ratio of DC output power to AC input power; a high percentage indicates a good rectifier, a low one an inefficient rectifier. For a full wave rectifier the maximum possible rectification efficiency is 81.2 %, twice the 40.6 % of a half wave rectifier, because both the positive and negative half cycles of the input are used.

## Peak inverse voltage

Peak inverse voltage (PIV), or peak reverse voltage, is the maximum reverse voltage a diode can withstand without being destroyed. While one diode conducts, the voltage across the entire secondary (terminals A to B) appears across the other, reverse biased diode, so each diode must have PIV = 2Vsmax. If the applied reverse voltage is greater than the PIV, the diode will be permanently destroyed.

## Advantages and disadvantages

Advantages: higher rectifier efficiency and lower ripple than a half wave rectifier; DC output voltage and DC load current twice those of a half wave rectifier; a common zero-potential terminal (the tap) for both half cycles; and equal currents flowing through the two halves of the secondary in opposite directions, so DC saturation of the transformer core is avoided.

Disadvantages: center tapped transformers are expensive and occupy a large space; each diode utilizes only one half of the voltage developed in the secondary, so the DC output obtained is comparatively small (the output voltage is half the total secondary voltage); and the required transformer VA rating is about 1.49 times the DC power delivered, so a 100 W load needs a transformer rated around 149 VA.

## Centre tapped versus bridge rectifier

The crucial thing that differentiates the two is the design architecture. A centre tapped rectifier consists of two diodes connected to the centre tapped secondary winding of the transformer and to the load resistor, whereas a bridge rectifier comprises four diodes connected in the form of a Wheatstone bridge, which provides full wave rectification without a center tap.
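To sanity-check the textbook figures quoted above, here is a small Python sketch (my own illustration, not from the original article) that evaluates the standard ideal-diode, resistive-load formulas for both rectifier types:

```python
import math

I_max = 1.0  # peak current (arbitrary units; it cancels out of the ratios)

# Half wave: I_dc = I_max/pi, I_rms = I_max/2
hw = (I_max / math.pi, I_max / 2)
# Full wave: I_dc = 2*I_max/pi, I_rms = I_max/sqrt(2)
fw = (2 * I_max / math.pi, I_max / math.sqrt(2))

for name, (idc, irms) in [("half wave", hw), ("full wave", fw)]:
    eff = idc ** 2 / irms ** 2                    # efficiency, ideal diodes
    ff = irms / idc                               # form factor
    ripple = math.sqrt(ff ** 2 - 1)               # ripple factor
    print(f"{name}: eff={eff:.1%}, form factor={ff:.2f}, ripple={ripple:.2f}")
# prints ~40.5% / 81.1% efficiency, form factors 1.57 / 1.11, ripple 1.21 / 0.48
```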
|
# What is the difference between graph semi-supervised learning and normal semi-supervised learning?
Whenever I look for papers involving semi-supervised learning, I always find some that talk about graph semi-supervised learning (e.g. A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning).
What is the difference between graph semi-supervised learning and normal semi-supervised learning?
Given that their main example, the MNIST dataset, is not graph-structured, they detail a method for converting the raw Euclidean data $X$ into said form (represented by its adjacency matrix $S$), and then compute the Laplacian $L$ of this graph:
We consider the graph-based semi-supervised learning (G-SSL) problem. The inputs include labeled data $X_{l} \in \mathbb{R}^{n_{l}\times d}$ and unlabeled data $X_{u} \in \mathbb{R}^{n_{u}\times d}$; we define the whole feature matrix $X = [X_{l}; X_{u}]$. Denoting the labels of $X_{l}$ as $y_{l}$, our goal is to predict the labels $y_{u}$ of the test data. The learner applies an algorithm $A$ to predict $y_{u}$ from the available data $\{X_{l}, y_{l}, X_{u}\}$. Here we restrict $A$ to the label propagation method, where we first generate a graph with adjacency matrix $S$ from a Gaussian kernel: $S_{ij} = \exp(-\gamma \lVert x_i - x_j \rVert^{2})$, where the subscript $x_{i(j)}$ represents the $i(j)$-th row of $X$. Then the graph Laplacian is calculated as $L = D - S$, where $D = \mathrm{diag}\{\sum_{k=1}^{n} S_{ik}\}$ is the degree matrix.
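A minimal NumPy sketch of the graph construction quoted above; the feature matrix and the kernel width $\gamma$ are placeholders, not values from the paper:

```python
import numpy as np

def graph_laplacian(X, gamma=1.0):
    """Gaussian-kernel adjacency S and unnormalized Laplacian L = D - S."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)  # pairwise squared distances
    S = np.exp(-gamma * d2)                           # adjacency from Gaussian kernel
    D = np.diag(S.sum(axis=1))                        # degree matrix
    return D - S

X = np.random.rand(5, 3)   # placeholder features: n = 5 points, d = 3
L = graph_laplacian(X, gamma=0.5)
print(L.sum(axis=1))       # rows of an unnormalized Laplacian sum to ~0
```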
|
# Tag Info
0
Yes, the normal to the surface is the direction of the reaction force, and that direction doesn't depend on the material of the object. But note that if friction is considered, the direction of the net reaction force changes.
2
Because it is a perfectly elastic collision the kinetic energy and the momentum are conserved. So you have two equations for two unknowns which are the final velocity of the football player and his mass: $$m_f v_f^0+m_r v_r^0=m_f v_f^1+m_r v_r^1$$ $$\frac{m_f (v_f^0)^2}{2}+\frac{m_r (v_r^0)^2}{2}=\frac{m_f (v_f^1)^2}{2}+\frac{m_r (v_r^1)^2}{2}$$ and ...
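The two conservation equations above have a well-known closed-form solution for the final velocities. A minimal Python sketch follows; the masses and initial velocities are made-up numbers for illustration, not values from the original question:

```python
def elastic_1d(m1, v1, m2, v2):
    """Final velocities after a 1D perfectly elastic collision,
    derived from conservation of momentum and kinetic energy."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# example: a 90 kg player at 5 m/s meets an 80 kg player at -3 m/s
v1f, v2f = elastic_1d(90, 5.0, 80, -3.0)
print(v1f, v2f)                       # ~-2.53 m/s and ~5.47 m/s
print(90 * v1f + 80 * v2f)            # total momentum is conserved (210 kg m/s)
```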
0
In a perfectly elastic collision, the final momentum of the system should be equal to the initial momentum of the system. It seems to be set up correctly, so I would say that you would need additional information for this.
0
Use conservation of momentum, which tells you that the total momentum (the sum of the momenta of the two particles) before and after collision must be the same. Also note that the momentum is a function of the vector velocity, which means that you can make two independent analyses, one on the $x$-axis, and one on the $y$-axis. Both should respect ...
0
Considering both the discs as the system, we can conserve angular momentum about their collinear axis of rotation. The torque due to friction will decrease the angular velocity of the disc having more angular momentum (before the collision), while the torque will increase the angular velocity of the one which had the lesser initial angular momentum. I am assuming ...
-1
Logically, in the frame of the centre of mass the acceleration of the body is zero, so momentum is conserved; and as the mass has not changed, the initial velocity is equal to the final velocity in the centre-of-mass frame.
0
Start with the two pertinent conservation laws for elastic collisions: kinetic energy and momentum. Remember that momentum is a vector. In the center of mass frame, the total momentum is zero. That will get you started. Do the work for two particles first. As an aside you should try to show the total momentum is zero in the CoM frame by example by taking ...
0
Can someone explain how two objects with different masses can have the same initial velocity? Since the kinetic energy is $E_{kin} = m\cdot v^2/2$, you need more energy (more gunpowder) to get the same velocity for a heavier bullet. What would be different about the final state of the apparatus, i.e. the angle of the pendulum? The pendulum would swing ...
0
Yes, it would of course. And so the resultant weight (yours + the weight of what you are carrying) starts acting through this new and horizontally shifted (maybe vertically shifted too) centre of mass. Notice that all this while your weight was being balanced by the normal reaction from the surface on which you were standing. Also, as only 2 forces were ...
0
The spring is not external to the system of two gliders. The total momentum is conserved.
1
Model the ground as massless critically-damped vertical spring that the particle contacts at zero height. When the particle reaches zero height, it has some KE which is dissipated by the damping mechanism. When in contact with the spring, there are three forces acting on the particle, gravity downward and the damper and spring force upward. The net force ...
1
When an object falls and hits the ground - which forces are involved to change its momentum? Vectorial sum of all the forces acting on the object will cause the change in momentum of the object. When the object was in free-fall, its momentum was already changing due to gravity(assuming negligible amount of air resistance) and then it hit the ground. ...
1
In Newtonian Mechanics, if a body of mass $\mathtt{m}$ is in free-fall, then gravitational force is responsible for acceleration & hence changing its momentum. Simple, right? The equation of motion is $$\mathtt{m}\cdot a = \mathbf{F_g} = \mathtt{m} \cdot g$$ where $a$ is the net acceleration of the body. Things become intricate when you consider a ...
1
In this cases, momentum is not conserved because of the action of gravity as an external force. When you have a pivoted rod, as in your problem, you can use basically two conservation laws: a) conservation of energy, if the collision is assumed as perfectly elastic; b) conservation of angular momentum about the pivot. As regards b), indeed, if we choose ...
2
In a 1D elastic collision, it is well-known that the relative velocities of the two objects (before and after the collision) are reversed. When you say reversed do you mean that each object keeps their own velocity just with a change of sign? That would not always be the case for a 1D situation. If a resting block $v_{1,before}=0$ is hit by a moving ...
1
The symbol $s_{NN}$ is in OP's context of RHIC the Mandelstam $s$-variable in a Nucleus+Nucleus collision. The $s$-variable is also known as the square of the center-of-mass energy.
-2
I think that the law of conservation will hold good in this situation because there is no external force acting on the system. As you said, the impulsive force from the hinge is internal, and no other force is acting on the system.
-1
If I've interpreted your question correctly, the ball will collide with the rod at the opposite end to the hinge. This will lose energy via the usual mechanisms. The rod will then have an instantaneous velocity, hence momentum, and will swing round the hinge. The ball will career off in whatever direction with the remaining momentum resulting from the ...
1
It basically means that they just need to cover half the distance. So you have the distance to be covered, the initial velocity (40), and a final velocity of zero; finding the deceleration from $v^2 = u^2 + 2as$ won't be an issue.
0
You asked: Why do both vehicles experience the same magnitude of force? The larger principle at work is conservation of momentum. (Noether's Theorem, symmetry, and all that jazz.) During the small time frame of the collision we generally assume that there is no transfer of momentum into or out of the system of the car and truck. Changes of momentum ...
0
The angular momentum of the system is the same before and after the collision. Since one object is stationary before the collision, the angular momentum is just the momentum ($mv$ of the moving puck multiplied by the perpendicular distance between them (which is $2r\sin(30)$). The moment of inertia of the two pucks stuck together is a little bit tricky, ...
0
A key idea is that the path of the center of mass is unaffected whether the two pucks collide or not. So take the snapshot of the situation at any moment of time before collision. Find the angular momenta of the two pucks with respect to the com at that instant of time. Since angular momentum is conserved for an isolated system (i.e $\tau_{net}=0$), the ...
Top 50 recent answers are included
|
|
# Mr. Sun
## Features of This Track
a kid-friendly vibe
folk roots
country influences
bluegrass influences
jazz influences
a subtle use of vocal harmony
acoustic sonority
major key tonality
melodic songwriting
thru composed melodic style
acoustic rhythm guitars
an upbeat two-step feel
hard swingin' rhythm
and many other similarities identified in the Music Genome Project
These are just a few of the hundreds of attributes cataloged for this track by the Music Genome Project.
## Similar Tracks
Mr sun. Sunday. Saturday. Monday!!!
cariann1075
Mr sun
matt.lesley
Love. It
I hate this song
sharram
I like it
When my daughter was little she loved this song
My son loves this
I love!!!!!!!! !
my little ones love this
|
# Homework Help: Fraction of lost energy in Compton scattering
1. Jul 9, 2016
### Magnetic Boy
1. The problem statement, all variables and given/known data
After undergoing 90° Compton scattering, the fraction of energy lost by the photon is
a) 10%
b) 20%
c) 50%
d) zero
e) none of these
2. Relevant equations
∆λ = (h/m₀c)(1 - cosΦ)
3. The attempt at a solution
What I am doing is getting the scattered photon energy, subtracting it from the total, and dividing by the total. But it seems unsolvable, as the wavelength of the photon before scattering is not given.
2. Jul 9, 2016
### malawi_glenn
What is the relation between the energy of a photon and its wavelength?
3. Jul 9, 2016
### malawi_glenn
If the energy of the photon before the scattering is denoted E and the energy of the photon after the scattering is denoted E', what is the expression for "the fraction of energy lost"?
4. Jul 9, 2016
### Magnetic Boy
Is there any such relation?? I don't know...
5. Jul 9, 2016
6. Jul 9, 2016
### malawi_glenn
But to answer the question: look at the numbers given; they are really "nice". There is no way those can be reproduced by the scattering formula.
7. Jul 10, 2016
### Magnetic Boy
Got the equation. But it needs the scattered photon's energy (frequency), which is not given in the problem. Does this mean the option "none of these" is correct???
8. Jul 11, 2016
### James R
For a photon (light), frequency and wavelength are related.
9. Jul 11, 2016
### Magnetic Boy
Yes. But neither of them is given
10. Jul 11, 2016
### Delta²
Frequency and Energy of photon also are related E=hf where h plank's constant.
11. Jul 11, 2016
### Magnetic Boy
Quoting Delta²: "Frequency and energy of a photon are also related, E = hf, where h is Planck's constant."
I know it very well. But look at the question: only the angle of scattering is given. Doesn't that mean we cannot find the fraction of lost energy? I just want to confirm. (Or is there some way to find the fractional lost energy?)
12. Jul 11, 2016
### Delta²
Seems to me you are right, we have to know the wavelength (or frequency) of the photon before scattering.
13. Jul 11, 2016
### dpopchev
There should not be a need for the initial energy: https://www.hep.wisc.edu/~prepost/407/compton/compton.pdf
EDIT: I made a calculation mistake; I have no idea how to approach this question.
Last edited: Jul 11, 2016
14. Jul 11, 2016
### James R
Just to check, what equation did you get?
15. Jul 11, 2016
### Magnetic Boy
I got ΔE/E = (E'/mc²)(1 - cosΦ)
16. Jul 12, 2016
### dpopchev
17. Jul 12, 2016
### James R
That looks right.
I agree that you'd need to know the wavelength or frequency of the incoming photon in order to get a numerical answer.
18. Jul 12, 2016
### Magnetic Boy
Thanks. So the answer is "none of these". Now I am sure. Someone answered it 50%, and I was really confused about that.
19. Jul 13, 2016
### James R
Well, I suppose it could be 50%, or 10% or 20%, if the incoming photon energy were whatever is necessary to get those values. We know it can't be zero, because the incoming photon must lose energy if it is scattered at any angle other than 0 degrees. Your formula has $\Delta E/E = E'/mc^2$, where $\phi=90^\circ$, and $E'$ can't be zero.
Really, there should be an "(a), (b) or (c)" option.
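To see how the answer depends on the (missing) incoming photon energy, here is a small numerical check. The energies below are my own illustrative choices; the 511 keV case, where the photon energy equals the electron rest energy, is exactly the one that yields the 50% answer mentioned above:

```python
import math

m_e_c2 = 511.0  # electron rest energy in keV

def fraction_lost(E_keV, theta_deg=90.0):
    """Fraction of photon energy lost in Compton scattering at angle theta."""
    Ep = E_keV / (1 + (E_keV / m_e_c2) * (1 - math.cos(math.radians(theta_deg))))
    return 1 - Ep / E_keV

for E in (51.1, 102.2, 511.0):
    print(f"E = {E:6.1f} keV -> fraction lost at 90 deg = {fraction_lost(E):.1%}")
# 9.1%, 16.7%, 50.0% -- the answer depends on the incoming photon energy
```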
|
# How do I graph this inequality 3x+2y>5?
Apr 16, 2015
Start by isolating $y$ on the left side of the inequality
$3 x + 2 y > 5$
$2y > -3x + 5$ (then divide both sides by 2)
$y > - \frac{3}{2} x + \frac{5}{2}$
Now calculate the x and y-intercepts by making $y = 0$ (for the x-intercept), and then $x = 0$ (for the y-intercept).
These two points will allow you to draw the line
$y = - \frac{3}{2} x + \frac{5}{2}$
So,
$x = 0 \implies y = + \frac{5}{2}$
$y = 0 \implies 0 = - \frac{3}{2} x + \frac{5}{2} \implies x = \frac{5}{3}$
Here's how that line would look
graph{-3/2x + 5/2 [-10, 10, -5, 5]}
However, since your inequality requires that $y$ be greater than $-\frac{3}{2}x + \frac{5}{2}$, the solution region you're interested in must be above the line and must not include the line itself. (You can check with a test point: $(0,0)$ gives $3(0) + 2(0) = 0$, which is not greater than $5$, so the origin lies outside the region, consistent with shading the side above the line.)
You'll end up with a graph in which you have a dashed line and the shaded region above that line.
graph{y > -3/2x + 5/2 [-10, 10, -5, 5]}
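If you prefer to reproduce the shaded graph in Python rather than with the built-in grapher, a minimal matplotlib sketch looks like this (the axis limits are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 400)
boundary = -1.5 * x + 2.5                       # y = -(3/2)x + 5/2

plt.plot(x, boundary, "b--",                    # dashed: the line itself is excluded
         label="y = -(3/2)x + 5/2")
plt.fill_between(x, boundary, 10, alpha=0.3)    # shade the region y > -(3/2)x + 5/2
plt.xlim(-10, 10)
plt.ylim(-5, 10)
plt.legend()
plt.show()
```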
|
Double Iterative Optimization for Metabolic Network-Based Drug Target Identification
Drug discovery aims to find molecules that manipulate enzymes in order to increase or decrease the production of desired compounds while incurring minimal side-effects. An important part of this problem is the identification of the target enzymes, i.e., the enzymes that will be inhibited by the drug molecules. Finding the right set of target enzymes is essential for developing a successful drug. The relationship between enzymes and compounds through reactions is defined using metabolic networks, and finding the best set of target enzymes requires a careful analysis of the underlying metabolic network.
This paper presents the problem of finding the set of enzymes whose inhibition stops the production of a given set of target compounds while eliminating a minimal number of non-target compounds. Here, target compounds are the compounds whose presence causes the underlying disorder; the non-target compounds are all the remaining compounds. We call this problem Target Identification by Enzymes (TIE). An exhaustive evaluation of all possible enzyme combinations in the metabolic network to find the optimal solution is computationally infeasible for very large metabolic networks. We developed a scalable iterative method which computes a sub-optimal solution within reasonable time bounds. The method consists of two phases: the Iteration Phase and the Fusion Phase. The Iteration Phase is based on the intuition that a good solution can be found by tracing backward from the target compounds. It initially evaluates the immediate precursors of the target compounds and iteratively moves backwards to identify the enzymes whose inhibition incurs fewer side-effects. This phase converges to a sub-optimal solution after a small number of iterations. The Fusion Phase takes the union of a set of sub-optimal results found in the Iteration Phase; each such set is a potential solution. It then enlarges this set by inserting a small subset of the remaining enzymes at random, with the size of the final set bounded by the time allowed for the exhaustive search. The Fusion Phase exhaustively searches the final set to find the optimal subset of enzymes within it. It then recursively creates a new set by inserting random enzymes into the best solution found so far and exhaustively searches this set again, until a predefined number of iterations has been performed.
The experiments on the E. coli metabolic network show that the average accuracy of the Iteration Phase alone deviates from that of the exhaustive search by only 0.02 %. The Iteration Phase is highly scalable: it can solve the problem for the entire metabolic network of *Escherichia coli* in less than 10 seconds. The Fusion Phase improves the accuracy of the Iteration Phase by 19.3 %.
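The paper's own implementation is not shown here. The following is a rough Python sketch of the greedy backward-tracing idea behind the Iteration Phase, assuming a toy mapping from each enzyme to the set of compounds whose production stops when it is inhibited; all names and the scoring rule are hypothetical:

```python
def greedy_target_identification(enzyme_effects, targets):
    """Greedy stand-in for the Iteration Phase: repeatedly pick the enzyme
    that removes the most remaining target compounds per non-target
    compound eliminated, until all targets are stopped."""
    targets = set(targets)
    remaining, chosen, side_effects = set(targets), set(), set()
    while remaining:
        candidates = set(enzyme_effects) - chosen
        if not candidates:
            break  # no enzyme left to try

        def score(e):
            hits = len(enzyme_effects[e] & remaining)
            cost = len(enzyme_effects[e] - targets) + 1
            return hits / cost

        e = max(candidates, key=score)
        if not enzyme_effects[e] & remaining:
            break  # best candidate removes no further targets
        chosen.add(e)
        remaining -= enzyme_effects[e]
        side_effects |= enzyme_effects[e] - targets
    return chosen, side_effects

effects = {  # hypothetical enzyme -> compounds lost when inhibited
    "E1": {"T1", "C1"},
    "E2": {"T2"},
    "E3": {"T1", "T2", "C1", "C2"},
}
print(greedy_target_identification(effects, {"T1", "T2"}))
# ({'E1', 'E2'}, {'C1'}) -- stops both targets at the cost of one non-target
```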
|
Crop the given PIL Image to random size and aspect ratio with random interpolation.
In this piece of documentation, we will be looking at the RandomResizedCropAndInterpolation data augmentation in timm. This augmentation gets applied in timm to the input data by default unless the --no-aug flag has been passed to train the model, in which case no augmentations except Resize and CenterCrop get applied.
Since this RandomResizedCropAndInterpolation augmentation gets applied by default, we don't need a separate example of how to apply it to the training data. Any training script applies this technique, such as the one below:
python train.py ../imagenette2-320
To not apply any data augmentation to the input data, one could pass in the --no-aug flag like so:
python train.py ../imagenette2-320 --no-aug
## RandomResizedCropAndInterpolation as a standalone data augmentation technique for custom training loop
In this section we will be looking at how we could leverage the timm library to apply this data augmentation technique to our input data. Let's see an example.
from timm.data.transforms import RandomResizedCropAndInterpolation
from PIL import Image
from matplotlib import pyplot as plt
tfm = RandomResizedCropAndInterpolation(size=224)
X = Image.open("../../imagenette2-320/train/n01440764/ILSVRC2012_val_00000293.JPEG")
plt.imshow(X)
<matplotlib.image.AxesImage at 0x7f8788f027f0>
As usual, we create an input image X which is the usual image of a "tench" as used everywhere else in this documentation.
Let's now apply the transform multiple times and visualize the results.
for i in range(6):
    plt.subplot(2, 3, i+1)  # 2x3 grid of augmented views
    plt.imshow(tfm(X))      # each call re-samples the crop, aspect ratio and interpolation
As can be seen below, we can see the transform is working and it is randomly cropping/resizing the input image and also randomly changing the aspect ratio of the image.
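The crop behaviour can also be tuned through the constructor arguments. If I am reading timm's implementation right, the scale and ratio values below are the conventional defaults and passing interpolation='random' makes the transform sample the interpolation method per call; treat the exact signature as an assumption and check the timm source:

```python
tfm = RandomResizedCropAndInterpolation(
    size=224,
    scale=(0.08, 1.0),       # crop area range, as a fraction of the source image
    ratio=(3/4, 4/3),        # aspect-ratio range of the crop
    interpolation='random',  # randomly choose the resampling filter per call
)
```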
|
# A used car dealer is examining how the price at which a car can be sold varies with the car's mileage. What is the independent variable in this relationship?
Dec 10, 2017
It would be the car's mileage.
#### Explanation:
The price depends on the mileage:
higher mileage = lower price
lower mileage = higher price
Looks like an inverse relationship: as one goes up, the other goes down.
I just made up this equation: $y = -\left(\frac{1}{2}\right) x + 10$ to get a line with negative slope:
graph{-(1/2)x +10 [-1.35, 21.15, -0.675, 10.575]}
On this graph, the x-axis is mileage (in thousands of miles) and the y-axis is price in thousands of dollars.
If the car has zero mileage, it's worth \$10,000; if the car has 10,000 miles on it, it's worth \$5,000:
|
## Sunday, March 16, 2014
### Kinect v2 developer preview + OpenCV 2.4.8: depth data
This time, I'd like to share code on how to access depth data using the current API of the Kinect v2 developer preview with simple polling, and how to display it using OpenCV. The procedure is almost the same as accessing a color frame.
In the current API, depth data is no longer mixed with player index (called body index in Kinect v2 API).
Disclaimer:
This is based on preliminary software and/or hardware. Software, hardware, APIs are preliminary and subject to change.
//Disclaimer:
//This is based on preliminary software and/or hardware, subject to change.
#include <iostream>
#include <sstream>
#include <Windows.h>
#include <Kinect.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/contrib/contrib.hpp>
inline void CHECKERROR(HRESULT n) {
if (!SUCCEEDED(n)) {
std::stringstream ss;
ss << "ERROR " << std::hex << n << std::endl;
std::cin.ignore();
std::cin.get();
throw std::runtime_error(ss.str().c_str());
}
}
// Safe release for interfaces
template <class Interface>
inline void SAFERELEASE(Interface *& pInterfaceToRelease) {
if (pInterfaceToRelease != nullptr) {
pInterfaceToRelease->Release();
pInterfaceToRelease = nullptr;
}
}
void processIncomingData(IDepthFrameReader* reader) {
IDepthFrame *data = nullptr;
IFrameDescription *frameDesc = nullptr;
// poll the reader for the most recent depth frame
HRESULT hr = reader->AcquireLatestFrame(&data);
UINT16 *depthBuffer = nullptr;
USHORT nDepthMinReliableDistance = 0;
USHORT nDepthMaxReliableDistance = 0;
int height = 424, width = 512;
if (SUCCEEDED(hr)) hr = data->get_FrameDescription(&frameDesc);
if (SUCCEEDED(hr)) hr = data->get_DepthMinReliableDistance(
&nDepthMinReliableDistance);
if (SUCCEEDED(hr)) hr = data->get_DepthMaxReliableDistance(
&nDepthMaxReliableDistance);
if (SUCCEEDED(hr)) {
if (SUCCEEDED(frameDesc->get_Height(&height)) &&
SUCCEEDED(frameDesc->get_Width(&width))) {
depthBuffer = new UINT16[height * width];
hr = data->CopyFrameDataToArray(height * width, depthBuffer);
if (SUCCEEDED(hr)) {
cv::Mat depthMap = cv::Mat(height, width, CV_16U, depthBuffer);
cv::Mat img0 = cv::Mat::zeros(height, width, CV_8UC1);
cv::Mat img1;
double scale = 255.0 / (nDepthMaxReliableDistance -
nDepthMinReliableDistance);
depthMap.convertTo(img0, CV_8UC1, scale);
applyColorMap(img0, img1, cv::COLORMAP_JET);
cv::imshow("Depth Only", img1);
}
}
}
if (depthBuffer != nullptr) {
delete[] depthBuffer;
depthBuffer = nullptr;
}
SAFERELEASE(data);
}
int main(int argc, char** argv) {
    // initialize Kinect sensor
    IKinectSensor* kinectSensor = nullptr;
    HRESULT hr = GetDefaultKinectSensor(&kinectSensor);
    if (FAILED(hr) || !kinectSensor) {
        std::cout << "ERROR hr=" << hr << "; sensor=" << kinectSensor << std::endl;
        return -1;
    }
    CHECKERROR(kinectSensor->Open());

    // open a reader on the depth source; the source itself can then be released
    IDepthFrameSource* depthFrameSource = nullptr;
    CHECKERROR(kinectSensor->get_DepthFrameSource(&depthFrameSource));
    CHECKERROR(depthFrameSource->OpenReader(&depthFrameReader));
    SAFERELEASE(depthFrameSource);

    // poll for depth frames until 'q' is pressed
    while (true) {
        processIncomingData();
        int key = cv::waitKey(10);
        if (key == 'q') {
            break;
        }
    }
    SAFERELEASE(depthFrameReader);

    // de-initialize Kinect sensor
    CHECKERROR(kinectSensor->Close());
    SAFERELEASE(kinectSensor);
    return 0;
}
Results in my messy room:
If we modify the scaling, for example:
nDepthMaxReliableDistance = 900;
nDepthMinReliableDistance = 500;
// clamp to [min, max] *before* subtracting: the depth values are UINT16,
// so subtracting first would wrap around instead of going negative
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        UINT16 val = depthMap.at<UINT16>(i, j);
        if (val < nDepthMinReliableDistance) val = nDepthMinReliableDistance;
        if (val > nDepthMaxReliableDistance) val = nDepthMaxReliableDistance;
        depthMap.at<UINT16>(i, j) = val - nDepthMinReliableDistance;
    }
}
double scale = 255.0 / (nDepthMaxReliableDistance -
                        nDepthMinReliableDistance);
depthMap.convertTo(img0, CV_8UC1, scale);
applyColorMap(img0, img1, cv::COLORMAP_WINTER);
cv::imshow("Depth Only", img1);
It may look like this:
That's all :)
1. Hey, quick question. Wondering if you have used the Kinect v2 with VMware successfully? I'm using a MacBook Pro Retina.
2. Hi @Chocobot, unfortunately I have never used Kinect v2 in a virtual machine.
3. Hi, thanks for that. With your scale-modification code above, I get an "expects 1 argument, received 2 arguments" error associated with depthMap.at. Any idea why?
1. Hi, thanks for the question. It was my mistake when putting up the code that the symbols < and > were not displayed correctly. It should be written as: depthMap.at<UINT16>(i,j)
2. depthMap.at<UINT16>(i,j)
4. Can you use the Kinect SDK out of the box with OpenCV? How about OpenNI? The SDK is a free download, right?
1. Sorry for the verryyy latee reply. I think you already got the answers. I have not tried OpenNI. Yes, the SDK from Microsoft is a free download and we can use it with OpenCV.
5. Wow, great info here. Thanks for detailing the steps for Kinect for Windows!
6. Great example. Thanks for the help. Just a heads up: for v2 of the Kinect sensor/SDK, the Interface class isn't there (or at least not within those files); you need to replace it with the IUnknown class. Just for future-proofing! Keep up the great work!
1. Also, I forgot to mention: contrib.hpp is deprecated, and I feel it's easier to just use opencv.hpp instead of adding the exact includes. The only problem will be an initial IntelliSense load (if you use VS), but other than that I don't think there should be any significant increase in build and run time.
7. Hi, would you tell me how I can use your code, and which file did you modify? I am using Kinect v2 on Linux.
1. Hi, sorry for the late reply. Unfortunately, the code above uses the Microsoft Kinect SDK and thus only works on Windows (not Linux). To use Kinect v2 on Linux, you probably need this one: https://github.com/OpenKinect/libfreenect2
|
# Ex.7.4 Q2 Triangles Solution - NCERT Maths Class 9
## Question
In the given figure, sides $$AB$$ and $$AC$$ of $$\Delta ABC$$ are extended to points $$P$$ and $$Q$$ respectively. Also, $$\angle PBC \lt \angle QCB$$. Show that $$AC \gt AB$$.
## Text Solution
What is Known?
$$\angle \text{PBC}<\angle \text{QCB}\text{.}$$
To prove:
$$\text{AC}>\text{AB}\text{.}$$
Reasoning:
By using the linear pair axiom, we can obtain an inequality between the interior angles, and then use the fact that, in any triangle, the side opposite the larger angle is longer.
Steps:
In the given figure,
\begin{align} &\angle ABC +\angle PBC=180^{\circ}\,\,\,\\&\text{( Linear pair)} \\\\& \angle ABC =180^{\circ} - \angle PBC\, \ldots (1) \\ \end{align}
Also,
\begin{align} &\angle ACB +\angle QCB=180^{\circ} \\ &\angle ACB =180^{\circ} - \angle QCB\, \ldots (2) \\ \end{align}
As $$\angle PBC \lt \angle QCB$$,
\begin{align} & 180^{\circ} - \angle PBC \gt 180^{\circ} - \angle QCB\\ & \angle ABC \gt \angle ACB\\&[\text{From equations (1) and (2)}]\\\\ & AC \gt AB \\&\left(\begin{array}{c} \text{side opposite to the}\\ \text{larger angle is longer.}\end{array}\right)\end{align}
Hence proved, $$AC \gt AB$$
|
Gaseous system: Meaning of this integral eq.
by jam_27
What is the physical or statistical meaning of the following integral,
$$\int^{a}_{0} g(\vartheta)\, d\vartheta = \int^{\infty}_{a} g(\vartheta)\, d\vartheta,$$
where $$g(\vartheta)$$ is a Gaussian in $$\vartheta$$ describing the transition-frequency fluctuation in a gaseous system (assume a two-level, inhomogeneously broadened system)? Here $$\vartheta = \omega_{0} - \omega$$, where $$\omega_{0}$$ is the peak frequency and $$\omega$$ the running frequency.

I understand that the integral finds a point $$\vartheta = a$$ for which the areas under the curve (the Gaussian) from 0 to $$a$$ and from $$a$$ to $$\infty$$ are equal. But is there a statistical meaning to this integral? Does it find something like the most probable value $$\vartheta = a$$? But the most probable value should be $$\vartheta = 0$$ in my understanding! So what does the point $$\vartheta = a$$ tell us?

I will be grateful if somebody can explain this and/or direct me to a reference.

Cheers,
Jamy
|
Article
# Solitons and Precision Neutrino Mass Spectroscopy
Physics Letters B 699(1), 01/2011. DOI: 10.1016/j.physletb.2011.03.058
Source: arXiv
ABSTRACT
We propose how to implement precision neutrino mass spectroscopy using radiative neutrino pair emission (RNPE) from a macro-coherent decay of a new form of target state, a large number of activated atoms interacting with a static condensate field. This method makes it possible to measure still-undetermined parameters of the neutrino mass matrix: the two CP-violating Majorana phases, the unknown mixing angle, and the smallest neutrino mass, which could be of order a few meV, determining at the same time the Majorana or Dirac nature of the masses. The twin process of paired superradiance (PSR) is also discussed.
##### Article: Dynamics of two-photon paired superradiance
ABSTRACT: We develop for dipole-forbidden transition a dynamical theory of two-photon paired superradiance, or PSR for short. This is a cooperative process characterized by two photons back to back emitted with equal energies. By irradiation of trigger laser from two target ends, with its frequency tuned at the half energy between two levels, a macroscopically coherent state of medium and fields dynamically emerges as time evolves and large signal of amplified output occurs with a time delay. The basic semi-classical equations in 1+1 spacetime dimensions are derived for the field plus medium system to describe the spacetime evolution of the entire system, and numerically solved to demonstrate existence of both explosive and weak PSR phenomena in the presence of relaxation terms. The explosive PSR event terminates accompanying a sudden release of most energy stored in the target. Our numerical simulations are performed using a vibrational transition $X^1\Sigma_g^+ v=1 \rightarrow 0$ of para-H$_2$ molecule, and taking many different excited atom number densities and different initial coherences between the metastable and the ground states. In an example of number density close to $O[10^{21}]$cm$^{-3}$ and of high initial coherence, the explosive event terminates at several nano seconds after the trigger irradiation, when the phase relaxation time of $> O[10]$ ns is taken. After PSR events the system is expected to follow a steady state solution which is obtained by analytic means, and is made of many objects of field condensates endowed with a topological stability.
Physical Review A 86(1), 03/2012. DOI: 10.1103/PhysRevA.86.013812
##### Article: Observables in Neutrino Mass Spectroscopy Using Atoms
ABSTRACT: The process of collective de-excitation of atoms in a metastable level into emission mode of a single photon plus a neutrino pair, called radiative emission of neutrino pair (RENP), is sensitive to the absolute neutrino mass scale, to the neutrino mass hierarchy and to the nature (Dirac or Majorana) of massive neutrinos. We investigate how the indicated neutrino mass and mixing observables can be determined from the measurement of the corresponding continuous photon spectrum taking the example of a transition between specific levels of the Yb atom. The possibility of determining the nature of massive neutrinos and, if neutrinos are Majorana fermions, of obtaining information about the Majorana phases in the neutrino mixing matrix, is analyzed in the cases of normal hierarchical, inverted hierarchical and quasi-degenerate types of neutrino mass spectrum. We find, in particular, that the sensitivity to the nature of massive neutrinos depends critically on the atomic level energy difference relevant in the RENP.
Physics Letters B 719(1–3), 09/2012. DOI: 10.1016/j.physletb.2013.01.015
##### Article: Neutrino Spectroscopy with Atoms and Molecules
ABSTRACT: We give a comprehensive account of our proposed experimental method of using atoms or molecules in order to measure parameters of neutrinos still undetermined; the absolute mass scale, the mass hierarchy pattern (normal or inverted), the neutrino mass type (Majorana or Dirac), and the CP violating phases including Majorana phases. There are advantages of atomic targets, due to the closeness of available atomic energies to anticipated neutrino masses, over nuclear target experiments. Disadvantage of using atomic targets, the smallness of rates, is overcome by the macro-coherent amplification mechanism. The atomic or molecular process we use is a cooperative deexcitation of a collective body of atoms in a metastable level |e> emitting a neutrino pair and a photon; |e> -> |g> + gamma + nu_i nu_j where nu_i's are neutrino mass eigenstates. The macro-coherence is developed by trigger laser irradiation. We discuss aspects of the macro-coherence development by setting up the master equation for the target quantum state and propagating electric field. With a choice of heavy target atom or molecule such as Xe or I_2 that has a large M1 x E1 matrix element between |e> and |g>, we show that one can determine three neutrino masses along with the mass hierarchy pattern by measuring the photon spectral shape. If one uses a target of available energy of a fraction of 1 eV, Majorana CP phases may be determined. Our master equation, when applied to E1 x E1 transition such as pH_2 vibrational transition Xv=1 -> 0, can describe explosive PSR events in which most of the energy stored in |e> is released within a few nanoseconds. The present paper is intended to be self-contained explaining some details related theoretical works in the past, and further reports new simulations and our ongoing experimental efforts of the project to realize the neutrino mass spectroscopy using atoms/molecules.
Progress of Theoretical and Experimental Physics 2012(1), 11/2012. DOI: 10.1093/ptep/pts066
|