## College Physics (4th Edition)

The increase in internal energy is equal to the initial gravitational potential energy of the water at a height of 105 meters:

$\Delta E = mgh$
$\Delta E = (1.0~kg)(9.80~m/s^2)(105~m)$
$\Delta E = 1029~J$

An internal energy of $1029~J$ is produced per kilogram.
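As a quick check of the arithmetic, a minimal Python sketch using the problem's values:

```python
# Gravitational potential energy converted to internal energy: dE = m*g*h
m = 1.0    # mass of water, kg (per-kilogram basis)
g = 9.80   # gravitational acceleration, m/s^2
h = 105.0  # height of the fall, m

dE = m * g * h
print(f"Internal energy per kilogram: {dE:.0f} J")  # -> 1029 J
```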
# staircase.EM: Estimate gauged sites hyperparameters

In EnviroStat: Statistical Analysis of Environmental Space-Time Processes

## Description

Estimates the $\mathcal{H}_g$ hyperparameters of the gauged sites using the EM algorithm, using the staircase pattern of the missing data to determine the default block structure.

## Usage

```r
staircase.EM(data, p = 1, block = NULL, covariate = NULL, B0 = NULL,
             init = NULL, a = 2, r = 0.5, verbose = FALSE, maxit = 20,
             tol = 1e-06)
```

## Arguments

data: data matrix, grouped by blocks in which all stations have the same number of missing observations. The blocks are organized in order of decreasing number of missing observations, i.e. block 1 has more missing observations than block 2. Default structure: each column represents data from one station; rows are time points; blocks are decided based on the number of missing observations.

p: number of pollutants measured at each station (the first p columns of the data are the p pollutants from station 1, block 1).

block: a vector indicating the number of stations in each block, from 1 to K.

covariate: design matrix for covariates, created with model.matrix using as.factor.

B0: provided if the hyperparameter β_0 (B0) is known and not estimated.

init: initial values for the hyperparameters; the output of this function can be reused for this purpose.

a, r: when p = 1, the type-II MLEs for the deltas are not available; the deltas are assumed to follow a gamma distribution with parameters (a, r).

verbose: flag for writing out the results at each iteration.

maxit: the maximum number of iterations (default 20).

tol: the convergence tolerance.

## Details

The estimated model is as follows:

• data ~ MVN(z × β, kronecker(I, Σ))
• β ~ MVN(β_0, kronecker(F^{-1}, Σ))
• Σ ~ GIW(Θ, δ)

Θ is a collection of hyperparameters including ξ_0, Ω, Λ, and H^{-1}.

## Value

A list with the following elements:

Delta: the estimated degrees of freedom for each of the blocks (list).
Omega: the estimated covariance matrix between pollutants.
Lambda: the estimated conditional covariance matrix between stations in each block, given data at stations in higher blocks (less missing data) (list).
Xi0: the estimated slopes of the regression between stations in each block and those in higher blocks (list). Note that τ_{0i} = kronecker(ξ_0, diag(p)), i.e. the same across stations for each pollutant.
Beta0: coefficients, assumed to be the same across stations for each pollutant.
Finv: scale associated with β_0.
Hinv: the estimated hyperparameters (list); the inverse of H_j.
Psi: the estimated (marginal) covariance matrix between stations.
block: from input.
covariate: from input.
Lambda.1K: the inverse Bartlett decomposition (eqn 23?).

## See Also

staircase.hyper.est
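A hypothetical call, for orientation only; the data matrix `y`, block sizes, and options below are invented, not taken from the package's own examples:

```r
library(EnviroStat)

# y: hypothetical 120 x 6 data matrix (rows = time, columns = stations)
# with a staircase pattern of missing values: 3 blocks of 2 stations each,
# ordered by decreasing number of missing observations.
fit <- staircase.EM(y, p = 1, block = c(2, 2, 2), maxit = 50, tol = 1e-6)

fit$Omega   # estimated covariance between pollutants
fit$Delta   # estimated degrees of freedom, one entry per block
```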
# 12V AC motorcycle regulator

Status: Not open for further replies.

#### enduro250z Joined Jul 6, 2010 69

#### Suzukiman Joined May 1, 2010 94

One thing to determine is whether the heatsink was completely isolated from any of the components, as usually the heatsink will in some way be grounded to the motorcycle, so it might as well be wired internally to ground if the schematic allows that. TO3 transistors usually have a mica insulator between the metal case and the heatsink, with white heat-conducting paste. If the case is the collector, then the case will only be grounded to the heatsink if the schematic requires a direct ground on the collector; otherwise it would be isolated from the heatsink. Can you verify whether any component terminates to the heatsink in any way? This will help confirm that the schematic is correct as per your module.

enduro250z, if you PM me your email address I will send you a writeup I did on the use of AC regulators on motorcycles. I have retrofitted these on many Honda XL250 and XL500 bikes to solve bulb-blowing and weak-headlamp issues, as well as rewinding the lighting coil.

****** Moderator's Note: These are public forums. All advice and help should be made in the forums for the benefit of all. Offers of private help violate the spirit of the forums. ******

At the moment it seems as if yours is the simplest to construct, but it will require quite a large footprint, which is not always possible on some bikes. The OEM and aftermarket AC regulators are very small; some use the same heatsink as the rectifier bridge with all components inside, ground mounting through the center bolt, and only one wire to tap into the AC between the lighting coil and the headlight. Those I have opened have only 3 components, which were impossible to identify, as the potting resin was too hard and the components were not marked.

#### enduro250z Joined Jul 6, 2010 69

For some reason I can't PM you. Can you try and send me one? If you can't send me a PM, I will post an email address of mine I don't mind making public. You can then email me there and I will transfer you to my proper email address.

***** Moderator's note: We insist that help and advice be given publicly. Offers of private help are not in the spirit of these forums. *****

Well, no, I can't see any component terminated to the heatsink. The TO3 case was mounted with 2 screws, and in the screw holes were bushes, so the screws can't be touching the case. The rectifier was mounted through its center stud. It also has the white paste under it, but as its case is not part of the circuit, it was not electrically connected to the heatsink either. The only wires coming out of the unit were a black and a red wire. These came from the AC terminals on the rectifier. Again, not attached to the heatsink. The other terminals of the rectifier went to the zener diode and the transistor. The other end of the zener went to the other terminal on the transistor. Now that I think about it, the diode and the + wire from the rectifier seem to be connected by the screw through the loop terminal, and the threads conduct to the threads of the TO3 case, as Solcar suggested. The heatsink is anodized, I believe, which won't conduct electricity. I am now 90% sure the TO3 case is not electrically connected to the heatsink. The TO3 screws are isolated from the holes in the heatsink by plastic bushes, and as the heatsink is anodized, the mounting surface of the TO3 could in no way be conducting to the heatsink.
The shim between the TO3 and the heatsink appears to be metallic. Is mica metallic?

Yes, I am familiar with OEM Honda XR/XL regulators. I know a few guys who have de-potted some motorbike CDI units; they tried the old chip-and-dig method, but then I think one guy used a heat gun to melt/soften the resin. I've just spent a while searching the net and still can't really find anything on a simple circuit diagram for an AC regulator. I can find these aftermarket ones made by Tympanium USA, but some bike shops ask $30 US for these. They are rated at 225 watts. These have been around since the 70's and I don't think they have changed. They have been used on the winning Baja 1000 desert racing bikes over the years. Yes, we could buy one of these, but it's not the same as satisfying your curiosity and making your own! I am pretty sure mine in the black heatsink can be made smaller. You could use a smaller transistor and rectifier, or maybe just 4 zeners arranged in a bridge? I'm not sure how much work they do and whether they need to be a single component mounted to the heatsink. I guess it depends on how much power you want to dump as to what rating of transistor you want to use. I have seen 16 amp bridge rectifiers that are less than half the size of the 35 amp one in my unit, and they still have a stud mount. I was reading on a website about transistors, and the TO3s seem to get praised for their robustness, but they are a pain to mount.

#### retched Joined Dec 5, 2009 5,208

Why not post it here so everyone can read it? As you can see, the longer you leave your thread open and the more people that see it, eventually you will get the info you want. Posting it here may help others know EXACTLY what's up.

#### enduro250z Joined Jul 6, 2010 69

I had another look at my unit on the black heatsink, and I will say I am not 100% sure I have the red and black wires (the AC connections at the rectifier) the correct way around. As you can see, those wires are no longer on the rectifier in the photo. I only went by my drawing, which I still had, and which I did 4 years ago when I first pulled this apart. I assume I did it right, but I'm not 100% sure and I cannot remember. That would be the main part I'm iffy on, and perhaps whether the anode or cathode of the zener is connected to the transistor. My old drawing says the anode connects to the base of the transistor, so I just assume I got it all right when I drew it many years ago.

#### Solcar Joined Jun 8, 2007 21

Yeah, the trouble with a PMOS is the body diode. It'll conduct in reverse and result in high power dissipation.

[eta] Actually, there are several problems with using a P-ch MOSFET besides the body diode. If Vgs drops lower than -20 V, the MOSFET will be destroyed. The gate would have to be clamped to the source using a zener or the like. The MOSFET will have slow turn-on and turn-off times, leading to high power dissipation.

That's why I was fiddling around with the TRIAC. Actually, since the input is rectified in the toy I came up with, an SCR would be more appropriate. Just as long as the current through the gate could be boosted to, say, 20 mA or so, it would kick in until the current fell through 0. That's basically what I was trying to do: disable the SCR gate when the output was high enough. I definitely forgot about the body diode. Now that you've said that: I did do that, and made a circuit not long ago that used a transistor to shunt current away from the SCR gate when the output voltage was high enough.
#### Solcar Joined Jun 8, 2007 21

Hi enduro250z, the case of an MJ802 transistor is the collector. Since the cathode of the zener diode connects to the collector in these types of amplified-zener-diode circuits, the zener can't have a nylon bushing insulating it from the case of the MJ802 if the circuit is to work. The heatsink is probably best considered to be at ground potential, because the anodizing might get scratched through, or the threads of the mounting screws might contact the inside of the mounting holes. But it is good practice to include a wire from the emitter of the transistor to ground as well. The devices from Jaycar that you linked to are not able to dissipate very much power. Even the MJ802 can only dissipate its fully rated 200 watts if the heatsink is big enough. The MJ802 brings back memories of my wilder audio amplifier days, because at a yard sale I had gotten a used Tiger amplifier kit that used that transistor.

#### Suzukiman Joined May 1, 2010 94

enduro250z, I cannot PM you; it says you have chosen not to receive PMs. You can activate it and I will try again. Mica is like a transparent bit of plastic, but heat resistant and an insulator. It seems as if the intention of your circuit was to have the collector grounded to the heatsink, which makes sense. Possibly a nut, screw head or washer contacted the heatsink and the TO3 case, as the collector must in some way have been wired and not left insulated from the circuit.

#### enduro250z Joined Jul 6, 2010 69

I'm in the user control panel but can't seem to find where I can allow or disallow PMs. Well, I think I have it set right??? I tried to send Suzukiman a PM but I can't, yet if I try to send a PM to Solcar the option is there. OK, just send it to me here <snip>

#### enduro250z Joined Jul 6, 2010 69

I checked again and I can see no way that the collector on the transistor is electrically connected to the heatsink. The case mounting screws are isolated from the drilled holes in the heatsink by plastic bushes. I only had the one screw in the loop terminal remaining, but I imagine the other one had a fiber washer under the head of the screw as well, to provide extra security and not rely on the anodizing under the head to insulate the screw from the case. I am 99.9% sure the cathode and the + from the rectifier (joined in that loop terminal) make electrical contact with the collector via the screw through the loop terminal, and the screw then makes contact with the threads in the TO3 case. There appear to be a couple of shims under the transistor. One looks like a silvery metal, and I 'think' there's a thin plasticky film in between the transistor and the metal-like shim. Now, does that all make sense to you guys?

#### Norfindel Joined Mar 6, 2008 326

The zener + BJT circuit is probably something like this, but with a full-wave rectifier. This would be a minimal version: the components I used are the ones that come with LTspice. If making this circuit, every component should be chosen to meet the requirements. Here's the circuit's output: it will probably dim the lights too much, but you can always make the full-wave version.

#### Attachments: LTspice schematic (8.4 KB) and simulated output (10 KB)

#### Suzukiman Joined May 1, 2010 94

> I'm in the user control panel but can't seem to find where I can allow or disallow PMs. Well, I think I have it set right??? I tried to send Suzukiman a PM but I can't, yet if I try to send a PM to Solcar the option is there.
> OK, just send it to me here <snip>

Hi, go into "Control panel", then "Edit Options", and you should find the box to check. I have sent you an email with the document. Mine is set to allow PMs, but maybe I am too new on the forum to qualify???

#### enduro250z Joined Jul 6, 2010 69

#### Suzukiman Joined May 1, 2010 94

enduro250z, I have tidied up the schematic of the working AC regulator a bit. We just need someone to help confirm whether this layout is 100% correct and will work. So far this is the lowest-part-count and easiest-to-assemble method found. Hopefully Solcar and Sgt Wookie or anyone else can help confirm whether this configuration is correct.

#### Attachments: schematic (102 KB)

#### enduro250z Joined Jul 6, 2010 69

Good stuff there. Yes, I would also like to know if we have it 100% right, but I'm fairly certain I've got it all right off my original unit. In your case I reckon you could downsize the components, as you won't need to dump much more than 100 watts if you're using these on stock XL/XR 250/500s etc., but I am interested in making some that can handle higher power. I have found some TO-264 200-watt transistors which I think would be better than the TO-3, which is a bit of a hassle to mount properly. On my old unit, all they did was silicone over the back side of it. Since I've basically determined that none of the components are electrically connected to the heatsink, I would bolt it to the bike frame and get extra heat dissipation through the bike frame and the cooling air of the bike moving along, if I don't mount it under the tank. Those universal ones with a stud-mount hole in the middle supposedly handle 225 watts, and they don't even have a finned heatsink. They just rely on the center-bolt stud mounting and the bike frame. Also, since no components are connected to the heatsink, it wouldn't matter if the heatsink was electrically connected to the bike chassis. I have spent ages and ages trying to find a heatsink the same as my old one, but can't find one close to the same size, which is 87 x 70 x 31 mm and much the same size as the DC regulator/rectifiers found on many road bikes. It would be good if someone could run the circuit diagram above through a simulator, if possible.

#### Suzukiman Joined May 1, 2010 94

enduro250z, what is the transistor you found? I am also looking around for an easily obtainable and cheaper one, specifically not a TO3 type.

#### tom66 Joined May 9, 2009 2,595

If you assume the regulator is dropping at most 5 volts (that would be 15.5 volts a.c.) at 3 amps, that is 15 watts power dissipation. Choose 20 watts, then you have the following results: http://uk.farnell.com/jsp/search/browse.jsp?N=1004177+597237+373620&No=0&getResults=true&appliedparametrics=true&locale=en_UK&catalogId=&prevNValues=1004177+597237&filtersHidden=false&appliedHidden=false&originalQueryURL=%2Fjsp%2Fsearch%2Fbrowse.jsp%3FN%3D1004177%26No%3D0%26getResults%3Dtrue%26appliedparametrics%3Dtrue%26locale%3Den_UK%26catalogId%3D%26prevNValues%3D1004177

The 2SC6081 by Sanyo is readily available and cheap (about 68p): 20 watts, 50 volts Vce. It will need a good heatsink. It comes in a TO-220 case.

#### enduro250z Joined Jul 6, 2010 69

There are heaps here: www.jaycar.com. The MJ15003 comes in a TO-264 case and is rated at 250 watts. The Jaycar part number is ZT-2230 and it's an NPN type. tom66, what bike we have and the output of its stator will determine how much the regulator needs to handle.
I'm guessing Suzukiman's Hondas with unmodified stators are putting out around 60-80 watts standard. That would mean if he wants to run with no lights during the day, he needs a regulator that can shunt 60-80 watts to the chassis. For me, I'm looking to make them handle some more power, as my stators will be modified for higher output and I would like to run with no lights in the day, so I would need a more powerful transistor, which is just a component change. If you look at the photos on an earlier page you will see a small regulator with a hole in the middle. These are in a die-cast case, 42 x 42 x 21, with no fins, and they are capable of dumping 225 watts; they largely rely on the bike's chassis as the heatsink too, and have no problems with that amount of power. These regulators are mostly going to be mounted where they can get a bit of moving airflow too. So we don't need a heatsink the size of a shoebox, like we would normally need if we were designing an audio amp or something like that.

#### tom66 Joined May 9, 2009 2,595

I think you misunderstand. The regulator only dissipates the power it needs to; it does not need to dissipate all the power the alternator can handle. In general, the power dissipation of a transistor is:

$$P_{dis} = V_{ce}(I_b + I_c)$$

$I_b$ is often only a few tens of mA, so it can be ignored in this calculation; more simply, the power dissipation is:

$$P_{dis} = V_{ce} \times I_c$$

So if the transistor is only dropping 5 volts and is passing 3 amps, 15 watts gets wasted as heat. Conservation of energy: the transistor has to either give the energy to the load or waste it as heat. Of course, you wouldn't want to run a 15-watt transistor at 15 watts, because that rating is for 25°C with some kind of magic liquid-nitrogen cooling, which is why I would suggest a 20-watt or 30-watt transistor for the job. If no significant load is connected, then the power dissipation is a few milliwatts due to base current. Milliwatts are usually not a concern. I calculated the 3 amps from the original requirements of this circuit, which was to power a 35 W light bulb. Of course, if you were powering, say, a 70 W light bulb @ 12 V, you would have ~6 amps and approximately double the power dissipation in the transistor, about 30 W.

Status: Not open for further replies.
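tom66's dissipation figures are easy to reproduce. A minimal Python sketch; the 5 V drop and the 35 W / 70 W bulbs at a nominal 12 V are the numbers quoted in the thread, and base current is ignored, as he suggests:

```python
# Transistor power dissipation: P_dis = Vce * Ic (base current ignored)
def p_dis(vce_drop, i_c):
    return vce_drop * i_c

v_drop = 5.0                  # volts dropped across the transistor
for bulb_w in (35, 70):       # headlight bulb ratings from the thread
    i_c = bulb_w / 12.0       # approximate load current at a nominal 12 V
    print(f"{bulb_w} W bulb: ~{i_c:.1f} A, ~{p_dis(v_drop, i_c):.0f} W dissipated")
```

This prints roughly 3 A / 15 W for the 35 W bulb and 6 A / 29 W for the 70 W bulb, matching the 15 W and ~30 W figures in the post.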
# How to insert a character at the beginning of every line of a source code?

I'm writing a lecture with many source codes as examples. The codes are stored in several different files. I use the listings and minted packages for formatting the sources. I would like to add a character at the beginning of every line of some codes, like:

• Bash shell ($ or #)
• Python shell (>>>)
• Matlab (>>)
• Other shells (>)

I tried some solutions that have been given elsewhere, but both solutions have the same problem when I try:

```latex
\begin{foo_environment}
\input{bar.txt}
\end{foo_environment}
```

In both solutions the line breaks are ignored. I have tried executing a bash command:

```bash
sed 's/^/$ /' bar.sh > bar.txt
```

This works locally, but I can't do the same in Overleaf and ShareLaTeX. This is a sample code, bar.sh:

```bash
cd ..
mount /dev/sda3 /mnt/
cd ~/.ssh/
ll
cd ~
ls -al
```

The result must be:

```
$ cd ..
$ mount /dev/sda3 /mnt/
$ cd ~/.ssh/
$ ll
$ cd ~
$ ls -al
```

• Could you make this a complete example, by adding everything from \documentclass through \end{document}? That gives us something to start from. Nov 21, 2016 at 13:00
• Of course. I'll add it in some minutes. Thanks Nov 21, 2016 at 13:12

With listings you can redefine the numberstyle:

```latex
\documentclass{article}
\usepackage{listings}
\lstset{
  language=tex,
  basicstyle=\footnotesize\ttfamily\selectfont,
  keepspaces=true,
  numbers=left,
  numbersep=5pt,
  numberstyle=\numberwithprompt,
}
\newcommand{\lstprompt}{>>>}
\newcommand\numberwithprompt[1]{\footnotesize\ttfamily\selectfont#1 \lstprompt}

\begin{document}
\begin{lstlisting}
a
b
c
\end{lstlisting}
\end{document}
```

minted.sty uses fancyvrb.sty to typeset the minted environments. fancyvrb.sty provides a macro named \FancyVerbFormatLine to change individual line formatting. You can define your own macro for your environment and plug it into minted with the formatcom key. Code:

```latex
\documentclass{article}
\usepackage{minted}

\newcommand{\BashFancyFormatLine}{%
  \def\FancyVerbFormatLine##1{\$\,##1}%
}

\begin{document}
\noindent Some Text
\begin{minted}[formatcom=\BashFancyFormatLine]{bash}
cd ..
mount /dev/sda3 /mnt/
cd ~/.ssh/
ll
cd ~
ls -al
\end{minted}
some text
\end{document}
```

You can apply the same procedure to external files by using \inputminted:

```latex
\documentclass{article}
\usepackage{minted,filecontents}

\begin{filecontents*}{bash.sh}
cd ..
mount /dev/sda3 /mnt/
cd ~/.ssh/
ll
cd ~
ls -al
\end{filecontents*}

\newcommand{\BashFancyFormatLine}{%
  \def\FancyVerbFormatLine##1{\$\,##1}%
}

\begin{document}
\noindent Now read the same code from a file:
\inputminted[formatcom=\BashFancyFormatLine]{bash}{bash.sh}
\end{document}
```

• Thank you. That is a more elegant way to solve it. But the problem is when we use \input{bar.txt} to import a text file instead of putting the text into the .tex file. Nov 21, 2016 at 14:15
• @AdolfoCorrea - See the update in my answer. Nov 21, 2016 at 14:31

With listings you can hook into the line numbering code. This approach is fine if you don't put \labels in your code -- I never have, and didn't even know you could -- but if you do, when you refer to the line numbers they will be followed by whatever prompt text you define. In that case, see Ulrike Fischer's answer. Here's an example, which defines a "prompt" (\lstprompt) as ">>>" (Python style) with \newcommand and applies it by redefining \thelstnumber:
```latex
\documentclass{article}
\usepackage{listings}
\lstset{
  language=tex,
  basicstyle=\footnotesize\ttfamily\selectfont,
  keepspaces=true,
  numbers=left,
  numbersep=5pt,
  numberstyle=\footnotesize\ttfamily\selectfont\,
}
\newcommand{\lstprompt}{>>>}
\renewcommand*\thelstnumber{{\the\value{lstnumber}}\lstprompt}

\begin{document}
\lstinputlisting{lst_test.tex}
\end{document}
```

And the output (using the code above as a test case):

Of course you can insert spaces or otherwise tweak at will. Here's a $ sign with no numbering:

```latex
\newcommand{\lstprompt}{\$}
\renewcommand*\thelstnumber{\lstprompt}
```

• Thank you for your help. I'm going to test your solution and then I'm going to vote. Nov 21, 2016 at 14:18
• I wouldn't redefine \thelstnumber -- it can also be used for references, and the additional code would disturb this. Nov 21, 2016 at 14:34
• @UlrikeFischer having used listings without referring to line numbers -- in fact, only to the (sub)section containing the listing -- it would seem reasonable in some cases to do this. I also considered whether the line number separator might provide a way in (which in many ways I would prefer). Nov 21, 2016 at 15:16
• @UlrikeFischer now I know that putting labels inside your listings is even possible, I've added a note to the top of my answer commending yours for those who use such things. Nov 21, 2016 at 15:57
# Dividing money brain teaser

## Recommended Posts

Hi everyone, thought I would post this interesting one:

7 sales executives have 1 million dollars to divide amongst themselves. The most senior sales executive proposes a particular split and then everyone votes (each person's vote is equal). If at least 50% of the people accept, then the money is divided the way that was suggested. Otherwise the sales executive who proposed it gets fired... and then we move on to the next most senior sales exec and the whole process repeats. The executives are rational (they want to keep their jobs first and also to get as much money as possible), and they would also prefer fewer executives in the group if given a choice (all else equal). How should the cash be split?

##### Share on other sites

Well, the optimal group is going to be 4, because after 3 people get fired for not winning, they will start to get nervous and split the money evenly between them all. So IMO, 1 mil / 4.

##### Share on other sites

Is wheelin' and dealin' allowed ("Vote against this next proposal; my turn is next and I'll reward you nicely")?

Merged post follows: Consecutive posts merged

Without wheelin' and dealin', the top person offers a four-way split amongst himself and execs #3, #5, and #7.

Let's suppose it goes all the way down to #6. #6's proposal is easy: everything goes to #6. #6 will vote for this proposal, garnering the needed 50%. Exec #7 will be screwed if it gets down to #6.

Before it gets down to #6, #5's proposal needs to be sunk. Unlike #6, #5 needs 1 cohort to go along with his proposal. #7 will go along with anything #5 proposes so long as it is better than nothing. Exec #6 will be screwed if it gets down to #5.

Before it gets down to #5, #4's proposal needs to be sunk. #4 also needs 1 cohort to garner 50%, the obvious target being #6. Execs #5 and #7 will be screwed by #4's proposal.

Before it gets down to #4, #3's proposal needs to be sunk. #3 needs two cohorts, the two who would be screwed by #4's proposal. #3 offers a three-way split between himself, #5, and #7. Execs #4 and #6 will be screwed if it gets down to #3.

Before it gets down to #3, #2's proposal needs to be sunk. #2 also needs two cohorts to get 50%, and these are execs #4 and #6. Execs #3, #5, and #7 will be screwed if it gets down to #2.

Before it gets down to #2, #1's proposal needs to be sunk. #1 needs three cohorts: execs #3, #5, and #7. Exec #1 should offer a four-way split amongst himself and execs #3, #5, and #7.

##### Share on other sites

> Hi everyone, thought I would post this interesting one: 7 sales executives have 1 million dollars to divide amongst themselves. [...] How should the cash be split?

The second least senior executive should vote no on every proposal, and attempt to get everyone else to vote no. Once it gets down to the second least senior executive and the least senior executive, the second least senior would be in the position to choose how to distribute the money, and his proposal will pass because 1/2 is 50%.
The most senior executive, and most likely the second and third most senior, are going to get sacked no matter which way they decide to split the money up. The problem is that every person next in line to decide how to split up the money is going to vote no on the current proposal. And the person who ends up with the ultimate position of power in the end is the second least senior executive, and he can choose to keep it all, and his vote will make the 50% needed.

Maybe the third least senior executive has a good shot to counter this, though: if the deal comes to him, he can delegate half to himself and half to the least senior executive, leaving the second least senior executive out. And two out of three would pass. For this reason, the second least senior executive is going to want to deal with the fourth most senior executive. If the fourth most senior executive comes in and offers half to the second least senior and half to himself, he/she could pass that delegation with the 50%. This will cause the least senior and the third least senior to want to team up with the fifth most senior to delegate the money between them. Then the second and fourth will include the sixth, and they will want to team up to delegate the money between them so they don't get left out. In response, the least senior, third, fifth and seventh will team up and delegate the money between them as 1/4ths.

So I seem to have come to the same conclusion as DH, which makes me feel pretty confident in my conclusion, considering DH seems to be a pretty smart guy.

##### Share on other sites

The senior can split the money equally between 4 people (the ones he likes), and nothing (zero) to the three last ones. He would have gathered 4 votes against three, and win. Quite unfair, but 250,000 in his pocket. At least, no one is fired.

##### Share on other sites

I think it depends on how much value there is to removing competition, and how much value to losing one's job (although that last bit may not matter if you assume pure rationality).

##### Share on other sites

> The senior can split the money equally between 4 people (the ones he likes), and nothing (zero) to the three last ones. He would have gathered 4 votes against three, and win. Quite unfair, but 250,000 in his pocket. At least, no one is fired.

There is no reason the second most senior exec would accept such a deal. Suppose he rejects the deal along with those other three. The first deal is sunk, and so is the topmost exec. The new top exec only needs to find two cohorts to attain the requisite 50% vote. That's a bigger slice of the pie for our former #2 (new #1) exec -- and he's the new #1 exec.

##### Share on other sites

If the order of voting was known beforehand, the first exec would split the money evenly between #2, #3 and #4, forgoing his own compensation (or granting himself one penny) but keeping his job, and giving #2, #3 and #4 a split they would not otherwise be entitled to due to their place in line.

Merged post follows: Consecutive posts merged

Oops, hang on a minute... scratch that. #4 would be entitled to $500,000. So yeah, it is in #6's best interest to always vote no, so #6 will never be floated any money. So the money would be divided like this:

#1 - $0.01
#2 - $333,333.33
#3 - $333,333.33
#4 - $0
#5 - $0
#6 - $0
#7 - $333,333.33

This way #2 and #3 get a split greater than or equal to what they would expect anyway, and #7 would certainly vote for it, as they would be eligible for no money otherwise. For #1, passing on the money is the only way to keep their job.
##### Share on other sites

Read post #7, jryan. Why would exec #2 vote for your proposal? If he joins execs #4, #5, and #6 and votes against exec #1's proposal, exec #2 can get the same amount of money as offered by exec #1, and he will get exec #1's job.

##### Share on other sites

Let's keep score. The fraction indicates an n-way split, but not necessarily an even one.

Let's suppose it goes all the way down to #6. #6's proposal is easy: everything goes to #6. #6 will vote for this proposal, garnering the needed 50%. Exec #7 will be screwed if it gets down to #6. Score:

Exec #1: loss of job
Exec #2: loss of job
Exec #3: loss of job
Exec #4: loss of job
Exec #5: loss of job
Exec #6: 1 mil, -5 competitors
Exec #7: -5 competitors

Before it gets down to #6, #5's proposal needs to be sunk. Unlike #6, #5 needs 1 cohort to go along with his proposal. #7 will go along with anything #5 proposes so long as it is better than nothing. Exec #6 will be screwed if it gets down to #5. Score:

Exec #1: loss of job
Exec #2: loss of job
Exec #3: loss of job
Exec #4: loss of job
Exec #5: 1/2 mil, -4 competitors
Exec #6: -4 competitors
Exec #7: 1/2 mil, -4 competitors

Before it gets down to #5, #4's proposal needs to be sunk. #4 also needs 1 cohort to garner 50%, the obvious target being #6. Execs #5 and #7 will be screwed by #4's proposal. Score:

Exec #1: loss of job
Exec #2: loss of job
Exec #3: loss of job
Exec #4: 1/2 mil, -3 competitors
Exec #5: -3 competitors
Exec #6: 1/2 mil, -3 competitors
Exec #7: -3 competitors

Before it gets down to #4, #3's proposal needs to be sunk. #3 needs two cohorts, the two who would be screwed by #4's proposal. #3 offers a three-way split between himself, #5, and #7. Execs #4 and #6 will be screwed if it gets down to #3. Score:

Exec #1: loss of job
Exec #2: loss of job
Exec #3: 1/3 mil, -2 competitors
Exec #4: -2 competitors
Exec #5: 1/3 mil, -2 competitors
Exec #6: -2 competitors
Exec #7: 1/3 mil, -2 competitors

Before it gets down to #3, #2's proposal needs to be sunk. #2 also needs two cohorts to get 50%, and these are execs #4 and #6. Execs #3, #5, and #7 will be screwed if it gets down to #2. Score:

Exec #1: loss of job
Exec #2: 1/3 mil, -1 competitor
Exec #3: -1 competitor
Exec #4: 1/3 mil, -1 competitor
Exec #5: -1 competitor
Exec #6: 1/3 mil, -1 competitor
Exec #7: -1 competitor

Before it gets down to #2, #1's proposal needs to be sunk. #1 needs three cohorts: execs #3, #5, and #7. Exec #1 should offer a four-way split amongst himself and execs #3, #5, and #7.

Exec #1: 1/4 mil
Exec #2: -
Exec #3: 1/4 mil
Exec #4: -
Exec #5: 1/4 mil
Exec #6: -
Exec #7: 1/4 mil

If -1 competitor is worth 1/4 mil, then this proposal will be rejected by #7.

##### Share on other sites

Well shoot. That puts some unknowns into the original question (salary differences, etc.) that would need to be considered. But in your #3, #5 and #7 scenario it is not in #5's best interest to vote for #1's solution as you spelled it out, as he is better off voting against #1 and holding out for a $500,000 payday that he would likely get in later votes.

##### Share on other sites

> Let's keep score. The fraction indicates an n-way split, but not necessarily an even one.

Nice job. That the split does not have to be even is, I think, crucial. Since the OP did not provide any information regarding the value of each position, the question is essentially unanswerable.
##### Share on other sites

The OP also did not answer my question regarding wheelin' and dealin', and that can obviously change the outcome immensely.

> If -1 competitor is worth 1/4 mil, then this proposal will be rejected by #7.

Of course, one way to overcome this is to not offer an even split. That #1 gets to keep his job might well be worth a lot more than 1/4 million. Heck, it might be worth so much that exec #1 will sweeten the pot with some of his own money. He could offer $1 million each to execs #3, #5, and #7, for example.

I would expect that jumping up a notch in the hierarchy is worth more to exec #3 than it would be to exec #7. For most people, pay levels out as one progresses. Pay raises can be quite phenomenal for fresh-outs. Some fresh-outs simply aren't qualified to do fresh-out level work, and the pay for them reflects that. Once fresh-outs have proven their worth, their pay jumps by quite a bit (percentage-wise). After that, pay raises start becoming rather pathetic; eventually they barely keep pace with inflation. This is not the case in the cutthroat executive world. Pay starts going off the charts the higher one climbs. Exec #1 is most likely paid more, a whole lot more, than #2, and #2 is paid a lot more than #3. Things probably start to flatten out from there. To forestall a rebellion by any one of his odd-numbered cohorts, exec #1 may want to offer more to #3 than to #5, and more to #5 than to #7.

As said earlier, the OP didn't supply enough information to truly solve the problem. All we can do is speculate.

##### Share on other sites

Here is a quick matrix, based on the assumption that the proposer is most interested in keeping their job, and therefore has no real leverage:

Round 1 - 7 execs, total votes needed = 3, pool = 5, share = $333,333.33
Round 2 - 6 execs, total votes needed = 2, pool = 4, share = $500,000.00
Round 3 - 5 execs, total votes needed = 2, pool = 3, share = $500,000.00
Round 4 - 4 execs, total votes needed = 1, pool = 2, share = $500,000.00
Round 5 - 3 execs, total votes needed = 1, pool = 1, share = ????
Round 6 - 2 execs, total votes needed = 0, pool = 0, share = $1,000,000.00

By round 5, exec #5 really has no real control, and may need to hand all $1,000,000 to exec #7 just to keep their job. So I see no reason to treat #7 any differently than #6. As a matter of fact, #6 now seems completely out of the running for any money, as round 5 would almost certainly resolve the issue in #7's favor, since #5 would be looking to keep their job. #5 would definitely have leverage over #7 in a "take it or leave it" fashion, but then #7 would have leverage over #5 in a "make me happy or you're fired" fashion.

Merged post follows: Consecutive posts merged

After some consideration, I think that #7 should be left out of the equation rather than #6. If #6 is a rational person, then they have to realize that, while they can't lose their job, they have no chance of ever getting to make their "$1 million to me" proposal. #5 would split the bonus with #7 before that (though the nature of that split would be interesting!). So I would change my group to #2, #3 and #6 getting $333,333.33 each, as #2 and #6 have nothing to lose by voting yes, and #3 won't get a better deal either way.

Merged post follows: Consecutive posts merged

Also, I figured you all might be interested in this article: http://euclid.trentu.ca/math/bz/pirates_gold.pdf

It's a discussion of a similar application of game theory, but I think the restrictions and demands are sufficiently different that we can cast out their conclusion.
##### Share on other sites

> Here is a quick matrix, based on the assumption that the proposer is most interested in keeping their job, and therefore has no real leverage:

jryan, you are thrashing here. You are just throwing out solutions with no logic behind them. This is not the way to solve these kinds of problems.

> Also, I figured you all might be interested in this article: http://euclid.trentu.ca/math/bz/pirates_gold.pdf It's a discussion of a similar application of game theory, but I think the restrictions and demands are sufficiently different that we can cast out their conclusion.

First off, interesting find. I suspect the OP changed the problem from pirates to executives so we wouldn't be able to find a solution on the 'net. Do note that the author came to the same conclusion that I came to: this is an evens-versus-odds proposition. The following paragraph in that paper is key (emphasis mine):

> The secret to analyzing all such games of strategy is to work backward from the end. At the end, you know which decisions are good and which are bad. Having established that, you can transfer that knowledge to the next-to-last decision and so on. Working from the beginning, in the order in which the decisions are actually taken, doesn't get you very far. The reason is that strategic decisions are all about "What will the next person do if I do this?" so the decisions that follow yours are important. The ones that come before yours aren't, because you can't do anything about them anyway.

This pirate problem is almost exactly the same problem as the topic of this thread, only it is a bit better specified. The reason the pirate problem is a bit better stated is that there is no particular advantage accrued by moving up a notch on the fierceness totem pole. This problem is about executives, not pirates. In most companies, a huge financial advantage accrues from moving up the executive seniority totem pole.

That isn't true in all companies. Some are much more egalitarian. (Prototypical example: Ben & Jerry's, at least up until 2000 when Ben and Jerry sold out.) Compensation is fairly flat in such companies; what is accrued in advancing up the totem pole is a bit more glory at the expense of a lot more headaches. The division in this kind of company would be simple: even shares for all, with everyone patting each other on the back for a job well done. No mention of wheelin' and dealin' (the thing that salescritters do for a living).

Imagine a more cutthroat corporation than a Ben & Jerry's. Exec #4 pulls aside execs #5, #6, and #7. "We can sink every proposal up to mine. I'll split the pot between us, and we can all move three notches up the ladder. BTW, hit men are cheap; don't think of voting down my proposal." This wheelin' and dealin' makes for a solution that is not a Nash equilibrium.

The simplistic Nash equilibrium solution is simple: exec #1 offers $1 each to execs #3, #5, and #7. The remaining $999,997 goes to exec #1. This simplistic split is not realistic. Even ignoring the dollar value inherent in moving up the executive ladder, one of those three odd-numbered executives may reject exec #1's proposal on grounds of fairness. So they lose a buck; exec #1's proposal is not fair. We humans (and other animals) appear to have some kind of built-in fairness mechanism. Google "Ultimatum game" for more.

##### Share on other sites

> This simplistic split is not realistic.
> Even ignoring the dollar value inherent in moving up the executive ladder, one of those three odd-numbered executives may reject exec #1's proposal on grounds of fairness. So they lose a buck; exec #1's proposal is not fair. We humans (and other animals) appear to have some kind of built-in fairness mechanism. Google "Ultimatum game" for more.

A good point, and one of the many reasons that we are not rational (or at least sometimes appear not to be). In part this is because the game of life has multiple rounds. If this one game were all there were, rejecting a proposal on the basis of unfairness might not be rational. But in a multi-round game you can make it clear that you reject "unfair" proposals, logic be damned. In the end, this sense of fairness can benefit you, as others fear to offer you an unfair proposal.

##### Share on other sites

> A good point, and one of the many reasons that we are not rational (or at least sometimes appear not to be). In part this is because the game of life has multiple rounds. If this one game were all there were, rejecting a proposal on the basis of unfairness might not be rational. But in a multi-round game you can make it clear that you reject "unfair" proposals, logic be damned. In the end, this sense of fairness can benefit you, as others fear to offer you an unfair proposal.

Indeed. In fact, that's probably the whole function of anger. Being spiteful is self-harming pretty much by definition, but the threat of spite warns others not to mess with you.

##### Share on other sites

> But in a multi-round game you can make it clear that you reject "unfair" proposals, logic be damned. In the end, this sense of fairness can benefit you, as others fear to offer you an unfair proposal.

This is a good observation. While being "fair" and "irrational" may lose you the $1 you might otherwise have gained, it also forces the others to offer more than the $1, because they do need your participation (or at least the participation of a majority). If you are the top guy and a majority of the others won't settle for $1, wouldn't you offer, say, $100k each, so that you can get perhaps $400k instead of nothing? If you aren't playing "fair" and are being entirely rational, all you will get is the $1 if you are not the top guy. Why settle for that when by being irrational you can get more?

Merged post follows: Consecutive posts merged

> Indeed. In fact, that's probably the whole function of anger. Being spiteful is self-harming pretty much by definition, but the threat of spite warns others not to mess with you.

And then sometimes you have to play it out so that others know you aren't just bluffing. So what if you, as exec #7, lose $1? It's not a big deal, because the next time everyone else knows they have to offer more, perhaps considerably more, to get your participation.

##### Share on other sites

> jryan, you are thrashing here. You are just throwing out solutions with no logic behind them. This is not the way to solve these kinds of problems.

We all are until the obvious solution arises. As you mentioned, the fairness aspect plays into this decision, which is why #5 is a problem. #7 can't happen, and #6 is easy, but #5 places "job" directly at odds with "bonus", and without job specifics (salary, turnover, etc.) we are left guessing. If #5 made $350,000 a year in salary, and #7 made $150,000, then the fair split would be $400,000 to #5 and $600,000 to #7... but we don't know that, so we can't answer that.
If we assume that #5's compensation is $1 million or greater, then we can assume that the fair split is to give all the money to #7, and #7 really has all of the leverage in that case.

I think that even absent the information there is enough here to answer the question, however, and there is a way to deduce a fair distribution in round 1 that would garner support from the 3 necessary voters, due to the variability of possibilities after the initial offer is rejected. A bird in the hand is worth two in the bush. In this case, assuming $1 million+ salaries, neither #6 nor #3 is guaranteed any money in any permutation, and they would accept the offer made by #1 because it is the only guarantee they have, while #2 would accept it because they are guaranteed money and their job, which they wouldn't be in round 2.

##### Share on other sites

> I think that even absent the information there is enough here to answer the question, however, and there is a way to deduce a fair distribution in round 1 that would garner support from the 3 necessary voters, due to the variability of possibilities after the initial offer is rejected.

No, there isn't. You are making up information about salaries and such precisely because the problem lacks specificity. The only solution that makes any sense given the (limited and incomplete) information at hand is a 999997, 0, 1, 0, 1, 0, 1 split. The even-numbered execs get nothing, execs #3, #5, and #7 get a pittance, and exec #1 gets almost everything.
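DH's backward-induction solution can be checked mechanically. A minimal Python sketch, assuming the rules as stated in the thread: a proposal passes with at least 50% of the votes including the proposer's own, a rational voter accepts only an offer strictly better than what the next round would give them, and $1 is enough to buy a vote:

```python
# Backward induction for the 1,000,000-dollar split with a >= 50% vote rule.
# Executives are indexed 0 (most senior) .. n-1 (least senior); when a
# proposal fails, exec 0 is fired and the game repeats with the rest.
def split(n, pot=1_000_000, sweetener=1):
    if n == 1:
        return [pot]
    nxt = split(n - 1, pot, sweetener)      # payoffs if this proposal fails
    votes_needed = (n + 1) // 2 - 1         # other voters needed besides proposer
    # Buy the cheapest voters: those who would get the least next round.
    others = sorted(range(1, n), key=lambda i: nxt[i - 1])
    offer = [0] * n
    for i in others[:votes_needed]:
        offer[i] = nxt[i - 1] + sweetener   # strictly better than next round
    offer[0] = pot - sum(offer)             # proposer keeps the rest
    return offer

print(split(7))  # -> [999997, 0, 1, 0, 1, 0, 1]
```

Run as-is, this prints the 999997, 0, 1, 0, 1, 0, 1 split named in the last post: the even-numbered executives get nothing because they are never the cheapest votes to buy.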
2 editions of Full information maximum likelihood estimation with autocorrelated errors found in the catalog.

# Full information maximum likelihood estimation with autocorrelated errors

## by Aysıt Tansel

Published. Written in English.

Subjects: Economics, Mathematical.

Edition Notes
Statement: by Aysıt Tansel.
Series: Ph.D. theses (State University of New York at Binghamton), no. 421.
Pagination: 130 leaves. Number of pages: 130.
Open Library ID: OL22069691M

...to the full information estimation methods of Sargan (66), Hendry (45), Chow and Fair (13), Fair (21), and Dhrymes (16). Sargan (66) considered the maximum likelihood estimation of a system of dynamic simultaneous equations with errors satisfying a vector autoregressive process. Hendry (45), following Sargan's work, applied numerical methods.

Maximum likelihood estimation, commonly abbreviated MLE, is a popular method for estimating the parameters of a regression model; it is also used well beyond regression.

In ML estimation, in many cases what we can compute is the asymptotic standard error, because the finite-sample distribution of the estimator is not known (cannot be derived). Strictly speaking, $\hat{\alpha}$ does not have an asymptotic distribution, since it converges to a real number (the true number in almost all cases of ML estimation).

The estimators solve the following maximization problem:

$$\hat{\theta} = \operatorname*{arg\,max}_{\theta}\,\ell(\theta)$$

The first-order conditions for a maximum are

$$\nabla_{\theta}\,\ell(\hat{\theta}) = 0$$

where $\nabla_{\theta}$ indicates the gradient calculated with respect to $\theta$, that is, the vector of the partial derivatives of the log-likelihood with respect to the entries of $\theta$; the maximum likelihood estimate is the parameter value at which this gradient is zero.

In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information. In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior.
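For reference, the definition the Fisher-information excerpt is invoking can be written out explicitly; a standard statement, assuming the usual regularity conditions that the excerpt omits:

$$
\mathcal{I}(\theta)
= \operatorname{E}\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{\!2}\right]
= -\operatorname{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}}\log f(X;\theta)\right]
$$

For an i.i.d. sample of size $n$, the asymptotic standard error of the MLE is then approximately $1/\sqrt{n\,\mathcal{I}(\theta)}$, which is the "asymptotic standard error" the ML excerpt above refers to.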
### Full information maximum likelihood estimation with autocorrelated errors, by Aysıt Tansel

In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, which is why the method has become a dominant means of statistical inference.

Full Information Maximum Likelihood Estimation with Autocorrelated Errors: A Numerical Approach. Article in Gelişme dergisi = Studies in Development 18, January.

The plan of the study is as follows. In section 2 a cursory review of the linear expenditure system is presented. Section 3 provides a statement of the alternative statistical assumptions and the full information maximum likelihood estimation method used in obtaining the required parameter estimates. Section 4 describes the application.

If there were sufficient information about the variance-covariance structure of the errors in the model of section 2, the ideal estimation method would be full information maximum likelihood (see, e.g., Eichengreen, Watson and Grossman ()). In most applications, however, the precise form of the time series model for the disturbances is not known.

Another advanced missing-data method is full information maximum likelihood. In this method, missing values are not replaced or imputed; the missing data is handled within the analysis model. The model is estimated by a full information maximum likelihood method; that way, all available information is used to estimate the model.

"Maximum likelihood estimation is a general method for estimating the parameters of econometric models from observed data. The principle of maximum likelihood plays a central role in the exposition of this book, since a number of estimators used in econometrics can be derived within this framework. Examples include ordinary least squares, generalized least squares and full-information maximum likelihood."

Introduction to Statistical Methodology: Maximum Likelihood Estimation. Exercise 3: check that this is a maximum. Thus $\hat{p}(x) = \bar{x}$; in this case the maximum likelihood estimator is also unbiased. Example 4 applies maximum likelihood estimation to normal data.
A Suggested Method of Estimation for Spatial Interdependent Models with Autocorrelated Errors, and an Application to a County Expenditure Model. Article in Papers in Regional Science 72(3).

I am performing standard multivariable linear regression (interval dependent variable) with a dataset that has 12% missing cases under listwise deletion. I am assuming MCAR or MAR. I would like to avail of full information maximum likelihood (FIML) estimation in Mplus as a means of handling the missing data. I am using the following estimator.

In econometrics, Prais–Winsten estimation is a procedure meant to take care of serial correlation of type AR(1) in a linear model. Conceived by Sigbert Prais and Christopher Winsten in 1954, it is a modification of Cochrane–Orcutt estimation in the sense that it does not lose the first observation, which leads to more efficiency as a result and makes it a special case of feasible generalized least squares.

A method of estimation of nonlinear simultaneous equations models based on the maximization of a likelihood function, subject to the restrictions imposed by the structure. The FIML estimator estimates all the equations and all the unknown parameters jointly, and is asymptotically efficient when the errors are normally distributed. See also limited information maximum likelihood estimation.

The paper proceeds to examine the properties of the full information maximum likelihood (FIML) estimator on data with measurement errors. In contrast to the estimation results for the single-equation methods, it is found that FIML does a good job of pinning down the true parameters on simulated data, confirming the findings by Fuhrer et al.

A method of estimation of a single equation in a linear simultaneous equations model based on the maximization of the likelihood function, subject to the restrictions imposed by the structure. The LIML estimator is efficient among the single-equation estimators when the errors are normally distributed. See also full information maximum likelihood estimation.

Paper: Handling Missing Data by Maximum Likelihood. Paul D. Allison, Statistical Horizons, Haverford, PA, USA. ABSTRACT: Multiple imputation is rapidly becoming a popular method for handling missing data, especially with easy-to-use software.

Analysis of the full, incomplete data set using maximum likelihood estimation is available in AMOS. AMOS is a structural equation modeling package, but it can run multiple linear regression models. AMOS is easy to use and is now integrated into SPSS, but it will not produce residual plots, influence statistics, and other typical regression output.

Chapter 1 provides a general overview of maximum likelihood estimation theory and numerical optimization methods, with an emphasis on the practical implications of each for applied work. Chapter 2 provides an introduction to getting Stata to fit your model by maximum likelihood. Chapter 3 is an overview of the ml command.

You can also use maximum likelihood estimation in this case. The standard errors are like Huber-White: MLR provides Huber-White standard errors. With WLSMV you do not need to provide a weight. The Muthén et al. paper is on the website under Papers. See also Muthén, B. & Satorra, A. In Stata you can also estimate the system with the method of full-information maximum likelihood (FIML) by typing sem (y1.
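The Prais–Winsten modification described in the excerpt above is small enough to sketch numerically. A minimal illustration in Python; the helper name is hypothetical, and ρ is assumed to be already estimated (real implementations iterate an estimate of ρ from the OLS residuals):

```python
import numpy as np

def prais_winsten_transform(y, X, rho):
    """GLS transform for AR(1) errors that keeps the first observation.

    Rows 2..n: y*_t = y_t - rho * y_{t-1}      (the Cochrane-Orcutt step)
    Row 1:     y*_1 = sqrt(1 - rho**2) * y_1   (the Prais-Winsten addition)
    """
    y_s = y - rho * np.roll(y, 1)              # lagged differences
    X_s = X - rho * np.roll(X, 1, axis=0)
    scale = np.sqrt(1.0 - rho**2)
    y_s[0] = scale * y[0]                      # keep, rather than drop, obs 1
    X_s[0] = scale * X[0]
    return y_s, X_s

# OLS on the transformed data then gives the feasible-GLS estimate:
# beta = np.linalg.lstsq(X_s, y_s, rcond=None)[0]
```

Retaining the rescaled first observation is exactly what distinguishes this from Cochrane–Orcutt and is the source of the efficiency gain the excerpt mentions.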
Maximum likelihood estimation: a key resource is the book Maximum Likelihood Estimation in Stata, Gould, Pitblado and Sribney, Stata Press, 3rd ed. A good deal of this presentation is adapted from that excellent treatment of the subject, which I recommend that you buy if you are going to work with MLE in Stata.

Maximum Likelihood Estimation with Stata, Fourth Edition is the essential reference and guide for researchers in all disciplines who wish to write maximum likelihood (ML) estimators in Stata. Beyond providing comprehensive coverage of Stata's ml command for writing ML estimators, the book presents an overview of the underpinnings of maximum likelihood.

A different approach to the simultaneous equation bias problem is the full information maximum likelihood (FIML) estimation method. FIML does not require instrumental variables, but it assumes that the equation errors have a multivariate normal distribution. 2SLS and 3SLS estimation do not assume a particular distribution for the errors.

Introduction to Maximum Likelihood Estimation (Eric Zivot): it can be shown that the maximum likelihood estimator is the best estimator among all possible estimators, especially for large samples. • The formulas for the standard errors of the plug-in principle estimates come from the formulas for the standard errors of the MLEs.
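Tying together the recurring pieces in the excerpts above (log-likelihood, score, and asymptotic standard errors from the Fisher information), here is a minimal sketch for the simplest textbook case, an i.i.d. normal sample with known variance; the numbers are simulated and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=500)   # simulated data
sigma = 2.0                                    # treat the variance as known

# Log-likelihood of mu:
#   l(mu) = -n/2 * log(2*pi*sigma^2) - sum((x - mu)^2) / (2*sigma^2)
# Score: dl/dmu = sum(x - mu) / sigma^2, which is zero at mu_hat = mean(x).
mu_hat = x.mean()

# Fisher information for mu is n / sigma^2, so the asymptotic standard
# error of the MLE is sigma / sqrt(n).
se = sigma / np.sqrt(len(x))
print(f"mu_hat = {mu_hat:.3f}, asymptotic SE = {se:.3f}")
```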
# files input variables

This document lists and describes the names (keywords) of the files input variables to be used in the input file for the abinit executable.

## get1den

Mnemonics: GET the first-order density from _1DEN file
Mentioned in topic(s): topic_nonlinear, topic_ElPhonInt
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [35/1053] in all abinit tests, [9/133] in abinit tutorials

Relevant only for non-self-consistent RF calculations (e.g. to get electron-phonon matrix elements) or for non-linear RF calculations (to get mixed higher-order derivatives you need several perturbed densities and wave functions). Indicates the files from which first-order densities must be obtained, in multi-dataset mode (in single-dataset mode, use ird1den).

NOTE: a negative value of a "get" variable indicates the number of datasets to go backwards; it is not the number to be subtracted from the current dataset to find the proper dataset. As an example:

ndtset 3
jdtset 1 2 4
getXXX -1

refers to dataset 2 when dataset 4 is initialized.

## get1wf

Mnemonics: GET the first-order wavefunctions from _1WF file
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [33/1053] in all abinit tests, [7/133] in abinit tutorials

Optionally used when ndtset > 0 (in the multi-dataset mode), to indicate starting wavefunctions, as an alternative to ird1wf. One should first read the explanations given for the latter variable. This variable is typically used to chain the calculations in the multi-dataset mode, since it describes from which dataset the OUTPUT wavefunctions are to be taken, as INPUT wavefunctions of the present dataset. See also the discussion in getwfk.

## getbscoup

Mnemonics: GET the Bethe-Salpeter COUPling block from…
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [0/1053] in all abinit tests, [0/133] in abinit tutorials

Optionally used when ndtset > 0 (multi-dataset mode), in the case of a Bethe-Salpeter calculation, to indicate that the starting coupling block of the excitonic Hamiltonian is to be taken from the output of a previous dataset. It is used to chain the calculations, since it describes from which dataset the OUTPUT coupling block is to be taken, as INPUT of the present dataset.

If getbscoup == 0, no such use of a previously computed coupling block file is done. If getbscoup is positive, its value gives the index of the dataset to be used as input. If getbscoup is -1, the output of the previous dataset must be taken, which is a frequently occurring case. If getbscoup is a negative number, it indicates the number of datasets to go backward to find the needed file. In this case, if one refers to a non-existent dataset (prior to the first), the coupling block is not initialised from a disk file, so it is as if getbscoup = 0 for that initialisation.

## getbseig

Mnemonics: GET the Bethe-Salpeter EIGenstates from…
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [4/1053] in all abinit tests, [0/133] in abinit tutorials

Optionally used when ndtset > 0 (multi-dataset mode), in the case of a Bethe-Salpeter calculation, to indicate that the starting excitonic eigenstates are to be taken from the output of a previous dataset. It is used to chain the calculations, since it describes from which dataset the OUTPUT eigenstates are to be taken, as INPUT eigenstates of the present dataset.

If getbseig == 0, no such use of a previously computed output eigenstates file is done. If getbseig is positive, its value gives the index of the dataset from which the output states are to be used as input. If getbseig is -1, the output eigenstates of the previous dataset must be taken, which is a frequently occurring case. If getbseig is a negative number, it indicates the number of datasets to go backward to find the needed file. In this case, if one refers to a non-existent dataset (prior to the first), the eigenstates are not initialised from a disk file, so it is as if getbseig = 0 for that initialisation.

## getbsreso

Mnemonics: GET the Bethe-Salpeter RESOnant block from…
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [4/1053] in all abinit tests, [0/133] in abinit tutorials

Optionally used when ndtset > 0 (multi-dataset mode), in the case of a Bethe-Salpeter calculation, to indicate that the starting resonant block of the excitonic Hamiltonian is to be taken from the output of a previous dataset. It is used to chain the calculations, since it describes from which dataset the OUTPUT resonant block is to be taken, as INPUT of the present dataset.

If getbsreso == 0, no such use of a previously computed resonant block file is done. If getbsreso is positive, its value gives the index of the dataset to be used as input. If getbsreso is -1, the output of the previous dataset must be taken, which is a frequently occurring case. If getbsreso is a negative number, it indicates the number of datasets to go backward to find the needed file. In this case, if one refers to a non-existent dataset (prior to the first), the resonant block is not initialised from a disk file, so it is as if getbsreso = 0 for that initialisation.

## getddb

Mnemonics: GET the DDB from…
Mentioned in topic(s): topic_ElPhonInt, topic_TDepES
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [21/1053] in all abinit tests, [9/133] in abinit tutorials

This variable should be used when performing electron-phonon or temperature-dependent calculations in semiconductors with the legacy implementation that computes the e-ph matrix elements at the end of the DFPT run (for the new EPH code, see eph_task).

More detailed explanation: the Born effective charges as well as the dielectric tensor will be read from a previous DFPT calculation of the electric field at q=Gamma. The use of this variable will trigger the cancellation of a residual dipole that leads to an unphysical divergence of the GKK with vanishing q-points. It also greatly improves the k-point convergence speed, as the density of the k-point grid required to obtain the fulfillment of the charge neutrality sum rule is usually prohibitively large.

If getddb == 0, no such use of previously computed Born effective charges and dielectric tensor is done. If getddb is positive, its value gives the index of the dataset from which the output DDB is to be used as input. If getddb is -1, the output DDB of the previous dataset must be taken, which is a frequently occurring case. If getddb is a negative number, it indicates the number of datasets to go backward to find the needed file.
NOTE: a negative value of a "get" variable indicates the number of datasets to go backwards, not an offset from the current dataset (see the example under get1den). Note also that, starting with Abinit v9, one can use getddb_filepath to specify the path of the file directly.

## getddb_filepath

Mnemonics: GET the DDB from FILEPATH
Mentioned in topic(s): topic_multidtset
Variable type: string
Dimensions: scalar
Default value: None
Rarely used, [10/1053] in all abinit tests, [9/133] in abinit tutorials

Specify the path of the DDB file using a string instead of the dataset index. Alternative to getddb and irdddb. The string must be enclosed between quotation marks:

getddb_filepath "../outdata/out_DDB"

## getddk

Mnemonics: GET the DDK wavefunctions from _1WF file
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [79/1053] in all abinit tests, [14/133] in abinit tutorials

Optionally used when ndtset > 0 (in the multi-dataset mode), to indicate starting wavefunctions, as an alternative to irdwfk, irdwfq, ird1wf, irdddk. One should first read the explanations given for these latter variables.

The getwfk, getwfq, get1wf and getddk variables are typically used to chain the calculations in the multi-dataset mode, since they describe from which dataset the OUTPUT wavefunctions are to be taken, as INPUT wavefunctions of the present dataset. We now focus on the getwfk input variable (the only one used in ground-state calculations), but the rules for getwfq and get1wf are similar, with _WFK replaced by _WFQ or _1WF.

If getwfk == 0, no use of a previously computed output wavefunction file appended with _DSx_WFK is done. If getwfk is positive, its value gives the index of the dataset for which the output wavefunction file appended with _WFK must be used. If getwfk is -1, the output wavefunction file with _WFK of the previous dataset must be taken, which is a frequently occurring case. If getwfk is a negative number, it indicates the number of datasets to go backward to find the needed wavefunction file. In this case, if one refers to a non-existent dataset (prior to the first), the wavefunctions are not initialised from a disk file, so it is as if getwfk = 0 for that initialisation. Thanks to this rule, the use of getwfk -1 is rather straightforward: except for the first wavefunctions, which are not initialized by reading a disk file, the output wavefunction of one dataset is the input of the next one.

In the case of a ddk calculation in a multi-dataset run, in order to compute the localisation tensor correctly, it is mandatory to give getddk the value of the current dataset (i.e. getddk3 3) - this is a bit strange and should be changed in the future.

NOTE: a negative value of a "get" variable indicates the number of datasets to go backwards, not an offset from the current dataset (see the example under get1den).

## getdelfd

Mnemonics: GET the 1st derivative of wavefunctions with respect to ELectric FielD, from _1WF file
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [9/1053] in all abinit tests, [0/133] in abinit tutorials

Optionally used when ndtset > 0 (in the multi-dataset mode), to indicate starting wavefunctions, as an alternative to irdwfk, irdwfq, ird1wf, irdddk. One should first read the explanations given for these latter variables. The chaining rules are identical to those described under getddk and getwfk, including the remark about ddk calculations in multi-dataset runs (getddk3 3).

## getden

Mnemonics: GET the DENsity from…
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [232/1053] in all abinit tests, [39/133] in abinit tutorials

Optionally used when ndtset > 0 (multi-dataset mode) and, in the case of a ground-state calculation, if iscf < 0 (non-SCF calculation), to indicate that the starting density is to be taken from the output of a previous dataset. It is used to chain the calculations, since it describes from which dataset the OUTPUT density is to be taken, as INPUT density of the present dataset.

If getden == 0, no such use of a previously computed output density file is done. If getden is positive, its value gives the index of the dataset from which the output density is to be used as input. If getden is -1, the output density of the previous dataset must be taken, which is a frequently occurring case. If getden is a negative number, it indicates the number of datasets to go backward to find the needed file. In this case, if one refers to a non-existent dataset (prior to the first), the density is not initialised from a disk file, so it is as if getden = 0 for that initialisation.
Thanks to this rule, the use of getden -1 is rather straightforward: except for the first density, which is not initialized by reading a disk file, the output density of one dataset is the input of the next one.

Be careful: the output density file of a run with non-zero ionmov does not have the proper name (it has a "TIM" indication) for use as an input of an iscf < 0 calculation. One should use the output density of an ionmov == 0 run.

NOTE: a negative value of a "get" variable indicates the number of datasets to go backwards, not an offset from the current dataset (see the example under get1den).

## getden_filepath

Mnemonics: GET the DEN file from FILEPATH
Mentioned in topic(s): topic_multidtset
Variable type: string
Dimensions: scalar
Default value: None
Rarely used, [3/1053] in all abinit tests, [1/133] in abinit tutorials

Specify the path of the DEN file using a string instead of the dataset index. Alternative to getden and irdden. The string must be enclosed between quotation marks:

getden_filepath "../outdata/out_DEN"

## getdkde

Mnemonics: GET the mixed 2nd derivative of wavefunctions with respect to K and electric field, from _1WF file
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [9/1053] in all abinit tests, [0/133] in abinit tutorials

Optionally used when ndtset > 0 (in the multi-dataset mode), to indicate starting wavefunctions, as an alternative to irdwfk, irdwfq, ird1wf, irdddk. One should first read the explanations given for these latter variables. The chaining rules are identical to those described under getddk and getwfk, including the remark about ddk calculations in multi-dataset runs (getddk3 3).

## getdkdk

Mnemonics: GET the 2nd derivative of wavefunctions with respect to K, from _1WF file
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [11/1053] in all abinit tests, [2/133] in abinit tutorials

Optionally used when ndtset > 0 (in the multi-dataset mode), to indicate starting wavefunctions, as an alternative to irdwfk, irdwfq, ird1wf, irdddk. One should first read the explanations given for these latter variables. The chaining rules are identical to those described under getddk and getwfk.

## getdvdb

Mnemonics: GET the DVDB from…
Mentioned in topic(s): topic_ElPhonInt
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [13/1053] in all abinit tests, [8/133] in abinit tutorials

This variable can be used when performing electron-phonon calculations with optdriver = 7 to read a DVDB file produced in a previous dataset. For example, one can chain a dataset that produces an initial set of DFPT potentials on a relatively coarse q-mesh with a dataset in which these potentials are interpolated onto a denser q-mesh, using eph_task = 5 and eph_ngqpt_fine. Note also that, starting with Abinit v9, one can use getdvdb_filepath to specify the path of the file directly.

## getdvdb_filepath

Mnemonics: GET the DVDB file from FILEPATH
Mentioned in topic(s): topic_multidtset
Variable type: string
Dimensions: scalar
Default value: None
Rarely used, [10/1053] in all abinit tests, [8/133] in abinit tutorials

Specify the path of the DVDB file using a string instead of the dataset index. Alternative to getdvdb and irddvdb. The string must be enclosed between quotation marks:

getdvdb_filepath "../outdata/out_DVDB"

## getefmas

Mnemonics: GET the EFfective MASses from…
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [1/1053] in all abinit tests, [0/133] in abinit tutorials

Optionally used when ndtset > 0 (multi-dataset mode). Only relevant for optdriver = 7 and eph_task = 6. If set to 1, take the data from a _EFMAS file as input. The latter must have been produced using prtefmas.

## gethaydock

Mnemonics: GET the HAYDOCK restart file from…
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [0/1053] in all abinit tests, [0/133] in abinit tutorials

Optionally used when ndtset > 0 (multi-dataset mode), in the case of a Bethe-Salpeter calculation, to indicate that the Haydock iterative technique is to be restarted from the output of a previous dataset.

If gethaydock == 0, no such use of a previously computed restart file is done. If gethaydock is positive, its value gives the index of the dataset to be used as input. If gethaydock is -1, the output of the previous dataset must be taken, which is a frequently occurring case. If gethaydock is a negative number, it indicates the number of datasets to go backward to find the needed file. In this case, if one refers to a non-existent dataset (prior to the first), the restart file is not read from disk, so it is as if gethaydock = 0 for that initialisation.

## getocc

Mnemonics: GET OCC parameters from…
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [2/1053] in all abinit tests, [0/133] in abinit tutorials

This variable is typically used to chain the calculations in the multi-dataset mode (ndtset > 0), since it describes from which dataset the array occ is to be taken as input of the present dataset. The occupation numbers are EVOLVING variables, for which such a chain of calculations is useful.

If getocc == 0, no such use of previously computed output occupations is done. If getocc is positive, its value gives the index of the dataset from which the data are to be used as input; it must be the index of a dataset already computed in the SAME run. If getocc is -1, the output data of the previous dataset must be taken, which is a frequently occurring case. If getocc is a negative number, it indicates the number of datasets to go backward to find the needed data. In this case, if one refers to a non-existent dataset (prior to the first), the data are not initialised from a disk file, so it is as if getocc == 0 for that initialisation.

NOTE that a non-zero getocc MUST be used with occopt == 2, so that the number of bands has to be initialized for each k-point. Of course, these numbers of bands must be identical to the numbers of bands of the dataset from which occ will be copied. The same is true for the number of k-points.

NOTE: a negative value of a "get" variable indicates the number of datasets to go backwards, not an offset from the current dataset (see the example under get1den).
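To make the mechanics of these "get" variables concrete, here is a minimal two-dataset sketch (the values are illustrative): dataset 1 produces a self-consistent density, and dataset 2 restarts from it non-self-consistently, exactly as described under getden above.

ndtset 2

# Dataset 1: self-consistent ground state; writes the _DEN file
prtden1  1

# Dataset 2: non-self-consistent run reading the dataset-1 density
iscf2   -2
getden2 -1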
## getpot_filepath

Mnemonics: GET the KS POTential from FILEPATH
Mentioned in topic(s): topic_multidtset
Variable type: string
Dimensions: scalar
Default value: None
Rarely used, [4/1053] in all abinit tests, [3/133] in abinit tutorials

This variable defines the path of the POT file containing the KS ground-state potential that should be used as input. At present, it is mainly used in the EPH code when performing calculations with the Sternheimer equation. Note that the path must be inserted between quotation marks. Note also that relative paths are interpreted according to the working directory in which Abinit is executed!

## getqps

Mnemonics: GET QuasiParticle Structure
Mentioned in topic(s): topic_multidtset, topic_GW, topic_Susceptibility, topic_SelfEnergy
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [15/1053] in all abinit tests, [0/133] in abinit tutorials

Used when ndtset > 0 (multi-dataset mode) and optdriver = 3 or 4 (screening or sigma step of a GW calculation), to indicate that the eigenvalues and possibly the wavefunctions have to be taken from a previous quasi-particle calculation (instead of the usual DFT starting point). This is to achieve quasi-particle self-consistency. See also irdqps.

NOTE: a negative value of a "get" variable indicates the number of datasets to go backwards, not an offset from the current dataset (see the example under get1den).

## getscr

Mnemonics: GET SCReening (the inverse dielectric matrix) from…
Mentioned in topic(s): topic_multidtset, topic_GW, topic_SelfEnergy
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [72/1053] in all abinit tests, [8/133] in abinit tutorials

Used when ndtset > 0 (multi-dataset mode) and optdriver = 4 (sigma step of a GW calculation), to indicate that the dielectric matrix (_SCR file) is to be taken from the output of a previous dataset. It is used to chain the calculations, since it describes from which dataset the OUTPUT dielectric matrix is to be taken, as INPUT of the present dataset. Note also that, starting with Abinit v9, one can use getscr_filepath to specify the path of the file directly.

If getscr == 0, no such use of a previously computed output _SCR file is done. If getscr is positive, its value gives the index of the dataset from which the output _SCR file is to be used as input. If getscr is -1, the output _SCR file of the previous dataset must be taken, which is a frequently occurring case. If getscr is a negative number, it indicates the number of datasets to go backward to find the needed file. In this case, if one refers to a non-existent dataset (prior to the first), the _SCR file is not initialised from a disk file, so it is as if getscr = 0 for that initialisation.

NOTE: a negative value of a "get" variable indicates the number of datasets to go backwards, not an offset from the current dataset (see the example under get1den).

## getscr_filepath

Mnemonics: GET the SCR file from FILEPATH
Mentioned in topic(s): topic_multidtset
Variable type: string
Dimensions: scalar
Default value: None
Rarely used, [1/1053] in all abinit tests, [0/133] in abinit tutorials

Specify the path of the SCR file using a string instead of the dataset index. Alternative to getscr and irdscr. The string must be enclosed between quotation marks:

getscr_filepath "../outdata/out_SCR"

## getsigeph_filepath

Mnemonics: GET the SIGEPH from FILEPATH
Mentioned in topic(s): topic_multidtset
Variable type: string
Dimensions: scalar
Default value: Output filename of the present dataset
Rarely used, [1/1053] in all abinit tests, [1/133] in abinit tutorials

This variable defines the path of the SIGEPH file with the e-ph self-energy results that should be used as input for further analysis. At present, it is used by the transport driver (eph_task = 7) to read the lifetimes needed to compute carrier mobilities within the RTA.

## getsuscep

Mnemonics: GET SUSCEPtibility (the irreducible polarizability) from…
Mentioned in topic(s): topic_multidtset, topic_GW, topic_SelfEnergy
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [2/1053] in all abinit tests, [0/133] in abinit tutorials

Used when ndtset > 0 (multi-dataset mode) and optdriver = 4 (sigma step of a GW calculation), to indicate that the irreducible polarizability (_SUSC file) is to be taken from the output of a previous dataset. It is used to chain the calculations, since it describes from which dataset the OUTPUT susceptibility is to be taken, as INPUT of the present dataset. Performing a GW calculation starting from the _SUSC file instead of the _SCR file has the advantage that, starting from the irreducible polarizability, one can calculate the screened interaction using different expressions without having to perform a screening calculation from scratch. For example, it is possible to apply a cutoff to the Coulomb interaction in order to facilitate the convergence of the GW correction with respect to the size of the supercell (see vcutgeo and icutcoul).

If getsuscep == 0, no such use of a previously computed output _SUSC file is done. If getsuscep is positive, its value gives the index of the dataset from which the output _SUSC file is to be used as input. If getsuscep is -1, the output _SUSC file of the previous dataset must be taken, which is a frequently occurring case. If getsuscep is a negative number, it indicates the number of datasets to go backward to find the needed file. In this case, if one refers to a non-existent dataset (prior to the first), the _SUSC file is not initialised from a disk file, so it is as if getsuscep = 0 for that initialisation.

NOTE: a negative value of a "get" variable indicates the number of datasets to go backwards, not an offset from the current dataset (see the example under get1den).

## getwfk

Mnemonics: GET the wavefunctions from _WFK file
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [511/1053] in all abinit tests, [61/133] in abinit tutorials

Optionally used when ndtset > 0 (in the multi-dataset mode), to indicate starting wavefunctions, as an alternative to irdwfk. Note also that, starting with Abinit v9, one can use getwfk_filepath to specify the path of the file directly.
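As a minimal sketch of the chaining rules spelled out below (an illustrative convergence study, not a tested input):

ndtset 3
ecut1 10  ecut2 15  ecut3 20
getwfk -1    # each dataset restarts from the wavefunctions of the previous one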
The getwfk, getwfq, get1wf and getddk variables are typically used to chain the calculations in the multi-dataset mode, since they describe from which dataset the OUTPUT wavefunctions are to be taken, as INPUT wavefunctions of the present dataset. We now focus on the getwfk input variable (the only one used in ground-state calculations), but the rules for getwfq and get1wf are similar, with _WFK replaced by _WFQ or _1WF.

If getwfk == 0, no use of a previously computed output wavefunction file appended with _DSx_WFK is done. If getwfk is positive, its value gives the index of the dataset for which the output wavefunction file appended with _WFK must be used. If getwfk is -1, the output wavefunction file with _WFK of the previous dataset must be taken, which is a frequently occurring case. If getwfk is a negative number, it indicates the number of datasets to go backward to find the needed wavefunction file. In this case, if one refers to a non-existent dataset (prior to the first), the wavefunctions are not initialised from a disk file, so it is as if getwfk = 0 for that initialisation. Thanks to this rule, the use of getwfk -1 is rather straightforward: except for the first wavefunctions, which are not initialized by reading a disk file, the output wavefunction of one dataset is the input of the next one.

NOTE: a negative value of a "get" variable indicates the number of datasets to go backwards, not an offset from the current dataset (see the example under get1den).

## getwfk_filepath

Mnemonics: GET the wavefunctions from WFK PATH
Mentioned in topic(s): topic_multidtset
Variable type: string
Dimensions: scalar
Default value: None
Rarely used, [9/1053] in all abinit tests, [8/133] in abinit tutorials

Specify the path of the WFK file using a string instead of the dataset index. Alternative to getwfk and irdwfk. The string must be enclosed between quotation marks:

getwfk_filepath "../outdata/out_WFK"

## getwfkfine_filepath

Mnemonics: GET the fine wavefunctions from FILEPATH
Mentioned in topic(s): topic_multidtset
Variable type: string
Dimensions: scalar
Default value: None
Rarely used, [3/1053] in all abinit tests, [1/133] in abinit tutorials

Specify the path of the fine WFK file using a string instead of the dataset index. Alternative to getwfkfine and irdwfkfine. The string must be enclosed between quotation marks:

getwfkfine_filepath "../outdata/out_WFK"

## getwfq

Mnemonics: GET the wavefunctions from _WFQ file
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [37/1053] in all abinit tests, [5/133] in abinit tutorials

Optionally used when ndtset > 0 (in the multi-dataset mode), to indicate starting wavefunctions, as an alternative to irdwfq. Note also that, starting with Abinit v9, one can use getwfq_filepath to specify the path of the file directly. The getwfk, getwfq, get1wf and getddk variables are typically used to chain the calculations in the multi-dataset mode, since they describe from which dataset the OUTPUT wavefunctions are to be taken, as INPUT wavefunctions of the present dataset. See the discussion in getwfk.

## getwfq_filepath

Mnemonics: GET the k+q wavefunctions from WFQ PATH
Mentioned in topic(s): topic_multidtset
Variable type: string
Dimensions: scalar
Default value: None
Rarely used, [1/1053] in all abinit tests, [0/133] in abinit tutorials

Specify the path of the WFQ file using a string instead of the dataset index. Alternative to getwfq and irdwfq. The string must be enclosed between quotation marks:

getwfq_filepath "../outdata/out_WFQ"

## indata_prefix

Mnemonics: INput DATA PREFIX
Mentioned in topic(s): topic_Control
Variable type: string
Dimensions: scalar
Default value: None
Rarely used, [2/1053] in all abinit tests, [1/133] in abinit tutorials

Prefix for input files. Replaces the analogous entry in the obsolete files_file. This variable is used when Abinit is executed with the new syntax:

abinit run.abi > run.log 2> run.err &

If this option is not specified, a prefix is automatically constructed from the input file name, provided the filename ends with an extension, e.g. .ext (.abi is recommended). If the input filename does not have a file extension, a default is provided.

## ird1den

Mnemonics: Integer that governs the ReaDing of 1st-order DEN file
Mentioned in topic(s): topic_nonlinear
Variable type: integer
Dimensions: scalar
Default value: 1 if iscf < 0, 0 otherwise.
Rarely used, [1/1053] in all abinit tests, [1/133] in abinit tutorials

If the first-order density is needed in single-dataset mode (for example in nonlinear optical response), use ird1den = 1 to read first-order densities from _DENx files produced in other calculations. In multi-dataset mode use get1den. When iscf < 0, the reading of a DEN file is always enforced. A non-zero value of ird1den is treated in the same way as other "ird" variables. For further information about the files file, consult the abinit help file.

## ird1wf

Mnemonics: Integer that governs the ReaDing of _1WF files
Mentioned in topic(s): topic_DFPT
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [2/1053] in all abinit tests, [1/133] in abinit tutorials

Indicates possible starting wavefunctions. As an alternative, one can use the input variables getwfk, getwfq, get1wf or getddk.

Ground-state calculation:
• only irdwfk and getwfk have a meaning
• at most one of irdwfk or getwfk can be non-zero
• if irdwfk and getwfk are both zero, initialize wavefunctions with random numbers for the ground-state calculation
• if irdwfk = 1: read ground-state wavefunctions from a disk file appended with _WFK, produced in a previous ground-state calculation

Response-function calculation:
• one and only one of irdwfk or getwfk MUST be non-zero
• if irdwfk = 1: read ground-state k-wavefunctions from a disk file appended with _WFK, produced in a previous ground-state calculation
• only one of irdwfq or getwfq should be non-zero; if both of them are non-zero, use as k+q file the one defined by irdwfk and/or getwfk
• if irdwfq = 1: read ground-state k+q-wavefunctions from a disk file appended with _WFQ, produced in a previous ground-state calculation
• at most one of ird1wf or get1wf can be non-zero
• if both are zero, initialize first-order wavefunctions to zeroes
• if ird1wf = 1: read first-order wavefunctions from a disk file appended with _1WFx, produced in a previous response-function calculation
• at most one of irdddk or getddk can be non-zero
• one of them must be non-zero if a homogeneous electric field calculation is done (presently, a ddk calculation in the same dataset is not allowed)
• if irdddk = 1: read first-order ddk wavefunctions from a disk file appended with _1WFx, produced in a previous response-function calculation

For further information about the files file, consult the abinit help file.

## irdbscoup

Mnemonics: Integer that governs the ReaDing of COUPling block
Mentioned in topic(s): topic_BSE
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [0/1053] in all abinit tests, [0/133] in abinit tutorials

Start the Bethe-Salpeter calculation from the BSC file containing the coupling block produced in a previous run.

## irdbseig

Mnemonics: Integer that governs the ReaDing of BS_EIG file
Mentioned in topic(s): topic_BSE
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [2/1053] in all abinit tests, [0/133] in abinit tutorials

Start the Bethe-Salpeter calculation from the BS_EIG file containing the exciton eigenvectors produced in a previous run.

## irdbsreso

Mnemonics: Integer that governs the ReaDing of RESOnant block
Mentioned in topic(s): topic_BSE
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [4/1053] in all abinit tests, [0/133] in abinit tutorials

Start the Bethe-Salpeter calculation from the BSR file containing the resonant block produced in a previous run.

## irdddb

Mnemonics: Integer that governs the ReaDing of DDB file
Mentioned in topic(s): topic_ElPhonInt
Variable type: integer
Dimensions: scalar
Default value: 1 if iscf < 0, 0 otherwise.
Rarely used, [2/1053] in all abinit tests, [0/133] in abinit tutorials

This variable should be used when performing electron-phonon or temperature-dependence calculations. The Born effective charges as well as the dielectric tensor will be read from a previous DFPT calculation of the electric field at q=Gamma. The use of this variable will trigger the cancellation of a residual dipole that leads to an unphysical divergence of the GKK with vanishing q-points. It also greatly improves the k-point convergence speed, as the density of the k-point grid required to obtain the fulfillment of the charge neutrality sum rule is usually prohibitively large. A non-zero value of irdddb is treated in the same way as other "ird" variables. For further information about the files file, consult the abinit help file. Note also that, starting with Abinit v9, one can use getddb_filepath to specify the path of the DDB file directly.

## irdddk

Mnemonics: Integer that governs the ReaDing of DDK wavefunctions, in _1WF files
Mentioned in topic(s): topic_DFPT
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [7/1053] in all abinit tests, [2/133] in abinit tutorials

Indicates possible starting wavefunctions. As an alternative, one can use the input variables getwfk, getwfq, get1wf or getddk. The rules for ground-state and response-function calculations are the same as for ird1wf; see the lists given there. For further information about the files file, consult the abinit help file.
## irdden

Mnemonics: Integer that governs the ReaDing of DEN file
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 1 if iscf < 0, 0 otherwise.
Rarely used, [8/1053] in all abinit tests, [5/133] in abinit tutorials

Start the ground-state calculation from the density file of a previous run. When iscf < 0, the reading of a DEN file is always enforced. A non-zero value of irdden is treated in the same way as other "ird" variables. For further information about the files file, consult the abinit help file.

## irddvdb

Mnemonics: Integer that governs the ReaDing of DVDB file
Mentioned in topic(s): topic_ElPhonInt
Variable type: integer
Dimensions: scalar
Default value: None
Rarely used, [0/1053] in all abinit tests, [0/133] in abinit tutorials

This variable can be used when performing electron-phonon calculations with optdriver = 7 to read an input DVDB file. See also getdvdb.

## irdefmas

Mnemonics: Integer to ReaD the EFfective MASses from…
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [0/1053] in all abinit tests, [0/133] in abinit tutorials

Optionally used when ndtset > 0 (multi-dataset mode). Only relevant for optdriver = 7 and eph_task = 6. If set to 1, take the data from a _EFMAS file as input. The latter must have been produced using prtefmas in another run.

## irdhaydock

Mnemonics: Integer that governs the ReaDing of the HAYDOCK restart file
Mentioned in topic(s): topic_BSE
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [0/1053] in all abinit tests, [0/133] in abinit tutorials

Used to restart the Haydock iterative technique from the HAYDR_SAVE file produced in a previous run.

## irdqps

Mnemonics: Integer that governs the ReaDing of QuasiParticle Structure
Mentioned in topic(s): topic_GW, topic_multidtset, topic_Susceptibility, topic_SelfEnergy
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [3/1053] in all abinit tests, [0/133] in abinit tutorials

Relevant only when optdriver = 3 or 4. Indicates the file from which the eigenvalues and possibly the wavefunctions must be obtained, in order to achieve a self-consistent quasi-particle calculation. See also getqps.

## irdscr

Mnemonics: Integer that governs the ReaDing of the SCReening
Mentioned in topic(s): topic_GW, topic_multidtset, topic_SelfEnergy
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [10/1053] in all abinit tests, [5/133] in abinit tutorials

Relevant only when optdriver = 4. Indicates the file from which the dielectric matrix must be obtained. As an alternative, one can use the input variable getscr. When optdriver = 4, at least one of irdscr or getscr (alternatively, irdsuscep or getsuscep) must be non-zero. A non-zero value of irdscr is treated in the same way as other "ird" variables. For further information about the files file, consult the abinit help file.

## irdsuscep

Mnemonics: Integer that governs the ReaDing of the SUSCEPtibility
Mentioned in topic(s): topic_GW, topic_multidtset, topic_SelfEnergy
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [2/1053] in all abinit tests, [0/133] in abinit tutorials

Relevant only when optdriver = 4. Indicates the file from which the irreducible polarizability must be obtained. As an alternative, one can use the input variable getsuscep. When optdriver = 4, at least one of irdsuscep or getsuscep (alternatively, irdscr or getscr) must be non-zero. A non-zero value of irdsuscep is treated in the same way as other "ird" variables. For further information about the files file, consult the abinit help file.

## irdwfk

Mnemonics: Integer that governs the ReaDing of _WFK files
Mentioned in topic(s): topic_multidtset
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [94/1053] in all abinit tests, [16/133] in abinit tutorials

Indicates possible starting wavefunctions. As an alternative, one can use the input variables getwfk, getwfq, get1wf or getddk. The rules for ground-state and response-function calculations are the same as for ird1wf; see the lists given there. For further information about the files file, consult the abinit help file.

## irdwfq

Mnemonics: Integer that governs the ReaDing of _WFQ files
Mentioned in topic(s): topic_DFPT
Variable type: integer
Dimensions: scalar
Default value: 0
Rarely used, [5/1053] in all abinit tests, [0/133] in abinit tutorials

Indicates possible starting wavefunctions. As an alternative, one can use the input variables getwfk, getwfq, get1wf or getddk. The rules for ground-state and response-function calculations are the same as for ird1wf; see the lists given there. For further information about the files file, consult the abinit help file.

## kssform

Mnemonics: Kohn Sham Structure file FORMat
Mentioned in topic(s): topic_Susceptibility, topic_SelfEnergy
Variable type: integer
Dimensions: scalar
Default value: 1
Rarely used, [10/1053] in all abinit tests, [2/133] in abinit tutorials

Governs the choice of the format for the file that contains the Kohn-Sham electronic structure information, for use in GW calculations; see the input variables optdriver and nbandkss.

• kssform = 1: a single .kss file (double precision) containing complete information on the Kohn-Sham structure (eigenstates and the pseudopotentials used) will be generated through full diagonalization of the complete Hamiltonian matrix. The file has the standard abinit header at the beginning.
• kssform = 3: a single .kss file (double precision) containing complete information on the Kohn-Sham structure (eigenstates and the pseudopotentials used) will be generated through the usual conjugate-gradient algorithm (so, a restricted number of states). The file has the standard abinit header at the beginning.
Warning: for the time being, istwfk must be 1 for all the k-points.

## outdata_prefix

Mnemonics: OUTput DATA PREFIX
Mentioned in topic(s): topic_Control
Variable type: string
Dimensions: scalar
Default value: None
Rarely used, [2/1053] in all abinit tests, [1/133] in abinit tutorials

Prefix for output files. Replaces the analogous entry in the obsolete files_file. This variable is used when Abinit is executed with the new syntax:

abinit run.abi > run.log 2> run.err &

If this option is not specified, a prefix is automatically constructed from the input file name, provided the filename ends with an extension, e.g. .ext (.abi is recommended). If the input filename does not have a file extension, a default is provided.

## output_file

Mnemonics: OUTPUT FILE
Mentioned in topic(s): topic_Control
Variable type: string
Dimensions: scalar
Default value: None
Moderately used, [15/1053] in all abinit tests, [13/133] in abinit tutorials

String specifying the name of the main output file when Abinit is executed with the new syntax:

abinit run.abi > run.log 2> run.err &

If not specified, the name of the output file is automatically generated by replacing the file extension of the input file with .abo. To specify the filename in the input, use the syntax:

output_file "t01.out"

with the string enclosed between double quotation marks.

## pp_dirpath

Mnemonics: PseudoPotential DIRectory PATH
Mentioned in topic(s): topic_Control
Variable type: string
Dimensions: scalar
Default value: ""
Very frequently used, [1049/1053] in all abinit tests, [133/133] in abinit tutorials

This variable specifies the directory that will be prepended to the names of the pseudopotentials specified in pseudos. This option is useful when all your pseudos are gathered in a single directory in your file system and you do not want to type the absolute path for each pseudopotential file. This variable is used when Abinit is executed with the new syntax:

abinit run.abi > run.log 2> run.err &

The string must be quoted in double quotation marks:

pp_dirpath "/home/user/my_pseudos/"
pseudos "al.psp8, as.psp8"

If pp_dirpath is not present, the filenames specified in pseudos are used directly.

## prt1dm

Mnemonics: PRinT 1-DiMensional potential and density
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Moderately used, [13/1053] in all abinit tests, [0/133] in abinit tutorials

If set >= 1, provide a one-dimensional projection of potential and density, for each of the three axes. This corresponds to averaging the potential or the density on two-dimensional slices of the FFT grid.

## prtden

Mnemonics: PRinT the DENsity
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0 if nimage > 1, 1 otherwise.
Moderately used, [342/1053] in all abinit tests, [39/133] in abinit tutorials

If set to 1 or a larger value, provide output of the electron density in real space rho(r), in units of electrons/Bohr^3. If ionmov == 0, the name of the density file will be the root output name, followed by _DEN. If ionmov /= 0, density files will be output at each time step, with the name being made of:
• the root output name,
• followed by _TIMx, where x is related to the time step (see later),
• then followed by _DEN.
The file structure of this unformatted output file is described in this section.

If prtden is lower than 0, two files will be printed for restart every |prtden| steps, with the names being made of:
• the root temporary name,
• followed by _DEN_x, where x is 0000 or 0001 alternately.
The most recent of the two files should be used for restart, and copied to root input name_DS2_DEN. To perform a restart, in multi-dataset mode, use ndtset 2 and jdtset 2 3 (that is, 2 datasets, numbered 2 and 3). In dataset 2, get the density you just copied (getden2 -1), perform a non-self-consistent calculation and print the wave function (prtwf2 1). In dataset 3, get the previous wavefunctions (getwfk3 -1) and continue the calculation (an input sketch is given at the end of this entry). This complicated procedure is due to the fact that reading the density is only allowed for a non-SCF calculation, and also for a dataset different from 0 or the previous one, the option chosen here.

Please note that in the case of PAW (usepaw = 1) calculations, the _DEN density output is not the full physical electron density. If what is wanted is the full physical electron density, say for post-processing with AIM or visualization, prtden > 1 will produce the physical electron density or other interesting quantities (see below). Nevertheless, even in the PAW case, when chaining together calculations where the density from one calculation is to be used in a subsequent calculation, it is necessary to use the _DEN files and not one of the other files produced with prtden > 1, i.e. _PAWDEN, _ATMDEN_xxx or else. Note that the usual _DEN file is always generated as soon as prtden >= 1. Options 2 to 6 for prtden are relevant only for usepaw = 1 and control the output of the full electron density in the PAW case:

• prtden = 2: causes generation of a file _PAWDEN that contains the bulk valence charge density together with the PAW on-site contributions, and has the same format as the other density files.
• prtden = 3: causes generation of a file _PAWDEN that contains the bulk full charge density (valence + core).
• prtden = 4: causes generation of three files _ATMDEN_CORE, _ATMDEN_VAL and _ATMDEN_FULL, which respectively contain the core, valence and full atomic protodensity (the density of the individual component atoms in vacuum superposed at the bulk atomic positions). This can be used to generate various visualizations of the bonding density.
• prtden = 5: options 2 and 4 taken together.
• prtden = 6: options 3 and 4 taken together.
• prtden = 7: causes the generation of all the individual contributions to the bulk valence charge density: n_tilde-n_hat (_N_TILDE), n_onsite (_N_ONE) and n_tilde_onsite (_NT_ONE). This is for diagnosis purposes only.

Options 3 to 6 currently require the user to supply the atomic core and valence density in external files in the working directory. The files must be named properly; for example, the files for an atom of type 1 should be named "core_density_atom_type1.dat" and "valence_density_atom_type1.dat". The file should be a text file, where the first line is assumed to be a comment and the subsequent lines contain two values each: a radial coordinate and the value of the density n(r). Please note that it is n(r) which should be supplied, not n(r)/r^2. The first coordinate point must be the origin, i.e. r = 0. The atomic densities are spherically averaged, so they are assumed to be completely spherically symmetric, even for open shells.

NOTE: in the PAW case, DO NOT use the _PAWDEN or _ATMDEN_xxx files produced by prtden > 1 to chain the density output from one calculation as the input to another; use the _DEN file for that.
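Written out as an input fragment, the restart recipe above might look as follows. This is a sketch only: iscf2 -2 is one possible choice of non-self-consistent mode, and the file copying described above must still be done by hand.

ndtset 2  jdtset 2 3

# Dataset 2: read the copied density, run non-self-consistently, write the wavefunctions
iscf2   -2
getden2 -1
prtwf2   1

# Dataset 3: continue from the dataset-2 wavefunctions
getwfk3 -1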
## prtdos

Mnemonics: PRinT the Density Of States
Mentioned in topic(s): topic_printing, topic_ElecDOS
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Moderately used, [39/1053] in all abinit tests, [5/133] in abinit tutorials

Provide output of the Density of States if set to 1…5. Can either use a smearing technique (prtdos = 1 or 4) or the tetrahedron method (prtdos = 2, 3 or 5). If prtdos = 3 or 4, provide output of the angular-momentum projected Local Density of States inside a sphere centered on different atoms (all, or only those specified by iatsph), and possibly output the m-decomposed LDOS if prtdosm is defined. The resolution of the linear grid of energies for which the DOS is computed can be tuned thanks to dosdeltae.

If prtdos = 1, the smeared density of states is obtained from the eigenvalues, properly weighted at each k-point using wtk, and smeared according to occopt and tsmear. All levels that are present in the calculation are taken into account (occupied and unoccupied). In order to compute the DOS of an insulator with prtdos = 1, compute its density thanks to a self-consistent calculation (with a non-metallic occopt value: 0, 1 or 2), then use prtdos = 1, together with iscf = -3 and a metallic occopt (between 3 and 7), providing the needed smearing. If prtdos = 1, the name of the DOS file is the root name for the output files, followed by "_DOS".
• Note 1: occopt must be between 3 and 7.
• Note 2: The sampling of the Brillouin zone that is needed to get a converged DOS is usually much finer than the sampling needed to converge the total energy or the geometry of the system, unless tsmear is very large (in which case the DOS is not obtained properly). A separate convergence study is needed.

If prtdos = 2, the DOS is computed using the tetrahedron method. As in the case of prtdos = 1, all levels that are present in the calculation are taken into account (occupied and unoccupied). In this case, the k-points must have been defined using the input variable ngkpt or the input variable kptrlatt. There must be at least two non-equivalent points in the irreducible Brillouin zone to use prtdos = 2. It is strongly advised that you use a non-shifted k-point grid (shiftk 0 0 0): such grids naturally contain more extremal points (band minima and maxima at Gamma or at the zone boundaries) than shifted grids, and lead to more non-equivalent points than shifted grids, for the same grid spacing. There is no need to take care of the occopt or tsmear input variables, and there is no subtlety to be taken into account for insulators. The computation can be done in the self-consistent case as well as in the non-self-consistent case, using iscf = -3. This allows one to refine the DOS at fixed starting density. In that case, if ionmov == 0, the name of the DOS file will be the root output name, followed by _DOS (like in the prtdos = 1 case). However, if ionmov /= 0, DOS files will be output at each time step, with the name being made of
• the root output name,
• followed by _TIMx, where x is related to the time step (see later),
• then followed by _DOS.

If prtdos = 3, the same tetrahedron method as for prtdos = 2 is used, but the angular-momentum projected (l=0,1,2,3,4) DOS in a sphere centered on the atoms is computed (not directly the total atom-centered DOS). The preparation of this case, the parameters under which the computation is to be done, and the file denomination are similar to the prtdos = 2 case; a minimal setup is sketched below.
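As a concrete illustration of the tetrahedron-method setup (a hedged sketch; the grid values are arbitrary assumptions, not recommendations):

    # DOS with the tetrahedron method on a homogeneous, non-shifted grid
    prtdos 2
    ngkpt 12 12 12
    nshiftk 1
    shiftk 0 0 0
    iscf -3        # optional: non-self-consistent refinement at fixed density

The same skeleton applies to prtdos = 3, with natsph, iatsph and ratsph added to define the projection spheres, as described next.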
However, three additional input variables might be provided, describing the atoms that are the center of the sphere (input variables natsph and iatsph), as well as the radius of this sphere (input variable ratsph). In the case of PAW, the ratsph radius has to be greater than or equal to the largest PAW radius of the atom types considered (which is read from the PAW atomic data file; see rc_sph or r_paw). Additionally, printing and/or approximations in PAW mode can be controlled with the pawprtdos keyword (in particular, pawprtdos = 2 can be used to quickly compute a very good approximation of the DOS).

If prtdos = 4, delivers the sphere-projected DOS (like prtdos = 3), on the basis of a smearing approach (like prtdos = 1). See prtdos = 1 for the additional input variables to be specified.

If prtdos = 5, delivers the spin-spin DOS in the nspinor == 2 case, using the tetrahedron method (as prtdos = 2).

## prtdosm

Mnemonics: PRinT the Density Of States with M decomposition
Mentioned in topic(s): topic_printing, topic_ElecDOS
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [1/1053] in all abinit tests, [0/133] in abinit tutorials

Relevant only when prtdos = 3. If set to 1, the m-decomposed LDOS is delivered in the DOS file. Note that prtdosm computes the M-resolved partial DOS for complex spherical harmonics, giving e.g. DOS(L,M) == DOS(L,-M) (without spin-orbit). On the contrary, the DFT+U occupation matrix (see dmatpawu) is in the real spherical harmonics basis. If set to 2, the m-decomposed LDOS is delivered in the DOS file; in this case, prtdosm computes the M-resolved partial DOS for real spherical harmonics, in the same basis as the DFT+U occupation matrix.

## prteig

Mnemonics: PRinT EIGenenergies
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0 if nimage > 1, 1 otherwise.
Test list: Moderately used, [115/1053] in all abinit tests, [21/133] in abinit tutorials

If set to 1, a file *_EIG, containing the k-points and one-electron eigenvalues, is printed.

## prtelf

Mnemonics: PRinT Electron Localization Function (ELF)
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [3/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1 or a larger value, provide output of the ELF in real space elf(r). This is a dimensionless quantity bounded between 0 and 1. The name of the ELF file will be the root output name, followed by _ELF. Like a _DEN file, it can be analyzed by cut3d. However, unlike densities, in the case of spin-polarized calculations the spin-down component cannot be obtained by subtracting the spin-up component from the total ELF. Hence, when spin-polarized calculations are performed, the code also produces output files with _ELF_UP and _ELF_DOWN extensions. (For technical reasons these files also contain two components, but the second is zero. So, to perform an analysis of _ELF_UP and _ELF_DOWN files with cut3d, you have to answer "ispden= 0 → Total density" when cut3d asks you which ispden to choose. Also remember that the spin-down component cannot be obtained by using cut3d on the _ELF file. Sorry for the inconvenience; this will be fixed in the next release.) ELF is not yet implemented in the non-collinear spin case.
If prtelf is set to 2, in the case of a spin-polarized calculation, the total ELF is computed from an alternative approach which should better take into account the existence of spin-dependent densities (see the documentation in /doc/theory/ELF of your ABINIT repository). Please note that ELF is not yet implemented in the case of PAW (%usepaw = 1) calculations.

## prtfsurf

Mnemonics: PRinT Fermi SURFace file
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [2/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1, provide a Fermi surface file in the BXSF format (Xcrysden). If prtfsurf = 1, a _BXSF file readable by XCrySDen will be produced at the end of the calculation. The file contains information on the band structure of the system and can be used to visualize the Fermi surface or any other energy isosurface. prtfsurf = 1 is compatible only with SCF calculations (iscf > 1) or NSCF runs in which the occupation factors and Fermi level are recalculated once convergence is achieved (iscf = -3). The two methods should produce the same Fermi surface provided that the k-meshes are sufficiently dense. The k-mesh used for the sampling of the Fermi surface can be specified using the standard variables ngkpt, shiftk, and nshiftk. Note, however, that the mesh must be homogeneous and centered on Gamma (multiple shifts are not supported by Xcrysden).

## prtgden

Mnemonics: PRinT the Gradient of electron DENsity
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [4/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1 or a larger value, provide output of the gradient of the electron density in real space grho(r), in units of Bohr^-(5/2). The names of the gradient-of-electron-density files will be the root output name, followed by _GDEN1, _GDEN2, _GDEN3 for each principal direction (indeed, it is a vector). Like a _DEN file, they can be analyzed by cut3d. The file structure of this unformatted output file is described in this section.

## prtgeo

Mnemonics: PRinT the GEOmetry analysis
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [6/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1 or a larger value, provide output of a geometry analysis (bond lengths and bond angles). The value of prtgeo is taken by the code to be the maximum coordination number of atoms in the system. It will deduce a maximum number of "nearest" and "next-nearest" neighbors accordingly, and compute the corresponding bond lengths. It will compute bond angles for the "nearest" neighbors only. If ionmov == 0, the name of the file will be the root output name, followed by _GEO. If ionmov /= 0, one file will be output at each time step, with the name being made of
• the root output name,
• followed by _TIMx, where x is related to the time step (see later),
• then followed by _GEO.
The content of the file should be rather self-explanatory. No output is provided if prtgeo is lower than or equal to 0. If prtgeo > 0, the maximum number of atoms (natom) is 9999.

## prtgkk

Mnemonics: PRinT the GKK matrix elements file
Mentioned in topic(s): topic_printing, topic_ElPhonInt
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [5/1053] in all abinit tests, [1/133] in abinit tutorials

If set to 1, provide output of the electron-phonon "gkk" matrix elements, for further treatment by the mrggkk utility or the anaddb utility. Note that symmetry will be disabled for the calculation of the perturbation, forcing the inclusion of all k-points and all perturbation directions. Additional information on the electron-phonon treatment in ABINIT is given in the eph tutorial.

## prtgsr

Mnemonics: PRinT the GSR file
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: prtgsr = 0
Test list: Rarely used, [1/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1, ABINIT will produce a GSR file at the end of the GS calculation. The GSR file contains the most important GS results (band structure, forces, stresses, electronic density). The GSR file can be read by AbiPy and used for further post-processing.

## prtkbff

Mnemonics: PRinT Kleinman-Bylander Form Factors
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Only relevant if: iomode == 3
Test list: Rarely used, [1/1053] in all abinit tests, [0/133] in abinit tutorials

This input variable activates the output of the Kleinman-Bylander form factors in the netcdf WFK file produced at the end of the ground-state calculation. Remember to set iomode to 3. The form factors are needed to compute the matrix elements of the commutator [Vnl, r] of the non-local part of the (NC) pseudopotentials. This WFK file can therefore be used to perform optical and/or many-body calculations with external codes such as DP/EXC and Yambo. The option is ignored in the case of PAW.

Important: At the time of writing (November 11, 2020), istwfk must be set to 1 for all k-points in the IBZ, since external codes do not support wavefunctions given on the reduced G-sphere. Moreover, useylm must be 0 (the default if NC pseudos are used).

## prtkden

Mnemonics: PRinT the Kinetic energy DENsity
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 1 if usekden == 1 and nimage == 1, else 0
Test list: Rarely used, [9/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1 or a larger value, provide output of the kinetic energy density in real space tau(r), in units of Bohr^-5. The name of the kinetic energy density file will be the root output name, followed by _KDEN. Like a _DEN file, it can be analyzed by cut3d. The file structure of this unformatted output file is described in this section. Note that the computation of the kinetic energy density must be activated, thanks to the input variable usekden. Please note that the kinetic energy density is not yet implemented in the case of PAW (%usepaw = 1) calculations.

## prtkpt

Mnemonics: PRinT the K-PoinTs sets
Mentioned in topic(s): topic_printing, topic_Output, topic_k-points
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Moderately used, [16/1053] in all abinit tests, [1/133] in abinit tutorials

If set /= 0, proceeds to a detailed analysis of different k-point grids. Works only if kptopt is positive, and neither kptrlatt nor ngkpt are defined. ABINIT will stop after this analysis.

Different sets of k-point grids are defined, with common values of shiftk. In each set, ABINIT increases the length of vectors of the supercell (see kptrlatt) by integer steps. The different sets are labelled by "iset".
For each k-point grid, kptrlen and nkpt are computed (the latter always invoking kptopt = 1, that is, full use of symmetries). A series is finished when the computed kptrlen is more than twice as large as the input variable kptrlen. After the examination of the different sets, ABINIT summarizes, for each nkpt, the best possible grid, that is, the one with the largest computed kptrlen.

Note that this analysis is also performed when prtkpt = 0, as soon as neither kptrlatt nor ngkpt are defined. But, in this case, no analysis report is given, and the code selects the grid with the smallest ngkpt for the desired kptrlen. However, this analysis takes some time (well, sometimes it is only a few seconds; it depends on the value of the input kptrlen), and it is better to examine the full analysis once for a given cell, set of symmetries and shiftk, and then reuse the best grid in all the production runs.

If set to -2, the code stops in invars1 after the computation of the irreducible set, and a file named kpts.nc with the list of the k-points and the corresponding weights is produced.

## prtlden

Mnemonics: PRinT the Laplacian of electron DENsity
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [4/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1 or a larger value, provide output of the Laplacian of the electron density in real space, in units of Bohr^-(7/2). The name of the Laplacian-of-electron-density file will be the root output name, followed by _LDEN. Like a _DEN file, it can be analyzed by cut3d. The file structure of this unformatted output file is described in this section.

## prtpot

Mnemonics: PRinT total POTential
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Moderately used, [28/1053] in all abinit tests, [0/133] in abinit tutorials

If set >= 1, provide output of the total (Kohn-Sham) potential (sum of the local pseudopotential, the Hartree potential, and the xc potential). If ionmov == 0, the name of the potential file will be the root output name, followed by _POT. If ionmov /= 0, potential files will be output at each time step, with the name being made of
• the root output name,
• followed by _TIMx, where x is related to the time step (see later),
• then followed by _POT.
The file structure of this unformatted output file is described in this section. No output is provided by a negative value of this variable.

## prtpsps

Mnemonics: PRint the PSPS file
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [1/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1, the code produces a netcdf file (PSPS.nc) with the internal tables used by Abinit to apply the pseudopotential part of the KS Hamiltonian. The data can be visualized with AbiPy. If prtpsps is set to -1, the code will exit after the output of the PSPS.nc file.

## prtspcur

Mnemonics: PRinT the SPin CURrent density
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [1/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1 or a larger value, provide output of the current density of the different spin directions (x, y, z) in the whole unit cell. Requires spinorial wave functions, i.e. nspinor = 2. Experimental: this does not work yet.
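Several of the printing flags documented above follow the same naming pattern (_DEN, _LDEN, _POT, with _TIMx inserted for moving ions) and can be combined in a single dataset. A minimal illustrative fragment (the chosen flags are assumptions, not recommendations):

    prtden 1     # electron density           -> _DEN
    prtlden 1    # Laplacian of the density   -> _LDEN
    prtpot 1     # total Kohn-Sham potential  -> _POT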
## prtstm

Mnemonics: PRinT the STM density
Mentioned in topic(s): topic_printing, topic_STM
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [3/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1 or a larger value, provide output of the electron density in real space rho(r), made only from the electrons close to the Fermi energy, in a range of energy (positive or negative) determined by the (positive or negative, but non-zero) value of the STM bias stmbias. This is a very approximate way to obtain STM profiles: one can choose an equidensity surface and consider that the STM tip will follow this surface. Such an equidensity surface might be determined with the help of Cut3D, and by further post-processing of it (to be implemented). The big approximations of this technique are: neglect of the finite size of the tip, and position-independent transfer matrix elements between the tip and the surface.

The charge density is provided in units of electrons/Bohr^3. The name of the STM density file will be the root output name, followed by _STM. Like a _DEN file, it can be analyzed by cut3d. The file structure of this unformatted output file is described in this section.

For the STM charge density to be generated, one must give, as an input file, the converged wavefunctions obtained from a previous run, at exactly the same k-points and cut-off energy, self-consistently determined, using the occupation numbers from occopt = 7. In the run with positive prtstm, one has to use:

Note that you might have to adjust the value of nband as well, for the treatment of unoccupied states, because the automatic determination of nband will often not include enough unoccupied states. When prtstm is non-zero, the stress tensor is set to zero.

No output of the _STM file is provided for prtstm lower than or equal to 0. No other printing variables for density or potentials should be activated (e.g. prtden has to be set to zero).

## prtsuscep

Mnemonics: PRinT the SUSCEPtibility file (the irreducible polarizability)
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [5/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 0, no _SUSC file will be produced after the screening calculation; only the _SCR file will be output.

## prtvclmb

Mnemonics: PRinT V CouLoMB
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [4/1053] in all abinit tests, [0/133] in abinit tutorials

If set >= 1, outputs a file with the Coulomb potential, defined as Hartree + local pseudopotential. If prtvclmb = 1 and in the case of PAW (%usepaw > 0), the full core potential is added for the Hartree part, with the on-site corrections vh1 - vht1. If prtvclmb = 2, only the smooth part of the Coulomb potential is output.

## prtvha

Mnemonics: PRinT V_HArtree
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Moderately used, [15/1053] in all abinit tests, [0/133] in abinit tutorials

If set >= 1, provide output of the Hartree potential. If ionmov == 0, the name of the potential file will be the root output name, followed by _VHA. If ionmov /= 0, potential files will be output at each time step, with the name being made of
• the root output name,
• followed by _TIMx, where x is related to the time step (see later),
• then followed by _VHA.
The file structure of this unformatted output file is described in this section. No output is provided by a negative value of this variable.

## prtvhxc

Mnemonics: PRinT V_HXC
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Moderately used, [18/1053] in all abinit tests, [0/133] in abinit tutorials

If set >= 1, provide output of the sum of the Hartree potential and the xc potential. If ionmov == 0, the name of the potential file will be the root output name, followed by _VHXC. If ionmov /= 0, potential files will be output at each time step, with the name being made of
• the root output name,
• followed by _TIMx, where x is related to the time step (see later),
• then followed by _VHXC.
The file structure of this unformatted output file is described in this section. No output is provided by a negative value of this variable.

## prtvol

Mnemonics: PRinT VOLume
Mentioned in topic(s): topic_printing, topic_Output
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Moderately used, [225/1053] in all abinit tests, [10/133] in abinit tutorials

Controls the volume of printed output. In particular, this concerns the explicit echo of eigenenergies and residuals for all bands and k-points in the main output file, as well as the analysis of the value and location of the maximal density (and magnetization). The standard choice is 0. Positive values (all are allowed) generally print more and more in the output and log files, while negative values are for debugging (or preprocessing only) and cause the code to stop at some point.

• 0 → The eigenenergies and residuals for all bands and k-points are not echoed in the main output file. There are exceptions: the eigenvalues of the first k-point are printed at the end of the SCF loop, and, if iscf = -2 and kptopt <= 0, the eigenvalues for all the k-points are printed anyway, for a maximum of 50 k-points. Due to some subtlety, if for some dataset prtvol is non-zero, the limit on input and output echoes cannot be enforced, so it is as if prtvol = 1 for all the datasets for which prtvol was set to 0.
• 1 → the eigenvalues for the first k-point are printed in all cases, at the end of the SCF loop.
• 2 → all the eigenvalues and the residuals are printed at the end of the SCF loop. Also, the analysis of the value and location of the maximal density (and magnetization) is printed.
• 3 → prints memory information for lobpcg.
• 4 → like 3, and also prints information on the convergence of the lobpcg algorithm.
• 10 → the eigenvalues are printed at every SCF iteration, as well as other additions.

Debugging options:
• -1 → stop in abinit (main program), before the call to driver. Useful to see the effect of the preprocessing of input variables (memory needed, effect of symmetries, k-points…) without going further. Runs very fast, on the order of a second.
• -2 → same as -1, except that only the first dataset is printed. All the non-default input variables associated with all datasets are printed in the output file, but only for the first dataset. Also, all the input variables are written in the NetCDF file "OUT.nc", even if their value is the default.
• -3 → stop in gstate, before the call to scfcv, move or brdmin.
Useful to debug pseudopotentials.
• -4 → stop in move, after completion of all loops.
• -5 → stop in brdmin, after completion of all loops.
• -6 → stop in scfcv, after completion of all loops.
• -7 → stop in vtorho, after the first rho is obtained.
• -8 → stop in vtowfk, after the first k-point is treated.
• -9 → stop in cgwf, after the first wavefunction is optimized.
• -10 → stop in getghc, after the Hamiltonian is applied once.
This debugging feature is not yet activated in the RF routines. Note that fftalg offers another option for debugging.

## prtvolimg

Mnemonics: PRinT VOLume for IMaGes
Mentioned in topic(s): topic_printing, topic_Output
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [6/1053] in all abinit tests, [0/133] in abinit tutorials

Controls the volume of printed output when an algorithm using images of the cell is used (nimage > 1). When such an algorithm is activated, the printed output (in the output file) can be large and difficult to read. Using prtvolimg = 1, the printed output for each image is reduced to the unit cell, atomic positions, total energy, forces, stresses, velocities and convergence residuals. Using prtvolimg = 2, the printed output for each image is reduced to the total energy and the convergence residuals only.

## prtvpsp

Mnemonics: PRinT V_PSeudoPotential
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [4/1053] in all abinit tests, [0/133] in abinit tutorials

If set >= 1, provide output of the local pseudopotential. If ionmov == 0, the name of the potential file will be the root output name, followed by _VPSP. If ionmov /= 0, potential files will be output at each time step, with the name being made of
• the root output name,
• followed by _TIMx, where x is related to the time step (see later),
• then followed by _VPSP.
The file structure of this unformatted output file is described in this section. No output is provided by a negative value of this variable.

## prtvxc

Mnemonics: PRinT V_XC
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Moderately used, [15/1053] in all abinit tests, [0/133] in abinit tutorials

If set >= 1, provide output of the exchange-correlation potential. If ionmov == 0, the name of the potential file will be the root output name, followed by _VXC. If ionmov /= 0, potential files will be output at each time step, with the name being made of
• the root output name,
• followed by _TIMx, where x is related to the time step (see later),
• then followed by _VXC.
The file structure of this unformatted output file is described in this section. No output is provided by a negative value of this variable.

## prtwant

Mnemonics: PRinT WANT file
Mentioned in topic(s): topic_printing, topic_Wannier
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Moderately used, [14/1053] in all abinit tests, [4/133] in abinit tutorials

Flag used to indicate that either the Wannier90 or the WanT interface will be used.

• prtwant = 1 → Use the ABINIT-WanT interface. Provide an output file that can be used by the WanT post-processing program (see http://www.wannier-transport.org). The value of prtwant indicates the version of the WanT code that can read it. Currently only the value prtwant = 1 is implemented, corresponding to WanT version 1.0.1, available since Oct. 22, 2004.
Notes: Several requirements must be fulfilled by the wavefunctions. Among them, the following are mandatory:
• A uniform grid of k-points, including the GAMMA point, must be used.
• The use of time-reversal symmetry is not allowed (istwfk = 1).
• The list of k-points must be ordered such that, in the coordinates (three-component vectors), the third index varies the most rapidly, then the second index, then the first index.
If these requirements are not fulfilled, the program will stop and an error message is returned.

As an example of a k-point grid in the case of systems that have some 3D character (1D systems are easy):

    nkpt 8
    kpt  0 0 0      0 0 1/2    0 1/2 0    0 1/2 1/2
         1/2 0 0    1/2 0 1/2  1/2 1/2 0  1/2 1/2 1/2
    istwfk *1

Also, in order to use WanT as a post-processing program for ABINIT you might have to recompile it with the appropriate flags (see the ABINIT makefile). Up to now, only the -convert big-endian flag was found to be mandatory, for machines with a little-endian default.

• prtwant = 2 → Use the ABINIT-Wannier90 interface. ABINIT will produce the input files required by Wannier90 and will run Wannier90 to produce the maximally-localized Wannier functions (see http://www.wannier.org).
Notes:
• The files that are created can also be used by Wannier90 in stand-alone mode.
• In order to use Wannier90 as a post-processing program for ABINIT you might have to recompile it with the appropriate flags (see the ABINIT makefile). You might use ./configure --enable-wannier90.
• There are some other variables related to the interface of Wannier90 and ABINIT. See the w90 varset.

• prtwant = 3 → Use the ABINIT-Wannier90 interface after converting the input wavefunctions to quasi-particle wavefunctions. ABINIT will produce the input files required by Wannier90 and will run Wannier90 to produce the maximally-localized Wannier functions (see http://www.wannier.org).
Notes:
• An input file of DFT wave functions is required which is completely consistent with the _KSS file used in the self-consistent GW calculation. This means that kssform 3 must be used to create the _KSS file, and the output _WFK file from the same run must be used as input here.
• Wannier90 requires nshiftk = 1, and shiftk = 0 0 0 is recommended. The k-point set used for the GW calculation (typically the irreducible BZ set created using kptopt = 1) and that for the ABINIT-Wannier90 interface must be consistent.
• Full-BZ wavefunctions should be generated in the run calling the interface by setting kptopt = 3, iscf = -2, and nstep = 3. This will simply use symmetry to transform the input IBZ wavefunctions to the full-BZ set, still consistent with the GW _KSS input.
• The final _QPS file created by the self-consistent GW run is required as input.
• Any value of gwcalctyp between 20 and 29 should be suitable; so, for example, Hartree-Fock maximally-localized Wannier functions could be generated by setting gwcalctyp = 25.

## prtwf

Mnemonics: PRinT the WaveFunction
Mentioned in topic(s): topic_printing, topic_vdw
Variable type: integer
Dimensions: scalar
Default value: 0 if nimage > 1, 1 otherwise.
Test list: Moderately used, [181/1053] in all abinit tests, [20/133] in abinit tutorials

If prtwf = 1, provide output of the wavefunction and eigenvalue file. The file structure of this unformatted output file is described in this section. For a standard ground-state calculation, the name of the wavefunction file will be the root output name, followed by _WFK. If nqpt = 1, the root name will be followed by _WFQ.
For response-function calculations, the root name will be followed by _1WFx, where x is the number of the perturbation. The dataset information will be added as well, if relevant. No wavefunction output is provided by prtwf = 0. If prtwf = -1, the code writes the wavefunction file only if convergence is not achieved in the self-consistent cycle. If prtwf = 2, a file pwfn.data is produced, to be used as input for the CASINO QMC code; see more explanation at the end of this section. If prtwf = 3, the file that is created is nearly the same as with prtwf = 1, except that the records that should contain the wavefunction are empty (such records exist, but store nothing). This is useful to generate size-reduced DDK files, to perform an optic run. Indeed, in the latter case, only matrix elements are needed (so, no wavefunctions), but possibly a large number of conduction bands, so that the DDK file might be huge if it contained the wavefunctions.

Further explanation for the prtwf = 2 case. To produce a wave function suitable for use as a CASINO trial wave function, certain ABINIT parameters must be set correctly. Primarily, CASINO (and QMC methods generally) can only take advantage of time-reversal symmetry, and not the full set of symmetries of the crystal structure. Therefore, ABINIT must be instructed to generate k-points not just in the irreducible Brillouin zone, but in a full half of the Brillouin zone (using time-reversal symmetry to generate the other half). Additionally, unless instructed otherwise, ABINIT avoids the need for internal storage of many of the coefficients of its wave functions for k-points that have the property 2k = G_latt, where G_latt is a reciprocal lattice vector, by making use of the property that c_k(G) = c^*_k(-G-G_latt). ABINIT must be instructed not to do this, in order to output the full set of coefficients for use in CASINO. See the ABINIT theoretical background documents ABINIT/Infos/Theory/geometry.pdf and ABINIT/Infos/Theory/1WF.pdf for more information.

The first of these requirements is met by setting the ABINIT input variable kptopt to 2, and the second by setting istwfk to 1 for all the k-points. Since CASINO is typically run with relatively small numbers of k-points, this is easily done by defining an array of "1" in the input file. For example, for the 8 k-points generated with ngkpt 2 2 2, we add the following lines to the input file:

    # Turn off special storage mode for time-reversal k-points
    istwfk 1 1 1 1 1 1 1 1
    # Use only time-reversal symmetry, not the full set of symmetries.
    kptopt 2

Other useful input variables of relevance to the plane waves ABINIT will produce include ecut, nshiftk, shiftk, nband, occopt, occ, spinat and nsppol (see the relevant input variable documents in ABINIT/Infos/). If ABINIT is run in multiple-dataset mode, the different wave functions for the various datasets are exported as pwfn1.data, pwfn2.data, …, pwfnn.data, where the numbers are the contents of the input array jdtset (defaults to 1, 2, …, ndtset). Once the routine is incorporated into the ABINIT package, it is anticipated that there will be an input variable to control whether or not a CASINO pwfn.data file is written.

Other issues related to prtwf = 2. The exporter does not currently work when ABINIT is used in parallel mode on multiple processors if k-point parallelism is chosen. ABINIT does not store the full wave function on each processor but rather splits the k-points between the processors, so no one processor could write out the whole file.
Clearly this could be fixed, but we have not done it yet. The sort of plane-wave DFT calculations usually required to generate QMC trial wave functions execute very rapidly anyway, and will generally not require a parallel machine. The outqmc routine currently bails out with an error if this combination of modes is selected; this will hopefully be fixed later. There has not been very extensive testing of less common situations, such as different numbers of bands for different k-points, or more complicated spin-polarized systems, so care should be taken when using the output in these circumstances. If there is any doubt about the output of this routine, the first place to look is the log file produced by ABINIT: if there are any warnings about incorrectly normalized orbitals or non-integer occupation numbers, there is probably something set wrong in the input file.

## prtwf_full

Mnemonics: PRinT Wavefunction file on the FULL mesh
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Only relevant if: prtwf == 1
Test list: Rarely used, [1/1053] in all abinit tests, [0/133] in abinit tutorials

If set to 1 in a ground-state calculation, the code will output another WFK file (with extension FULL_WFK) containing the wavefunctions in the full BZ, as well as a text file with the tables used for the tetrahedron method. Note that prtwf_full requires prtwf == 1 and a ground-state calculation done on a homogeneous k-mesh (see ngkpt and shiftk). The tetrahedron table is produced only if the number of k-points in the irreducible zone (nkpt) is greater than 3.

## prtxml

Mnemonics: PRinT an XML output
Mentioned in topic(s): topic_printing
Variable type: integer
Dimensions: scalar
Default value: 0
Test list: Rarely used, [1/1053] in all abinit tests, [0/133] in abinit tutorials

Create an XML output with common values. The corresponding DTD is distributed in the sources as extras/post_processing/abinitRun.dtd. The DTD is not yet fully implemented, and is currently restricted to ground-state computations (and derivatives thereof, such as geometry optimisation).

## pseudos

Mnemonics: PSEUDOpotentialS
Mentioned in topic(s): topic_Control
Variable type: string
Dimensions: scalar
Default value:
Test list: Very frequently used, [1049/1053] in all abinit tests, [133/133] in abinit tutorials

String defining the list of pseudopotential files when Abinit is executed with the new syntax:

    abinit run.abi > run.log 2> run.err &

The string must be quoted in double quotation marks, and multiple files should be separated by a comma, e.g.

    pseudos "al.psp8, as.psp8"

This variable is mandatory and the list must contain ntypat pseudos, ordered according to the znucl array. Relative and absolute paths are allowed, as in:

    pseudos "../pseudos/al.psp8, ../pseudos/as.psp8"

or

    pseudos "/home/user/pseudos/al.psp8, /home/user/pseudos/as.psp8"

If all the pseudos are located in the same directory, it is much easier to use a common prefix with pp_dirpath. For instance, the previous example is equivalent to:

    pp_dirpath "/home/user/pseudos"
    pseudos "al.psp8, as.psp8"

Important: Shell variables, e.g. $HOME, or the tilde syntax ~ for the user home are not supported in pseudopotential names.
The only exception is the shell variable $ABI_PSPDIR, which can be used in conjunction with pp_dirpath:

    pp_dirpath = "$ABI_PSPDIR"
    pseudos "al.psp8, as.psp8"

Before running the calculation, one should set the value of $ABI_PSPDIR inside the terminal using:

    export ABI_PSPDIR="/home/user/pseudos"

## tmpdata_prefix

Mnemonics: TeMPorary DATA PREFIX
Mentioned in topic(s): topic_Control
Variable type: string
Dimensions: scalar
Default value: None

Prefix for temporary files. This variable is used when Abinit is executed with the new syntax:

    abinit run.abi > run.log 2> run.err &

If this option is not specified, a prefix is automatically constructed from the input file name, provided the filename ends with an extension, e.g. .ext.
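Putting the new-syntax bookkeeping variables together, the header of a run file might look as follows (a hedged sketch; every path and prefix below is an illustrative assumption):

    # run.abi
    output_file "run.abo"
    outdata_prefix "outdata/run"
    tmpdata_prefix "/scratch/user/run"
    pp_dirpath "$ABI_PSPDIR"
    pseudos "al.psp8, as.psp8"

followed by the usual invocation abinit run.abi > run.log 2> run.err &.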
# Ticket to Ride

Ticket to Ride[1] is a board game for up to $5$ players. The goal of the game is to set up train lines (and to thwart the opponents' attempts at setting up their train lines). At the beginning of play, each player is assigned four train lines. A player may choose to discard as many of these four assignments as she likes. Each assignment has a score, corresponding to its difficulty (so, typically, a train line between e.g. Stockholm and Tokyo would be worth more than a train line between e.g. Stockholm and Utrecht). At the end of the game, each player gets points for the assignments that they have successfully completed, and penalty points for the assignments that they have failed to complete.

An assignment consists of a pair of cities that are to be connected by a series of shorter railway routes. A route can be claimed (for a certain cost associated with the route), but things are complicated by the fact that there is only a limited number of routes, and once a player claims a route, none of the other players can claim it. A player has successfully set up a train line between two cities if there is a path between the two cities using only routes that have been claimed by this player. For simplicity, we will ignore all additional aspects of the game (including the actual process of claiming routes and additional ways to score points).

For instance, if your assignment is to connect Stockholm and Amsterdam in the figure above, you would probably want to claim the routes between Stockholm and Copenhagen, and between Copenhagen and Amsterdam. But if another player manages to claim the route between Copenhagen and Stockholm before you, your train line would have to use some other routes, e.g. by going to Copenhagen via Oslo.

In this problem, we will consider the rather bold strategy of trying to complete all four assignments (typically, this will be quite hard). As a preliminary assessment of the difficulty of achieving this, we would like to calculate the minimum cost of setting up all four lines, assuming that none of the other players interfere with our plans. Your job is to write a program to determine this minimum cost.

## Input

The input starts with two integers $1 \le n \le 40$, $0 \le m \le 1\,000$, giving the number of cities and railway routes in the map, respectively. Then follow $n$ lines, giving the names of the $n$ cities. City names are at most $20$ characters long and consist solely of lower case letters ('a'-'z'). After this follow $m$ lines, each containing the names of two different cities and an integer $1 \le c \le 10\,000$, indicating that there is a railway route with cost $c$ between the two cities. Note that there may be several railway routes between the same pair of cities. You may assume that it is always possible to set up a train line from any city to any other city. Finally, there are four lines, each containing the names of two cities, giving the four train line assignments.

## Output

Output a single line containing a single integer, the minimum possible cost to set up all four train lines.
Sample Input 1:

    10 15
    stockholm
    amsterdam
    london
    berlin
    copenhagen
    oslo
    helsinki
    dublin
    reykjavik
    brussels
    oslo stockholm 415
    stockholm helsinki 396
    oslo london 1153
    oslo copenhagen 485
    stockholm copenhagen 522
    copenhagen berlin 354
    copenhagen amsterdam 622
    helsinki berlin 1107
    london amsterdam 356
    berlin amsterdam 575
    london dublin 463
    reykjavik dublin 1498
    reykjavik oslo 1748
    london brussels 318
    brussels amsterdam 173
    stockholm amsterdam
    oslo london
    reykjavik dublin
    brussels helsinki

Sample Output 1:

    3907

Sample Input 2:

    2 1
    first
    second
    first second 10
    first first
    first first
    second first
    first first

Sample Output 2:

    10

Footnotes
1. Ticket to Ride is copyrighted by Days of Wonder, Inc.
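The task above is a minimum-cost Steiner forest problem with at most eight terminals (four pairs, not necessarily distinct). One standard approach, sketched below in Python, combines the classic subset DP for Steiner trees (Dreyfus-Wagner style, with Dijkstra relaxations) with a DP over partitions of the four assignments. This is an illustrative solution sketch, not part of the original problem statement.

    import sys, heapq

    def main():
        data = sys.stdin.read().split()
        it = iter(data)
        n, m = int(next(it)), int(next(it))
        idx = {next(it): i for i in range(n)}            # city name -> node id
        adj = [[] for _ in range(n)]
        for _ in range(m):
            a, b, c = idx[next(it)], idx[next(it)], int(next(it))
            adj[a].append((b, c))
            adj[b].append((a, c))
        pairs = [(idx[next(it)], idx[next(it)]) for _ in range(4)]

        terms = [v for p in pairs for v in p]            # 8 terminals, duplicates allowed
        T = len(terms)
        INF = float('inf')
        # dp[S][v]: cheapest tree containing terminal subset S and vertex v
        dp = [[INF] * n for _ in range(1 << T)]
        for i, t in enumerate(terms):
            dp[1 << i][t] = 0
        for mask in range(1, 1 << T):
            row = dp[mask]
            # merge two subtrees that meet at a common vertex v
            sub = (mask - 1) & mask
            while sub:
                a, b = dp[sub], dp[mask ^ sub]
                for v in range(n):
                    s = a[v] + b[v]
                    if s < row[v]:
                        row[v] = s
                sub = (sub - 1) & mask
            # Dijkstra relaxation: extend the tree along graph edges
            pq = [(d, v) for v, d in enumerate(row) if d < INF]
            heapq.heapify(pq)
            while pq:
                d, v = heapq.heappop(pq)
                if d > row[v]:
                    continue
                for u, w in adj[v]:
                    if d + w < row[u]:
                        row[u] = d + w
                        heapq.heappush(pq, (d + w, u))
        steiner = [min(r) for r in dp]                   # cheapest tree per terminal subset

        # forest DP: split the four assignments into groups, one Steiner tree per group
        best = [INF] * 16
        best[0] = 0
        for pm in range(1, 16):
            sub = pm
            while sub:
                tmask = 0
                for i in range(4):
                    if (sub >> i) & 1:
                        tmask |= 3 << (2 * i)            # both endpoints of pair i
                cand = best[pm ^ sub] + steiner[tmask]
                if cand < best[pm]:
                    best[pm] = cand
                sub = (sub - 1) & pm
        print(best[15])

    main()

On Sample Input 1 this prints 3907; duplicated terminals (as in Sample Input 2) are handled naturally, since a subset whose terminals coincide is already connected at zero extra cost.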
Mat. Sb. (issue contents):
• Generalized Lyapunov theorem on Mal'tsev manifolds (V. V. Gorbatsevich), p. 163
• An equation of convolution type on convex domains in $\mathbf R^2$ (V. V. Napalkov), p. 178
• Nonunimodular ring groups and Hopf–von Neumann algebras (L. I. Vainerman, G. I. Kats), p. 194
• Endomorphism rings of free modules (G. M. Brodskii), p. 226
• Sources and sinks of $A$-diffeomorphisms of surfaces (R. V. Plykin), p. 243
• The rate of rational approximation and the property of single-valuedness of an analytic function in the neighborhood of an isolated singular point (A. A. Gonchar), p. 265
• Convergence to a process with independent increments in a scheme of increasing sums of dependent random variables (V. G. Mikhailov), p. 283
• Some questions in the theory of nonlinear elliptic and parabolic equations (M. I. Vishik, A. V. Fursikov), p. 300
J Integr Plant Biol. 2009, Vol. 51, Issue 8: 727-739. Special Issue: Sexual Reproduction. Invited Expert Review.

### Pollen Tube Growth: a Delicate Equilibrium Between Secretory and Endocytic Pathways

Alessandra Moscatelli* and Aurora Irene Idilli
Dipartimento di Biologia L. Gorini, Università degli Studi di Milano, Via Celoria 26, 20133 Milano, Italy
Received: 2009-02-25; Accepted: 2009-04-20; Published: 2009-08-17
* Author for correspondence. Tel: +39 2 5031 4843; Fax: +39 2 5031 4840; E-mail: [email protected]

Abstract: Although pollen tube growth is a prerequisite for higher plant fertilization and seed production, the processes leading to pollen tube emission and elongation are crucial for understanding the basic mechanisms of tip growth. It was generally accepted that pollen tube elongation occurs by accumulation and fusion of Golgi-derived secretory vesicles (SVs) in the apical region, or clear zone, where they were thought to fuse with a restricted area of the apical plasma membrane (PM), defining the apical growth domain. Fusion of SVs at the tip releases cell wall material to the outside and provides new segments of PM. However, electron microscopy studies have clearly shown that the PM incorporated at the tip greatly exceeds what elongation requires, and a mechanism of PM retrieval was already postulated in the mid-nineteen-eighties. Recent studies on endocytosis during pollen tube growth showed that different endocytic pathways occur in distinct zones of the tube, including the apex, and led to a new hypothesis to explain vesicle accumulation at the tip; namely, that endocytic vesicles contribute substantially to the V-shaped vesicle accumulation in addition to SVs, and that exocytosis does not involve the entire apical domain. New insights suggested the intriguing hypothesis that modulation between exo- and endocytosis in the apex contributes to maintaining PM polarity in terms of lipid/protein composition, and revealed distinct degradation pathways that could have different functions in the physiology of the cell. Pollen tube growth in vivo is closely regulated by interaction with style molecules. The study of endocytosis and membrane recycling in pollen tubes opens new perspectives for studying pollen tube-style interactions in vivo.

Moscatelli A, Idilli AI (2009). Pollen tube growth: a delicate equilibrium between secretory and endocytic pathways. J. Integr. Plant Biol. 51(8), 727-739.
# Tag Info

## New answers tagged semiclassical

Inside vs outside, there is a sign change inside the square root, so that changes the nature of the "phase" $\phi(r)$. Normally, when you match wave functions you require that $\psi_\mathrm{left}(x) = \psi_\mathrm{right}(x)$ (continuity) and that the derivative changes according to what you get when you integrate the Schrodinger equation: ...

For the first question, you must simply take $l = 0$ (in your notation, $l$ is the curly letter). It's because angular momentum is conserved in a radially symmetric field problem, and you can simply take $L^2 = \hbar^2 l(l+1)$. So, for $l = 0$ states you simply omit the $\frac{L^2}{2mr^2}$ term. The potential is given in the problem. It behaves like a constant for some $r$ values, and ...
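For reference, the matching conditions invoked in the first (truncated) answer can be stated compactly; this is the standard textbook completion, not recovered text from the original answer. At a matching point $x_0$,

$$\psi_{\mathrm{left}}(x_0) = \psi_{\mathrm{right}}(x_0), \qquad \psi'_{\mathrm{right}}(x_0) - \psi'_{\mathrm{left}}(x_0) = \frac{2m}{\hbar^2} \lim_{\varepsilon \to 0} \int_{x_0-\varepsilon}^{x_0+\varepsilon} V(x)\,\psi(x)\,dx,$$

so the derivative is continuous wherever $V$ is bounded, and acquires a jump only across a $\delta$-like singularity.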
# Atomic Number of Lithium

Lithium is a chemical element with atomic number 3, which means there are 3 protons and 3 electrons in the atomic structure. The chemical symbol for lithium is Li. It is a soft, silvery, light alkali metal, the lightest solid metal, located in the first group of the periodic table. Lithium is the first element in which an additional electron shell is added. It does not occur in its pure form in nature, but can be found in a number of minerals.

The atomic mass of lithium is 6.941 u, corresponding to a molar mass of 6.941 grams per mole. The melting point of lithium is 180.5 °C and its boiling point is 1317 °C.

The number of neutrons in an atom follows from

$$\text{number of neutrons} = \text{rounded mass number} - \text{atomic number}.$$

Valency of lithium: the atomic number of lithium is 3, so it needs to lose one electron to attain a stable electronic configuration, giving it a valency of 1. Like all the alkali metals, lithium has a single valence electron in its outer electron shell, which is easily removed to create an ion with a positive charge (a cation) that combines with anions to form salts. For comparison, since the atomic numbers of lithium, boron and sulphur are 3, 5 and 16 respectively, their valencies are 1, 3 and 2.

Lithium, the lightest metallic element, is used in heat transfer applications and as a scavenger in metallurgy; many nonmetallic elements are scavenged by lithium. Its most common uses include the production of batteries and its use in medication. Lithium is also commonly used in greases, in flares and in pyrotechnics.

In pharmaceutical use, each tablet for oral administration contains lithium carbonate USP, 300 mg, and the following inactive ingredients: microcrystalline cellulose, povidone, sodium lauryl sulfate, sodium starch glycolate type A, colloidal silicon dioxide and calcium stearate. Lithium is an element of the alkali-metal group with atomic number 3, atomic weight 6.94, and an emission line at 671 nm on the flame photometer. The empirical formula for lithium citrate is C6H5Li3O7, with a molecular weight of 209.93. Although odorless, lithium fluoride has a bitter-saline taste.
Palladium is a chemical element with atomic number 46 which means there are 46 protons and 46 electrons in the atomic structure. Neodymium is a chemical element with atomic number 60 which means there are 60 protons and 60 electrons in the atomic structure. In the periodic table, the elements are listed in order of increasing atomic number Z. Valency of Lithium = 1 Valency of Boron = 3 Valency of Sulphur = 2 In 1995, the Commission recommended that all δ(7 Li) values be reported relative to the lithium carbonate reference material LSVEC. Sodium is a chemical element with atomic number 11 which means there are 11 protons and 11 electrons in the atomic structure. Lithium oxide, an inorganic chemical compound with lithium and oxide ions, is used as a flux in ceramic glazes. Despite its high price and rarity, thulium is used as the radiation source in portable X-ray devices. The chemical symbol for Iodine is I. Iodine is the heaviest of the stable halogens, it exists as a lustrous, purple-black metallic solid at standard conditions that sublimes readily to form a violet gas. It is fairly soft and slowly tarnishes in air. Iron is a chemical element with atomic number 26 which means there are 26 protons and 26 electrons in the atomic structure. Pure germanium is a semiconductor with an appearance similar to elemental silicon. Europium is one of the least abundant elements in the universe. Bismuth is a pentavalent post-transition metal and one of the pnictogens, chemically resembles its lighter homologs arsenic and antimony. Helium is a chemical element with atomic number 2 which means there are 2 protons and 2 electrons in the atomic structure. The chemical symbol for Bismuth is Bi. The chemical symbol for Titanium is Ti. Neon is a chemical element with atomic number 10 which means there are 10 protons and 10 electrons in the atomic structure. Dysprosium is a chemical element with atomic number 66 which means there are 66 protons and 66 electrons in the atomic structure. Aluminium is a silvery-white, soft, nonmagnetic, ductile metal in the boron group. Francium is a highly radioactive metal that decays into astatine, radium, and radon. Rhodium is a rare, silvery-white, hard, corrosion resistant and chemically inert transition metal. Its electronic configuration is 2,1. Einsteinium is the seventh transuranic element, and an actinide. Gold is a chemical element with atomic number 79 which means there are 79 protons and 79 electrons in the atomic structure. Chromium is a steely-grey, lustrous, hard and brittle metal4 which takes a high polish, resists tarnishing, and has a high melting point. The number of electrons in each element’s electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. The ordering of the electrons in the ground state of multielectron atoms, starts with the lowest energy state (ground state) and moves progressively from there up the energy scale until each of the atom’s electrons has been assigned a unique set of quantum numbers. Radon is a radioactive, colorless, odorless, tasteless noble gas. Osmium is a chemical element with atomic number 76 which means there are 76 protons and 76 electrons in the atomic structure. Total number of protons in the nucleus is called the atomic number of the atom and is given the symbol Z. See also: Atomic Number – Does it conserve in a nuclear reaction? These means each atom contains 3 protons. 
Lithium definition, a soft, silver-white metallic element, the lightest of all metals, occurring combined in certain minerals. A very important usage if lithium is in batteries, especially rechargeable ones for our modern communication devices. In nuclear industry, especially natural and artificial samarium 149 has an important impact on the operation of a nuclear reactor. All isotopes of radium are highly radioactive, with the most stable isotope being radium-226. Chemical symbol for Lithium is Li. Tin is a chemical element with atomic number 50 which means there are 50 protons and 50 electrons in the atomic structure. Krypton is a member of group 18 (noble gases) elements. The chemical symbol for Rhenium is Re. Chemically, indium is similar to gallium and thallium. The chemical symbol for Lithium is Li. The atom consist of a small but massive nucleus surrounded by a cloud of rapidly moving electrons. The filling of the electron shells depends on their orbital. The chemical properties of the atom are determined by the number of protons, in fact, by number and arrangement of electrons. The chemical symbol for Lanthanum is La. Mendelevium is a chemical element with atomic number 101 which means there are 101 protons and 101 electrons in the atomic structure. Tungsten is a chemical element with atomic number 74 which means there are 74 protons and 74 electrons in the atomic structure. Rubidium is a soft, silvery-white metallic element of the alkali metal group, with an atomic mass of 85.4678. Although classified as a rare earth element, samarium is the 40th most abundant element in the Earth’s crust and is more common than such metals as tin. Under normal conditions, sulfur atoms form cyclic octatomic molecules with a chemical formula S8. Osmium is the densest naturally occurring element, with a density of 22.59 g/cm3. Indium is a post-transition metal that makes up 0.21 parts per million of the Earth’s crust. The total electrical charge of the nucleus is therefore +Ze, where e (elementary charge) equals to 1,602 x 10-19 coulombs. Samarium is a typical member of the lanthanide series, it is a moderately hard silvery metal that readily oxidizes in air. Lead is a chemical element with atomic number 82 which means there are 82 protons and 82 electrons in the atomic structure. The alkali metals are so called because reaction with water forms alkalies (i.e., strong bases capable of neutralizing acids). Wieser and T.B. Nobelium is the tenth transuranic element and is the penultimate member of the actinide series. Gold is a transition metal and a group 11 element. The chemical symbol for Curium is Cm. The chemical symbol for Cadmium is Cd. The chemical symbol for Samarium is Sm. Elemental sulfur is a bright yellow crystalline solid at room temperature. The chemical symbol for Iridium is Ir. Under standard conditions, it is the lightest metal and the lightest solid element. Lithium, for example, has three protons and four neutrons, giving it a mass number of 7. Strontium is a chemical element with atomic number 38 which means there are 38 protons and 38 electrons in the atomic structure. There is another general usage of lithium in ceramics and glass industry. The ninth member of the lanthanide series, terbium is a fairly electropositive metal that reacts with water, evolving hydrogen gas. Its structure is analogous to that of sodium chloride, but it is much less soluble in water. The chemical symbol for Phosphorus is P. 
As an element, phosphorus exists in two major forms—white phosphorus and red phosphorus—but because it is highly reactive, phosphorus is never found as a free element on Earth. The chemical symbol for Krypton is Kr. Manganese is a metal with important industrial metal alloy uses, particularly in stainless steels. The chemical symbol for Francium is Fr. The chemical symbol for Copper is Cu. The chemical symbol for Tantalum is Ta. Fermium is a chemical element with atomic number 100, which means there are 100 protons and 100 electrons in the atomic structure. Iron is by mass the most common element on Earth, forming much of Earth's outer and inner core. Lithium can be quite easily cut with a knife and is almost as light as wood. Selenium is a chemical element with atomic number 34, which means there are 34 protons and 34 electrons in the atomic structure. Sodium is a soft, silvery-white, highly reactive metal. Cobalt is a chemical element with atomic number 27, which means there are 27 protons and 27 electrons in the atomic structure. Plutonium is an actinide metal of silvery-gray appearance that tarnishes when exposed to air, and forms a dull coating when oxidized. Lead is widely used as a gamma shield. Chlorine is an extremely reactive element and a strong oxidising agent: among the elements, it has the highest electron affinity and the third-highest electronegativity, behind only oxygen and fluorine. Antimony compounds have been known since ancient times and were powdered for use as medicine and cosmetics, often known by the Arabic name kohl.

Occurrence and Uses of Lithium: Lithium hydroxide is used to remove carbon dioxide from the air in spacecraft and submarines. Lithium metal goes into non-rechargeable lithium batteries as well as the rechargeable cells of modern electronic devices, and it also serves as an alloying agent with metals such as aluminium and magnesium. Like the other reactive alkali and alkaline-earth metals (barium among them), lithium is never found in nature as a free element. In a neutral atom there are as many electrons as protons, so lithium's three protons are balanced by three electrons arranged in the configuration 2,1.

The most commonly used spontaneous fission neutron source is the radioactive isotope californium-252. Platinum is used in laboratory equipment, electrical contacts and electrodes, platinum resistance thermometers, and dentistry equipment. The chemical symbol for Potassium is K; potassium was first isolated from potash, the ashes of plants, from which its name derives. Several rare-earth elements were first identified in minerals from the gadolinite mine in Ytterby, Sweden.
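As a quick worked instance of the neutron-number formula given above: for the common lithium-7 isotope,

$\text{Number of neutrons} = A - Z = 7 - 3 = 4,$

matching the three protons and four neutrons stated earlier.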
How do you balance the equation 4 Fe(s) + 3 O2(g) → 2 Fe2O3(s)? Feb 11, 2018 The balanced equation is $4\,\mathrm{Fe}(s) + 3\,\mathrm{O_2}(g) \to 2\,\mathrm{Fe_2O_3}(s)$.
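To verify the coefficients, count the atoms of each element on both sides (a check added here for clarity):

$\text{Fe: } 4 \times 1 = 4 = 2 \times 2; \qquad \text{O: } 3 \times 2 = 6 = 2 \times 3.$

Both iron and oxygen balance, and 4, 3, 2 are the smallest whole numbers that do so.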
# CBSE Class 12 Maths Notes Differential Equations Differential Equations is part of Class 12 Maths Notes for Quick Revision. Here we have given Class 12 Maths Notes Differential Equations. Differential Equation: An equation involving the independent variable, the dependent variable, derivatives of the dependent variable with respect to the independent variable, and constants is called a differential equation, e.g. $\frac{dy}{dx} + xy = \sin x$. Ordinary Differential Equation: An equation involving derivatives of the dependent variable with respect to only one independent variable is called an ordinary differential equation, e.g. $\frac{d^{2}y}{dx^{2}} + \left(\frac{dy}{dx}\right)^{3} = 0$. From any given relationship between the dependent and independent variables, a differential equation can be formed by differentiating it with respect to the independent variable and eliminating the arbitrary constants involved. Order of a Differential Equation: The order of a differential equation is defined as the order of the highest order derivative of the dependent variable with respect to the independent variable involved in the given differential equation. Note: The order of the differential equation cannot be more than the number of arbitrary constants in the equation. Degree of a Differential Equation: The highest exponent of the highest order derivative is called the degree of a differential equation, provided the exponent of each derivative and of the unknown variable appearing in the differential equation is a non-negative integer. Note (i) Order and degree (if defined) of a differential equation are always positive integers. (ii) The differential equation is a polynomial equation in derivatives. (iii) If the given differential equation is not a polynomial equation in its derivatives, then its degree is not defined. Formation of a Differential Equation: To form a differential equation from a given relation, we use the following steps: Step I: Write the given equation and see the number of arbitrary constants it has. Step II: Differentiate the given equation with respect to the independent variable n times, where n is the number of arbitrary constants in the given equation. Step III: Eliminate all arbitrary constants from the equations formed after differentiating in step (II) and the given equation. Step IV: The equation obtained without the arbitrary constants is the required differential equation. ## Solution of the Differential Equation A function of the form y = Φ(x) + C, which satisfies the given differential equation, is called the solution of the differential equation. General solution: The solution which contains as many arbitrary constants as the order of the differential equation is called the general solution of the differential equation, i.e. if the solution of a differential equation of order n contains n arbitrary constants, then it is the general solution. Particular solution: A solution obtained by giving particular values to the arbitrary constants in the general solution of a differential equation is called a particular solution. ## Methods of Solving First Order and First Degree Differential Equations Variable separable form: Suppose a differential equation is $\frac{dy}{dx}$ = F(x, y). Here, we separate the variables and then integrate both sides to get the general solution, i.e. the above equation may be written as $\frac{dy}{dx}$ = h(x) · k(y). Then, by separating the variables, we get $\frac{dy}{k(y)}$ = h(x) dx.
Now, integrate the above equation to get the general solution as K(y) = H(x) + C. Here, K(y) and H(x) are the anti-derivatives of $\frac{1}{k(y)}$ and h(x), respectively, and C is the arbitrary constant. Homogeneous differential equation: A differential equation $\frac { dy }{ dx } =\frac { f(x,y) }{ g(x,y) }$ is said to be homogeneous, if f(x, y) and g(x, y) are homogeneous functions of the same degree, i.e. it may be written as $\frac{dy}{dx}$ = F$\left(\frac{y}{x}\right)$ …(i) To check whether a given differential equation is homogeneous or not, we write the differential equation as $\frac { dy }{ dx }$ = F(x, y) or $\frac { dx }{ dy }$ = F(x, y) and replace x by λx and y by λy to write F(λx, λy) = λⁿ F(x, y). Here, if the power n of λ is zero, then the differential equation is homogeneous, otherwise not. Solution of homogeneous differential equation: To solve a homogeneous differential equation, we put y = vx and $\frac { dy }{ dx }$ = v + x $\frac { dv }{ dx }$ in Eq. (i) to reduce it into variable separable form. Then, solve it and lastly put v = $\frac { y }{ x }$ to get the required solution. Note: If the homogeneous differential equation is in the form $\frac { dx }{ dy }$ = F(x, y), where F(x, y) is a homogeneous function of degree zero, then we make the substitution $\frac { x }{ y }$ = v, i.e. x = vy, and we proceed further to find the general solution as mentioned above. Linear differential equation: The general form of a linear differential equation is $\frac { dy }{ dx }$ + Py = Q …(ii) where P and Q are functions of x or constants, or $\frac { dx }{ dy }$ + P’x = Q’ …(iii) where P’ and Q’ are functions of y or constants. Then, the solution of Eq. (ii) is given by the equation y × IF = ∫(Q × IF) dx + C, where IF = integrating factor and IF = e∫Pdx. Also, the solution of Eq. (iii) is given by the equation x × IF = ∫(Q’ × IF) dy + C, where IF = integrating factor and IF = e∫P’dy. We hope the given CBSE Class 12 Maths Notes Differential Equations will help you. If you have any query regarding NCERT Class 12 Maths Notes Differential Equations, drop a comment below and we will get back to you at the earliest. ## Class 12 Maths Notes Relations and Functions Inverse Trigonometric Functions Matrices Determinants Continuity and Differentiability Application of Derivatives Integrals Application of Integrals Differential Equations Vector Algebra Three Dimensional Geometry Linear Programming Probability
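A short worked example of the linear form, added here as an illustration: solve $\frac{dy}{dx} + \frac{1}{x}y = x^{2}$. Here P = $\frac{1}{x}$ and Q = $x^{2}$, so IF = e∫(1/x)dx = e^(log x) = x, and

$y \times x = \int x^{2} \cdot x\, dx + C = \frac{x^{4}}{4} + C, \quad\text{i.e.}\quad y = \frac{x^{3}}{4} + \frac{C}{x}.$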
TM STM32Fxxx HAL Libraries  v1.0.0 Libraries for STM32Fxxx (F0, F4 and F7 series) devices based on HAL drivers from ST, by Tilen Majerle. Delay library for STM32Fxxx devices - http://stm32f4-discovery.com/2015/07/hal-library-3-delay-for-stm32fxxx/. More... ## Modules TM_DELAY_Macros Library defines. TM_DELAY_Typedefs Library Typedefs. TM_DELAY_Variables Library variables. TM_DELAY_Functions Library Functions. ## Detailed Description Delay library for STM32Fxxx devices - http://stm32f4-discovery.com/2015/07/hal-library-3-delay-for-stm32fxxx/. Milliseconds delay The milliseconds delay range is implemented using SysTick interrupts, which fire every 1 ms. The interrupt handler SysTick_Handler() can be found in the project file stm32fxxx_it.c and should call HAL_IncTick(). HAL_IncTick() is built into the HAL drivers but is declared as a weak symbol, which means it can be replaced; it is replaced in the TM_DELAY library. Microseconds delay The microseconds delay range is implemented using the DWT cycle counter to get the maximum possible accuracy for delays in the 1 µs range. This delay is not supported on the STM32F0xx series, because the Cortex-M0 does not have the DWT section built in. Software timers As mentioned in the milliseconds delay section, the library keeps a SysTick timer active, which interrupts every 1 ms. This also allows you to make software timers with a resolution of 1 ms. I've added some support functions for software timers. The main idea of software timers is that when a timer reaches zero (timers are down-counters), a callback function is called in which the user can do work that should be done periodically, or only once if needed. Check the TM_DELAY_Timer_Functions group for all functions which can be used for timers. Changelog Version 1.0 - First release Dependencies - STM32Fxxx HAL - defines.h
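The DWT mechanism described above can be sketched in a few lines of C. This is an illustrative reimplementation of the idea, not the TM_DELAY source; it assumes an F4-class device with the standard CMSIS headers available:

    #include "stm32f4xx.h"   /* assumption: F4-series CMSIS device header */

    /* Enable the DWT cycle counter once at startup. */
    static void dwt_init(void)
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the trace block  */
        DWT->CYCCNT = 0;                                 /* reset the cycle counter */
        DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start counting cycles   */
    }

    /* Busy-wait for the requested number of microseconds. */
    static void delay_us(uint32_t us)
    {
        uint32_t start = DWT->CYCCNT;
        uint32_t ticks = us * (SystemCoreClock / 1000000U);
        /* Unsigned subtraction handles counter wrap-around correctly. */
        while ((DWT->CYCCNT - start) < ticks) { }
    }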
# A better way to embed Twitter quotes A few weeks ago, I described an AppleScript I invoked, via FastScripts, to help me embed tweets into my blog posts. I called it “Tweet screenshot & link,” and it 1. Brought up the screenshot crosshairs—just like pressing ⇧⌘4—so I could draw a rectangle around the portion of the tweet page I wanted to use. 2. Resized the screenshot to fit. 3. Uploaded the image to my server. 4. Grabbed the tweet URL. 5. Created a line of HTML that would display the image and have it act as a link to the original tweet page. The image’s alt parameter was set to the tweet text for both accessibility and searchability. 6. Put that HTML on the clipboard for later pasting into a blog post. It worked pretty well, and did a lot of things with little effort on my part. I’ve used it several times since, but now I have a better system based on Blackbird Pie, something that’s been around for several months, but which I wasn’t aware of until I read about the Blackbird Pie plugin for WordPress. The advantage of Blackbird Pie is that it displays the tweet as text, not an image. Links within the tweet work, too. Here’s an example: Good read: Kenneth Brower’s meditation on Freeman Dyson’s wackiness on AGW: http://bit.ly/aJUINZ. 1 poss. Brower omitted: physics hubris. 4:58 PM Tue Nov 9, 2010 Of course, instead of just using the WordPress plugin directly, the way any normal person would, I had to do some customizing. In fact, I’m not using the plugin at all; I’m using a fork of Jeff Miller’s cleverly named blackbird.py Python script/library. Like the plugin, Miller’s code generates the HTML for the embedded tweet, but it’s more general and can be used outside of WordPress. Also, it’s written in a language I feel more comfortable with, making it easier for me to customize. The changes in my fork of blackbird.py are these: • It accepts New Twitter URLs that have #! before the user name. • It has several stylistic changes. • It includes, as a separate file, an AppleScript I use as a TextExpander snippet to invoke blackbird.py. The abbreviation associated with the snippet is ;tweet, which follows my usual snippet design convention. When I want to embed a tweet, I bring it up in Safari.¹ Back in my text editor, I type ;tweet where I want the tweet to appear, and TextExpander turns that into this huge chunk of HTML: <!-- http://twitter.com/#!/TomLevenson/status/2132934534373376 --> <div class='bbpBox2132934534373376'><p class='bbpTweet'>Good read: Kenneth Brower's meditation on Freeman Dyson's wackiness on AGW: <a href="http://bit.ly/aJUINZ.">http://bit.ly/aJUINZ.</a> 1 poss. Brower omitted: physics hubris.<span class='timestamp'><a title='Tue Nov 09 22:58:25 +0000 2010' href='http://twitter.com/#!/TomLevenson/status/2132934534373376'>4:58 PM Tue Nov 9, 2010</a></span><span class='metadata'><span class='author'><a href='http://twitter.com/TomLevenson'><img src='http://a1.twimg.com/profile_images/204893725/Tom_Author_photo_normal.jpg' /></a><a href='http://twitter.com/TomLevenson'><strong>@TomLevenson</strong></a><br/><span class='realName'>Thomas Levenson</span></span></span></p></div> <!-- end of tweet --> Yes, it’s almost 2k of text, but it’s smaller than a screenshot would be, and it scales with the user’s default font size. A nice touch for embedding—due entirely to Miller; I didn’t touch this part of the code—is the use of an absolute date rather than the Twitter-standard relative date. A date like “137 days ago” isn’t really very helpful.
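For the curious, the TextExpander snippet is mostly glue. Here’s a minimal sketch of the idea—the path and the command-line interface to blackbird.py shown here are assumptions for illustration, not necessarily what’s in the repository:

    -- Grab the URL of the frontmost Safari tab.
    tell application "Safari" to set tweetURL to URL of front document
    -- Hand it to blackbird.py; TextExpander inserts whatever the script returns.
    do shell script "python ~/bin/blackbird.py " & quoted form of tweetURL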
If you like the idea of embedding tweets this way but don’t like the style choices I’ve made, you can easily fiddle with the code to make it look the way you want. All the CSS is in the TWEET_EMBED_HTML string near the top of blackbird.py. Update 11/9/10 And I should have guessed the embedded tweet would look like shit in the RSS feed. Expect some tweaking in the days to come. Update 11/10/10 The look of embedded tweets in the RSS feed is more or less fixed now. The repository has the latest code. 1. I haven’t generalized the snippet to work with Chrome yet, because I wrote it on my G4 iBook, which can’t run Chrome. As soon as I get a chance to play with it on my Intel iMac, I’ll get it working in Chrome, too.
• Methods and Techniques • ### Segmentation algorithm of annual ring image based on U-Net convolution network. NING Xiao, ZHAO Peng* 1. (Information and Computer Engineering College, Northeast Forestry University, Harbin 150040, China). • Online: 2019-05-10 Published: 2019-05-10 Abstract: Dendrochronological research uses tree age and annual-ring width to estimate environmental changes and tree growth. Thus, it is important to accurately extract characteristics such as the early wood, late wood, and bark parts in annual-ring images for further analysis. It is difficult to obtain the desired effect using traditional image segmentation algorithms due to defects such as the fuzzy interface between early and late wood, knots and pseudo-annual rings formed during growth, and the burrs and noise spots introduced on the annual-ring disc image during cutting and collection. Here, we proposed a novel approach to perform annual-ring image semantic segmentation based on a convolutional neural network. Firstly, 100 annual-ring images were marked as late wood, bark and other parts. Data enhancement was implemented through image rotation, perspective, and deformation to generate 20000 images, from which 16000 images were randomly selected as the training dataset and 4000 images were used as the test dataset. Secondly, according to the characteristics of the image dataset, an annual-ring disc image segmentation network was developed based on the U-Net convolutional network using the TensorFlow framework. Then, the training dataset was fed into the network, the training parameters were optimized, and the annual-ring image segmentation network was iteratively trained until the evaluation index and the loss function no longer changed. Finally, the test dataset was segmented using the trained model and the segmentation indicators were evaluated. Experimental results showed that the constructed model can effectively avoid the defects mentioned above and completely separate the late wood and bark parts of annual-ring images. The proposed approach was tested with a dataset consisting of 4000 tree-ring images; the mean pixel accuracy and the mean intersection over union reached 96.51% and 82.30%, respectively. This approach based on the U-Net convolutional network is a more efficient algorithm for annual-ring image segmentation, with stronger generalization ability and robustness.
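A minimal sketch of a U-Net-style network of the kind the abstract describes, in Keras/TensorFlow. The depth, filter counts, input size and three-class output (late wood / bark / other) are illustrative assumptions, not the authors' published architecture:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def conv_block(x, filters):
        # Two 3x3 convolutions, as in the original U-Net building block.
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return x

    def build_unet(input_shape=(256, 256, 3), num_classes=3):
        inputs = layers.Input(shape=input_shape)
        # Contracting path.
        c1 = conv_block(inputs, 32)
        p1 = layers.MaxPooling2D()(c1)
        c2 = conv_block(p1, 64)
        p2 = layers.MaxPooling2D()(c2)
        c3 = conv_block(p2, 128)          # bottleneck
        # Expanding path with skip connections.
        u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c3)
        c4 = conv_block(layers.Concatenate()([u2, c2]), 64)
        u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c4)
        c5 = conv_block(layers.Concatenate()([u1, c1]), 32)
        # Per-pixel class probabilities (late wood / bark / other).
        outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c5)
        return Model(inputs, outputs)

    model = build_unet()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])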
# Quick and easy newb question: backgrounds with .x

## Recommended Posts

Hi guys. I was playing around with this tutorial: http://www.gamedev.net/reference/articles/article2079.asp I was wondering how to render a background behind the .x model. I can't seem to get them both to display at the same time. If I just render the background, when I press the rotate buttons the background disappears too. Here is pseudocode of my rendering process:

- begin the scene
- clear everything
- render the background texture
- update the time and counters (for the .x model)
- update the model
- move the model*
- draw the model**

All that does is give me a black screen.

*Here is the model moving function I'm using (taken from the tutorial):

    void CModel::Move()
    {
        D3DXMatrixIdentity(&matWorld);
        D3DXMatrixIdentity(&matYWorld);
        D3DXMatrixIdentity(&matXWorld);
        D3DXMatrixIdentity(&matZWorld);
        D3DXMatrixIdentity(&matTranslate);
        D3DXMatrixIdentity(&matUp);

        //This makes the model rotate Y (rotation call lost in the original post)
        //This makes the model rotate X (rotation call lost in the original post)
        //This makes the model rotate Z (rotation call lost in the original post)

        //This moves the stuff around
        D3DXMatrixTranslation(&matTranslate, fLeft, fUp, fForward);
        matWorld = (matYWorld * matXWorld * matZWorld * matTranslate);
        m_pd3dDevice->SetTransform(D3DTS_WORLD, &matWorld);
        D3DXMatrixLookAtLH(&matView, &vEyePt, &vLookatPt, &vUpVec);
        m_pd3dDevice->SetTransform(D3DTS_VIEW, &matView);
    }

I find that if I comment out the last line (SetTransform on the matView) and don't call the draw function, I can get the background to render, but it disappears if I press the rotate key.

**Here is the model drawing function I'm using (also from the tutorial):

    void CModel::Draw()
    {
        LPMESHCONTAINER pMesh = m_pFirstMesh;

        //While there is a mesh try to draw it
        while (pMesh)
        {
            //Select the mesh to draw
            LPD3DXMESH pDrawMesh = (pMesh->pSkinInfo) ?
                pMesh->pSkinMesh : pMesh->MeshData.pMesh;

            //Draw each mesh subset with correct materials and texture
            for (DWORD i = 0; i < pMesh->NumMaterials; ++i)
            {
                m_pd3dDevice->SetMaterial(&pMesh->pMaterials9[i]);
                m_pd3dDevice->SetTexture(0, pMesh->ppTextures[i]);
                pDrawMesh->DrawSubset(i);
            }

            //Go to the next one
            pMesh = (LPMESHCONTAINER)pMesh->pNextMeshContainer;
        }
    }

The code for drawing the background is in a big helper class thing from a library I'm using (hge). I didn't write this and I don't understand most of it, but it works (I think there are other related functions, like the one that makes the quad, but I don't think they are relevant):

    void CALL HGE_Impl::Gfx_RenderQuad(const hgeQuad *quad)
    {
        // (the batching test that decides when to flush was lost in the
        //  original post; only the flush call and the counter survive)
        {
            _render_batch();
        }
        nPrim++;
    }

So... I think my process is wrong. Am I supposed to be rendering it in this order, or am I doing something completely wrong? Thanks for your help. [Edited by - Swarmer on January 30, 2007 3:49:34 PM]

##### Share on other sites

There used to be a reply here; where'd it go? Did someone delete it?

##### Share on other sites

If I was about to render a background behind a mesh, I would do something like this.
• Turn off ZWriteEnable to be safe: pd3dDevice->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
• Clear the backbuffer
• Render the background with screen-aligned quads, ID3DXSprite or IDirect3DDevice9::StretchRect
• Turn on ZWriteEnable again: pd3dDevice->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
• Turn on AlphaBlendEnable: pd3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
• Set DestBlend to zero: pd3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ZERO);
• Set SrcBlend to one: pd3dDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
• Render the mesh.

If you are using the effect framework / .fx you could set the states directly on the graphics card instead of through the application.
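Putting those states together, one frame might look like the following sketch. This is illustrative only — pd3dDevice, DrawBackgroundQuad() and the model object stand in for the poster's own code, not code from this thread:

    // One frame: background first (no depth writes), then the model.
    pd3dDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                      D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    pd3dDevice->BeginScene();

    // Background: depth writes off so the quad never occludes the model.
    pd3dDevice->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
    DrawBackgroundQuad();   // screen-aligned quad / ID3DXSprite

    // Model: restore depth writes and draw opaquely over the background.
    pd3dDevice->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
    pd3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ZERO);
    model.Move();
    model.Draw();

    pd3dDevice->EndScene();
    pd3dDevice->Present(NULL, NULL, NULL, NULL);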
# K_a (pressure, Ideal gas)

The chemical equilibrium constant K_a can therefore be expressed in terms of pressures. For the general chemical reaction aA + bB ⇌ cC + dD between mixtures of ideal gases, the partial pressure of a component is equal to its fugacity. This relationship is built upon the definition of the partial pressure of component i such that: hatf_i = y_i·P = p_i, where hatf_i = fugacity of component i, y_i = mole fraction of component i in the gas, and P = system pressure. Under this ideal-gas assumption the equilibrium constant becomes K_a = ((p_C)^c (p_D)^d) / ((p_A)^a (p_B)^b).
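As a concrete illustration (an added example, not from the original page): for ammonia synthesis,

$\mathrm{N_2}(g) + 3\,\mathrm{H_2}(g) \rightleftharpoons 2\,\mathrm{NH_3}(g) \quad\Rightarrow\quad K_a = \frac{p_{\mathrm{NH_3}}^{2}}{p_{\mathrm{N_2}}\; p_{\mathrm{H_2}}^{3}},$

with each partial pressure given by p_i = y_i·P as defined above.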
This is another installment of my series of posts on Hilbert’s fifth problem. One formulation of this problem is answered by the following theorem of Gleason and Montgomery-Zippin: Theorem 1 (Hilbert’s fifth problem) Let ${G}$ be a topological group which is locally Euclidean. Then ${G}$ is isomorphic to a Lie group. Theorem 1 is a deep and difficult result, but the discussion in the previous posts has reduced the proof of this theorem to that of establishing two simpler results, involving the concept of a group with no small subgroups (NSS), and that of a Gleason metric. We briefly recall the relevant definitions: Definition 2 (NSS) A topological group ${G}$ is said to have no small subgroups, or is NSS for short, if there is an open neighbourhood ${U}$ of the identity in ${G}$ that contains no subgroups of ${G}$ other than the trivial subgroup ${\{ \hbox{id}\}}$. Definition 3 (Gleason metric) Let ${G}$ be a topological group. A Gleason metric on ${G}$ is a left-invariant metric ${d: G \times G \rightarrow {\bf R}^+}$ which generates the topology on ${G}$ and obeys the following properties for some constant ${C>0}$, writing ${\|g\|}$ for ${d(g,\hbox{id})}$: (Escape property) one has $\displaystyle \| g^n \| \geq \frac{n}{C} \|g\| \ \ \ \ \ (1)$ whenever ${g \in G}$ and ${n \geq 1}$ are such that ${n \|g\| \leq \frac{1}{C}}$; and (Commutator estimate) one has $\displaystyle \| [g,h] \| \leq C \|g\| \|h\| \ \ \ \ \ (2)$ whenever ${g, h \in G}$ are such that ${\|g\|, \|h\| \leq \frac{1}{C}}$, where ${[g,h] := g^{-1}h^{-1}gh}$ denotes the commutator. The remaining steps in the resolution of Hilbert’s fifth problem are then as follows: Theorem 4 (Reduction to the NSS case) Let ${G}$ be a locally compact group, and let ${U}$ be an open neighbourhood of the identity in ${G}$. Then there exists an open subgroup ${G'}$ of ${G}$, and a compact subgroup ${N}$ of ${G'}$ contained in ${U}$, such that ${G'/N}$ is NSS and locally compact. Theorem 5 (Gleason’s lemma) Let ${G}$ be a locally compact NSS group. Then ${G}$ has a Gleason metric. The purpose of this post is to establish these two results, using arguments that are originally due to Gleason. We will split this task into several subtasks, each of which improves the structure on the group ${G}$ by some amount: Proposition 6 (From locally compact to metrisable) Let ${G}$ be a locally compact group, and let ${U}$ be an open neighbourhood of the identity in ${G}$. Then there exists an open subgroup ${G'}$ of ${G}$, and a compact subgroup ${N}$ of ${G'}$ contained in ${U}$, such that ${G'/N}$ is locally compact and metrisable. For any open neighbourhood ${U}$ of the identity in ${G}$, let ${Q(U)}$ be the union of all the subgroups of ${G}$ that are contained in ${U}$. (Thus, for instance, ${G}$ is NSS if and only if ${Q(U)}$ is trivial for all sufficiently small ${U}$.) Proposition 7 (From metrisable to subgroup trapping) Let ${G}$ be a locally compact metrisable group. Then ${G}$ has the subgroup trapping property: for every open neighbourhood ${U}$ of the identity, there exists another open neighbourhood ${V}$ of the identity such that ${Q(V)}$ generates a subgroup ${\langle Q(V) \rangle}$ contained in ${U}$. Proposition 8 (From subgroup trapping to NSS) Let ${G}$ be a locally compact group with the subgroup trapping property, and let ${U}$ be an open neighbourhood of the identity in ${G}$. Then there exists an open subgroup ${G'}$ of ${G}$, and a compact subgroup ${N}$ of ${G'}$ contained in ${U}$, such that ${G'/N}$ is locally compact and NSS. Proposition 9 (From NSS to the escape property) Let ${G}$ be a locally compact NSS group. Then there exists a left-invariant metric ${d}$ on ${G}$ generating the topology on ${G}$ which obeys the escape property (1) for some constant ${C}$.
Proposition 10 (From escape to the commutator estimate) Let ${G}$ be a locally compact group with a left-invariant metric ${d}$ that obeys the escape property (1). Then ${d}$ also obeys the commutator property (2). It is clear that Propositions 6, 7, and 8 combine to give Theorem 4, and Propositions 9, 10 combine to give Theorem 5. Propositions 6–10 are all proven separately, but their proofs share some common strategies and ideas. The first main idea is to construct metrics on a locally compact group ${G}$ by starting with a suitable “bump function” ${\phi \in C_c(G)}$ (i.e. a continuous, compactly supported function from ${G}$ to ${{\bf R}}$) and pulling back the metric structure on ${C_c(G)}$ by using the translation action ${\tau_g \phi(x) := \phi(g^{-1} x)}$, thus creating a (semi-)metric $\displaystyle d_\phi( g, h ) := \| \tau_g \phi - \tau_h \phi \|_{C_c(G)} := \sup_{x \in G} |\phi(g^{-1} x) - \phi(h^{-1} x)|. \ \ \ \ \ (3)$ One easily verifies that this is indeed a (semi-)metric (in that it is non-negative, symmetric, and obeys the triangle inequality); it is also left-invariant, and so we have ${d_\phi(g,h) = \|g^{-1} h \|_\phi = \| h^{-1} g \|_\phi}$, where $\displaystyle \| g \|_\phi = d_\phi(g,\hbox{id}) = \| \partial_g \phi \|_{C_c(G)}$ where ${\partial_g}$ is the difference operator ${\partial_g = 1 - \tau_g}$, $\displaystyle \partial_g \phi(x) = \phi(x) - \phi(g^{-1} x).$ This construction was already seen in the proof of the Birkhoff-Kakutani theorem, which is the main tool used to establish Proposition 6. For the other propositions, the idea is to choose a bump function ${\phi}$ that is “smooth” enough that it creates a metric with good properties such as the commutator estimate (2). Roughly speaking, to get a bound of the form (2), one needs ${\phi}$ to have “${C^{1,1}}$ regularity” with respect to the “right” smooth structure on ${G}$. By ${C^{1,1}}$ regularity, we mean here something like a bound of the form $\displaystyle \| \partial_g \partial_h \phi \|_{C_c(G)} \ll \|g\|_\phi \|h\|_\phi \ \ \ \ \ (4)$ for all ${g,h \in G}$. Here we use the usual asymptotic notation, writing ${X \ll Y}$ or ${X=O(Y)}$ if ${X \leq CY}$ for some constant ${C}$ (which can vary from line to line). The following lemma illustrates how ${C^{1,1}}$ regularity can be used to build Gleason metrics. Lemma 11 Suppose that ${\phi \in C_c(G)}$ obeys (4). Then the (semi-)metric ${d_\phi}$ (and associated (semi-)norm ${\|\|_\phi}$) obey the escape property (1) and the commutator property (2). Proof: We begin with the commutator property (2). Observe the identity $\displaystyle \tau_{[g,h]} = \tau_{hg}^{-1} \tau_{gh}$ whence $\displaystyle \partial_{[g,h]} = \tau_{hg}^{-1} ( \tau_{hg} - \tau_{gh} )$ $\displaystyle = \tau_{hg}^{-1} ( \partial_h \partial_g - \partial_g \partial_h ).$ From the triangle inequality (and translation-invariance of the ${C_c(G)}$ norm) we thus see that (2) follows from (4). Similarly, to obtain the escape property (1), observe the telescoping identity $\displaystyle \partial_{g^n} = n \partial_g - \sum_{i=0}^{n-1} \partial_g \partial_{g^i}$ for any ${g \in G}$ and natural number ${n}$ (as may be checked by expanding both sides, using the fact that ${\partial_g}$ and ${\partial_{g^i}}$ commute), and thus by the triangle inequality $\displaystyle \| g^n \|_\phi = n \| g \|_\phi + O( \sum_{i=0}^{n-1} \| \partial_g \partial_{g^i} \phi \|_{C_c(G)} ). \ \ \ \ \ (5)$ But from (4) (and the triangle inequality) we have $\displaystyle \| \partial_g \partial_{g^i} \phi \|_{C_c(G)} \ll \|g\|_\phi \|g^i \|_\phi \ll i \|g\|_\phi^2$ and thus we have the “Taylor expansion” $\displaystyle \|g^n\|_\phi = n \|g\|_\phi + O( n^2 \|g\|_\phi^2 )$ which gives (1). $\Box$ It remains to obtain ${\phi}$ that have the desired ${C^{1,1}}$ regularity property. In order to get such regular bump functions, we will use the trick of convolving together two lower regularity bump functions (such as two functions with “${C^{0,1}}$ regularity” in some sense to be determined later). In order to perform this convolution, we will use the fundamental tool of (left-invariant) Haar measure ${\mu}$ on the locally compact group ${G}$. Here we exploit the basic fact that the convolution $\displaystyle f_1 * f_2(x) := \int_G f_1(y) f_2(y^{-1} x)\ d\mu(y) \ \ \ \ \ (6)$ of two functions ${f_1,f_2 \in C_c(G)}$ tends to be smoother than either of the two factors ${f_1,f_2}$. This is easiest to see in the abelian case, since in this case we can distribute derivatives according to the law $\displaystyle \partial_g (f_1 * f_2) = (\partial_g f_1) * f_2 = f_1 * (\partial_g f_2),$ which suggests that the order of “differentiability” of ${f_1*f_2}$ should be the sum of the orders of ${f_1}$ and ${f_2}$ separately. These ideas are already sufficient to establish Proposition 10 directly, and also Proposition 9 when combined with an additional bootstrap argument. The proofs of Proposition 7 and Proposition 8 use similar techniques, but are more difficult due to the potential presence of small subgroups, which require an application of the Peter-Weyl theorem to properly control. Both of these theorems will be proven below the fold, thus (when combined with the preceding posts) completing the proof of Theorem 1. The presentation here is based on some unpublished notes of van den Dries and Goldbring on Hilbert’s fifth problem. I am indebted to Emmanuel Breuillard, Ben Green, and Tom Sanders for many discussions related to these arguments. — 1. From escape to the commutator estimate — The general strategy here is to keep applying the Gleason strategy: use the regularity one already has on the group ${G}$ to build good bump functions ${\phi}$, which create metrics that give even more regularity on ${G}$. As with many such “bootstrap” arguments, the deepest and most difficult steps are the earliest ones, in which one has very little regularity to begin with; conversely, the easiest and most straightforward steps tend to be the final ones, when one already has most of the regularity that one needs, thus having plenty of structure and tools available to climb the next rung of the regularity ladder. (For instance, to get from ${C^{1,1}}$ regularity of a topological group to ${C^\infty}$ or real analytic regularity is relatively routine, with two different such approaches indicated in the preceding blog posts.) In particular, the easiest task to accomplish will be that of Proposition 10, which establishes the commutator estimate (2) once the rest of the structural control on the group ${G}$ is in place. We now prove this proposition. As indicated in the introduction, the key idea here is to involve a bump function ${\phi}$ formed by convolving together two Lipschitz functions.
The escape property (1) will be crucial in obtaining quantitative control of the metric geometry at very small scales, as one can study the size of a group element ${g}$ very close to the origin through its powers ${g^n}$, which are further away from the origin. Specifically, let ${\epsilon > 0}$ be a small quantity to be chosen later, and let ${\psi \in C_c(G)}$ be a non-negative Lipschitz function supported on the ball ${B(0,\epsilon)}$ which is not identically zero. For instance, one could use the explicit function $\displaystyle \psi(x) := (1 - \frac{\|x\|}{\epsilon})_+$ where ${y_+ := \max(y,0)}$. Being Lipschitz, we see that $\displaystyle \| \partial_g \psi \|_{C_c(G)} \ll \|g\| \ \ \ \ \ (7)$ for all ${g \in G}$ (where we allow implied constants to depend on ${G}$, ${\epsilon}$, and ${\psi}$). Let ${\mu}$ be a non-trivial left-invariant Haar measure on ${G}$ (see for instance this previous blog post for a construction of Haar measure on locally compact groups). We then form the convolution ${\phi := \psi * \psi}$, with convolution defined using (6); this is a continuous function supported in ${B(0,2\epsilon)}$, and gives a metric ${d_\phi}$ and a norm ${\| \|_\phi}$. We now prove a variant of (4), namely that $\displaystyle \| \partial_g \partial_h \phi \|_{C_c(G)} \ll \|g\| \| h \| \ \ \ \ \ (8)$ whenever ${g, h \in B(0,\epsilon)}$. We first use the left-invariance of Haar measure to write $\displaystyle \partial_h \phi = (\partial_h \psi) * \psi, \ \ \ \ \ (9)$ thus $\displaystyle \partial_h \phi(x) = \int_G (\partial_h \psi)(y) \psi(y^{-1} x)\ d\mu(y).$ We would like to similarly move the ${\partial_g}$ operator over to the second factor, but we run into a difficulty due to the non-abelian nature of ${G}$. Nevertheless, we can still do this provided that we twist that operator by a conjugation. More precisely, we have $\displaystyle \partial_g \partial_h \phi(x) = \int_G (\partial_h \psi)(y) (\partial_{g^y} \psi)(y^{-1} x)\ d\mu(y) \ \ \ \ \ (10)$ where ${g^y := y^{-1} g y}$ is ${g}$ conjugated by ${y}$. If ${h \in B(0,\epsilon)}$, the integrand is only non-zero when ${y \in B(0,2\epsilon)}$. Applying (7), we obtain the bound $\displaystyle \| \partial_g \partial_h \phi \|_{C_c(g)} \ll \|h\| \sup_{y \in B(0,2\epsilon)} \|g^y\|.$ To finish the proof of (8), it suffices to show that $\displaystyle \|g^y\| \ll \|g\|$ whenever ${g \in B(0,\epsilon)}$ and ${y \in B(0,2\epsilon)}$. We can achieve this by the escape property (1). Let ${n}$ be a natural number such that ${n \|g\| \leq \epsilon}$, then ${\|g^n\| \leq \epsilon}$ and so ${g^n \in B(0,\epsilon)}$. Conjugating by ${y}$, this implies that ${(g^y)^n \in B(0,5\epsilon)}$, and so by (1), we have ${\|g^y\| \ll \frac{1}{n}}$ (if ${\epsilon}$ is small enough), and the claim follows. Next, we claim that the norm ${\| \|_\phi}$ is locally comparable to the original norm ${\| \|}$. More precisely, we claim: 1. If ${g \in G}$ with ${\| g \|_\phi}$ sufficiently small, then ${\| g \| \ll \| g\|_\phi}$. 2. If ${g \in G}$ with ${\| g \|}$ sufficiently small, then ${\|g\|_\phi \ll \|g\|}$. Claim 2 follows easily from (9) and (7), so we turn to Claim 1. Let ${g \in G}$, and let ${n}$ be a natural number such that $\displaystyle n \|g\|_\phi < \| \phi \|_{C_c(G)}.$ Then by the triangle inequality $\displaystyle \|g^n \|_\phi < \|\phi \|_{C_c(G)}.$ This implies that ${\phi}$ and ${\tau_{g^n} \phi}$ have overlapping support, and hence ${g^n}$ lies in ${B(0,4\epsilon)}$. 
By the escape property (1), this implies (if ${\epsilon}$ is small enough) that ${\|g\| \ll \frac{1}{n}}$, and the claim follows. Combining Claim 2 with (8) we see that $\displaystyle \| \partial_g \partial_h \phi \|_{C_c(G)} \ll \|g\|_\phi \| h \|_\phi$ whenever ${\|g\|_\phi, \|h\|_\phi}$ are small enough; arguing as in the proof of Lemma 11 we conclude that $\displaystyle \| [g,h] \|_\phi \ll \|g\|_\phi \|h\|_\phi$ whenever ${\|g\|_\phi, \|h\|_\phi}$ are small enough. Proposition 10 then follows from Claim 1 and Claim 2. — 2. From NSS to the escape property — Now we turn to establishing Proposition 9. An important concept will be that of an escape norm associated to an open neighbourhood ${U}$ of the identity in a group ${G}$, defined by the formula $\displaystyle \|g\|_{e,U} := \inf \{ \frac{1}{n+1}: g, g^2, \ldots, g^n \in U \} \ \ \ \ \ (11)$ for any ${g \in G}$. Thus, the longer it takes for the orbit ${g, g^2, \ldots}$ to escape ${U}$, the smaller the escape norm. Strictly speaking, the escape norm is not necessarily a norm, as it need not obey the symmetry, non-degeneracy, or triangle inequalities; however, we shall see that in many situations, the escape norm behaves similarly to a norm, even if it does not exactly obey the norm axioms. Also, as the name suggests, the escape norm will be well suited for establishing the escape property (1). It is possible for the escape norm ${\|g\|_{e,U}}$ of a non-identity element ${g \in G}$ to be zero, if ${U}$ contains the group ${\langle g \rangle}$ generated by ${g}$. But if the group ${G}$ has the NSS property, then we see that this cannot occur for all sufficiently small ${U}$ (where “sufficiently small” means “contained in a suitably chosen open neighbourhood ${U_0}$ of the identity”). In fact, more is true: if ${U, U'}$ are two sufficiently small open neighbourhoods of the identity in a locally compact NSS group ${G}$, then the two escape norms are comparable, thus we have $\displaystyle \|g \|_{e,U} \ll \|g\|_{e,U'} \ll \|g\|_{e,U} \ \ \ \ \ (12)$ for all ${g \in G}$ (where the implied constants can depend on ${U, U'}$). By symmetry, it suffices to prove the second inequality in (12). By (11), it suffices to find an integer ${m}$ such that whenever ${g \in G}$ is such that ${g, g^2, \ldots, g^m \in U}$, then ${g \in U'}$. Equivalently: for every ${g \not \in U'}$, one has ${g^i \not \in U}$ for some ${1 \leq i \leq m}$. If ${U}$ is small enough, then by the NSS property, we know that for each ${g \in \overline{U} \backslash U'}$, we have ${g^i \not \in U}$ for some ${i \geq 1}$. As ${G}$ is locally compact, we can make ${\overline{U}}$ and hence ${\overline{U} \backslash U'}$ compact, and so we can make ${i}$ uniformly bounded in ${g}$ by a compactness argument, and the claim follows. Exercise 1 Let ${G}$ be a locally compact group. Show that if ${d}$ is a left-invariant metric on ${G}$ obeying the escape property (1) that generates the topology, then ${G}$ is NSS, and ${\| g\|}$ is comparable to ${\|g\|_{e,U}}$ for all sufficiently small ${U}$. (In particular, any two left-invariant metrics obeying the escape property and generating the topology are comparable to each other.) Henceforth ${G}$ is a locally compact NSS group. Proposition 12 (Approximate triangle inequality) Let ${U_0}$ be a sufficiently small open neighbourhood of the identity. Then for any ${n}$ and any ${g_1,\ldots,g_n \in G}$, one has $\displaystyle \| g_1 \ldots g_n \|_{e,U_0} \ll \sum_{i=1}^n \|g_i\|_{e,U_0}$ (where the implied constant can depend on ${U_0}$).
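As a sanity check on the definition (11), here is a simple worked example (ours, not from the original text): take ${G = ({\bf R},+)}$ and ${U = (-1,1)}$, so that ${g^k = kg}$. Then $\displaystyle \|g\|_{e,U} = \inf \{ \frac{1}{n+1}: |g|, 2|g|, \ldots, n|g| < 1 \} = \frac{1}{\lceil 1/|g| \rceil} \asymp |g|$ for ${0 < |g| < 1}$, while ${\|g\|_{e,U} = 1}$ for ${|g| \geq 1}$; so in this case the escape norm is comparable to the usual norm, consistent with Exercise 1.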
Of course, in view of (12), the exact choice of ${U_0}$ is irrelevant, so long as it is small. It is slightly convenient to take ${U_0}$ to be symmetric (thus ${U_0 = U_0^{-1}}$), so that ${\|g\|_{e,U_0} = \|g^{-1}\|_{e,U_0}}$ for all ${g}$. Proof: We will use a bootstrap argument. Assume to start with that we somehow already have a weaker form of the conclusion, namely $\displaystyle \| g_1 \ldots g_n \|_{e,U_0} \leq M \sum_{i=1}^n \|g_i\|_{e,U_0} \ \ \ \ \ (13)$ for all ${n,g_1,\ldots,g_n}$ and some huge constant ${M}$, and deduce the same estimate with a smaller value of ${M}$. Afterwards we will show how to remove the hypothesis (13). Now suppose we have (13) for some ${M}$. Motivated by the argument in the previous section, we now try to convolve together two “Lipschitz” functions. For this, we will need some metric-like functions. Define the modified escape norm ${\|g\|_{*,U_0}}$ by the formula $\displaystyle \|g\|_{*,U_0} := \inf \{ \sum_{i=1}^n \|g_i\|_{e,U_0}: g = g_1 \ldots g_n \}$ where the infimum is over all possible ways to split ${g}$ as a finite product of group elements. From (13), we have $\displaystyle \frac{1}{M}\|g\|_{e,U_0} \leq \|g\|_{*,U_0} \leq \|g\|_{e,U_0} \ \ \ \ \ (14)$ and we have the triangle inequality $\displaystyle \|gh\|_{*,U_0} \leq \|g\|_{*,U_0} + \|h\|_{*,U_0}$ for any ${g,h \in G}$. We also have the symmetry property ${\|g\|_{*,U_0} = \|g^{-1} \|_{*,U_0}}$. Thus ${\| \|_{*,U_0}}$ gives a left-invariant semi-metric on ${G}$ by defining $\displaystyle \hbox{dist}_{*,U_0}(g,h) := \|g^{-1} h \|_{*,U_0}.$ We can now define a “Lipschitz” function ${\psi: G \rightarrow {\bf R}}$ by setting $\displaystyle \psi(x) := (1 - M \hbox{dist}_{*,U_0}(x, U_0))_+.$ On the one hand, we see from (14) that this function takes values in ${[0,1]}$ and obeys the Lipschitz bound $\displaystyle |\partial_g \psi(x)| \leq M \|g\|_{e,U_0} \ \ \ \ \ (15)$ for any ${g, x \in G}$. On the other hand, it is supported in the region where ${\hbox{dist}_{*,U_0}(x,U_0) \leq 1/M}$, which by (14) (and (11)) is contained in ${U_0^2}$. We could convolve ${\psi}$ with itself in analogy to the preceding section, but in doing so, we will eventually end up establishing a much worse estimate than (13) (in which the constant ${M}$ is replaced with something like ${M^2}$). Instead, we will need to convolve ${\psi}$ with another function ${\eta}$, that we define as follows. We will need a large natural number ${L}$ (independent of ${M}$) to be chosen later, and then a small open neighbourhood ${U_1 \subset U_0}$ of the identity (depending on ${L, U_0}$), also to be chosen later. We then let ${\eta: G \rightarrow {\bf R}}$ be the function $\displaystyle \eta(x) := \sup \{ 1 - \frac{j}{L}: x \in U_1^j U_0; j = 0,\ldots,L \} \cup \{0\}.$ Similarly to ${\psi}$, we see that ${\eta}$ takes values in ${[0,1]}$ and obeys the Lipschitz-type bound $\displaystyle |\partial_g \eta(x)| \leq \frac{1}{L} \ \ \ \ \ (16)$ for all ${g \in U_1}$ and ${x \in G}$. Also, ${\eta}$ is supported in ${U_1^L U_0}$, and hence (if ${U_1}$ is sufficiently small depending on ${L,U_0}$) is supported in ${U_0^2}$, just as ${\psi}$ is. The functions ${\psi, \eta}$ need not be continuous, but they are compactly supported, bounded, and Borel measurable, and so one can still form their convolution ${\phi := \psi * \eta}$, which will then be continuous and compactly supported; indeed, ${\phi}$ is supported in ${U_0^4}$.
We have a lower bound on how big ${\phi}$ is, since $\displaystyle \phi(0) \geq \mu(U_0) \gg 1$ (where we allow implied constants to depend on ${\mu, U_0}$, but remain independent of ${L}$, ${U_1}$, or ${M}$). This gives us a way to compare ${\| \|_{\phi}}$ with ${\| \|_{e,U_0}}$. Indeed, if ${n \|g\|_{\phi} < \phi(0)}$, then (as in the proof of Claim 1 in the previous section) we have ${g^n \in U_0^8}$; this implies that $\displaystyle \| g \|_{e,U_0^8} \ll \| g \|_{\phi}$ for all ${g \in G}$, and hence by (12) we have $\displaystyle \| g \|_{e,U_0} \ll \| g \|_{\phi} \ \ \ \ \ (17)$ also. In the converse direction, we have $\displaystyle \|g\|_\phi = \| \partial_g (\psi * \eta) \|_{C_c(G)}$ $\displaystyle = \| (\partial_g \psi) * \eta \|_{C_c(G)}$ $\displaystyle \ll M \|g\|_{e,U_0} \ \ \ \ \ (18)$ thanks to (15). But we can do better than this, as follows. For any ${g, h \in G}$, we have the analogue of (10), namely $\displaystyle \partial_g \partial_h \phi(x) = \int_G (\partial_h \psi)(y) (\partial_{g^y} \eta)(y^{-1} x)\ d\mu(y)$ If ${h \in U_0}$, then the integrand vanishes unless ${y \in U_0^3}$. By continuity, we can find a small open neighbourhood ${U_2 \subset U_1}$ of the identity such that ${g^y \in U_1}$ for all ${g \in U_2}$ and ${y \in U_0^3}$; we conclude from (15), (16) that $\displaystyle |\partial_g \partial_h \phi(x)| \ll \frac{M}{L} \|h\|_{e,U_0}.$ whenever ${h \in U_0}$ and ${g \in U_2}$. To use this, we apply (5) and conclude that $\displaystyle \|g^n\|_\phi = n \|g\|_\phi + O( n \frac{M}{L} \|g\|_{e,U_0} )$ whenever ${n \geq 1}$ and ${g,\ldots,g^n \in U_2}$. Using the trivial bound ${\|g^n\|_\phi = O(1)}$, we then have $\displaystyle \|g\|_\phi \ll \frac{1}{n} + \frac{M}{L} \|g\|_{e,U_0};$ optimising in ${n}$ we obtain $\displaystyle \|g\|_\phi \ll \|g\|_{e,U_2} + \frac{M}{L} \|g\|_{e,U_0}$ and hence by (12) $\displaystyle \|g\|_\phi \ll (\frac{M}{L} + O_{U_2}(1)) \|g\|_{e,U_0}$ where the implied constant in ${O_{U_2}(1)}$ can depend on ${U_0,U_1,U_2, L}$, but is crucially independent of ${M}$. Note the essential gain of ${\frac{1}{L}}$ here compared with (18). We also have the norm inequality $\displaystyle \|g_1 \ldots g_n \|_\phi \leq \sum_{i=1}^n \|g_i\|_\phi.$ Combining these inequalities with (17) we see that $\displaystyle \| g_1 \ldots g_n \|_{e,U_0} \ll (\frac{1}{L} M + O_{U_2}(1)) \sum_{i=1}^n \|g_i\|_{e,U_0}.$ Thus we have improved the constant ${M}$ in the hypothesis (13) to ${O( \frac{1}{L} M ) + O_{U_2}(1)}$. Choosing ${L}$ large enough and iterating, we conclude that we can bootstrap any finite constant ${M}$ in (13) to ${O(1)}$. Of course, there is no reason why there has to be a finite ${M}$ for which (13) holds in the first place. However, one can rectify this by the usual trick of creating an epsilon of room. Namely, one replaces the escape norm ${\| g \|_{e,U_0}}$ by, say, ${\|g\|_{e,U_0}+\epsilon}$ for some small ${\epsilon > 0}$ in the definition of ${\| \|_{*,U_0}}$ and in the hypothesis (13). Then the bound (13) will be automatic with a finite ${M}$ (of size about ${O(1/\epsilon)}$). One can then run the above argument with the requisite changes and conclude a bound of the form $\displaystyle \| g_1 \ldots g_n \|_{e,U_0} \ll \sum_{i=1}^n (\|g_i\|_{e,U_0}+\epsilon)$ uniformly in ${\epsilon}$; we omit the details. Sending ${\epsilon \rightarrow 0}$, we have thus shown Proposition 12. $\Box$ Now we can finish the proof of Proposition 9. Let ${G}$ be a locally compact NSS group, and let ${U_0}$ be a sufficiently small neighbourhood of the identity. 
From Proposition 12, we see that the escape norm ${\| \|_{e,U_0}}$ and the modified escape norm ${\| \|_{*,U_0}}$ are comparable. We have seen that ${d_{*,U_0}}$ is a left-invariant semi-metric. As ${G}$ is NSS and ${U_0}$ is small, there are no non-identity elements with zero escape norm, and hence no non-identity elements with zero modified escape norm either; thus ${d_{*,U_0}}$ is a genuine metric. We now claim that ${d_{*,U_0}}$ generates the topology of ${G}$. Given the left-invariance of ${d_{*,U_0}}$, it suffices to establish two things: firstly, that any open neighbourhood of the identity contains a ball around the identity in the ${d_{*,U_0}}$ metric; and conversely, any such ball contains an open neighbourhood around the identity. To prove the first claim, let ${U}$ be an open neighbourhood around the identity, and let ${U' \subset U}$ be a smaller neighbourhood of the identity. From (12) we see (if ${U'}$ is small enough) that ${\| \|_{*,U_0}}$ is comparable to ${\| \|_{e,U'}}$, and ${U'}$ contains a small ball around the origin in the ${d_{*,U_0}}$ metric, giving the claim. To prove the second claim, consider a ball ${B(0,r)}$ in the ${d_{*,U_0}}$ metric. For any positive integer ${m}$, we can find an open neighbourhood ${U_m}$ of the identity such that ${U_m^m \subset U_0}$, and hence ${\|g\|_{e,U_0} \leq \frac{1}{m}}$ for all ${g \in U_m}$. For ${m}$ large enough, this implies that ${U_m \subset B(0,r)}$, and the claim follows. To finish the proof of Proposition 9, we need to verify the escape property (1). Thus, we need to show that if ${g \in G}$, ${n \geq 1}$ are such that ${n \|g\|_{*,U_0}}$ is sufficiently small, then we have ${\|g^n\|_{*,U_0} \gg n \|g\|_{*,U_0}}$. We may of course assume that ${g}$ is not the identity, as the claim is trivial otherwise. As ${\|\|_{*,U_0}}$ is comparable to ${\| \|_{e,U_0}}$, we know that there exists a natural number ${m \ll 1 / \| g \|_{*,U_0}}$ such that ${g^m \not \in U_0}$. Let ${U_1}$ be a neighbourhood of the identity small enough that ${U_1^2 \subset U_0}$. We have ${\|g^i\|_{*,U_0} \leq n \|g\|_{*,U_0}}$ for all ${i=1,\ldots,n}$, so ${g^i \in U_1}$ and hence ${m > n}$. Let ${m+i}$ be the first multiple of ${n}$ larger than ${m}$, then ${i \leq n}$ and so ${g^i \in U_1}$. Since ${g^m \not \in U_0}$, this implies ${g^{m+i} \not \in U_1}$. Since ${m+i}$ is divisible by ${n}$, we conclude that ${\| g^n \|_{e,U_1} \geq \frac{n}{m+i} \gg n \| g \|_{*,U_0}}$, and the claim follows from (12). — 3. From subgroup trapping to NSS — We now turn to the task of proving Proposition 8. Intuitively, the idea is to use the subgroup trapping property to find a small compact normal subgroup ${N}$ that contains ${Q(V)}$ for some small ${V}$, and then quotient this group out to get an NSS group. Unfortunately, because ${N}$ is not necessarily contained in ${V}$, this quotienting operation may create some additional small subgroups. To fix this, we need to pass from the compact subgroup ${N}$ to a smaller one. In order to understand the subgroups of compact groups, the main tool will be the Peter-Weyl theorem. Actually, we will just need the following weak version of that theorem: Theorem 13 (Weak Peter-Weyl theorem) Let ${G}$ be a compact group, and let ${U}$ be a neighbourhood of the identity in ${G}$. Then there exists a finite-dimensional real linear representation ${\rho: G \rightarrow GL(V)}$ of ${G}$ (i.e.
a continuous homomorphism from ${G}$ to the general linear group ${GL(V)}$ of a finite-dimensional real vector space ${V}$) whose kernel ${\hbox{ker}(\rho)}$ lies in ${U}$. Equivalently, there exists a compact normal subgroup ${H}$ of ${G}$ contained in ${U}$ such that ${G/H}$ is isomorphic to a compact subgroup of ${GL(V)}$. Proof: As ${G}$ is compact, it has a Haar probability measure ${\mu}$. Let ${W}$ be a symmetric open neighbourhood of the identity such that ${W^2 \subset U}$. The convolution operator ${T: L^2(G) \rightarrow L^2(G)}$ given by ${Tf := f * 1_W}$ is a self-adjoint integral operator on a probability space with bounded measurable kernel and is thus compact (indeed, it is a Hilbert-Schmidt integral operator). By the spectral theorem, ${L^2(G)}$ then decomposes as the orthogonal sum of the eigenspaces of ${T}$, with all the eigenspaces ${V_\lambda}$ corresponding to non-zero eigenvalues ${\lambda}$ being finite-dimensional. Note that ${T}$ commutes with the left translation operators ${\tau_g}$ for every ${g \in G}$, so all of the eigenspaces ${V_\lambda}$ are invariant with respect to this action, and so we have finite-dimensional linear representations ${\rho_\lambda: G \rightarrow GL(V_\lambda)}$ for each non-zero eigenvalue ${\lambda}$. Let ${g \in G \backslash U}$; then ${\tau_g T 1_W \neq T 1_W}$ (for instance, ${T 1_W(0) = \mu(W) > 0}$, while ${\tau_g T 1_W(0) = \mu(W \cap g^{-1} W) = 0}$, since otherwise ${g}$ would lie in ${W^2 \subset U}$). The function ${T1_W}$ lies in the direct sum of the ${V_\lambda}$ with ${\lambda}$ non-zero, and so there must exist at least one ${V_\lambda}$ such that the projections of ${T1_W}$ and ${\tau_g T 1_W}$ to ${V_\lambda}$ are distinct. We conclude that ${\rho_\lambda(g)}$ is non-trivial for this ${\lambda}$ and ${g}$; by continuity, the same is true for all ${g'}$ in an open neighbourhood of ${g}$. By compactness of ${G \backslash U}$, we may thus find a finite number ${\lambda_1,\ldots,\lambda_k}$ of non-zero eigenvalues such that for each ${g \in G \backslash U}$, ${\rho_{\lambda_i}(g)}$ is non-trivial for at least one ${i=1,\ldots,k}$. The representation ${\rho := \rho_{\lambda_1} \oplus \ldots \oplus \rho_{\lambda_k}}$ can then be seen to have all the required properties. $\Box$ For us, the main reason why we need the Peter-Weyl theorem is that the linear groups ${GL(V)}$ automatically have the NSS property, even though ${G}$ need not. Thus, one can view Theorem 13 as giving the compact case of Theorem 4. We now prove Proposition 8, using an argument of Yamabe. Let ${G}$ be a locally compact group with the subgroup trapping property, and let ${U}$ be an open neighbourhood of the identity. We may find a smaller neighbourhood ${U_1}$ of the identity with ${U_1^2 \subset U}$, which in particular implies that ${\overline{U_1} \subset U}$; by shrinking ${U_1}$ if necessary, we may assume that ${\overline{U_1}}$ is compact. By the subgroup trapping property, one can find an open neighbourhood ${U_2}$ of the identity such that ${\langle Q(U_2) \rangle}$ is contained in ${U_1}$, and thus ${H := \overline{\langle Q(U_2) \rangle}}$ is a compact subgroup of ${G}$ contained in ${\overline{U_1} \subset U}$. By shrinking ${U_2}$ if necessary we may assume ${U_2 \subset U_1}$. Ideally, if ${H}$ were normal and contained in ${U_2}$, then the quotient group ${G/H}$ would have the NSS property. Unfortunately ${H}$ need not be normal, and need not be contained in ${U_2}$, but we can fix this as follows. Applying Theorem 13, we can find a compact normal subgroup ${N}$ of ${H}$ contained in ${U_2 \cap H}$ such that ${H/N}$ is isomorphic to a linear group, and in particular is NSS.
In particular, we can find an open symmetric neighbourhood ${U_3}$ of the identity in ${G}$ such that ${U_3 N U_3 \subset U_2}$ and such that the image ${\pi(U_3 N U_3 \cap H)}$ contains no non-trivial subgroups of ${H/N}$, where ${\pi: H \rightarrow H/N}$ is the quotient map. We now claim that ${N}$ is normalised by ${U_3}$. Indeed, if ${g \in U_3}$, then the conjugate ${N^g := g^{-1} N g}$ of ${N}$ is contained in ${U_3 N U_3}$ and hence in ${U_2}$. As ${N^g}$ is a group, it must thus be contained in ${Q(U_2)}$ and hence in ${H}$. But then ${\pi(N^g)}$ is a subgroup of ${H/N}$ that is contained in ${\pi(U_3 N U_3 \cap H)}$, and is hence trivial by construction. Thus ${N^g \subset N}$, and so ${N}$ is normalised by ${U_3}$. If we then let ${G'}$ be the subgroup of ${G}$ generated by ${N}$ and ${U_3}$, we see that ${G'}$ is an open subgroup of ${G}$, with ${N}$ a compact normal subgroup of ${G'}$. To finish the job, we need to show that ${G'/N}$ has the NSS property. It suffices to show that ${U_3 N U_3 / N}$ has no nontrivial subgroups. But any subgroup in ${U_3 N U_3 / N}$ pulls back to a subgroup in ${U_3 N U_3}$, hence in ${U_2}$, hence in ${Q(U_2)}$, hence in ${H}$; since ${(U_3 N U_3 \cap H)/N}$ has no nontrivial subgroups, the claim follows. — 4. From metrisable to subgroup trapping — We now perform the most difficult step, which is to establish Proposition 7. This step will require both the weak Peter-Weyl theorem (Theorem 13) and the Gleason technology, as well as some of the basic theory of Hausdorff distance; as such, this is perhaps the most “infinitary” of all the steps in the argument. The Gleason-type arguments can be encapsulated in the following proposition, which is a weak version of the subgroup trapping property: Proposition 14 (Finite trapping) Let ${G}$ be a locally compact group, let ${U}$ be an open neighbourhood of the identity, and let ${m \geq 1}$ be an integer. Then there exists an open neighbourhood ${V}$ of the identity with the following property: if ${Q \subset Q[V]}$ is a symmetric set containing the identity, and ${n \geq 1}$ is such that ${Q^n \subset U}$, then ${Q^{mn} \subset U^8}$. Informally, Proposition 14 asserts that subsets of ${Q[V]}$ grow much more slowly than “large” sets such as ${U}$. We remark that if one could replace ${U^8}$ in the conclusion here by ${U}$, then a simple induction on ${n}$ (after first shrinking ${V}$ to lie in ${U}$) would give Proposition 7. It is the loss of ${8}$ in the exponent that necessitates some non-trivial additional arguments. Proof: Let ${V}$ be small enough to be chosen later, and let ${Q, n}$ be as in the proposition. Once again we will convolve together two “Lipschitz” functions ${\psi, \eta}$ to obtain a good bump function ${\phi = \psi*\eta}$ which generates a useful metric for analysing the situation. The first bump function ${\psi: G \rightarrow {\bf R}}$ will be defined by the formula $\displaystyle \psi(x) := \sup \{ 1 - \frac{j}{n}: x \in Q^j U; j = 0,\ldots,n \} \cup \{0\}.$ Then ${\psi}$ takes values in ${[0,1]}$, equals ${1}$ on ${U}$, is supported in ${U^2}$, and obeys the Lipschitz type property $\displaystyle |\partial_q \psi(x)| \leq \frac{1}{n} \ \ \ \ \ (19)$ for all ${q \in Q}$.
The second bump function ${\eta: G \rightarrow {\bf R}}$ is similarly defined by the formula $\displaystyle \eta(x) := \sup \{ 1 - \frac{j}{M}: x \in (V^{U^4})^j U; j = 0,\ldots,M \} \cup \{0\},$ where ${V^{U^4} := \{ g^{-1} x g: x \in V, g \in U^4 \}}$, and where ${M}$ is a quantity depending on ${m}$ and ${U}$ to be chosen later. If ${V}$ is small enough depending on ${U}$ and ${m}$, then ${(V^{U^4})^M \subset U}$, and so ${\eta}$ also takes values in ${[0,1]}$, equals ${1}$ on ${U}$, is supported in ${U^2}$, and obeys the Lipschitz type property $\displaystyle |\partial_g \eta(x)| \leq \frac{1}{M} \ \ \ \ \ (20)$ for all ${g \in V^{U^4}}$. Now let ${\phi := \psi * \eta}$. Then ${\phi}$ is supported on ${U^4}$ and ${\| \phi \|_{C_c(G)} \gg 1}$ (where implied constants can depend on ${U}$, ${\mu}$). As before, we conclude that ${g \in U^8}$ whenever ${\|g\|_\phi}$ is sufficiently small. Now suppose that ${q \in Q[V]}$; we will estimate ${\|q\|_\phi}$. From (5) one has $\displaystyle \|q\|_\phi \ll \frac{1}{n} \| q^n \|_\phi + \sup_{0 \leq i \leq n} \| \partial_{q^i} \partial_{q} \phi \|_{C_c(G)}$ (note that ${\partial_{q^i}}$ and ${\partial_q}$ commute). For the first term, we can compute $\displaystyle \| q^n \|_\phi = \sup_x |\partial_{q^n} (\psi * \eta)(x)|$ and $\displaystyle \partial_{q^n} (\psi * \eta)(x) = \int_G \psi(y) (\partial_{(q^n)^y} \eta)(y^{-1} x)\ d\mu(y).$ Since ${q \in Q[V]}$, ${q^n \in V}$, so by (20) we conclude that $\displaystyle \| q^n \|_\phi \ll \frac{1}{M}.$ For the second term, we similarly expand $\displaystyle \partial_{q^i} \partial_{q} \phi(x) = \int_G (\partial_q \psi)(y) (\partial_{(q^i)^y} \eta)(y^{-1} x)\ d\mu(y).$ Using (20), (19) we conclude that $\displaystyle |\partial_{q^i} \partial_{q} \phi(x)| \ll \frac{1}{Mn}.$ Putting this together we see that $\displaystyle \|q\|_\phi \ll \frac{1}{Mn}$ for all ${q \in Q[V]}$, which in particular implies that $\displaystyle \| g \|_\phi \ll \frac{m}{M}$ for all ${g \in Q^{mn}}$. For ${M}$ sufficiently large, this gives ${Q^{mn} \subset U^8}$ as required. $\Box$ We will also need the following compactness result in the Hausdorff distance $\displaystyle d_H( E, F ) := \max( \sup_{x \in E} \hbox{dist}(x,F), \sup_{y \in F} \hbox{dist}(E, y) )$ between two non-empty closed subsets ${E, F}$ of a metric space ${(X,d)}$. Example 1 In ${{\bf R}}$ with the usual metric, the finite sets ${\{ \frac{i}{n}: i=1,\ldots,n\}}$ converge in Hausdorff distance to the closed interval ${[0,1]}$. Lemma 15 The space ${K(X)}$ of non-empty closed subsets of a compact metric space ${X}$ is itself a compact metric space (with the Hausdorff distance as the metric). Proof: It is easy to see that the Hausdorff distance is indeed a metric on ${K(X)}$, and that this metric is complete. The total boundedness of ${X}$ easily implies the total boundedness of ${K(X)}$ (indeed, once one can cover ${X}$ by the ${\epsilon}$-neighbourhood of a finite set ${F}$, one can cover ${K(X)}$ by the ${2\epsilon}$-neighbourhood of ${K(F)}$, by “rounding” off any closed subset of ${X}$ to the nearest subset of ${F}$). The claim then follows from the Heine-Borel theorem. $\Box$ Now we can prove Proposition 7. Let ${G}$ be a locally compact group endowed with some metric ${d}$, and let ${U}$ be an open neighbourhood of the identity; by shrinking ${U}$ we may assume that ${U}$ is precompact. Let ${V_i}$ be a sequence of balls around the identity with radius going to zero, then ${Q[V_i]}$ is a symmetric set in ${V_i}$ that contains the identity.
If, for some ${i}$, ${Q[V_i]^n \subset U}$ for every ${n}$, then ${\langle Q[V_i] \rangle \subset U}$ and we are done. Thus, we may assume for sake of contradiction that there exists ${n_i}$ such that ${Q[V_i]^{n_i} \subset U}$ and ${Q[V_i]^{n_i + 1} \not \subset U}$; since the ${V_i}$ go to zero, we have ${n_i \rightarrow \infty}$. By Proposition 14, we can also find ${m_i \rightarrow \infty}$ such that ${Q[V_i]^{m_i n_i} \subset U^8}$. The sets ${\overline{Q[V_i]^{n_i}}}$ are closed subsets of ${\overline{U}}$; by Lemma 15, we may pass to a subsequence and assume that they converge to some closed subset ${E}$ of ${\overline{U}}$. Since the ${Q[V_i]}$ are symmetric and contain the identity, ${E}$ is also symmetric and contains the identity. For any fixed ${m}$, we have ${Q[V_i]^{m n_i} \subset U^8}$ for all sufficiently large ${i}$, which on taking Hausdorff limits implies that ${E^m \subset \overline{U^8}}$. In particular, the group ${H := \overline{\langle E \rangle}}$ is a compact subgroup of ${G}$ contained in ${\overline{U^8}}$. Let ${U_1}$ be a small neighbourhood of the identity in ${G}$ to be chosen later. By Theorem 13, we can find a normal subgroup ${N}$ of ${H}$ contained in ${U_1 \cap H}$ such that ${H/N}$ is NSS. Let ${B}$ be a neighbourhood of the identity in ${H/N}$ so small that ${B^{10}}$ contains no non-trivial subgroups. A compactness argument then shows that there exists a natural number ${k}$ such that for any ${g \in H/N}$ that is not in ${B}$, at least one of ${g, \ldots,g^k}$ must lie outside of ${B^{10}}$. Now let ${\epsilon > 0}$ be a small parameter. Since ${Q[V_i]^{n_i+1} \not \subset U}$, we see that ${Q[V_i]^{n_i+1}}$ does not lie in the ${\epsilon}$-neighbourhood ${\pi^{-1}(B)_\epsilon}$ of ${\pi^{-1}(B)}$ if ${\epsilon}$ is small enough, where ${\pi: H \rightarrow H/N}$ is the projection map. Let ${n'_i}$ be the first integer for which ${Q[V_i]^{n'_i}}$ does not lie in ${\pi^{-1}(B)_\epsilon}$, then ${n'_i \leq n_i+1}$ and ${n'_i \rightarrow \infty}$ as ${i \rightarrow \infty}$ (for fixed ${\epsilon}$). On the other hand, as ${Q[V_i]^{n'_i-1} \subset \pi^{-1}(B)_\epsilon}$, we see from another application of Proposition 14 that ${Q[V_i]^{kn'_i} \subset (\pi^{-1}(B)_\epsilon)^8}$ if ${i}$ is sufficiently large depending on ${\epsilon}$. Meanwhile, since ${Q[V_i]^{n_i}}$ converges to a subset of ${H}$ in the Hausdorff distance, we know that for ${i}$ large enough, ${Q[V_i]^{2n_i}}$ and hence ${Q[V_i]^{n'_i}}$ is contained in the ${\epsilon}$-neighbourhood of ${H}$. Thus we can find an element ${g_i}$ of ${Q[V_i]^{n'_i}}$ that lies within ${\epsilon}$ of a group element ${h_i}$ of ${H}$, but does not lie in ${\pi^{-1}(B)_\epsilon}$; thus ${h_i}$ lies inside ${H \backslash \pi^{-1}(B)}$. By the construction of ${k}$, we can find ${1 \leq j_i \leq k}$ such that ${h^{j_i}_i}$ lies in ${H \backslash \pi^{-1}(B^{10})}$. But ${h_i^{j_i}}$ also lies within ${o(1)}$ of ${g_i^{j_i}}$, which lies in ${Q[V_i]^{kn'_i}}$ and hence in ${(\pi^{-1}(B)_\epsilon)^8}$, where ${o(1)}$ denotes a quantity depending on ${\epsilon}$ that goes to zero as ${\epsilon \rightarrow 0}$. We conclude that ${H \backslash \pi^{-1}(B^{10})}$ and ${\pi^{-1}(B^8)}$ are separated by ${o(1)}$, which leads to a contradiction if ${\epsilon}$ is sufficiently small (note that ${\overline{\pi^{-1}(B^8)}}$ and ${H \backslash \pi^{-1}(B^{10})}$ are compact and disjoint, and hence separated by a positive distance), and the claim follows. — 5.
From locally compact to metrisable — We finally establish Proposition 6, which is actually one of the easier steps of the argument (because the conclusion is so weak). This argument is also due to Gleason. Let ${G}$ be a locally compact group, and let ${U}$ be an open neighbourhood of the identity. Let ${U_0}$ be a symmetric precompact neighbourhood of the identity in ${U}$. We can then recursively construct a sequence $\displaystyle U_0 \supset U_1 \supset U_2 \supset \ldots$ of symmetric precompact neighbourhoods such that ${(U_{n+1}^{U_0})^2 \subset U_n}$ for each ${n \geq 0}$. In particular $\displaystyle U_{n+1} \subset \overline{U_{n+1}} \subset U_{n+1}^2 \subset U_n.$ If we then form $\displaystyle N := \bigcap_n U_n = \bigcap_n \overline{U_n}$ then ${N}$ is compact, symmetric, contains the origin, and ${N^2=N}$; thus ${N}$ is a subgroup of ${G}$. Also, since ${U_{n+1}^{U_0} \subset U_n}$, we have ${N^{U_0} \subset N}$, thus ${N}$ is normalised by ${U_0}$. Thus if ${G'}$ is the group generated by ${U_0}$, then ${G'}$ is an open subgroup of ${G}$ and ${N}$ is a normal subgroup of ${G'}$. Let ${\pi: G' \rightarrow G'/N}$ be the quotient map; then we see that the ${\pi(U_n)}$ are nested open sets with ${\overline{\pi(U_n)}}$ compact and whose intersection is the identity. From this one easily verifies that they form a neighbourhood base at the identity for ${G'/N}$. Thus ${G'/N}$ is first countable and Hausdorff, and thus metrisable by the Birkhoff-Kakutani theorem. As ${G}$ is locally compact, ${G'}$ and ${G'/N}$ are also locally compact, and the claim follows.
# nLab length scale

This entry is about scales in geometry and physics. For scales in algebra and linear logic, see scale.

## Idea

length$\,$scale

## In physics

Energy and length scales in the observable universe, from cosmic scales, over fundamental particle masses around the electroweak symmetry breaking, to the GUT scale and the Planck scale. (Figure omitted; graphics from Zupan 19.)

Fundamental scales (fundamental physical units):

* speed of light$\,$ $c$
* Planck's constant$\,$ $\hbar$
* gravitational constant$\,$ $G_N = \kappa^2/8\pi$

Planck scale:

* Planck length$\,$ $\ell_p = \sqrt{ \hbar G / c^3 }$
* Planck mass$\,$ $m_p = \sqrt{\hbar c / G}$

Depending on a given mass $m$:

* Compton wavelength$\,$ $\lambda_m = \hbar / m c$
* Schwarzschild radius$\,$ $2 m G / c^2$

Depending also on a given charge $e$:

* Schwinger limit$\,$ $E_{crit} = m^2 c^3 / e \hbar$

Other scales:

* GUT scale
* string scale
  * string tension$\,$ $T = 1/(2\pi \alpha^\prime)$
  * string length scale$\,$ $\ell_s = \sqrt{\alpha'}$
  * string coupling constant$\,$ $g_s = e^\lambda$
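The Planck-scale formulas above are straightforward to evaluate numerically; the following small program (our own illustration; the constants are CODATA values and the variable names are ours) computes the Planck length and Planck mass:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // CODATA 2018 values, SI units (assumed here for illustration)
    const double hbar = 1.054571817e-34; // reduced Planck constant [J s]
    const double G    = 6.67430e-11;     // gravitational constant [m^3 kg^-1 s^-2]
    const double c    = 2.99792458e8;    // speed of light [m/s]

    // Planck length l_p = sqrt(hbar G / c^3), Planck mass m_p = sqrt(hbar c / G)
    const double l_p = std::sqrt(hbar * G / (c * c * c));
    const double m_p = std::sqrt(hbar * c / G);

    std::printf("Planck length: %.3e m\n", l_p);  // ~1.616e-35 m
    std::printf("Planck mass:   %.3e kg\n", m_p); // ~2.176e-8 kg
    return 0;
}
```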
# An arch is in the form of a semi-ellipse. It is 8 m wide and 2 m high at the centre. Find the height of the arch at a point 1.5 m from one end.

Question: An arch is in the form of a semi-ellipse. It is 8 m wide and 2 m high at the centre. Find the height of the arch at a point 1.5 m from one end.

Solution: Since the height and width of the arch from the centre are 2 m and 8 m respectively, it is clear that the length of the major axis is 8 m, while the length of the semi-minor axis is 2 m. The origin of the coordinate plane is taken as the centre of the ellipse, while the major axis is taken along the x-axis; the semi-ellipse can then be represented diagrammatically with its centre at the origin and its flat edge along the x-axis (figure omitted). The equation of the semi-ellipse will be of the form $\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1, y \geq 0$, where $a$ is the semi-major axis. Accordingly, $2 a=8 \Rightarrow a=4$ and b = 2. Therefore, the equation of the semi-ellipse is $\frac{x^{2}}{16}+\frac{y^{2}}{4}=1, y \geq 0$ (1) Let A be a point on the major axis such that AB = 1.5 m, where B is the nearer end of the arch. Draw $A C \perp O B$. OA = (4 – 1.5) m = 2.5 m. The x-coordinate of point C is 2.5. On substituting the value of x with 2.5 in equation (1), we obtain $\frac{(2.5)^{2}}{16}+\frac{y^{2}}{4}=1$ $\Rightarrow \frac{6.25}{16}+\frac{y^{2}}{4}=1$ $\Rightarrow y^{2}=4\left(1-\frac{6.25}{16}\right)$ $\Rightarrow y^{2}=4\left(\frac{9.75}{16}\right)$ $\Rightarrow y^{2}=2.4375$ $\Rightarrow y=1.56 \quad$ (approx.) $\therefore A C=1.56 \mathrm{~m}$ Thus, the height of the arch at a point 1.5 m from one end is approximately 1.56 m.
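As a quick numerical check of this computation (a sketch of ours; the function and variable names are our own), the height at a horizontal distance $x$ from the centre is $y = b\sqrt{1 - x^2/a^2}$:

```cpp
#include <cmath>
#include <cstdio>

// Height of a semi-ellipse x^2/a^2 + y^2/b^2 = 1 (y >= 0) at abscissa x.
double archHeight(double a, double b, double x) {
    return b * std::sqrt(1.0 - (x * x) / (a * a));
}

int main() {
    const double a = 4.0;        // semi-major axis [m] (half of the 8 m width)
    const double b = 2.0;        // semi-minor axis [m] (height at the centre)
    const double x = 4.0 - 1.5;  // 1.5 m from one end -> 2.5 m from the centre
    std::printf("height = %.4f m\n", archHeight(a, b, x));  // ~1.5612 m
    return 0;
}
```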
Lemma 13.31.6. Let $\mathcal{A}$ be an abelian category. Assume colimits over $\mathbf{N}$ exist and are exact. Then countable direct sums exists and are exact. Moreover, if $(A_ n, f_ n)$ is a system over $\mathbf{N}$, then there is a short exact sequence $0 \to \bigoplus A_ n \to \bigoplus A_ n \to \mathop{\mathrm{colim}}\nolimits A_ n \to 0$ where the first map in degree $n$ is given by $1 - f_ n$. Proof. The first statement follows from $\bigoplus A_ n = \mathop{\mathrm{colim}}\nolimits (A_1 \oplus \ldots \oplus A_ n)$. For the second, note that for each $n$ we have the short exact sequence $0 \to A_1 \oplus \ldots \oplus A_{n - 1} \to A_1 \oplus \ldots \oplus A_ n \to A_ n \to 0$ where the first map is given by the maps $1 - f_ i$ and the second map is the sum of the transition maps. Take the colimit to get the sequence of the lemma. $\square$
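A concrete instance of the lemma's exact sequence (our illustration, not part of the Stacks Project text): take $A_ n = \mathbf{Z}$ for all $n$, with every transition map $f_ n$ given by multiplication by a prime $p$, so that $\mathop{\mathrm{colim}}\nolimits A_ n = \mathbf{Z}[1/p]$. The sequence becomes $0 \to \bigoplus _{n \geq 1} \mathbf{Z} \xrightarrow {1 - p} \bigoplus _{n \geq 1} \mathbf{Z} \to \mathbf{Z}[1/p] \to 0$, where in degree $n$ the first map sends $a$ to $(a, -pa)$ placed in degrees $n$ and $n+1$, and the second map sends $a$ in degree $n$ to $a/p^{n-1}$. One checks directly that the composite vanishes: $a/p^{n-1} - pa/p^{n} = 0$.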
#### Further Information

In addition to our technical support (e.g. via chat), you’ll find resources on our website that may help you with your design using Dlubal Software.

• ### Is it possible to consider cross-section weakenings due to drilled holes or the like without re-modelling the cross-section in, for example, the DUENQ program?

If the holes follow a regular grid, they can be defined by means of composite cross-sections (Figure 1). Otherwise, it is still possible to reduce the cross-section as a whole in its properties (Figure 2). This applies a flat reduction to the stiffness of the cross-section. Unfortunately, it is not possible to distinguish between reductions in the compression and tension zones of the cross-section. That possibility requires re-modelling the cross-section in the DUENQ program or in some modules of the JOINTS family.

• ### Can wood-wood joints according to EN 1995-1-1, Chapter 8.2.2 be designed in RSTAB and RFEM with the JOINTS add-on module?

The design of timber joints with dowel-type fasteners is currently limited to steel-wood joints according to Chapter 8.2.3. Pure wood-wood connections in shear, taking the Johansen theory into account, are therefore currently not possible. Direct wood-wood connections by means of full-thread screws are possible with the module RF-/JOINTS Timber - Timber to Timber, with which main beam/secondary beam connections can be calculated.

• ### How can an offset of members at a certain connection point be taken into account?

There are basically two options here:

• The use of member eccentricities, see the technical article "Consideration of Member and Surface Eccentricities"
• In the case of, for example, differently defined member end hinges in combination with different offset dimensions, the use of couplings or rigid members may help, see Figure 1

• ### For a design with RF-GLASS according to DIN 18008, I have to carry out one calculation with full shear bond and one without shear bond for insulating glass with laminated safety glass. Is it possible to do this in a single model with just one calculation run?

Unfortunately, it is not possible to perform the calculation with and without shear bond in one file. A separate file must be created for each state.

• ### Why can I not design a connection under biaxial bending with the RF-/JOINTS module?

In the current standards, fasteners and connections are always designed in one plane only. The reason for this is that the shear design and so on can only be analyzed in the 2D plane. The bearing-type design, for example, is not possible for out-of-plane failure. Since internal forces in v_y and v_z can both occur in a three-dimensional calculation, it has proven useful in practice to allow a small proportion of internal forces in the secondary direction and not to utilize the connection fully. However, if the proportion of the shear force in the secondary direction becomes too high, a detailed investigation with an FE simulation may be necessary.

• ### What does the design information mean: Geometry error left side: End plate of the girder: Lambda2 > 1.4; Column flange: Lambda2 > 1.4? Neither in the manual nor online can I find an explanation.

The auxiliary values λ1 and λ2 are required to determine the effective lengths.
These two values are used to determine an α value from Figure 6.19 of EN 1993-1-8, which is then used to calculate the effective lengths (for non-circular yield line patterns) of the T-stub flanges. The maximum value for λ1 is 0.9 and the maximum value for λ2 is 1.4 (see Figure 6.11 of EN 1993-1-8). Based on your geometry, however, the result is, for example, a λ2 > 1.4 for the end plate, so α can only be calculated with the maximum value of 1.4.

• ### Where do I find the connection moments due to the applied rotational restraint in RF‑/STEEL EC3?

Connection moments are not calculated in RF-/STEEL EC3.

• ### How is the rotational restraint stiffness calculated for a non-continuous rotational restraint (for example, purlins) in RF‑/STEEL EC3?

The total rotational spring comprises several individual rotational springs, which are given in [1] as Equation 10.11. In the case of a non-continuous rotational restraint by purlins, RF‑/STEEL EC3 takes into account the rotational stiffness due to the connection stiffness CD,A, the rotational stiffness CD,C due to the bending stiffness of the available purlins, and also the rotational stiffness CD,B due to the section deformation, if activated. Since the execution of the connection is unknown, the infinite value is set by default. The spring stiffnesses enter as reciprocal values 1/C, so 'infinity' yields a contribution of 0. If you know the rotational spring stiffness of the connection, you can specify this value manually.

The rotational stiffness CD,C due to the bending stiffness of the purlins is determined according to the following formulas:

$c_{D,C} = C_{D,C} / e$
$C_{D,C} = \frac{k \cdot E \cdot I}{s}$

where

E is the modulus of elasticity
k is the coefficient for position (inner span, outer span)
I is the moment of inertia Iy
s is the distance between the beams
e is the distance between the purlins

The rotational stiffness CD,B due to the cross-section deformation is determined according to the following formulas:

$c_{D,B} = C_{D,B} / e$
$C_{D,B} = \sqrt{E \cdot t_w^3 \cdot G \cdot I_{T,G} / (h - t_f)}$
$I_{T,G} = b \cdot t_f^3 / 3$

where

E is the modulus of elasticity
tw is the web thickness of the truss or the supported component
G is the shear modulus
h is the height of the truss or the supported component
tf is the flange thickness of the truss
b is the truss width
e is the distance between the purlins

The attached example includes two design cases. Case 1 was designed without taking into account the cross-section deformation. The total rotational spring stiffness is CD = CD,C = 4,729 kNm/m. Case 2 was designed while taking into account the cross-section deformation.
The total rotational spring stiffness is CD = 72.02 kNm/m:

Single spring CD,B = 73.14 kNm/m
Single spring CD,C = 4,729 kNm/m

Total spring:

$\frac{1}{C_D} = \frac{1}{C_{D,B}} + \frac{1}{C_{D,C}} = \frac{1}{73.14} + \frac{1}{4,729}$
$C_D = 72.02\;\mathrm{kNm/m}$

(A small code check of this series combination is given at the end of this page.)

• ### Where can I find the internal forces at certain nodes in the printout report?

The easiest way to find the internal forces at these nodes is to include the member diagrams as pictures in the printout report. If this solution is not an option, you can also find the values in result table 4.1 in the printout report. Since only the extreme values are activated by default, it is necessary to activate the nodal values in the selection. It is usually not reasonable to include the internal forces of all members in the printout report, so you can select only the members that are relevant to you.

• ### I have designed a steel connection using RF‑JOINTS and then created a model to compare it in RFEM. Why are the results not identical?

RF-JOINTS performs an idealized design of a steel connection according to the standard, which cannot be easily compared with an exact FE calculation. Thus, the following conditions must be met:

• Consideration or exclusion of friction/compression/tension within the contact solid (tab "Solid") as well as for the subsequently modeled bolts
• Consideration of internal forces and deformations within the subsequently modeled end plates or similar, which cause a redistribution of bolt forces in the FE calculation (in contrast to the idealized design in RF‑JOINTS); this can be corrected by rigid connection objects, for example (an end plate as a rigid surface)
• Uniform load introduction into the FE model, for example, by using rigid members or rigid surfaces as described in the article "FEM Modeling Approaches of Rigid Connections"
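Returning to the rotational-restraint example above: the series combination of the two springs can be verified with a few lines of code (a sketch of ours; the function and variable names are our own):

```cpp
#include <cstdio>

// Total stiffness of two rotational springs acting in series: 1/C = 1/C_B + 1/C_C.
double seriesSpring(double cB, double cC) {
    return 1.0 / (1.0 / cB + 1.0 / cC);
}

int main() {
    const double cDB = 73.14;   // C_D,B: cross-section deformation [kNm/m]
    const double cDC = 4729.0;  // C_D,C: bending stiffness of the purlins [kNm/m]
    std::printf("C_D = %.2f kNm/m\n", seriesSpring(cDB, cDC));  // ~72.0, matching Case 2
    return 0;
}
```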
# Synopsis: Space Tests of the Equivalence Principle

The MICROSCOPE satellite mission has tested the equivalence principle with unprecedented precision, showing no deviations from the predictions of general relativity.

According to general relativity, all bodies should fall at the same rate in a gravitational field, independent of their composition. This “equivalence” principle has so far withstood all experimental tests, but finding violations could provide clues to theories that unify gravity with the other fundamental forces or that explain dark matter or dark energy. In 2016, the National Center for Space Studies (CNES), France’s space agency, launched the MICROSCOPE satellite, which is dedicated to testing the equivalence principle. The mission’s science team, led by researchers at the French Aerospace Lab (ONERA) and at the Côte d'Azur Observatory (OCA), also in France, has now reported its first results. By taking advantage of the quiet environment in space, the mission tested the equivalence principle with record accuracy, finding no deviations from the predictions of relativity.

MICROSCOPE tests the equivalence principle by comparing the acceleration of two masses that follow the same orbit around Earth for a long period of time. The two masses have identical geometries—they are hollow cylinders—but different compositions: the first is made of a platinum alloy, the second of a titanium alloy. Two independent electrostatic feedback circuits apply the forces needed to keep the two masses motionless with respect to the satellite, that is, on the same orbit. A difference in the applied forces would signal a violation of the equivalence principle.

Using data collected over 120 orbits around Earth, the team calculated the dimensionless Eötvös parameter, which quantifies the difference in the two masses’ accelerations and should thus be zero unless the principle is violated. A statistical analysis of the data showed that they are consistent with zero with a precision of $10^{-14}$—10 times better than previous tests. The collaboration expects to achieve a $10^{-15}$ precision by the end of the mission in 2018.

This research is published in Physical Review Letters.

–Matteo Rini

Matteo Rini is the Deputy Editor of Physics.
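For reference, the Eötvös parameter is conventionally defined as a normalized acceleration difference (this standard definition is our addition; the synopsis itself does not spell it out): for two test masses A and B with measured free-fall accelerations $a_A$ and $a_B$, $\eta(A,B) = 2\,(a_A - a_B)/(a_A + a_B)$. The reported result thus says that $\eta$ is consistent with zero at the $10^{-14}$ level, i.e. the two accelerations agree to about one part in $10^{14}$.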
How do you graph $f\left(x\right)=\frac{3}{{x}^{2}\left(x+5\right)}$ using holes, vertical and horizontal asymptotes, x and y intercepts?

Leon Webster

For vertical asymptotes, look at the denominator. We need ${x}^{2}\left(x+5\right)\ne 0$ because the graph is undefined where the denominator vanishes. Solving ${x}^{2}\left(x+5\right)=0$ for x shows that the vertical asymptotes are x=0 and x=−5.

For the horizontal asymptote, look at the degrees of the numerator and denominator. If the degree of the numerator is less than the degree of the denominator, then the horizontal asymptote is y=0. Or you can think of it this way: if you put some large numbers into f(x), you will notice that ${x}^{2}\left(x+5\right)$ will be a lot bigger than 3. Hence, when you divide a small number by a big number, $\frac{3}{{x}^{2}\left(x+5\right)}\to 0$.

For your x and y intercepts: sub y=0 for x intercepts, giving $0=\frac{3}{{x}^{2}\left(x+5\right)}$, which would require 0=3; this isn't true, so there are no x-intercepts. Sub x=0 for y intercepts, which cannot occur since x=0 is an asymptote. Therefore, there are no intercepts.

(Graph omitted: the ends of the curve approach the asymptotes y=0, x=0 and x=−5.)
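To see the asymptotic behaviour numerically, one can tabulate the function near the excluded points (a small sketch of ours, not part of the original answer):

```cpp
#include <cstdio>

// f(x) = 3 / (x^2 (x + 5)): undefined at x = 0 and x = -5.
double f(double x) { return 3.0 / (x * x * (x + 5.0)); }

int main() {
    const double xs[] = {-5.01, -4.99, -0.01, 0.01, 10.0, 100.0};
    for (double x : xs)
        std::printf("f(%7.2f) = %12.4f\n", x, f(x));
    // Near x = -5 and x = 0 the values blow up (vertical asymptotes);
    // for large |x| they shrink towards 0 (horizontal asymptote y = 0).
    return 0;
}
```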
Diophantine equations of the sum of cubes equal to a square

Does anyone know how to obtain infinitely many solutions of the following Diophantine equation $X^2=DY^3+K^3$, all numbers non-zero natural numbers?

- Your question is as unclear as ever. Do you want to find solutions for any fixed $D$ and $K$, or are those also variables (in which case the question is trivial)? In what domain are you looking for solutions? –  Alex B. Dec 19 '11 at 5:47

- @Alex: $D$ is variable, $K$ is fixed, all numbers natural. –  Vassilis Parassidis Dec 19 '11 at 6:56

- @Vassili: If you are simply asking for infinitely many solutions with $D$ variable, that is too easy. Pick $K=1$, $X$ anything bigger than $1$, $Y=1$, $D=X^2-1$. One can also arrange for $Y$ arbitrarily large. I would have expected $D$ fixed. –  André Nicolas Dec 19 '11 at 7:50

- @AndréNicolas $K$ being fixed means you can't just "pick $K=1$". It is god-given to you. Vassili, you are looking for integer solutions in a family of quadratic twists of a given elliptic curve. While for a given elliptic curve, there are only finitely many integer solutions, and they can be found algorithmically, I am pretty sure that finding infinitely many solutions in a family of quadratic twists is outside current number theoretic technology. –  Alex B. Dec 19 '11 at 13:28

- @Alex B.: I thought that the OP wanted a family of examples. O.K., god has provided a $K$. Pick any $X^2>K^3$, pick $Y=1$, $D=X^2-K^3$. –  André Nicolas Dec 19 '11 at 14:47
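André Nicolas's construction in the comments is easy to mechanize: for a fixed $K$, every $X$ with $X^2 > K^3$ yields a solution with $Y = 1$ and $D = X^2 - K^3$. A tiny sketch (ours, with an example value of $K$):

```cpp
#include <cstdio>

int main() {
    const long long K = 2;          // fixed K (example value)
    const long long K3 = K * K * K; // K^3 = 8
    // For each X with X^2 > K^3, take Y = 1 and D = X^2 - K^3,
    // so that X^2 = D*Y^3 + K^3 holds trivially with all values non-zero.
    for (long long X = 3; X <= 7; ++X) {
        long long D = X * X - K3;
        std::printf("X=%lld  D=%lld  Y=1  (check: %lld = %lld)\n",
                    X, D, X * X, D * 1 + K3);
    }
    return 0;
}
```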
## Does C++ have a squared operator?

21 replies to this topic

### #1 LAURENT*  Members

Posted 30 May 2014 - 03:04 PM

I mean something like 2^2 = 4 or 10^2 = 100. I'm programming physics and I kinda need it. If you have code that replicates exponents in math I would like to ask if you could give it to me. Thanks all for any support.

### #2 Washu  Senior Moderators

Posted 30 May 2014 - 03:30 PM

No, there is no operator for squaring, why would there be when you can simply do it yourself with the multiplication operator. And std::pow for doing arbitrary powers (with some limitations).

### #3 LAURENT*  Members

Posted 30 May 2014 - 03:38 PM

I guess you're right. I thought of that right after I made the thread. I'm still trying out physics and the math isn't coming together like I hoped. I'm thinking about how the compiler will process my code and it looks pretty inaccurate to me.

### #4 Burnt_Fyr  Members

Posted 30 May 2014 - 03:46 PM

There is not an operator per se, but there is the pow function, which raises a base to an exponent.

### #5 LAURENT*  Members

Posted 30 May 2014 - 03:57 PM

You know what, I'm so sorry for making this thread. We have a math and physics sub forum and my thread will serve its purpose better there. Thanks for informing me about pow. I will research it.

### #6 LightOfAnima  Members

Posted 01 June 2014 - 02:32 PM

If you so want, you could always try creating a custom class that overloads the ^ operator, using pow internally.

### #7 L. Spiro  Members

Posted 01 June 2014 - 03:08 PM

Don’t use pow() unless necessary; there is no guarantee the compiler will optimize it away into “X*X” and when it doesn’t you will have a major performance problem. Just use X*X.

L. Spiro

### #8 Vortez  Members

Posted 02 June 2014 - 08:04 AM

I know most ppl here dont like macros, and i don't either most of the time, but i think that's a good case for one:

#define POW2(x) ((x)*(x))

or maybe an inline function if you dont like macros.

### #9 Bacterius  Members

Posted 02 June 2014 - 08:36 AM

This macro evaluates its operand twice though, which is actually probably your enemy in situations where you would want to use this macro (compilers are not necessarily allowed to reorder operations or even do common subexpression elimination with floating-point math) so I would recommend against it.
### #10 Cornstalks  Members

Posted 02 June 2014 - 12:10 PM

My question is why you think this is a good case for a macro? What benefit does this provide over:

template <typename T>
T pow2(const T& x) { return x * x; }

// Or, if using a more "modern" C++:
template <typename T>
constexpr T pow2(const T& x) { return x * x; }

### #11 Vortez  Members

Posted 02 June 2014 - 02:02 PM

Well, for one it's extremely simple, compared to your method, and two, it's really fast since it doesn't involve invoking a function, but it's just a suggestion after all, the op can choose whatever method he prefers, i was just pointing it out.

### #12 Washu  Senior Moderators

Posted 02 June 2014 - 03:33 PM

Except that Cornstalks' function will be inlined in any decent compiler. It also avoids a rather nasty trap that your macro has, which can result in hard to diagnose bugs and produce undefined behavior. I'll leave it up to you to figure out what the trap is...

### #13 Chris_F  Members

Posted 02 June 2014 - 04:27 PM

There are no disadvantages to the templated constexpr function. In Clang it is inlined at all optimization levels except for -O0, and if you want to you could use __attribute__((always_inline)) to force it to inline under all circumstances. If you think macros are "fast" and functions are slow, you are using the wrong mindset.

### #14 ilreh  Members

Posted 02 June 2014 - 05:04 PM

I would stick with L. Spiro's suggestion. If you're using simple multiplications, a (good) compiler internally tries to solve this with bit shifting, which is a very fast way of altering numbers. Adding stuff to the stack for such a simple operation is a waste.

### #15 Vortez  Members

Posted 02 June 2014 - 05:25 PM

What's the trap? I took it from a very good c++ book, not that i really care anyway lol. I don't mean to be rude, but it would be pretty dumb to not call this macro correctly. The parentheses should be able to protect from the bug you speak of, i believe.
If not, then i'll just shut my trap.

### #17 Vortez  Members

Posted 02 June 2014 - 05:39 PM

??? I just compiled this and it worked just fine (answer: 9)

#include <stdio.h>

#define POW2(x) ((x)*(x))

int test() { return 3; }

int main() {
    int x = POW2(test());
    printf("%d\n", x);
    return 0;
}

### #18 fastcall22  Moderators

Posted 02 June 2014 - 05:52 PM

Cool story, now try this one:

#include <iostream>

#define POW2(x) ((x)*(x))

class Foobar {
public:
    Foobar() : _a(5) { }
    int a() { return _a++; }
private:
    int _a;
};

int main() {
    using namespace std;
    Foobar f;
    int x = POW2(f.a());
    cout << x << endl;
}

### #19 L. Spiro  Members

Posted 02 June 2014 - 05:55 PM

What happens if you try:

#include <stdio.h>

#define POW2(x) ((x)*(x))

int main() {
    int y = 3;
    int x = POW2(++y);
    printf("%d\n", x);
    return 0;
}

L. Spiro

### #20 Washu  Senior Moderators

Posted 02 June 2014 - 06:22 PM

Three prime examples of the problems of using macros in this manner. Note that Spiro's results in undefined behavior.
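To make the trap discussed in this thread concrete, here is an annotated sketch of ours (not from the thread) contrasting the macro with the template:

```cpp
#include <iostream>

#define POW2(x) ((x)*(x))  // expands its argument twice

// Safe alternative: the argument is evaluated exactly once.
template <typename T>
T pow2(const T& x) { return x * x; }

int main() {
    int y = 3;
    // POW2(++y) expands to ((++y)*(++y)): y is modified twice with no
    // intervening sequence point, which is undefined behavior.
    // std::cout << POW2(++y) << '\n';  // don't do this

    int z = 3;
    std::cout << pow2(++z) << '\n';  // well-defined: prints 16
    return 0;
}
```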
# Another interpretation of function space

Let $X$ and $Y$ be sets and $Y^X$ the set of functions $f:X\to Y$. How can we interpret $Y^X$ as the Cartesian product $\prod_{x\in X}Y_x$ where $Y_x=Y$ for each $x\in X$?

- They are the same. No interpretation is needed. – André Nicolas Feb 23 '12 at 8:19
- I’m not really sure what your question is: by definition that Cartesian product is the set of functions from $X$ to $Y$. – Brian M. Scott Feb 23 '12 at 8:20
- For example, suppose $X$ is finite. Then we have the bijection $$\prod_{x\in X}Y_x\to Y^X$$ defined by sending a tuple $$(y_1,y_2,...,y_n)$$ to the map $f$ with $f(x_1)=y_1,...,f(x_n)=y_n$. Is that correct? – palio Feb 23 '12 at 8:20
- Yes, that is correct. – Brian M. Scott Feb 23 '12 at 8:25

The elements of the Cartesian product $\prod_{x\in X}Y_x$ are sequences indexed by $X$ whose elements are members of $Y$, namely $\langle y_x\mid x\in X\rangle$. Such a sequence is naturally isomorphic to $\{\langle x,y_x\rangle\mid x\in X\}$, which is exactly a function from $X$ to $Y$. This means that there is a very natural way to identify $\prod_{x\in X} Y_x$ with $Y^X$. That natural isomorphism is the identity: a sequence indexed by $X$ whose elements are members of $Y$ is a function from $X$ to $Y$. – Brian M. Scott Feb 23 '12 at 8:26

- @Brian: But what if $X$ is finite? I do agree that in a context where products come up, it is a good idea to teach them as functions. Apparently not everyone does that (otherwise this question would not have come up...) :-) – Asaf Karagila Feb 23 '12 at 8:46
- Well, no matter how you teach products, sooner or later you'll be confronted with the question of why $(X \times Y) \times Z$ is "only" naturally isomorphic to $X \times (Y \times Z)$ and not actually equal... – Zhen Lin Feb 23 '12 at 10:34
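A small worked example (not from the thread) with $X$ finite may help: take $X=\{1,2\}$ and $Y=\{a,b\}$. Then
$$\prod_{x\in X}Y_x = Y\times Y=\{(a,a),(a,b),(b,a),(b,b)\},$$
and the tuple $(a,b)$ is the function $f$ with $f(1)=a$ and $f(2)=b$. The four tuples correspond exactly to the $|Y|^{|X|}=2^2=4$ functions in $Y^X$, which is also where the exponential notation comes from.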
# hadoop-common-commits mailing list archives

From: Apache Wiki <[email protected]>
Subject: [Hadoop Wiki] Update of "Hive/Design" by JeffHammerbacher
Date: Thu, 22 Jan 2009 04:11:34 GMT

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by JeffHammerbacher:

------------------------------------------------------------------------------
* Tables - These are analogous to tables in relational databases. Tables can be filtered, projected, joined and unioned. Additionally, all the data of a table is stored in a directory in HDFS. Hive also supports the notion of external tables, wherein a table can be created on pre-existing files or directories in HDFS by providing the appropriate location to the table creation DDL. The rows in a table are organized into typed columns, similar to relational databases.
* Partitions - Each table can have one or more partition keys which determine how the data is stored, e.g. a table T with a date partition column ds has files with data for a particular date stored in the <table location>/ds=<date> directory in HDFS. Partitions allow the system to prune the data to be inspected based on query predicates, e.g. a query that is interested in rows from T that satisfy the predicate T.ds = '2008-09-01' would only have to look at files in the <table location>/ds=2008-09-01/ directory in HDFS.
* Buckets - Data in each partition may in turn be divided into buckets based on the hash of a column in the table. Each bucket is stored as a file in the partition directory. Bucketing allows the system to efficiently evaluate queries that depend on a sample of data (these are queries that use the SAMPLE clause on the table).
- \end{itemize}

Apart from primitive column types (integers, floating point numbers, generic strings, dates and booleans), Hive also supports arrays and maps. Additionally, users can compose their own types programmatically from any of the primitives, collections or other user-defined types. The typing system is closely tied to the SerDe (Serialization/Deserialization) and object inspector interfaces. Users can create their own types by implementing their own object inspectors, and using these object inspectors they can create their own SerDes to serialize and deserialize their data into HDFS files. These two interfaces provide the necessary hooks to extend the capabilities of Hive when it comes to understanding other data formats and richer types. Builtin object inspectors like ListObjectInspector, StructObjectInspector and MapObjectInspector provide the necessary primitives to compose richer types in an extensible manner. For maps (associative arrays) and arrays, useful builtin functions like size and index operators are provided. The dotted notation is used to navigate nested types, e.g. a.b.c = 1 looks at field c of field b of type a and compares that with 1.
A Tabu Search Algorithm for the Multi-period Inspector Scheduling Problem

17 Sep 2014

This paper introduces a multi-period inspector scheduling problem (MPISP), which is a new variant of the multi-trip vehicle routing problem with time windows (VRPTW). In the MPISP, each inspector is scheduled to perform a route in a given multi-period planning horizon. At the end of each period, each inspector is not required to return to the depot but has to stay at one of the vertices for recuperation. If the remaining time of the current period is insufficient for an inspector to travel from his/her current vertex $A$ to a certain vertex $B$, he/she can choose either waiting at vertex $A$ until the start of the next period or traveling to a vertex $C$ that is closer to vertex $B$. Therefore, the shortest transit time between any vertex pair is affected by the length of the period and the departure time. We first describe an approach of computing the shortest transit time between any pair of vertices with an arbitrary departure time. To solve the MPISP, we then propose several local search operators adapted from classical operators for the VRPTW and integrate them into a tabu search framework. In addition, we present a constrained knapsack model that is able to produce an upper bound for the problem. Finally, we evaluate the effectiveness of our algorithm with extensive experiments based on a set of test instances. Our computational results indicate that our approach generates high-quality solutions.

PDF Abstract
# strdup.c

/* Copyright (c) 2008, Atmel Corporation

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holders nor the names of contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */

/* $Id$ */

#include <stdlib.h>
#include <string.h>
#include "sectionname.h"

/** \file */

/** \ingroup avr_string
    \fn char *strdup(const char *s1)
    \brief Duplicate a string.

    The strdup() function allocates memory and copies into it the string addressed by s1, including the terminating null character.

    \warning The strdup() function calls malloc() to allocate the memory for the duplicated string. The caller is responsible for releasing the memory by calling free().

    \returns The strdup() function returns a pointer to the duplicated string. If malloc() cannot allocate enough storage for the string, strdup() will return NULL.

    \warning Be sure to check the return value of the strdup() function to make sure that the function has succeeded in allocating the memory! */

ATTRIBUTE_CLIB_SECTION
char *
strdup(const char *s1)
{
    char *s2 = malloc(strlen(s1) + 1);  /* +1 for the terminating null */
    if (s2 != NULL) {
        strcpy(s2, s1);
    }
    return (s2);
}
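A minimal usage sketch (not part of the library source) showing both warnings above in practice — checking the return value and freeing the duplicate:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const char *original = "duplicate me";

    char *copy = strdup(original);
    if (copy == NULL) {
        /* malloc() failed inside strdup(); handle the error. */
        return EXIT_FAILURE;
    }

    printf("%s\n", copy);
    free(copy);  /* the caller owns the memory and must free it */
    return EXIT_SUCCESS;
}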
## Hypocycloid

The curve produced by a small Circle of Radius $b$ rolling around the inside of a large Circle of Radius $a>b$. A hypocycloid is a Hypotrochoid with $h=b$.

To derive the equations of the hypocycloid, call the Angle by which a point on the small Circle rotates about its center $\theta$, and the Angle from the center of the large Circle to that of the small Circle $\phi$. Then the rolling condition is

$$a\phi = b(\phi+\theta) \qquad (1)$$

so

$$\theta = \frac{a-b}{b}\,\phi. \qquad (2)$$

Call $\rho \equiv a-2b$. If $x(0)=\rho$, then the first point is at minimum radius, and the Cartesian parametric equations of the hypocycloid are

$$x = (a-b)\cos\phi - b\cos\!\left(\frac{a-b}{b}\,\phi\right) \qquad (3)$$
$$y = (a-b)\sin\phi + b\sin\!\left(\frac{a-b}{b}\,\phi\right). \qquad (4)$$

If instead $x(0)=a$, so the first point is at maximum radius (on the Circle), then the equations of the hypocycloid are

$$x = (a-b)\cos\phi + b\cos\!\left(\frac{a-b}{b}\,\phi\right) \qquad (5)$$
$$y = (a-b)\sin\phi - b\sin\!\left(\frac{a-b}{b}\,\phi\right). \qquad (6)$$

An $n$-cusped non-self-intersecting hypocycloid has $a/b=n$. A 2-cusped hypocycloid is a Line Segment, as can be seen by setting $a=2b$ in equations (3) and (4) and noting that the equations simplify to

$$x = 0 \qquad (7)$$
$$y = a\sin\phi. \qquad (8)$$

A 3-cusped hypocycloid is called a Deltoid or Tricuspoid, and a 4-cusped hypocycloid is called an Astroid. If $a/b=p/q$ is rational (in lowest terms), the curve closes on itself after $q$ circuits and has $p$ cusps. If $a/b$ is Irrational, the curve never closes and fills the entire interior of the Circle.

$n$-hypocycloids can also be constructed by beginning with the Diameter of a Circle, offsetting one end by a series of steps while at the same time offsetting the other end by steps $(n-1)$ times as large in the opposite direction and extending beyond the edge of the Circle. After traveling around the Circle once, an $n$-cusped hypocycloid is produced (Madachy 1979).

Let $r$ be the radial distance from a fixed point. For radius of curvature $\rho$ and Arc Length $s$, a hypocycloid satisfies a natural (Cesàro-type) equation of the form $\frac{s^2}{A^2}+\frac{\rho^2}{B^2}=1$ for suitable constants $A$ and $B$ (Kreyszig 1991, pp. 63-64). A hypocycloid also satisfies a first-order relation between $r$ and the Angle $\psi$ between the Radius Vector and the Tangent to the curve.

The Arc Length of the hypocycloid can be computed from equations (5) and (6) as follows:

$$\frac{dx}{d\phi} = -(a-b)\left[\sin\phi + \sin\!\left(\frac{a-b}{b}\,\phi\right)\right] \qquad (12)$$
$$\frac{dy}{d\phi} = (a-b)\left[\cos\phi - \cos\!\left(\frac{a-b}{b}\,\phi\right)\right] \qquad (13)$$
$$\left(\frac{ds}{d\phi}\right)^2 = \left(\frac{dx}{d\phi}\right)^2+\left(\frac{dy}{d\phi}\right)^2 = 4(a-b)^2\sin^2\!\left(\frac{a\phi}{2b}\right), \qquad (14)$$

so

$$\frac{ds}{d\phi} = 2(a-b)\sin\!\left(\frac{a\phi}{2b}\right) \qquad (15)$$

for $0 \le \phi \le 2\pi b/a$. Integrating,

$$s(\phi) = \frac{4b(a-b)}{a}\left[1-\cos\!\left(\frac{a\phi}{2b}\right)\right]. \qquad (16)$$

The length of a single cusp is then

$$s\!\left(\frac{2\pi b}{a}\right) = \frac{8b(a-b)}{a}. \qquad (17)$$

If $a/b$ is rational, then the curve closes on itself without intersecting after a finite number of cusps. For $a=nb$, the closed $n$-cusped hypocycloid encloses the total Area

$$A = (n-1)(n-2)\,\pi b^2.$$

The equation of the hypocycloid can be put in a form which is useful in the solution of Calculus of Variations problems with radial symmetry. With $r^2=x^2+y^2$, equations (5) and (6) give

$$r^2 = (a-b)^2 + b^2 + 2b(a-b)\cos\!\left(\frac{a}{b}\,\phi\right);$$

solving this for $\phi$ in terms of $r$ and computing the Polar Angle then gives the hypocycloid in a radial form $r(\theta)$. This form is useful in the solution of the Sphere with Tunnel problem, which is the generalization of the Brachistochrone Problem: to find the shape of a tunnel drilled through a Sphere (with gravity varying according to Gauss's law for gravitation) such that the travel time between two points on the surface of the Sphere under the force of gravity is minimized.

References

Bogomolny, A. "Cycloids." http://www.cut-the-knot.com/pythagoras/cycloids.html.
Kreyszig, E. Differential Geometry. New York: Dover, 1991.
Lawrence, J. D. A Catalog of Special Plane Curves. New York: Dover, pp. 171-173, 1972.
Lee, X. "Epicycloid and Hypocycloid." http://www.best.com/~xah/SpecialPlaneCurves_dir/EpiHypocycloid_dir/epiHypocycloid.html.
MacTutor History of Mathematics Archive. "Hypocycloid." http://www-groups.dcs.st-and.ac.uk/~history/Curves/Hypocycloid.html.
Madachy, J. S. Madachy's Mathematical Recreations. New York: Dover, pp. 225-231, 1979.
Wagon, S. Mathematica in Action. New York: W. H. Freeman, pp. 50-52, 1991.
Yates, R. C. "Epi- and Hypo-Cycloids." A Handbook on Curves and Their Properties. Ann Arbor, MI: J. W. Edwards, pp. 81-85, 1952.
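A minimal sketch (not from the original article) that tabulates points of an $n$-cusped hypocycloid from the maximum-radius parametrization in equations (5) and (6); feeding the printed pairs to any plotting tool reproduces the curve:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Outer radius a and rolling radius b; a/b = 4 gives an astroid. */
    const double a = 4.0, b = 1.0;
    const double pi = 3.14159265358979323846;
    const int samples = 16;

    for (int i = 0; i <= samples; ++i) {
        double phi = 2.0 * pi * i / samples;
        double k = (a - b) / b;          /* the frequency ratio (a-b)/b */
        double x = (a - b) * cos(phi) + b * cos(k * phi);
        double y = (a - b) * sin(phi) - b * sin(k * phi);
        printf("% .4f  % .4f\n", x, y);  /* one (x, y) sample per line */
    }
    return 0;
}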
# Finding best response function with probabilities (BR) given a normal-form representation of the game

We are given players 1, 2 and their respective strategies (U, M, D for player 1; L, C, R for player 2) and the corresponding payoffs through the following table:

$\begin{matrix} 1|2 & L & C & R\\ U & 10, 0 & 0, 10 & 3, 3 \\ M & 2,10 & 10, 2 & 6, 4\\ D & 3, 3 & 4, 6 & 6, 6 \end{matrix}$

Player 1 holds a belief that player 2 will play each of his/her strategies with frequency $\frac{1}{3}$, in other words $\alpha_2=(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$. Given this, I need to find the best response $BR_1(\alpha_2)$ for player 1.

I am wondering how to do this mathematically. I have an intuitive way, and am not sure if it is correct. I think that if player 2 chooses $L$, player 1 is better off choosing $U$, and if player 2 chooses $C$ or $R$, player 1 is better off choosing $M$, so the best response for player 1 given his/her beliefs about player 2 would be $(\frac{1}{3}, \frac{2}{3}, 0)$. But I do not know if this is correct, or how to get the answer mathematically (though I think it could involve derivatives, which I would have to take to see what probability values would maximize the payoff; I just can't think of a function).

- I guess your intuition is that, when player #1 chooses $U$ (with probability $1/3$) you want player #2 to choose $L$ simultaneously, and when #1 chooses $M$, #2 chooses $C$ or $R$. But in fact player #1 has no control over #2. For instance, when #1 chooses $U$, #2 may also choose $C$, which gives #1 zero payoff. – GWu Apr 7 '11 at 5:51

Why don't you compute the average payoff for player #1 for each of his three choices? Clearly he's going to choose the one that is largest.

- Hint: If he chooses U, the average payoff is (10+0+3)/3 = 13/3 to player #1 and (0+10+3)/3 = 13/3 to player #2. – Carl Brannen Apr 7 '11 at 2:42
- I am trying to find the best "mixed" response — meaning that just as player two's moves are given in probabilities, I should give player one's moves in probabilities too, in a way that would maximize the payoffs. What you are suggesting is picking one "pure" strategy for player one (sorry for my miscommunication). – user9233 Apr 7 '11 at 2:56
- @Daniel: The utility one gets from choosing a mixed strategy is a weighted average of the utilities one would get for choosing the corresponding pure strategies. Since an average of some numbers is always between the max and min, no mixed strategy can ever achieve higher expected payoff than any pure strategy. So best responses are always either pure strategies, or, if there are several pure strategies tied for best, then mixtures among these pure best responses will also be best responses. In your case there is no tie, so any non-pure mixture would be strictly worse than a pure strategy. – Noah Stein Apr 7 '11 at 12:25

I think Carl already gave the right answer. Even though mixed strategies may look better than pure ones, actually they are not. Suppose that player 1 chooses a mixed strategy $\alpha_1=(a,b,c)$. Then the probability of each scenario is given by

$$\begin{matrix} 1|2 & L & C & R\\ U & a/3 & a/3 & a/3 \\ M & b/3 & b/3 & b/3 \\ D & c/3 & c/3 & c/3 \end{matrix}$$

If you consider the payoff of player 1 in each of the 9 possible scenarios and compute the expected payoff, that will be $13a/3+18b/3+13c/3$. The maximum is achieved at $(a,b,c)=(0,1,0)$, which gives $18/3$. Of course, the best response for player 1 is a pure strategy — always choosing $M$.
- The way I'd put it is that the only time a mixed (probabilistic) strategy is best is when you can assume that your opponent is using a probabilistic strategy. – Carl Brannen Apr 8 '11 at 21:05
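A small sketch (not from the thread) that mechanizes Carl's suggestion — compute the expected payoff of each pure strategy under the belief $\alpha_2$ and take the largest:

#include <stdio.h>

int main(void) {
    /* Player 1's payoffs: rows U, M, D; columns L, C, R (from the table). */
    const double payoff[3][3] = {
        {10.0,  0.0, 3.0},   /* U */
        { 2.0, 10.0, 6.0},   /* M */
        { 3.0,  4.0, 6.0},   /* D */
    };
    const double belief[3] = {1.0/3.0, 1.0/3.0, 1.0/3.0};  /* alpha_2 */
    const char *rows = "UMD";

    int best = 0;
    double best_ev = -1.0;
    for (int i = 0; i < 3; ++i) {
        double ev = 0.0;
        for (int j = 0; j < 3; ++j)
            ev += belief[j] * payoff[i][j];           /* expected payoff */
        printf("E[u1 | %c] = %.4f\n", rows[i], ev);
        if (ev > best_ev) { best_ev = ev; best = i; }
    }
    /* Prints M: 18/3 = 6.0 beats 13/3 ~ 4.33 for both U and D. */
    printf("best response: %c\n", rows[best]);
    return 0;
}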
Due to a series of errors, none of which were the authors', this portion of Table I, in Volume 98, pages 819–820, was printed incorrectly. The corrected portion is in bold font.
## [email protected]

Subject: Re: Modules
From:
Date: Tue, 30 Jun 1998 10:01:28 +0100

Hans writes:

> LaTeX already has several very slow commands with slow parsing, for
> example the variations of "new" with the LaTeX special style of defining
> arguments.

Having a definition command that is slower than \def is acceptable. Having a mechanism where the _use_ of commands is an order of magnitude (or more) slower than directly calling a control sequence is not acceptable, as long as the system is to be programmed in TeX (or a TeX-like system such as etex or omega).

This means that while it might be useful sometimes to "parse out" the argument specification from the command name, this would only ever be used in limited circumstances, e.g. to define one variant form in terms of another if for some reason the normal "base" NNNN (or nnnn) form is unavailable. As Javier commented, the current N-n distinction is not always perfect. The exact detail of the conventions may well need changing, but the basic principle must be that command sequences are accessed directly as tokens at the level we are talking about (which is the low-level programming conventions in which higher-level markup can be defined).

This does not mean that the document-level markup has to be token based. Already LaTeX has the environment constructs, which are not: \begin{enumerate} is 12 tokens rather than \begingroup\enumerate which is two. The environment syntax can fairly easily be offered in an alternative syntax, say ... which is about the same in terms of speed and memory usage as \begin \end (you have to work a bit harder to get a full XML system though :-).

Having the document-level markup being something that is parsed, "by hand", using a parser written in TeX is acceptable, but only if the result of that initial processing is a set of command tokens that can be executed in the normal way of command tokens directly looked up in TeX's hash table.

I mention this point (again) not to try to stifle discussion but because I got rather lost at what level you are intending some of your module proposals to be used.

David
## Geometry: Common Core (15th Edition)

Find several sums and look for a pattern.
$1+2=3$
$6+9=15$
$4+13=17$
$11+8=19$
In each case an odd number is added to an even number, and all the sums are odd. Conjecture: the sum of an odd number and an even number is odd.
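To see why the pattern must continue (a one-line justification, not part of the printed answer): an odd number can be written as $2m+1$ and an even number as $2n$, so their sum is $(2m+1)+2n = 2(m+n)+1$, which is odd.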
# This is based on the book A Dream Called Home by Reyna Grande. I have 10 questions.

1. What did the man who was always outside of Reyna's house sell?
2. What is the name of Reyna's younger sister who lived with her in college?
3. What kind of club did Reyna join at her university?
4. What was Reyna's relationship to her son's father, Francisco?
5. Who was Marta, the woman who ended up becoming a mentor to Reyna?
6. What is the name of Reyna's hometown in Mexico?
7. Which university did Reyna transfer to?
8. What was Emerging Voices?
9. What job did Reyna have while attending university?
10. What was Reyna's first full-time job after college?
Thread: Preons! (subquarks, etc...)

View Single Post

PF Gold P: 643

More Yershov: http://es.arxiv.org/abs/physics/0301034

Date: Thu, 16 Jan 2003 09:54:57 GMT (18kb)
Date (revised v2): Fri, 7 Mar 2003 18:07:30 GMT (18kb)

Neutrino masses and the structure of the weak gauge boson
Authors: V.N. Yershov
Comments: LaTex2e, 4 pages (V2: minor linguistic corrections)
Subj-class: General Physics

It is supposed that the electron neutrino mass is related to the structures and masses of the $W^\pm$ and $Z^0$ bosons. Using a composite model of fermions (described elsewhere), it is shown that a massless neutrino is not consistent with the high values of the experimental masses of $W^\pm$ and $Z^0$. Consistency can be achieved on the assumption that the electron neutrino has a mass of about 4.5 meV. Masses of the muon and tau neutrinos are also estimated.

Comment: Basically, the assumption is that the composite mass formula for bosons is the inverse of the composite mass formula for fermions. The preon formulas for W+, W-, and Z0 are set forth, and the entire scheme is briefly recapped in a page or so. The experimental value of the W under the boson formula is used to establish the Z and neutrino masses, which should be nearly neutral under the original rest-mass formula used for fermions. The W- = electron neutrino + electron. The W+ = electron neutrino + positron. The Z0 = W+ + W-.
# Why is this second order system difficult to control?

Why is a system of the type

$$T = 1/(s^2 - 1)$$

difficult to control using standard control methods? When I look at frequency plots, they don't seem to give me any important information as to why this system would be difficult to control using classical methods.

Can someone who knows control theory (especially the part about compensators or other classical control techniques) tell me why this system would be difficult to control using classical frequency-based methods?

$\dfrac{1}{(s^2-1)}$ factorises to $\dfrac{1}{(s+1)(s-1)}$, which has a stable pole at $s=-1$ and an unstable pole at $s=1$. The open-loop instability is what defeats the standard frequency-domain recipes: gain- and phase-margin rules read off a Bode plot presume an open-loop stable plant, so here one must fall back on the full Nyquist criterion, and the unstable pole cannot safely be cancelled by a controller zero (any imperfect cancellation leaves an unstable hidden mode). If this is the open-loop TF (you don't say if you wish to close the loop around this TF), then it can be stabilised by placing a zero at $s=-a$, giving an OLTF $\dfrac{s+a}{(s^2-1)}$, and a CLTF $\dfrac{s+a}{s^2-1+s+a}=\dfrac{s+a}{s^2+s+(a-1)}$, which is stable for $a>1$.
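As a quick numerical check (the value of $a$ here is chosen purely for illustration): with $a=2$ the closed-loop polynomial is $s^2+s+1$, whose roots are $s=\frac{-1\pm j\sqrt{3}}{2}$ — both in the left half-plane, so the loop is stable. With $a=1$ the polynomial degenerates to $s^2+s=s(s+1)$, which has a pole at the origin, the boundary of the $a>1$ condition.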
# A question about the asymptotic series in perturbative expansion in QFT

Related post

I heard about the argument that the perturbative expansion in QFT must be asymptotic, such as http://ncatlab.org/nlab/show/perturbation+theory#DivergenceConvergence

> Roughly this can be understood as follows: since the perturbation is in the coupling constant about vanishing coupling, a non-zero radius of convergence would imply that the theory is finite also for negative coupling (where "things fly apart"), which will not happen in realistic theories.

My question is: why does a negative coupling, which makes "things fly apart", lead to an asymptotic series? Suppose we have an electron and a positron; if they fly apart, it corresponds to an electron and an electron, and vice versa.

This post imported from StackExchange Physics at 2017-10-16 12:26 (UTC), posted by SE-user user26143

- Maybe because the instanton part diverges for $g \to 0^-$, because of the exponential. – SE-user Trimok
- An asymptotic series happens when you expand a function around a singular point (that's why it has zero radius of convergence). The point of zero coupling is usually a branch point, which is singular. The reason for the appearance of the branch point is that for negative coupling the vacuum is unstable: the vacuum will create pairs of particles and antiparticles and "they will fly apart" forever. This leads to an imaginary part (i.e. a branch cut) in correlation functions. That's why (in your language) "things flying apart will lead to asymptotic series." – SE-user QuantumDot

I think both the link and the question refer to Dyson's heuristic argument on why the perturbative series in QED could not be convergent. It goes somewhat like this:

Suppose the series in $\alpha$ converges in some radius. Then it converges also for negative values of the coupling constant inside that radius. Consider now what kind of theory QED is with a negative $\alpha$. In that theory like charges attract and opposite-sign charges repel each other.

Now take the vacuum of the non-interacting theory. This state is unstable against formation of electron-positron pairs, because said pairs would repel indefinitely, leading to a lower energy state. You can make an even lower energy state by adding pairs that would separate into two clusters, of electrons on one side and positrons on the other. Therefore this theory does not possess a ground state, since the spectrum is unbounded from below. Hence there is no consistent QED for negative coupling constant. And so the perturbative series cannot converge.

As far as I know this argument is strictly heuristic, but shortly after it appeared (in the 1950s) Walter Thirring proved the divergence for a particular scattering process (I'm not in my office so I don't have the correct reference, but I'm positive the paper is in Thirring's selected works as well as explained in his autobiography).

Note that this question of convergence was prominent in a period when people tried to define QFT in terms of the perturbative expansion. The advent of non-perturbative effects (instantons, confinement, pick your favorite...) coupled with the renormalization group showed that this was the wrong approach for QFT. But note also that the argument of vacuum instability depends on the interaction.
It does not preclude the possibility of designing a QFT with a convergent perturbative expansion; it just shows that convergence is not to be expected in a general theory.

This post imported from StackExchange Physics at 2017-10-16 12:26 (UTC), posted by SE-user cesaruliana

answered Aug 9, 2014 by (215 points)

- If having no stable ground state is a problem for the perturbative expansion: the $\phi^3$ theory was extensively studied in Srednicki's book, e.g. sections 16-19. Does that mean the perturbative computations on $\phi^3$ make no sense? – SE-user user26143
- The absence of a ground state is not a problem of perturbation theory in this case, but rather the converse: the independent argument for no ground state implies a problem in the perturbative expansion. Since we know that the theory does not exist, we conclude that the series must not be convergent. The $\phi^3$ in Srednicki's is different. On page 71, section 9, he explicitly mentions that although there is no stable vacuum, this is invisible to perturbation theory. In any case he also says that he is only interested in giving an example, and not concerned with the overall consistency of the theory. – SE-user cesaruliana
- My point is that in Srednicki's the argument is: suppose we have a physical system roughly described by $\phi^3$ theory. We can still use perturbation theory to describe some scattering, for example, but since non-perturbatively there is no stable ground state, what we can infer is that $\phi^3$ does not describe the system completely; it may be adequate in a limited sense when restricted only to perturbation. The negative-coupling QED is different: Dyson's argument regards perturbative processes, and therefore already at the level of the perturbative expansion the theory is nonsense. – SE-user cesaruliana
- Excuse me, I still have a question. Does "formation of electron-positron pairs" mean virtual, off-shell particles? Since on-shell electrons have rest mass, the energy gained from the repulsion may not be larger than the rest mass. – SE-user user26143
- In the heuristic picture, yes; I was implying that by starting with the vacuum and considering virtual pairs, it would be better for them to separate than to annihilate, from energetic arguments. – SE-user cesaruliana
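A standard toy model (not from this thread, but it shows the same mechanism Dyson's argument points at): the Euler-type integral

$$E(g)=\int_0^\infty \frac{e^{-t}}{1+gt}\,dt$$

is perfectly finite for every $g\ge 0$, but expanding $\frac{1}{1+gt}=\sum_n(-gt)^n$ and integrating term by term with $\int_0^\infty t^n e^{-t}\,dt=n!$ gives

$$E(g)\sim\sum_{n=0}^{\infty}(-1)^n\,n!\,g^n,$$

whose radius of convergence is zero because of the $n!$ growth of the coefficients. For $g<0$ the integrand has a pole at $t=-1/g>0$ — the analogue of the instability at negative coupling — so $E(g)$ is ill-defined there, and the series can at best be asymptotic.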
# Math Help - Solve By Factoring Help Needed

1. ## Solve By Factoring Help Needed

Height of ball thrown = h, t = time in seconds:

h = -4.9t^2 + 38t + 1.75

What is the time of the ball when h = 50? I need to be able to solve it by factoring. I can do it if the numbers 4.9 and 38 have a common denominator, but can not do it when they are decimals. Any help appreciated.

2. Originally Posted by Ashley911
> height of ball thrown = h, t = time in seconds, h = -4.9t^2 + 38t + 1.75. What is the height of the ball after 3 seconds? I need to be able to solve it by factoring. I can do it if the numbers 4.9 and 38 have a common denominator but can not do it when they are decimals. Any help appreciated.

$h(t) = -4.9t^2 + 38t + 1.75$

To find the height at 3 seconds, evaluate $h(3)$ by putting in 3 wherever you find t in the original equation. No factoring required:

$h(3) = -4.9(3^2) + 38(3) + 1.75$

3. I have to solve it by factoring; it's a homework question.

4. I suppose you could plug in 50 for h, then subtract 50 from both sides. It should look something like

-4.9t^2 + 38t - 48.25 = 0

Then just use the quadratic formula to find the two roots of this equation. Using those two roots, you can find the factors of the polynomial. One of those roots will be your time (the positive one). The quadratic formula will find your factors; however, this is a bit redundant, since to find the factors you will need to find the roots, and one of the roots is your answer. Otherwise you could use the decomposition method shown here: How to Factor Second Degree Polynomials (Quadratic Equations) - wikiHow

5. Thanks, but I am still stuck. Here is my issue: I need to find two numbers that multiply to give 4.9*48.25 and add to give 38. I keep getting decimals and believe this should not be the case.

6. Originally Posted by Ashley911
> Thanks, but I am still stuck ...

You won't get neat factors because $b^2-4ac$ is not a perfect square.

7. $-4.9t^2 + 38t - 48.25 = 0$

$\frac{-b\pm\sqrt{b^2-4ac}}{2a}$

Plug in the values to get

$\frac{-38\pm\sqrt{38^2-4(-4.9)(-48.25)}}{2(-4.9)}$

Now solve for both values: + and -.

8. The OP could also complete the square - it's a kind of factoring:

$50 = -4.9t^2 + 38t + 1.75$
$4.9t^2-38t = -48.25$
$t^2 - \frac{38}{4.9}t = -\frac{48.25}{4.9}$
$\left(t- \frac{38}{9.8}\right)^2 - \left(\frac{38}{9.8}\right)^2 = -\frac{48.25}{4.9}$
$\left(t- \frac{38}{9.8}\right)^2 = \left(\frac{38}{9.8}\right)^2 -\frac{48.25}{4.9}$
$t - \frac{38}{9.8} = \pm \sqrt{ \left(\frac{38}{9.8} \right)^2 -\frac{48.25}{4.9}}$
$t = \frac{38}{9.8} \pm \sqrt{ \left(\frac{38}{9.8} \right)^2 -\frac{48.25}{4.9}}$
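Carrying the quadratic formula through numerically (rounded to two decimals; this worked solution is added for completeness):

$$t=\frac{-38\pm\sqrt{38^2-4(-4.9)(-48.25)}}{2(-4.9)}=\frac{-38\pm\sqrt{1444-945.7}}{-9.8}=\frac{-38\pm\sqrt{498.3}}{-9.8}\approx\frac{-38\pm 22.32}{-9.8},$$

so $t\approx 1.60$ s or $t\approx 6.16$ s — the ball passes the height $h=50$ once on the way up and once on the way down.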
# Archimedes' Principle: Equation with Solved Examples

Have you ever wondered why large and massive steel ships do not sink but a small coin does? The answer is in Archimedes' principle, which is closely related to buoyant forces.

Two main forces act on an object in a fluid (such as water or even air!): an upward buoyant force and a downward gravitational force. The competition between these two forces determines whether the object sinks or floats in the fluid.

This fundamental principle, discovered by the Greek mathematician Archimedes in the third century B.C., is stated as follows:

Any object wholly or partially submerged in a fluid is buoyed up by a force with a magnitude equal to the weight of the fluid displaced by the object.

When you lift a heavy object in a swimming pool, you are in fact experiencing Archimedes' principle, as the water provides partial support for you to overcome the weight of an object placed in it. Using Archimedes' principle we can also explain why hot air balloons ascend in the air.

When a body is placed into a fluid, an upward force is always exerted on it by the surrounding fluid, which partially or wholly offsets the downward weight force. This upward force, called the buoyant force, was explained with solved examples in another tutorial.

## Derivation of Archimedes' principle:

Method 1: Simple argument

Consider two bodies of the same size and shape, placed at the same depth in a fluid. One is filled with an unknown substance of mass $m$ and the other is filled with the same fluid that surrounds it, with mass $m'$. Because both objects are at the same depth, the buoyant forces acting on them are the same. These buoyant forces must be balanced with the objects' weights so that the objects remain at the same depth (i.e., maintain their equilibrium). For the object of mass $m$, Newton's second law of motion states $F_B=mg$, and similarly for the object of mass $m'$ we have $F_B=m'g$. Therefore,

$mg=F_B=m'g$

As you can see, instead of balancing the buoyant force with an unknown weight $mg$, we can balance it with a known weight $m'g$, which is the weight of the body of fluid whose volume equals the volume of the original object. This is Archimedes' principle.

Method 2:

The physical cause of the upward force exerted by fluids on objects in them is the pressure difference between the upper and lower sides of an object, due to their being at different depths of the fluid. Upon a surface at depth $h$ below the fluid level, the pressure is $P=P_0+\rho gh$, where $P_0$ is the pressure at the surface of the fluid and $\rho$ is the density of the fluid. As you can see, the lower side of an object sits at a greater depth, so by the definition of pressure, $P=\frac FA$, there is a larger force upon it.

Note that there are also horizontal forces exerted on an object in a fluid, but since opposite points of the surface at the same depth feel the same pressure, their net is zero. In fact, all horizontal forces exerted on an object of any arbitrary shape can be shown to cancel each other. All that remains are the vertical forces applied on the top and bottom sides of the submerged body, whose vector sum gives the upward buoyant force $F_b$.

Now, applying Newton's second law and balancing all forces in the vertical direction, we obtain the following formula for Archimedes' principle:

buoyant force = body's weight, or $F_b=W$

where the buoyant force is defined as the product of the fluid's density, the volume of fluid displaced by the object, and the gravitational acceleration $g=10\,{\rm m/s^2}$:

$F_b=\rho_{fluid}\times V_{dis}\times g$

Now is the time to solve some examples to understand Archimedes' principle.

Example: a block of wood floats in freshwater with two-fifths of its volume V submerged, and in oil with 0.75V submerged. Find the density of (a) the wood (b) the oil.

Solution: since the wood floats in water, its weight must be balanced with the buoyancy force.

(a) For a partially submerged body, the buoyancy force $F_b$ is defined as the density of the fluid $\rho_f$ times the displaced volume of fluid $V_{dis}$ times the gravitational acceleration $g$. Thus, using the Archimedes' principle equation, which equates weight and buoyancy force, we get

\begin{align*} W&=F_b \\ \\ \rho_{wood} \times V_{wood}\times g &=\rho_{water}\times V_{dis}\times g\\ \\ \rho_{wood}\times V_{wood} \times g&=(1)\left(\frac{2}{5}V_{wood}\right)g \\ \\ \rho_{wood}&=\frac25 \quad{\rm \frac{g}{cm^3}}\end{align*}

(b) Similarly, we can find the oil's density as above:

\begin{align*} \left(\rho Vg\right)_{wood}&=\left(\rho' Vg\right)_{oil}\\ \\(400)(V)g&=\rho_{oil}\, (0.75V)g\\ \\\Rightarrow \rho_{oil} &=\frac{400}{0.75}\\ \\&=\frac{1600}{3}\quad {\rm kg/m^3}\end{align*}

Example: an iron object of density $7.8\,{\rm g/cm^3}$ appears 200 N lighter in water than in air. (a) What is the volume of the object? (b) How much does it weigh in the air?

Solution: Since the body has become lighter in water, there must be an upward force acting on the object which cancels some of the downward weight force. In fluids, this force is called the floating or buoyancy force.

(a) According to Archimedes' law, $200\,{\rm N}$ is the buoyancy force acting on the body, which is obtained by the formula below:

\begin{align*} F_b &= \rho_{water} \times V_{object} \times g \\ \\ 200&=1000\times V_{object}\times 10 \\ \\ \Rightarrow V_{object}&=\frac{2}{100}\quad {\rm m^3}\end{align*}

(b) The body's weight in air, $W=\rho V g$, is calculated as

$W=(7800)\left(\frac{2}{100}\right)(10)=1560\,{\rm N}$

where $V$ is the actual volume of the body.

As you can see above, one of the main applications of Archimedes' principle is finding the density of an unknown object.

Example: a wooden rectangular slab with base area $5.7\,{\rm m^2}$, volume $V=0.6\,{\rm m^3}$ and density $600\,{\rm kg/m^3}$ is placed slowly in freshwater. By what depth $h$ is the slab submerged?

Solution: according to Archimedes' principle, the water will apply an upward buoyant force on the slab whose magnitude is equal to the weight of the water displaced by the slab. Thus, the buoyant force exerted on the slab is

$F_b=m_{water}g=\rho_{water}V_{dis}g$

where $V_{dis}$ is the displaced volume of the water, i.e. the part of the slab's volume which is underwater. Let $h$ be the submerged height of the slab, measured from its bottom side. Then $V_{dis}=Ah$, where $A$ is the base area of the slab. The weight of the slab is given by $W=\rho_{slab}V_{slab}g$.

Next, using the Archimedes' principle equation, $F_b=W$, we get

\begin{align*}\rho_{water}\times (Ah) \times g&=\rho_{slab}\times V_{slab}\times g\\ \\ \Rightarrow h&=\frac{\rho_{slab}V_{slab}}{\rho_{water}A}\\ \\ &=\frac{600\times 0.6}{1000\times 5.7} \\ \\&=0.0632\quad {\rm m}\end{align*}

## Criteria for floating or sinking:

Archimedes' principle gives us a simple rule of thumb to find out whether an object placed into a fluid sinks or floats. Write the net of the forces applied by a motionless fluid on a fully submerged body as the upward buoyant force minus the downward weight force, $F_{net}=F_b-W$. There are then three situations, depending on the sign of $F_{net}$:

(1) Sinking: when $F_{net}<0$, the upward buoyancy force is less than the downward weight force, and the object sinks:

$\underbrace{\rho_{fluid}V_{fluid}g}_{buoyancy}<\underbrace{\rho_{obj}V_{obj}g}_{weight}$

For example, stone is denser than water, so when it is placed in water, it sinks.

(2) Floating: when $F_{net}>0$ for the fully submerged body, the object rises to the surface and floats partially submerged; at the surface, the buoyant force on the submerged portion balances the weight:

$\underbrace{\rho_{fluid}V_{dis}g}_{buoyancy}=\underbrace{\rho_{obj}V_{obj}g}_{weight}$

Wood is less dense than water, so it floats.

(3) Neutral buoyancy: when $F_{net}=0$, the object remains motionless at the point where it is released in the fluid. This happens when the densities of the object and the fluid are equal. An example of neutral buoyancy is fish swimming in water. A fish has a swim bladder which can be filled with air; together with its flesh, this makes a composite body whose average density can be adjusted to balance the density of the water, so the fish neither sinks nor floats.

Question: What fraction of the volume of an iceberg is under the sea level?

Solution: according to Archimedes' principle, since the iceberg floats on the water, the upward buoyant force equals its weight. The magnitude of the buoyant force is the product of the iceberg's volume underwater, the water's density, and the gravitational acceleration. On the other hand, the weight is the product of the iceberg's actual volume, the iceberg's density, and the gravitational acceleration. Applying the floating condition gives the fraction of the iceberg's volume below sea level:

\begin{align*}F_b&=W\\ \rho_{SW}V_{in-water}g&=\rho_{IB}Vg\\ \\ \Rightarrow \frac{V_{in-water}}{V}&=\frac{\rho_{IB}}{\rho_{SW}}\\ \\ &=\frac{0.92\times 10^{3}}{1.025\times 10^3} \\ \\ &=0.9\end{align*}

where $\rho_{IB}$ and $\rho_{SW}$ are the densities of the iceberg and seawater, respectively. As you can see, about 90% of the volume of an iceberg is underwater.

Author: Ali Nemati
Page Created: 1/31/2021
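As a final numeric cross-check of the floating criteria (a minimal sketch using the densities from the examples above), the submerged fraction of a floating body is just the density ratio $\rho_{obj}/\rho_{fluid}$:

#include <stdio.h>

/* Submerged volume fraction of a floating body: rho_body / rho_fluid.
   A value >= 1 means the body cannot float (it sinks or is neutrally
   buoyant when exactly 1). */
static double submerged_fraction(double rho_body, double rho_fluid) {
    return rho_body / rho_fluid;
}

int main(void) {
    /* Wood in fresh water (first example): 400 / 1000 = 0.4 = 2/5. */
    printf("wood in water:  %.3f\n", submerged_fraction(400.0, 1000.0));
    /* Iceberg in sea water (last example): 920 / 1025 ~ 0.9. */
    printf("iceberg in sea: %.3f\n", submerged_fraction(920.0, 1025.0));
    return 0;
}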
# Revision history [back]

Did you define e = [0, 1, 2] earlier in the same session? Try using exp(...) instead of e^(...).

f = x^3 + exp(k*x) + sin(w*x)
# Trending arXiv

Note: this version is tailored to @Smerity - though you can run your own! Trending arXiv may eventually be extended to multiple users ...

### Papers

#### Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles

Mehdi Noroozi, Paolo Favaro

In this paper we study the problem of image representation learning without human annotation. By following the principles of self-supervision, we build a convolutional neural network (CNN) that can be trained to solve Jigsaw puzzles as a pretext task, which requires no manual labeling, and then later repurposed to solve object classification and detection. To maintain the compatibility across tasks we introduce the context-free network (CFN), a siamese-ennead CNN. The CFN takes image tiles as input and explicitly limits the receptive field (or context) of its early processing units to one tile at a time. We show that the CFN is a more compact version of AlexNet, but with the same semantic learning capabilities. By training the CFN to solve Jigsaw puzzles, we learn both a feature mapping of object parts as well as their correct spatial arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. After training our CFN features to solve jigsaw puzzles on the training set of the ILSVRC 2012 dataset, we transfer them via fine-tuning on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. The performance of the CFN features is 51.8% for detection and 68.6% for classification, which is the highest among features obtained via unsupervised learning, and closing the gap with features obtained via supervised learning (56.5% and 78.2% respectively). In object classification the CFN features achieve 38.1% on the ILSVRC 2012 validation set, after fine-tuning only the fully connected layers on the training set.

Captured tweets and retweets: 2

#### Recurrent Batch Normalization

Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville

We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.

Captured tweets and retweets: 5

#### Evolution of active categorical image classification via saccadic eye movement

Randal S. Olson, Jason H. Moore, Christoph Adami

Pattern recognition and classification is a central concern for modern information processing systems. In particular, one key challenge to image and video classification has been that the computational cost of image processing scales linearly with the number of pixels in the image or video. Here we present an intelligent machine (the "active categorical classifier," or ACC) that is inspired by the saccadic movements of the eye, and is capable of classifying images by selectively scanning only a portion of the image.
We harness evolutionary computation to optimize the ACC on the MNIST hand-written digit classification task, and provide a proof-of-concept that the ACC works on noisy multi-class data. We further analyze the ACC and demonstrate its ability to classify images after viewing only a fraction of the pixels, and provide insight on future research paths to further improve upon the ACC presented here.

Captured tweets and retweets: 2

#### Perceptual Losses for Real-Time Style Transfer and Super-Resolution

Justin Johnson, Alexandre Alahi, Li Fei-Fei

We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.

Captured tweets and retweets: 8

#### Pointing the Unknown Words

Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, Yoshua Bengio

The problem of rare and unknown words is an important issue that can potentially influence the performance of many NLP systems, including both the traditional count-based and the deep learning models. We propose a novel way to deal with the rare and unseen words for the neural network models using attention. Our model uses two softmax layers in order to predict the next word in conditional language models: one predicts the location of a word in the source sentence, and the other predicts a word in the shortlist vocabulary. At each time-step, the decision of which softmax layer to use is adaptively made by an MLP which is conditioned on the context. We motivate our work from psychological evidence that humans naturally have a tendency to point towards objects in the context or the environment when the name of an object is not known. We observe improvements on two tasks, neural machine translation on the Europarl English to French parallel corpora and text summarization on the Gigaword dataset using our proposed model.

Captured tweets and retweets: 1

#### Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning

Karthik Narasimhan, Adam Yala, Regina Barzilay

Most successful information extraction systems operate with access to a large collection of documents. In this work, we explore the task of acquiring and incorporating external evidence to improve extraction accuracy in domains where the amount of training data is scarce. This process entails issuing search queries, extraction from new sources and reconciliation of extracted values, which are repeated until sufficient evidence is collected. We approach the problem using a reinforcement learning framework where our model learns to select optimal actions based on contextual information.
We employ a deep Q-network, trained to optimize a reward function that reflects extraction accuracy while penalizing extra effort. Our experiments on two databases -- of shooting incidents, and food adulteration cases -- demonstrate that our system significantly outperforms traditional extractors and a competitive meta-classifier baseline. Captured tweets and retweets: 2 #### A guide to convolution arithmetic for deep learning Vincent Dumoulin, Francesco Visin We introduce a guide to help deep learning practitioners understand and manipulate convolutional neural network architectures. The guide clarifies the relationship between various properties (input shape, kernel shape, zero padding, strides and output shape) of convolutional, pooling and transposed convolutional layers, as well as the relationship between convolutional and transposed convolutional layers. Relationships are derived for various cases, and are illustrated in order to make them intuitive. Captured tweets and retweets: 4 #### Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet Talwalkar Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While current methods offer efficiencies by adaptively choosing new configurations to train, an alternative strategy is to adaptively allocate resources across the selected configurations. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinitely many armed bandit problem where a predefined resource like iterations, data samples, or features is allocated to randomly sampled configurations. We introduce Hyperband for this framework and analyze its theoretical properties, providing several desirable guarantees. Furthermore, we compare Hyperband with state-of-the-art methods on a suite of hyperparameter optimization problems. We observe that Hyperband provides speedups of five times to more than an order of magnitude over state-of-the-art Bayesian optimization algorithms on a variety of deep-learning and kernel-based learning problems. Captured tweets and retweets: 1 #### Incorporating Copying Mechanism in Sequence-to-Sequence Learning Jiatao Gu, Zhengdong Lu, Hang Li, Victor O. K. Li We address an important problem in sequence-to-sequence (Seq2Seq) learning referred to as copying, in which certain segments in the input sequence are selectively replicated in the output sequence. A similar phenomenon is observable in human language communication. For example, humans tend to repeat entity names or even long phrases in conversation. The challenge with regard to copying in Seq2Seq is that new machinery is needed to decide when to perform the operation. In this paper, we incorporate copying into neural network-based Seq2Seq learning and propose a new model called CopyNet with encoder-decoder structure. CopyNet can nicely integrate the regular way of word generation in the decoder with the new copying mechanism which can choose sub-sequences in the input sequence and put them at proper places in the output sequence. Our empirical study on both synthetic data sets and real world data sets demonstrates the efficacy of CopyNet. For example, CopyNet can outperform regular RNN-based model with remarkable margins on text summarization tasks. 
Captured tweets and retweets: 1

#### The Multiscale Laplacian Graph Kernel

Risi Kondor, Horace Pan

Many real world graphs, such as the graphs of molecules, exhibit structure at multiple different scales, but most existing kernels between graphs are either purely local or purely global in character. In contrast, by building a hierarchy of nested subgraphs, the Multiscale Laplacian Graph kernels (MLG kernels) that we define in this paper can account for structure at a range of different scales. At the heart of the MLG construction is another new graph kernel, called the Feature Space Laplacian Graph kernel (FLG kernel), which has the property that it can lift a base kernel defined on the vertices of two graphs to a kernel between the graphs. The MLG kernel applies such FLG kernels to subgraphs recursively. To make the MLG kernel computationally feasible, we also introduce a randomized projection procedure, similar to the Nyström method, but for RKHS operators.

Captured tweets and retweets: 2

#### Stochastic Variance Reduction for Nonconvex Optimization

Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, Alex Smola

We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we prove non-asymptotic rates of convergence (to stationary points) of SVRG for nonconvex optimization, and show that it is provably faster than SGD and gradient descent. We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to mini-batch variants of SVRG, showing (theoretical) linear speedup due to mini-batching in parallel settings.

Captured tweets and retweets: 1

#### Neurally-Guided Procedural Models: Amortized Inference for Procedural Graphics Programs using Neural Networks

Daniel Ritchie, Anna Thomas, Pat Hanrahan, Noah D. Goodman

Probabilistic inference algorithms such as Sequential Monte Carlo (SMC) provide powerful tools for constraining procedural models in computer graphics, but they require many samples to produce desirable results. In this paper, we show how to create procedural models which learn how to satisfy constraints. We augment procedural models with neural networks which control how the model makes random choices based on the output it has generated thus far. We call such models neurally-guided procedural models. As a pre-computation, we train these models to maximize the likelihood of example outputs generated via SMC. They are then used as efficient SMC importance samplers, generating high-quality results with very few samples. We evaluate our method on L-system-like models with image-based constraints. Given a desired quality threshold, neurally-guided models can generate satisfactory results up to 10x faster than unguided models.

Captured tweets and retweets: 9

#### Katyusha: The First Direct Acceleration of Stochastic Gradient Methods

Zeyuan Allen-Zhu

We introduce $\mathtt{Katyusha}$, the first direct, primal-only stochastic gradient method that has a provably accelerated convergence rate in convex optimization. In contrast, previous methods are based on dual coordinate descent which are more restrictive, or based on outer-inner loops which make them "blind" to the underlying stochastic nature of the optimization process.
$\mathtt{Katyusha}$ is the first algorithm that incorporates acceleration directly into stochastic gradient updates. Unlike previous results, $\mathtt{Katyusha}$ obtains an optimal convergence rate. It also supports proximal updates, non-Euclidean norm smoothness, non-uniform sampling, and mini-batch sampling. When applied to interesting classes of convex objectives, including smooth objectives (e.g., Lasso, Logistic Regression), strongly-convex objectives (e.g., SVM), and non-smooth objectives (e.g., L1SVM), $\mathtt{Katyusha}$ improves the best known convergence rates. The main ingredient behind our result is $\textit{Katyusha momentum}$, a novel "negative momentum on top of momentum" that can be incorporated into a variance-reduction based algorithm and speed it up. As a result, since variance reduction has been successfully applied to a fast growing list of practical problems, our paper suggests that in each of such cases, one had better hurry up and give Katyusha a hug.

Captured tweets and retweets: 1

#### Recurrent Dropout without Memory Loss

Stanislau Semeniuta, Aliaksei Severyn, Erhardt Barth

This paper presents a novel approach to recurrent neural network (RNN) regularization. Differently from the widely adopted dropout method, which is applied to \textit{forward} connections of feed-forward architectures or RNNs, we propose to drop neurons directly in \textit{recurrent} connections in a way that does not cause loss of long-term memory. Our approach is as easy to implement and apply as the regular feed-forward dropout and we demonstrate its effectiveness for Long Short-Term Memory network, the most popular type of RNN cells. Our experiments on NLP benchmarks show consistent improvements even when combined with conventional feed-forward dropout.

Captured tweets and retweets: 2

#### One-Shot Generalization in Deep Generative Models

Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, Daan Wierstra

Humans have an impressive ability to reason about new concepts and experiences from just a single example. In particular, humans have an ability for one-shot generalization: an ability to encounter a new concept, understand its structure, and then be able to generate compelling alternative variations of the concept. We develop machine learning systems with this important capacity by developing new deep generative models, models that combine the representational power of deep learning with the inferential power of Bayesian reasoning. We develop a class of sequential generative models that are built on the principles of feedback and attention. These two characteristics lead to generative models that are among the state-of-the-art in density estimation and image generation. We demonstrate the one-shot generalization ability of our models using three tasks: unconditional sampling, generating new exemplars of a given concept, and generating new exemplars of a family of concepts. In all cases our models are able to generate compelling and diverse samples---having seen new examples just once---providing an important class of general-purpose models for one-shot machine learning.

Captured tweets and retweets: 2

#### Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue

Ravi Garg, Vijay Kumar BG, Gustavo Carneiro, Ian Reid

A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data.
In this work we propose an unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation.

Captured tweets and retweets: 1

#### Combining the Best of Convolutional Layers and Recurrent Layers: A Hybrid Network for Semantic Segmentation

Zhicheng Yan, Hao Zhang, Yangqing Jia, Thomas Breuel, Yizhou Yu

State-of-the-art results of semantic segmentation are established by Fully Convolutional neural Networks (FCNs). FCNs rely on cascaded convolutional and pooling layers to gradually enlarge the receptive fields of neurons, resulting in an indirect way of modeling the distant contextual dependence. In this work, we advocate the use of spatially recurrent layers (i.e. ReNet layers) which directly capture global contexts and lead to improved feature representations. We demonstrate the effectiveness of ReNet layers by building a Naive deep ReNet (N-ReNet), which achieves competitive performance on Stanford Background dataset. Furthermore, we integrate ReNet layers with FCNs, and develop a novel Hybrid deep ReNet (H-ReNet). It enjoys a few remarkable properties, including full-image receptive fields, end-to-end training, and efficient network execution. On the PASCAL VOC 2012 benchmark, the H-ReNet improves the results of state-of-the-art approaches Piecewise, CRFasRNN and DeepParsing by 3.6%, 2.3% and 0.2%, respectively, and achieves the highest IoUs for 13 out of the 20 object classes.

Captured tweets and retweets: 1

#### Texture Networks: Feed-forward Synthesis of Textures and Stylized Images

Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, Victor Lempitsky

Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their method requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.
Captured tweets and retweets: 6

#### Dynamic Memory Networks for Visual and Textual Question Answering

Caiming Xiong, Stephen Merity, Richard Socher

Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the bAbI-10k text question-answering dataset without supporting fact supervision.

Captured tweets and retweets: 9

#### Neural Architectures for Named Entity Recognition

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer

State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures---one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.

Captured tweets and retweets: 1
## Linear Algebra and Its Applications, Review Exercise 2.24

Review exercise 2.24. Suppose that $A$ is a 3 by 5 matrix with the elementary vectors $e_1$, $e_2$, and $e_3$ in its column space. Does $A$ have a left inverse? A right inverse?

Answer: Since $e_1$, $e_2$, and $e_3$ are in the column space, the dimension of the column space must be 3 (because $e_1$, $e_2$, and $e_3$ are linearly independent), and thus the rank of $A$ is $r = 3 = m$, the number of rows of $A$. Since the rank of $A$ equals the number of rows, $A$ has a 5 by 3 right inverse $C$. However, it does not have a left inverse $B$, since the rank $r = 3$ is less than the number of columns $n = 5$.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang. If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang's introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang's other books.
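As a quick numerical check (not part of the book's solution), here is a sketch in R. The matrix $A$ below is hypothetical, chosen so its first three columns are $e_1$, $e_2$, $e_3$ and its rank is 3; the formula $C = A^T (A A^T)^{-1}$ gives one right inverse whenever $A$ has full row rank.

```r
# A hypothetical 3x5 matrix of rank 3 (its first three columns are e1, e2, e3)
A <- rbind(c(1, 0, 0, 2, 1),
           c(0, 1, 0, 1, 3),
           c(0, 0, 1, 4, 2))

# Full row rank makes A %*% t(A) invertible, giving a 5x3 right inverse
C <- t(A) %*% solve(A %*% t(A))
round(A %*% C, 10)   # the 3x3 identity: A C = I, while no B with B A = I exists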
The default linters correspond to the style guide at https://style.tidyverse.org/; however, it is possible to override any or all of them using the linters parameter.

## Usage

lint(pkg = ".", cache = TRUE, ...)

## Arguments

pkg The package to use; can be a file path to the package or a package object. See as.package() for more information.

cache Store the lint results so repeated lints of the same content use the previous results. Consult the lintr package to learn more about its caching behaviour.

... Additional arguments passed to lintr::lint_package().

## See also

lintr::lint_package(), lintr::lint()
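A hedged usage sketch of the lint() documented above, assuming a recent lintr where linters_with_defaults() is the way to override individual defaults (the 120-character line length is just an illustrative override, not a recommendation):

```r
# Lint the package in the current directory, overriding one default linter;
# extra arguments such as `linters` are forwarded to lintr::lint_package().
lint(
  pkg = ".",
  cache = TRUE,
  linters = lintr::linters_with_defaults(
    line_length_linter = lintr::line_length_linter(120)  # example override
  )
)
```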
### Home > CC2MN > Chapter 5 > Lesson 5.1.2 > Problem 5-21

5-21. Simplify each expression.

1. $-\frac{4}{5}+\frac{3}{10}$

   Refer to problem 5-13 for help. Find a common denominator.

2. $2\frac{5}{8}-1\frac{1}{3}$

   You can either separate the whole numbers from the fractions and find the differences separately, or convert both into fractions greater than one first. The latter is worked below. For each fraction, multiply the whole number by the denominator, then add the numerator to find the new numerator. Find a common denominator and multiply each fraction by the necessary Giant Ones. Simplify by subtracting the numerators. Convert back to a mixed number.
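Carrying out those steps (worked here as a check; the Giant Ones are the $\frac{3}{3}$ and $\frac{8}{8}$ factors):

1. $-\frac{4}{5}+\frac{3}{10} = -\frac{8}{10}+\frac{3}{10} = -\frac{5}{10} = -\frac{1}{2}$

2. $2\frac{5}{8} = \frac{21}{8}$ and $1\frac{1}{3} = \frac{4}{3}$, so $\frac{21}{8}\cdot\frac{3}{3} - \frac{4}{3}\cdot\frac{8}{8} = \frac{63}{24}-\frac{32}{24} = \frac{31}{24} = 1\frac{7}{24}$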
# Factorial ANOVA by Hand

A factorial ANOVA examines the effects of two or more categorical independent variables (factors) on one dependent variable concurrently. Each factor has two or more levels, and designs are named by their dimensions: crossing a two-level factor with a five-level factor gives a 2 × 5 factorial. The term "way" counts the factors, so a two-way ANOVA has two independent variables, a three-way has three, and so on. In a crossed (fully factorial) design, every level of each factor appears in combination with every level of every other factor. In a nested (hierarchical) design, by contrast, the levels of one factor appear only within particular levels of another, and the interaction between the nested factor and a non-nested factor cannot be estimated because their levels are not completely crossed.

The appeal of the factorial layout is efficiency and insight. Studying two factors separately would require two sets of experiments; a single factorial experiment estimates both main effects and, crucially, their interaction, that is, whether the effect of one factor changes across the levels of the other. A typical example: researchers testing a new anti-anxiety medication might cross treatment (medication vs. placebo) with gender, giving a 2 × 2 design in which the drug's effect can be compared between men and women. The same logic appears in clinical trials, where a 2 × 2 factorial design tests two interventions in one sample; for example, aspirin versus placebo and clonidine versus placebo in a single randomized trial (the POISE-2 trial did this). Factorial screening experiments are also an important tool for optimizing behavioral interventions within the MOST framework: once the screening data have been analyzed via ANOVA, the results form the basis for deciding which components and component settings belong in the optimized intervention.
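As a concrete starting point, here is a minimal R sketch of a 2 × 2 factorial ANOVA on simulated data. The factor names, effect sizes, and cell size are invented for illustration; only the aov() workflow itself is the point.

```r
# Simulated 2x2 factorial: drug (placebo/medication) crossed with gender.
set.seed(1)
d <- expand.grid(drug   = c("placebo", "medication"),
                 gender = c("female", "male"))
d <- d[rep(seq_len(nrow(d)), each = 10), ]           # 10 observations per cell

# Build in a drug main effect plus a drug-by-gender interaction
d$anxiety <- 50 - 5 * (d$drug == "medication") -
  3 * (d$drug == "medication") * (d$gender == "male") +
  rnorm(nrow(d), sd = 4)

fit <- aov(anxiety ~ drug * gender, data = d)        # main effects + interaction
summary(fit)                                         # F tests
model.tables(fit, type = "means")                    # cell and marginal means
```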
## Main effects, interactions, and simple effects

A main effect compares the marginal (level) means of one factor, collapsing over the others. The interaction asks whether the differences between means at one level of a factor are the same as the differences at its other levels. When an interaction is present in a two-way ANOVA, we typically choose to ignore the main effects and investigate the simple main effects (the effect of one factor at each separate level of the other) when making pairwise comparisons. As a general rule, if both main effects and interactions occur, interactions should be interpreted first. For instance, with gender (male/female) as one factor and infidelity type (emotional/sexual) as the other, a significant interaction means the effect of infidelity type has to be described separately for each gender.

## The ANOVA table

The computation partitions the total variability in the response, exactly as in one-way ANOVA, just into more pieces. For a balanced two-way design (equal cell sizes) with factor A at a levels, factor B at b levels, and n observations per cell:

SS_total = SS_A + SS_B + SS_AB + SS_within

with degrees of freedom a − 1, b − 1, (a − 1)(b − 1), and ab(n − 1) respectively. Each mean square is MS = SS/df, and each F statistic divides the effect's mean square by MS_within. The null hypothesis for each main effect is that the factor's level means are equal; for the interaction, that the pattern of differences among cell means is the same at every level of the other factor.

Balance matters. SAS's ANOVA procedure is designed to handle balanced data, whereas its GLM procedure can analyze both balanced and unbalanced data; with unbalanced data, the choice among Type I, Type II, and Type III sums of squares is also why R's defaults and SPSS's output can disagree unless the settings are matched deliberately. One useful shortcut when raw scores are unavailable: a factorial ANOVA can be computed from only the mean, standard deviation, and size of each cell of the design, rather than from the individual scores. Calculating the sums of squares from raw data involves long and tedious computations by hand; if anyone needs convincing of the advantages of statistical software, doing a factorial ANOVA by hand will do it. Still, it is worth working through once, as in the sketch below.
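The sketch assumes the balanced simulated data frame `d` and the fitted object `fit` from the first example (any balanced two-way data frame works the same way) and checks the hand-computed sums of squares against aov().

```r
grand <- mean(d$anxiety)                       # grand mean
am <- tapply(d$anxiety, d$drug,   mean)        # marginal means of A (drug)
bm <- tapply(d$anxiety, d$gender, mean)        # marginal means of B (gender)
cm <- tapply(d$anxiety, list(d$drug, d$gender), mean)  # cell means

nA <- table(d$drug); nB <- table(d$gender); nC <- table(d$drug, d$gender)

SS_total  <- sum((d$anxiety - grand)^2)
SS_A      <- sum(nA * (am - grand)^2)
SS_B      <- sum(nB * (bm - grand)^2)
SS_cells  <- sum(nC * (cm - grand)^2)
SS_AB     <- SS_cells - SS_A - SS_B            # interaction, by subtraction
SS_within <- SS_total - SS_cells

c(A = SS_A, B = SS_B, AB = SS_AB, within = SS_within)
summary(fit)   # the "Sum Sq" column should match (balanced design only)
```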
## Two-level factorials

Designs in which every factor has exactly two levels (2², 2³, and so on, the "two-series" factorials) deserve special mention because the hand computation is so clean. The treatment combinations are written in a standard order, starting with all factors at their low levels, and the ANOVA table of a two-series factorial is set up as usual, with one degree of freedom for each main effect and interaction. To compute the main effect of a factor A, subtract the average response of all experimental runs for which A was at its low (or first) level from the average response of all runs for which A was at its high (or second) level; interaction effects are computed the same way from the products of the coded ±1 columns. Replication is particularly important with small designs, such as a 2² factorial, where there are very few runs at each setting of a factor. When a full factorial would require too many runs, fractional factorial designs economize, but a fraction will always involve aliasing of effects: some effects become indistinguishable from others. Taguchi orthogonal array (OA) designs are a type of general fractional factorial design. A worked sketch of the effect calculation follows.
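Here is the hand computation for a replicated 2² design with coded ±1 levels; the responses are invented. Each effect is simply a difference of two averages.

```r
A <- rep(c(-1, 1, -1, 1), each = 2)      # runs in standard order, 2 replicates
B <- rep(c(-1, -1, 1, 1), each = 2)
y <- c(20, 22, 30, 31, 24, 25, 40, 42)   # hypothetical responses

# effect = mean response at the high level minus mean response at the low level
effect <- function(contrast) mean(y[contrast == 1]) - mean(y[contrast == -1])
c(A = effect(A), B = effect(B), AB = effect(A * B))  # main effects + interaction
```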
## Assumptions, effect sizes, and follow-up tests

The usual ANOVA assumptions carry over to the factorial case: observations and groups are independent of each other, residuals are approximately normal, and variances are homogeneous across cells. Levene's test (which assumes only a continuous distribution) is a standard check on the variance assumption, and a Box–Cox analysis returning λ ≈ 1 indicates that no transformation is recommended.

A significant omnibus F says that the cell means differ somewhere; ANOVA results do not identify which particular differences between pairs of means are significant. That is the job of post hoc comparisons (for example, using LSD to describe the pattern of an interaction) and of simple-effects tests that break an interaction down to see what is driving it. When trying to identify where the differences between groups lie, there are different ways of adjusting the probability estimates to reflect the fact that multiple comparisons are being made. For designs with random factors, begin by writing the expected mean squares (for instance, for an all-random model) to determine the correct error term for each F test.

Effect sizes complement the F tests. Eta squared is the proportion of variance associated with one or more main effects, errors, or interactions in the ANOVA; bias-corrected standardized effect sizes have been called g since the beginning of the 1980s. A sketch of the eta-squared computation follows.
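Eta squared is simple to extract from a fitted aov() object. This sketch reuses `fit` from the first example; note that with unbalanced data the sums of squares (and therefore eta squared) depend on the SS type, so interpret with care.

```r
tab <- summary(fit)[[1]]                  # the ANOVA table as a data frame
ss  <- tab[["Sum Sq"]]
eta_sq <- ss / sum(ss)                    # each effect's share of SS_total
setNames(round(eta_sq, 3), rownames(tab)) # includes the Residuals share
```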
## Repeated measures and mixed designs

Not every design assigns different people to every cell. In a within-subjects (repeated-measures) ANOVA, the same participants are measured under every condition or at every time point: a sample of n = 5 students followed over three years from fourth grade to sixth grade, say, or one group of people each taking three standardized tests (math, reading, and science). ANOVA must be modified to take the resulting correlated errors into account; the variance attributable to subjects is partitioned out before the treatment effect is tested, much as in a randomized block design. The mixed, within-between subjects ANOVA (also called a split-plot ANOVA) combines both kinds of factor and is a common test of means in the behavioral sciences; in the split plot, the experimental unit is "split" into sub-units, and another treatment is applied to those sub-units.

Two near relatives round out the family. ANOVA can be extended to include one or more continuous variables that predict the outcome; that extension is ANCOVA. MANOVA differs on the response side: where ANOVA has a single dependent variable, MANOVA has two or more, and the factorial MANOVA adds homogeneity of the dispersion matrices to the usual normality and homogeneity-of-variance assumptions.

Software handles the bookkeeping. SPSS, SAS, and Minitab provide dedicated repeated-measures GLM procedures (SPSS's can also include covariates), and in R there are at least two ways of fitting a repeated-measures ANOVA, none entirely trivial, each with its own pitfalls. The classical route declares subject error strata inside aov(), as in the sketch below.
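A sketch of the classical aov() route for a one-way repeated-measures design, with five invented students each taking three tests and subject declared as an error stratum:

```r
scores <- data.frame(
  subject = factor(rep(1:5, times = 3)),
  test    = factor(rep(c("math", "reading", "science"), each = 5)),
  score   = c(70, 65, 80, 75, 60,    # math
              72, 66, 84, 78, 63,    # reading
              75, 70, 85, 80, 66)    # science
)

# Error(subject/test) removes between-subject variance before testing `test`
fit_rm <- aov(score ~ test + Error(subject/test), data = scores)
summary(fit_rm)
```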
## Beyond two factors

Higher-order factorials (three-way, four-way, and beyond) run on the same machinery, but the interactions multiply. In short, a three-way interaction means that a two-way interaction varies across the levels of a third variable; a b × c interaction that differs across the levels of factor a is the canonical case, and unpacking one usually means testing the two-way interaction separately at each level of the third factor. Factorial treatment structures also combine with other design elements: a factorial can be laid over a randomized complete block design, or augmented with a control group (a 2 × 4 factorial plus control arrangement of treatments). Factorial designs can clearly become cumbersome, with too many groups even with only a few factors; replication, fractionation, and contrasts (which ask specific questions, as opposed to the general ANOVA null versus its alternative) keep them manageable.

The through-line is the same one that starts with the two-group t-test: compare means by partitioning variance, between and within groups in the one-way case; among main effects, interactions, and error in the factorial case; and across subject strata in repeated measures. The arithmetic is doable by hand for small balanced designs, and better delegated to software (R, SPSS, SAS, Minitab, or Python's statsmodels) for everything else.
# How do you find the vertex and the intercepts for y = x^2-6x+8?

The vertex is at $\left(3, -1\right)$; the y-intercept is at $\left(0, 8\right)$ and the x-intercepts are at $\left(2, 0\right)$ and $\left(4, 0\right)$.

We know the equation of a parabola in vertex form is $y = a{\left(x - h\right)}^{2} + k$, where the vertex is at $\left(h, k\right)$. Here $y = {x}^{2} - 6x + 8 = {\left(x - 3\right)}^{2} - 9 + 8 = {\left(x - 3\right)}^{2} - 1$, so the vertex is at $\left(3, -1\right)$.

We find the y-intercept by putting $x = 0$ in the equation, giving $y = 0 - 0 + 8 = 8$, and the x-intercepts by putting $y = 0$: ${x}^{2} - 6x + 8 = 0$, or $\left(x - 4\right)\left(x - 2\right) = 0$, giving $x = 4$ and $x = 2$.

graph{x^2-6x+8 [-20, 20, -10, 10]} [Ans]
# Talk:List of operations

## On sweating the small stuff

• By "complement" I think the author really means "negation"; either way, the more common notation is ¬ rather than the prime symbol.

I don't know what you mean by "negation" - usually I think of negation as making something disappear or "to make negligible". It doesn't matter what's more common, everything should be on this page. You should add ¬ yourself. Fresheneesz 10:49, 11 March 2006 (UTC)

• JA: Negation is a logical operation, and some writers (Quine, et al.) will further insist on a distinction between "negation" and "denial" that is parallel to the one that they make between the "conditional" (->) and the implication (=>). Complement is a set-theoretic operation. Jon Awbrey 03:00, 12 March 2006 (UTC)

• In binary logic, + is used to indicate logical disjunction? Not usually. + would more likely mean exclusive or (which is the same as addition mod 2)

No, I've seen exclusive or written with an \oplus. In my class on Logical Design, we *never* used the conjunction and disjunction operators, only plus and times. Fresheneesz 10:49, 11 March 2006 (UTC)

• In general the selection seems kind of quirky and haphazard. What's the exact rationale for the article? Could be useful but needs better organization, better coverage, and more accurate reflection of the notation and terminology that are actually used. --Trovatore 06:20, 11 March 2006 (UTC)

I started this page specifically because I was looking for the compose operator, which I couldn't freaking find anywhere. Otherwise, it simply seems like a logical list to have. So an interested individual can find an operator that they don't know the name for. Fresheneesz 10:49, 11 March 2006 (UTC)

• JA: The prime is used for set complement, and also for negation. Jon Awbrey 07:10, 11 March 2006 (UTC)

• I said, the more common notation. I have no doubt you can attest both these usages somewhere. They're not usual. --Trovatore 07:25, 11 March 2006 (UTC)

• JA: It is common in CSE contexts to see "+" used for inclusive or, and this goes way back to Schroeder I think, which leads some of them to use a circled "+", like the direct sum symbol, for exclusive or, but I think we should discourage this, as it plays havoc with communication. Jon Awbrey 07:10, 11 March 2006 (UTC)

• JA: Just passing on the info. Common is relative to how widely one reads. If TV producers pandered only to the middle of the distribution, there'd be nothing on but "reality shows", oops, bad example. Jon Awbrey 07:40, 11 March 2006 (UTC)

Do you have to write JA: before your comments? That's confusing. I would advocate discouraging the use of certain operators, but we still must have them on the page. Fresheneesz 10:49, 11 March 2006 (UTC)

• JA: I'm a pragmatist about semiotics, or sign usage in general. I observe the usages that people use and report what I observe them using. I observe the historical changes in usage and report on those phenomena for what they're worth to whom they're worth. It is only when I observe people becoming a danger to themselves and others, intellectually speaking, that I make recommendations based on experience with what is likely to happen, and I've seen lots of Road Runner cartoons, so I'm familiar with the behavior of many different species of semiotic critters and the empirically probable sequels to all due signs of Things to Come. I can of course best succeed in reforming my own practices, and so I will continue to do that.
I use "(" in ")" in pairs and for much the same reason use "JA" and Jon Awbrey 16:10, 11 March 2006 (UTC) ## Duplication I think this page is to some extent duplicative of Table of mathematical symbols. --Trovatore 23:46, 12 March 2006 (UTC) • JA: There is obviously a lot of overlap, and also with the material in Wikipedia Formula Help and Wikipedia:Mathematical symbols. Still, a good many of the symbols in the HTML version don't work on many browsers, and I remember how much time I wasted trying to tie a bowtie as ${\displaystyle \triangleright \!\triangleleft }$ before someone told me there was already a ${\displaystyle \bowtie }$. Also, there could eventually be some purpose to the article in elucidating the semantics, not just the iconography of the various symbols for operators. Jon Awbrey 04:04, 13 March 2006 (UTC)
## Classification of the Agricultural Crops Using Landsat-8 NDVI Parameters by Support Vector Machine

#### Emrullah ACAR [1], Müslime ALTUN [2]

With the data obtained from developing remote-sensing technologies, machine learning techniques are widely employed to make classification more effective and precise. In this study, the support vector machine (SVM), one of the machine learning approaches, was applied to data obtained from satellite imagery with the aim of classifying agricultural crops. Lentil and wheat were chosen as the target crops, and Landsat-8 was used as the source of satellite imagery. To determine the vegetation indices, a Landsat-8 image from the crops' development period (May 6, 2018) was used, and 98 sample points were collected by GPS over the pilot area. The positions of these points were then transferred onto the Landsat-8 image with the QGIS program, and NDVI values were calculated at the corresponding pixels of the Landsat-8 NDVI image. The resulting NDVI values were used as inputs to the SVM. The overall accuracy of the system for crop classification over the pilot area was 83.3%.

Keywords: Remote Sensing, SVM, Landsat-8, NDVI, Crop Classification
[1], [2] Batman University, Turkey. Published in Balkan Journal of Electrical and Computer Engineering, 9(1), 78–82, January 2021. DOI: 10.17694/bajece.863147
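For readers who want the flavor of the pipeline, here is a hedged R sketch. The NDVI formula uses Landsat-8's band 5 (near-infrared) and band 4 (red); the reflectance values, labels, and SVM settings below are invented placeholders, not the authors' actual data or configuration, and the e1071 package supplies the SVM.

```r
library(e1071)

# NDVI from near-infrared and red reflectance: (NIR - Red) / (NIR + Red)
ndvi <- function(nir, red) (nir - red) / (nir + red)

# Hypothetical ground-truth table: band reflectances at sampled points
samples <- data.frame(
  nir  = c(0.41, 0.39, 0.44, 0.30, 0.28, 0.33),   # Landsat-8 band 5 (invented)
  red  = c(0.10, 0.10, 0.09, 0.16, 0.16, 0.15),   # Landsat-8 band 4 (invented)
  crop = factor(c("wheat", "wheat", "wheat", "lentil", "lentil", "lentil"))
)
samples$ndvi <- ndvi(samples$nir, samples$red)

model <- svm(crop ~ ndvi, data = samples, kernel = "radial")
predict(model, data.frame(ndvi = ndvi(nir = 0.42, red = 0.11)))  # new pixel
```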
# Ergodicity in a Monte Carlo simulation

Q1: What are ergodicity and ergodicity breaking in a Monte Carlo simulation of a statistical physics problem?

Q2: How does one ensure that ergodicity is maintained?

• Ergodicity describes a system that has filled all of its degrees of freedom equally. For example, if you use the MC method to simulate gas molecules with constant initial velocities, the system will be ergodically distributed when the velocities follow the Maxwell-Boltzmann distribution. This is my understanding; I'm sure there is a better definition involving entropy. Breaking this condition sounds like it implies a decrease in entropy. – boyfarrell Apr 2 '13 at 13:22

To complete the answers already given, ergodicity in MC simulations is a practical problem rather than a conceptual one, contrary to the ergodicity property in physics. Normally, if you sample your phase space with a Markov chain, it is possible to show that, whatever the initial trial distribution, your Markov chain will eventually sample the Gibbs distribution associated with the statistical ensemble you are interested in. In practice, however, your system can be trapped in local minima of the potential energy surface and be ergodic only within those minima (akin to what would really happen in a supercooled liquid). This is an obvious case of ergodicity breaking in the sense suggested by sebastian above. It can be tested quite easily, as your simulations will give different averages depending on the initial condition, for instance. There are many algorithms, involving multicanonical sampling or parallel tempering to name just two, that can get rid of this issue.

In the context of a Monte Carlo (MC) simulation, ergodicity means that the algorithm you use is designed in such a way that all points in the corresponding phase space (the one that contains the trajectory of your statistical ensemble) would be visited if the algorithm ran for an infinite amount of time. There is no way to prove that an algorithm is ergodic, as we simply cannot let a simulation run infinitely. In the literature you can find the concepts of balance and detailed balance. From a practitioner's point of view, if an algorithm fulfils detailed balance, it is usually safe to assume that the system behaves ergodically. In general, you cannot show that a system is ergodic. In statistical physics, ergodicity is assumed for systems in thermal equilibrium, but this assumption cannot be proven (to my (limited) knowledge). A good online resource for learning MC is this site with lecture notes. A good book that covers everything from the beginning is Tuckerman: Statistical Mechanics. A book that is more detailed on MC but requires solid knowledge in statistical physics is Frenkel & Smit: Understanding Molecular Simulation.

• I believe this answer is flawed. Detailed balance and ergodicity are two different and largely unrelated concepts. For instance, the identity transfer matrix for Markov Chain Monte Carlo satisfies detailed balance, but it is as non-ergodic as possible — the system just stays in one state and never evolves. – 4ae1e1 Mar 18 '15 at 3:52

• I retract my comment "detailed balance and ergodicity are two different and largely unrelated concepts." Instead I should say that detailed balance is not enough to guarantee ergodicity. The algorithm also needs to be able to traverse the phase space. – 4ae1e1 Mar 18 '15 at 4:45
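As a concrete illustration of the practical test mentioned above (different averages from different initial conditions), here is a small Metropolis sketch in a double-well potential. The potential, temperatures, and step sizes are all illustrative choices, not taken from any of the answers; at low temperature the chain stays trapped in one well, so the two runs disagree.

```python
# Sketch of practical ergodicity breaking: Metropolis sampling of
# U(x) = (x^2 - 1)^2 started from each of the two wells.
import math
import random

def metropolis_mean(x0, beta, steps=100_000, step_size=0.2, seed=1):
    """Average position from a Metropolis chain in U(x) = (x^2 - 1)^2."""
    rng = random.Random(seed)
    U = lambda x: (x * x - 1.0) ** 2
    x, total = x0, 0.0
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        du = U(x_new) - U(x)
        # Metropolis rule: accept downhill moves always, uphill with exp(-beta*du).
        if du <= 0 or rng.random() < math.exp(-beta * du):
            x = x_new
        total += x
    return total / steps

for beta in (1.0, 50.0):  # high temperature vs. low temperature
    print(f"beta={beta}: from -1 -> {metropolis_mean(-1.0, beta):+.3f}, "
          f"from +1 -> {metropolis_mean(+1.0, beta):+.3f}")
```

At beta = 1 both runs average near zero (the chain hops between wells); at beta = 50 each run stays near its starting well, the signature of practical ergodicity breaking discussed in the first answer.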
## Geometry: Common Core (15th Edition) Find the coordinates of each city. Brooklyn: (8,2); Charleston: (4,5) Use the distance formula: $d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$ $d=\sqrt{(4-8)^2+(5-2)^2}$ $d=\sqrt{(-4)^2+3^2}$ $d=\sqrt{16+9}$ $d=\sqrt{25}$ $d=5$
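For a quick sanity check, the same computation can be done in a couple of lines of Python (`math.dist` is available from Python 3.8 onward):

```python
# Distance between the two cities' coordinates, matching the worked solution.
import math

brooklyn, charleston = (8, 2), (4, 5)
print(math.dist(brooklyn, charleston))  # 5.0
```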
# Derivation of asymptotic solution of $\tan(x) = x$

An equation that seems to come up everywhere is the transcendental $\tan(x) = x$. Normally when it comes up, you content yourself with a numerical solution, usually using Newton's method. However, browsing today I found an asymptotic formula for the positive roots $x$:

$x = q - q^{-1} - \frac23 q^{-3} + \cdots$

with $q = (n + 1/2) \pi$ for positive integers $n$. For instance here: http://mathworld.wolfram.com/TancFunction.html, and here: http://mathforum.org/kb/message.jspa?messageID=7014308 found from a comment here: Solution of tanx = x?. The Mathworld article says that you can derive this formula using series reversion; however, I'm having difficulty figuring out exactly how to do it. Any help with a derivation would be much appreciated.

-

I doubt there's a nice formula for the coefficients; series reversion here basically says that "an expansion for the inverse exists and one can back-solve for the coefficients if one needs to calculate a given number of them." Do you want to see the series reversion method in practice for this particular case? – anon Feb 17 '12 at 5:08

Yes, that would be good. I can get close, but if you could show the setup that would be great. – Kyle Feb 17 '12 at 5:29

How does this "come up everywhere"? – Michael Hardy Feb 17 '12 at 16:54

Take a look at the pdf in the second link. If we were to nitpick I could change "come up everywhere" to "come up more than one would naively expect". – Kyle Feb 17 '12 at 21:28

You may be interested in N. G. de Bruijn's book Asymptotic Methods in Analysis, which treats the equation $\cot x = x$. What follows is essentially a minor modification of that section in the book. The central tool we will use is the Lagrange inversion formula. The formula given in de Bruijn differs slightly from the one given on the wiki page, so I'll reproduce it here.

Lagrange Inversion Formula. Let the function $f(z)$ be analytic in some neighborhood of the point $z=0$ of the complex plane. Assuming that $f(0) \neq 0$, we consider the equation $$w = z/f(z),$$ where $z$ is the unknown. Then there exist positive numbers $a$ and $b$ such that for $|w| < a$ the equation has just one solution in the domain $|z| < b$, and this solution is an analytic function of $w$: $$z = \sum_{k=1}^{\infty} c_k w^k \hspace{1cm} (|w| < a),$$ where the coefficients $c_k$ are given by $$c_k = \frac{1}{k!} \left\{\left(\frac{d}{dz}\right)^{k-1} (f(z))^k\right\}_{z=0}.$$

Essentially what this says is that we can solve the equation $w = z/f(z)$ for $z$ as a power series in $w$ when $|w|$ and $|z|$ are small enough.

Okay, on to the problem. We wish to solve the equation $$\tan x = x.$$ As with many asymptotics problems, we need a foothold to get ourselves going. Looking at the graphs of $\tan x$ and $x$, we see that in each interval $\left(\pi n - \frac{\pi}{2}, \pi n + \frac{\pi}{2}\right)$ there is exactly one solution $x_n$ (i.e. $\tan x_n = x_n$), and, when $n$ is large, $x_n$ is approximately $\pi n + \frac{\pi}{2}$. But how do we show this second part?
Since $\tan$ is $\pi$-periodic we have

$$\tan\left(\pi n + \frac{\pi}{2} - x_n\right) = \tan\left(\frac{\pi}{2} - x_n\right) = \frac{1}{\tan x_n} = \frac{1}{x_n} \to 0$$

as $n \to \infty$, where the second-to-last equality follows from the identities

$$\sin\left(\frac{\pi}{2} - \theta\right) = \cos \theta, \qquad \cos\left(\frac{\pi}{2} - \theta\right) = \sin \theta.$$

Since $-\frac{\pi}{2} < \pi n + \frac{\pi}{2} - x_n < \frac{\pi}{2}$ and since $\tan$ is continuous in this interval, we have $\pi n + \frac{\pi}{2} - x_n \to 0$ as $n \to \infty$. Thus we have shown that $x_n$ is approximately $\pi n + \frac{\pi}{2}$ for large $n$.

Now we begin the process of putting the equation $\tan x = x$ into the form required by the Lagrange inversion formula. Set

$$z = \pi n + \frac{\pi}{2} - x \quad \text{and} \quad w = \left(\pi n + \frac{\pi}{2}\right)^{-1}.$$

Note that we do this because when $|w|$ is small (i.e. when $n$ is large) we may take $|z|$ small enough that there will be only one $x$ (in the sense that $x = \pi n + \frac{\pi}{2} - z$) which satisfies $\tan x = x$. Plugging $x = w^{-1} - z$ into the equation $\tan x = x$ yields, after some simplifications along the lines of those already discussed,

$$\cot z = w^{-1} - z,$$

which rearranges to

$$w = \frac{\sin z}{\cos z + z\sin z} = z/f(z),$$

where

$$f(z) = \frac{z(\cos z + z\sin z)}{\sin z}.$$

Here note that $f(0) = 1$ and that $f$ is analytic at $z = 0$. We have just satisfied the requirements of the inversion formula, so we may conclude that we can solve $w = z/f(z)$ for $z$ as a power series in $w$ in the form given earlier in the post. We have $c_1 = 1$ and, since $f$ is even, it can be shown that $c_{2k} = 0$ for all $k$. Calculating the first few coefficients in Mathematica gives

$$z = w + \frac{2}{3}w^3 + \frac{13}{15}w^5 + \frac{146}{105}w^7 + \frac{781}{315}w^9 + \frac{16328}{3465}w^{11} + \cdots.$$

Substituting this into $x = w^{-1} - z$ and using $w = \left(\pi n + \frac{\pi}{2}\right)^{-1}$ gives the desired series for $x_n$ when $n$ is large enough:

$$x_n = \pi n + \frac{\pi}{2} - \left(\pi n + \frac{\pi}{2}\right)^{-1} - \frac{2}{3}\left(\pi n + \frac{\pi}{2}\right)^{-3} - \frac{13}{15}\left(\pi n + \frac{\pi}{2}\right)^{-5} - \frac{146}{105}\left(\pi n + \frac{\pi}{2}\right)^{-7} - \frac{781}{315}\left(\pi n + \frac{\pi}{2}\right)^{-9} - \frac{16328}{3465}\left(\pi n + \frac{\pi}{2}\right)^{-11} + \cdots$$

-

Great answer, thank you. – Kyle Feb 22 '12 at 19:46

@Kyle: apart from de Bruijn's beautiful classic, I will also recommend looking at chapter 7 of Gil/Segura/Temme's Numerical Methods for Special Functions; the example treated there is the transcendental equation $\cot\,x=\kappa x$, but the methods remain applicable to your equation. – Guess who it is. Apr 10 '12 at 11:18
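Not part of the original answer, but the series is easy to sanity-check numerically: compare the truncated expansion against roots found by Newton's method. The iteration counts and truncation order below are arbitrary choices.

```python
# Compare the asymptotic series for the roots of tan(x) = x against Newton's
# method. Illustrative sketch; both should agree to many digits for modest n.
import math

def newton_root(n, iters=50):
    q = (n + 0.5) * math.pi
    x = q - 1.0 / q  # first-order series term makes a good starting point
    for _ in range(iters):
        f = math.tan(x) - x
        fp = 1.0 / math.cos(x) ** 2 - 1.0  # d/dx (tan x - x) = tan^2 x
        x -= f / fp
    return x

def series_root(n):
    q = (n + 0.5) * math.pi
    w = 1.0 / q
    # Coefficients from the Lagrange inversion computed above.
    z = w + (2/3) * w**3 + (13/15) * w**5 + (146/105) * w**7
    return q - z

for n in (1, 2, 5, 10):
    print(n, newton_root(n), series_root(n))
```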
# [java] How do I import a custom class? ("Bad class file: class file contains wrong class")

## Recommended Posts

I defined my own class foo in a custom utility package, in foo.java, like this:

[source="java"]
package custom_util;

public class foo {
    private int i;
    public foo(int _i) { i = _i; }
    public int geti() { return i; }
}
[/source]

Then in another file in the same directory I try to use this class (in particular, I am trying to test class foo):

[source="java"]
public class footest {
    public void main() {
        foo f = new foo(1);
    }
}
[/source]

When I try javac, I get "Bad class file: foo.class, class file contains wrong class". If I try to explicitly import custom_util.foo, it states that it "cannot access foo" and then continues with the rest of the previous error message. In short, I can't compile, and I'm not honestly clear as to why not. It occurs to me that this is easily solvable... but I have no idea how; thanks for your help in advance.

##### Share on other sites

Quote: If I try to explicitly import custom_util.foo, it states that it "cannot access foo"

Is foo.java inside a folder named custom_util?

##### Share on other sites

Yes, both of these source files are in a folder called custom_util.

EDIT: This is roughly seven subdirectories deep, though (as in /DIR/DIR/DIR/DIR/custom_util). I don't think that matters, but if it does, there it is.

##### Share on other sites

Quote: Yes, both of these source files are in a folder called custom_util.

Move footest.java one folder up. Unnamed packages shouldn't be inside another package's folder. You may have to set the classpath to this same folder for it to work, but at first, try to run javac from the same folder as footest.java.

##### Share on other sites

You're going to have to show us the exact command line you're running and the exact directory layout (and where you're running the command line from). My wild guess would be that you've either got your source and compiled classes laid out differently from what you're telling javac, or you're passing a path to your source files when you should be passing a path to your compiled .class files.

##### Share on other sites

My compile line is just: javac footest.java

The directory structure is:

DIR/DIR/DIR/custom_util/footest.java
DIR/DIR/DIR/custom_util/foo.java
DIR/DIR/DIR/custom_util/foo.class

Thanks for your efforts, guys; I'll try building footest.java one dir up and see what happens.

EDIT: Trying to build footest.java one directory up (outside of custom_util) gives me "package custom_util does not exist". This has got to be a classpath issue. How does one examine/set the class path?

##### Share on other sites

That's a bit odd. I've just built the following files:

foo.java

[source="java"]
package custom_util;

public class foo {
    private int i;
    public foo(int _i) { i = _i; }
    public int geti() { return i; }
}
[/source]

footest.java

[source="java"]
import custom_util.foo;

public class footest {
    public static void main(String[] args) {
        foo f = new foo(1);
    }
}
[/source]

Using a directory structure like:

somewhere/footest.java
somewhere/custom_util/foo.java

And it builds (and runs) just fine with javac footest.java while inside the folder somewhere.

If your classpath is already defined as an environment variable ($CLASSPATH), try printing it. You may set the classpath using -classpath blabla1;blabla2, but that's useful when you've got .class files, not .java files.

Try building foo.java first, from the parent directory (javac custom_util/foo.java), and then build footest the way you did it before.
Just for the sake of it, remove previously built .class files in there, if they could be causing any problems.

##### Share on other sites

Quote: Original post by serratemplar — This has got to be a classpath issue. How does one examine/set the class path?

With the -classpath (-cp for short) flag for javac. In particular, try 'javac -cp . footest.java'. (The '.' is of course the special name for the current directory.)

Also, moving to Java Development.
# Adjusting for Dropout Variance in Batch Normalization and Weight Initialization

8 Jul 2016 · Dan Hendrycks, Kevin Gimpel

We show how to adjust for the variance introduced by dropout with corrections to weight initialization and Batch Normalization, yielding higher accuracy. Though dropout can preserve the expected input to a neuron between train and test, the variance of the input differs...
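The paper's exact corrections are not reproduced in this abstract; the sketch below only illustrates the underlying effect in an assumed form: inverted dropout with keep probability p scales a neuron's input variance by 1/p, and shrinking a He-style weight initialization by sqrt(p) restores the intended output variance.

```python
# Hand-wavy numerical check (assumed setup, not the authors' exact recipe):
# measure the output variance of one linear neuron with and without a
# sqrt(p) correction to He initialization under inverted dropout.
import numpy as np

rng = np.random.default_rng(0)
fan_in, p = 512, 0.5  # layer width and dropout keep probability (illustrative)

def output_variance(weight_std, n_samples=5_000):
    """Empirical variance of a linear neuron fed dropped-out unit-variance inputs."""
    W = rng.normal(0.0, weight_std, size=fan_in)
    x = rng.normal(0.0, 1.0, size=(n_samples, fan_in))
    mask = rng.binomial(1, p, size=x.shape) / p  # inverted dropout scaling
    return np.var((x * mask) @ W)

he_std = np.sqrt(2.0 / fan_in)                                # standard He init
print("uncorrected:", output_variance(he_std))                # inflated by ~1/p (about 4)
print("corrected:  ", output_variance(he_std * np.sqrt(p)))   # back to about 2
```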
## Arc Length and Radians: Worksheet Notes

Definition of a radian. One radian is the measure of a central angle that intercepts an arc equal in length to the radius of the circle. Equivalently, the radian measure of an angle is the ratio of the arc length it subtends to the radius: θ = s/r. Since the full circumference is 2πr, one complete revolution measures 2π radians, so 360° = 2π radians and 180° = π radians. One radian is therefore 180°/π ≈ 57.3°, and a little more than six radians (exactly 2π) make up a full circle.

Converting between degrees and radians:
- Degrees to radians: multiply by π/180°.
- Radians to degrees: multiply by 180°/π.

Examples: 60° = π/3 radians; 120° = 2π/3 radians; 36° = 36° · (π/180°) = π/5 radians; 150° = 5π/6 radians.

Arc length. If a central angle θ, measured in radians, intercepts an arc of length s in a circle of radius r, then s = rθ. If the angle is given in degrees, the arc is the corresponding fraction of the circumference: s = (θ/360°) · 2πr. For example, a central angle of 2 radians in a circle of radius r intercepts an arc of length 2r, and a central angle of 4/3 radians intercepts an arc of length (4/3)r. Conversely, in a circle of radius 6 cm, an arc of length 8 cm subtends a central angle of θ = s/r = 8/6 = 4/3 radians.

Area of a sector. A sector is the region bounded by two radii and the arc lying between them, and its area is a fraction of the area of the circle. With the central angle θ in radians, A = (1/2) r²θ; with θ in degrees, A = (θ/360°) · πr². For example, a sector with a central angle of 102° is 102/360 of the whole circle, so its area is (102/360) · πr².

Linear and angular speed. For a point moving along a circle of radius r, the distance traveled along the arc is s = rθ, so the linear speed v and the angular speed ω (in radians per unit time) are related by v = rω.
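The formulas above are simple enough to capture in a few helper functions; the following Python sketch checks the worked numbers from these notes.

```python
# Degree/radian conversion, arc length s = r*theta, and sector area
# A = (1/2) r^2 theta, with theta in radians.
import math

def deg_to_rad(deg):
    return deg * math.pi / 180.0

def arc_length(r, theta_rad):
    return r * theta_rad

def sector_area(r, theta_rad):
    return 0.5 * r * r * theta_rad

print(deg_to_rad(60), math.pi / 3)     # 60 degrees = pi/3 radians
print(arc_length(6, 4 / 3))            # r = 6 cm, theta = 4/3 rad -> 8 cm
theta = deg_to_rad(102)                # the 102-degree sector example
print(sector_area(6, theta), (102 / 360) * math.pi * 36)  # two routes agree
```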
# Infinite expression

In mathematics, an infinite expression is an expression in which some operators take an infinite number of arguments, or in which the nesting of the operators continues to an infinite depth.[1] A generic concept of infinite expression can lead to ill-defined or self-inconsistent constructions (much like a set of all sets), but there are several instances of infinite expressions that are well defined.

Examples of well-defined infinite expressions include[2][3] infinite sums, whether expressed using summation notation or as an infinite series, such as

$$\sum_{n=0}^{\infty} a_n = a_0 + a_1 + a_2 + \cdots;$$

infinite products, whether expressed using product notation or expanded, such as

$$\prod_{n=0}^{\infty} b_n = b_0 \times b_1 \times b_2 \times \cdots;$$

infinitely nested radicals, such as

$$\sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + \cdots}}};$$

infinite power towers, such as

$$\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdot^{\cdot^{\cdot}}}}};$$

and infinite continued fractions, whether expressed using Gauss's Kettenbruch notation or expanded, such as

$$c_0 + \underset{n=1}{\overset{\infty}{\operatorname{K}}} \frac{1}{c_n} = c_0 + \cfrac{1}{c_1 + \cfrac{1}{c_2 + \cfrac{1}{c_3 + \cfrac{1}{c_4 + \ddots}}}}.$$

In infinitary logic, one can use infinite conjunctions and infinite disjunctions.

Even for well-defined infinite expressions, the value of the infinite expression may be ambiguous or not well defined; for instance, there are multiple summation rules available for assigning values to series, and the same series may have different values according to different summation rules if the series is not absolutely convergent.

## From the hyperreal viewpoint

From the point of view of the hyperreals, such an infinite expression $E_\infty$ is obtained in every case from the sequence $\langle E_n : n \in \mathbb{N} \rangle$ of finite expressions by evaluating the sequence at a hypernatural value $n = H$ of the index $n$ and applying the standard part, so that $E_\infty = \operatorname{st}(E_H)$.
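The power tower above is a convenient example of how a well-defined infinite expression arises as a limit of finite ones: define $t_1 = \sqrt{2}$ and $t_{k+1} = \sqrt{2}^{\,t_k}$, and the sequence converges (to 2). A short numerical check:

```python
# Evaluate the infinite power tower sqrt(2)^sqrt(2)^... by iterating its
# finite truncations; the sequence converges to 2.
import math

t = math.sqrt(2)
for _ in range(100):
    t = math.sqrt(2) ** t
print(t)  # approximately 2.0
```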
# Internal Rate of Return (IRR) ## What Is Internal Rate of Return (IRR)? The internal rate of return (IRR) is a metric used in financial analysis to estimate the profitability of potential investments. IRR is a discount rate that makes the net present value (NPV) of all cash flows equal to zero in a discounted cash flow analysis. IRR calculations rely on the same formula as NPV does. Keep in mind that IRR is not the actual dollar value of the project. It is the annual return that makes the NPV equal to zero. Generally speaking, the higher an internal rate of return, the more desirable an investment is to undertake. IRR is uniform for investments of varying types and, as such, can be used to rank multiple prospective investments or projects on a relatively even basis. In general, when comparing investment options with other similar characteristics, the investment with the highest IRR probably would be considered the best. ### Key Takeaways • The internal rate of return (IRR) is the annual rate of growth that an investment is expected to generate. • IRR is calculated using the same concept as net present value (NPV), except it sets the NPV equal to zero. • IRR is ideal for analyzing capital budgeting projects to understand and compare potential rates of annual return over time. ## Formula and Calculation for IRR The formula and calculation used to determine this figure are as follows: \begin{aligned} &\text{0}=\text{NPV}=\sum_{t=1}^{T}\frac{C_t}{\left(1+IRR\right)^t}-C_0\\ &\textbf{where:}\\ &C_t=\text{Net cash inflow during the period t}\\ &C_0=\text{Total initial investment costs}\\ &IRR=\text{The internal rate of return}\\ &t=\text{The number of time periods}\\ \end{aligned} ### How to Calculate IRR 1. Using the formula, one would set NPV equal to zero and solve for the discount rate, which is the IRR. 2. The initial investment is always negative because it represents an outflow. 3. Each subsequent cash flow could be positive or negative, depending on the estimates of what the project delivers or requires as a capital injection in the future. 4. However, because of the nature of the formula, IRR cannot be easily calculated analytically and instead must be calculated iteratively through trial and error or by using software programmed to calculate IRR (e.g., using Excel). ### How to Calculate IRR in Excel Using the IRR function in Excel makes calculating the IRR easy. Excel does all the necessary work for you, arriving at the discount rate you are seeking to find. All you need to do is combine your cash flows, including the initial outlay as well as subsequent inflows, with the IRR function. The IRR function can be found by clicking on the Formulas Insert (fx) icon. Here is a simple example of an IRR analysis with cash flows that are known and annually periodic (one year apart). Assume a company is assessing the profitability of Project X. Project X requires $250,000 in funding and is expected to generate $100,000 in after-tax cash flows the first year and grow by $50,000 for each of the next four years. In this case, the IRR is 56.72%, which is quite high. Excel also offers two other functions that can be used in IRR calculations: the XIRR, and the MIRR. XIRR is used when the cash flow model does not exactly have annual periodic cash flows. The MIRR is a rate-of-return measure that includes the integration of cost of capital and the risk-free rate.
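The same zero-crossing search that Excel performs can be sketched in a few lines of Python. This is a minimal bisection solver, not any particular library's API; the names npv and irr_bisect are mine:

```python
def npv(rate, cashflows):
    # cashflows[0] is the (negative) initial outlay at t = 0
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr_bisect(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
    # Bisection on the rate; assumes NPV changes sign exactly once on [lo, hi],
    # which holds for the usual "one outflow, then inflows" pattern.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Project X from the text: -$250,000, then $100,000 growing by $50,000 a year.
print(irr_bisect([-250_000, 100_000, 150_000, 200_000, 250_000, 300_000]))
# ~ 0.5672, i.e. the 56.72% quoted above
```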
## Understanding IRR The ultimate goal of IRR is to identify the rate of discount, which makes the present value of the sum of annual nominal cash inflows equal to the initial net cash outlay for the investment. Several methods can be used when seeking to identify an expected return, but IRR is often ideal for analyzing the potential return of a new project that a company is considering undertaking. Think of IRR as the rate of growth that an investment is expected to generate annually. Thus, it can be most similar to a compound annual growth rate (CAGR). In reality, an investment will usually not have the same rate of return each year. Usually, the actual rate of return that a given investment ends up generating will differ from its estimated IRR. ## What Is IRR Used For? In capital planning, one popular scenario for IRR is comparing the profitability of establishing new operations with that of expanding existing operations. For example, an energy company may use IRR in deciding whether to open a new power plant or to renovate and expand an existing power plant. While both projects could add value to the company, it is likely that one will be the more logical decision as prescribed by IRR. Note that because IRR does not account for changing discount rates, it’s often not adequate for longer-term projects with discount rates that are expected to vary. IRR is also useful for corporations in evaluating stock buyback programs. Clearly, if a company allocates substantial funding to repurchasing its shares, then the analysis must show that the company’s own stock is a better investment—that is, has a higher IRR—than any other use of the funds, such as creating new outlets or acquiring other companies. Individuals can also use IRR when making financial decisions—for instance, when evaluating different insurance policies using their premiums and death benefits. The general consensus is that policies that have the same premiums and a high IRR are much more desirable. Note that life insurance has a very high IRR in the early years of policy—often more than 1,000%. It then decreases over time. This IRR is very high during the early days of the policy because if you made only one monthly premium payment and then suddenly died, your beneficiaries would still get a lump sum benefit. Another common use of IRR is in analyzing investment returns. In most cases, the advertised return will assume that any interest payments or cash dividends are reinvested back into the investment. What if you don’t want to reinvest dividends but need them as income when paid? And if dividends are not assumed to be reinvested, are they paid out, or are they left in cash? What is the assumed return on the cash? IRR and other assumptions are particularly important on instruments like annuities, where the cash flows can become complex. Finally, IRR is a calculation used for an investment’s money-weighted rate of return (MWRR). The MWRR helps determine the rate of return needed to start with the initial investment amount factoring in all of the changes to cash flows during the investment period, including sales proceeds. ## Using IRR with WACC Most IRR analyses will be done in conjunction with a view of a company’s weighted average cost of capital (WACC) and NPV calculations. IRR is typically a relatively high value, which allows it to arrive at an NPV of zero. Most companies will require an IRR calculation to be above the WACC.
WACC is a measure of a firm’s cost of capital in which each category of capital is proportionately weighted. All sources of capital, including common stock, preferred stock, bonds, and any other long-term debt, are included in a WACC calculation. In theory, any project with an IRR greater than its cost of capital should be profitable. In planning investment projects, firms will often establish a required rate of return (RRR) to determine the minimum acceptable return percentage that the investment in question must earn to be worthwhile. The RRR will be higher than the WACC. Any project with an IRR that exceeds the RRR will likely be deemed profitable, although companies will not necessarily pursue a project on this basis alone. Rather, they will likely pursue projects with the highest difference between IRR and RRR, as these likely will be the most profitable. IRR may also be compared against prevailing rates of return in the securities market. If a firm can’t find any projects with IRR greater than the returns that can be generated in the financial markets, then it may simply choose to invest money in the market. Market returns can also be a factor in setting an RRR. Analyses will also typically involve NPV calculations at different assumed discount rates. ## IRR vs. Compound Annual Growth Rate The CAGR measures the annual return on an investment over a period of time. The IRR is also an annual rate of return. However, the CAGR typically uses only a beginning and ending value to provide an estimated annual rate of return. IRR differs in that it involves multiple periodic cash flows—reflecting that cash inflows and outflows often constantly occur when it comes to investments. Another distinction is that CAGR is simple enough that it can be calculated easily. ## IRR vs. Return on Investment (ROI) Companies and analysts may also look at the return on investment (ROI) when making capital budgeting decisions. ROI tells an investor about the total growth, start to finish, of the investment. It is not an annual rate of return. IRR tells the investor what the annual growth rate is. The two numbers normally would be the same over the course of one year but won’t be the same for longer periods of time. ROI is the percentage increase or decrease of an investment from beginning to end. It is calculated by taking the difference between the current or expected future value and the original beginning value, divided by the original value and multiplied by 100. ROI figures can be calculated for nearly any activity into which an investment has been made and an outcome can be measured. However, ROI is not necessarily the most helpful for lengthy time frames. It also has limitations in capital budgeting, where the focus is often on periodic cash flows and returns. ## Limitations of the IRR IRR is generally most ideal for use in analyzing capital budgeting projects. It can be misconstrued or misinterpreted if used outside of appropriate scenarios. In the case of positive cash flows followed by negative ones and then by positive ones, the IRR may have multiple values. Moreover, if all cash flows have the same sign (i.e., the project never turns a profit), then no discount rate will produce a zero NPV. Within its realm of uses, IRR is a very popular metric for estimating a project’s annual return. However, it is not necessarily intended to be used alone.
The IRR itself is only a single estimated figure that provides an annual return value based on estimates. Since estimates in IRR and NPV can differ drastically from actual results, most analysts will choose to combine IRR analysis with scenario analysis. Scenarios can show different possible NPVs based on varying assumptions. As mentioned, most companies do not rely on IRR and NPV analyses alone. These calculations are usually also studied in conjunction with a company’s WACC and an RRR, which provides for further consideration. Companies usually compare IRR analysis to other tradeoffs. If another project has a similar IRR with less up-front capital or simpler extraneous considerations, then a simpler investment may be chosen despite IRRs. In some cases, issues can also arise when using IRR to compare projects of different lengths. For example, a project of short duration may have a high IRR, making it appear to be an excellent investment. Conversely, a longer project may have a low IRR, earning returns slowly and steadily. The ROI metric can provide some more clarity in these cases, although some managers may not want to wait out the longer time frame. ## Investing Based on IRR The internal rate of return rule is a guideline for evaluating whether to proceed with a project or investment. The IRR rule states that if the IRR on a project or investment is greater than the minimum RRR (typically the cost of capital), then the project or investment can be pursued. Conversely, if the IRR on a project or investment is lower than the cost of capital, then the best course of action may be to reject it. Overall, while there are some limitations to IRR, it is an industry-standard for analyzing capital budgeting projects. ## IRR Example Assume a company is reviewing two projects. Management must decide whether to move forward with one, both, or neither. Its cost of capital is 10%. The cash flow patterns for each are as follows: Project A • Initial Outlay = $5,000 • Year one = $1,700 • Year two = $1,900 • Year three = $1,600 • Year four = $1,500 • Year five = $700 Project B • Initial Outlay = $2,000 • Year one = $400 • Year two = $700 • Year three = $500 • Year four = $400 • Year five = $300 The company must calculate the IRR for each project. Initial outlay (period = 0) will be negative. Solving for IRR is an iterative process using the following equation: $0 = Σ CF_t ÷ (1 + IRR)^t where: • CF = net cash flow • IRR = internal rate of return • t = period (from 0 to last period) -or- $0 = (initial outlay × −1) + CF_1 ÷ (1 + IRR)^1 + CF_2 ÷ (1 + IRR)^2 + ... + CF_X ÷ (1 + IRR)^X Using the above examples, the company can calculate IRR for each project as: ### IRR Project A: $0 = (−$5,000) + $1,700 ÷ (1 + IRR)^1 + $1,900 ÷ (1 + IRR)^2 + $1,600 ÷ (1 + IRR)^3 + $1,500 ÷ (1 + IRR)^4 + $700 ÷ (1 + IRR)^5 IRR Project A = 16.61% ### IRR Project B: $0 = (−$2,000) + $400 ÷ (1 + IRR)^1 + $700 ÷ (1 + IRR)^2 + $500 ÷ (1 + IRR)^3 + $400 ÷ (1 + IRR)^4 + $300 ÷ (1 + IRR)^5 IRR Project B = 5.23% Given that the company’s cost of capital is 10%, management should proceed with Project A and reject Project B. ## What does internal rate of return mean? The internal rate of return (IRR) is a financial metric used to assess the attractiveness of a particular investment opportunity. When you calculate the IRR for an investment, you are effectively estimating the rate of return of that investment after accounting for all of its projected cash flows together with the time value of money.
When selecting among several alternative investments, the investor would then select the investment with the highest IRR, provided it is above the investor’s minimum threshold. The main drawback of IRR is that it is heavily reliant on projections of future cash flows, which are notoriously difficult to predict. ## Is IRR the same as ROI? Although IRR is sometimes referred to informally as a project’s “return on investment,” it is different from the way most people use that phrase. Often, when people refer to ROI, they are simply referring to the percentage return generated from an investment in a given year or across a stretch of time. But that type of ROI does not capture the same nuances as IRR, and for that reason, IRR is generally preferred by investment professionals. Another advantage of IRR is that its definition is mathematically precise, whereas the term ROI can mean different things depending on the context or the speaker. ## What is a good internal rate of return? Whether an IRR is good or bad will depend on the cost of capital and the opportunity cost of the investor. For instance, a real estate investor might pursue a project with a 25% IRR if comparable alternative real estate investments offer a return of, say, 20% or lower. However, this comparison assumes that the riskiness and effort involved in making these difficult investments are roughly the same. If the investor can obtain a slightly lower IRR from a project that is considerably less risky or time-consuming, then they might happily accept that lower-IRR project. In general, though, a higher IRR is better than a lower one, all else being equal.
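Reusing the irr_bisect sketch from earlier as a check on the worked example (again an illustration of mine, not part of the article):

```python
project_a = [-5_000, 1_700, 1_900, 1_600, 1_500, 700]
project_b = [-2_000, 400, 700, 500, 400, 300]
print(irr_bisect(project_a))  # ~ 0.1661 -> 16.61%
print(irr_bisect(project_b))  # ~ 0.0523 -> 5.23%
```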
## Subsection 1.1 First-time Registration 1. Open the following url in another tab: https://runestone.academy/runestone/default/user/register. The form is pretty self-explanatory, except for the Course Name field. There is information about that on the right side. You can specify one of the strings that identify one of the public courses that the server knows about: thinkcspy, pythonds, etc. Or you can enter the string for a course that someone else has created and provided you with. 2. Click Register 3. If you are on the login page instead at https://runestone.academy/runestone/default/user/login then click the Register button which is near the bottom of the page to get to the registration page.
## softballislove3 3 years ago Find cos θ if sin θ = -5/13 and tan θ > 0. 1. myininaya If sine is negative and tangent is positive, then cosine is ____?____ tangent=sine/cosine=negative/cosine=positive =>cosine is ___?____ 2. softballislove3 positive? 3. myininaya So negative/positive is positive No... 4. softballislove3 no its negative 5. myininaya Right. You want tangent to be positive not negative 6. softballislove3 positive 7. myininaya So you need cosine to be negative in order for tangent to be positive since sine is negative. 8. myininaya So we are in the (-,-) quadrant. 9. softballislove3 yea 10. softballislove3 correct 11. myininaya [draws a right triangle] $\sin( \theta)=-5/13 \text{ is given to you}$ 12. softballislove3 so then use the Pythagorean theorem to find cosine 13. myininaya Find the other side using the Pythagorean thm. 14. softballislove3 which would end up being -12/13 15. myininaya the other side is 12 is what you are saying and no i thought you were looking for cosine... 16. myininaya oops I labeled the triangle wrong a little bit. 17. softballislove3 i am 18. myininaya [redraws the triangle] 19. myininaya And you will be correct. :) 20. softballislove3 [draws the answer] 21. softballislove3 yay 22. softballislove3 thank you!!!! 23. myininaya Right.
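A quick numeric check of the thread's conclusion (my snippet, not part of the original exchange):

```python
import math

sin_theta = -5 / 13
# sin < 0 with tan > 0 puts theta in quadrant III, so cosine is negative.
cos_theta = -math.sqrt(1 - sin_theta ** 2)
print(cos_theta)                  # -12/13, about -0.923
print(sin_theta / cos_theta > 0)  # True: tangent is positive, as required
```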
QUESTION # Find the possible expressions for the length and breadth of the rectangle whose area is $3{x^2} - 8x + 5$? Hint: Since we know that the area of a rectangle is equal to length multiplied by breadth, we approach the problem by factorizing the polynomial into two factors so that it resembles the formula Area of rectangle $= length \times breadth$; we factorize the polynomial by splitting the middle term. Given: area of the rectangle $= 3{x^2} - 8x + 5$. Using middle-term splitting, we first find the product of the coefficient of the first term and the last term, which is $3 \times 5 = 15$. Now find two factors of 15 whose sum is the middle coefficient, $-8$. The two factors come out to be $-3$ and $-5$, since their product is 15 and their sum is $-8$. Now rewrite the polynomial as $3{x^2} - 3x - 5x + 5$. Regroup the terms by finding the common factors: taking $3x$ common from the first two terms and $-5$ common from the last two terms gives $3x(x - 1) - 5(x - 1)$. Now factor out the shared binomial parenthesis $(x - 1)$: $(3x - 5)(x - 1)$. The polynomial has been split into two factors, resembling the formula Area of rectangle $= length \times breadth$. So possible expressions for the length and breadth are $3x - 5$ and $x - 1$. Hence, the possible expressions for the length and breadth of the rectangle whose area is $3{x^2} - 8x + 5$ are $3x - 5$ and $x - 1$. Note: For such questions, just relate the given expression for the area to the formula Area of rectangle $= length \times breadth$; factoring the quadratic polynomial, by whichever technique we prefer, then yields the possible expressions for the length and the breadth.
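The factorization is easy to double-check with a computer algebra system; for example, with SymPy:

```python
from sympy import symbols, factor, expand

x = symbols('x')
print(factor(3*x**2 - 8*x + 5))   # (x - 1)*(3*x - 5)
print(expand((3*x - 5)*(x - 1)))  # 3*x**2 - 8*x + 5
```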
# Show that o(ab) = o(ba) • Jul 31st 2013, 07:01 PM phys251 Show that o(ab) = o(ba) a, b are in group G. Show o(ab) = o(ba). EDIT: The back of the book has: o(ab) = m => (ba)^m = (a^-1)(a)(ba)^m = (a^-1)(ab)^m(a) (**) I have NO clue how they got the step marked by (**). • Aug 1st 2013, 09:23 AM emakarov Re: Show that o(ab) = o(ba) Quote: Originally Posted by phys251 o(ab) = m => (ba)^m = (a^-1)(a)(ba)^m = (a^-1)(ab)^m(a) (**) I have NO clue how they got the step marked by (**). Consider some examples. $a^{-1}a(ba)^3 =a^{-1}a(ba)(ba)(ba) =a^{-1}(ab)(ab)(ab)a =a^{-1}(ab)^3a$ If you need a precise proof, you can prove that $a(ba)^m = (ab)^ma$ by induction on m. • Aug 1st 2013, 11:26 AM phys251 Re: Show that o(ab) = o(ba) Ohhhh, you just associate everything once to the left, so now instead of ba's inside parentheses, we have ab's. Thanks.
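Writing out the induction emakarov suggests (my wording, not from the thread): the claim is $a(ba)^m = (ab)^m a$ for all $m \ge 0$. For $m = 0$ both sides equal $a$. Assuming it for $m$, $$a(ba)^{m+1} = \left(a(ba)^m\right)(ba) = (ab)^m a(ba) = (ab)^m (ab)a = (ab)^{m+1}a.$$ Consequently $(ba)^m = a^{-1}(ab)^m a$, so $(ab)^m = e$ if and only if $(ba)^m = e$, which gives $o(ab) = o(ba)$.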
Hilbert space A vector space $H$ over the field of complex (or real) numbers, together with a complex-valued (or real-valued) function $( x, y)$ defined on $H \times H$, with the following properties: 1) $( x, x) = 0$ if and only if $x = 0$; 2) $( x, x ) \geq 0$ for all $x \in H$; 3) $( x + y, z) = ( x, z) + ( y, z)$, $x, y, z \in H$; 4) $( \alpha x, y) = \alpha ( x, y)$, $x, y \in H$, $\alpha$ a complex (or real) number; 5) $( x, y) = \overline{ {( y, x) }}\;$, $x, y \in H$; 6) if $x _ {n} \in H$, $n = 1, 2 \dots$ and if $$\lim\limits _ {n, m \rightarrow \infty } \ ( x _ {n} - x _ {m} , x _ {n} - x _ {m} ) = 0,$$ then there exists an element $x \in H$ such that $$\lim\limits _ {n \rightarrow \infty } \ ( x - x _ {n} , x - x _ {n} ) = 0;$$ the element $x$ is called the limit of the sequence $( x _ {n} )$; 7) $H$ is an infinite-dimensional vector space. The function $( x, y)$ which satisfies axioms 1)–5) is called the scalar (or inner) product of $x$ and $y$. The magnitude $\| x \| = ( x, x) ^ {1/2}$ is said to be the norm (or the length) of $x \in H$. The inequality $| ( x, y ) | \leq \| x \| \cdot \| y \|$ is valid. If a distance between elements $x, y \in H$ is introduced in $H$ by means of the equality $\rho ( x, y ) = \| x - y \|$, $H$ is converted into a metric space. Two Hilbert spaces $H$ and $H _ {1}$ are said to be isomorphic (or isometrically isomorphic) if there exists a one-to-one correspondence $x \iff x _ {1}$, $x \in H$, $x _ {1} \in H _ {1}$, between $H$ and $H _ {1}$ which preserves the linear operations and the scalar product. Hilbert spaces constitute the class of infinite-dimensional vector spaces that are most often used and that are the most important as far as applications are concerned. They are the natural extension of the concept of a finite-dimensional vector space with a scalar product (i.e. a finite-dimensional Euclidean space or a finite-dimensional unitary space). In fact, if a scalar product is specified in a finite-dimensional vector space (over the field of real or complex numbers), then property 6), which is called the completeness of the Hilbert space, is automatically satisfied. Infinite-dimensional vector spaces $H$ with a scalar product are known as pre-Hilbert spaces; there exist pre-Hilbert spaces for which property 6) does not hold. Any pre-Hilbert space can be completed to a Hilbert space. In the definition of a Hilbert space the condition of infinite dimensionality is often omitted, i.e. a pre-Hilbert space is understood to mean a vector space over the field of complex (or real) numbers with a scalar product, while a Hilbert space is the name given to a complete pre-Hilbert space. Contents Examples of Hilbert spaces. 1) The complex space $l _ {2}$( or $l ^ {2}$). The elements of this Hilbert space are infinite sequences of complex numbers $x = \{ \xi _ {1} , \xi _ {2} ,\dots \}$, $y = \{ \eta _ {1} , \eta _ {2} ,\dots \}$ that are square summable: $$\sum _ {k = 1 } ^ \infty | \xi _ {k} | ^ {2} < + \infty ,\ \ \sum _ {k = 1 } ^ \infty | \eta _ {k} | ^ {2} < + \infty .$$ The scalar product is defined by the equation $$( x, y) = \ \sum _ {k = 1 } ^ \infty \xi _ {k} \overline \eta \; _ {k} .$$ 2) The space $l _ {2} ( T)$( a generalization of Example 1)). Let $T$ be an arbitrary set. The elements of the Hilbert space $l _ {2} ( T)$ are complex-valued functions $x( t)$ on $T$ differing from zero in at most countably many points $t \in T$ and such that the series $$\sum _ {t \in T } | x ( t) | ^ {2}$$ converges. 
The scalar product is defined by the equation $$( x, y) = \ \sum _ {t \in T } x ( t) \overline{ {y ( t) }}\; .$$ Any Hilbert space is isomorphic to the space $l _ {2} ( T)$ for some suitably chosen $T$. 3) The space $L _ {2} ( S, \Sigma , \mu )$( or $L ^ {2} ( S, \Sigma , \mu )$) of complex-valued functions $x( s)$ defined on a set $S$ with a totally-additive positive measure $\mu$( given on the $\sigma$- algebra of subsets $\Sigma$ of $S$) which are measurable and have an integrable square modulus: $$\int\limits _ { S } | x ( s) | ^ {2} d \mu ( s) < + \infty .$$ In this Hilbert space the scalar product is defined by: $$( x ( s), y ( s)) = \ \int\limits _ { S } x ( s) \overline{ {y ( s) }}\; d \mu ( s).$$ 4) The Sobolev space $W _ {l} ^ {2} ( \Omega )$, which is also denoted by $H _ {(} l)$( cf. Imbedding theorems). 5) A Hilbert space of functions with values in a Hilbert space. Let $H$ be some Hilbert space with scalar product $( x, y)$, $x, y \in H$. Further, let $\Omega$ be an arbitrary domain in $\mathbf R ^ {n}$, and let $f( x)$, $x \in \Omega$, be a function with values in $H$ that is Bochner-measurable (cf. Bochner integral) and is such that $$\int\limits _ \Omega \| f ( x) \| _ {H} ^ {2} dx < \infty ,$$ where $d x$ is Lebesgue measure on $\Omega$( instead of Lebesgue measure one may take any other positive countably-additive measure). If one defines the scalar product $$( f ( x), g ( x)) _ \perp = \ \int\limits _ \Omega ( f ( x), g ( x)) dx$$ on this set of functions, a new Hilbert space $H _ {1}$ is obtained. 6) The set of continuous Bohr almost-periodic functions on the real line forms a pre-Hilbert space if the scalar product is defined by $$( x ( t), y ( t)) = \ \lim\limits _ {T \rightarrow \infty } \ { \frac{1}{2T} } \int\limits _ {- T } ^ { T } x ( t) \overline{ {y ( t) }}\; dt.$$ The existence of the limit follows from the theory of almost-periodic functions. This space is completed to the class $B ^ {2}$ of Besicovitch almost-periodic functions. The spaces $l _ {2}$ and $L _ {2}$ were introduced and studied by D. Hilbert [1] in his fundamental work on the theory of integral equations and infinite quadratic forms. The definition of a Hilbert space was given by J. von Neumann [3], F. Riesz [4] and M.H. Stone [13], who also laid the basis for their systematic study. A Hilbert space is a natural extension of the ordinary three-dimensional space in Euclidean geometry, and many geometric concepts have their interpretation in a Hilbert space, so that one is entitled to speak about the geometry of Hilbert space. Two vectors $x$ and $y$ from a Hilbert space $H$ are said to be orthogonal $( x \perp y)$ if $( x, y ) = 0$. Two linear subspaces $\mathfrak M$ and $\mathfrak N$ in $H$ are said to be orthogonal $( \mathfrak M \perp \mathfrak N )$ if each element of $\mathfrak M$ is orthogonal to each element from $\mathfrak N$. The orthogonal complement of a set $A \subset H$ is the set $B = \{ {x } : {( x, A) = 0 } \}$, i.e. the set of elements $x \in H$ which are orthogonal to all elements of $A$. It is denoted by $H \ominus A$ or, if $H$ is understood, by $A ^ \perp$. The orthogonal complement $\mathfrak N$ of an arbitrary set $\mathfrak M$ in $H$ is a closed linear subspace. If $\mathfrak M$ is a closed linear subspace in a Hilbert space (which may also be referred to as a Hilbert subspace), then any element $x \in H$ can be uniquely represented as the sum $x = y + z$, $y \in \mathfrak M$, $z \in \mathfrak N$. 
This decomposition is known as the theorem on orthogonal complements and is usually written as $$H = \mathfrak M \oplus \mathfrak N .$$ A set $A \subset H$ is said to be an orthonormal set or an orthonormal system if any two different vectors from $A$ are orthogonal and if the norm of each vector $y \in A$ is equal to one. An orthonormal set is said to be a complete orthonormal set if there is no non-zero vector from $H$ that is orthogonal to all the vectors of this set. If $\{ y _ {i} \}$ is an orthonormal sequence and $\{ \alpha _ {i} \}$ is a sequence of scalars, then the series $$\sum _ { i } \alpha _ {i} y _ {i}$$ converges if and only if $$\sum _ { i } | \alpha _ {i} | ^ {2} < \infty ;$$ moreover $$\left \| \sum _ { i } \alpha _ {i} y _ {i} \right \| ^ {2} = \ \sum _ { i } | \alpha _ {i} | ^ {2}$$ (Pythagoras' theorem in Hilbert spaces). Let $A$ be an orthonormal set in a Hilbert space $H$ and let $x$ be an arbitrary vector from $H$. Then $( x, y) = 0$ for all $y \in A$, with the exception of a finite or countable set of vectors. The series $$Px = \sum _ {y \in A } ( x, y) y$$ converges, and its sum is independent of the order of its non-zero terms. The operator $P$ is the orthogonal projection operator, or projector, on the (closed) Hilbert subspace generated by $A$. A set $A \subset H$ is said to be an orthonormal basis of a linear subspace $\mathfrak N \subseteq H$ if $A$ is contained in $\mathfrak N$ and if the equality $$x = \sum _ {y \in A } ( x, y) y$$ is valid for any $x \in \mathfrak N$, i.e. if any vector $x \in \mathfrak N$ can be expanded with respect to the system $A$, that is, can be represented with the aid of vectors from $A$. The set of numbers $\{ {( x, y) } : {y \in A } \}$ is called the set of Fourier coefficients of the element $x$ with respect to the basis $A$. Each subspace of a Hilbert space $H$( in particular, $H$ itself) has an orthonormal basis. An orthonormal basis in $l _ {2} ( T)$ is a set of functions $\{ {x _ {t} } : {t \in T } \}$ defined by the formula $x _ {t} ( s) = 1$ if $s = t$ and $x _ {t} ( s) = 0$ if $s \neq t$. In a space $L _ {2} ( S, \Sigma , \mu )$ the expansion of a vector with respect to a basis takes the form of an expansion with respect to a system of orthogonal functions; this represents an important method for solving problems in mathematical physics. For an orthonormal set $A \subset H$ the following statements are equivalent: $A$ is complete; $A$ is an orthonormal basis for $H$; and $\| x \| ^ {2} = \sum _ {y \in A } | ( x, y) | ^ {2}$ for any $x \in H$. All orthonormal bases of a given Hilbert space have the same cardinality. This fact makes it possible to define the dimension of a Hilbert space. In fact, the dimension of a Hilbert space is the cardinality of an arbitrary orthonormal basis in it. This dimension is sometimes referred to as the Hilbert dimension (as distinct from the linear dimension of a Hilbert space, i.e. the cardinality of the Hamel basis (cf. Basis) — a concept which does not take into account the topological structure of the Hilbert space). Two Hilbert spaces are isomorphic if and only if their dimensions are equal. The concept of a dimension is connected with that of the deficiency of a Hilbert subspace, also called the codimension of a Hilbert subspace. In fact, the codimension of a Hilbert subspace $H _ {1}$ of a Hilbert space $H$ is the dimension of the orthogonal complement $H _ {1} ^ \perp = H \ominus H _ {1}$. A Hilbert subspace with codimension equal to one, i.e. 
the orthogonal complement to which is one-dimensional, is known as a hyperspace. A translate of a hyperspace is called a hyperplane. Some of the geometrical concepts involve the use of the terminology of linear operators in a Hilbert space; they include, in particular, the concept of an opening of linear subspaces. The opening of two subspaces $M _ {1}$ and $M _ {2}$ in a Hilbert space $H$ is the norm $\theta ( M _ {1} , M _ {2} )$ of the difference of the operators which project $H$ on the closure of these linear subspaces. The simplest properties of an opening are: a) $\theta ( M _ {1} , M _ {2} ) = \theta ( \overline{M}\; _ {1} , \overline{M}\; _ {2} ) = \theta ( H \ominus \overline{M}\; _ {1} , H \ominus \overline{M}\; _ {2} )$; b) $\theta ( M _ {1} , M _ {2} ) \leq 1$, and, in the case of strict inequality, $\mathop{\rm dim} M _ {1} = \mathop{\rm dim} M _ {2}$. Many problems in Hilbert spaces involve only finite sets of vectors of a Hilbert space, i.e. elements of finite-dimensional linear subspaces of a Hilbert space. This is why the concepts and methods of linear algebra play an important role in the theory of Hilbert spaces. Vectors $g _ {1} \dots g _ {n}$ in a Hilbert space are said to be linearly independent if the equation $$\sum _ {k = 1 } ^ { n } \alpha _ {k} g _ {k} = 0,$$ where $\alpha _ {k}$ are scalars, holds only if all $\alpha _ {k}$ are equal to zero. Vectors are linearly independent if their Gram determinant does not vanish. A countable sequence of vectors $g _ {1} \dots g _ {n} \dots$ is said to be a linearly independent sequence if all its finite subsets are linearly independent. Each linearly independent sequence can be orthonormalized, i.e. it is possible to construct an orthonormal system $e _ {1} , e _ {2} \dots$ such that for all $n$ the linear hulls (cf. Linear hull) of the sets $\{ g _ {k} \} _ {k=1} ^ {n}$ and $\{ e _ {k} \} _ {k=1} ^ {n}$ coincide. This construction is known as the Gram–Schmidt orthogonalization (orthonormalization) process and consists of the following: $$e _ {1} = \frac{g _ {1} }{\| g _ {1} \| } ,\ \ h _ {2} = g _ {2} - ( g _ {2} , e _ {1} ) e _ {1} ,\ \ e _ {2} = \frac{h _ {2} }{\| h _ {2} \| } \dots$$ $$h _ {n} = g _ {n} - \sum _ {k = 1 } ^ { {n } - 1 } ( g _ {n} , e _ {k} ) e _ {k} ,\ e _ {n} = \frac{h _ {n} }{\| h _ {n} \| } ,\dots .$$ Operations of direct sum and tensor product are defined in the set of Hilbert spaces. The direct sum of Hilbert spaces $H _ {i}$, $i= 1 \dots n$, where each $H _ {i}$ has a corresponding scalar product, is the Hilbert space $$H = H _ {1} \oplus \dots \oplus H _ {n}$$ defined as follows: In the vector space $H _ {1} + \dots + H _ {n}$— the direct sum of the vector spaces $H _ {1} \dots H _ {n}$— the scalar product is defined by $$([ x _ {1} \dots x _ {n} ], [ y _ {1} \dots y _ {n} ]) = \ \sum _ {i = 1 } ^ { n } ( x _ {i} , y _ {i} ) _ {H _ {i} } .$$ If $i \neq j$, the elements of $H _ {i}$ and $H _ {j}$ in the direct sum $$H = \sum _ {i = 1 } ^ { n } \oplus H _ {i}$$ are mutually orthogonal, and the projection of $H$ onto $H _ {i}$ coincides with the orthogonal projection of $H$ onto $H _ {i}$. The concept of the direct sum of Hilbert spaces has been generalized to the case of an infinite set of direct components. Let a Hilbert space $H _ \nu$ be specified for each $\nu$ of some index set $A$.
The direct sum of Hilbert spaces (denoted by $\sum _ {\nu \in A } \oplus H _ \nu$) is the set $H$ of all functions $\{ x _ \nu \}$ defined on $A$ such that $x _ \nu \in H _ \nu$ for each $\nu \in A$, and $\sum _ {\nu \in A } \| x _ \nu \| ^ {2} < \infty$. The linear operations in $H$ are defined by $$\{ x _ \nu \} + \{ y _ \nu \} = \{ x _ \nu + y _ \nu \} ,\ \ \alpha \{ x _ \nu \} = \{ \alpha x _ \nu \} ,$$ while the scalar product is defined by $$( \{ x _ \nu \} , \{ y _ \nu \} ) = \ \sum _ {\nu \in A } ( x _ \nu , y _ \nu ) _ {H _ \nu } .$$ If the linear operations and the scalar product are defined in this manner, the direct sum $$H = \sum _ {\nu \in A } \oplus H _ \nu$$ becomes a Hilbert space. Another important operation in the set of Hilbert spaces is the tensor product. The tensor product of Hilbert spaces $H _ {i}$, $i = 1 \dots n$, is defined as follows. Let $H _ {1} \odot \dots \odot H _ {n}$ be the tensor product of the vector spaces $H _ {1} \dots H _ {n}$. In the vector space $H _ {1} \odot \dots \odot H _ {n}$ there exists a unique scalar product such that $$( x _ {1} \odot \dots \odot x _ {n} , y _ {1} \odot \dots \odot y _ {n} ) = \ \prod _ {i = 1 } ^ { n } ( x _ {i} , y _ {i} ) _ {H _ {i} }$$ for all $x _ {i} , y _ {i} \in H _ {i}$. Thus, the vector space becomes a pre-Hilbert space, whose completion is a Hilbert space, denoted by $H _ {1} \otimes \dots \otimes H _ {n}$, or $\prod _ {i=} 1 ^ {n} H _ {i}$, and is known as the tensor product of the Hilbert spaces $H _ {i}$. Hilbert spaces form an important class of Banach spaces; any Hilbert space $H$ is a Banach space with respect to the norm $\| x \| = ( x, x) ^ {1/2}$, and the following parallelogram identity holds for any two vectors $x, y \in H$: $$\| x + y \| ^ {2} + \| x - y \| ^ {2} = \ 2 ( \| x \| ^ {2} + \| y \| ^ {2} ).$$ The parallelogram identity distinguishes the class of Hilbert spaces from the Banach spaces, viz. if the parallelogram identity is valid in a real normed space $B$ for any pair of elements $x, y \in B$, then the function $$( x, y) = { \frac{1}{4} } ( \| x + y \| ^ {2} - \| x - y \| ^ {2} )$$ satisfies the axioms of a scalar product, and thus makes $B$ into a pre-Hilbert space (if $B$ is a Banach space, it is made a Hilbert space). From the parallelogram identity it follows that every Hilbert space is a uniformly-convex space. As in any Banach space, two topologies may be specified in a Hilbert space — a strong (norm) one and a weak one. These topologies are different. A Hilbert space is separable in the strong topology if and only if it is separable in the weak topology; a convex set (in particular, a linear subspace) in a Hilbert space is strongly closed if and only if it is weakly closed. As in the theory of general Banach spaces, so, too, in the theory of Hilbert spaces, the concept of separability plays an important role. A Hilbert space is separable if and only if it has countable dimension. The Hilbert spaces $l _ {2}$ and $H _ {(} l)$ are separable; the Hilbert space $l _ {2} ( T)$ is separable if and only if $T$ is at most countable; a Hilbert space $L _ {2} ( S, \Sigma , \mu )$ is separable if the measure $\mu$ has a countable basis. The Hilbert space $B _ {2}$ is not separable. Any orthonormal basis in a separable Hilbert space $H$ is at the same time an unconditional Schauder basis in $H$, regarded as a Banach space. However, non-orthogonal Schauder bases also exist in separable Hilbert spaces. 
Accordingly, the following theorem is valid [7]: Let $\{ f _ {k} \}$ be a complete system of vectors in a Hilbert space $H$ and let $\lambda _ {n}$ and $\Lambda _ {n}$ be the smallest and the largest eigen values of the Gram matrix $$\{ \alpha _ {jk} \} _ {j, k = 1 } ^ {n} ,\ \ \alpha _ {jk} = ( f _ {k} , f _ {j} ).$$ If $$\lim\limits _ {n \rightarrow \infty } \inf \lambda _ {n} > 0 \ \ \textrm{ and } \ \ \lim\limits _ {n \rightarrow \infty } \sup \Lambda _ {n} < \infty ,$$ then 1) the sequence $\{ f _ {k} \}$ is a basis in $H$; and 2) there exists a sequence $\{ g _ {k} \}$ biorthogonal to $\{ f _ {k} \}$ which is also a basis in $H$. As in any Banach space, the description of the set of linear functionals on a Hilbert space and the study of the properties of these functionals is very important. Linear functionals on Hilbert spaces have a particularly simple structure. Any linear functional $f$ on a Hilbert space $H$ can be uniquely denoted by $f( x) = ( x, x ^ {*} )$ for all $x \in H$, where $x ^ {*} \in H$; moreover $\| f \| = \| x ^ {*} \|$. The space $H ^ {*}$ of linear functionals $f$ on $H$ is isometrically anti-isomorphic to $H$( i.e. the correspondence $f \rightarrow x ^ {*}$ is isometric, additive and anti-homogeneous: $\alpha f \rightarrow \overline \alpha \; x ^ {*}$). In particular, a Hilbert space is reflexive (cf. Reflexive space), and for this reason the following statements are valid: a Hilbert space is weakly sequentially complete; a subset of a Hilbert space is relatively weakly compact if and only if it is bounded. The main content of the theory of Hilbert spaces is the theory of linear operators on them. The concept of a Hilbert space itself was formulated in the works of Hilbert [2] and E. Schmidt [14] on the theory of integral equations, while the abstract definition of a Hilbert space was given by von Neumann [3], F. Riesz [4] and Stone [13] in their studies of Hermitian operators. The theory of operators on a Hilbert space is a fundamental branch of the general theory of operators for two reasons. First, the theory of self-adjoint and unitary operators on a Hilbert space is not only one of the most developed parts of the general theory of linear operators, but is also of wide use in other parts of functional analysis and in a number of other parts of mathematics and physics. The theory of linear operators on a Hilbert space makes it possible to look at various problems in mathematical physics from a unified point of view; above all, these are the questions concerning eigen values and eigen functions. Moreover, the theory of self-adjoint operators on a Hilbert space is a mathematical tool in quantum mechanics: In the description of a quantum-mechanical system, the observed quantities (energy, momentum, position, etc.) are interpreted as self-adjoint operators on some Hilbert space, while the states of the system are elements of that space. In turn, the problems of quantum mechanics have up to our time an influence on the development of the theory of self-adjoint operators, and also on the theory of operator algebras on Hilbert spaces. Secondly, the intensively developed theory of self-adjoint operators (cf. Self-adjoint operator) on a Hilbert space (in particular, that of cyclic, nilpotent, cellular, contractible, spectral, and scalar operators) is an important model of the theory of linear operators on more general spaces. 
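The Gram–Schmidt process described earlier is straightforward to run numerically for vectors in $\mathbf R ^ {n}$ with the standard scalar product. A small sketch (mine, for illustration only), assuming the input sequence is linearly independent:

```python
import numpy as np

def gram_schmidt(vectors):
    # Orthonormalize g_1, ..., g_n: subtract the projections onto the e_k
    # already built, then normalize (h_n = g_n - sum_k (g_n, e_k) e_k).
    basis = []
    for g in vectors:
        h = g - sum(np.dot(g, e) * e for e in basis)
        basis.append(h / np.linalg.norm(h))
    return basis

es = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                   np.array([1.0, 0.0, 1.0]),
                   np.array([0.0, 1.0, 1.0])])
# The Gram matrix of the result is the identity: an orthonormal system.
print(np.round([[np.dot(a, b) for b in es] for a in es], 10))
```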
An important class of linear operators on a Hilbert space is formed by the everywhere-defined continuous operators, also called bounded operators. If one introduces on the set $\mathfrak B ( H)$ of bounded linear operators on $H$ the operations of addition, multiplication by a scalar and multiplication of operators, as well as the norm of an operator, by the usual rules (see Linear operator) and defines the involution in $\mathfrak B ( H)$ as transition to the adjoint operator, then $\mathfrak B ( H)$ becomes a Banach algebra with involution. Important classes of bounded operators on a Hilbert space are the self-adjoint operators, the unitary operators and the normal operators (cf. Self-adjoint operator; Unitary operator; Normal operator), since they have special properties with respect to the scalar product. These classes of operators are well-studied; the fundamental instruments in their study are the simplest bounded self-adjoint operators, such as the operators of orthogonal projection, or simply projectors (cf. Projector). The means by which any self-adjoint, unitary or normal operator on a complex Hilbert space is constructed from projectors, is given by the spectral decomposition of a linear operator, which is especially simple in the case of a separable Hilbert space. A more complex branch of the theory of linear operators on a Hilbert space is the theory of unbounded operators. The most important unbounded operators on a Hilbert space are the closed linear operators with a dense domain of definition; in particular, unbounded self-adjoint and normal operators. Between the self-adjoint and the unitary operators on a Hilbert space there is a one-to-one relation, defined by the Cayley transformation (cf. Cayley transform). Of importance (especially in the theory of linear differential operators) is the class of symmetric operators (cf. Symmetric operator) on a Hilbert space, and the theory of self-adjoint extensions of such operators. Unbounded self-adjoint and normal operators on a complex Hilbert space $H$ also have a spectral decomposition. The spectral decomposition is the greatest achievement of the theory of self-adjoint and normal operators on a Hilbert space. It corresponds to the classical reduction theory of Hermitian and normal complex matrices on an $n$- dimensional unitary space. Namely, the spectral decomposition and the operator calculus for self-adjoint and normal operators which is related to it ensure a wide range of applications in various parts of mathematics for the theory of operators on a Hilbert space. For bounded self-adjoint operators on $l _ {2}$ the spectral decomposition was found by Hilbert [1], who also introduced the important concept of a resolution of the identity for a self-adjoint operator. Nowadays, several approaches to the spectral theory of self-adjoint and normal operators are available. One of the most profound is given by the theory of Banach algebras. The spectral decomposition of an unbounded self-adjoint operator was found by von Neumann [3]. His work preceded the important investigations of T. Carleman [8], who obtained the spectral decomposition for the case of a symmetric integral operator, and who also discovered that there is no complete analogy between symmetric bounded and unbounded operators. The importance of the concept of a self-adjoint operator was first drawn attention to by Schmidt (cf. [3]). Note that both for the investigations by Hilbert, and for much later investigations, the works of P.L. Chebyshev, A.A. Markov and Th.J. 
Stieltjes on the classical problems of moments, Jacobi matrices and continued fractions (cf. [9]) were of great importance (cf. Continued fraction; Jacobi matrix; Moment problem). References [1] D. Hilbert, "Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen" , Chelsea, reprint (1953) MR0056184 Zbl 0050.10201 [2] A.S. Besicovitch, "Almost periodic functions" , Cambridge Univ. Press (1932) MR0068029 MR1522718 Zbl 0004.25303 Zbl 58.0264.02 [3] J. von Neumann, "Allgemeine Eigenwerttheorie Hermitischer Funktionaloperatoren" Math. Ann. , 102 (1929) pp. 49–131 [4] F. Riesz, "Ueber die linearen Transformationen des komplexen Hilbertschen Raumes" Acta. Sci. Math. Szeged , 5 : 1 (1930) pp. 23–54 Zbl 56.0356.02 [5] J.A. Dieudonné, "Foundations of modern analysis" , Acad. Press (1961) (Translated from French) MR1531180 Zbl 0646.58001 Zbl 0708.46002 Zbl 0176.00502 Zbl 0122.29702 Zbl 0100.04201 [6] N. Bourbaki, "Elements of mathematics. Topological vector spaces" , Addison-Wesley (1977) (Translated from French) MR0583191 Zbl 1106.46003 Zbl 1115.46002 Zbl 0622.46001 Zbl 0482.46001 [7] N.I. [N.I. Akhiezer] Achieser, I.M. [I.M. Glaz'man] Glasman, "Theorie der linearen Operatoren im Hilbert Raum" , Akademie Verlag (1954) (Translated from Russian) MR0066560 Zbl 0056.11101 [8] T. Carleman, "Sur les équations intégrales singulières à noyau réel et symmétrique" Univ. Årsskrift : 3 , Uppsala (1923) [9] N.I. Akhiezer, "The classical moment problem and some related questions in analysis" , Oliver & Boyd (1965) (Translated from Russian) MR0184042 Zbl 0135.33803 [10] N. Dunford, J.T. Schwartz, "Linear operators" , 1–3 , Interscience (1958–1971) MR1009164 MR1009163 MR1009162 MR0412888 MR0216304 MR0188745 MR0216303 MR1530651 MR0117523 Zbl 0635.47003 Zbl 0635.47002 Zbl 0635.47001 Zbl 0283.47002 Zbl 0243.47001 Zbl 0146.12601 Zbl 0128.34803 Zbl 0084.10402 [11] F. Riesz, B. Szökefalvi-Nagy, "Functional analysis" , F. Ungar (1955) (Translated from French) MR0071727 Zbl 0732.47001 Zbl 0070.10902 Zbl 0046.33103 [12] M.A. Naimark, "Lineare Differentialoperatoren" , Akademie Verlag (1960) (Translated from Russian) MR0216049 [13] M.H. Stone, "Linear transformations in Hilbert space and their applications to analysis" , Amer. Math. Soc. (1932) Zbl 0005.40003 Zbl 58.0420.02 [14] A.N. Kolmogorov, S.V. Fomin, "Elements of the theory of functions and functional analysis" , 1–2 , Graylock (1957–1961) (Translated from Russian) MR1025126 MR0708717 MR0630899 MR0435771 MR0377444 MR0234241 MR0215962 MR0118796 MR1530727 MR0118795 MR0085462 MR0070045 Zbl 0932.46001 Zbl 0672.46001 Zbl 0501.46001 Zbl 0501.46002 Zbl 0235.46001 Zbl 0103.08801
# Mattern et al. (2018) – SEDIGISM: The kinematics of ATLASGAL filaments Analysing the kinematics of filamentary molecular clouds is a crucial step towards understanding their role in the star formation process. Therefore, we study the kinematics of 283 filament candidates in the inner Galaxy that were previously identified in the ATLASGAL dust continuum data. The $^{13}$CO(2 – 1) and C$^{18}$O(2 – 1) data of the SEDIGISM survey (Structure, Excitation, and Dynamics of the Inner Galactic Interstellar Medium) allow us to analyse the kinematics of these targets and to determine their physical properties at a resolution of 30 arcsec and 0.25 km/s. To do so, we developed an automated algorithm to identify all velocity components along the line-of-sight correlated with the ATLASGAL dust emission, and derive size, mass, and kinematic properties for all velocity components. We find that two-thirds of the filament candidates are coherent structures in position-position-velocity space. The remaining candidates appear to be the result of a superposition of two or three filamentary structures along the line-of-sight. At the resolution of the data, on average the filaments are in agreement with Plummer-like radial density profiles with a power-law exponent of $p = 1.5 \pm 0.5$, indicating that they are typically embedded in a molecular cloud and do not have a well-defined outer radius. Also, we find a correlation between the observed mass per unit length and the velocity dispersion of the filament of $m \sim \sigma_v^2$. We show that this relation can be explained by a virial balance between self-gravity and pressure. Another possible explanation could be radial collapse of the filament, where we can exclude infall motions close to the free-fall velocity. Mattern, M.; Kauffmann, J.; Csengeri, T.; Urquhart, J. S.; Leurini, S.; Wyrowski, F.; Giannetti, A.; Barnes, P. J.; Beuther, H.; Bronfman, L.; Duarte-Cabral, A.; Henning, T.; Kainulainen, J.; Menten, K. M.; Schisano, E.; Schuller, F. 2018, ArXiv e-prints, 1808, arXiv:1808.07499
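For reference, the virial balance invoked above is usually phrased through the critical line mass of a pressure-supported filament; in its standard form (my gloss, following the classical isothermal-cylinder result, not a formula quoted from the paper) $$m_{\mathrm{vir}} = \frac{2\sigma_v^2}{G},$$ so a filament in balance between self-gravity and internal pressure indeed obeys $m \sim \sigma_v^2$.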
When is downcasting allowed? From Stackoverflow -> https://stackoverflow.com/questions/380813/downcasting-in-java Object o = getSomeObject(); String s = (String) o; // this is allowed because o could reference a String Object o = new Object(); String s = (String) o; // this will fail at runtime, because o doesn't reference a String Object o = "a String"; String s = (String) o; // this will work, since o references a String Integer i = getSomeInteger(); String s = (String) i; // the compiler will not allow this, since i can never reference a String.
Recent Activity Distribution and upper bound of mimic numbers ★★ Author(s): Bhattacharyya Problem Let the notation $m|n$ denote "$m$ divides $n$". The mimic function in number theory is defined as follows [1]. Definition   For any positive integer $n$ divisible by $m$, the mimic function, , is given by, By using this definition of mimic function, the mimic number of any non-prime integer is defined as follows [1]. Definition   The number is defined to be the mimic number of any positive integer $n$, with respect to $m$, for the minimum value of which . Given these two definitions and a positive integer $m$, find the distribution of mimic numbers of those numbers divisible by $m$. Again, find whether there is an upper bound of mimic numbers for a set of numbers divisible by any fixed positive integer $m$. Keywords: Divisibility; mimic function; mimic number Coloring random subgraphs ★★ Author(s): Bukh If $G$ is a graph and $p \in [0,1]$, we let $G_p$ denote a subgraph of $G$ where each edge of $G$ appears in $G_p$ independently with probability $p$. Problem   Does there exist a constant $c$ so that $\mathbb{E}[\chi(G_{1/2})] > c\,\chi(G)/\log\chi(G)$? Keywords: coloring; random graph Are vertex minor closed classes chi-bounded? ★★ Author(s): Geelen Question   Is every proper vertex-minor closed class of graphs chi-bounded? Keywords: chi-bounded; circle graph; coloring; vertex minor Graphs with a forbidden induced tree are chi-bounded ★★★ Author(s): Gyarfas Say that a family $\mathcal{F}$ of graphs is $\chi$-bounded if there exists a function $f$ so that every $G \in \mathcal{F}$ satisfies $\chi(G) \leq f(\omega(G))$. Conjecture   For every fixed tree $T$, the family of graphs with no induced subgraph isomorphic to $T$ is $\chi$-bounded. Keywords: chi-bounded; coloring; excluded subgraph; tree Asymptotic Distribution of Form of Polyhedra ★★ Author(s): Rüdinger Problem   Consider the set of all topologically inequivalent polyhedra with edges. Define a form parameter for a polyhedron as where is the number of vertices. What is the distribution of for ? Keywords: polyhedral graphs, distribution Domination in plane triangulations ★★ Author(s): Matheson; Tarjan Conjecture   Every sufficiently large plane triangulation on $n$ vertices has a dominating set of size at most $n/4$. Keywords: coloring; domination; multigrid; planar graph; triangulation Bounding the chromatic number of triangle-free graphs with fixed maximum degree ★★ Author(s): Kostochka; Reed Conjecture   A triangle-free graph with maximum degree has chromatic number at most . Keywords: chromatic number; girth; maximum degree; triangle free Erdös-Szekeres conjecture ★★★ Author(s): Erdos; Szekeres Conjecture   Every set of at least $2^{n-2}+1$ points in the plane in general position contains a subset of $n$ points which form a convex $n$-gon. 4-flow conjecture ★★★ Author(s): Tutte Conjecture   Every bridgeless graph with no Petersen minor has a nowhere-zero 4-flow. Keywords: minor; nowhere-zero flow; Petersen graph Inequality of the means ★★★ Author(s): Question   Is it possible to pack $n^n$ rectangular $n$-dimensional boxes each of which has side lengths $a_1, a_2, \dots, a_n$ inside an $n$-dimensional cube with side length $a_1 + a_2 + \dots + a_n$? Keywords: arithmetic mean; geometric mean; Inequality; packing P vs. PSPACE ★★★ Author(s): Folklore Problem   Is there a problem that can be computed by a Turing machine in polynomial space and unbounded time but not in polynomial time? More formally, does P = PSPACE? 
Keywords: P; PSPACE; separation; unconditional Sums of independent random variables with unbounded variance ★★ Author(s): Feige Conjecture   If are independent random variables with , then Grunbaum's Conjecture ★★★ Author(s): Grunbaum Conjecture   If $G$ is a simple loopless triangulation of an orientable surface, then the dual of $G$ is 3-edge-colorable. Keywords: coloring; surface Refuting random 3SAT-instances on $O(n)$ clauses (weak form) ★★★ Author(s): Feige Conjecture   For every rational and every rational , there is no polynomial-time algorithm for the following problem. Given is a 3SAT (3CNF) formula on variables, for some , and clauses drawn uniformly at random from the set of formulas on variables. Return with probability at least 0.5 (over the instances) that is typical without returning typical for any instance with at least simultaneously satisfiable clauses. Keywords: NP; randomness in TCS; satisfiability Does the chromatic symmetric function distinguish between trees? ★★ Author(s): Stanley Problem   Do there exist non-isomorphic trees which have the same chromatic symmetric function? Keywords: chromatic polynomial; symmetric function; tree Shannon capacity of the seven-cycle ★★★ Author(s): Problem   What is the Shannon capacity of $C_7$? Keywords: Book Thickness of Subdivisions ★★ Author(s): Blankenship; Oporowski Let $G$ be a finite undirected simple graph. A $k$-page book embedding of $G$ consists of a linear order $\preceq$ of $V(G)$ and a (non-proper) $k$-colouring of $E(G)$ such that edges with the same colour do not cross with respect to $\preceq$. That is, if $v \prec x \prec w \prec y$ for some edges $vw, xy \in E(G)$, then $vw$ and $xy$ receive distinct colours. One can think that the vertices are placed along the spine of a book, and the edges are drawn without crossings on the pages of the book. The book thickness of $G$, denoted by bt$(G)$, is the minimum integer $k$ for which there is a $k$-page book embedding of $G$. Let $G'$ be the graph obtained by subdividing each edge of $G$ exactly once. Conjecture   There is a function $f$ such that for every graph $G$, $\mathrm{bt}(G) \leq f(\mathrm{bt}(G'))$. Keywords: book embedding; book thickness Frobenius number of four or more integers ★★ Author(s): Problem   Find an explicit formula for the Frobenius number of co-prime positive integers $a_1, a_2, \dots, a_k$ for $k \geq 4$. Keywords: Magic square of squares ★★ Author(s): LaBar Question   Does there exist a magic square composed of distinct perfect squares? Keywords: Diophantine quintuple conjecture ★★ Author(s): Definition   A set of $m$ positive integers $\{a_1, a_2, \dots, a_m\}$ is called a Diophantine $m$-tuple if $a_i a_j + 1$ is a perfect square for all $1 \leq i < j \leq m$. Conjecture  (1)   A Diophantine quintuple does not exist. It would follow from the following stronger conjecture [Da]: Conjecture  (2)   If $\{a, b, c, d\}$ is a Diophantine quadruple and $d > \max\{a, b, c\}$, then $d = a + b + c + 2abc + 2\sqrt{(ab+1)(ac+1)(bc+1)}$. Keywords:
# How do you solve $x^2 + 12x = -60$? May 8, 2017 #### Explanation: First we need to add 60 to make the right-hand side 0: ${x}^{2} + 12 x + 60 = 0$ Since this quadratic equation is not factorable, we apply the quadratic formula: $x = \frac{- 12 \pm \sqrt{{12}^{2} - 4 \cdot 1 \cdot 60}}{2 \cdot 1}$ $x = \frac{- 12 \pm \sqrt{- 96}}{2}$ Since we cannot take the square root of a negative value, we use $i$ to denote $\sqrt{- 1}$, the imaginary unit, and $\sqrt{-96} = 4 i \sqrt{6}$: $x = - 6 \pm \frac{4 i \sqrt{6}}{2}$ $x = - 6 \pm 2 i \sqrt{6}$ Therefore, our answers are $x = - 6 + 2 i \sqrt{6}$ and $x = - 6 - 2 i \sqrt{6}$ May 8, 2017 $x = - 6 \pm 2 i \sqrt{6}$ #### Explanation: Given: ${x}^{2} + 12 x = - 60$ Let us use the completing-the-square method: ${x}^{2} + 12 x + 36 = - 60 + 36$ ${\left(x + 6\right)}^{2} = - 24$ $x + 6 = \pm \sqrt{- 24} = \pm 2 i \sqrt{6}$ $x = - 6 \pm 2 i \sqrt{6}$
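Both answers can be verified symbolically; for instance with SymPy:

```python
from sympy import symbols, solve

x = symbols('x')
print(solve(x**2 + 12*x + 60, x))
# [-6 - 2*sqrt(6)*I, -6 + 2*sqrt(6)*I]
```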
Math & Programming

# /Backgrounds/

"One can imagine that the ultimate mathematician is one who can see analogies between analogies." – Stefan Banach.
# Choose the odd one out among the following?

A) Peru
B) Venezuela
C) Bolivia
D) Indonesia
What is the temperature shown on the given thermometer?

• $20^\circ F$
• $30^\circ F$
• $15^\circ F$
# Question #c5f12

Reduction of N-methylbenzamide with $L i A l {H}_{4}$ gives N-methylbenzylamine (benzylmethylamine):

${C}_{6} {H}_{5} C O N H C {H}_{3}$ + $L i A l {H}_{4}$ $\to$ ${C}_{6} {H}_{5} C {H}_{2} N H C {H}_{3}$

The carbonyl oxygen is removed during the reduction and ends up in the lithium/aluminium salts that are separated on aqueous work-up (often written schematically, if loosely, as $L i A l {\left(O H\right)}_{4}$); the scheme above is therefore a simplified, unbalanced one.
## probabilistic thinking application to daily life

Analyzing the author's coffee-drinking habit is an opportunity to reflect on probabilistic thinking and Bayesian decision analysis.

## coffee drinking habit

Coffee is the most popular drink in the world and, in the right dose, it gives our body various benefits. But coffee, as we all know, contains caffeine, a substance which must not be abused in order to avoid a series of side effects which in most cases are minor but in others may involve health risks (nervousness, insomnia, loss of appetite, damage to the cardiovascular system). Some studies suggest that the ideal amount of caffeine should not exceed 400 milligrams per day, the equivalent of four cups of coffee, for an adult man without any particular health problems.

The author is a coffee lover, maybe a coffee addict. He has a heavy coffee-drinking habit that on some days goes beyond the ideal daily number of cups. This post studies the problem of coffee consumption, taken from the author's daily life, using probabilistic analysis tools. The goal is to outline a problem-analysis methodology rooted in probabilistic thinking.

### one informal point of view …

Suppose an informed person gives a description of the habit of drinking coffee. The provided statement is: the author drinks between 1 and 7 cups of coffee a day. This description may be valid in an informal conversation, but it is not if we want to study the phenomenon of the author's coffee consumption.

A probabilistic formulation of the informal sentence above must define:

• the possible values that the consumption phenomenon can take, in terms of cups of coffee;
• the probability of each of these values;
• assumptions about the coffee consumption phenomenon across the sequence of days.

According to this schema, the probabilistic translation of the sentence could be: the author can drink from 1 to 7 coffees every day, each number of cups per day is equally likely, and it is independent of the cups taken the day before.

### … formalized in a uniform distribution

In probability theory and statistics, the discrete uniform distribution is a probability distribution wherein a finite number of values are equally likely to be observed; every one of n values has equal probability 1/n. Another way of saying "discrete uniform distribution" would be "a known, finite number of outcomes equally likely to happen".

$cups \sim Unif(1,7)$

In order to understand the meaning of the probabilistic definition, a random draw from the uniform distribution, covering the last 3 months, is displayed in the following graph.

The above visualization of the uniform description of the coffee-drinking habit shows how data points (cups) are distributed evenly from 1 to 7 cups a day, and that no data point is allowed outside this boundary.

But is this what the person questioned actually meant? Or has the natural, informal language used been misleading?

### another simple point of view …

Suppose another informed person gives a different description of the habit of drinking coffee. The provided statement is: the author drinks 4 coffees a day on average. As before, the informal description has to be translated into probabilistic terms which precisely define the uncertainty: the count of coffee cups drunk by the author in a day has a constant average rate of 4 cups a day, and the count on each day is independent of the cups taken the day before.
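Both informal descriptions can be simulated in a few lines of R; a minimal sketch, in which the 90-day horizon and the seed are arbitrary choices made here for illustration:

```r
set.seed(42)
days <- 90

# discrete uniform description: 1 to 7 cups, all equally likely
cups_unif <- sample(1:7, size = days, replace = TRUE)

# Poisson description: a constant average rate of 4 cups a day
cups_pois <- rpois(days, lambda = 4)

# frequency of each daily count under the two descriptions
table(cups_unif)
table(cups_pois)
```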
### … formalized in a poisson distribution

In probability theory and statistics, the Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time if these events occur with a known constant mean rate and independently of the time since the last event.

$cups \sim Poisson(4)$

In order to understand the meaning of the probabilistic definition, a realization from the Poisson distribution (a draw from a Poisson distribution with rate 4 for each day) is displayed in the following graph, covering the last 3 months.

The above visualization of the Poisson description of the coffee-drinking habit shows how data points (cups) are denser in the middle, but also that on some days the simulated number of cups drunk is implausibly high.

Is this simulated visualization of coffee cups drunk closer to reality?

## actual data

Suppose the author collected data in order to study his coffee-drinking habit. Each day for the last 3 months the coffee cups drunk have been recorded, together with the hours worked.

The actual cups of coffee drunk each day are visualized below, and the actual coffee-cups distribution in the following graph. Drinking more than 6 cups of coffee on a given day is rare; most of the time the author drank between 2 and 6 cups.

The hypothesis that the drinking habit could be related to worked hours is supported by the following exploratory visualization, in which a linear model line is fitted to the data. It seems that a weak positive relation exists, such that the number of coffee cups drunk increases as the daily working hours grow.

Another potential effect could be related to the day of the week. The above visualization says that the coffee-cups distribution changes depending on the day of the week: during the weekend the coffee "drinking habit addiction" seems to be weaker.

But it is likely that the relationships among the variables are as per the following directed acyclic graph (DAG): coffee consumption (cups) depends solely on worked hours, which in turn is affected by the day of the week.

## a Bayesian modeling attempt

For the purpose of this post the coffee-cups distribution is modeled with a simple Poisson regression, with the mean rate parameter ($\lambda$) depending only on the hours worked.

$cups \sim Poisson(\lambda) \\ \lambda = \beta \cdot hours_{worked} \\ \beta_{prior} \sim N(0.25, 0.1)$

(A better model could include an intercept term but, for the purpose of this post, the focus is on the hours worked as the actionable variable.)

The following graphs display the coefficients' credible intervals and their MCMC traces. The credible interval for the worked-hours beta does not contain zero. Furthermore, the corresponding trace plot for the coefficient, showing the sampled values per chain and parameter throughout the iterations, indicates that the chains reached convergence and are well mixed together.

As a measure of the goodness of fit of this model, a posterior predictive check is performed. This model-validation procedure compares what the model predicts versus what is expected (i.e. the data you have). The assumption underlying this concept is that a good model should generate fake data similar to the actual data set used to fit the model, while a bad model will generate data that is in some way fundamentally or systematically different.
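A sketch of how the fit and these checks might look with rstanarm and bayesplot; the data frame `coffee` and its column names are assumed here for illustration, and are not taken from the original code:

```r
library(rstanarm)

# Poisson regression with no intercept and an identity link,
# so that lambda = beta * hours_worked, with beta ~ N(0.25, 0.1)
fit <- stan_glm(cups ~ 0 + hours_worked,
                data   = coffee,
                family = poisson(link = "identity"),
                prior  = normal(0.25, 0.1),
                seed   = 1)

# posterior predictive checks: density and empirical CDF overlays
# of replicated data sets against the observed counts
pp_check(fit, plotfun = "dens_overlay")
pp_check(fit, plotfun = "ecdf_overlay")
```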
From the two plots above (the first for the density overlay, the second for the empirical cumulative distribution overlay) it is possible to state that the actual data could come from the data-generating process expressed by the model, especially when more than 2 cups of coffee have been drunk.

## working hours policies

Assuming the author has to work at least 40 hours to earn a living, possible working-time policies are limited to the following:

• no overtime, no working weekend (from Monday to Sunday the working-hours sequence is 8,8,8,8,8,0,0)
• reduced working hours but working weekend (from Monday to Sunday the working-hours sequence is 7,7,7,7,6,4,2)
• increased working hours but Friday off (from Monday to Sunday the working-hours sequence is 10,10,10,10,0,0,0)

Using the model trained for predicting the response variable (the posterior predictive distribution, in Bayesian terms) on these three different working-hours policies, it is possible to determine the overall average coffee-related satisfaction for each of them. The computation involves the following two sequential mappings (a sketch of this computation closes the post):

$hours_{worked} \mapsto cups_{coffee} \mapsto wellness$

where:

• the first mapping is done by applying the simple fitted Bayesian model to predict the coffee consumption;
• the second mapping is performed with the mapping function defined above that relates coffee drinking to the subjective wellness of the author.

The Bayesian decision analysis of the habit of coffee consumption leads the author to recommend to himself distributing working hours across the weekend as well, by reducing the time devoted to work on each working day. Compressing work into fewer days, even while limiting it to 40 hours, seems instead to lead to a less satisfactory coffee-drinking habit. As far as coffee-drinking satisfaction is concerned, the author should in any case reduce the overall working hours and distribute them evenly.

## final considerations

Although the use case analyzed is quite simple, even trivial, the author's intention is to outline an approach to probabilistic thinking. This approach includes the following steps:

• evaluate how the uncertainty related to the context of the problem is described by the subject-matter expert in his or her informal language;
• collect and explore data for the problem at hand;
• model the uncertainty, relating the response to a variable which is actionable;
• map the response to a measure of (dis)satisfaction;
• define the available policies on the actionable variable;
• use the model to predict the measure of satisfaction for each policy, and define which policy is best for the specified objective.

Bayesian decision analysis is extensively covered in the literature, but not often used in public-policy or business-strategy contexts. Application to daily-life decision making seems to be really rare. The author believes that the ultimate goal of the data-science cultural shift is to promote probabilistic thinking as an effective counterpart to any form of reasoning that mainly follows the principle: go where your heart takes you.

Feel free to email me if you would like to go deeper into the analysis; thanks for reading!

The analysis shown in this post has been executed using R as the main computation tool, together with its gorgeous ecosystem (tidyverse included). In particular, the Bayesian analysis was based on the rstanarm and bayesplot packages.
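As promised above, a sketch of the policy-comparison step. The wellness mapping below is a stand-in invented here for illustration (the original post defines its own), and `fit` is the model object fitted earlier:

```r
# weekly working-hours policies (Monday..Sunday), from the list above
policies <- list(
  no_overtime = c(8, 8, 8, 8, 8, 0, 0),
  spread_week = c(7, 7, 7, 7, 6, 4, 2),
  friday_off  = c(10, 10, 10, 10, 0, 0, 0)
)

# hypothetical wellness mapping: satisfaction peaks around 4 cups a day
wellness <- function(cups) -abs(cups - 4)

# average wellness over the posterior predictive draws for each policy
sapply(policies, function(hours) {
  draws <- posterior_predict(fit, newdata = data.frame(hours_worked = hours))
  mean(wellness(draws))
})
```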
Math Help - Related Rates 3

1. Related Rates 3

3) Coffee is draining from a conical filter into a cylindrical coffee pot at the rate of 10 in^3/min.
a) How fast is the level in the pot rising when the coffee in the cone is 5 inches deep?
b) How fast is the level in the cone falling then?
[A diagram on the page shows the height and diameter of the cone is 6 inches; the diameter of the pot is also 6 inches]

Huge thanks in advance to whoever helps

2. So there are two figures to analyze here.

The cylindrical pot:
---radius, r1 = 6/2 = 3 in.
---dV2/dt = 10 cu.in/min

a) How fast is the level in the pot rising when the coffee in the cone is 5 inches deep?

Whatever the height of the coffee in the cone, the volume of coffee pouring into the pot is always 10 cu.in/min.

V2 = pi(r2)^2 * h2
r2 does not change. It is constant at 3 inches, so,
V2 = pi(3^2)h2 = (9pi)h2
So,
dV2/dt = (9pi)(dh2/dt)
10 = (9pi)(dh2/dt)
dh2/dt = 10/(9pi) in/min

Therefore, the level of the coffee in the pot is rising at the rate of 10/(9pi) in/min. ------------------answer.

The conical filter:
---total height = 6 in.
---radius at top = 6/2 = 3 in.
---dV1/dt = -10 cu.in/min

V1 = (1/3)[pi(r1)^2]h1 = (pi/3)(r1)^2 * h1 -----(i)

Since we are interested in the height of the coffee only, we will express r1 in terms of h1.

Imagine, or draw the figure on paper. It is an inverted isosceles triangle that is 6 in wide at the top, 0 at the bottom, and 6 in high. Draw the level of the coffee inside. Another inverted isosceles triangle is formed, whose top is 2r1 wide, 0 at the bottom, and h1 high.

By proportion, top/height: 6/6 = 2r1/h1
Cross multiply: 6*h1 = 6*2r1
h1 = 2r1
r1 = h1/2

Plug that into (i),
V1 = (pi/3)(h1/2)^2 * h1
V1 = (pi/12)(h1)^3
dV1/dt = (pi/12)[3(h1)^2 dh1/dt]
dV1/dt = (pi/4)(h1)^2 dh1/dt

Plug in the givens,
-10 = (pi/4)(5^2) dh1/dt
-10 = (25pi/4) dh1/dt
dh1/dt = -10 / (25pi/4) = -40/(25pi) = -8/(5pi) in/min

Therefore, at that instant, the depth of the coffee in the cone is decreasing at the rate of 8/(5pi) in/min. --------------answer.

--------------------------------------------------------------
Uh, I may be wrong in my understanding of "Coffee is draining from a conical filter into a cylindrical coffee pot at the rate of 10 in^3/min."
My understanding is that the rate is always 10 cu.in/min, whatever the depth of the coffee in the conical filter. If the 10 cu.in/min holds only when the conical filter is full, then my answers above are wrong.

3. Hello, Super Mallow!

3) Coffee is draining from a conical filter into a cylindrical coffee pot at the rate of 10 in³/min.
a) How fast is the level in the pot rising when the coffee in the cone is 5 inches deep?
b) How fast is the level in the cone falling then?
[A diagram on the page shows the height and diameter of the cone is 6 inches; diameter of pot is also 6 inches]

Code: [ASCII sketch, flattened in this copy: an inverted cone of radius 3 and height 6 holding coffee of depth h and surface radius r, above a cylindrical pot of radius 3 holding coffee of depth y]

(a) The pot has a radius of 3 inches. The coffee in the pot has radius 3 and height $y$.

The volume of the coffee in the pot is: $V \:=\:\pi r^2h$

Hence, the volume of coffee is: $V \:=\:\pi(3^2)y \:=\:9\pi y$

Differentiate with respect to time: $\frac{dV}{dt} \:=\:9\pi\left(\frac{dy}{dt}\right)$

We are told that: $\frac{dV}{dt} = 10$ in³/min. So we have:
$10 \:=\:9\pi\left(\frac{dy}{dt}\right)\quad\Rightarrow\quad\frac{dy}{dt} \:=\:\frac{10}{9\pi}$

The coffee in the pot is rising at a rate of about $\boxed{0.35\text{ in/min}}$

(b) The cone has radius 3 and height 6. The coffee in the cone has radius $r$ and height $h$.

The volume of the coffee is: $V \:=\:\frac{\pi}{3}r^2h$ .[1]

From the similar right triangles, we have: $\frac{r}{h} \:=\:\frac{3}{6}\quad\Rightarrow\quad r \,=\,\frac{h}{2}$

Substitute into [1]: $V \:=\:\frac{\pi}{3}\left(\frac{h}{2}\right)^2h \:=\:\frac{\pi}{12}h^3$

Differentiate with respect to time: $\frac{dV}{dt} \:=\:\frac{\pi}{4}h^2\left(\frac{dh}{dt}\right)$ .[2]

We are given: $h =5,\;\frac{dV}{dt} =-10$ in³/min

Substitute into [2]: $-10 \:=\:\frac{\pi}{4}(5^2)\left(\frac{dh}{dt}\right)\quad\Rightarrow\quad\frac{dh}{dt}\:=\:-\frac{8}{5\pi}$

The coffee in the cone is falling at about $\boxed{0.51\text{ in/min}}$

4. Ah, I start to see it now. The conical part was a bit confusing; I tried the problem and actually got part A! Part B makes sense now. Thanks so much for the help, guys!
Abstract

An electromagnetic and radio-wave model of diamondiferous rocks is presented, derived from the outcomes of research carried out by the authors in Western Jakutija. The possibilities of the method used in the course of the research, and the construction of the model, are considered.
# Multi-Lidar Calibration

This example shows how to calibrate multiple 3-D lidar sensors mounted on a vehicle to estimate a relative transformation between them. Traditional methods, such as marker-based registration, are difficult when the lidar sensors have a negligible overlap between their fields of view (FOVs). The calibration also becomes more difficult as the number of lidar sensors increases. This example demonstrates the use of the trajectories of individual lidar sensors to estimate the transformation between them. This method of calibration is also known as hand-eye calibration.

The use of multiple lidar sensors on an autonomous vehicle helps to remove blind spots, increases redundancy, and enables high-resolution map creation. To extract meaningful information from multiple lidar sensors, you can fuse the data using the transformation between them. Fusing multiple lidars can be challenging because of variations in resolution between different lidar sensors. This example also demonstrates how to create a high-resolution point cloud map by fusing the point clouds from multiple lidar sensors.

This example uses synthetic input data generated using the Unreal Engine® by Epic Games®. The figure shows the configuration of the sensors mounted on the vehicle.

The generated data simulates a vehicle on a predefined trajectory in an urban road setting. For details on how to interactively select a sequence of waypoints from a scene and generate vehicle trajectories, see the Select Waypoints for Unreal Engine Simulation (Automated Driving Toolbox) example. Use the helperShowSceneImage helper function to visualize the path the vehicle follows while collecting the data.

% Load reference path for recorded drive segment
% (xData, yData, and yawData are assumed to be loaded from the MAT-files
% that ship with this example)
% Set up workspace variables used by model
refPosesX = xData.refPosesX;
refPosesY = yData.refPosesY;
refPosesT = yawData.refPosesT;

if ~ispc
    error(['3D Simulation is only supported on Microsoft', ...
        char(174),' Windows',char(174),'.']);
end

sceneName = "VirtualMCity";
hScene = figure;
helperShowSceneImage(sceneName)
hold on
scatter(refPosesX(:,2),refPosesY(:,2),7,'filled')
xlim([-50 100])
ylim([-50 75])

### Record Synthetic Data

The MultiLidarSimulation Simulink model is configured for the Virtual Mcity (Automated Driving Toolbox) 3-D environment using the Simulation 3D Scene Configuration (Automated Driving Toolbox) block. A vehicle of type box truck is configured in the scene using the Simulation 3D Vehicle with Ground Following (Automated Driving Toolbox) block. The vehicle has two lidar sensors mounted on it using the Simulation 3D Lidar (Automated Driving Toolbox) block, one at the front bumper and the other at the rear bumper. The mounting position of the lidar sensors can be adjusted using the Mounting tab in the simulation block.

modelName = 'MultiLidarSimulation';
open_system(modelName)

The model records synthetic lidar data and saves it to the workspace.

% Update simulation stop time to end when reference path is completed
simStopTime = refPosesX(end,1);
set_param(gcs,StopTime=num2str(simStopTime));

% Run the simulation
simOut = sim(modelName);

### Extract Lidar Odometry

There are several simultaneous localization and mapping (SLAM) methods that estimate the odometry from lidar data by registering successive point cloud frames. You can further optimize the relative transform between the frames through loop closure detection.
For more details on how to generate a motion trajectory using the NDT-based registration method, see the Design Lidar SLAM Algorithm Using Unreal Engine Simulation Environment (Automated Driving Toolbox) example. For this example, use the helperExtractLidarOdometry helper function to generate the motion trajectories, as pcviewset objects, from the simulation output simOut.

% Front lidar translation and rotation
frontLidarTranslations = simOut.lidarLocation1.signals.values;
frontLidarRotations = simOut.lidarRotation1.signals.values;

% Back lidar translation and rotation
backLidarTranslations = simOut.lidarLocation2.signals.values;
backLidarRotations = simOut.lidarRotation2.signals.values;

% Extract point clouds from the simulation output
[frontLidarPtCloudArr,backLidarPtCloudArr] = helperExtractPointCloud(simOut);

% Extract lidar motion trajectories
frontLidarVset = helperExtractLidarOdometry(frontLidarTranslations,frontLidarRotations, ...
    frontLidarPtCloudArr);
backLidarVset = helperExtractLidarOdometry(backLidarTranslations,backLidarRotations, ...
    backLidarPtCloudArr);

The helperVisualizeLidarOdometry helper function visualizes the accumulated point cloud map with the motion trajectory overlaid on it. Here, the trajectories of the two sensors are plotted for comparison.

% Extract absolute poses of each lidar sensor
frontLidarAbsPos = frontLidarVset.Views.AbsolutePose;
backLidarAbsPos = backLidarVset.Views.AbsolutePose;

% Visualize the lidar trajectories
figure
plot(frontLidarVset)
hold on
plot(backLidarVset)
legend({'Front Lidar Trajectory','Back Lidar Trajectory'})
title("Lidar Trajectory")
view(2)

The trajectories of the two lidar sensors appear to be shifted by 180 degrees. This is because the lidar sensors are configured facing in opposite directions in the Simulink model.

### Align Lidar Trajectory

General registration-based methods, using point clouds, often fail to calibrate lidar sensors with nonoverlapping or negligible-overlap fields of view because they lack sufficient corresponding features. To overcome this challenge, use the motion of the vehicle for registration. Because of the rigid nature of the vehicle and the sensors mounted on it, the motion of each sensor correlates to the relative transformation between the sensors. To extract this relative transformation, formulate the solution to align the lidar trajectories as a hand-eye calibration that involves solving the equation $AX = XB$, where $A$ and $B$ are successive poses of the two sensors. You can further decompose this equation into its rotation and translation components.
$R_{a_{k-1}}^{a_k} \cdot R_b^a = R_b^a \cdot R_{b_{k-1}}^{b_k}$

$R_{a_{k-1}}^{a_k} \cdot t_b^a + t_{a_{k-1}}^{a_k} = R_b^a \cdot t_{b_{k-1}}^{b_k} + t_b^a$

$R_{a_{k-1}}^{a_k}$ and $t_{a_{k-1}}^{a_k}$ are the rotation and translation components of sensor $a$ from timestamp $k-1$ to $k$. $R_b^a$ and $t_b^a$ are the rotation and translation components of the relative transformation between the two sensors.

This figure shows the relationship between the relative transformation and the successive poses of the two sensors. $T_{a_{k-1}}^{a_k}$ and $T_{b_{k-1}}^{b_k}$ are the total transformations of sensors $a$ and $b$, and $T_b^a$ is the relative transformation.

There are multiple ways to solve the equations for rotation and translation [1]. Use the helperEstimateHandEyeTransformation helper function, attached as a supporting file to this example, to estimate the initial transformation between the two lidar sensors as a rigid3d object. To extract the rotation component of the equation, the function converts the rotation matrices into quaternion form, restructured as a linear system. The function finds the closed-form solution of this linear system using singular value decomposition [2].

tformInit = helperEstimateHandEyeTransformation(backLidarAbsPos, frontLidarAbsPos);

### Transformation Refinement

To further refine the transformation, use a registration-based method. Input the translation of each lidar sensor from their respective trajectories to the registration. Use the helperExtractPosFromTform helper function to convert the trajectories of the sensors into pointCloud objects. For registration, use the pcregistericp function with the calculated rotation component tformInit as your initial transformation.

% Extract the translation of each sensor in the form of a point cloud object
frontLidarTrans = helperExtractPosFromTform(frontLidarAbsPos);
backLidarTrans = helperExtractPosFromTform(backLidarAbsPos);

% Register the trajectories of the two sensors
tformRefine = pcregistericp(backLidarTrans,frontLidarTrans, ...
    'InitialTransform',tformInit,Metric='pointToPoint');

Note that the accuracy of the calibration depends on how accurately you estimate the motion of each sensor. To simplify the computation, the motion estimate for the vehicle assumes the ground plane is flat. Because of this assumption, the estimation loses one degree of freedom along the Z-axis. You can estimate the transformation along the Z-axis by using the ground plane detection method [3]. Use the pcfitplane function to estimate the ground plane from the point clouds of the two lidar sensors. The function estimates the height of each sensor from the detected ground planes of the two lidar sensors.
The ground planes are estimated from a single frame of the point cloud arrays extracted earlier with the helperExtractPointCloud helper function.

% Maximum allowed distance between the ground plane and inliers
maxDist = 0.8;

% Reference vector for ground plane
refVector = [0 0 1];

% Fit a plane to a single point cloud frame
frame = 2;
frontPtCloud = frontLidarPtCloudArr(frame);
backPtCloud = backLidarPtCloudArr(frame);
[~,frontLidarInliers,~] = pcfitplane(frontPtCloud,maxDist,refVector);
[~,backLidarInliers,~] = pcfitplane(backPtCloud,maxDist,refVector);

% Extract the relative translation along the Z-axis
frontGroundPlane = select(frontPtCloud,frontLidarInliers);
backGroundPlane = select(backPtCloud,backLidarInliers);

frontGroundPts = frontGroundPlane.Location;
backGroundPts = backGroundPlane.Location;

% Compute the difference between the mean values of the extracted ground planes
zRel = mean(frontGroundPts(:,3)) - mean(backGroundPts(:,3));

% Update the initial transformation with the estimated relative translation
% in the Z-axis
tformRefine.Translation(3) = zRel;

### Fuse Point Cloud

After obtaining the relative transformation between the two lidar sensors, fuse the point clouds from the two lidar sensors. Then merge the fused point clouds sequentially to create a point cloud map of the data from the two lidar sensors. This figure shows the point cloud fusion method of point cloud map creation.

Use the helperVisualizedFusedPtCloud helper function to fuse the point clouds from the two lidar sensors, overlaid with the fused trajectory after calibration. From the fused point cloud map, you can visually infer the accuracy of the calibration.

helperVisualizedFusedPtCloud(backLidarVset,frontLidarVset,tformRefine)

### Results

The accuracy of the calibration is measured with respect to the ground truth transformation obtained from the mounting locations of the sensors. The Sport Utility Vehicle (Vehicle Dynamics Blockset) documentation page provides the details of the mounting position of the two lidar sensors. The relative transformation between the two lidar sensors is loaded from the gTruth.mat file.

% Load the ground truth transformation
gt = load("gTruth.mat");
tformGt = gt.gTruth;

% Compute the translation error along the x-, y-, and z-axes
transError = tformRefine.Translation - tformGt.Translation;
fprintf("Translation error along x in meters: %d",transError(1));
Translation error along x in meters: 8.913606e-03
fprintf("Translation error along y in meters: %d",transError(2));
Translation error along y in meters: 6.720094e-03
fprintf("Translation error along z in meters: %d",transError(3));
Translation error along z in meters: 2.294692e-02

% Convert the estimated and ground-truth rotations to ZYX Euler angles in
% degrees for comparison (this conversion is an assumed reconstruction;
% rotError(3), rotError(2), and rotError(1) then correspond to x, y, and z)
rEst = rad2deg(rotm2eul(tformRefine.Rotation'));
rGt = rad2deg(rotm2eul(tformGt.Rotation'));

% Compute the rotation error about the x-, y-, and z-axes
rotError = rEst - rGt;
fprintf("Rotation error along x in degrees: %d",rotError(3));
Rotation error along x in degrees: -4.509040e-04
fprintf("Rotation error along y in degrees: %d",rotError(2));
Rotation error along y in degrees: 2.201822e-05
fprintf("Rotation error along z in degrees: %d",rotError(1));
Rotation error along z in degrees: 2.545250e-02

### Supporting Functions

helperExtractPointCloud extracts an array of pointCloud objects from a simulation output.
function [ptCloudArr1,ptCloudArr2] = helperExtractPointCloud(simOut)

% Extract signal
ptCloudData1 = simOut.ptCloudData1.signals.values;
ptCloudData2 = simOut.ptCloudData2.signals.values;

numFrames = size(ptCloudData1,4);

% Create a pointCloud array
ptCloudArr1 = pointCloud.empty(0,numFrames);
ptCloudArr2 = pointCloud.empty(0,numFrames);

for n = 1:size(ptCloudData1,4)
    ptCloudArr1(n) = pointCloud(ptCloudData1(:,:,:,n));
    ptCloudArr2(n) = pointCloud(ptCloudData2(:,:,:,n));
end
end

helperExtractLidarOdometry extracts the total transformation of the sensors.

function vSet = helperExtractLidarOdometry(location,theta,ptCloud)

numFrames = size(location, 3);
vSet = pcviewset;
tformRigidAbs = rigid3d;

yaw = theta(:,3,1);
rot = [cos(yaw) sin(yaw) 0; ...
      -sin(yaw) cos(yaw) 0; ...
       0        0        1];

% Use first frame as reference frame
tformOrigin = rigid3d(rot,location(:,:,1));
vSet = addView(vSet,1,tformRigidAbs(1),PointCloud=ptCloud(1));

for i = 2:numFrames
    yawCurr = theta(:,3,i);
    rotatCurr = [cos(yawCurr) sin(yawCurr) 0; ...
                -sin(yawCurr) cos(yawCurr) 0; ...
                 0            0            1];
    transCurr = location(:,:,i);
    tformCurr = rigid3d(rotatCurr,transCurr);

    % Absolute pose
    tformRigidAbs(i) = rigid3d(tformCurr.T * tformOrigin.invert.T);

    % Transform between frame k-1 and k
    relPose = rigid3d(tformRigidAbs(i-1).T * tformRigidAbs(i).invert.T);

    % Store the absolute pose and connect successive frames with the
    % relative pose (assumed bookkeeping; later code reads vSet.Views
    % and plots the trajectory, so the view set must be populated)
    vSet = addView(vSet,i,tformRigidAbs(i),PointCloud=ptCloud(i));
    vSet = addConnection(vSet,i-1,i,relPose);
end
end

helperVisualizedFusedPtCloud visualizes a point cloud map from the fusion of two lidar sensors.

function helperVisualizedFusedPtCloud(movingVset,baseVset,tform)

hFig = figure(Name='Point Cloud Fusion', ...
    NumberTitle='off');
ax = axes(Parent=hFig);

% Create a scatter object for map points
scatterPtCloudBase = scatter3(ax,NaN,NaN,NaN, ...
    2,'magenta','filled');
hold(ax,'on');
scatterPtCloudMoving = scatter3(ax,NaN,NaN,NaN, ...
    2,'green','filled');
scatterMap = scatter3(ax,NaN,NaN,NaN, ...
    5,'filled');

% Create a scatter object for relative positions
positionMarkerSize = 5;
scatterTrajectoryBase = scatter3(ax,NaN,NaN,NaN, ...
    positionMarkerSize,'magenta','filled');
scatterTrajectoryMoving = scatter3(ax,NaN,NaN,NaN, ...
    positionMarkerSize,'green','filled');
hold(ax,'off');

% Set background color
ax.Color = 'k';
ax.Parent.Color = 'k';

% Set labels
xlabel(ax,'X (m)')
ylabel(ax,'Y (m)')

% Set grid colors
ax.GridColor = 'w';
ax.XColor = 'w';
ax.YColor = 'w';

% Set aspect ratio for axes
axis(ax,'equal')
xlim(ax,[-30 100]);
ylim(ax,[-80 40]);
title(ax,'Lidar Point Cloud Map Building',Color=[1 1 1])

ptCloudsMoving = movingVset.Views.PointCloud;
absPoseMoving = movingVset.Views.AbsolutePose;
ptCloudsBase = baseVset.Views.PointCloud;
absPoseBase = baseVset.Views.AbsolutePose;
numFrames = numel(ptCloudsMoving);

% Extract relative positions from the absolute poses
relPositionsMoving = arrayfun(@(poseTform) transformPointsForward(poseTform, ...
    [0 0 0]),absPoseMoving,UniformOutput=false);
relPositionsMoving = vertcat(relPositionsMoving{:});
relPositionsBase = arrayfun(@(poseTform) transformPointsForward(poseTform, ...
    [0 0 0]),absPoseBase,UniformOutput=false);
relPositionsBase = vertcat(relPositionsBase{:});

set(scatterTrajectoryBase,'XData',relPositionsMoving(1,1),'YData', ...
    relPositionsMoving(1,2),'ZData',relPositionsMoving(1,3));
set(scatterTrajectoryMoving,'XData',relPositionsBase(1,1),'YData', ...
    relPositionsBase(1,2),'ZData',relPositionsBase(1,3));

% Set legend
legend(ax,{'Front Lidar','Back Lidar'}, ...
    Location='southwest',TextColor='w')

skipFrames = 5;
for n = 2:skipFrames:numFrames
    pc1 = pctransform(removeInvalidPoints(ptCloudsMoving(n)),absPoseMoving(n));
    pc2 = pctransform(removeInvalidPoints(ptCloudsBase(n)),absPoseBase(n));

    % Transform moving point cloud to the base
    pc1 = pctransform(pc1,tform);

    % Create a point cloud map and merge point clouds from the sensors
    baseMap = pcalign(ptCloudsBase(1:n),absPoseBase(1:n),1);
    movingMap = pcalign(ptCloudsMoving(1:n),absPoseMoving(1:n),1);
    movingMap = pctransform(movingMap,tform);
    map = pcmerge(baseMap,movingMap,0.1);

    % Transform the position of the moving sensor to the base
    xyzTransformed = [relPositionsMoving(1:n,1),relPositionsMoving(1:n,2), ...
        relPositionsMoving(1:n,3)]*tform.Rotation + tform.Translation;

    % Plot current point cloud of each individual sensor with respect to the
    % ego vehicle
    set(scatterPtCloudBase,'XData',pc2.Location(:,1),'YData', ...
        pc2.Location(:,2),'ZData',pc2.Location(:,3));
    set(scatterPtCloudMoving,'XData',pc1.Location(:,1),'YData', ...
        pc1.Location(:,2),'ZData',pc1.Location(:,3))

    % Plot fused point cloud map
    set(scatterMap,'XData',map.Location(:,1),'YData', ...
        map.Location(:,2),'ZData',map.Location(:,3),'CData',map.Location(:,3));

    % Plot trajectory
    set(scatterTrajectoryBase,'XData',relPositionsBase(1:n,1),'YData', ...
        relPositionsBase(1:n,2),'ZData',relPositionsBase(1:n,3));
    set(scatterTrajectoryMoving,'XData',xyzTransformed(:,1),'YData', ...
        xyzTransformed(:,2),'ZData',xyzTransformed(:,3));

    % Draw ego vehicle assuming the dimensions of a sport utility vehicle,
    % oriented with the Euler angles of the current base pose
    eul = rotm2eul(absPoseBase(n).Rotation');
    t = xyzTransformed(end,:) + [4.774 0 0]/2*(absPoseBase(n).Rotation);
    pos = [t 4.774 2.167 1.774 eul(2) eul(3) eul(1)];
    showShape('cuboid',pos,Color='yellow',Parent=ax,Opacity=0.9);

    view(ax,2)
    drawnow limitrate
end
end

helperExtractPosFromTform converts the translation from a pose to a pointCloud object.

function ptCloud = helperExtractPosFromTform(pose)
numFrames = numel(pose);
location = zeros(numFrames,3);
for i = 1:numFrames
    location(i,:) = pose(i).Translation;
end
ptCloud = pointCloud(location);
end

### References

[1] Shah, Mili, Roger D. Eastman, and Tsai Hong. "An Overview of Robot-Sensor Calibration Methods for Evaluation of Perception Systems." In Proceedings of the Workshop on Performance Metrics for Intelligent Systems - PerMIS '12, 15. College Park, Maryland: ACM Press, 2012. https://doi.org/10.1145/2393091.2393095.

[2] Chou, Jack C. K., and M. Kamel. "Finding the Position and Orientation of a Sensor on a Robot Manipulator Using Quaternions." The International Journal of Robotics Research 10, no. 3 (June 1991): 240–54. https://doi.org/10.1177/027836499101000305.

[3] Jiao, Jianhao, Yang Yu, Qinghai Liao, Haoyang Ye, Rui Fan, and Ming Liu. "Automatic Calibration of Multiple 3D LiDARs in Urban Environments." In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 15–20. Macau, China: IEEE, 2019. https://doi.org/10.1109/IROS40897.2019.8967797.
# LOG#063. SM(V): Gauge fixing.

Gauge theories require that we select "a gauge" in order to calculate physical observables. That is, you have to fix the gauge to eliminate field configurations that are physically equivalent (they cannot be distinguished as field configurations). The gauge-fixing procedure is very hard or practically impossible for non-abelian YM theories unless you work with the so-called "functional approach", using some devices invented by Feynman himself and called path integrals (you can imagine path integrals as infinitely-iterated integrals, or infinite differential forms somehow, but we will not require a deep understanding of these topics, since my blog posts on the SM basics don't pretend to cover such advanced topics in a very precise formulation; if you are interested, learn more about path integrals). Using the path integral or functional approach, theoretical physicists have to apply a technique, called the Faddeev–Popov (FP) method/procedure, to erase every physically equivalent field configuration in the path integral after the selection of gauge. That is, in summary, the key idea is:

$1 = \int \mathcal{D}\alpha(x)\, \delta\big(f(A^\alpha)\big)\, \det\left(\frac{\delta f(A^\alpha)}{\delta \alpha}\right)$

Moreover, the gauge fixing consists generally in a prescription of "picking some constraining functions". Those functions can be any functions of the fields, and thus we can distinguish two types of gauge fixing: linear gauge fixing and non-linear gauge fixing.

The essence of the FP procedure is to restrict and constrain the functional integral/path integral. The restriction is realized by a gauge-fixing condition, and it can take the form of a functional delta function:

$\delta\big(f(A) - \omega(x)\big)$

and then we perform the integration over $\omega$ with the aid of a "Gaussian" weight function. The function $f$ is arbitrary. The method works nicely:

$\int \mathcal{D}\omega\, \exp\left(-i\int d^4x\, \frac{\omega^2}{2\xi}\right)\, \delta\big(f(A) - \omega\big)$

and then

$\mathcal{L}_{GF} = -\frac{1}{2\xi}\, f(A)^2$

## Linear gauge fixing

Linear gauge fixing is, by definition, a gauge-fixing procedure that uses LINEAR functions of the fields, e.g., $f(A) = \partial_\mu A^\mu$.

Functional methods allow us to introduce two-point correlation functions from the effective Lagrangian. The gauge propagator is defined to be:

$\tilde{D}^{\mu\nu}(k) = \frac{-i}{k^2 + i\epsilon}\left(g^{\mu\nu} - (1-\xi)\frac{k^\mu k^\nu}{k^2}\right)$

This one-parameter class of gauge choices is known as the $R_\xi$ gauge. Several concrete gauges have their own names in the literature due to their uses. Thus, we have 3 specially useful gauges:

1st. Landau gauge ($\xi = 0$).

2nd. Feynman–'t Hooft gauge ($\xi = 1$).

3rd. Unitary gauge ($\xi \to \infty$).

## Gauge fixing in the SM

In the context of SSB theories, the SM in particular, the $R_\xi$ gauge is introduced using gauge-fixing functions which are linear in the gauge fields and the Goldstone fields. After a functional integration, the so-called gauge-fixing Lagrangian pieces arise in the SM:

$\mathcal{L}_{GF} = -\frac{1}{\xi_W}\, f^+ f^- - \frac{1}{2\xi_Z}\left(f^Z\right)^2 - \frac{1}{2\xi_A}\left(f^A\right)^2$

with $f^\pm = \partial_\mu W^{\pm\,\mu} \mp i\,\xi_W M_W \phi^\pm$, $f^Z = \partial_\mu Z^\mu - \xi_Z M_Z \chi$ and $f^A = \partial_\mu A^\mu$, where the fields $\phi^\pm$, $\chi$ are the Goldstone bosons corresponding to the broken gauge symmetries we select, and where the sum is taken over Lagrangian pieces which are invariant under the UNBROKEN gauge symmetry of the theory left after the SSB. This choice is associated to the massive gauge boson propagators:

$\tilde{D}^{\mu\nu}(k) = \frac{-i}{k^2 - M^2}\left(g^{\mu\nu} - (1-\xi)\frac{k^\mu k^\nu}{k^2 - \xi M^2}\right)$

Thus, the effects of giving mass terms to the Goldstone bosons ("modes") are two:

1st. The Goldstone bosons acquire gauge-dependent masses, proportional to the gauge parameters: $m^2 = \xi M^2$.

2nd. The introduction of unphysical degrees of freedom (the Goldstone modes) that are visible or existing as virtual particles inside Feynman diagrams. Something similar happens with some extra fields we will require later, the Faddeev–Popov ghosts, for consistency. The FP ghosts will be unphysical as well. If some massive Goldstone mode survives, it is a hint of the SSB, as we will explain in a forthcoming comment.

## The unitary gauge

There are several gauge choices, as we remarked above.
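Before singling one out, it is a standard one-line exercise to see what happens to the massive $R_\xi$ propagator above as $\xi \to \infty$: since $(1-\xi)/(k^2 - \xi M^2) \to 1/M^2$ in that limit,

$\lim_{\xi\to\infty} \frac{-i}{k^2 - M^2}\left(g^{\mu\nu} - (1-\xi)\frac{k^\mu k^\nu}{k^2 - \xi M^2}\right) = \frac{-i}{k^2 - M^2}\left(g^{\mu\nu} - \frac{k^\mu k^\nu}{M^2}\right)$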
One of them, the unitary gauge, is particularly interesting since it erases the Goldstone boson terms completely, and it results in precisely the propagator displayed above,

$\frac{-i}{k^2 - M^2}\left(g^{\mu\nu} - \frac{k^\mu k^\nu}{M^2}\right)$

for the gauge bosons. It can be obtained by sending $\xi$ to infinity in the $R_\xi$ gauge. This gauge is a very simple choice, and we don't have to include Goldstone bosons in the perturbative calculations via Feynman diagrams. It reduces the complexity of the mathematical expressions arising in our QFT/SM. However, there is a subtle problem with this gauge. The gauge propagator does NOT fall off for large $k$, and it produces some problems in the high-energy limit, as an asymptotic series, especially serious if we consider loop diagrams (higher-order self-interactions, vacuum polarization effects, etc.). This problem is specific to this gauge and it does not arise in the $R_\xi$ gauge. Indeed, the $R_\xi$ gauge can even be used to prove the renormalizability of general gauge theories in a general setting.

## Non linear gauge-fixing

The general FP procedure states that the constraining functions can be arbitrary, not necessarily linear at all. Then, we should be careful to keep the dimension of the Lagrangian terms at four as a maximum in order to protect the renormalizability of our theory. By the addition of non-linear parts to the constraining functions we obtain even more extra gauge-fixing parameters in the total theory, and these terms are helpful to verify the correctness of our final results.

Therefore, to keep things clear and neat: the linear gauge-fixing piece in the SM Lagrangian is given by the term written in the previous section, and generally, most of the time, people set the gauge-fixing parameters to the same quantity, $\xi_A = \xi_Z = \xi_W = \xi$.

However, we can choose the constraining functions differently, by introducing additional non-linear constraining functions. Here, the Goldstone bosons are $\phi^\pm$ and $\chi$, arising after SSB, $H$ is the physical Higgs boson field, and $c_W$, $s_W$ are the cosine and the sine of the Weinberg angle. The parameters appearing in the non-linear parts are new gauge-fixing parameters which we can choose freely. This choice of non-linear constraints has a special feature: it does NOT change the quadratic part of the SM Lagrangian, i.e., the propagators are NOT affected by those new gauge-fixing terms; only interaction vertices ARE affected.

## Interaction vertices

If we combine the linear and non-linear gauge-fixing functions, we obtain the total gauge-fixing Lagrangian piece, and it can be expanded in such a way that we can write the cubic and quartic terms in the fields, the interaction terms, explicitly! Furthermore, the quadratic parts defining the propagators of the gauge fields are NOT affected by such non-linear gauge-fixing terms.

In summary, the gauge fixing with non-linear functions allows us to introduce interaction vertices in a non-trivial way that is consistent with gauge invariance! And this result is a very important theoretical fact, since we do not want to spoil the gauge invariance at the end of our calculations. So it is quite remarkable that we can make all these calculations avoiding those issues.

After performing the explicit expansion of the gauge-fixing Lagrangian, we can write seven (7 is a cool number, isn't it?) interaction vertices. In addition to these 7 interaction pieces, we have to add, for inner consistency, the so-called ghost field interaction terms. The ghosts are usually represented by the letters $c$ and $\bar{c}$. There are 5 main terms of this class in the SM.

## Renormalization and the SM

The whole renormalization process is a complicated procedure in any gauge theory.
In the case of the SM, it can be summarized in some simple steps:

1st. Choose any set of independent physical parameters.

2nd. Separate the bare parameters and fields into two different types: renormalized parameters (fields) and renormalization constants.

3rd. Choose renormalization conditions to fix the so-called "counter-terms".

4th. Express physical quantities as functions of the renormalized parameters.

5th. Choose input data in order to fix the values of the renormalized parameters.

6th. Evaluate predictions for physical quantities as functions of the input data.

It sounds hard, doesn't it? In fact, the first 3 conditions specify what physicists call the renormalization scheme. The most popular renormalization schemes are:

A) The on-shell scheme. It uses the knowledge that all external particles are physical, i.e., on-shell, as the boundary conditions.

B) Minimal subtraction scheme (MS). It simply absorbs the divergent parts into the counterterms. More generally, in QFT, the minimal subtraction scheme, or MS scheme, sometimes written as $\mathrm{MS}$, is a particular renormalization scheme used to absorb the infinities that arise in perturbative calculations beyond the leading order (tree level). It was introduced independently by 't Hooft and Weinberg in 1973. The MS scheme consists of absorbing only the divergent part of the radiative corrections into the counterterms. There is a similar and more widely used modified minimal subtraction, or $\overline{\mathrm{MS}}$, where one absorbs the divergent part plus a universal constant (which always arises along with the divergence in Feynman diagram calculations) into the counterterms.

We are completely free to choose the independent parameters in the SM. For renormalization in the Standard Model, we usually select a convenient on-shell set: the electromagnetic coupling, the gauge boson masses $M_W$ and $M_Z$, the Higgs mass $M_H$, the fermion masses $m_f$, and the CKM matrix elements.

Renormalization in the SM IS rather simple in principle, but it IS a very difficult and technical task. Indeed, the issue of renormalization and the mathematics behind it is even a high-tech topic for mathematical physicists and pure mathematicians. Moreover, the total landscape of renormalization in the SM is complicated by the fact that the SM includes spontaneous symmetry breaking AND flavor mixing, requiring renormalization of the CKM and PMNS matrices. The set of independent parameters used as renormalization parameters in the SM is generally the one sketched above.

Let the gauge-fixing be with you! See you in the next SM post!
# Given that the tangent line to the graph of y = g(x) at the point (4,5) passes through the point

###### Question:

1. Given that the tangent line to the graph of y = g(x) at the point (4,5) passes through the point (7,0), find g(4) and g'(4).

2. Given that y = -2x + 4 is tangent to y = h(x), determine a tangent line to y = h(x + 2).

3. Given that y = 4x - 7 is tangent to y = f(x) and that f(x) is invertible, determine a tangent line to y = f^{-1}(x).

4. Determine c so that for f(x) = x^2 the average rate of change between a and b equals the instantaneous rate of change at c.

5. Find the instantaneous rate of change for y = (sin x cos x)^2 + sin(2x) at x 3T.
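One way to read off the first answers (a sketch of the reasoning, not a solution posted on the original page): the point of tangency lies on the graph, so $g(4) = 5$; and $g'(4)$ is the slope of the tangent line through $(4,5)$ and $(7,0)$, namely

$g'(4) = \frac{0 - 5}{7 - 4} = -\frac{5}{3}$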
Volume 8, Issue 4

A Study of Crack-Face Boundary Conditions for Piezoelectric Strip Cut Along Two Equal Collinear Cracks

Adv. Appl. Math. Mech., 8 (2016), pp. 573-587. Published online: 2018-05

• Abstract

A problem of two equal, semi-permeable, collinear cracks, situated normal to the edges of an infinitely long piezoelectric strip, is considered. The piezoelectric strip is prescribed out-of-plane shear stress and in-plane electric displacement. The Fourier series and integral equation methods are adopted to obtain an analytical solution of the problem. Closed-form analytic expressions are derived for various fracture parameters, viz. crack-sliding displacement, crack opening potential drop, field intensity factors and energy release rate. A numerical case study is considered for poled PZT−5H, $BaTiO_3$ and PZT−6B piezoelectric ceramics to study the effect of applied electro-mechanical loadings and crack-face boundary conditions, as well as inter-crack distance, on the fracture parameters. The obtained results are presented graphically, discussed and concluded.

• Keywords

Collinear cracks, Fourier series method, piezoelectric strip, semi-permeable crack.

• AMS Subject Headings

65M10, 78A48

R. R. Bhargava & Pooja Raj Verma. (2020). A Study of Crack-Face Boundary Conditions for Piezoelectric Strip Cut Along Two Equal Collinear Cracks. Advances in Applied Mathematics and Mechanics. 8 (4). 573-587. doi:10.4208/aamm.2014.m866, https://doi.org/10.4208/aamm.2014.m866
Thread: Euclidean GCD Algorithm (Proof)

1. Euclidean GCD Algorithm (Proof)

I'd like to have a formal proof of the following case (case 2) related to the Euclidean GCD algorithm: We have M > N, and N > M/2; I'd like to show that M - N > M/2. Thank you!

2. Re: Euclidean GCD Algorithm (Proof)

You might have the inequalities wrong. Also, are there any restrictions on M, N? Otherwise here's a counterexample. M = 1.1, N = 1. M > N, N > M/2. M - N = 0.1, M/2 = 0.55. 0.55 > 0.1, and hence the statement you posted is false.

3. Re: Euclidean GCD Algorithm (Proof)

Thank you for your reply, but we are dealing here with integers.

4. Re: Euclidean GCD Algorithm (Proof)

Originally Posted by mohamedennahdi: Thank you for your reply, but we are dealing here with integers.

Convert M and N from post #2 from dollars to cents; then you'll have a counterexample with integers.

5. Re: Euclidean GCD Algorithm (Proof)

Is this demonstration a valid one: Let M, N ∈ Z, M > N, and N > M/2; show that M - N > M/2: Basis: M = 1, N = 0 → 1 - 0 > 1/2 → 1 > 1/2 (TRUE) Induction: Assume true for k = M, l = N: k - l > k/2, l > k - k/2, l > k/2

6. Re: Euclidean GCD Algorithm (Proof)

Originally Posted by mohamedennahdi: Is this demonstration a valid one?

Originally Posted by emakarov: Convert M and N from post #2 from dollars to cents; then you'll have a counterexample with integers.

7. Re: Euclidean GCD Algorithm (Proof)

Originally Posted by mohamedennahdi: Is this demonstration a valid one: Let M, N ∈ Z, M > N, and N > M/2, show that M - N > M/2:

That statement is false. Let $M=11~\&~N=10$. $M>N,~N>\frac{M}{2}\text{ BUT }M-N\not >\frac{M}{2}~!$

8. Re: Euclidean GCD Algorithm (Proof)

You're right, Plato; let me rectify: We have M > N, and N > M/2; I'd like to show that M - N < M/2. The actual statement that I want to prove is: case 2: N > M/2. In this case the remainder would be M - N, and since N > M/2, M - N would be less than M/2. How would you write a formal proof?

9. Re: Euclidean GCD Algorithm (Proof)

Originally Posted by mohamedennahdi: We have M > N, and N > M/2, I'd like to show that M - N < M/2. How would you write a formal proof?

$N>\frac{M}{2}\\-N<-\frac{M}{2}\\M-N<\frac{M}{2}$.
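For readers who want a quick sanity check of the corrected claim in post 8 over a range of integers, here is a small R sketch; it simply brute-forces all pairs up to a bound (the bound of 200 is arbitrary).

```r
# Brute-force check of post 8's corrected claim over small integers:
# if M > N and N > M/2, then M - N < M/2.
counterexamples <- list()
for (M in 2:200) {
  for (N in seq_len(M - 1)) {
    if (N > M / 2 && !(M - N < M / 2)) {
      counterexamples[[length(counterexamples) + 1]] <- c(M = M, N = N)
    }
  }
}
length(counterexamples)   # 0: no counterexample found
```

Of course the three-line algebraic proof in post 9 already settles it; the loop just makes the claim concrete.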
### Hoverboard Motor Teardown I got some hoverboard motors in the mail! I decided to go with used ones to start, since they're just so darn cheap. The set of 2 hoverboard motors cost $30 (the price is now back to$40), with free shipping. One of the motors didn't turn more than 90 degrees or so. So this one would be the first to come apart. They are very easy to take apart once you figure it out. Start with the 6 screws on the back. They have thread locker on them, so they might be tight. Next take the back cover off. It is not attached to the shaft, but it has a bearing and a seal on it, so it is hard to move it the first 1/2" or so. Once you wiggle it up that far using flat head screw drivers, you can pull it off by hand. Now you can see the stator and the rotor magnets! The coils actually look quite good. Pretty clean windings. The next step is to get the stator out. This can be a bit tricky, since the stator is a giant chunk of steel, surrounded by very strong magnets. I found that attaching it to my table vise with room underneath, then pulling very quickly upwards worked well. You want to go quickly because if you slowly apply pressure, it could get pulled down again. You want to get it out of the way of those magnets as soon as it loses grip. This is the stator with the rotor already removed. You want the end of the flat part of the shaft to be touching the bottom of the vice, so that you can't pull the stator upwards. This works well because it also gives you plenty of room to grip the tire without risk of your fingers getting crunched. And here's the culprit! This motor has a wave spring on both side of the shaft, presumably to apply a preload on the bearings. That's actually a nice feature, but this wave spring has broken into 3 pieces and is rubbing between the magnets and stator. The magnets and stator are pretty scratched up from this, but it doesn't seem like it would affect the performance at all. I removed all the debris and put it back together and it seems to spin just fine. On to the second motor: This motor looks even nicer. It has some silastic holding the wires in place and darker insulation on the wires, which I assume means it's thicker. The magnets still have their coating on them too, which is good. This motor also feels heavier. It looks like the stator and the rotor are both significantly heavier, but it turns out this isn't the case. Motor 1 (Yuanxing branding on the tire): My scale doesn't go quite high enough for the full motor, so I'll just add up these masses for the final mass. The total mass of motor 1 is 2.97kg Motor 2 (Risingsun branding on the tire): The total mass of motor 2 is 3.01kg. So they are almost the exact same mass. Despite looking heavier and having a stronger magnetic pull, the stator in motor 2 is actually lighter. The rotor in motor 2 is definitely heavier, though, and you can feel it. The outside surface seems to be thicker aluminum. Having a low rotational inertia is pretty important for robotics applications, and these motors currently have a giant ring of rubber around the outside of them. Inertia scales as mr^2, so having a lot of mass around the outside is especially bad. Let's try to take the tire off! That was easy! Here's a video of the process: First I use a hacksaw to cut as far as possible without cutting the aluminum rim. Then use a sharp utility knife to cut as much as possible away. Then use a flat head screwdriver to pull the tire up and use the knife to cut it the rest of the way. 
The tire shouldn't be glued or attached in any way, so once there's a cut going all the way through, it should just detach. The tire does seem to weigh quite a lot. Here are the two tires from the two motors: So somewhere around 415g. That reduces the mass of the rotor from about 1.4kg to less than 1kg! I think these will make excellent motors. Stay tuned for some testing with the ODrive. ### Hoverbots There are a large number of parts that make robots expensive. Luckily all of them are getting cheaper. This started with the processors getting better and cheaper because of the popularity of smartphones. It has moved to linear motion components and aluminum extrusions due to the popularity of 3d printers. Most recently it has moved to the drive electronics, like with the ODrive project, which is an open source brushless servo motor controller. A key part that has always been expensive is the actuators that move the robot. These could be pneumatics, hydraulics, brushed motors, or brushless motors. In all of these cases, moving quickly, precisely, or with large forces meant expensive actuators. The best option has been brushed DC motors, but these are not very power dense, due to the mass of the brushes, the inefficiency of the commutation, and the limit of power that can flow through brushes. Good motors got a lot cheaper when hobby rc brushless motors got more popular. You can buy a relatively high quality motor that can output multiple kilowatts of power for less than $50. Unfortunately most of these motors just aren't meant for robotics applications. They are usually wound for high speed rotation to power a propeller. They are actually ideal if you use a big gearbox (10:1 or more), but gearboxes are exceedingly expensive; many times the cost of the motor itself. There are "affordable" motors available with the specs needed, but they are still pretty expensive since they are for a niche market (at least$300). Now imagine you are a 13 year old kid who loves Vine and [insert other Gen Z stereotypes here]. You don't care about all this dumb motor stuff. You just want a cheap hoverboard. So China obliged. Hoverboards are super cheap. A pretty standard price is $150. So for$150 you get a frame, a gyro stabilizer board, a Lithium Polymer battery, 2 brushless motor controllers, and 2 super high torque brushless motors. We don't really care about all the other stuff, but the part that's interesting is the motors. It turns out that these motors can output 20-30Nm of peak torque. That is an insanely large number. But what's more is the price. You can pick up a brand new motor on Amazon for $40. And if you order direct from China you can get it for less than$20! These motors are practically the same price as the raw materials going into them. Used motors go for around $15 on ebay. What makes them so cheap? They certainly aren't high quality devices. You wouldn't want these motors anywhere safety critical. The main thing that makes them cheap is the insane quantity that they're manufactured in. Here's a picture from a seller on Alibaba who says they can supply 30,000 units per week: This new source of high torque, low cost motors got me thinking about what I could do with one. So I came up with a few fun ideas. The first idea is something like Boston Dynamics Spot, or the MIT Cheetah. This is using a belt drive system. This is similar to the newest version of the MIT Cheetah, which uses chain for the knee joint. This is using a direct drive, delta type system. 
This version is very similar to the MIT mini Cheetah robot, or to the GOAT leg. Both of these robots could be built for around $1,000 including everything. That is dirt cheap compared to the hundreds of thousands of dollars that go into a Boston Dynamics or MIT robot. Here's the approximate cost breakdown:

Hoverboard motors: 8 x $20 = $160
Encoders: 8 x $20 = $160
ODrive motor controllers: 4 x $150 = $600
Mechanical parts: $150-$300
"Brains": $50-$500 (BeagleBone to Nvidia Jetson)

So the cost ends up being somewhere around $1k-$1.5k. Another idea I had was a delta robot. Usually these robots have high speed motors with large gear ratios. This is probably technically better, but we're looking for cheaper, and gearboxes are expensive. So instead I think this would work great with no gearbox and 3 hoverboard motors. I haven't put together a CAD sketch of what it would look like yet, but it would be basically exactly the same as any angular delta. Most 3d printers use a linear delta configuration, so check out industrial delta robots if you want to see what those look like: Oskar from ODrive Robotics is working on a large robot arm using these motors, so that will be awesome too. As I come up with more neat ideas for cheap robotics that can use these motors, I will post them here.

### New Blog

I'm starting this blog as a new way to document my projects. In the past, I haven't kept a good log of projects, or the logs were in different locations. A lot of my projects ended up having their own documentation, but it just didn't work well. For now, most of my projects are going to be robotics related. Once I finish a project, or get to a significant milestone in it, I will add it to my project portfolio. Check it out at kyleb.me/projects

The projects documented on this blog are done on my personal time, unrelated to my full time job.
DESeq2 design formula correcting for batch

l.rijnberk asked:

I have RNA sequencing data for which I would like to look at the differential gene expression effect of a certain treatment or condition while correcting for batch. However, I am not sure how to set up a proper design formula to do this. My data looks like this (simplified):

    sample  batch  treatment
    1       a      control
    2       b      treated
    3       c      control
    4       c      treated

Except, in my actual data I have between 15-19 replicates of each of these 4. Now, if all of these were processed in a different batch, I would use the following design: ~ batch + treatment

However, in my case, I think that there should be a better way to do this. If I look at the differentially expressed genes between 3 and 4, there should be no batch effect there to be corrected for. If I look at the differential expression between 1 and 2, there is a batch effect on top of the actual treatment effect that I am looking for. I think I should be able to look at what does and what does not overlap between these two comparisons, and use that to better define my batch effect. I have been looking through other people's experiment designs in DESeq and think that I should take the interaction between different variables into account in my design formula, but I am not completely sure how to properly set this up. Anyone have some insight into this?

Tags: deseq2, design formula, batch effect

Answer (swbarnes2):

1) I'd just trust DESeq. 2) If all your batch a's are control, you can't model batch anyway.

Comment (l.rijnberk):

I definitely trust DESeq, but I think my design formula could better reflect the difference that I am looking for than the simple formula that I described. In addition, I think it may help to clarify a bit more what my 'batches' actually are. The difference between the batches is the index primer that we used to label them, and the day the libraries were sequenced. Actually, a more precise representation would be the following:

    sample  primer  day  treatment
    1       a       1    control
    2       b       1    treated
    3       c       2    control
    4       c       2    treated

So when I control for primer (with the formula below), I assume that I will be removing part of the variation that I am looking for from my data along with the batch effects.

~ primer + treatment

But when I control only for the day of sequencing (see next), I maintain the part of the batch effect that was due to primer differences on sequencing day 1.

~ day + treatment

As the batch effect that I see seems to be linear judging from the PCA, my thought was that the difference between the comparisons 1 versus 2 and 3 versus 4 would help to better define the batch effect, and setting up the right design formula including this comparison could be an elegant way to do this. I am just not sure how to set up something like that, and whether it is possible. Any suggestions are definitely appreciated!

Comment:

Did you already look at a PCA or MDS plot to verify that the batch effect you expect is really present?

Comment (l.rijnberk):

My PCA plot shows that the batch effect is one of the biggest differences present in my dataset. I can add the plot if that helps?

Comment:

Neither day of sequencing nor indexing primer adds any technical error worth modeling.
(Day of library prep, or day of RNA extraction can matter.)

Comment (l.rijnberk):

Ah, so when I said day of sequencing, that also includes the day of library prep and day of RNA extraction. However, with the two samples from day 1, the only real difference is the primer. Maybe the PCA plots with my data will be useful. This image is a PCA plot colored by the treatment condition: https://imgshare.io/image/GLHJy This is the same PCA plot, but colored by the batches (i.e. primer number, with 025 and 026 processed on the exact same day): https://imgshare.io/image/GL0Xe What strikes me is that all the replicates come from the same batch and were all frozen at the same moment. I would therefore not expect any batch effects, but I clearly see them in the PCA plot.

Comment:

Sequencing index by itself doesn't add any technical variance. But in your case, it might be a surrogate for something else. Your groups A and AC are thoroughly confounded with your primers. All your 026s are group A, all your 025s are group AC. It looks like the ones you did on that day are separating by group A/AC, and the 027 batch from the other day is not.

Comment (l.rijnberk):

Exactly! And with batch 027, everything was pooled together and processed with the same primer. So the only thing I know to be different is the primer, but it is of course possible that something happened to one pooled sample during processing that slightly affects the results, although this is not something that I can check. Now, what I was wondering was if I can use the information that I have to better define the batch effect in my DESeq design formula, or if that is not possible at all, in which case I would just check my hits gene by gene to see whether the effect is batch-related (using the plotCounts function). Would this be the right way to go?

Comment:

So include batch in your design. But not sequencing primer. Sequencing primer is not a problem. (Or, if it were, since you completely confounded it with treatment, you can't tell.) How on earth did you only have three sequencing primers for all those samples?

Comment (l.rijnberk):

To be clear, the samples have different CelSeq2 primers, but the samples were then pooled together in a library and given a library index (which is where I see a batch effect).
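Following the last recommendation (model the day/pool batch, not the primer), the usual DESeq2 pattern looks like the sketch below. The object `dds` and the colData column names `batch` and `treatment` are placeholders for whatever the real dataset uses; the PCA step is only for visualization and must never feed into the testing.

```r
library("DESeq2")

# Assuming a DESeqDataSet `dds` whose colData has (hypothetical) columns
# `batch` (day/pool) and `treatment` (control/treated):
dds$batch     <- factor(dds$batch)
dds$treatment <- factor(dds$treatment)

# Absorb a per-batch shift while testing the treatment effect.
# Note: this only works if batch is not completely confounded with treatment,
# as discussed in the thread above.
design(dds) <- ~ batch + treatment
dds <- DESeq(dds)
res <- results(dds, contrast = c("treatment", "treated", "control"))

# For PCA plots only, the batch effect can be removed visually:
vsd <- vst(dds)
assay(vsd) <- limma::removeBatchEffect(assay(vsd), batch = vsd$batch)
plotPCA(vsd, intgroup = "treatment")
```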
# R. Zekri

Articles: 4

### Factor representations of infinite semi-direct products

R. Zekri Methods Funct. Anal. Topology 17 (2011), no. 2, 180-192

In this article, we propose a new method to study unitary representations of inductive limits of locally compact groups. For the group of infinite upper triangular matrices, we construct a family of type III factorial representations. These results are complements to previous results of A. V. Kosyak, and Albeverio and Kosyak [1, 5].

### Regular representations of infinite dimensional group $B_0^Z$ and factors

Methods Funct. Anal. Topology 7 (2001), no. 4, 43-48

### Regular representations of infinite dimensional groups and factors. I

Methods Funct. Anal. Topology 6 (2000), no. 2, 50-59

### Anti-Wick symbols on infinite tensor product spaces

Methods Funct. Anal. Topology 5 (1999), no. 2, 29-39
# Two plates are 20 cm apart and the potential difference between them is 10 V. The electric field between the plates is

## 1 Answer

$(A)\ 50\ \mathrm{V\,m^{-1}}$

For a uniform field between parallel plates, $E = V/d = 10\ \mathrm{V} / 0.20\ \mathrm{m} = 50\ \mathrm{V\,m^{-1}}$.

answered Jun 24, 2014
## Intermediate Algebra: Connecting Concepts through Application

part a: $(0,7)$ part b: $(12,0)$ part c: $-14$ part d: $15$ part e: $13$

part a: By looking at the graph, the vertical intercept is where the line intersects the y-axis. That would be about 7. Written in proper form as an (x,y) coordinate, the answer would be $(0,7)$.

part b: By looking at the graph, the horizontal intercept is where the line intersects the x-axis. That would be about 12. Written in proper form as an (x,y) coordinate, the answer would be $(12,0)$.

part c: We know that we are looking for the input or x value that makes the output=15 or y=15 true. (x,15) is a better way to visualize this. At y=15 on the graph, the line reaches about $-14$ on the x-axis.

part d: We know that we are looking for the input or x value that makes the output=-2 or y=-2 true. (x,-2) is a better way to visualize this. At y=-2 on the graph, the line reaches $15$ on the x-axis.

part e: We know that we are looking for the output or y value that makes the input=-10 or x=-10 true. (-10,y) is a better way to visualize this. At x=-10 on the graph, the line reaches about $13$ on the y-axis.
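The five readings are mutually consistent: if the graphed line really passes through the two intercepts $(0,7)$ and $(12,0)$, its equation is $y = 7 - \frac{7}{12}x$, and the remaining answers follow. A quick R check, assuming that equation for the line:

```r
f     <- function(x) 7 - (7/12) * x    # line through (0, 7) and (12, 0)
f_inv <- function(y) (7 - y) * 12 / 7  # solve y = f(x) for x

f_inv(15)   # part c: about -13.7, read off the graph as -14
f_inv(-2)   # part d: about 15.4, read off the graph as 15
f(-10)      # part e: about 12.8, read off the graph as 13
```

The small mismatches are exactly what one expects from reading approximate values off a graph.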
Determine the dissociation constants for the following acids. Express the answers in proper scientific notation where appropriate.

a. Acid A has a pKa of 6.0. What is its Ka?
b. Acid B has a pKa of 8.60. What is its Ka?
c. Acid C has a pKa of -2.0. What is its Ka?

How do you determine them?

Practice with similar questions

Q: Determine the dissociation constants for the following acids. Express the answers in proper scientific notation where appropriate. a. Acid A has a pKa of 2.0. What is its Ka? b. Acid B has a pKa of 9.10. What is its Ka? c. Acid C has a pKa of -2.0. What is its Ka? Which is the strongest acid?

Q: Determine the dissociation constants for the following acids. Express the answers in proper scientific notation where appropriate. a. Acid A has a pKa of 2.0. What is its Ka? b. Acid B has a pKa of 8.60. What is its Ka? c. Acid C has a pKa of -1.0. What is its Ka? Which is the strongest acid?
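Since pKa is defined as $-\log_{10} K_a$, each dissociation constant follows by exponentiating: $K_a = 10^{-\mathrm{p}K_a}$. A one-line R check of the three values in the main question:

```r
pka <- c(a = 6.0, b = 8.60, c = -2.0)
ka  <- 10^(-pka)       # Ka = 10^(-pKa)
signif(ka, 3)          # a: 1.00e-06, b: 2.51e-09, c: 1.00e+02
```

The acid with the most negative pKa (and hence the largest Ka) is the strongest; here that is acid C.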
# Biography I am a PhD student in applied mathematics at the TU Eindhoven under the supervision of dr.ir. R. Duits. I am currently working on mathematical models related to geometric machine learning. ### Interests • Differential Geometry • Mathematical Image Analysis • Probability Theory • Machine Learning • Applied Analysis • Motorsport • History ### Education • MSc in Applied Mathematics, 2019 TU Eindhoven • BSc in Applied Mathematics, 2017 TU Eindhoven # Recent & Upcoming Talks ### PDE-based CNNs with Morphological Convolutions Convolutional neural networks have found wide adoption and great success in image processing yet have drawbacks such as needing huge … # Recent Publications ### PDE-based Group Equivariant Convolutional Neural Networks We present a PDE-based framework that generalizes Group equivariant Convolutional Neural Networks (G-CNNs). In this framework, a … ### Total Variation and Mean Curvature PDEs on the Homogeneous Space of Positions and Orientations (JMIV) Two key ideas have greatly improved techniques for image enhancement and denoising: the lifting of image data to multi-orientation … ### Total Variation and Mean Curvature PDEs on $\mathbb{R}^d \rtimes S^{d−1}$ (SSVM) Total variation regularization and total variation flows (TVF) have been widely applied for image enhancement and denoising. To include …
# Logarithm Problem I can't figure out

• Nov 3rd 2009, 06:08 PM

ConMan

Logarithm Problem I can't figure out

These questions are from several math contests, and I'm trying to do them but I can't figure them out.

1. If x and y > 0, log(base y)x + log(base x)y = 10/3, and (x)(y) = 144, find x + y / 2

2. How many real numbers x satisfy the equation (1/5) log(base 2)x = sin(5pi x)?

I tried making y = 144/x and plugging that into the equation, but I have no idea what to do for the second question. Any help from you guys is greatly appreciated :)

• Nov 4th 2009, 01:26 AM

earboth

Quote: Originally Posted by ConMan: These questions are from several math contests, and I'm trying to do them but I can't figure them out. 1. If x and y > 0, log(base y)x + log(base x)y = 10/3, and (x)(y) = 144, find x + y / 2 ...

1. You are supposed to know that $\log_b(a)=\dfrac1{\log_a(b)}$

2. Using this rule your equation becomes: $\log_y(x)+\dfrac1{\log_y(x)} = \dfrac{10}3$ Use the substitution $z = \log_y(x)$. Then you have: $z+\dfrac1z = \dfrac{10}3~\implies~z^2-\dfrac{10}3 z + 1 = 0$ which yields $z = 3~\vee~z = \dfrac13$

3. Re-substituting yields: $x = y^3~\vee~x = \sqrt[3]{y}$

4. Now use the second equation $x\cdot y = 144$ to calculate the values of x and y.

• Nov 4th 2009, 03:08 PM

ConMan

Thank you for your timely reply, earboth, I really appreciate your help :) Also, does anyone know about some online resources regarding properties/rules of logarithms? EDIT: this is nice: http://en.wikipedia.org/wiki/List_of...mic_identities

• Nov 4th 2009, 03:26 PM

Defunkt

Logarithm - Wikipedia, the free encyclopedia. Wikipedia, man's best friend!

• Nov 5th 2009, 02:15 PM

ConMan

Okay, scratch the second question, since sin of (pi) is equal to zero. (Doh!)
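A quick numeric confirmation of earboth's solution on the branch $x = y^3$ (the other branch just swaps x and y), reading the contest's "x + y / 2" as $(x+y)/2$, which is an assumption about the intended grouping:

```r
y <- 144^(1/4)   # from x*y = y^4 = 144 on the branch x = y^3
x <- y^3

c(xy       = x * y,                               # should be 144
  log_sum  = log(x, base = y) + log(y, base = x), # should be 10/3
  half_sum = (x + y) / 2)                         # 13*sqrt(3), about 22.52
```

Here $y = 2\sqrt{3}$ and $x = 24\sqrt{3}$, so $(x+y)/2 = 13\sqrt{3}$, a typical closed-form contest answer.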
# Gravitational redshift

The gravitational redshift of a light wave as it moves upwards against a gravitational field (produced by the yellow star below). The effect is greatly exaggerated in this diagram.

In astrophysics, gravitational redshift or Einstein shift is the process by which electromagnetic radiation originating from a source that is in a gravitational field is reduced in frequency, or redshifted, when observed in a region at a higher gravitational potential. This is a direct result of gravitational time dilation—if one is outside of an isolated gravitational source, the rate at which time passes increases as one moves away from that source. As frequency is the inverse of time (specifically, the time required for completing one wave oscillation), the frequency of the electromagnetic radiation is reduced in an area of higher gravitational potential. There is a corresponding reduction in energy when electromagnetic radiation is redshifted, as given by Planck's relation, due to the electromagnetic radiation propagating in opposition to the gravitational gradient. There also exists a corresponding blueshift when electromagnetic radiation propagates from an area of higher gravitational potential to an area of lower gravitational potential.

If applied to optical wavelengths, this manifests itself as a change in the colour of visible light as the wavelength of the light is shifted toward the red part of the light spectrum. Since frequency and wavelength are inversely proportional, this is equivalent to saying that the frequency of the light is reduced towards the red part of the light spectrum, giving this phenomenon the name redshift.

## Definition

Redshift is often denoted with the dimensionless variable $z$, defined as the fractional change of the wavelength[1]

$$z = \frac{\lambda_o - \lambda_e}{\lambda_e}$$

where $\lambda_o$ is the wavelength of the electromagnetic radiation (photon) as measured by the observer, and $\lambda_e$ is the wavelength of the electromagnetic radiation (photon) when measured at the source of emission.

The gravitational redshift of a photon can be calculated in the framework of general relativity (using the Schwarzschild metric) as

$$\lim_{r\to\infty} z(r) = \frac{1}{\sqrt{1 - \dfrac{r_s}{R_e}}} - 1, \qquad r_s = \frac{2GM}{c^2},$$

where $G$ denotes Newton's gravitational constant, $M$ the mass of the gravitating body, $c$ the speed of light, and $R_e$ the distance between the center of mass of the gravitating body and the point at which the photon is emitted. The redshift is not defined for photons emitted inside the Schwarzschild radius, the distance from the body where the escape velocity is greater than the speed of light. Therefore, this formula only applies when $R_e$ is larger than $r_s$. When the photon is emitted at a distance equal to the Schwarzschild radius, the redshift will be infinitely large, and it will not escape to any finite distance from the Schwarzschild sphere. When the photon is emitted at an infinitely large distance, there is no redshift. In the Newtonian limit, i.e.
when $R_e$ is sufficiently large compared to the Schwarzschild radius $r_s$, the redshift can be approximated by a binomial expansion to become

$$\lim_{r\to\infty} z_{\mathrm{approx}}(r) = \frac{1}{2}\frac{r_s}{R_e} = \frac{GM}{c^2 R_e}$$

The redshift formula for the frequency $\nu = c/\lambda$ (and therefore also for the energy $h\nu$ of a photon) can simply be deduced from the wavelength formula above to be

$$\lim_{r\to\infty} \nu_r = \nu_e \sqrt{1 - \frac{r_s}{R_e}}$$

with $\nu_e$ the emitted frequency at the emission point and $\nu_r$ the frequency at distance $r > R_e$ from the center of mass of the gravitating body causing this gravitational potential. Moreover, from the law of energy conservation

$$h\nu_\infty = h\nu_1 \sqrt{1 - \frac{r_s}{R_1}} = h\nu_2 \sqrt{1 - \frac{r_s}{R_2}}$$

we get, for the general case of a photon of frequency $\nu_2$ emitted at distance $R_2$ and observed at distance $R_1$ (both measured as distances from the gravitational center of mass), the equation

$$\nu_1 = \nu_2 \sqrt{\frac{R_1 (R_2 - r_s)}{R_2 (R_1 - r_s)}}$$

as long as $R_1, R_2 > r_s$ holds.

## History

The gravitational weakening of light from high-gravity stars was predicted by John Michell in 1783 and Pierre-Simon Laplace in 1796, using Isaac Newton's concept of light corpuscles (see: emission theory); they predicted that some stars would have a gravity so strong that light would not be able to escape. The effect of gravity on light was then explored by Johann Georg von Soldner (1801), who calculated the amount of deflection of a light ray by the sun, arriving at the Newtonian answer which is half the value predicted by general relativity. All of this early work assumed that light could slow down and fall, which was inconsistent with the modern understanding of light waves.

Once it became accepted that light was an electromagnetic wave, it was clear that the frequency of light should not change from place to place, since waves from a source with a fixed frequency keep the same frequency everywhere. One way around this conclusion would be if time itself were altered—if clocks at different points had different rates. This was precisely Einstein's conclusion in 1911. He considered an accelerating box, and noted that according to the special theory of relativity, the clock rate at the bottom of the box was slower than the clock rate at the top. Nowadays, this can easily be shown in accelerated coordinates. The metric tensor in units where the speed of light is one is:

$$ds^2 = -r^2\,dt^2 + dr^2$$

and for an observer at a constant value of r, the rate at which a clock ticks, R(r), is the square root of the time coefficient, R(r) = r. The acceleration at position r is equal to the curvature of the hyperbola at fixed r, and like the curvature of the nested circles in polar coordinates, it is equal to 1/r. So at a fixed value of g, the fractional rate of change of the clock rate, the percentage change in the ticking at the top of an accelerating box vs at the bottom, is:

$$\frac{R(r+dr) - R(r)}{R} = \frac{dr}{r} = g\,dr$$

The rate is faster at larger values of R, away from the apparent direction of acceleration.
The rate is zero at r = 0, which is the location of the acceleration horizon. Using the principle of equivalence, Einstein concluded that the same thing holds in any gravitational field: the rate of clocks R at different heights is altered according to the gravitational field g. When g is slowly varying, it gives the fractional rate of change of the ticking rate. If the ticking rate is everywhere almost the same, the fractional rate of change is the same as the absolute rate of change, so that:

$$\frac{dR}{dx} = g = -\frac{dV}{dx}$$

Since the rate of clocks and the gravitational potential have the same derivative, they are the same up to a constant. The constant is chosen to make the clock rate at infinity equal to 1. Since the gravitational potential is zero at infinity:

$$R(x) = 1 - \frac{V(x)}{c^2}$$

where the speed of light has been restored to make the gravitational potential dimensionless.

The coefficient of $dt^2$ in the metric tensor is the square of the clock rate, which for small values of the potential is given by keeping only the linear term:

$$R^2 = 1 - 2V$$

and the full metric tensor is:

$$ds^2 = -\left(1 - \frac{2V(r)}{c^2}\right)c^2\,dt^2 + dx^2 + dy^2 + dz^2$$

where again the c's have been restored. This expression is correct in the full theory of general relativity, to lowest order in the gravitational field, and ignoring the variation of the space-space and space-time components of the metric tensor, which only affect fast-moving objects.

Using this approximation, Einstein reproduced the incorrect Newtonian value for the deflection of light in 1909. But since a light beam is a fast-moving object, the space-space components contribute too. After constructing the full theory of general relativity in 1916, Einstein solved for the space-space components in a post-Newtonian approximation, and calculated the correct amount of light deflection – double the Newtonian value. Einstein's prediction was confirmed by many experiments, starting with Arthur Eddington's 1919 solar eclipse expedition.

The changing rates of clocks allowed Einstein to conclude that light waves change frequency as they move, and the frequency/energy relationship for photons allowed him to see that this was best interpreted as the effect of the gravitational field on the mass–energy of the photon. To calculate the changes in frequency in a nearly static gravitational field, only the time component of the metric tensor is important, and the lowest order approximation is accurate enough for ordinary stars and planets, which are much bigger than their Schwarzschild radius.

## Important points to stress

• The receiving end of the light transmission must be located at a higher gravitational potential in order for gravitational redshift to be observed. In other words, the observer must be standing "uphill" from the source. If the observer is at a lower gravitational potential than the source, a gravitational blueshift can be observed instead.
• Tests done by many universities continue to support the existence of gravitational redshift.[2]
• General relativity is not the only theory of gravity that predicts gravitational redshift. Other theories of gravitation require gravitational redshift, although their detailed explanations for why it appears vary.[citation needed] (Any theory that includes conservation of energy and mass–energy equivalence must include gravitational redshift.)
• Gravitational redshift does not assume the Schwarzschild metric solution to Einstein's field equation – in which the variable $M$ cannot represent the mass of any rotating or charged body.

## Experimental verification

### Initial observations of gravitational redshift of white dwarf stars

A number of experimenters initially claimed to have identified the effect using astronomical measurements, and the effect was considered to have been finally identified in the spectral lines of the star Sirius B by W. S. Adams in 1925.[3] However, measurements by Adams have been criticized as being too low,[3][4] and these observations are now considered to be measurements of spectra that are unusable because of scattered light from the primary, Sirius A.[4] The first accurate measurement of the gravitational redshift of a white dwarf was done by Popper in 1954, measuring a 21 km/sec gravitational redshift of 40 Eridani B.[4] The redshift of Sirius B was finally measured by Greenstein et al. in 1971, obtaining the value for the gravitational redshift of 89±19 km/sec, with more accurate measurements by the Hubble Space Telescope showing 80.4±4.8 km/sec.

### Terrestrial tests

The effect is now considered to have been definitively verified by the experiments of Pound, Rebka and Snider between 1959 and 1965. The Pound–Rebka experiment of 1959 measured the gravitational redshift in spectral lines using a terrestrial 57Fe gamma source over a vertical height of 22.5 metres,[5] using measurements of the change in wavelength of gamma-ray photons generated with the Mössbauer effect, which generates radiation with a very narrow line width. The accuracy of the gamma-ray measurements was typically 1%. An improved experiment was done by Pound and Snider in 1965, with an accuracy better than the 1% level.[6]

A very accurate gravitational redshift experiment was performed in 1976,[7] where a hydrogen maser clock on a rocket was launched to a height of 10,000 km, and its rate compared with an identical clock on the ground. It tested the gravitational redshift to 0.007%.

Later tests can be done with the Global Positioning System (GPS), which must account for the gravitational redshift in its timing system, and physicists have analyzed timing data from the GPS to confirm other tests. When the first satellite was launched, it showed the predicted shift of 38 microseconds per day. This rate of discrepancy is sufficient to substantially impair the function of GPS within hours if not accounted for. An excellent account of the role played by general relativity in the design of GPS can be found in Ashby 2003.

### Later astronomical measurements

James W. Brault, a graduate student of Robert Dicke at Princeton University, measured the gravitational redshift of the sun using optical methods in 1962.
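To connect these measurements back to the Newtonian-limit formula $z \approx GM/(c^2 R_e)$ from the Definition section, here is a rough R calculation for a Sirius-B-like white dwarf; the mass and radius are round illustrative values, not the measured parameters of Sirius B.

```r
G <- 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c_light <- 2.998e8   # speed of light, m/s
M  <- 2.0e30         # kg, roughly one solar mass (illustrative)
Re <- 5.8e6          # m, a typical white-dwarf radius (illustrative)

z <- G * M / (c_light^2 * Re)   # Newtonian-limit gravitational redshift
c(z = z, apparent_velocity_km_per_s = z * c_light / 1000)
```

This gives z of order 2.6e-4, i.e. an apparent recession velocity of roughly 77 km/s, the same order as the 80.4±4.8 km/sec quoted above for Sirius B.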
In 2011 the group of Radek Wojtak of the Niels Bohr Institute at the University of Copenhagen collected data from 8000 galaxy clusters and found that the light coming from the cluster centers tended to be red-shifted compared to the cluster edges, confirming the energy loss due to gravity.[8]

Other precision tests of general relativity,[9] not discussed here, are the Gravity Probe A satellite, launched in 1976, which showed that gravity and velocity affect the ability to synchronize the rates of clocks orbiting a central mass; the Hafele–Keating experiment, which used atomic clocks in circumnavigating aircraft to test general relativity and special relativity together;[10][11] and the forthcoming Satellite Test of the Equivalence Principle.

## Application

Gravitational redshift is studied in many areas of astrophysical research.

## Exact solutions

A table of exact solutions of the Einstein field equations consists of the following:

|           | Non-rotating       | Rotating    |
|-----------|--------------------|-------------|
| Uncharged | Schwarzschild      | Kerr        |
| Charged   | Reissner–Nordström | Kerr–Newman |

The more often used exact equation for gravitational redshift applies to the case outside of a non-rotating, uncharged mass which is spherically symmetric. The equation is:

$$\lim_{r\to+\infty} z(r) = \frac{1}{\sqrt{1 - \dfrac{2GM}{R^{*}c^{2}}}} - 1,$$

where

• $G$ is the gravitational constant,
• $M$ is the mass of the object creating the gravitational field,
• $R^{*}$ is the radial coordinate of the point of emission (which is analogous to the classical distance from the center of the object, but is actually a Schwarzschild coordinate), and
• $r$ is the radial coordinate of the observer (in the formula, this observer is at an infinitely large distance), and
• $c$ is the speed of light.

## Gravitational redshift versus gravitational time dilation

When using special relativity's relativistic Doppler relationships to calculate the change in energy and frequency (assuming no complicating route-dependent effects such as those caused by the frame-dragging of rotating black holes), then the gravitational redshift and blueshift frequency ratios are the inverse of each other, suggesting that the "seen" frequency change corresponds to the actual difference in underlying clock rate. For rotating systems, route-dependence due to frame-dragging may complicate the process of determining globally agreed differences in underlying clock rate. While gravitational redshift refers to what is seen, gravitational time dilation refers to what is deduced to be "really" happening once observational effects are taken into account.

## Notes

1. See for example equation 29.3 of Gravitation by Misner, Thorne and Wheeler.
2. General Relativity
3. a b Hetherington, N. S., "Sirius B and the gravitational redshift - an historical review", Quarterly Journal Royal Astronomical Society, vol. 21, Sept. 1980, p. 246-252. Accessed 6 April 2017.
4. a b c Holberg, J. B., "Sirius B and the Measurement of the Gravitational Redshift", Journal for the History of Astronomy, Vol. 41, 1, 2010, p. 41-64. Accessed 6 April 2017.
5. Pound, R.; Rebka, G. (1960). "Apparent Weight of Photons". Physical Review Letters. 4 (7): 337–341. Bibcode:1960PhRvL...4..337P. doi:10.1103/PhysRevLett.4.337. This paper was the first measurement.
6. Pound, R. V.; Snider, J. L. (November 2, 1964). "Effect of Gravity on Nuclear Resonance". Physical Review Letters. 13 (18): 539–540.
Bibcode:1964PhRvL..13..539P. doi:10.1103/PhysRevLett.13.539.
7. Vessot, R. F. C.; M. W. Levine; E. M. Mattison; E. L. Blomberg; T. E. Hoffman; G. U. Nystrom; B. F. Farrel; R. Decher; et al. (December 29, 1980). "Test of Relativistic Gravitation with a Space-Borne Hydrogen Maser". Physical Review Letters. 45 (26): 2081–2084. Bibcode:1980PhRvL..45.2081V. doi:10.1103/PhysRevLett.45.2081.
8. Bhattacharjee, Yudhijit (2011). "Galaxy Clusters Validate Einstein's Theory". News.sciencemag.org. Retrieved 2013-07-23.
9. "Gravitational Physics with Optical Clocks in Space" (PDF). S. Schiller. Heinrich Heine Universität Düsseldorf. 2007. Retrieved 19 March 2015.
10. Hafele, J. C.; Keating, R. E. (July 14, 1972). "Around-the-World Atomic Clocks: Predicted Relativistic Time Gains". Science. 177 (4044): 166–168. Bibcode:1972Sci...177..166H. doi:10.1126/science.177.4044.166. PMID 17779917.
11. Hafele, J. C.; Keating, R. E. (July 14, 1972). "Around-the-World Atomic Clocks: Observed Relativistic Time Gains". Science. 177 (4044): 168–170. Bibcode:1972Sci...177..168H. doi:10.1126/science.177.4044.168. PMID 17779918.

## Primary sources

• Michell, John (1784). "On the means of discovering the distance, magnitude etc. of the fixed stars". Philosophical Transactions of the Royal Society. 74: 35–57. doi:10.1098/rstl.1784.0008.
# How do I append a graph to last column of a table? rmsdata= Partition[ReadList["RMStabledata.txt", Number],29] {{0, 2.96142, 36.7357, 9.05539, 9, 0, 0, 1, 1, 0, 2, -1, 1, -2, 2, -4, 2, -3, 1, -5, 1, -7, 1, -6, 0, -7, -1, -9, -1}, {1, 2.68942, 30.2974, 8.24621, 8, 0, 0, -2, 0, -4, 0, -5, 1, -7, 1, -6, 2, -8, 2, -9, 1, -10, 0, -8, 0, -7, -1, -8, -2}, {2, 3.04704, 38.8905, 10., 5, 0, 0, 1, 1, -1, 1, -2, 2, -4, 2, -3, 3, -2, 4, -3, 5, -5, 5, -6, 6, -7, 7, -6, 8}, {3, 2.0943, 18.3723, 4.24264, 14, 0, 0, 1, -1, 0, -2, -2, -2, -1, -1, -2, 0, -1, 1, 1, 1, 0, 2, -1, 3, -2, 4, -3, 3}, {4, 2.97999, 37.1979, 10., 9, 0, 0, 2, 0, 4, 0, 3, 1, 5, 1, 4, 2, 5, 3, 7, 3, 8, 4, 9, 5, 7, 5, 8, 6}, {5, 2.48083, 25.7801, 7.61577, 7, 0, 0, 2, 0, 4, 0, 3, 1, 2, 2, 0, 2, 1, 3, 2, 4, 4, 4, 6, 4, 5, 3, 7, 3}} sortedrmsdata = SortBy[rmsdata, #[[5]] &] xydata = Partition[Flatten[Take[sortedrmsdata, All, {6, 29}]], 2] gridedvalues = Take[sortedrmsdata, All, {1, 5}] col = {SpanFromBelow, SpanFromBelow, SpanFromBelow, SpanFromBelow, SpanFromBelow, SpanFromBelow}; Table[AppendTo[gridedvalues[[i]], col[[i]]], {i, 6}] grided = Prepend[ gridedvalues, {"Number", "RMS displacement", "Volume", "RMS end-end", "Interactions", "Graph"}] Grid[AppendTo[ grided, {SpanFromAbove, SpanFromAbove, SpanFromAbove, SpanFromAbove, SpanFromAbove, ListPlot[Partition[xydata, 12], Mesh -> All, MeshStyle -> PointSize[Large], Joined -> True, PlotLegend -> Take[sortedrmsdata, All, 1]]}], Frame -> All] Worked: col ={ListPlot[Partition[xydata, 12], Mesh -> All, MeshStyle -> PointSize[Large], Joined -> True], SpanFromAbove, SpanFromAbove, SpanFromAbove, SpanFromAbove, SpanFromAbove}; gridedvalues = Take[sortedrmsdata, All, {1, 5}];Table[AppendTo[gridedvalues[[i]], col[[i]]], {i, 6}]; grided = Prepend[gridedvalues, {"Number", "RMS displacement", "Volume", "RMS end-end", "Interactions", "Graph"}]; Grid[grided, Alignment -> Center, Frame -> All] - Right now this outputs a grid with 6 columns but instead of the graph being positioned in the entire last column it is in the bottom corner. –  MCF Aug 2 '13 at 17:52 I want the graph to be the last column where it reads Graph not the row. I tried posting a picture but I dont have enough reputation.Thank you –  MCF Aug 2 '13 at 18:10 Also, I only have 6 columns the last one that you show isnt showing on mine –  MCF Aug 2 '13 at 18:13 u have 7 columns and rmsdata=data i believe –  Rorschach Aug 2 '13 at 18:17 @nasser you have to have 10 rep or more befire you can pist images. –  Sjoerd C. de Vries Aug 2 '13 at 18:18 It might not be the complete solution,but I may be this way you can do it. gridedvalues = Take[sortedrmsdata, All, {1, 6}]; (*not 5*) I don't know what this SpanFromBelow means because it doesn't show up in documentation.But I have let it be there. Table[AppendTo[gridedvalues[[i]], col[[i]]], {i, 6}]; grided = Prepend[ gridedvalues, {"Number", "RMS displacement", "Volume", "RMS end-end", "Interactions", "Graph"}] Instead of adding it at end, append plot here, as under AppendTo[gridedvalues, ListPlot[Partition[xydata, 12], Mesh -> All, MeshStyle -> PointSize[Large]]]; Grid[AppendTo[grided, gridedvalues], Alignment -> Right, Frame -> All] - Not sure why but this didnt quite work for me but led me in the right direction. I edited my question with what worked for me. Thank you for your help. –  MCF Aug 2 '13 at 19:49 I am glad, I could help :) –  Rorschach Aug 2 '13 at 19:56
# Converting Between Percentages and Decimals

In this lesson, we convert between percentages and decimals. This means we solve problems involving converting percentages into decimals and decimals into percentages. Converting involves shifting the decimal point two places either to the left or to the right and then dropping or adding the percent sign.

Rules to convert a percentage to a decimal

• First, we write the percentage with a decimal point. For example, 12% = 12.0%
• Then shift the decimal point two places to the left. For example, 0.12%
• Then we drop the percentage sign. For example, 0.12.

So, 12% = 0.12

Rules to convert a decimal to a percentage

• First, we write the decimal. For example, 0.43
• Then shift the decimal point two places to the right. For example, 43.0
• Then we add the percentage sign. For example, 43.0%

So, 0.43 = 43.0% = 43%

Solving the problem: writing/converting 0.276 as a percentage

0.276 = 27.6%

Here we have moved the decimal point two places to the right to convert to a percentage.

Write 4.5% as a decimal

### Solution

Step 1: By definition of a percent, for any number x, x% = $\frac{x}{100}$

Step 2: To convert the percentage to a decimal, the decimal point is shifted two places to the left, and we put a 0 as a placeholder as we are one digit short. Then the percent sign is dropped. So, 4.5% = 0.045

Write 0.378 as a percentage

### Solution

Step 1: To convert a decimal to a percentage we multiply it by 100. This is the same as shifting the decimal point two places to the right and adding a percent sign.

Step 2: So, 0.378 = 37.8%

Write 127% as a decimal

### Solution

Step 1: By definition of a percent, for any number x, x% = $\frac{x}{100}$

Step 2: To convert the percentage to a decimal, the decimal point is shifted two places to the left. Then the percent sign is dropped. So, 127% = 127.0% = 1.27
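Both rules amount to dividing or multiplying by 100. As a sketch, the conversions in this lesson can be checked in R:

```r
percent_to_decimal <- function(p) p / 100   # 12%  -> 0.12
decimal_to_percent <- function(d) d * 100   # 0.43 -> 43%

percent_to_decimal(c(12, 4.5, 127))   # 0.120 0.045 1.270
decimal_to_percent(c(0.276, 0.378))   # 27.6 37.8
```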
## College Algebra (11th Edition)

The parent function is $f(x)=x^2$ (in red), but the graph of $g(x)=2(x^2)$ (in blue) will be narrower, as the coefficient multiplies each y-value of the parent function by $2$. By the use of graph transformation techniques, the graph is stretched vertically by a factor of $2$. For drawing the exact graph of the parent function, here is the table of values: $f(-2)=(-2)^2=4$ $f(-1)=(-1)^2=1$ $f(0)=0^2=0$ $f(1)=1^2=1$ $f(2)=2^2=4$
# Graph of y = sec x

y = sec x is a periodic function. The period of y = sec x is 2π. Therefore, we will draw the graph of y = sec x in the interval [-π, 2π]. For this, we need to take the different values of x at intervals of 10°. Then by using the table of natural cosines we will get the corresponding values of cos x, and hence sec x = 1/cos x. Take the values correct to two places of decimals. The values of sec x for the different values of x in the interval [-π, 2π] are given in the following table. We draw two mutually perpendicular straight lines XOX' and YOY'. XOX' is called the x-axis, which is a horizontal line. YOY' is called the y-axis, which is a vertical line. Point O is called the origin. Now represent the angle (x) along the x-axis and y (or sec x) along the y-axis. Along the x-axis: Take 1 small square = 10°. Along the y-axis: Take 10 small squares = 1 unit. Now plot the above tabulated values of x and y on the co-ordinate graph paper. Then join the points by free hand. The continuous curve obtained by free-hand joining is the required graph of y = sec x. Properties of y = sec x: (i) The graph of the function y = sec x is not a continuous graph, but consists of an infinite number of separate branches; the points of discontinuity are at x = (2n + 1)$$\frac{π}{2}$$, where n = 0, ±1, ±2, ±3, ±4, ……………... . The straight lines parallel to the y-axis at these points of discontinuity are asymptotes to the different branches of the curve. (ii) Comparing the cosecant graph and the secant graph, we see that the cosecant graph coincides with the secant graph if the former is shifted to the left through 90°; this is due to the fact that csc (90° + x) = sec x. (iii) No part of the graph lies between the lines y = 1 and y = -1, since |sec x| ≥ 1. (iv) The portion of the graph between 0 to 2π is repeated over and over again on either side, since the function y = sec x is periodic of period 2π.
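A quick way to reproduce the graph and its separate branches is sketched below in R; blanking out very large values breaks the curve near the asymptotes instead of joining branches, and the y-limits just keep the branches visible.

```r
x <- seq(-pi, 2 * pi, length.out = 2000)
y <- 1 / cos(x)                 # sec x
y[abs(y) > 10] <- NA            # break the curve near the asymptotes

plot(x, y, type = "l", ylim = c(-6, 6),
     xlab = "x (radians)", ylab = "sec x", main = "y = sec x")
abline(h = c(-1, 1), lty = 3)               # the |sec x| >= 1 gap
abline(v = c(-1, 1, 3) * pi / 2, lty = 2)   # asymptotes at odd multiples of pi/2
```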
# How to find the values of the following forces when it isn't given?

2 posts / 0 new

Alun

How to find the values of the following forces when it isn't given?

Jhun Vert

I think you just express the moment at A in terms of Fn. Assuming counterclockwise moment as positive:

$M_A = 4\left( \frac{3}{5}F_1 \right) - 5\left( \frac{4}{5}F_1 \right) - 1(F_2) + 5\left( \frac{5}{13}F_3 \right) + 2\left( \frac{12}{13}F_3 \right)$
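Since the problem does not give the force magnitudes, the moment can only be left symbolic, or evaluated once trial values are chosen. The R sketch below just shows the bookkeeping with made-up magnitudes (F1, F2, F3 are hypothetical, not from the problem):

```r
# Hypothetical magnitudes, for illustration only:
F1 <- 100; F2 <- 50; F3 <- 130   # N

MA <- 4 * (3/5) * F1 - 5 * (4/5) * F1 - 1 * F2 +
      5 * (5/13) * F3 + 2 * (12/13) * F3
MA   # counterclockwise-positive moment about A, in N*m
```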
# 1 Introduction

You will probably be familiar with multiple testing procedures that take a set of p-values and then calculate adjusted p-values. Given a significance level $$\alpha$$, one can then declare the rejected hypotheses. In R this is most commonly done with the p.adjust function in the stats package. Similarly, IHW (Independent Hypothesis Weighting) is a multiple testing procedure, but in addition to the p-values it allows you to specify a covariate for each test. The covariate should be informative of the power or prior probability of each individual test, but is chosen such that the p-values for those hypotheses that are truly null do not depend on the covariate (Ignatiadis et al. 2016). Therefore the input of IHW is the following:

• a vector of p-values (of length $$m$$),
• a matching vector of covariates,
• the significance level $$\alpha \in (0,1)$$ at which the False Discovery Rate should be controlled.

IHW then calculates weights for each p-value (non-negative numbers $$w_i \geq 0$$ that average to 1, $$\sum_{i=1}^m w_i = m$$). IHW also returns a vector of adjusted p-values by applying the procedure of Benjamini and Hochberg (BH) to the weighted p-values $$P^\text{weighted}_i = \frac{P_i}{w_i}$$.

The weights allow different prioritization of the individual hypotheses, based on their covariate. This means that the ranking of hypotheses with p-value weighting is in general different than without. Two hypotheses with the same p-value can have different weighted p-values: the one with the higher weight will have a smaller value of $$P^\text{weighted}_i$$, and consequently it can even happen that one but not the other gets rejected by the subsequent BH procedure.

Let's see how to use the IHW package in analysing RNA-Seq data for differential gene expression, and then also mention some other examples where the method is applicable.

# 2 IHW and DESeq2

## 2.1 IHW for FDR control

We analyze the airway RNA-Seq dataset using DESeq2 (Love, Huber, and Anders 2014).

library("ggplot2")
library("methods")
library("airway")
library("DESeq2")
data("airway")
dds <- DESeqDataSet(se = airway, design = ~ cell + dex)
dds <- DESeq(dds)
de_res <- as.data.frame(results(dds))

The output is a data.frame object, which includes the following columns for each gene:

colnames(de_res)
## [1] "baseMean"       "log2FoldChange" "lfcSE"          "stat"
## [5] "pvalue"         "padj"

In particular, we have p-values and baseMean (i.e., the mean of normalized counts) for each gene. As argued in the DESeq2 paper, these two statistics are approximately independent under the null hypothesis. Thus we have all the ingredients necessary for an IHW analysis (p-values and covariates), which we will apply at a significance level of 0.1.

library("IHW")
ihw_res <- ihw(pvalue ~ baseMean, data = de_res, alpha = 0.1)

This returns an object of the class ihwResult. We can get, e.g., the total number of rejections:

rejections(ihw_res)
## [1] 4873

And we can also extract the adjusted p-values:

head(adj_pvalues(ihw_res))
## [1] 0.001130506 NA 0.160424028 0.880323612 1.000000000 1.000000000
sum(adj_pvalues(ihw_res) <= 0.1, na.rm = TRUE) == rejections(ihw_res)
## [1] TRUE

We can compare this to the result of applying the method of Benjamini and Hochberg to the p-values only:

padj_bh <- p.adjust(de_res$pvalue, method = "BH")
sum(padj_bh <= 0.1, na.rm = TRUE)
## [1] 4081

IHW produced quite a few more rejections than that. How did we get this power? Essentially it was possible by assigning appropriate weights to each hypothesis.
We can retrieve the weights as follows:

head(weights(ihw_res))
## [1] 2.116234 NA 2.429560 2.292776 1.502119 0.000000

Internally, what happened was the following: We split the hypotheses into $$n$$ different strata (here $$n=22$$) based on increasing value of baseMean, and we also randomly split them into $$k$$ folds (here $$k=5$$). Then, for each combination of fold and stratum, we learned the weights. The discretization into strata facilitates the estimation of the distribution function conditionally on the covariate and the optimization of the weights. The division into random folds helps us to avoid overfitting the data, something which can result in loss of control of the False Discovery Rate (Ignatiadis et al. 2016). The values of $$n$$ and $$k$$ can be accessed through:

c(nbins(ihw_res), nfolds(ihw_res))
## [1] 22 5

In particular, each hypothesis test gets assigned a weight depending on the combination of its assigned fold and stratum. We can also see this internal representation of the weights as an ($$n \times k$$) matrix:

weights(ihw_res, levels_only = TRUE)
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## [2,] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## [3,] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## [4,] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## [5,] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## [6,] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## [7,] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## [8,] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## [9,] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## [10,] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## [11,] 0.3012165 0.1929644 0.4853261 0.4907848 0.2956142
## [12,] 0.3012165 0.5013695 0.4676253 0.4907848 0.4961546
## [13,] 0.6227621 0.7859151 1.2021934 1.3173403 0.7777406
## [14,] 1.2640912 1.5021192 1.2021934 1.3835750 1.7530250
## [15,] 2.1978658 2.4505372 2.2772911 2.7822738 2.3312196
## [16,] 2.1562363 2.4505372 2.2927765 2.7822738 2.3312196
## [17,] 3.4526740 2.4505372 2.2927765 2.4383975 2.3312196
## [18,] 2.9427856 2.5112086 2.2927765 2.4383975 2.4295601
## [19,] 2.4829555 2.5112086 2.1908606 2.1162338 2.4295601
## [20,] 2.4829555 2.5112086 2.3106615 2.5113523 2.4295601
## [21,] 1.6923820 2.2724171 2.1105565 1.5544364 2.1091202
## [22,] 1.8961571 2.2724171 2.1105565 2.2540883 2.1091202

### 2.1.1 Diagnostic plot: estimated weights

plot(ihw_res)

We see that the general trend is driven by the covariate (stratum) and not as much by the fold. Recall that IHW assumes that the "optimal" weights should be a function of the covariate (and hence the stratum) only. Therefore, the weight functions calculated on random (overlapping) splits of the data should behave similarly, while there should be no trend driven by the folds. Also as expected, genes with very low baseMean count get assigned a weight of 0, while genes with high baseMean count get prioritized.
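Returning to the stratification mechanics described above: a quantile-based binning with random folds (our own sketch, not the actual IHW internals) could look as follows. The unique() call guards against tied quantile breaks, which are common for skewed covariates such as baseMean.

# Sketch: quantile-based strata and random folds (illustrative only)
n_strata <- 22
n_folds  <- 5
breaks  <- unique(quantile(de_res$baseMean, probs = seq(0, 1, length.out = n_strata + 1)))
stratum <- cut(de_res$baseMean, breaks = breaks, include.lowest = TRUE, labels = FALSE)
fold    <- sample(rep_len(seq_len(n_folds), nrow(de_res)))
table(stratum, fold)[1:3, ]  # hypothesis counts for the first three strata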
### 2.1.2 Diagnostic plot: raw versus adjusted p-values

gg <- ggplot(as.data.frame(ihw_res),
             aes(x = pvalue, y = adj_pvalue, col = group)) +
  geom_point(size = 0.25) +
  scale_colour_hue(l = 70, c = 150, drop = FALSE)
gg

gg %+% subset(as.data.frame(ihw_res), adj_pvalue <= 0.2)

As you can see above, the ihwResult object ihw_res can be converted to a data.frame, which contains the following columns:

ihw_res_df <- as.data.frame(ihw_res)
colnames(ihw_res_df)
## [1] "pvalue" "adj_pvalue" "weight" "weighted_pvalue"
## [5] "group" "covariate" "fold"

## 2.2 IHW for FWER control

The standard IHW method presented above controls the FDR by using a weighted Benjamini-Hochberg procedure with data-driven weights. The same principle can be applied for FWER control by using a weighted Bonferroni procedure. Everything works exactly as above by using the keyword argument adjustment_type. For example:

ihw_bonf <- ihw(pvalue ~ baseMean, data = de_res, alpha = 0.1, adjustment_type = "bonferroni")

# 3 Choice of a covariate

## 3.1 Necessary criteria for choice of a covariate

In which cases is IHW applicable? Whenever we have a covariate that is:

1. informative of power,
2. independent of the p-values under the null hypothesis,
3. not notably related to the dependence structure (if there is any) of the joint test statistics.

## 3.2 A few examples of such covariates

Below we summarize some examples where such a covariate is available:

• For row-wise $$t$$-tests we can use the overall (row-wise) variance (Bourgon, Gentleman, and Huber 2010).
• For row-wise rank-based tests (e.g. Wilcoxon) we can use any function that does not depend on the order of arguments (Bourgon, Gentleman, and Huber 2010).
• In DESeq2, we can use baseMean, as illustrated above (Love, Huber, and Anders 2014).
• In eQTL analysis we can use the SNP-gene distance, the DNAse sensitivity, a HiC score, etc. (Ignatiadis et al. 2016).
• In genome-wide association studies (GWAS), the allele frequency.
• In quantitative proteomics with mass spectrometry, the number of peptides (Ignatiadis et al. 2016).

## 3.3 Why are the different covariate criteria necessary?

The power gains of IHW are related to property 1, while its statistical validity relies on properties 2 and 3. For many practically useful combinations of covariates with test statistics, property 1 is easy to prove (e.g. through Basu's theorem, as in the $$t$$-test / variance example), while for others it follows from the use of deterministic covariates and well-calibrated p-values (as in the SNP-gene distance example). Property 3 is more complicated from a theoretical perspective, but rarely presents a problem in practice: in particular, when the covariate is well thought out, and when the test statistic is such that it would also be suitable for the Benjamini-Hochberg method without weighting.

If one expects strong correlations among the tests, then one should take care to use a covariate that is not a driving force behind these correlations. For example, in genome-wide association studies, the genomic coordinate of each SNP tested is not a valid covariate, because the position is related to linkage disequilibrium (LD) and thus to correlation among tests. On the other hand, in eQTL analysis, the distance between SNPs and phenotype (i.e. transcribed gene) is not directly related to (i.e. does not increase or decrease) any potential correlations between test statistics, and thus is a valid covariate.
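A rough numeric companion to criterion 2 (our own heuristic, not from the IHW documentation): among hypotheses with large p-values, which are mostly true nulls, the covariate should carry essentially no information about the p-value.

# Heuristic check: covariate vs. p-value association among likely nulls
null_ish <- subset(de_res, pvalue > 0.5)
cor(null_ish$baseMean, null_ish$pvalue, method = "spearman")
# a value near 0 is consistent with criterion 2; a strong correlation is a warning sign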
## 3.4 Diagnostic plots for the covariate

Below we describe a few useful diagnostics to check whether the criteria for the covariates hold. If any of these are violated, then one should not use IHW with the given covariate.

### 3.4.1 Scatter plots

To check whether the covariate is informative about power under the alternative (property 1), one should plot the p-values (or, usually better, $$-\log_{10}(\text{p-value})$$) against the ranks of the covariate:

de_res <- na.omit(de_res)
de_res$geneid <- as.numeric(gsub("ENSG[+]*", "", rownames(de_res)))

# set up data frame for plotting
df <- rbind(data.frame(pvalue = de_res$pvalue,
                       covariate = rank(de_res$baseMean)/nrow(de_res),
                       covariate_type = "base mean"),
            data.frame(pvalue = de_res$pvalue,
                       covariate = rank(de_res$geneid)/nrow(de_res),
                       covariate_type = "gene id"))

ggplot(df, aes(x = covariate, y = -log10(pvalue))) +
  geom_hex(bins = 100) +
  facet_grid(. ~ covariate_type)

On the left, we plotted $$-\log_{10}(\text{p-value})$$ against the (normalized) ranks of the base mean of normalized counts. This was the covariate we used in our DESeq2 example above. We see a clear trend: low p-values are enriched at high covariate values. For very low covariate values, there are almost no small p-values. This indicates that the base mean covariate is correlated with power under the alternative. The right plot, on the other hand, uses a less useful statistic: the gene identifiers interpreted as numbers. Here, there is no obvious trend to be detected.

### 3.4.2 Stratified p-value histograms

One of the most useful diagnostic plots is the p-value histogram (before applying any multiple testing procedure). We first do this for our DESeq2 p-values:

ggplot(de_res, aes(x = pvalue)) + geom_histogram(binwidth = 0.025, boundary = 0)

This is a well calibrated histogram. As expected, for large p-values (e.g., for p-values $$\geq 0.5$$) the distribution looks uniform. This part of the histogram corresponds mainly to null p-values. On the other hand, there is a peak close to 0. This is due to the alternative hypotheses and can be observed whenever the tests have enough power to detect the alternative. In particular, in the airway dataset, as analyzed with DESeq2, we have a lot of power to detect differentially expressed genes. If you are not familiar with these concepts, and more generally with interpreting p-value histograms, we recommend reading David Robinson's blog post.

Now, when applying IHW with covariates, it is instructive to check not only the histogram over all p-values, but also the histograms stratified by the covariate. Here we split the hypotheses by the base mean of normalized counts into a few strata and then visualize the conditional histograms:

de_res$baseMean_group <- groups_by_filter(de_res$baseMean, 8)

ggplot(de_res, aes(x = pvalue)) +
  geom_histogram(binwidth = 0.025, boundary = 0) +
  facet_wrap(~ baseMean_group, nrow = 2)

Notice that all of these histograms are well calibrated, since all of them show a uniform distribution at large p-values. In many realistic examples, if this is the case, then IHW will control the FDR. Thus, this is a good check of whether properties 2 and 3 hold. In addition, these conditional histograms also illustrate whether property 1 holds: notice that as we move to strata corresponding to higher mean counts, the peak close to 0 becomes taller and the height of the uniform tail becomes lower. This means that the covariate is associated with power under the alternative.
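The same pattern can be summarized numerically (our own addition, not from the original vignette): if criterion 1 holds, the proportion of small p-values per stratum should increase with the covariate.

# Fraction of p-values below 0.01 within each baseMean stratum
tapply(de_res$pvalue < 0.01, de_res$baseMean_group, mean)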
The empirical cumulative distribution functions (ECDFs) offer a variation of this visualisation. Here, one should check whether the curves can be easily distinguished and whether they are almost linear for high p-values.

ggplot(de_res, aes(x = pvalue, col = baseMean_group)) + stat_ecdf(geom = "step")

Finally, as an example of an invalid covariate, we use the estimated log fold change. Of course, this is not independent of the p-values under the null hypothesis. We confirm this by plotting conditional histograms / ECDFs, which are not well calibrated:

de_res$lfc_group <- groups_by_filter(abs(de_res$log2FoldChange), 8)

ggplot(de_res, aes(x = pvalue)) +
  geom_histogram(binwidth = 0.025, boundary = 0) +
  facet_wrap(~ lfc_group, nrow = 2)

ggplot(de_res, aes(x = pvalue, col = lfc_group)) + stat_ecdf(geom = "step")

For more details regarding the choice and diagnostics of covariates, please also consult the Independent Filtering paper (Bourgon, Gentleman, and Huber 2010), as well as the genefilter vignettes.

# 4 Advanced usage: Working with incomplete p-value lists

So far, we have assumed that a complete list of p-values is available, i.e. one p-value per hypothesis. However, this information is not always available or practical:

• This can be related to the software tools used for the calculation of the p-values. For example, as noted in (Ochoa et al. 2015), some tools, such as HMMER, only return the lowest p-values. In addition, other tools, such as MatrixEQTL (Shabalin 2012), by default only return p-values below a pre-specified threshold, for example all p-values below $$10^{-5}$$. In the case of HMMER, this is done because higher p-values are not reliable, while for MatrixEQTL it reduces storage requirements.
• Even if p-values for all hypotheses are available, explicit computation on them might exhaust the available computing resources (in particular, working memory).

Since rejections take place at low p-values (at the tails of the p-value distribution), we do not lose a lot of information by discarding the high p-values from the analysis, as long as we keep track of how many of them have been omitted. Thus, the above situations can be easily handled.

Before proceeding with the walkthrough for handling such cases with IHW, we quickly review how this is handled by p.adjust. We first simulate some data, where the power under the alternative depends on a covariate. The p-values are calculated by a simple one-sided z-test.

set.seed(1)
X <- runif(100000, min = 0, max = 2.5)     # covariate
H <- rbinom(100000, size = 1, prob = 0.1)  # hypothesis true or false
Z <- rnorm(100000, mean = H*X)             # Z-score
pvalue <- 1 - pnorm(Z)                     # pvalue
sim <- data.frame(X = X, H = H, Z = Z, pvalue = pvalue)

We can apply the Benjamini-Hochberg procedure to these p-values:

padj <- p.adjust(sim$pvalue, method = "BH")
sum(padj <= 0.1)
## [1] 501

Now assume we only have access to the p-values $$\leq 0.1$$:

filter_threshold <- 0.1
selected <- which(pvalue <= filter_threshold)
pvalue_filt <- pvalue[selected]

Then we can still use p.adjust, as long as we inform it of how many hypotheses were really tested (not just the ones with p-value $$\leq 0.1$$). We specify this by setting the n function argument.

padj_filt <- p.adjust(pvalue_filt, method = "BH", n = length(pvalue))
qplot(padj[selected], padj_filt)
sum(padj_filt <= 0.1)
## [1] 501

We see that we get exactly the same number of rejections as when we used the whole p-value vector as input. The same approach can also be used with IHW, but it is slightly more complicated.
In particular, we need to provide information about how many hypotheses were tested at each given value of the covariate. This means that there are two modifications to the standard IHW workflow:

• If a numeric covariate is provided, IHW internally discretizes it and in this way bins the hypotheses into groups (strata). For the advanced functionality, this discretization has to be done manually by the user. In other words, the covariate provided by the user has to be a factor. For this, the convenience function groups_by_filter is provided, which returns a factor that stratifies a numeric covariate into a given number of groups with approximately the same number of hypotheses in each of the groups. This is a very simple function, largely equivalent to cut(., quantile(., probs = seq(0, 1, length.out = nbins))).
• For the algorithm to work correctly, it is necessary to know the total number of hypotheses in each of the bins. However, if filtered p-values are used, IHW obviously cannot infer the number of hypotheses per bin automatically. Therefore, the user has to specify the number of hypotheses per bin manually via the m_groups option. (When there is only 1 bin, IHW reduces to BH, and m_groups is equivalent to the n argument of p.adjust.) For example, when the whole grouping factor is available (e.g. when it was generated by applying groups_by_filter to the full vector of covariates), then one can apply the table function to it to calculate the number of hypotheses per bin. This is then used as input for the m_groups argument. More elaborate strategies might be needed in more complicated cases, e.g. when even the full vector of covariates does not fit into RAM.

nbins <- 20
sim$group <- groups_by_filter(sim$X, nbins)
m_groups <- table(sim$group)

Now we can subset our data frame to keep only the low p-values and then apply IHW with the manually specified m_groups:

sim_filtered <- subset(sim, sim$pvalue <= filter_threshold)
ihw_filt <- ihw(pvalue ~ group, data = sim_filtered, alpha = .1, m_groups = m_groups)
rejections(ihw_filt)
## [1] 947

# References

Bourgon, Richard, Robert Gentleman, and Wolfgang Huber. 2010. "Independent Filtering Increases Detection Power for High-Throughput Experiments." Proceedings of the National Academy of Sciences 107 (21). National Acad Sciences: 9546–51.

Ignatiadis, Nikolaos, Bernd Klaus, Judith B Zaugg, and Wolfgang Huber. 2016. "Data-Driven Hypothesis Weighting Increases Detection Power in Genome-Scale Multiple Testing." Nature Methods. doi:10.1038/nmeth.3885.

Love, Michael I, Wolfgang Huber, and Simon Anders. 2014. "Moderated Estimation of Fold Change and Dispersion for RNA-Seq Data with DESeq2." Genome Biology 15 (12). BioMed Central Ltd: 550.

Ochoa, Alejandro, John D Storey, Manuel Llinás, and Mona Singh. 2015. "Beyond the E-Value: Stratified Statistics for Protein Domain Prediction." PLoS Comput Biol 11 (11). Public Library of Science: e1004509.

Shabalin, Andrey A. 2012. "Matrix EQTL: Ultra Fast EQTL Analysis via Large Matrix Operations." Bioinformatics 28 (10). Oxford Univ Press: 1353–8.
# 2017 USAJMO Problems/Problem 2

## Problem

Consider the equation $$\left(3x^3 + xy^2 \right) \left(x^2y + 3y^3 \right) = (x-y)^7.$$

(a) Prove that there are infinitely many pairs $(x,y)$ of positive integers satisfying the equation.

(b) Describe all pairs $(x,y)$ of positive integers satisfying the equation.
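A sketch of one standard approach (our own outline, not the official wiki solution; the details deserve checking). Substitute the sum and difference $$u = x+y$$, $$v = x-y$$. Then

$$xy = \frac{u^2-v^2}{4}, \qquad 3x^2+y^2 = u^2+uv+v^2, \qquad x^2+3y^2 = u^2-uv+v^2,$$

so the left side factors as

$$\left(3x^3 + xy^2\right)\left(x^2y + 3y^3\right) = xy\,(3x^2+y^2)(x^2+3y^2) = \frac{(u^2-v^2)(u^4+u^2v^2+v^4)}{4} = \frac{u^6-v^6}{4},$$

and the equation becomes $$u^6 = v^6(4v+1)$$. Since $$\gcd(v, 4v+1) = 1$$, writing $$u = vt$$ forces $$t^6 = 4v+1$$ for some positive integer $$t$$, necessarily odd. Conversely, every odd $$t \ge 3$$ yields

$$(x,y) = \left(\frac{v(t+1)}{2},\ \frac{v(t-1)}{2}\right), \qquad v = \frac{t^6-1}{4},$$

which gives infinitely many pairs for part (a), and (subject to verifying the parity details) these pairs should be exactly the solutions asked for in part (b). For example, $$t=3$$ gives $$(x,y) = (364, 182)$$.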
# Proton NMR of tert-amyl alcohol

I just started learning about proton NMR. According to Molbase, the $\mathrm{^1H}$ NMR data for tert-amyl alcohol (2-methylbutan-2-ol) show four kinds of protons, at 0.9 ppm, 1.24 ppm, 1.44 ppm, and 3.65 ppm. However, this seems contradictory to what I learnt, since the protons on the secondary carbon (1.44 ppm) are downfield of the protons on the tertiary carbon (1.24 ppm), which confuses me, since I learnt that protons on tertiary carbons experience more deshielding. Is there something I have missed in my thought process?

EDIT: Thanks to @orthocresol and @Buttonwood for pointing out my error. My revised question would then be: between the two types of protons on secondary carbons (1.24 and 1.44 ppm), why is it that the one at 1.44 ppm is more deshielded? From what I understand, the deshielding arises because carbon is more electronegative than hydrogen, but at the same time, from organic chemistry, I learnt that alkyl groups are electron-donating, which should cause shielding. Is there some misconception in my thinking?

• (1) Since you're just starting, it's a great time for you to unlearn the terms "downfield" and "upfield". These are ancient terms which come from the era of continuous-wave NMR and for some reason have not been erased from the literature yet. Stick to "deshielded" and "shielded"; they're much more easily understood. (2) In tert-amyl alcohol, I don't see any protons attached to tertiary carbons; only a secondary carbon, and two different types of methyl carbons. Could you clarify? – orthocresol Aug 8 '20 at 15:04
• In addition to @orthocresol's comment, your reference accounts for four different types of protons. Both in the visual representation as well as in the molecular-structure-like one, there is the one shown at 3.65 ppm (which, depending on the experimental circumstances, either a) could equally be recorded at a different position, or b) could be invisible). – Buttonwood Aug 8 '20 at 15:14
• As a beginner to NMR, you should be aware of how important integration values are in $\mathrm{^1H}$ NMR. If you knew and had them in your NMR, you could easily have identified these peaks. – Mathew Mahindaratne Aug 8 '20 at 16:09
• I recommend you add to your question a list with all peaks, their integrals, and the groups they belong to. The description in your text is not correct: the molecule has only one secondary carbon atom. – Karl Aug 8 '20 at 16:54
• There are only two kinds of protons: those on primary carbon(s) and those on the secondary carbon. – Mathew Mahindaratne Aug 8 '20 at 16:57
# Vector class design

## Recommended Posts

Hi! I'm currently making a game engine and I've just made the classes Vector2, Vector3 and Vector4. I have two questions:

First, I have templated my vector classes so they can hold any type of data. I am wondering if I'm going to too much trouble doing so. That would also mean templated matrix classes and templated math operations. Should I just keep it to floats only?

Second, I am using anonymous unions and structures in my vector:

class Vector4
{
public:
    union
    {
        float i[4];
        struct { float x, y, z, w; };
        struct { float r, g, b, a; };
    };
};

I do know that anonymous unions are not standard C++, but they are very practical. I'm not currently intending to port my engine, but that might happen. In this case, should I completely avoid anonymous unions and structs? I've seen OGRE's vector classes using them and it is still cross-platform. Can I find Linux and Macintosh compilers supporting this feature?

Thanks a lot!

##### Share on other sites

Quote: I am wondering if I'm going to too much trouble doing so.

Too much trouble? It should be about the same unless you plan on supporting SIMD for multiple types (MMX for integers, SSE for floats, etc.).

Quote: I do know that anonymous unions are not standard C++, but they are very practical.

I'm not sure I get you. How are they practical? You'll most likely get padding problems if you try having references/pointers to different members of the union at the same time.

EDIT: If you don't know how to provide both by-name (x, y, z) and indexed access, look at this standard C++ code with well-defined behavior.

Quote: In this case, should I completely avoid anonymous unions and structs?

Should I avoid using "u" instead of "you" when writing English? Of course, both are understandable; the former will however make me look like an immature child, while the latter makes me seem like I'm actually using English instead of pretending I am.

Quote: I've seen OGRE's vector classes using them and it is still cross-platform.

I might get flamed for this, but Ogre isn't the best C++ code available. The very first line of every header file isn't guaranteed to be valid (all identifiers starting with _ followed by an uppercase letter or another _ are reserved for the compiler). Also, cross-platformness only means it works on more than one platform (this could be just two), and how well isn't defined. Do you expect Vector4::i[0] to be equal to Vector4::x? Well, it isn't guaranteed to be; actually nothing is guaranteed, since you aren't programming C++ anymore. One of them could be padded, debug information could be inserted, etc. Imagine all the nights you're going to stay up trying first to figure out such a bug and then to force ALL your compilers to behave properly when given improper code.

Also, why are r, g, b and a members of the vector? This isn't a color, and you'll run into many problems if you try to pretend it is.

##### Share on other sites

Quote: Original post by Trillian
Second, I am using anonymous unions and structures in my vector:

class Vector4
{
public:
    union
    {
        float i[4];
        struct { float x, y, z, w; };
        struct { float r, g, b, a; };
    };
};

I do know that anonymous unions are not standard C++ ...
Thanks a lot!

Anonymous unions are standard C++ but anonymous unions with nested types are not.
So the above could be written:

class Vector4
{
public:
    union
    {
        float i[4];
        float x, y, z, w;
        float r, g, b, a;
    };
};

##### Share on other sites

Quote: Original post by CmpDev
Anonymous unions are standard C++ but anonymous unions with nested types are not. So the above could be written:
*** Source Snippet Removed ***

Yes, you could write it that way, but the result would be completely different.

##### Share on other sites

I think that I am going to drop the template idea and use your array indexing technique. This is a good compromise for me, so thanks a lot!

As for the colors, do you think I should make a similar class for colors containing r, g, b, a floats? I found grouping them in a single class practical, but I understand it might not be the greatest idea.

##### Share on other sites

I'm ignorant as to why it would be different, Promit. Could you explain please?

##### Share on other sites

Quote: Original post by Trillian
I think that I am going to drop the template idea and use your array indexing technique. This is a good compromise for me, so thanks a lot!

You could still use templates, but of course floats are going to work fine for most purposes.

Quote: As for the colors, do you think I should make a similar class for colors containing r, g, b, a floats? I found grouping them in a single class practical, but I understand it might not be the greatest idea.

The problem here is that even though the intuitive representation for both of them is similar, their purpose, interface and type shouldn't be the same. It gets way too easy to confuse vectors and colors. Additionally, you might need to change the representation for one of them later; what if suddenly you want vectors of double extended precision (it might be the native format for your FPU) and colors of single precision (to reduce bus traffic)? So even though they might seem similar, they aren't, and they should be separated. Initially a color could just be a Vector4 internally, but with the possibility to change it later:

class ColorARGB
{
    Vector4 data;
public:
    // ...
};

##### Share on other sites

Quote: Original post by CmpDev
I'm ignorant as to why it would be different, Promit. Could you explain please?

Because i[0], r, g, b, a, x, y, z and w all occupy the same space in memory.

##### Share on other sites

EDIT: Too slow; that is what happens when you try to quote the standard.

Quote: Original post by CmpDev
I'm ignorant as to why it would be different, Promit. Could you explain please?

float i[4];
float x, y, z, w;
float r, g, b, a;

is equivalent to:

float i[4];
float x;
float y;
float z;
float w;
float r;
float g;
float b;
float a;

See the first page of Declarators (chapter 8) in the C++ standard.

Quote: A declarator declares a single object, function, or type, within a declaration. The init-declarator-list appearing in a declaration is a comma-separated sequence of declarators, each of which can have an initializer.

init-declarator-list:
    init-declarator
    init-declarator-list , init-declarator
init-declarator:
    declarator initializer_opt

...

Each init-declarator in a declaration is analyzed separately as if it was in a declaration by itself [85].

...

[85] A declaration with several declarators is usually equivalent to the corresponding sequence of declarations each with a single declarator. That is,

T D1, D2, ... Dn;

is usually equivalent to

T D1; T D2; ... T Dn;

where T is a decl-specifier-seq and each Di is an init-declarator. The exception occurs when a name introduced by one of the declarators hides a type name used by the decl-specifiers, so that when the same decl-specifiers are used in a subsequent declaration, they do not have the same meaning, as in

struct S { ... };
S S, T; // declare two instances of struct S

which is not equivalent to

struct S { ... };
S S;
S T; // error

##### Share on other sites

Quote: Original post by Brother Bob
Quote: Original post by CmpDev
I'm ignorant as to why it would be different, Promit. Could you explain please?
Because i[0], r, g, b, a, x, y, z and w all occupy the same space in memory.

lol that was too simple, why didn't I think of that :(