text
stringlengths
100
356k
# Test: Geometric Series I - Challenging Double click on maths expressions to zoom Question 1:   In the geometric series $\text{\hspace{0.17em}}\left({a}_{n}\right)\text{\hspace{0.17em}}$ it’s known that $\text{\hspace{0.17em}}{a}_{10}=2$ . Find product of the nineteen first terms of this progression. $38$ ${2}^{10}$ ${2}^{19}$ $3.8$ Question 2:   The second term of the geometric series is $\text{\hspace{0.17em}}{a}_{2}=4$ . Find the product of three first terms of this progression. $128$ $32$ $16$ $64$ Question 3:   Find the initial term and the common ratio of a geometric series $\text{\hspace{0.17em}}\left({a}_{n}\right)$ , if $\text{\hspace{0.17em}}{a}_{5}=3{a}_{3};\text{\hspace{0.17em}}{a}_{6}-{a}_{2}=48$ . ${a}_{1}=2\sqrt{3}\text{\hspace{0.17em}}$ , $\text{\hspace{0.17em}}r=\sqrt{3}$ ${a}_{1}=-2\sqrt{3}\text{\hspace{0.17em}}$ , $\text{\hspace{0.17em}}r=\sqrt{3}\text{\hspace{0.17em}}\text{\hspace{0.17em}}$ or $\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{1}=2\sqrt{3}\text{\hspace{0.17em}}$ , $\text{\hspace{0.17em}}r=-\sqrt{3}\text{\hspace{0.17em}}$ ${a}_{1}=2\sqrt{3}\text{\hspace{0.17em}}$ , $\text{\hspace{0.17em}}r=\sqrt{3}\text{\hspace{0.17em}}\text{\hspace{0.17em}}$ or $\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{1}=-2\sqrt{3}\text{\hspace{0.17em}}$ , $\text{\hspace{0.17em}}r=-\sqrt{3}$ ${a}_{1}=0.2\text{\hspace{0.17em}}$ , $\text{\hspace{0.17em}}r=\sqrt{3}$ Question 4:   Find the initial term and the common ratio of a geometric sequence which consists of $\text{\hspace{0.17em}}6\text{\hspace{0.17em}}$ terms. The sum of the first three terms is $\text{\hspace{0.17em}}168$ , the sum of the last three terms is $\text{\hspace{0.17em}}21\text{\hspace{0.17em}}$ . Question 5:   The sum of three positive sequent numbers that make up an arithmetic series $\text{\hspace{0.17em}}\left({a}_{n}\right)\text{\hspace{0.17em}}$ is $\text{\hspace{0.17em}}21$ . If you add to these numbers, they would make up a geometric series $\text{\hspace{0.17em}}\left({b}_{n}\right)$ . Find these initial numbers. Please note, you have solved only half of the test. For the complete test get a Must Have account. Get started
• Create Account # Global variables in comparison to #define Old topic! Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic. 28 replies to this topic ### #1littletray26  Members   -  Reputation: 267 Like 2Likes Like Posted 05 July 2012 - 04:50 PM As far as I can understand, if you use the pre processor command #define, for example, #define money 100, it'll replace all the instances of "money" and replace them with 100. As far as I know it is global and can be used all over your code? What is the point of using #define instead of a global variable? Don't they pretty much do the same thing? What's the difference between the two? The majority of Internet Explorer users don't understand the concept of a browsing application, or that there are options. They just see the big blue 'e' and think "Internet". The thought process usually does not get much deeper than that. Worms are the weirdest and nicest creatures, and will one day prove themselves to the world. I love the word Clicky ### #2Aardvajk  Crossbones+   -  Reputation: 5982 Like 5Likes Like Posted 05 July 2012 - 04:58 PM Global variables respect namespaces. For example: int x; struct s { char x; }; void f(s v) { v.x = 23; // okay } #define x 100 void g(s v) { v.x = 34; // gets converted to v.100 by preprocessor, not good } This silly example is just that, but try working with an API that makes heavy use of the preprocessor, for example Win32 API, and you soon find the problems. Another point of course is that the compiler can produce more meaningful error messages with proper variables. And any decent compiler will generate the same code for a const T t as a #define. There's no reason really to use the preprocessor for anything except including files, conditional compilation and weirdo stuff like __LINE__ and __FILE__ macros these days. Rule of thumb - if it is possible to do it without the preprocessor, do it the other way. ### #3ApochPiQ  Moderators   -  Reputation: 15757 Like 9Likes Like Posted 05 July 2012 - 04:59 PM A #define macro is more akin to a global constant than a global variable. For instance, this code won't compile: #define MONEY 100 int main () { MONEY = 50; } Whereas this is totally legit: int MONEY = 100; int main () { MONEY = 50; } To get similar to a define you want something like this: const int MONEY = 100; int main () { MONEY = 50; // oops, won't compile (but is more useful error than the #define version, try it!) } #define basically creates a text substitution in your code, like a programmable find and replace. It is handy when you want to do precisely that - replace one bit of text with another. It is dangerous for many reasons, some of which are covered here for example. In general, you should prefer constants to macros. Maker of Machinery ### #4Zoomulator  Members   -  Reputation: 273 Like 2Likes Like Posted 05 July 2012 - 05:01 PM I'm assuming you mean C++. A define statement replaces -any- instance of "money" in your code, no matter if it's a variable or function in any namespace or scope. You get no namespace and the possibility of name clashes is pretty much guaranteed unless you give it a very long and unique name like "MYPROJECT_MONEY". A const global can be overridden in a scope defining another instance of "money" and you can even put it in a specific namespace avoiding other files declaring the same name. Defines are uncontrollable and will find a way of leaking into places where you don't want it unless you give them very specific and ugly names. The windows.h header is a great example of this.. you better define LEAN_AND_MEAN before using it and hope all the defines really get undefined. They're only "global" in the sense that if you include a file with it, you'll include the define it contains as well. But the same goes for globally defined const values, so there's no difference there. ### #5littletray26  Members   -  Reputation: 267 Like 0Likes Like Posted 05 July 2012 - 05:31 PM So basically you're all saying that if I can use a global constant rather than a #define, I should? The majority of Internet Explorer users don't understand the concept of a browsing application, or that there are options. They just see the big blue 'e' and think "Internet". The thought process usually does not get much deeper than that. Worms are the weirdest and nicest creatures, and will one day prove themselves to the world. I love the word Clicky ### #6Zoomulator  Members   -  Reputation: 273 Like 1Likes Like Posted 05 July 2012 - 05:42 PM So basically you're all saying that if I can use a global constant rather than a #define, I should? You can only benefit by doing so. Const values can be defined in headers as well. If you need a global variable (god forbid) you'll have to use the extern keyword in the header and define it in an implementation file. ### #7davepermen  Members   -  Reputation: 1008 Like 1Likes Like Posted 06 July 2012 - 04:53 AM define is just a text replace, and does not care about the language. that can lead to some interesting abuses, and some interesting uses (the header include once thing). other than that, use language features, as they don't want to bite you in the back. like #define max did all the time for me... if you don't plan to ctrl-r replace-all-text, don't use #define. If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia My Page davepermen.net | My Music on Bandcamp and on Soundcloud ### #8krippy2k8  Members   -  Reputation: 646 Like 1Likes Like Posted 06 July 2012 - 03:48 PM Preprocessor macros can be useful for many things, but I would not use them for constant values because you lose type safety and conflicts are possible that could create really hard to find bugs. ### #9L. Spiro  Crossbones+   -  Reputation: 13600 Like 1Likes Like Posted 07 July 2012 - 08:40 AM The above advice is all perfectly valid and I don’t want my post to be misunderstood as a way to “get around” these faulty macro points—there is no real substitution for inlined functions etc. I just want to add some safety tips for those few times when you really do need a macro. • Naming macros such as “MONEY” is too generic. Due to the consequences of text replacement, you could end up with some very abstract and hard-to-trace errors if you use too-general names for your macros. The best way to combat this is to add a fake namespace to your macro. For example, in my engine there are 16 projects each with one namespace. lse, lss, lsm, lsg, etc. Within those projects, I replicate the namespaces within the macros. LSE_ELEMENTS( VAR ), LSG_OPENGL, LSG_DIRECTX11, etc. • The above not only reduces conflicts but also lets know you 2 things: #1: Is this macro from my own library?, and #2: Which library? LSG_ = L. Spiro Graphics library. Easy. • #undefine macros as soon as they are no longer needed. Header guards etc. should never be undefined, but within translation units (.cpp files) you might have some macros inside functions to make some tasks easier. An example in my engine is “#define LSG_HANDLE2CBUF( HANDLE )”, which, in DirectX 11 and DirectX 10, translates my custom handle into a custom cbuffer pointer, and is used only inside the CDirectX11CompiledShader and CDirectX10CompiledShader .CPP files. It is considered tidy to clean up after yourself, so #undef at the bottom of the .CPP files is a good idea. I have heard rumors of the possibility of macros “leaking” from one translation unit into another under some compilers so this is a good idea in general to avoid bugs. • __ (2 underscores) is a prefix reserved for the system/compiler. If you want to make absolutely sure your macros will never conflict with anything, you could add some underscores in front, but make sure it is not just 2 underscores. At work we use 3. L. Spiro Edited by L. Spiro, 07 July 2012 - 08:40 AM. It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011 I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013 I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.” L. Spiro Engine: http://lspiroengine.com L. Spiro Engine Forums: http://lspiroengine.com/forums ### #10sundersoft  Members   -  Reputation: 216 Like 0Likes Like Posted 07 July 2012 - 03:18 PM __ (2 underscores) is a prefix reserved for the system/compiler. If you want to make absolutely sure your macros will never conflict with anything, you could add some underscores in front, but make sure it is not just 2 underscores. At work we use 3. Anything starting with two underscores or one underscore and a capital letter is reserved for the compiler. So, anything starting with three underscores is reserved (since it also starts with two underscores) and any capitalized macro that starts with any underscores is reserved. Also, there can't be any sequence of two underscores in the identifier, even if it's not at the start. The compiler is not likely to define a macro that starts with three underscores but it is still allowed to do so. ### #11Cornstalks  Crossbones+   -  Reputation: 6994 Like 1Likes Like Posted 07 July 2012 - 06:29 PM so #undef at the bottom of the .CPP files is a good idea. I agree with most of #3 except for doing this, as I think it's going too far. I think if I saw it being done I'd say "WTF are they doing this for???" (and I think 99.99% of other programmers would say the same (what I'm trying to say is you'll just confuse other programmers for the most part with it)). I've never heard of a compiler needing this, and I think following it just on "rumor" is going waaay too far, asking for unnecessary mental overhead in developing. Additionally, if a compiler leaks macros/identifiers it shouldn't from one translation unit to another, it's worth reporting that bug to the compiler vendor, and expecting them to fix it. __ (2 underscores) is a prefix reserved for the system/compiler. If you want to make absolutely sure your macros will never conflict with anything, you could add some underscores in front, but make sure it is not just 2 underscores. At work we use 3. Anything starting with two underscores or one underscore and a capital letter is reserved for the compiler. So, anything starting with three underscores is reserved (since it also starts with two underscores) and any capitalized macro that starts with any underscores is reserved. Also, there can't be any sequence of two underscores in the identifier, even if it's not at the start. The compiler is not likely to define a macro that starts with three underscores but it is still allowed to do so. +1. In addition: "Each name that begins with an underscore is reserved to the implementation for use as a name in the global namespace." So macros simply should never start with an underscore, and no variable in the global namespace should either, even if it's followed by a lower case letter. Edited by Cornstalks, 07 July 2012 - 06:32 PM. [ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ] ### #12ApochPiQ  Moderators   -  Reputation: 15757 Like 0Likes Like Posted 07 July 2012 - 08:44 PM so #undef at the bottom of the .CPP files is a good idea. I agree with most of #3 except for doing this, as I think it's going too far. I think if I saw it being done I'd say "WTF are they doing this for???" (and I think 99.99% of other programmers would say the same (what I'm trying to say is you'll just confuse other programmers for the most part with it)). I've never heard of a compiler needing this, and I think following it just on "rumor" is going waaay too far, asking for unnecessary mental overhead in developing. Additionally, if a compiler leaks macros/identifiers it shouldn't from one translation unit to another, it's worth reporting that bug to the compiler vendor, and expecting them to fix it. Actually, #undefing your macros is still a good idea, in case someone gets antsy about build times and tries to deploy a unity build structure to your C/C++ project. Leaving macros defined all over the place can get incredibly painful in unity builds. Maker of Machinery ### #13Cornstalks  Crossbones+   -  Reputation: 6994 Like 0Likes Like Posted 07 July 2012 - 08:57 PM so #undef at the bottom of the .CPP files is a good idea. I agree with most of #3 except for doing this, as I think it's going too far. I think if I saw it being done I'd say "WTF are they doing this for???" (and I think 99.99% of other programmers would say the same (what I'm trying to say is you'll just confuse other programmers for the most part with it)). I've never heard of a compiler needing this, and I think following it just on "rumor" is going waaay too far, asking for unnecessary mental overhead in developing. Additionally, if a compiler leaks macros/identifiers it shouldn't from one translation unit to another, it's worth reporting that bug to the compiler vendor, and expecting them to fix it. Actually, #undefing your macros is still a good idea, in case someone gets antsy about build times and tries to deploy a unity build structure to your C/C++ project. Leaving macros defined all over the place can get incredibly painful in unity builds. Are you and I talking about the same thing (#undef at the bottom of the .CPP files)? Like I said, I agree with most of #3 (cleaning up your macros is a good thing). But I think cleaning them up at the end of a source file is a waste of time and space, and I can't see how that would decrease compile times at all. [ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ] ### #14ApochPiQ  Moderators   -  Reputation: 15757 Like 0Likes Like Posted 07 July 2012 - 09:05 PM The whole idea of a "unity build" is to #include several of your .cpp files into one "master" translation unit, which does in fact help with compile times in some cases. (My personal feeling is that unity builds are a bandage over terrible header and dependency management issues, but that's another debate.) Consider the following code: // Foo.cpp #define bool int bool FooFunction() { return 1; } // Bar.cpp bool BarFunction() { return true; } // Unity.cpp #include "foo.cpp" #include "bar.cpp" This is typical of how unity builds are implemented. Clearly, in this example, you can expect the #define to cause havoc. If you use unity builds, it's generally a very good idea to keep macros tightly scoped and #undef them as soon as possible. If that happens to be at the end of a .cpp file, so be it. Maker of Machinery ### #15Cornstalks  Crossbones+   -  Reputation: 6994 Like 0Likes Like Posted 07 July 2012 - 09:21 PM *snip* Ah, I see. I wasn't familiar with the term "unity build" (though I'm familiar with the concept; I'm more familiar with the term "amalgamation," thanks to SQLite) and had Unity (as in Unity3D) come to mind. Yes, I must agree then iff a unity build is being done. But L. Spiro was talking about macros spilling over from one translation unit to another, and in this normal workflow with multiple translation units I think it's pointless. Edited by Cornstalks, 07 July 2012 - 09:25 PM. [ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ] ### #16L. Spiro  Crossbones+   -  Reputation: 13600 Like 0Likes Like Posted 07 July 2012 - 09:39 PM I said to #undef them at the bottom but that was not to be taken too literally. I personally #undef them at the earliest possible moment, almost always being inside the same function in which they are created (even if I have a family of related functions in a row that end up redefining the same macro the same way), but I only wanted to mention the most common case where people “leak” macros, where you might #define some macro at the top of the .CPP, and then just let it go. Definitely, if you are defining something inside a function, #undef at the end of the function, not the end of the file. Also, having macros leak into other translation units (in a normal environment, not unity builds) is of course a special rare case, and is not the motivation for the #undef. That is only a secondary point, since it is unlikely you will ever even encounter that. L. Spiro It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011 I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013 I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.” L. Spiro Engine: http://lspiroengine.com L. Spiro Engine Forums: http://lspiroengine.com/forums ### #17Acotoz  Members   -  Reputation: 73 Like -4Likes Like Posted 07 July 2012 - 10:18 PM #define: constant global variable: variable ### #18Aardvajk  Crossbones+   -  Reputation: 5982 Like 0Likes Like Posted 08 July 2012 - 02:35 AM #define: constant global variable: variable *Sigh* const int i = 123; // constant int i = 123; // global variable #define i 123 // technically creates a literal, not a constant, everywhere it is replaced, with no meaningful name for the compiler to use Edited by Aardvajk, 08 July 2012 - 02:36 AM. ### #19Matt-D  Crossbones+   -  Reputation: 1453 Like 3Likes Like Posted 08 July 2012 - 02:40 PM In addition, it's worth noting that "const" has certain limitations; compare: [source lang="cpp"]struct C{ inline static int getval() {return 4;}};const int MAX=1024;const int MIN=C::getval();[/source] "MAX" is a constant integral expression (can be used as an array-size in array declarations, as a case label in switch statements, etc.), while "MIN" is not. See: http://www.devx.com/...tion/33327/1954 In C++11 there's a new declaration specifier, "constexpr", which allows you to solve this problem and, for example, do this: [source lang="cpp"]constexpr int getDefaultArraySize (int multiplier){ return 10 * multiplier;}int my_array[ getDefaultArraySize( 3 ) ];// perfectly legal, "getDefaultArraySize( 3 )" is a constant integral expression equal to 30 at compile-time[/source] See: http://www.cprogramm...-constexpr.html More: http://en.cppreferen...guage/constexpr http://cpptruths.blo...texpr-meta.html http://thenewcpp.wor...1/14/constexpr/ http://kaizer.se/wik...onstexpr_foldr/ Edited by Matt-D, 08 July 2012 - 02:45 PM. ### #20Acotoz  Members   -  Reputation: 73 Like -1Likes Like Posted 08 July 2012 - 06:30 PM #define: constant global variable: variable *Sigh* const int i = 123; // constant int i = 123; // global variable #define i 123 // technically creates a literal, not a constant, everywhere it is replaced, with no meaningful name for the compiler to use Alright, let's have another case here. what happens if I do this? #define CYCLE for (int i = 0; i < 25; i++) CYCLE will be defined by that little instruction, so that is not a literal, not a constant, not a variable. Good luck Old topic! Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic. PARTNERS
# Slips System of BCC We have been taught that there are 48 slip Systems in BCC. I need the miller indices of each slip plane and direction. Out of 48 I got miller indices of 12 systems of $$\{110\}\ \left<111\right>$$ and 12 systems of $$\{112\}\ \left<111\right>$$ , but I am not getting remaining 24 slip Systems of $$\{123\}\ \left<111\right>$$. I don't know how to find all 24 planes of $$\{123\}$$ family of planes.If anyone know how to do it, it will be helpful • This is simple enumeration of the possibilities. There are more because there are no repeats of indices that are equivalent. – Jon Custer Apr 16 '16 at 15:40 How to find all 24 planes of $$\{123\}$$ family of planes? For any of the $$\left<111\right>$$ directions, we have $$3!$$ i.e. $$6$$ $$\{123\}$$ slip planes. And we have $$4$$ $$\left<111\right>$$ slip directions for each plane. So in total $$6\times 4 = 24$$ slip systems.
Esther Klein (later Esther Szekeres) famously observed that five points in the plane with no three in line must contain the vertices of a convex quadrilateral. Similarly, nine points in the plane with no three in line must contain the vertices of a convex pentagon, and more generally for every $$n$$ there exists a larger number $$N$$ such that every $$N$$ points in the plane with no three in line must contain the vertices of a convex $$n$$-gon. The “happy ending problem” asks for a more precise relation between $$n$$ and $$N$$; it is still the subject of ongoing research, including a 2016 breakthrough by Andrew Suk, and is one of the topics covered in my about-to-be-published book Forbidden Configurations in Discrete Geometry. After this incident, the particulars of Klein’s life become difficult to separate from those of her eventual husband, George Szekeres (the one who proved the more general statement about $$n$$-gons). Neither started out as mathematicians. Because of the restrictions placed on Jews in Hungary in the late 1920s, only two students from Szekeres’s school could study science or mathematics at the university in Budapest; Márta Svéd took the mathematics position, so Klein necessarily studied physics instead. George studied chemical engineering, motivated by his family’s leather business. The two became refugees in Shanghai, and then after the end of World War II moved to Adelaide, where they shared an apartment with Márta Svéd and her family. George became a university mathematics lecturer and Esther raised their children while working as a mathematics tutor. In 1964, the family moved to Sydney. Esther became one of the first mathematicians at the newly-founded Macquarie University, where she is “fondly remembered as a gifted and inspiring tutor”; Macquarie gave her an honorary doctorate in 1990. She and her husband died within hours of each other, in 2005. Their joint Sydney Morning Herald obituary writes of Esther that “The mathematical love of her life was always geometry, in which she outshone George.” So with this as background, I was interested to learn more about some of her work in geometry. I found a paper, “Einfache Beweise zweier Dreieckssätze”, that she published in 1967 in the journal Elemente der Mathematik (in German, despite being a Hungarian in Australia). The title promises two theorems about triangles, both of which concern what happens when you inscribe a triangle $$XYZ$$ into a larger triangle $$ABC$$ (with $$X$$ opposite $$A$$, etc), dividing $$ABC$$ into four smaller triangles. Szekeres’s two theorems are that the area and perimeter of the central triangle $$XYZ$$ are at least equal to the minimum area or perimeter among the three surrounding triangles. It’s possible for $$XYZ$$ to be one of the smallest triangles, but this can only happen when $$XYZ$$ has equal area or perimeter to another of the four small triangles; it can never be the unique smallest one. For instance, when $$XYZ$$ is the medial triangle of $$ABC$$, all four smaller triangles are congruent to each other (and similar to the big triangle). The theorems themselves are not original to Szekeres, and her paper details their history of publication and solution in various mathematical problem columns. The perimeter inequality is also connected with a classical piece of geometry, Fagnano’s problem of finding an inscribed triangle $$XYZ$$ of the minimum possible perimeter. Stripped of some unnecessary detail, her proof of the area theorem is simple and elegant. Suppose that $$BX:BC$$ is the smallest of the six ratios into which the three points $$XYZ$$ divide the sides of the triangle; the other five cases are symmetric. Draw two additional lines, $$L$$ through $$X$$ parallel to $$AB$$, and $$M$$ parallel to $$XZ$$ but twice as far from $$B$$. Then it follows from the choice of $$BX:BC$$ as the smallest ratio that $$Y$$ lies on the segment of $$AC$$ on the far side of $$B$$ from $$L$$, and that $$M$$ separates $$X$$ from this segment. So if we place a point $$D$$ at the intersection of line $$M$$ and segment $$XY$$, we have $\operatorname{area}(XZB)=\operatorname{area}(XZD) \le\operatorname{area}(XYZ),$ where the left equality relates two triangles with the same base $$XZ$$ and equal heights, and the right inequality is containment of one triangle in the other. Most of Szekeres’s other publications were in mathematical problem columns, and included similar styles of reasoning applied to other geometry problems. Beyond geometry, the subjects of her research included arithmetic combinatorics and graph theory. Still, it is clear that it is in geometry, and in particular in the problem of convex polygons in point sets, where she made her most far-reaching contribution to mathematics. Her problem became foundational for two major fields, discrete geometry and Ramsey theory, and has led to a huge body of research by other mathematicians.
# Problem regarding concept of conservation of angular momentum [duplicate] This question already has an answer here: Two cylinders of radii r1, and r2 having moments of inertia I1, and I2, about their respective axes. Initially, the cylinders rotate about their axes with angular speeds w1, and w2 as shown in the figure. The cylinders are moved closer to touch each other keeping the axes parallel. The cylinders first slip over each other at the contact but the slipping finally ceases due to the friction between them. Find the angular speeds of the cylinders after the slipping ceases. I applied conservation of momentum here but I'm unable to obtain the right answer. Taking both cylinders as system, since only friction acts and these forces contribute to internal torques so with absence of external torques I conserved angular momentum of the system but the answer is incorrect. My question is why cant we conserve angular momentum in such a scenario? How is there external torque and what forces are providing the external torque? ## marked as duplicate by John Rennie, Emilio Pisanty, Kyle Kanos, Jon Custer, sammy gerbilFeb 20 '18 at 19:42 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. • Hi Ola. Please note that we don't answer homework or worked example type questions. Please see this Meta post on asking homework/exercise questions and this Meta post for "check my work" problems. – John Rennie Feb 20 '18 at 8:29 • Are the axles (axes of rotation) constrained to move only along a left/right line? If the answer is yes then there are are external torques acting on the system. – Farcher Feb 20 '18 at 9:23 • @Farcher Can you please explain how there are external torques acting on the system – Hola Feb 20 '18 at 9:42 • I just want to know the CONCEPTUAL ERROR of using conservation of angular momentum in this problem, nothing else. Please can you help me with why I can't conserve angular momentum / why is external torque acting on the system and which are these forces contributing to external torque, thats all. My question is only regarding THE CONCEPT. – Hola Feb 20 '18 at 9:48 ## 1 Answer If there are no forces acting on the axles on the disc then the force diagram is as shown in the left hand diagram with internal frictional force $F_{12}$ and $F_{21}$ acting on the discs. If the system was like this then as each of the discs has one fore on it the centre of mass of each of the discs will undergo a translational acceleration. To stop the centres of mass moving two external force $F_{1E}$ and $F_{2E}$ must act on the discs as shown in the right hand diagram. Together these two external forces form a external couple and hence an external torque acting on the system of two discs. • Ok but acc to diagram, where is any horizontal force for translation here? Wont the normal from the ground = weight of cylinder + friction for first one and for second, normal + friction = weight? – Hola Feb 20 '18 at 11:12 • @Ola I have neglected gravitational forces as they would just add an extra unnecessary complication to the problem or you could think of the discs as being horizontal and you are looking at a plan of the system. The translational motion in the left hand diagram would be up and down. – Farcher Feb 20 '18 at 11:37 • 'or you could think of the discs as being horizontal and you are looking at a plan of the system. The translational motion in the left hand diagram would be up and down' I didnt understand can you please explain again? – Hola Feb 20 '18 at 14:00 • @Ola The up and down refers to the directions as seen on my diagram. The reference to horizontal discs was there to remove the necessity to consider gravitational attraction. – Farcher Feb 20 '18 at 14:19 • The discs are already horizontal right. How does that remove the necessity to consider gravity, I'm unable to follow. Horizontal wrt what? – Hola Feb 20 '18 at 14:47
# Till what altitude above earth sounds can be heard? In the new video of SpaceX SAOCOM 1B launch and landing RCS thrusters and other sounds can be heard during boostback burn. Falcon 9 boostback happens at nearly 100 Km altitude. Does the air density at that altitude allow normal propagation of sound or was the footage audio enhanced? • Are you sure those sounds are picked up through the air and not for example through the rocket body? Sep 10 '20 at 6:38 • Ditto what @JörgWMittag wrote. I suspect that the sounds in the linked video is from vibration sensors rather than microphones just outside the rocket. Also keep in mind that this is a massively sped-up video. I suspect the data from the vibration sensors has been downsampled to match the downsampling of the camera data (downsampling is an easy way to speed up a video) and frequency shifted to human hearing range. A generic name for this kind of approach to generating sound from scientific data is sonification. Sep 10 '20 at 10:33 • My above comment is not an answer to the question raised in the title. It is also not a question to the implied question in the body of the OP, which is What is the sound in the linked video? because I'm making assumptions. The body implicitly assumes the sound in the video is sound transmitted through the air to a microphone, which is not a good assumption. Sep 10 '20 at 10:41 • In essence, a microphone is a vibration sensor, only it´s optimized for Air vibration. Also any microphone will pickup vibrations on its encasement, that´s why sound studios have these "Spiders" to affix them. I suspect the sounds in the video are mostly transmitted though the rocket body. Trying to pickup sound "over the air" should result in crazy levels of wind noises. Sep 10 '20 at 11:55 • Some sounds can certainly propagate to the ground from high altitude. For example shock waves from supersonic military jets flying above civil aviation flight levels can be audible on the ground with no equipment except a pair of ears, so long as the environment is quiet enough, and the hearer knows the likely cause of the quiet "double thump" sound he/she heard. (I live in a location where this happens quite often). But that is probably irrelevant to the OP's question. Sep 10 '20 at 19:11 The title of the question asks Till what altitude above earth sounds can be heard? @uhoh gave a detailed answer to that question. I'll instead speculatively answer an implied question in the body of the OP, What is the sound in the linked video? The OP implicitly assumes the sounds in the video were transmitted through the air to a microphone. (Many commenters at a Reddit thread make a similar assumption.) A couple of things to note from the linked video: • The speedup is not constant. The linked video is 2 minutes and 19 seconds long. Stage 1 landed at about 8 minutes and 7 seconds into the flight. If the speedup was constant, that would have meant MECO should have occurred about 40 seconds into the video. MECO instead occurs about 12 seconds into the video. • The sound remains more or less the same for the first 12 seconds of the linked video. There is little change at 5 seconds into the video, which is when the vehicle went supersonic. Back when the Concorde flew, passengers oftentimes remarked how quiet the aircraft suddenly became on going supersonic. While the passengers could still feel the rumble of the jet's engines, they could no longer hear the massive sounds emitted by the engines' exhaust. The sound of the exhaust would only have been audible behind the aircraft. The aircraft left the sound behind. That the sound did not suddenly drop 5 seconds into the video suggests that the sounds were not recorded by microphones in the air. They sound instead most likely comes from vibration sensors such as accelerometers designed to be sensitive to vibrations or microphones "listening" to the launch vehicle itself. In addition to sounding cool, the recorded vibration data would be very useful to SpaceX engineers. Engineers perform stability and controllability analyses with regard to a launch vehicle's control system with regard to the control system itself and with regard to how flexing of the vehicle interacts with the control system. These analyses also need to address sloshing of liquids in the tanks for those vehicles that use liquid propellants. The key problems are that excessive flexing or sloshing can make the control system behave in a very bad way if there are overlaps between the flex, slosh, and control frequencies, and that the control system can similarly excite excessive flexing or sloshing in a very bad way if such overlaps occur. Flex and slosh can excite one another if their frequency responses overlap. Vehicle flex can be reduced / changed in frequency by adding stiffeners, and tank slosh can be reduced / changed in frequency by adding baffles to the tanks. But if these are unneeded, the stiffeners and baffles are just excess weight that reduces payload mass. Engineers use multiple models to estimate flex and slosh modes, but in the end, these are just models. "All models are wrong, but some are useful." Having actual measurements of vehicle vibration during launch would be very beneficial toward validating and refining these models. • Accelerometers designed to measure acceleration typically involve a low pass filter that removes vibration data. Simply replacing the low pass filter with a high pass filter to remove steady-state / low frequency accelerations (and sampling at a higher rate) effectively changes the accelerometer into a very nice vibration sensor. Sep 10 '20 at 12:52 • Is additional hardware necessary? I presume that vibrations in the spacecraft will be transmitted to the microphone whether they excite the ambient air or not. The quality may be lower than air sounds, but unless the microphone was intentionally mechanically isolated, it should still operate as a general vibration detector, no? Sep 10 '20 at 18:48 • What microphone, @LawnmowerMan? Sep 10 '20 at 20:24 • Presumably one attached to the video camera, assuming a COTS piece. Sep 10 '20 at 22:59 tl;dr: There's certainly some propagation of sound waves possible at 100 km altitude. With a density a million times lower than at the surface the mean free path of individual molecules will approach a millimeter, so ultrasonics might be impacted, but for Human or GoPro frequencies it will be much quieter, but still there. Till what altitude above earth sounds can be heard? There is no single altitude at which sounds can suddenly not be heard. There is a steady drop-off in sound pressure with atmospheric pressure, and the drop-off accelerates when the mean free path approaches the wavelength of a particular sound, but these are smooth transitions. Does the air density at that altitude allow normal propagation of sound or was the footage audio enhanced? I'm not sure what "normal propagation" means. The volume of transmitted sound steadily decreases as density decreases, the same way that it gets even louder under water (+61 dB!) But at some point when the mean free path (only several microns for a standard atmosphere) begins to approach the wavelength of the sound, then the drop off changes to exponential as propagation becomes evanescent. This is explained in great detail in @ honeste_vivere's excellent answer to At what altitude would the air be too thin to carry a sound wave? I'll quote the last bit here: The model only went to 100 km but even so, our source would become difficult to hear if we moved a little more than ~100 m from it. Given that the density decreases exponentially with an e-folding distance of only ~8.5 km (pressure does so similarly as well), if we extrapolate our estimates for $$L_{i,src}\left( h \right)$$ then the value drops to ~10 dB by ~177 km. So by ~200 km a human probably could not hear a source ~1 m away that produced a 100 dB, 1000 Hz intensity level at sea level. From the last one:
or Subelement E8 SIGNALS AND EMISSIONS Section E8C Digital signals: digital communication modes; information rate vs bandwidth; error correction How is Forward Error Correction implemented? • By the receiving station repeating each block of three data characters • By transmitting a special algorithm to the receiving station along with the data characters • By transmitting extra data that may be used to detect and correct transmission errors • By varying the frequency shift of the transmitted signal according to a predefined algorithm In a Forward Error Correction system, each character is sent with additional data. The receiver uses the additional data to verify the sent data, and potentially correct transmitted errors. The error correction is sent forward with the data, rather than depending on retransmission. Note the answers: An algorithm is used to encode and decode the data, but the algorithm itself is not sent, as the receiver should already have this. Last edited by kwhubby. Register to edit Tags: none What is the definition of symbol rate in a digital transmission? • The number of control characters in a message packet • The duration of each bit in a message sent over the air • The rate at which the waveform of a transmitted signal changes to convey information • The number of characters carried per second by the station-to-station link In digital communications, symbol rate, also known as baud rate and modulation rate, is the number of symbol changes, waveform changes, or signaling events, across the transmission medium per time unit using a digitally modulated signal or a line code. The symbol rate is measured in baud (Bd) or symbols per second. akguido Hint: The only and correct answer has the word rate in it. -KE0IPR Last edited by ke0ipr. Register to edit Tags: none When performing phase shift keying, why is it advantageous to shift phase precisely at the zero crossing of the RF carrier? • This results in the least possible transmitted bandwidth for the particular mode • It is easier to demodulate with a conventional, non-synchronous detector • It improves carrier suppression • All of these choices are correct When the phase is changed precisely at 0, and the modulation scheme is BPSK, there is effectively 0 energy involved in the transition from one state to another when performed at a zero crossing. At any other time, the signal level would need to instantaneously jump to match the new phase, but at a zero crossing, the levels of two opposite phases are equal. When a sharp jump happens, this creates an edge that results in increased bandwidth. Last edited by kc9zyz. Register to edit Tags: none What technique is used to minimize the bandwidth requirements of a PSK31 signal? • Zero-sum character encoding • Reed-Solomon character encoding • Use of sinusoidal data pulses • Use of trapezoidal data pulses Some PSK stuff: PSK31’s name is derived from the modulation type – phase-shift keying – and data rate, which is actually 31.25 bauds. PSK uses the 128-character code and the full 256 ANSI (American National Standards Institute). PSK's emission designator for PSK31 is J2B PSK63 is a faster variation. PSK uses a variable-length code called Varicode. The PSK bandwidth is minimized by the **special sinusoidal shaping of the transmitted data symbols in the form of pulses**. -KE0IPR Last edited by ke0ipr. Register to edit Tags: none What is the necessary bandwidth of a 13-WPM international Morse code transmission? • Approximately 13 Hz • Approximately 26 Hz • Approximately 52 Hz • Approximately 104 Hz Given: CW Words Per Minute (WPMCW) = 13 What is the necessary bandwidth (BW) for this transmission? For a CW transmission, remember: Bandwidth (BWCW) ≈ 4 * WPMCW So in this case: BWCW ≈ 4 Hz/WPM * 13 WPM BWCW52 Hz Last edited by qubit. Register to edit Tags: none What is the necessary bandwidth of a 170-hertz shift, 300-baud ASCII transmission? • 0.1 Hz • 0.3 kHz • 0.5 kHz • 1.0 kHz The necessary bandwidth of a 170-hertz shift, 300-baud ASCII transmission is 0.5 kHz. The ARRL Extra Class License manual states: bandwidth (BW) is: $\text{BW}_{(\text{Hertz})} = (K \times \text{shift}) + B$ where: • $K = 1.2$ (an estimated empirical factor) • $B$ = baud, or symbol rate. Therefore, \begin{align} \text{BW} &= (1.2 \times 170\text{ Hz}) + 300\text{ baud}\\ &= 504\text{ Hz}\\ &\approx 0.5\text{ kHz} \end{align} Last edited by wileyj2956. Register to edit Tags: none What is the necessary bandwidth of a 4800-Hz frequency shift, 9600-baud ASCII FM transmission? • 15.36 kHz • 9.6 kHz • 4.8 kHz • 5.76 kHz Given: Frequency Shift = 4800 Hz Transmission Rate = 9600 baud What is the necessary bandwidth (BW)? Remember: Keying ($K$) should be 1.2 for most amateur radio purposes \begin{align} \text{BW} &= (K \cdot \text{shift}) + \text{baud rate}\\ &=( 1.2 \cdot 4800\text{ Hz} ) + 9600\\ &= 15,360\text{ Hz}\\ &= 15.36\text{ kHz} \end{align} Last edited by wileyj2956. Register to edit Tags: none How does ARQ accomplish error correction? • Special binary codes provide automatic correction • Special polynomial codes provide automatic correction • If errors are detected, redundant data is substituted • If errors are detected, a retransmission is requested In automatic repeat-request (ARQ) systems the transmitter sends the data and also an error checking code. The receiver checks for errors and request retransmission of erroneous data. Last edited by wileyj2956. Register to edit Tags: none Which is the name of a digital code where each preceding or following character changes by only one bit? • Binary Coded Decimal Code • Extended Binary Coded Decimal Interchange Code • Excess 3 code • Gray code For the values 0,1,2,3,4,5,6,7 Normal binary encoding looks like this: 000, 001, 010, 011, 100, 101, 110, 111 In some cases, all 3 values change between adjacent values. For example, from 3 to 4 the sequence goes from 011 to 100. An example gray code is: 000, 001, 011, 010, 110, 111, 101, 100 In this case, there is still a unique encoding for each possible value, but only one bit changes between adjacent values. Gray codes are useful components in implementing hardware and in error correcting codes. Last edited by flyguystudy. Register to edit Tags: none What is an advantage of Gray code in digital communications where symbols are transmitted as multiple bits • It increases security • It has more possible states than simple binary • It has more resolution than simple binary • It facilitates error detection If a symbol contains two bits, and normal binary values are used, such that for example 100Hz represents 00, 150Hz represents 01, 200Hz represents 10, and 250Hz represents 11... misdetecting 200Hz as the adjacent value 150Hz because of noise causes both output bits to flip (from 10 to 01). This makes things relatively hard on an error correction/error detection system. If a gray code is used, where only one bit changes for adjacent values-- for example, 100Hz represents 00, 150Hz represents 01, 200Hz represents 11, and 250Hz represents 10-- then a similar error only flips one bit. Last edited by icee. Register to edit Tags: none What is the relationship between symbol rate and baud? • They are the same • Baud is twice the symbol rate • Symbol rate is only used for packet-based modes • Baud is only used for RTTY Baud is another name for symbol rate. Symbol rate is the number of different transmission units sent per second over a link. If each symbol can contain two different values, it is equivalent to bits per second. It is possible for a symbol to contain more different values. For example, if it contains four different possible values, each symbol contains 2 bits of information, and the number of bits per second is double the buad rate. Last edited by icee. Register to edit Tags: none
203 views Consider the following two types of elections to determine which of two parties $A$ and $B$ forms the next government in the 2014 Indian elections. Assume for simplicity an Indian population of size $545545 (=545 * 1001)$. There are only two parties $A$ and $B$ and every citizen votes. TYPE C: The country is divided into $545$ constituencies and each constituency has $1001$ voters. Elections are held for each constituency and a party is said to win a constituency if it receive a majority of the vote in that constituency. The party that wins the most constituencies forms the next government. TYPE P: There are no constituencies in this model. Elections are held throughout the country and the party that wins the most votes (among $545545$ voters forms the government. Which of the following is true? 1. If the party forms the govt. by election TYPE C winning at least two-third of the constituencies, then it will also forms the govt. by election TYPE P. 2. If a party forms govt. by election TYPE C, then it will also form the govt. by election TYPE P. 3. If a party forms govt. by election TYPE P, then it will also form the govt. by election TYPE C. 4. All of the above 5. None of the above. | 203 views Minimum condition for Type C winning: if any party wins $273$ constituencies out of $545$ and with vote $501$ out of $1001$ for each. Minimum condition for Type P winning: if any party wins $272773$ votes out of $545545$ Option A: • Type C : Let party $A$ wins $2/3$ of  constituencies i.e., $364$ wins by $1$ vote and $181$ loss by all vote $364\times 501 + 181\times 0 = 182364$ • Type P : If $A$ wins it should have more than half of vote i.e., $272773$ $\qquad$ So, it is FALSE since $A$ got only $182364$ votes Option B : Similar to option A Option C: • Type P: Let $A$ win by $272773$ votes • Type C: If $A$ wins $272$ constituencies with $1001$ votes and loss $272$ with $1001$ and $1$ constituency with $1000$ votes then, total received votes $=272\times 1001+272\times 0+1\times 1= 272773$. But $A$ lose the  election since $A$ won only $272$ constituencies out of $545$ So option E must be correct. by Boss (16.5k points) edited by
<< April 2018 >> ## Monday, April 2, 2018 ### Agostino Capponi - Columbia University Seminar: Other Related Seminars | April 2 | 3:30-5 p.m. | 3108 Etcheverry Hall Agostino Capponi, Columbia University Agostino Capponi joined Columbia University's IEOR Department in August 2014, where he is also a member of the Institute for Data Science and Engineering. His main research interests are in the area of networks, with a special focus on systemic risk, contagion, and control. In the context of financial networks, the outcome of his research contributes to a better understanding of risk...   More > ## Tuesday, April 3, 2018 ### Solving composite minimization problems arising in statistics and engineering, with applications to phase retrieval Seminar: Neyman Seminar: Berkeley-Stanford Joint Colloquium at Berkeley | April 3 | 4-5 p.m. | 60 Evans Hall John C. Duchi, Stanford University Department of Statistics We consider minimization of stochastic functionals that are compositions of a (potentially) non-smooth convex function h and smooth function c. We develop two stochastic methods--a stochastic prox-linear algorithm and a stochastic (generalized) sub- gradient procedure--and prove that, under mild technical conditions, each converges to stationary points of the stochastic objective. Additionally,...   More > ## Wednesday, April 4, 2018 ### Poisson-Dirichlet interval partition evolutions related to the Aldous diffusion Seminar: Probability Seminar | April 4 | 3:10-4 p.m. | 1011 Evans Hall Matthias Winkel, University of Oxford Department of Statistics We construct diffusions on a space of interval partitions of [0,1] that are stationary with Poisson-Dirichlet laws. The processes of ranked interval lengths of our partitions are diffusions introduced by Ethier and Kurtz (1981) and Petrov (2009). Specifically, we decorate the jumps of a spectrally positive stable process with independent squared Bessel excursions. In the spirit of Ray-Knight...   More > ### Center for Computational Biology Seminar: Dr. Alexis Battle, Assistant Professor, Biomedical Engineering, Johns Hopkins University Seminar: Other Related Seminars | April 4 | 4:30-5:30 p.m. | 125 Li Ka Shing Center Center for Computational Biology Title: Modeling the complex impact of genetic variation on gene expression Abstract: Non-coding and regulatory genetic variation plays a significant role in human health, but the impact of regulatory variants has proven difficult to predict from sequence alone. Further, genetic effects can be modulated by context, such as cell type and environmental factors. We have developed machine learning...   More > ## Thursday, April 5, 2018 ### Seminar 217, Risk Management: The Securitization and Solicited Refinancing Channel of Monetary Policy Seminar: Risk Seminar | April 5 | 12:30-2 p.m. | 1011 Evans Hall Speaker: Rupal Kamdar, UC Berkeley I document the “securitization and solicited refinancing channel,” a novel transmission mechanism of monetary policy and its heterogenous regional effects. The mechanism predicts that mortgage lenders who sell their originations to Government Sponsored Enterprises or into securitizations no longer hold the loan’s prepayment risk, and when rates drop, these lenders are more likely to signal to...   More > ### Statistical inference of properties of distributions: theory, algorithms, and applications Seminar: Other Related Seminars | April 5 | 4-5 p.m. | Soda Hall, HP Auditorium, 306 Soda Hall Jiantao Jiao, Ph.D. Candidate, Stanford University Modern data science applications frequently involve pipelines of exploratory analysis requiring accurate inference of a property of the distribution governing the data. This talk will focus on recent progress in the performance, structure, and deployment of near-minimax-optimal estimators for a large variety of properties in high-dimensional and nonparametric settings. ## Wednesday, April 11, 2018 ### A unifying framework for constructing MCMC algorithms from irreversible diffusion processes Seminar: Probability Seminar | April 11 | 3:10-4 p.m. | 1011 Evans Hall Yian Ma, U. C. Berkeley Department of Statistics In this talk, I will first present a general recipe for constructing MCMC algorithms from diffusion processes with the desired stationary distributions. The recipe translates the task of finding valid continuous Markov processes into one of choosing two matrices. Importantly, any diffusion process with the target stationary distribution (given an integrability condition) can be represented in our...   More > ### Using visualisation to understand R theory Seminar | April 11 | 4-5 p.m. | 10 Evans Hall | Note change in location Department of Statistics In this talk, I will introduce the lobstr package which provides tools to visualise R's data structures on the command line. I'll show three R functions ast(), cst(), and ref() and use them to discuss three important components of R's theory: 1. All R code possesses a tree like structure, known as the abstract syntax tree. 2. R's lazy evaluation introduces a tree-like structure into the...   More > ## Thursday, April 12, 2018 ### Teaching and Research Resource Fair Special Event: News and Events | April 12 | 11 a.m.-1 p.m. | Dwinelle Hall, Room 117, Level D Connect with dozens of campus service providers to get new ideas, find support, and learn about resources and services. Enjoy focused conversations and technology demos while meeting other instructors and researchers. ID required. by April 10. ### Seminar 217, Risk Management: The Long-lasting Effects of Propaganda on Financial Risk-Taking Seminar: Risk Seminar | April 12 | 12:30-2 p.m. | 1011 Evans Hall Speaker: Ulrike Malmendier, UC Berkeley We argue that emotional coloring of experiences via political propaganda has long-term effects on risk taking. We show that living in an anti-capitalist system reduces individuals' willingness to invest in the stock market even decades later. ## Friday, April 13, 2018 ### Dissertation talk: Detection limits and fluctuation results in some spiked random matrix models Seminar: Other Related Seminars | April 13 | 2:30-3:30 p.m. | 521 Cory Hall Ahmed El Alaoui, EECS In this talk, we will investigate the fundamental limits of detecting the presence of a structured low-rank signal buried inside a large noise matrix. This setting serves among other things as a simple model for principal component analysis: Given a set of data points in Euclidean space, find out whether there exists a distinguished direction (a "spike") along which these data points align. It...   More > ## Wednesday, April 18, 2018 ### Rigid structures in the universal enveloping traffic space Seminar: Probability Seminar | April 18 | 3:10-4 p.m. | 1011 Evans Hall Benson Au, U.C. Berkeley Department of Statistics For a tracial $*$-probability space $(\mathcal{A}, \varphi)$, C\'{e}bron, Dahlqvist, and Male constructed an enveloping traffic space $(\mathcal{G}(\mathcal{A}), \tau_\varphi)$ that extends the trace $\varphi$. The CDM construction provides a universal object that allows one to appeal to the traffic probability framework in generic situations, prioritizing an understanding of its structure. We...   More > ### Global Testing Against Sparse Alternatives under Ising Models Seminar: Neyman Seminar | April 18 | 4-5 p.m. | 1011 Evans Hall Rajarshi Mukherjee, UC Berkeley Department of Statistics We study the effect of dependence on detecting sparse signals. In particular, we focus on global testing against sparse alternatives for the magnetizations of an Ising model and establish how the interplay between the strength and sparsity of a signal determines its detectability under various notions of dependence (i.e. the coupling constant of the Ising model). The impact of dependence can be...   More > ### Center for Computational Biology Seminar: Dr. Long Cai, Research Professor, Department of Biology and Biological Engineering, Caltech Seminar: Other Related Seminars | April 18 | 4:30-5:30 p.m. | 125 Li Ka Shing Center Center for Computational Biology Spatial genomics and single cell lineage dynamics by seqFISH and MEMOIR ## Thursday, April 19, 2018 ### Seminar 217, Risk Management: Could Probability of Informed Trading Predict Market Volatility? Seminar: Risk Seminar | April 19 | 12:30-2 p.m. | 1011 Evans Hall Speaker: John Wu, LBL Significant market events such as Flash Crash of 2010 undermine the trust of the capital market system. An ability to forecast such events would give market participants and regulators time to react to such events and mitigate their impact. For this reason, there have been a number of attempts to develop early warning indicators. In this work, we explore one such indicator named Probability of...   More > ## Tuesday, April 24, 2018 ### DataEDGE Conference Conference/Symposium: Other Related Seminars | April 24 | 9 a.m.-6 p.m. | Sutardja Dai Hall, Banatao Auditorium Information, School of Are you and your organization taking advantage of the opportunities created by today’s flood of new data? Do you know about the latest tools for storing, analyzing, and visualizing data? Have you considered the privacy implications of working with data? How do leaders balance intuition and data in making important decisions? The UC Berkeley School of Information’s DataEDGE conference will...   More > ## Wednesday, April 25, 2018 ### The weak Pinsker property Seminar: Probability Seminar | April 25 | 3:10-4 p.m. | 1011 Evans Hall Tim Austin, UCLA Department of Statistics This talk is about the structure theory of measure-preserving systems: transformations of a finite measure space that preserve the measure. Many important examples arise from stationary processes in probability, and simplest among these are the i.i.d. processes. In ergodic theory, i.i.d. processes are called Bernoulli shifts. Some of the main results of ergodic theory concern an invariant of...   More > ## Thursday, April 26, 2018 ### Seminar 217, Risk Management: Statistical Arbitrage Seminar | April 26 | 12:30-2 p.m. | 1011 Evans Hall Speaker: George Papanicolaou, Stanford Statistical arbitrage is a collection of trading algorithms that are widely used today but can have very uneven performance, depending on their detailed implementation. I will introduce these methods and explain how  the data used as trading signals are prepared so that they depend weakly on market dynamics but have adequate statistical regularity. The trading algorithm itself will be presented...   More > ### Ribosomes, traffic jams, and phase transitions Seminar: Berkeley-Davis Joint Colloquium at Davis | April 26 | 4-5 p.m. | Mathematical Sciences Building, UC Davis, 1147 (Colloquium Room) 399 Crocker Lane, Davis, CA 95616 Yun Song, UC Berkeley Department of Statistics Since its introduction, the totally asymmetric simple exclusion process (TASEP) has been widely used to model transport phenomena in non-equilibrium interacting particle systems. Many mathematicians and physicists have studied this stochastic process under various conditions motivated by a broad range of applications. In biology, for example, the TASEP has been used to describe the dynamics of...   More > ## Monday, April 30, 2018 ### Avraham Shtub -Technion Seminar: Other Related Seminars | April 30 | 3:30-5 p.m. | 3108 Etcheverry Hall Avraham Shtub, Technion Professor Avraham Shtub holds the Stephen and Sharon Seiden Chair in Project Management. He was a faculty member of the department of Industrial Engineering at Tel Aviv University from 1984 to 1998 where he also served as a chairman of the department (1993-1996)...   More >
# 7-1. Arise, Shine; Arise, shine; for thy light is come! “Arise, shine; for thy light is come, and the glory of the LORD is risen          upon thee.” (Isa 60:1) When Jesus came to the earth for the first time as the son of man, Peter, James and John who were the foremost disciples were all networkers using fishing nets, but in the end times, the networkers of God with the use of the Internet are urgently needed. (Mt 4:18-22) <Mt 4:18-22>                                                                                                                                          18 And Jesus, walking by the sea of Galilee, saw two brethren, Simon called Peter, and Andrew his brother, casting a net into the sea: for they were fishers.                                        19 And he saith unto them, Follow me, and I will make you fishers of men.                              20 And they straightway left their nets, and followed him.                                                            21 And going on from thence, he saw other two brethren, James the son of Zebedee, and John his brother, in a ship with Zebedee their father, mending their nets; and he called them.                                                                                                                                                       22 And they immediately left the ship and their father, and followed him. To be a successful networker of God, all you need is to put God as the priority in your life and making 12 partnering disciples who are the brides of Jesus within 3 years. Also, they together begin as the messengers of Jesus with partnering disciples and then become new readers who read the same words of prophecy along the mentor through online or off line. This is the achieving of the hope of the eternal life, which has been promised by the genuine God since the beginning of the world. Depending on different times, he presented these words with evangelism, which is the process of building of the wall(Rev 21:17) commanded by our Savior God unto us in the end times and also the extremely important calling in which, as the kingdom and the priests of God, the journey of salvation is fulfilled with the partnering disciples. <Tit1:1-3>                                                                                                                                                 1 Paul, a servant of God, and an apostle of Jesus Christ, according to the faith of God’s elect, and the acknowledging of the truth which is after godliness;                                            2 In hope of eternal life, which God, that cannot lie, promised before the world began;       3  But hath in due times manifested his word through preaching, which is committed unto me according to the commandment of God our Saviour; <Lk 23:39-43>                                                                                                                                          39 And one of the malefactors which were hanged railed on him, saying, If thou be Christ, save thyself and us.                                                                                                                              40 But the other answering rebuked him, saying, Dost not thou fear God, seeing thou art in the same condemnation?                                                                                                                 41 And we indeed justly; for we receive the due reward of our deeds: but this man hath done nothing amiss.                                                                                                                       42 And he said unto Jesus, Lord, remember me when thou comest into thy kingdom. 43 And Jesus said unto him, Verily I say unto thee, To day shalt thou be with me in paradise When somebody die in the Lord from now on in the process of this evangelism, he or she will be saved and die in the Lord transcend death and reach God’s paradise, “the Third Heaven” in holy body without seeing and tasting of death as follows. He or she experience an instantaneous transference of dimensions through which the tabernacle of body, which is a protective shield in this life, is stripped away and instead takes on the Holy body, symbolizing an eternal dwelling place, from the Third Heaven and proceeds to the step of the everlasting life. He or she will never see death and taste of death. (Jn 8:51-52/Jn11:25-26) This is the special blessing for the brides of Jesus in the end times. Thus, the Holy Spirit has been given as an assurance in order for us to achieve this Holy body of eternal life (2 Co 5:5). <2 Co 5:1-5>                                                                                                                                        1 For we know that if our earthly house of this tabernacle were dissolved, we have a building of God, an house not made with hands, eternal in the heavens.                                  2 For in this we groan, earnestly desiring to be clothed upon with our house which is from heaven:                                                                                                                                                      3 If so be that being clothed we shall not be found naked.                                                            4 For we that are in this tabernacle do groan, being burdened: not for that we would be unclothed, but clothed upon, that mortality might be swallowed up of life.                              5 Now he that hath wrought us for the selfsame thing is God, who also hath given unto us the earnest of the Spirit. <Jn 11:25-26>                                                                                                                                       25 Jesus said unto her, I am the resurrection, and the life: he that believeth in me, though he were dead, yet shall he live:                                                                                                           26 And whosoever liveth and believeth in me shall never die. Believest thou this? <Jn 8:51-54>                                                                                                                                            51 Verily, verily, I say unto you, If a man keep my saying, he shall never see death.              52 Then said the Jews unto him, Now we know that thou hast a devil. Abraham is dead, and the prophets; and thou sayest, If a man keep my saying, he shall never taste of death. 53 Art thou greater than our father Abraham, which is dead? and the prophets are dead: whom makest thou thyself?                                                                                                                54 Jesus answered, If I honour myself, my honour is nothing: it is my Father that honoureth me; of whom ye say, that he is your God: <Php 3:20-21>                                                                                                                                           20 For our conversation is in heaven; from whence also we look for the Saviour, the Lord Jesus Christ:                                                                                                                                 21 Who shall change our vile body, that it may be fashioned like unto his glorious body, according to the working whereby he is able even to subdue all things unto himself <2 Co 12:1-4>                                                                                                                                           1 It is not expedient for me doubtless to glory. I will come to visions and revelations of the Lord.                                                                                                                                                          2 I knew a man in Christ above fourteen years ago, (whether in the body, I cannot tell; or whether out of the body, I cannot tell: God knoweth;) such an one caught up to the third heaven.                                                                                                                                                      3 And I knew such a man, (whether in the body, or out of the body, I cannot tell: God knoweth;)                                                                                                                                              4 How that he was caught up into paradise, and heard unspeakable words, which it is not lawful for a man to utter. <1 Co 15:16-20>                                                                                                                                      16 For if the dead rise not, then is not Christ raised:                                                                        17 And if Christ be not raised, your faith is vain; ye are yet in your sins.                                    18 Then they also which are fallen asleep in Christ are perished.                                                19 If in this life only we have hope in Christ, we are of all men most miserable.            20 But now is Christ risen from the dead, and become the firstfruits of them that slept. The Resurrection of Jesus shown through the first fruit of the holy body “And declared to be the Son of God with power, according to the spirit of holiness, by the resurrection from the dead:<Ro 1:4> People Related to Resurrection How He Appeared References 1 To Mary Magdalene He did not recognize her initially Jn 20:10-18 Mk 16:9-10 2 Women who visited the tomb Worshipped as they hold onto Jesus’ feet Mk 28:8-10 3 Two disciples heading to Emmaus He initially appeared   in another form. They recognized him as he broke the bread and blessed it but he disappeared in an instant. Lk 24:13-35 Mk 16:12-13 4 To Peter He appeared to Simon Peter Lk 24:34 5 11 disciples closing the door He suddenly appeared even when the door was closed Jn 20:19-27 6 Thomas and other disciples Thomas confessed that he is indeed my God after putting fingers into the print of the nails and thrusting a hand into his side Jn 20:26-31 7 7 disciples fishing They initially did not recognize but soon after, he ate a meal together and told Peter to feed his sheep as well Jn 21:1-14 8 11 disciples at the Mount Olive His last command to make disciples of all peoples and baptize in the name of the Father, the Son and the Holy Spirit Mt 28:16-20 9 About 500 brethren He appeared to all of them at once 1 Co 15:6 10 James and all apostles He appeared to James first and then to all of the apostles 1 Co 15:7 11 People of Galilee Lifted up to the heaven – prophesied to come back again exactly the way in which he went up to the heaven Ac 1:10-11 Lk 24:50-53 <Ac 17:31-34>                                                                                                                                          31 Because he hath appointed a day, in the which he will judge the world in righteousness by that man whom he hath ordained; whereof he hath given assurance unto all men, in that he hath raised him from the dead.                                                                                             32 And when they heard of the resurrection of the dead, some mocked: and others said, We will hear thee again of this matter.                                                                                            33 So Paul departed from among them.                                                                                          34 Howbeit certain men clave unto him, and believed: among the which was Dionysius the Areopagite, and a woman named Damaris, and others with them. <Mk 16:9>                                                                                                                                        9 Now when Jesus was risen early the first day of the week, he appeared first to Mary Magdalene, out of whom he had cast seven devils. <Mk 16:12-13>                                                                                                                                          12 After that he appeared in another form unto two of them, as they walked, and went into the country.                                                                                                                                            13 And they went and told it unto the residue: neither believed they them. <Jn 20:25-29>                                                                                                                                          25 The other disciples therefore said unto him, We have seen the Lord. But he said unto them, Except I shall see in his hands the print of the nails, and put my finger into the print of the nails, and thrust my hand into his side, I will not believe.                                        26 And after eight days again his disciples were within, and Thomas with them: then came Jesus, the doors being shut, and stood in the midst, and said, Peace be unto you.                 27 Then saith he to Thomas, reach hither thy finger, and behold my hands; and reach hither thy hand, and thrust it into my side: and be not faithless, but believing.                    28 And Thomas answered and said unto him, My Lord and my God.                                      29 Jesus saith unto him, Thomas, because thou hast seen me, thou hast believed: blessed are they that have not seen, and yet have believed. <Jn 21:10-14>                                                                                                                                        10 Jesus saith unto them, Bring of the fish which ye have now caught.                                      11 Simon Peter went up, and drew the net to land full of great fishes, and hundred and fifty and three: and for all there were so many, yet was not the net broken.                                    12 Jesus saith unto them, Come and dine. And none of the disciples durst ask him, Who art thou? knowing that it was the Lord.                                                                                            13 Jesus then cometh, and taketh bread, and giveth them, and fish likewise.                            14 This is now the third time that Jesus shewed himself to his disciples, after that he was risen from the dead. <1 Co 15:1-8>                                                                                                                                           1 Moreover, brethren, I declare unto you the gospel which I preached unto you, which also ye have received, and wherein ye stand;                                                                                               2 By which also ye are saved, if ye keep in memory what I preached unto you, unless ye have believed in vain.                                                                                                                               3 For I delivered unto you first of all that which I also received, how that Christ died for our sins according to the scriptures;                                                                                                   4 And that he was buried, and that he rose again the third day according to the scriptures:   5 And that he was seen of Cephas, then of the twelve:                                                                   6 After that, he was seen of above five hundred brethren at once; of whom the greater part remain unto this present, but some are fallen asleep.                                                                     7 After that, he was seen of James; then of all the apostles.                                                     8 And last of all he was seen of me also, as of one born out of due time. Now, the times of revelation, which is the turning point of the history of the new mankind when all mysteries of the Bible are revealed, is both the end times when all covenants of the Bible are completed and the new beginning. Jesus our Lord and Savior came to this world because of this evangelism to achieve and  show holy body of eternal life and, starting with the twelve disciples, conquered all evil authorities of this world and spread the will of God to all regions of the world (Mt 28:18-20). Though they were not secularly intelligent people, with the power of the Holy Spirit, they were transformed into people who 1. Had the vision of Jesus Christ (Rev 5:6) 2. Had the heart of Jesus Christ (Php 2:5-11) 3. Planted the love of Jesus Christ (Jn 13:34-35)\ 4. Followed the form of Jesus Christ (Ro 8:28-31) 5. Obeyd the prophecy of Jesus Christ (Rev 1:3) 6. Spread the gospel of Jesus Christ (Eph 4:11-16) 7. Fulfilled the law of Jesus Christ (Gal 6:2) <Ac 4:12-14>                                                                                                                                          12 Neither is there salvation in any other: for there is none other name under heaven given among men, whereby we must be saved.                                                                                          13 Now when they saw the boldness of Peter and John, and perceived that they were unlearned and ignorant men, they marvelled; and they took knowledge of them, that they had been with Jesus.                                                                                                                               14 And beholding the man which was healed standing with them, they could say nothing against it. <Jn 14:26>                                                                                                                                              26 But the Comforter, which is the Holy Ghost, whom the Father will send in my name, he shall teach you all things, and bring all things to your remembrance, whatsoever I have said unto you. Just like them, when standing at the center of the religious world, actively following the words and prophecy of Jesus given in the right path of the eternal life for the kingdom and the priests of the Lord and Christ, and making the spread of them the priority, the great progress as an evangelist of the gospel will shine. (1 Ti 4:10-15) Therefore, it is time for us to independently and actively participate in the ultimate plan of God to renew all things rather than being mere bystanders (Rev 21:5) as we spread the lively oracles of Christ to all peoples living on this earth and bear fruits of the kingdom of our Lord and Christ. Now, it is also a crucial time for all people to become the brides of Jesus and realize all covenants of the Bible, based on the new order (Hos 4:6-10/Isa 24:1-2), procedure and glory of the faith prophesied to be equipped with the mindset of the prophesy, which lets us know about God(Mt 11:27/Eph 1:17), and to become the sons of God (Gal 3:23-29/Rev 21:7-8). “All things are delivered unto me of my Father: and no man knoweth the Son, but the Father; neither knoweth any man the Father, save the Son, and he to whomsoever the Son will reveal him. Come unto me, all ye that labour and are heavy laden, and I will give you rest.”  <Mt 11:27-28> <Rev 21:7>                                                                                                                                               7 He that overcometh shall inherit all things; and I will be his God, and he shall be my son. People who know and take into heart the open Revelation are those that have received the revelation about the end time and also those who know the plan of God.(Mt 11:27) Therefore, for anyone who learns about the ultimate plan of God to save us, we must realize, through the grace of the Lord, that it is now the point of selection towards the resurrection and eternal life leading to the completion of the ultimate plan of God in the end times, which is consisted of giving meat in due season, finding grace to help in the time of need and the dispensation of the fullness of times. We must be faithful that the work of God that has already begun will be fulfilled, strive for the best, declare boldly, be united in love and go forward together. <Lk 21:35>                                                                                                                                                 35 For as a snare shall it come on all them that dwell on the face of the whole earth. <Isa 24:1-2>                                                                                                                                             1 Behold, the LORD maketh the earth empty, and maketh it waste, and turneth it upside down, and scattereth abroad the inhabitants thereof.                                                               2 And it shall be, as with the people, so with the priest; as with the servant, so with his master; as with the maid, so with her mistress; as with the buyer, so with the seller; as with the lender, so with the borrower; as with the taker of usury, so with the giver of usury to him. <Ho 4:6-10>                                                                                                                                               6 My people are destroyed for lack of knowledge: because thou hast rejected knowledge, I will also reject thee, that thou shalt be no priest to me: seeing thou hast forgotten the law of thy God, I will also forget thy children.                                                                                         7 As they were increased, so they sinned against me: therefore will I change their glory into shame.                                                                                                                                                 8 They eat up the sin of my people, and they set their heart on their iniquity.                          9 And there shall be, like people, like priest: and I will punish them for their ways, and reward them their doings.                                                                                                                      10 For they shall eat, and not have enough: they shall commit whoredom, and shall not increase: because they have left off to take heed to the LORD. Becoming the kingdom and priests is the most important message that the Trinity God sends us at the beginning stage of Revelation. It is also new things symbolized by a new song already prophesied, and is a big turning point in the history of mankind taking place in the religious realm. Jesus’ death for the forgiveness of our sins is to give his life as a ransom for many because he loves and chooses us (John 15:16), and wants to give us the blessings of eternal life by making us, who choose the path of life, the kingdom and the priests of God. As a result, making us as the kingdom and the priests of God with the atonement that comes with his blood shed on the cross is the true blessings and the confirmation of grace that are given to us who will convert into “the servants” of God in the revelation times. Also, we must definitely understand that the history of atonement of Jesus’ blood will be complete only when we rise up to be the kingdom and the priests of God   (Isa 61:2-3/Lk 4:18-24/Mt 24:44-51/Eph 1:9-10). <Rev 1:5-6>                                                                                                                                                5. And from Jesus Christ, who is the faithful witness, and the first begotten of the dead, and the prince of the kings of the earth. Unto him that loved us, and washed us from our sins in his own blood, 6. And hath made us kings and priests unto God and his Father; to him be glory and dominion for ever and ever. Amen. <Rev 5:9-10>                                                                                                                                              9 And they sung a new song, saying, Thou art worthy to take the book, and to open the seals thereof: for thou wast slain, and hast redeemed us to God by thy blood out of every kindred, and tongue, and people, and nation;                                                                                10 And hast made us unto our God kings and priests: and we shall reign on the earth. <Ro 12:1-2>                                                                                                                                               1 I beseech you therefore, brethren, by the mercies of God, that ye present your bodies a living sacrifice, holy, acceptable unto God, which is your reasonable service.                             2 And be not conformed to this world: but be ye transformed by the renewing of your mind, that ye may prove what is that good, and acceptable, and perfect, will of God. <Isa 42:9-10>                                                                                                                                             9 Behold, the former things are come to pass, and new things do I declare: before they spring forth I tell you of them.                                                                                                    10 Sing unto the LORD a new song, and his praise from the end of the earth, ye that go down to the sea, and all that is therein; the isles, and the inhabitants thereof. <Isa 66:20-21>                                                                                                                                      20. And they shall bring all your brethren for an offering unto the LORD out of all nations upon horses, and in chariots, and in litters, and upon mules, and upon swift beasts, to my holy mountain Jerusalem, saith the LORD, as the children of Israel bring an offering in a clean vessel into the house of the LORD. 21. And I will also take of them for priests and for Levites, saith the LORD. It is the praise that will start from the ends of the earth, and even before the work starts, the new song that has been prophesied in the book of Isaiah as the new things given by God (Isa 42:9-10) refers to the fact that all people, without prejudices (Hos 4:6-10), will be re-born as the new kingdom and priests of God in the revelation times. <1Jn 2:27>                                                                                                                                              27 But the anointing which ye have received of him abideth in you, and ye need not that any man teach you: but as the same anointing teacheth you of all things, and is truth, and is no lie, and even as it hath taught you, ye shall abide in him. <Joe 2:32>                                                                                                                                          32 And it shall come to pass, that whosoever shall call on the name of the LORD shall be delivered: for in mount Zion and in Jerusalem shall be deliverance, as the LORD hath said, and in the remnant whom the LORD shall call. Also, upon making a decision on our own to be a true disciple of Jesus as we realize His word in revelation, we escape from the prevalent mammonism and spiritual dependence and, with the Holy Spirit, stand on our own at the center of the new religious world as the kingdom and priests of God in the end time, emanating the scent and light of Christ. <Jn 14:12-14>                                                                                                                                        12 Verily, verily, I say unto you, He that believeth on me, the works that I do shall he do also; and greater works than these shall he do; because I go unto my Father.                        13 And whatsoever ye shall ask in my name, that will I do, that the Father may be glorified in the Son.                                                                                                                                               14 If ye shall ask any thing in my name, I will do it. <Php 4:13>                                                                                                                                                  13 I can do all things through Christ which strengtheneth me. <Mk 9:23>                                                                                                                                                23 Jesus said unto him, If thou canst believe, all things are possible to him that believeth. Therefore, now we must escape from divisions, confusion and chaos, and pursue the eternal gospel(Rev 14:6) that delivers the ultimate plan in the end times for the believers of God. Also, we must simultaneously and ubiquitously constitute the network of God in every corner of this world, become one of the twelve partnering disciples, take one another’s burdens in the given environment and maximize the power of freedom stemming from the truth. <Jn 8:32>                                                                                                                                           32 And ye shall know the truth, and the truth shall make you free. <Gal 5:1>                                                                                                                                                     1 Stand fast therefore in the liberty wherewith Christ hath made us free, and be not entangled again with the yoke of bondage. <2 Co 2:17>                                                                                                                                         17 For we are not as many, which corrupt the word of God: but as of sincerity, but as of God, in the sight of God speak we in Christ. <Jer 20:9>                                                                                                                                               9 Then I said, I will not make mention of him, nor speak any more in his name. But his word was in mine heart as a burning fire shut up in my bones, and I was weary with forbearing, and I could not stay. It is also time for those who are asleep in the religious realm to all wake up, become the lamp that lights up the world darkened by the power of the air within Satan’s influences and grip such as idolatry stemmed from covetousness and mammonism, and stand up to light up the world (Isa 60:1/Eph 5:14). Arise, shine; for thy light is come! “Arise, shine; for thy light is come, and the glory of the LORD is risen upon thee.”             (Isa 60:1) <Eph 2:2-8>                                                                                                                                             2 Wherein in time past ye walked according to the course of this world, according to the prince of the power of the air, the spirit that now worketh in the children of disobedience:   3 Among whom also we all had our conversation in times past in the lusts of our flesh, fulfilling the desires of the flesh and of the mind; and were by nature the children of wrath, even as others.                                                                                                                                         4 But God, who is rich in mercy, for his great love wherewith he loved us                               5 Even when we were dead in sins, hath quickened us together with Christ, (by grace ye are saved;)                                                                                                                                                 6 And hath raised us up together, and made us sit together in heavenly places in Christ Jesus:                                                                                                                                                         7 That in the ages to come he might shew the exceeding riches of his grace in his kindness toward us through Christ Jesus.                                                                                                             8 For by grace are ye saved through faith; and that not of yourselves: it is the gift of God: <Eph 6:12-13>                                                                                                                                         12 For we wrestle not against flesh and blood, but against principalities, against powers, against the rulers of the darkness of this world, against spiritual wickedness in high places.                                                                                                                                            13 Wherefore take unto you the whole armour of God, that ye may be able to withstand in the evil day, and having done all, to stand. <Isa 65:14-17>                                                                                                                                       14 Behold, my servants shall sing for joy of heart, but ye shall cry for sorrow of heart, and shall howl for vexation of spirit.                                                                                                          15 And ye shall leave your name for a curse unto my chosen: for the Lord GOD shall slay thee, and call his servants by another name:                                                                                    16 That he who blesseth himself in the earth shall bless himself in the God of truth; and he that sweareth in the earth shall swear by the God of truth; because the former troubles are forgotten, and because they are hid from mine eyes.                                                                      17 For, behold, I create new heavens and a new earth: and the former shall not be remembered, nor come into mind. <2 Th 2:9-15>                                                                                                                                          9 Even him, whose coming is after the working of Satan with all power and signs and lying wonders,                                                                                                                                                  10 And with all deceivableness of unrighteousness in them that perish; because they received not the love of the truth, that they might be saved.                                                          11 And for this cause God shall send them strong delusion, that they should believe a lie:  12 That they all might be damned who believed not the truth, but had pleasure in unrighteousness.                                                                                                                                   13 But we are bound to give thanks alway to God for you, brethren beloved of the Lord, because God hath from the beginning chosen you to salvation through sanctification of the Spirit and belief of the truth:                                                                                                              14 Whereunto he called you by our gospel, to the obtaining of the glory of our Lord Jesus Christ.                                                                                                                                          15 Therefore, brethren, stand fast, and hold the traditions which ye have been taught, whether by word, or our epistle.
Wiwatanapataphee, Benchawan Compute Distance To: Author ID: wiwatanapataphee.benchawan Published as: Wiwatanapataphee, B.; Wiwatanapataphee, Benchawan; Wiwatanapataphee, B External Links: ORCID Documents Indexed: 71 Publications since 1999 1 Contribution as Editor Co-Authors: 61 Co-Authors with 70 Joint Publications 1,881 Co-Co-Authors all top 5 Co-Authors 1 single-authored 57 Wu, Yonghong 12 Liu, Lishan 9 Zhang, Xinguang 6 Lenbury, Yongwimon 5 Khajohnsaksumeth, Nathnarong 4 Sawangtong, Wannika 4 Wu, Qingsong 3 Amornsamankul, Somkid 3 Chayantrakom, Kittisak 3 Ge, Xiangyu 3 Lai, Shaoyong 3 Mookum, Theeradech 3 Poltem, Duangkamol 3 Zhou, Yanli 2 Archapitak, J. 2 Chuayjan, Wariam 2 Keady, Grant 2 Kongnual, Suputchara 2 Li, Shuang 2 Liu, Shican 2 Noinang, Sakda 2 Pothiphan, Surapa 2 Sun, Qian 2 Tang, Chang-Fu 2 Tang, I Ming 2 Thongnak, Sutthiwat 2 Yang, Yu 2 Zhang, Yan 1 Angkola, Francisca 1 Charoenloedmongkhon, Akapak 1 Chimmalee, Benjamas 1 Chomcheon, Suranath 1 Chuchalerm, Nattawan 1 Chuchard, Pearanat 1 Collinson, Roger 1 Cui, Yujun 1 Ghazi Alshanti, Waseem 1 Giannini, Lou 1 Hill, James Murray 1 Hu, Maobin 1 Jiang, Jiqiang 1 Jiang, Rui 1 Jiang, Yongsheng 1 Jumpen, Wannika 1 Kang, Ping 1 Khajohnsaksumetha, Nathnarong 1 Novaprateep, Boribon 1 Nuntadilok, Buraskorn 1 Orankitjaroen, Somsak 1 Phang, Chang 1 Ruan, Xinfeng 1 Ruengsakulrach, P. 1 Siddheshwar, Pradeep G. 1 Siew, Peg-Foo 1 Siriapisith, Thanongchai 1 Srimongkol, Sineenart 1 Unyong, Bundit 1 Wu, Jing 1 Yu, Xijun 1 Yuan, Wenjun 1 Zhang, Guangquan all top 5 Serials 9 Abstract and Applied Analysis 8 International Journal of Pure and Applied Mathematics 5 East-West Journal of Mathematics 4 Advances in Difference Equations 3 Journal of Computational and Applied Mathematics 2 Applied Mathematics and Computation 2 Applied Mathematics Letters 2 The ANZIAM Journal 2 Nonlinear Analysis. Modelling and Control 2 Dynamics of Continuous, Discrete & Impulsive Systems. Series B. Applications & Algorithms 2 Discrete and Continuous Dynamical Systems. Series B 2 Mathematical Biosciences and Engineering 2 Far East Journal of Mathematical Education 1 Journal of Engineering Mathematics 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 International Journal of Computational Fluid Dynamics 1 Mathematical Inequalities & Applications 1 European Journal of Mechanics. B. Fluids 1 International Journal of Applied Mathematics 1 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis 1 Journal of Applied Mathematics 1 International Journal of Computational and Numerical Analysis and Applications (IJCNAA) 1 North American Actuarial Journal 1 International Journal of Mathematical Sciences 1 Boundary Value Problems 1 Journal of Industrial and Management Optimization 1 Journal of Physics A: Mathematical and Theoretical 1 Advances and Applications in Fluid Mechanics 1 Journal of Mathematical Sciences: Advances and Applications 1 Chamchuri Journal of Mathematics 1 East Asian Journal on Applied Mathematics 1 Cogent Mathematics 1 Cogent Mathematics & Statistics all top 5 Fields 36 Fluid mechanics (76-XX) 17 Biology and other natural sciences (92-XX) 16 Partial differential equations (35-XX) 11 Numerical analysis (65-XX) 8 Ordinary differential equations (34-XX) 7 Classical thermodynamics, heat transfer (80-XX) 6 Mechanics of deformable solids (74-XX) 6 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 4 Probability theory and stochastic processes (60-XX) 4 Systems theory; control (93-XX) 3 General and overarching topics; collections (00-XX) 3 Operations research, mathematical programming (90-XX) 3 Mathematics education (97-XX) 2 Real functions (26-XX) 1 Functions of a complex variable (30-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Functional analysis (46-XX) 1 Convex and discrete geometry (52-XX) 1 Statistics (62-XX) 1 Optics, electromagnetic theory (78-XX) Citations contained in zbMATH Open 23 Publications have been cited 327 times in 235 Documents Cited by Year The spectral analysis for a singular fractional differential equation with a signed measure. Zbl 1338.34032 Zhang, Xinguang; Liu, Lishan; Wu, Yonghong; Wiwatanapataphee, B. 2015 The eigenvalue for a class of singular $$p$$-Laplacian fractional differential equations involving the Riemann-Stieltjes integral boundary condition. Zbl 1334.34060 Zhang, Xinguang; Liu, Lishan; Wiwatanapataphee, Benchawan; Wu, Yonghong 2014 Nontrivial solutions for a fractional advection dispersion equation in anomalous diffusion. Zbl 1364.35429 Zhang, Xinguang; Liu, Lishan; Wu, Yonghong; Wiwatanapataphee, B. 2017 Positive solutions of eigenvalue problems for a class of fractional differential equations with derivatives. Zbl 1242.34015 Zhang, Xinguang; Liu, Lishan; Wiwatanapataphee, Benchawan; Wu, Yonghong 2012 Positive solutions of singular boundary value problems for systems of nonlinear fourth order differential equations. Zbl 1134.34015 Liu, Lishan; Kang, Ping; Wu, Yonghong; Wiwatanapataphee, Benchawan 2008 A study of transient flows of Newtonian fluids through micro-annuals with a slip boundary. Zbl 1168.76014 Wiwatanapataphee, B.; Wu, Yong Hong; Hu, Maobin; Chayantrakom, K. 2009 Mean-variance asset liability management with state-dependent risk aversion. Zbl 1414.91247 Zhang, Yan; Wu, Yonghong; Li, Shuang; Wiwatanapataphee, Benchawan 2017 A numerical study of the turbulent flow of molten steel in a domain with a phase-change boundary. Zbl 1107.76358 Wiwatanapataphee, B.; Wu, Y. H.; Archapitak, J.; Siew, P. F.; Unyong, B. 2004 On exact travelling wave solutions for two types of nonlinear $$K(n,n)$$ equations and a generalized KP equation. Zbl 1187.35216 Lai, Shaoyong; Wu, Y. H.; Wiwatanapataphee, B. 2008 Iterative algorithm and estimation of solution for a fractional order differential equation. Zbl 1383.35249 Wu, Jing; Zhang, Xinguang; Liu, Lishan; Wu, Yonghong; Wiwatanapataphee, Benchawan 2016 Inequalities for the fundamental Robin eigenvalue for the Laplacian on $$N$$-dimensional rectangular parallelepipeds. Zbl 1403.35187 2018 Analysis of flux flow and the formation of oscillation marks in the continuous caster. Zbl 0964.76020 Hill, James M.; Wu, Yong Hong; Wiwatanapataphee, Benchawan 1999 Simulation of pulsatile flow of blood in stenosed coronary artery bypass with graft. Zbl 1097.92019 Wiwatanapataphee, B.; Poltem, D.; Wu, Y. H.; Lenbury, Y. 2006 Modelling of turbulent flow and multi-phase heat transfer under electromagnetic force. Zbl 1131.76055 Wu, Yong Hong; Wiwatanapataphee, B 2007 Iterative properties of solution for a general singular $$n$$-Hessian equation with decreasing nonlinearity. Zbl 1472.65066 Zhang, Xinguang; Jiang, Jiqiang; Wu, Yonghong; Wiwatanapataphee, Benchawan 2021 An enthalpy control volume method for transient mass and heat transport with solidification. Zbl 1116.76393 Wu, Yong Hong; Wiwatanapataphee, Benchawan; Yu, Xijun 2004 Study of pulsatile pressure-driven electroosmotic flows through an elliptic cylindrical microchannel with the Navier slip condition. Zbl 1422.76036 Chuchard, Pearanat; Orankitjaroen, Somsak; Wiwatanapataphee, Benchawan 2017 The effects of community interactions and quarantine on a complex network. Zbl 1426.92070 Chimmalee, Benjamas; Sawangtong, Wannika; Wiwatanapataphee, Benchawan 2016 Effects of the wind speeds on heat transfer in a street canyon with a skytrain station. Zbl 1459.76068 Pothiphan, Surapa; Khajohnsaksumeth, Nathnarong; Wiwatanapataphee, Benchawan 2019 Numerical simulation of granular flow during filling and discharging of a silo. Zbl 1205.93016 Chuayjan, W.; Pothiphan, S.; Wiwatanapataphee, B.; Wu, Y. H. 2010 A SEIQR model for pandemic influenza and its parameter identification. Zbl 1176.92043 Jumpen, W.; Wiwatanapataphee, B.; Wu, Y. H.; Tang, I. M. 2009 Modelling of non-Newtonian blood flow through stenosed coronary arteries. Zbl 1155.93013 Wiwatanapataphee, Benchawan 2008 Computation of the domain of attraction for suboptimal immunity epidemic models using the maximal Lyapunov function method. Zbl 1402.92402 Phang, Chang; Wu, Yonghong; Wiwatanapataphee, Benchawan 2013 Iterative properties of solution for a general singular $$n$$-Hessian equation with decreasing nonlinearity. Zbl 1472.65066 Zhang, Xinguang; Jiang, Jiqiang; Wu, Yonghong; Wiwatanapataphee, Benchawan 2021 Effects of the wind speeds on heat transfer in a street canyon with a skytrain station. Zbl 1459.76068 Pothiphan, Surapa; Khajohnsaksumeth, Nathnarong; Wiwatanapataphee, Benchawan 2019 Inequalities for the fundamental Robin eigenvalue for the Laplacian on $$N$$-dimensional rectangular parallelepipeds. Zbl 1403.35187 2018 Nontrivial solutions for a fractional advection dispersion equation in anomalous diffusion. Zbl 1364.35429 Zhang, Xinguang; Liu, Lishan; Wu, Yonghong; Wiwatanapataphee, B. 2017 Mean-variance asset liability management with state-dependent risk aversion. Zbl 1414.91247 Zhang, Yan; Wu, Yonghong; Li, Shuang; Wiwatanapataphee, Benchawan 2017 Study of pulsatile pressure-driven electroosmotic flows through an elliptic cylindrical microchannel with the Navier slip condition. Zbl 1422.76036 Chuchard, Pearanat; Orankitjaroen, Somsak; Wiwatanapataphee, Benchawan 2017 Iterative algorithm and estimation of solution for a fractional order differential equation. Zbl 1383.35249 Wu, Jing; Zhang, Xinguang; Liu, Lishan; Wu, Yonghong; Wiwatanapataphee, Benchawan 2016 The effects of community interactions and quarantine on a complex network. Zbl 1426.92070 Chimmalee, Benjamas; Sawangtong, Wannika; Wiwatanapataphee, Benchawan 2016 The spectral analysis for a singular fractional differential equation with a signed measure. Zbl 1338.34032 Zhang, Xinguang; Liu, Lishan; Wu, Yonghong; Wiwatanapataphee, B. 2015 The eigenvalue for a class of singular $$p$$-Laplacian fractional differential equations involving the Riemann-Stieltjes integral boundary condition. Zbl 1334.34060 Zhang, Xinguang; Liu, Lishan; Wiwatanapataphee, Benchawan; Wu, Yonghong 2014 Computation of the domain of attraction for suboptimal immunity epidemic models using the maximal Lyapunov function method. Zbl 1402.92402 Phang, Chang; Wu, Yonghong; Wiwatanapataphee, Benchawan 2013 Positive solutions of eigenvalue problems for a class of fractional differential equations with derivatives. Zbl 1242.34015 Zhang, Xinguang; Liu, Lishan; Wiwatanapataphee, Benchawan; Wu, Yonghong 2012 Numerical simulation of granular flow during filling and discharging of a silo. Zbl 1205.93016 Chuayjan, W.; Pothiphan, S.; Wiwatanapataphee, B.; Wu, Y. H. 2010 A study of transient flows of Newtonian fluids through micro-annuals with a slip boundary. Zbl 1168.76014 Wiwatanapataphee, B.; Wu, Yong Hong; Hu, Maobin; Chayantrakom, K. 2009 A SEIQR model for pandemic influenza and its parameter identification. Zbl 1176.92043 Jumpen, W.; Wiwatanapataphee, B.; Wu, Y. H.; Tang, I. M. 2009 Positive solutions of singular boundary value problems for systems of nonlinear fourth order differential equations. Zbl 1134.34015 Liu, Lishan; Kang, Ping; Wu, Yonghong; Wiwatanapataphee, Benchawan 2008 On exact travelling wave solutions for two types of nonlinear $$K(n,n)$$ equations and a generalized KP equation. Zbl 1187.35216 Lai, Shaoyong; Wu, Y. H.; Wiwatanapataphee, B. 2008 Modelling of non-Newtonian blood flow through stenosed coronary arteries. Zbl 1155.93013 Wiwatanapataphee, Benchawan 2008 Modelling of turbulent flow and multi-phase heat transfer under electromagnetic force. Zbl 1131.76055 Wu, Yong Hong; Wiwatanapataphee, B 2007 Simulation of pulsatile flow of blood in stenosed coronary artery bypass with graft. Zbl 1097.92019 Wiwatanapataphee, B.; Poltem, D.; Wu, Y. H.; Lenbury, Y. 2006 A numerical study of the turbulent flow of molten steel in a domain with a phase-change boundary. Zbl 1107.76358 Wiwatanapataphee, B.; Wu, Y. H.; Archapitak, J.; Siew, P. F.; Unyong, B. 2004 An enthalpy control volume method for transient mass and heat transport with solidification. Zbl 1116.76393 Wu, Yong Hong; Wiwatanapataphee, Benchawan; Yu, Xijun 2004 Analysis of flux flow and the formation of oscillation marks in the continuous caster. Zbl 0964.76020 Hill, James M.; Wu, Yong Hong; Wiwatanapataphee, Benchawan 1999 all top 5 Cited by 336 Authors 57 Wu, Yonghong 54 Liu, Lishan 32 Zhang, Xinguang 26 Cui, Yujun 12 Jiang, Jiqiang 12 Wiwatanapataphee, Benchawan 8 Hao, Xin’an 7 Li, Peiluan 7 Liao, Maoxin 7 Xu, Changjin 7 Xu, Jiafa 6 Wu, Jing 5 Guo, Limin 5 O’Regan, Donal 5 Wang, Fang 5 Wang, Jinrong 5 Wang, Yongqing 5 Zhang, Xinqiu 4 Fu, Zhengqing 4 Khajohnsaksumeth, Nathnarong 4 Tan, Jingjing 4 Wang, Guotao 4 Yuan, Shuai 4 Zhang, Xingqiu 4 Zhong, Qiuyan 4 Zou, Yumei 3 Feng, Meiqiang 3 Gu, Yongyi 3 He, Ying 3 Jia, Mei 3 Liu, Weiwei 3 Ren, Teng 3 Samei, Mohammad Esmael 3 Sun, Qiao 3 Xiao, Qimei 3 Xu, Bo 3 Zhang, Kemei 3 Zhang, Sheng 2 Alzabut, Jehad O. 2 Băleanu, Dumitru I. 2 Chen, Yanping 2 Cheng, Wei 2 Dang, Duy Minh 2 Forsyth, Peter A. 2 Georgiou, Georgios C. 2 He, Jianxin 2 Kaoullas, George 2 Korti, Abdel Illah Nabil 2 Lai, Shaoyong 2 Laugesen, Richard Snyder 2 Li, Yunhong 2 Liu, Fawang 2 Liu, Lin 2 Liu, Xiping 2 Liu, Yansheng 2 Liu, Zixin 2 Lv, Xiumei 2 Ma, Wenjie 2 Mao, Cuiling 2 Mao, Jinxiu 2 Meng, Fanning 2 Min, Dandan 2 Qiao, Yan 2 Su, Xinwei 2 Sun, Fenglong 2 Sun, Qian 2 Sun, Sujing 2 Tao, Hao 2 Ur Rehman, Mujeeb 2 Van Staden, Pieter M. 2 Wang, Chenguang 2 Wang, Huaqing 2 Wu, Tunhua 2 Wu, Wenquan 2 Xie, Shengli 2 Xie, YiMing 2 Yu, Lixin 2 Zhai, Chengbo 2 Zhang, Keyu 2 Zhang, Lihong 2 Zhang, Luyao 2 Zhang, Xuemei 2 Zhao, Zengqin 2 Zheng, Liancun 2 Zhou, Xiangbing 2 Zhou, Zongfu 2 Zhu, Xiaolin 1 Abu-AlShaeer, Mahmood Jawad 1 Ahmad, Bashir 1 Ahmad, Mansoor 1 Ahmadi, Ahmad 1 Ahmadkhanlu, Asghar 1 Alaoui, Abdelilah Lamrani 1 Alla Hamou, Abdelouahed 1 Anderson, Douglas Robert 1 Aouiti, Chaouki 1 Aydogan, Seher Melike 1 Azroul, Elhoussine 1 Bai, Shikun 1 Bai, Shuangshuang ...and 236 more Authors all top 5 Cited in 55 Serials 36 Advances in Difference Equations 27 Abstract and Applied Analysis 25 Boundary Value Problems 23 Journal of Function Spaces 12 Applied Mathematics and Computation 11 Applied Mathematics Letters 10 Nonlinear Analysis. Modelling and Control 9 Complexity 7 Journal of Nonlinear Science and Applications 6 Mathematical Problems in Engineering 5 Journal of Applied Analysis and Computation 4 Journal of Inequalities and Applications 3 Meccanica 3 Discrete Dynamics in Nature and Society 3 Journal of Applied Mathematics and Computing 2 Journal of Mathematical Physics 2 Chaos, Solitons and Fractals 2 Journal of Computational and Applied Mathematics 2 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 2 Applied Mathematical Modelling 2 International Journal of Computational Methods 2 Journal of Fixed Point Theory and Applications 2 Advances in Mathematical Physics 2 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 2 AIMS Mathematics 1 Computers & Mathematics with Applications 1 International Journal of Heat and Mass Transfer 1 Journal of Mathematical Analysis and Applications 1 Mathematical Methods in the Applied Sciences 1 Canadian Journal of Mathematics 1 Demonstratio Mathematica 1 Insurance Mathematics & Economics 1 Optimization 1 Computational Mechanics 1 Mathematical and Computer Modelling 1 Electronic Journal of Differential Equations (EJDE) 1 Nonlinear Dynamics 1 Vietnam Journal of Mathematics 1 International Journal of Theoretical and Applied Finance 1 European Journal of Mechanics. B. Fluids 1 Qualitative Theory of Dynamical Systems 1 Mathematical Modelling and Analysis 1 Bulletin of the Brazilian Mathematical Society. New Series 1 Journal of Statistical Mechanics: Theory and Experiment 1 SIAM Journal on Financial Mathematics 1 International Journal of Differential Equations 1 Journal of Mathematical Extension 1 Chamchuri Journal of Mathematics 1 Journal of Mathematics 1 ISRN Biomathematics 1 Computational Methods for Differential Equations 1 International Journal of Applied and Computational Mathematics 1 Open Mathematics 1 Cogent Mathematics & Statistics 1 Electronic Research Archive all top 5 Cited in 30 Fields 153 Ordinary differential equations (34-XX) 58 Operator theory (47-XX) 49 Partial differential equations (35-XX) 32 Real functions (26-XX) 20 Fluid mechanics (76-XX) 13 Numerical analysis (65-XX) 12 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 9 Systems theory; control (93-XX) 8 Integral equations (45-XX) 7 Biology and other natural sciences (92-XX) 6 Difference and functional equations (39-XX) 5 Classical thermodynamics, heat transfer (80-XX) 4 Dynamical systems and ergodic theory (37-XX) 4 Global analysis, analysis on manifolds (58-XX) 4 Probability theory and stochastic processes (60-XX) 4 Mechanics of deformable solids (74-XX) 3 Computer science (68-XX) 3 Statistical mechanics, structure of matter (82-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Integral transforms, operational calculus (44-XX) 2 Operations research, mathematical programming (90-XX) 1 History and biography (01-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Functions of a complex variable (30-XX) 1 Special functions (33-XX) 1 General topology (54-XX) 1 Statistics (62-XX) 1 Mechanics of particles and systems (70-XX) 1 Quantum theory (81-XX) 1 Information and communication theory, circuits (94-XX)
Volume 2, Issue 5 Study of Forced Convection Heat Transfer of Supercritical CO2 in a Horizontal Channel by Lattice Boltzmann Method Adv. Appl. Math. Mech., 2 (2010), pp. 564-572. Published online: 2010-02 Preview Full PDF 185 2338 Export citation Cited by • Abstract The problem of forced convection heat transfer of supercritical CO2 in a horizontal channel is investigated numerically by a lattice Boltzmann method. This study is stimulated by our recent experimental findings on solar collectors using supercritical CO2 as a working fluid, which can achieve the collector efficiency high up to 70%. To deeply understand the heat transfer characteristics of supercritical CO2 and provide a theoretical guidance for improving our current experimental system, in present study several typical experimental flow conditions are simulated. In particular, the work focuses on the convective heat transfer characteristics of supercritical CO2 flowing in a horizontal channel with mediate Reynolds numbers ranging from 210 to 840 and constant heat fluxes from 400.0 to 800.0 W/m$^2$. The simulations show that the heat transfer increases with heat flux and decreases with Reynolds number. Furthermore, the mechanisms of heat transfer enhancement of supercritical CO2 fluid are identified. • Keywords • AMS Subject Headings • BibTex • RIS • TXT @Article{AAMM-2-564, author = {Xiaodong Niu , and Hiroshi Yamaguchi , and Yuhiro Iwamoto , and Xinrong Zhang , and Mingjun Li , }, title = {Study of Forced Convection Heat Transfer of Supercritical CO2 in a Horizontal Channel by Lattice Boltzmann Method}, journal = {Advances in Applied Mathematics and Mechanics}, year = {2010}, volume = {2}, number = {5}, pages = {564--572}, abstract = { The problem of forced convection heat transfer of supercritical CO2 in a horizontal channel is investigated numerically by a lattice Boltzmann method. This study is stimulated by our recent experimental findings on solar collectors using supercritical CO2 as a working fluid, which can achieve the collector efficiency high up to 70%. To deeply understand the heat transfer characteristics of supercritical CO2 and provide a theoretical guidance for improving our current experimental system, in present study several typical experimental flow conditions are simulated. In particular, the work focuses on the convective heat transfer characteristics of supercritical CO2 flowing in a horizontal channel with mediate Reynolds numbers ranging from 210 to 840 and constant heat fluxes from 400.0 to 800.0 W/m$^2$. The simulations show that the heat transfer increases with heat flux and decreases with Reynolds number. Furthermore, the mechanisms of heat transfer enhancement of supercritical CO2 fluid are identified. }, issn = {2075-1354}, doi = {https://doi.org/10.4208/aamm.10-10S03}, url = {http://global-sci.org/intro/article_detail/aamm/8347.html} } TY - JOUR T1 - Study of Forced Convection Heat Transfer of Supercritical CO2 in a Horizontal Channel by Lattice Boltzmann Method AU - Xiaodong Niu , AU - Hiroshi Yamaguchi , AU - Yuhiro Iwamoto , AU - Xinrong Zhang , AU - Mingjun Li , JO - Advances in Applied Mathematics and Mechanics VL - 5 SP - 564 EP - 572 PY - 2010 DA - 2010/02 SN - 2 DO - http://doi.org/10.4208/aamm.10-10S03 UR - https://global-sci.org/intro/article_detail/aamm/8347.html KW - AB - The problem of forced convection heat transfer of supercritical CO2 in a horizontal channel is investigated numerically by a lattice Boltzmann method. This study is stimulated by our recent experimental findings on solar collectors using supercritical CO2 as a working fluid, which can achieve the collector efficiency high up to 70%. To deeply understand the heat transfer characteristics of supercritical CO2 and provide a theoretical guidance for improving our current experimental system, in present study several typical experimental flow conditions are simulated. In particular, the work focuses on the convective heat transfer characteristics of supercritical CO2 flowing in a horizontal channel with mediate Reynolds numbers ranging from 210 to 840 and constant heat fluxes from 400.0 to 800.0 W/m$^2$. The simulations show that the heat transfer increases with heat flux and decreases with Reynolds number. Furthermore, the mechanisms of heat transfer enhancement of supercritical CO2 fluid are identified. Xiaodong Niu, Hiroshi Yamaguchi, Yuhiro Iwamoto, Xinrong Zhang & Mingjun Li. (1970). Study of Forced Convection Heat Transfer of Supercritical CO2 in a Horizontal Channel by Lattice Boltzmann Method. Advances in Applied Mathematics and Mechanics. 2 (5). 564-572. doi:10.4208/aamm.10-10S03 Copy to clipboard The citation has been copied to your clipboard
# I Cauchy's Theorem 1. May 22, 2017 ### Silviu Hello! I am reading a book on complex analysis and I came across this: If $G \in \mathbb{C}$ is a region, a function f is holomorphic in G and $\gamma$ is a piecewise smooth path with $\gamma \sim_G 0$ then $\int_\gamma f = 0$. I want to make sure I understand. First of all, $\gamma \sim_G 0$ means that if G doesn't have "holes", any closed loop is homotopic to a point? And this also means that if G doesn't have holes, the integral of any holomorphic function over a closed loop is 0? Which means that in G any holomorphic function has an antiderivative? Thank you! 2. May 22, 2017 ### FactChecker What it specifically says is that the particular path $\gamma$ can be shrunk to a point within G. It doesn't say that all paths can be shrunk to a point within G. (But in the special case of G being simply connected, any closed path can be shrunk to a point within G.) Right. Right. 3. May 23, 2017 ### Silviu Thank you! One more question, as far as I remember a circle and a point are not topologically equivalent as they have a different number of holes. So topologically a circle can't be shrunk to a point. But homotopically it can. What is the difference between the two? In the case of a homotopy you ignore the holes? (I understand mathematically the definition of homotopy and homeomorphism I am just not sure I can visualize the difference from a geometrical point of view). 4. May 23, 2017 ### Staff: Mentor A point and a circle are not homeomorph since they are of different dimensions. You will lose information. Or formally, you cannot establish a bijection. A circle and a point, if regarded as the graphs of two functions are homotopic, since there is a continuous function (the shrinking) that transforms one into the other. However, if you cut out a point in the inner area of the circle, they won't be homotopic anymore, since the "shrinking" would have to "jump" across that whole, i.e. it cannot be done continuously anymore. 5. May 23, 2017 ### FactChecker Topology is not my strength, but here is a thought about terminology. We should distinguish between "shrink to a point" and "shrink to become a point". The first would mean that, given any open set containing the point, the path becomes completely contained in that open set. The second would mean that there is a homotopy between the line and a point, which is not possible(?). Perhaps there is already better terminology than "shrinks to a point". 6. May 23, 2017 ### Staff: Mentor It is possible: $H : \mathbb{S}^1 \times [0,1] \longrightarrow \{0\}\, , \,H(\varphi ,t)=(1-t)\cdot \begin{bmatrix}\cos \varphi \\ \sin \varphi \end{bmatrix}$ is continuous, $\{H(\varphi,0)\,\vert \,\varphi \in [0,2\pi)\} = \mathbb{S}^1\; \textrm{ and } \; \{H(\varphi ,1)\,\vert \,\varphi \in [0,2\pi)\} = \{(0,0)\}$ is a homotopy that shrinks the circle. 7. May 28, 2017 ### WWGD By cardinality reasons alone, a circle and a point are not topologically equivalent/ homeomorphic. 8. May 29, 2017 ### lavinia A space that can be shrunk to a point is called "contractible". This means that it can be shrunk to a point inside itself. That is: there is a homotopy i.e. a continuous map $H:M×[0,1]→M$ such that at time $0$ $H$ is the identity map and at time $1$ it projects the space $M$ onto a point $p$ in $M$. An example of a contractible space is Euclidean space. The continuous map $H(x,t) = (1-t)x$ is the identity at time $0$ and maps everything to the origin at time $1$. Another example is a set of line segments that share a common end point. If the topological space $M$ is a subset of another space $N$ then one says that $M$ is null homotopic in $N$ if it can be shrunk to a point in $N$. In this case the homotopy maps $M×[0,1]$ into $N$. That is: $H: M×[0,1]→N$ is the inclusion of $M$ in $N$ at time $0$ and maps $M$ to a point in $N$ at time $1$. A circle is not contractible. There is no homotopy $H:S^1×[0,1]→S^1$ that is the identity at time $0$ and projects the circle onto one of its points at time $1$. But a circle inside Euclidean space is homotopic to a point since the same homotopy that shrinks all of Euclidean space to a point shrinks any subset to a point. That is: every subset of Euclidean space is null homotopic. In complex analysis one considers regions of the plane that are not contractible. Typical examples are contractible open sets - such as a disk - minus a finite number of points or minus a finite number of disks, for instance an annulus. Within such regions, there are closed curves that are not null homotopic - for instance in a disk minus a point, a closed curve that winds around the missing point a finite number of times. Also there are closed curves that are null homotopic - for instance any closed curve that does not wind around the missing point. None of these closed curves can be shrunk to a point in themselves but some of them can be shrunk to a point within the region. Observations: - Two spaces are said to be homotopically equivalent if there are continuous maps $H:M →N$ and $G:N→M$ such that $HG$ and $GH$ are both homotopic to the identity map. Homotopically equivalent is not the same as homeomorphic. A contractible space is homotopically equivalent to a point but not in general, homeomorphic to a point. An annulus is homtopically equivalent to a circle. - In some spaces, every closed curve is null homotopic. Such a space is said to be "simply connected". For instance, the sphere of any dimension greater than one, is simply connected. A simply connected space need not be contractible though. No sphere is contractible. - In Algebraic Topology one measures the failure of a space to be simply connected by its fundamental group. By definition, a space is simply connected if its fundamental group is trivial. The fundamental group of a point is trivial. But the fundamental group of a circle is the integers. - If two spaces are homotopically equivalent, then they have isomorphic fundamental groups. Since Eulcidean space is homotopically equivalent to a point, its fundamental group is trivial. A circle is not homotopically equivalent to a point since its fundamental group is not trivial. Last edited: May 29, 2017
mersenneforum.org "Subproject" #10: 200k-300k to 110 digits Register FAQ Search Today's Posts Mark Forums Read 2011-04-14, 09:09   #1 RobertS Aug 2009 somewhere 3058 Posts "Subproject" #10: 200k-300k to 110 digits Reservations: Code: 200-250k RobertS (done!) 250-300k RobertS Quote: Originally Posted by bchaffin Edit (just missed the edit window): hmm, there are some mysterious gaps in my worker's logs, so maybe we were both adding factors at the same time... I picked up the downdriver at 219282.1969, but it dropped ~10 digits with 2^2 some time before that. Anyway, another one down! I've started to bring up all seqs in 195k-250k to 110 digits (I've announced it somewhere here in the forum). 219282 was one of the last in this range, because it was quite "intensive". I left it at i1945 (100digits) in the evening and it was done in the next morning, by me and you. But anyway, another one down! To avoid double work: I will continue with the seqs. 250k-300k, <110 digits. 195k-250k is done. PS: There are only 4860 seqs below 110 digits below 1M (mean size:106.2). Right now 50 to 70 seqs are passing the 110 digits limit daily! Last fiddled with by schickel on 2011-04-15 at 08:17 2011-04-14, 11:20   #2 10metreh Nov 2008 2·33·43 Posts Quote: Originally Posted by RobertS PS: There are only 4860 seqs below 110 digits below 1M (mean size:106.2). Right now 50 to 70 seqs are passing the 110 digits limit daily! At that rate it'll be less than 3 months till everything's up to 110 - will we need another 110 digits subproject or do you want to do all the work? 10k-50k to 120 digits isn't looking far off now 2011-04-14, 17:29   #3 RobertS Aug 2009 somewhere 110001012 Posts Quote: Originally Posted by 10metreh At that rate it'll be less than 3 months till everything's up to 110 - will we need another 110 digits subproject or do you want to do all the work? 10k-50k to 120 digits isn't looking far off now It seems like that it won't take 3 month, due to the huge progress (now 4828 seqs. <110 digits, < 1M) I have not planned to do all <110 <1M seqs. But I'm not sure, wether another 110 digits subproject is suitable or not. In most cases there are only few terms to do. I would join 10k-50k to 120 digits, after I could get to run GNFS on my unix machine. 2011-04-15, 07:29   #4 schickel "Frank <^>" Dec 2004 CDP Janesville 2·1,049 Posts Quote: Originally Posted by RobertS I've started to bring up all seqs in 195k-250k to 110 digits (I've announced it somewhere here in the forum). That was in the main reservation thread, but I didn't pay enough attention to think of putting in such a reservation. Mods, do we need to do something like that or should we just start a "mega project" thread to reserve wide swaths of numbers for intensive work? Quote: 219282 was one of the last in this range, because it was quite "intensive". I left it at i1945 (100digits) in the evening and it was done in the next morning, by me and you. I guess we could mark that as a major milestone.... Quote: But anyway, another one down! To avoid double work: I will continue with the seqs. 250k-300k, <110 digits. 195k-250k is done. A nice wide swath of work by anyone's standards.... Quote: PS: There are only 4860 seqs below 110 digits below 1M (mean size:106.2). Right now 50 to 70 seqs are passing the 110 digits limit daily ! Nice progress! 2011-04-15, 07:32   #5 schickel "Frank <^>" Dec 2004 CDP Janesville 83216 Posts Quote: Originally Posted by RobertS I would join 10k-50k to 120 digits, after I could get to run GNFS on my unix machine. Do you not run it because you haven't needed it or because you can't get it running? 2011-04-15, 07:51   #6 10metreh Nov 2008 91216 Posts Quote: Originally Posted by RobertS In most cases there are only few terms to do. I guess this is because of Ben's workers that are factoring all the lowest composites. This has the side-effect of pushing sequences up towards 110. Quote: Originally Posted by schickel Mods, do we need to do something like that or should we just start a "mega project" thread to reserve wide swaths of numbers for intensive work? This thread definitely isn't needed (under its current title, at least) as 200k-250k is done already by RobertS and he is now continuing to 300k. As taking sequences to 110 digits clearly isn't what it used to be, I think we coordinate the remaining effort in the ranges and status thread. Then when Subproject #9 runs out of sequences we can start the 120 digits effort (10k-50k, or perhaps 10k-100k?) for Subproject #10. Last fiddled with by 10metreh on 2011-04-15 at 07:52 2011-04-15, 08:19   #7 schickel "Frank <^>" Dec 2004 CDP Janesville 2·1,049 Posts Quote: Originally Posted by 10metreh This thread definitely isn't needed (under its current title, at least) as 200k-250k is done already by RobertS and he is now continuing to 300k. Oh, come on; what fits the spirit of the forum better than an entire thread devoted to a project that is almost done (and by one person at that!)? 2011-04-15, 09:35   #8 10metreh Nov 2008 1001000100102 Posts Quote: Originally Posted by schickel Oh, come on; what fits the spirit of the forum better than an entire thread devoted to a project that is almost done (and by one person at that!)? An entire thread devoted to a project that has already been done by only one person, I think. We could keep the thread, as long as we change the title... Edit: done. Last fiddled with by 10metreh on 2011-04-15 at 09:37 2011-04-15, 10:35   #9 Andi_HB Mar 2007 Germany 4108 Posts Quote: Originally Posted by 10metreh I guess this is because of Ben's workers that are factoring all the lowest composites. This has the side-effect of pushing sequences up towards 110. Ben is not the only one who run workers to factor all the lowest composites! My worker have also pushed the limit of the Database - thousends composites the last weeks. And we don`t know how much people run also DB-Helpers. So this work is from different Persons. And all Project benefit on that. Last fiddled with by Andi_HB on 2011-04-15 at 10:39 2011-05-07, 15:30 #10 RobertS     Aug 2009 somewhere 197 Posts 250k-300k is almost done. Will continue with 300k-350k PS: 3601 sequences < 1M < 110 digits left, with a mean size of 107.05 Similar Threads Thread Thread Starter Forum Replies Last Post a1call Miscellaneous Math 179 2015-11-12 14:59 wildrabbitt Miscellaneous Math 11 2015-03-06 08:17 Mini-Geek Aliquot Sequences 151 2011-05-14 09:01 nitai1999 Software 7 2004-08-26 18:12 All times are UTC. The time now is 02:55. Sat Nov 28 02:55:19 UTC 2020 up 79 days, 6 mins, 3 users, load averages: 1.09, 1.15, 1.14
# Conserved charges for complex scalar fields I have been studying complex scalar fields, and in Peskin and Schroeder, An Introduction to Quantum Field Theory, (chapter 2, problem 2, part d— on page 34) they ask you to compute the conserved charges for two equally massive complex scalar fields. So far I understand that the corresponding Lagrangian is invariant under $$U(2)$$ (which gives four separate conserved charges). But in a note, it says there are actually six. I don't see where the other two could come from. Is it related to the Lagrangian being invariant under a bigger symmetry group that I didn't notice? • Where are you getting six charges? My copy not only specifies that there should be four charges, it states exactly what they are. – Buzz Dec 29 '20 at 23:27 • At the bottom of the page it says: ''With some additional work you can show that there are actually six conserved charges in the case of two complex fields, and $n(2n-1 )$ in the case of $n$ fields, corresponding to the generators of the rotation group in four and $2 n$ dimensions, respectively. The extra symmetries often do not survive when nonlinear interactions of the fields are included.'' Dec 29 '20 at 23:36 Two complex scalar fields $$\phi_{1}$$ and $$\phi_{2}$$ can be rewritten as four real fields, in terms of their real and imaginary parts, $$\Phi=\sqrt{2}\left[\begin{array}{c} \Re\{\phi_{1}\} \\ \Im\{\phi_{1}\} \\ \Re\{\phi_{2}\} \\ \Im\{\phi_{2}\} \end{array}\right].$$ For the free theory, the Lagrange density is actually equal to $${\cal L}=\frac{1}{2}\partial^{\mu}\Phi_{i}\partial_{\mu}\Phi_{i}-\frac{m^{2}}{2}\Phi_{i}\Phi_{i},$$ with the $$i=1,\ldots,4$$ summed over. This is just the sum of four Lagrange densities for four independent real fields $$\Phi_{i}$$. This is clearly invariant under real $$SO(4)$$ rotations, of which there are six. However, these symmetries do not survive under the natural interactions for complex (i.e. charged) scalar fields, such as the current coupling term $$\left[\phi_{j}^{*}(\partial^{\mu}\phi_{j})-(\partial^{\mu}\phi_{j}^{*})\phi_{j}\right]A_{\mu}$$, (now summed over $$j=1,2$$).
## 10.140 Smooth algebras over fields Warning: The following two lemmas do not hold over nonperfect fields in general. Lemma 10.140.1. Let $k$ be an algebraically closed field. Let $S$ be a finite type $k$-algebra. Let $\mathfrak m \subset S$ be a maximal ideal. Then $\dim _{\kappa (\mathfrak m)} \Omega _{S/k} \otimes _ S \kappa (\mathfrak m) = \dim _{\kappa (\mathfrak m)} \mathfrak m/\mathfrak m^2.$ Proof. Consider the exact sequence $\mathfrak m/\mathfrak m^2 \to \Omega _{S/k} \otimes _ S \kappa (\mathfrak m) \to \Omega _{\kappa (\mathfrak m)/k} \to 0$ of Lemma 10.131.9. We would like to show that the first map is an isomorphism. Since $k$ is algebraically closed the composition $k \to \kappa (\mathfrak m)$ is an isomorphism by Theorem 10.34.1. So the surjection $S \to \kappa (\mathfrak m)$ splits as a map of $k$-algebras, and Lemma 10.131.10 shows that the sequence above is exact on the left. Since $\Omega _{\kappa (\mathfrak m)/k} = 0$, we win. $\square$ Lemma 10.140.2. Let $k$ be an algebraically closed field. Let $S$ be a finite type $k$-algebra. Let $\mathfrak m \subset S$ be a maximal ideal. The following are equivalent: 1. The ring $S_{\mathfrak m}$ is a regular local ring. 2. We have $\dim _{\kappa (\mathfrak m)} \Omega _{S/k} \otimes _ S \kappa (\mathfrak m) \leq \dim (S_{\mathfrak m})$. 3. We have $\dim _{\kappa (\mathfrak m)} \Omega _{S/k} \otimes _ S \kappa (\mathfrak m) = \dim (S_{\mathfrak m})$. 4. There exists a $g \in S$, $g \not\in \mathfrak m$ such that $S_ g$ is smooth over $k$. In other words $S/k$ is smooth at $\mathfrak m$. Proof. Note that (1), (2) and (3) are equivalent by Lemma 10.140.1 and Definition 10.110.7. Assume that $S$ is smooth at $\mathfrak m$. By Lemma 10.137.10 we see that $S_ g$ is standard smooth over $k$ for a suitable $g \in S$, $g \not\in \mathfrak m$. Hence by Lemma 10.137.7 we see that $\Omega _{S_ g/k}$ is free of rank $\dim (S_ g)$. Hence by Lemma 10.140.1 we see that $\dim (S_{\mathfrak m}) = \dim (\mathfrak m/\mathfrak m^2)$ in other words $S_\mathfrak m$ is regular. Conversely, suppose that $S_{\mathfrak m}$ is regular. Let $d = \dim (S_{\mathfrak m}) = \dim \mathfrak m/\mathfrak m^2$. Choose a presentation $S = k[x_1, \ldots , x_ n]/I$ such that $x_ i$ maps to an element of $\mathfrak m$ for all $i$. In other words, $\mathfrak m'' = (x_1, \ldots , x_ n)$ is the corresponding maximal ideal of $k[x_1, \ldots , x_ n]$. Note that we have a short exact sequence $I/\mathfrak m''I \to \mathfrak m''/(\mathfrak m'')^2 \to \mathfrak m/(\mathfrak m)^2 \to 0$ Pick $c = n - d$ elements $f_1, \ldots , f_ c \in I$ such that their images in $\mathfrak m''/(\mathfrak m'')^2$ span the kernel of the map to $\mathfrak m/\mathfrak m^2$. This is clearly possible. Denote $J = (f_1, \ldots , f_ c)$. So $J \subset I$. Denote $S' = k[x_1, \ldots , x_ n]/J$ so there is a surjection $S' \to S$. Denote $\mathfrak m' = \mathfrak m''S'$ the corresponding maximal ideal of $S'$. Hence we have $\xymatrix{ k[x_1, \ldots , x_ n] \ar[r] & S' \ar[r] & S \\ \mathfrak m'' \ar[u] \ar[r] & \mathfrak m' \ar[r] \ar[u] & \mathfrak m \ar[u] }$ By our choice of $J$ the exact sequence $J/\mathfrak m''J \to \mathfrak m''/(\mathfrak m'')^2 \to \mathfrak m'/(\mathfrak m')^2 \to 0$ shows that $\dim ( \mathfrak m'/(\mathfrak m')^2 ) = d$. Since $S'_{\mathfrak m'}$ surjects onto $S_{\mathfrak m}$ we see that $\dim (S_{\mathfrak m'}) \geq d$. Hence by the discussion preceding Definition 10.60.10 we conclude that $S'_{\mathfrak m'}$ is regular of dimension $d$ as well. Because $S'$ was cut out by $c = n - d$ equations we conclude that there exists a $g' \in S'$, $g' \not\in \mathfrak m'$ such that $S'_{g'}$ is a global complete intersection over $k$, see Lemma 10.135.4. Also the map $S'_{\mathfrak m'} \to S_{\mathfrak m}$ is a surjection of Noetherian local domains of the same dimension and hence an isomorphism. Hence $S' \to S$ is surjective with finitely generated kernel and becomes an isomorphism after localizing at $\mathfrak m'$. Thus we can find $g' \in S'$, $g \not\in \mathfrak m'$ such that $S'_{g'} \to S_{g'}$ is an isomorphism. All in all we conclude that after replacing $S$ by a principal localization we may assume that $S$ is a global complete intersection. At this point we may write $S = k[x_1, \ldots , x_ n]/(f_1, \ldots , f_ c)$ with $\dim S = n - c$. Recall that the naive cotangent complex of this algebra is given by $\bigoplus S \cdot f_ j \to \bigoplus S \cdot \text{d}x_ i$ see Lemma 10.136.13. By Lemma 10.137.16 in order to show that $S$ is smooth at $\mathfrak m$ we have to show that one of the $c \times c$ minors $g_ I$ of the matrix “$A$” giving the map above does not vanish at $\mathfrak m$. By Lemma 10.140.1 the matrix $A \bmod \mathfrak m$ has rank $c$. Thus we win. $\square$ Lemma 10.140.3. Let $k$ be any field. Let $S$ be a finite type $k$-algebra. Let $X = \mathop{\mathrm{Spec}}(S)$. Let $\mathfrak q \subset S$ be a prime corresponding to $x \in X$. The following are equivalent: 1. The $k$-algebra $S$ is smooth at $\mathfrak q$ over $k$. 2. We have $\dim _{\kappa (\mathfrak q)} \Omega _{S/k} \otimes _ S \kappa (\mathfrak q) \leq \dim _ x X$. 3. We have $\dim _{\kappa (\mathfrak q)} \Omega _{S/k} \otimes _ S \kappa (\mathfrak q) = \dim _ x X$. Moreover, in this case the local ring $S_{\mathfrak q}$ is regular. Proof. If $S$ is smooth at $\mathfrak q$ over $k$, then there exists a $g \in S$, $g \not\in \mathfrak q$ such that $S_ g$ is standard smooth over $k$, see Lemma 10.137.10. A standard smooth algebra over $k$ has a module of differentials which is free of rank equal to the dimension, see Lemma 10.137.7 (use that a relative global complete intersection over a field has dimension equal to the number of variables minus the number of equations). Thus we see that (1) implies (3). To finish the proof of the lemma it suffices to show that (2) implies (1) and that it implies that $S_{\mathfrak q}$ is regular. Assume (2). By Nakayama's Lemma 10.20.1 we see that $\Omega _{S/k, \mathfrak q}$ can be generated by $\leq \dim _ x X$ elements. We may replace $S$ by $S_ g$ for some $g \in S$, $g \not\in \mathfrak q$ such that $\Omega _{S/k}$ is generated by at most $\dim _ x X$ elements. Let $K/k$ be an algebraically closed field extension such that there exists a $k$-algebra map $\psi : \kappa (\mathfrak q) \to K$. Consider $S_ K = K \otimes _ k S$. Let $\mathfrak m \subset S_ K$ be the maximal ideal corresponding to the surjection $\xymatrix{ S_ K = K \otimes _ k S \ar[r] & K \otimes _ k \kappa (\mathfrak q) \ar[r]^-{\text{id}_ K \otimes \psi } & K. }$ Note that $\mathfrak m \cap S = \mathfrak q$, in other words $\mathfrak m$ lies over $\mathfrak q$. By Lemma 10.116.6 the dimension of $X_ K = \mathop{\mathrm{Spec}}(S_ K)$ at the point corresponding to $\mathfrak m$ is $\dim _ x X$. By Lemma 10.114.6 this is equal to $\dim ((S_ K)_{\mathfrak m})$. By Lemma 10.131.12 the module of differentials of $S_ K$ over $K$ is the base change of $\Omega _{S/k}$, hence also generated by at most $\dim _ x X = \dim ((S_ K)_{\mathfrak m})$ elements. By Lemma 10.140.2 we see that $S_ K$ is smooth at $\mathfrak m$ over $K$. By Lemma 10.137.18 this implies that $S$ is smooth at $\mathfrak q$ over $k$. This proves (1). Moreover, we know by Lemma 10.140.2 that the local ring $(S_ K)_{\mathfrak m}$ is regular. Since $S_{\mathfrak q} \to (S_ K)_{\mathfrak m}$ is flat we conclude from Lemma 10.110.9 that $S_{\mathfrak q}$ is regular. $\square$ The following lemma can be significantly generalized (in several different ways). Lemma 10.140.4. Let $k$ be a field. Let $R$ be a Noetherian local ring containing $k$. Assume that the residue field $\kappa = R/\mathfrak m$ is a finitely generated separable extension of $k$. Then the map $\text{d} : \mathfrak m/\mathfrak m^2 \longrightarrow \Omega _{R/k} \otimes _ R \kappa (\mathfrak m)$ is injective. Proof. We may replace $R$ by $R/\mathfrak m^2$. Hence we may assume that $\mathfrak m^2 = 0$. By assumption we may write $\kappa = k(\overline{x}_1, \ldots , \overline{x}_ r, \overline{y})$ where $\overline{x}_1, \ldots , \overline{x}_ r$ is a transcendence basis of $\kappa$ over $k$ and $\overline{y}$ is separable algebraic over $k(\overline{x}_1, \ldots , \overline{x}_ r)$. Say its minimal equation is $P(\overline{y}) = 0$ with $P(T) = T^ d + \sum _{i < d} a_ iT^ i$, with $a_ i \in k(\overline{x}_1, \ldots , \overline{x}_ r)$ and $P'(\overline{y}) \not= 0$. Choose any lifts $x_ i \in R$ of the elements $\overline{x}_ i \in \kappa$. This gives a commutative diagram $\xymatrix{ R \ar[r] & \kappa \\ & k(\overline{x}_1, \ldots , \overline{x}_ r) \ar[lu]^\varphi \ar[u] }$ of $k$-algebras. We want to extend the left upwards arrow $\varphi$ to a $k$-algebra map from $\kappa$ to $R$. To do this choose any $y \in R$ lifting $\overline{y}$. To see that it defines a $k$-algebra map defined on $\kappa \cong k(\overline{x}_1, \ldots , \overline{x}_ r)[T]/(P)$ all we have to show is that we may choose $y$ such that $P^\varphi (y) = 0$. If not then we compute for $\delta \in \mathfrak m$ that $P(y + \delta ) = P(y) + P'(y)\delta$ because $\mathfrak m^2 = 0$. Since $P'(y)\delta = P'(\overline{y})\delta$ we see that we can adjust our choice as desired. This shows that $R \cong \kappa \oplus \mathfrak m$ as $k$-algebras! From a direct computation of $\Omega _{\kappa \oplus \mathfrak m/k}$ the lemma follows. $\square$ Lemma 10.140.5. Let $k$ be a field. Let $S$ be a finite type $k$-algebra. Let $\mathfrak q \subset S$ be a prime. Assume $\kappa (\mathfrak q)$ is separable over $k$. The following are equivalent: 1. The algebra $S$ is smooth at $\mathfrak q$ over $k$. 2. The ring $S_{\mathfrak q}$ is regular. Proof. Denote $R = S_{\mathfrak q}$ and denote its maximal by $\mathfrak m$ and its residue field $\kappa$. By Lemma 10.140.4 and 10.131.9 we see that there is a short exact sequence $0 \to \mathfrak m/\mathfrak m^2 \to \Omega _{R/k} \otimes _ R \kappa \to \Omega _{\kappa /k} \to 0$ Note that $\Omega _{R/k} = \Omega _{S/k, \mathfrak q}$, see Lemma 10.131.8. Moreover, since $\kappa$ is separable over $k$ we have $\dim _{\kappa } \Omega _{\kappa /k} = \text{trdeg}_ k(\kappa )$. Hence we get $\dim _{\kappa } \Omega _{R/k} \otimes _ R \kappa = \dim _\kappa \mathfrak m/\mathfrak m^2 + \text{trdeg}_ k (\kappa ) \geq \dim R + \text{trdeg}_ k (\kappa ) = \dim _{\mathfrak q} S$ (see Lemma 10.116.3 for the last equality) with equality if and only if $R$ is regular. Thus we win by applying Lemma 10.140.3. $\square$ Lemma 10.140.6. Let $R \to S$ be a $\mathbf{Q}$-algebra map. Let $f \in S$ be such that $\Omega _{S/R} = S \text{d}f \oplus C$ for some $S$-submodule $C$. Then 1. $f$ is not nilpotent, and 2. if $S$ is a Noetherian local ring, then $f$ is a nonzerodivisor in $S$. Proof. For $a \in S$ write $\text{d}(a) = \theta (a)\text{d}f + c(a)$ for some $\theta (a) \in S$ and $c(a) \in C$. Consider the $R$-derivation $S \to S$, $a \mapsto \theta (a)$. Note that $\theta (f) = 1$. If $f^ n = 0$ with $n > 1$ minimal, then $0 = \theta (f^ n) = n f^{n - 1}$ contradicting the minimality of $n$. We conclude that $f$ is not nilpotent. Suppose $fa = 0$. If $f$ is a unit then $a = 0$ and we win. Assume $f$ is not a unit. Then $0 = \theta (fa) = f\theta (a) + a$ by the Leibniz rule and hence $a \in (f)$. By induction suppose we have shown $fa = 0 \Rightarrow a \in (f^ n)$. Then writing $a = f^ nb$ we get $0 = \theta (f^{n + 1}b) = (n + 1)f^ nb + f^{n + 1}\theta (b)$. Hence $a = f^ n b = -f^{n + 1}\theta (b)/(n + 1) \in (f^{n + 1})$. Since in the Noetherian local ring $S$ we have $\bigcap (f^ n) = 0$, see Lemma 10.51.4 we win. $\square$ The following is probably quite useless in applications. Lemma 10.140.7. Let $k$ be a field of characteristic $0$. Let $S$ be a finite type $k$-algebra. Let $\mathfrak q \subset S$ be a prime. The following are equivalent: 1. The algebra $S$ is smooth at $\mathfrak q$ over $k$. 2. The $S_{\mathfrak q}$-module $\Omega _{S/k, \mathfrak q}$ is (finite) free. 3. The ring $S_{\mathfrak q}$ is regular. Proof. In characteristic zero any field extension is separable and hence the equivalence of (1) and (3) follows from Lemma 10.140.5. Also (1) implies (2) by definition of smooth algebras. Assume that $\Omega _{S/k, \mathfrak q}$ is free over $S_{\mathfrak q}$. We are going to use the notation and observations made in the proof of Lemma 10.140.5. So $R = S_{\mathfrak q}$ with maximal ideal $\mathfrak m$ and residue field $\kappa$. Our goal is to prove $R$ is regular. If $\mathfrak m/\mathfrak m^2 = 0$, then $\mathfrak m = 0$ and $R \cong \kappa$. Hence $R$ is regular and we win. If $\mathfrak m/ \mathfrak m^2 \not= 0$, then choose any $f \in \mathfrak m$ whose image in $\mathfrak m/ \mathfrak m^2$ is not zero. By Lemma 10.140.4 we see that $\text{d}f$ has nonzero image in $\Omega _{R/k}/\mathfrak m\Omega _{R/k}$. By assumption $\Omega _{R/k} = \Omega _{S/k, \mathfrak q}$ is finite free and hence by Nakayama's Lemma 10.20.1 we see that $\text{d}f$ generates a direct summand. We apply Lemma 10.140.6 to deduce that $f$ is a nonzerodivisor in $R$. Furthermore, by Lemma 10.131.9 we get an exact sequence $(f)/(f^2) \to \Omega _{R/k} \otimes _ R R/fR \to \Omega _{(R/fR)/k} \to 0$ This implies that $\Omega _{(R/fR)/k}$ is finite free as well. Hence by induction we see that $R/fR$ is a regular local ring. Since $f \in \mathfrak m$ was a nonzerodivisor we conclude that $R$ is regular, see Lemma 10.106.7. $\square$ Example 10.140.8. Lemma 10.140.7 does not hold in characteristic $p > 0$. The standard examples are the ring maps $\mathbf{F}_ p \longrightarrow \mathbf{F}_ p[x]/(x^ p)$ whose module of differentials is free but is clearly not smooth, and the ring map ($p > 2$) $\mathbf{F}_ p(t) \to \mathbf{F}_ p(t)[x, y]/(x^ p + y^2 + \alpha )$ which is not smooth at the prime $\mathfrak q = (y, x^ p + \alpha )$ but is regular. Using the material above we can characterize smoothness at the generic point in terms of field extensions. Lemma 10.140.9. Let $R \to S$ be an injective finite type ring map with $R$ and $S$ domains. Then $R \to S$ is smooth at $\mathfrak q = (0)$ if and only if the induced extension $L/K$ of fraction fields is separable. Proof. Assume $R \to S$ is smooth at $(0)$. We may replace $S$ by $S_ g$ for some nonzero $g \in S$ and assume that $R \to S$ is smooth. Then $K \to S \otimes _ R K$ is smooth (Lemma 10.137.4). Moreover, for any field extension $K'/K$ the ring map $K' \to S \otimes _ R K'$ is smooth as well. Hence $S \otimes _ R K'$ is a regular ring by Lemma 10.140.3, in particular reduced. It follows that $S \otimes _ R K$ is a geometrically reduced over $K$. Hence $L$ is geometrically reduced over $K$, see Lemma 10.43.3. Hence $L/K$ is separable by Lemma 10.44.1. Conversely, assume that $L/K$ is separable. We may assume $R \to S$ is of finite presentation, see Lemma 10.30.1. It suffices to prove that $K \to S \otimes _ R K$ is smooth at $(0)$, see Lemma 10.137.18. This follows from Lemma 10.140.5, the fact that a field is a regular ring, and the assumption that $L/K$ is separable. $\square$ In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
• Create Account ### #ActualSimonForsman Posted 17 July 2012 - 05:28 AM So I am going to learn a 3D modeling program for the UDK. I am wondering which program would be the best to go with. Not just for simple models but like intense ones like extreme detail in characters, airplanes, guns, etc. What do you think and why? The tool doesn't really affect the quality of the end result when it comes to modelling (You can type out perfect models in notepad using an ascii format if you spend enough time on it), 3dsmax is $3500 for a commercial license, Blender is$0 , If you're a student you can get 3dsmax for $0 aswell but can't use that version commercially. The main advantages you get with 3dsmax is: Higher productivity (You don't have to save that many hours for the$3500 to pay off and some things are alot easier with 3dsmax(I havn't been able to test 3dsmax for several years so i don't know how it stands now, but even 6 year old versions of 3dsmax are easier to rig and animate models with than blender is)) Better rendering and post-processing of rendered images. (Great for pre-rendered cutscenes, irrelevant for game models) If you're on a tight budget i think you should consider putting your money on photoshop first as i think it will give you the highest productivity gain per $spent. (Allthough it depends on what kind of work you do with it) (For textures its primarily the content aware brushes that simplify things for you) ### #1SimonForsman Posted 17 July 2012 - 01:30 AM So I am going to learn a 3D modeling program for the UDK. I am wondering which program would be the best to go with. Not just for simple models but like intense ones like extreme detail in characters, airplanes, guns, etc. What do you think and why? The tool doesn't really affect the quality of the end result when it comes to modelling (You can type out perfect models in notepad using an ascii format if you spend enough time on it), 3dsmax is$3500 for a commercial license, Blender is $0 , If you're a student you can get 3dsmax for$0 aswell but can't use that version commercially. The main advantages you get with 3dsmax is: Higher productivity (You don't have to save that many hours for the $3500 to pay off and some things are alot easier with 3dsmax(I havn't been able to test 3dsmax for several years so i don't know how it stands now, but even 6 year old versions of 3dsmax are easier to rig and animate models with than blender is)) Better rendering and post-processing of rendered images. (Great for pre-rendered cutscenes, irrelevant for game models) If you're on a tight budget i think you should consider putting your money on photoshop first as i think it will give you the highest productivity gain per$ spent. (Allthough it depends on what kind of work you do with it) PARTNERS
# SFML position problem when drawing a CircleShape I am not sure but I think it's a problem with Windows 10 borderless windows The problem is when I draw the circle its always drawn with a little offset from where it should be. In the top of the window the shape is drawn above the mouse, in the bottom the shape is drawn under the mouse and so for each side of the window This is my code: #include "SFML/Graphics.hpp" #include <stdio.h> int main(int argc, char **argv){ sf::RenderWindow window(sf::VideoMode(800, 600), "SFML Application"); window.setFramerateLimit(60); sf::RenderTexture renderTexture; renderTexture.create(800,600); renderTexture.setSmooth(true); renderTexture.clear(sf::Color(255,255,255,255)); renderTexture.display(); int wpoint=10; sf::CircleShape point(wpoint,50); point.setFillColor(sf::Color(255, 0, 0, 128)); while(1){ sf::Event event; while (window.pollEvent(event)){ if (event.type==sf::Event::MouseMoved && sf::Mouse::isButtonPressed(sf::Mouse::Left)){ //printf("%d\n",sf::Mouse::isButtonPressed(sf::Mouse::Left)); //point.setPosition(event.mouseMove.x-(wpoint/2),event.mouseMove.y-(wpoint/2)); //point.setPosition(event.mouseMove.x,event.mouseMove.y); //auto pos=sf::Mouse::getPosition(); auto poswindow=window.getPosition(); printf("%d,%d\n",poswindow.x,poswindow.y); //This prints -8,0 when I position the window on the left-top corner (0,0)!! //auto pos=sf::Mouse::getPosition(window); auto pos=window.mapPixelToCoords(sf::Mouse::getPosition(window)); point.setPosition(pos.x,pos.y); renderTexture.draw(point, sf::BlendAlpha); } if (event.type==sf::Event::Closed || event.type==sf::Event::KeyPressed && event.key.code==sf::Keyboard::Escape) return 0; } sf::Sprite canvas(renderTexture.getTexture()); // draw window.clear(); window.draw(canvas); window.display(); } return 0; } One thing I noticed is when I tried to draw the circle on (10,10) it was painted under the title bar. I read somewhere that it's probably a thing that only happens with some Intel drivers, I just hope that the users don't have it or I will be in troubles. It's weird because I have been working with the Winapi for a long time and the mouse coordinates have always been correct. Do you have the same problem if you try to draw directly to the RenderWindow (window.draw(point, sf::BlendAlpha)), instead of drawing to a RenderTexture and then to a sprite? Because doing so is essentially doubling the buffer size for the draw unnecessarily. I would recommend you simply draw to the window for performance reasons. Finally, keep in mind that any sf::Shape has its origin set by default to (0, 0). While this doesn't explain how the offset changes depending on where you are, it could be worthwhile, if I understand your goal properly, to set the origin to the center of the circle. You can use point.setOrigin(point.getRadius(), point.getRadius()) to do that.
# Tag Info 4 No, confinement means that such a state cannot exist, or more precisely, it cannot have a finite energy/mass. If such a colored state had a finite energy, it would mean that far enough from the colored particle, the quantum fields very closely approach the vacuum state. But if that's so, you could always combine two such objects of opposite colors. The total ... 1 Below that regime, we have the strongly coupled regime where perturbative approaches fail, due to the large value of coupling constant $\alpha_S$. The same is related to the QCD $\beta$ function via this relation. The behavior as a function of the energy scale looks roughly like this. Any perturbation expansion in this regime would give a divergent series, ... 1 An other way to see the argument of the answer of @fuenfundachtzig , is that, concerning $SU(3)$ representations, there is an equivalence between $(3*3)_\text{antisymmetrised}$ representation ("red * green") and $3^*$ representation ("antiblue"). Why ? Well, thanks to the completely anti-symmetric Levi_Civita symbol. Using objects upon which act the ... 1 It works if you assign colors like this: one red up, one green up, down is blue, $X$ takes red and green which are equivalent to antiblue ("yellow"), thus color is conserved. I didn't take into account the last fact which explains my confusion. 2 At the end of the day, the diagram shows the distribution in angular differences between pairs of charged particles produced in the collisions. $\Delta \phi = \phi_1-\phi_2$ is the difference in azimuthal angles $\phi$ of those pairs. $\Delta \eta = \eta_1 - \eta_2$ is the difference in pseudorapidities $\eta$ of those pairs. The $\phi$ and $\eta$ ... 0 i'm not sure what kind of answer you want, but let me try: The generalization of electromagnetic forces is what is generally known as a gauge theory, which possesses a symmetry group called the gauge group $\mathcal{G}$. Electromagnetism (EM) is the simplest of gauge theories since it has the simplest gauge group that yields non-trivial physics. In EM, one ... Top 50 recent answers are included
# How much RAM is needed for simulation studies using R? I run several simulation studies with R using a MacBook Pro, which has 8 GB of RAM. Unfortunately, some of the simulations cannot be done due to limited memory. My question is how much RAM is needed for large simulations? (For example, each data set has 10000 subjects and needs to generate 1000 datasets). Is 64 GB of RAM big enough for large R simulations? - How long is a piece of string? – Gavin Simpson Jul 7 '11 at 18:01 Slightly more helpful... Why do you need all 1000 data sets in memory at once? Draw a simulation do what you want and throw the simulated data away. If you write the simulation as a reproducible script then you can always regenerate the exact analysis. If you might need the simulated data, draw a single simulation, write it out to disk, discard the one in RAM and draw another, etc. What you describe doesn't seem large... Perhaps more information would help people comment. Also this is OT for this forum. StackOverflow would be better but you need to improve the Q first. – Gavin Simpson Jul 7 '11 at 18:03 I think Gavin actually answered your question, indeed write each of the generated data set as .csv or whatever format you want and rm() it at once. – Dmitrij Celov Jul 8 '11 at 0:03 Hi thanks to both of you first. When I run a simulation which has 1000 dataset and each data set has 30000 subjects. Soon, the R console shows: Error: cannot allocate vector of size 686.6 Mb. – Tu.2 Jul 8 '11 at 4:19 You might get better suggestions if you actually post some code. – curious_cat Mar 1 '13 at 14:35 Why do you need all 1000 data sets in memory at once? If I were exhausting memory on a lengthy simulation, I'd draw a single simulation, do what I wanted and then throw the simulated data away before moving on to the next simulation If you write the simulation as a reproducible script then you can always regenerate the exact analysis. If you might need the simulated data again, draw a single simulation, write it out to disk (e.g. save() or write.csv()), discard the one in RAM and draw another, etc. What you describe doesn't seem overly large but it will depend, inter alia, on what data types you have in the data sets and what modelling functions you are using. The only way anyone can tell if 64GB will be sufficient is you - profile a single simulation and ascertain the memory usage, which should scale linearly over the number of simulations you want to hold in RAM. However, given the size of the data you are generating/using, 8GB should be sufficient if you don't hold all the simulated data sets in memory at once. - +1 for moving comments :) – Brandon Bertelsen Sep 3 '11 at 19:56 Your question is fairly complex, because the answer is: as many as a simulation needs. It depends entirely on the size of your data set, and how you are coding things. I have, for example, simulations that can be run on computers with megabytes of available RAM - and one which crushed a cluster node with 96 GB of memory. R keeps all its data in memory, which means as you start having huge data sets - or large collections of data sets - you're going to top out your RAM. This system has the benefit of being extremely fast, but limited by RAM. If you're running out of memory resources, recode your project to save these data sets to a file, clear them from R -using rm() - and then open them back up when you need them. This will slow down your code somewhat, but is way cheaper than buying new RAM, especially as a Macbook Pro is going to top out at 16 GB anyway. - The RAM shouldn't be an issue since you usually have unlimited virtual memory. The error "cannot allocate vector of size" is most probably due to the limited address space of your system. A 32-bit system can usually not address more than 4 Gb of memory and I've found that 64-bit seems to work better for R. I don't use Mac but you can check if you're running a 64-bit system in the about menu and then install the R version where you actively choose the 64-bit version. ## Update: You could also try just try telling R to allocate more memory: # Current memory limit memory.limit() # Set limit to max 32-bit # Limit must be less than: 2^32/2^20 memory.limit(4095) # Check that it did what you wanted memory.limit() Here's by the way part of the help on memory.size/limit: If 32-bit R is run on most 64-bit versions of Windows the maximum value of obtainable memory is just under 4Gb. For a 64-bit versions of R under 64-bit Windows the limit is currently 8Tb. - memory.limit is Windows specific. Also, swapping your sim data in and out of the pagefile is a sure way to slow down the whole endeavour. – rpierce Dec 11 '14 at 5:20
# Large size of plot files generated using Plots.jl with GR backend The sizes of the plot files generated by GR are sometimes ridiculously large. Changing dpi does not help significantly. After some experimenting, it seems it is caused by LaTeX text. For example gr() xi = 0:0.1:10 plot(xi,sin.(xi), xlabel=L"\sin\cos x_y", ylabel=L"\frac{42}{42}“, label=L”\mathcal N") savefig(“fig.pdf”) generates plot which has 1.94 MB. I have checked this behaviour on 2 computers. It is also the same when not using LaTeXStrings, like ylabel = “$\sin$” Does anyone know how to fix this problem? LaTeX text is internally created using dvipng - so, this is the normal behaviour, as all math text is displayed using images. Does anyone know how to fix this problem? not really, but pyplot() produces small files. Need to replace \mathcal, though. This is a reasonable advice, unfortunately after upgrading to Julia 1.0 I cannot make pyplot() working. Frankly, I would rather switch my project to Python instead of wasting time making Python work in Julia. Maybe a hybrid: write results to the file and plot them in Python. Yes, I mean, who would even consider using a programming language in which one of the plotting packages causes saved png files to be large? 1 Like If you really want graphics with LaTeX, then this is perhaps a nice idea: https://kristofferc.github.io/PGFPlotsX.jl/latest/examples/gallery.html#manual_gallery-1 2 Likes Thank you, I know it, but I haven’t checked it in detail, maybe I should. The devil is in the details, I often need to tweak plots a lot, so the control options are crucial. I have hoped that Plots.jl could be easy to use and strong enough, but I am becoming increasingly doubtful. I tried doing what I can in Plots.jl and the rest in PyPlot.jl, but some options from matplotlib did not work. Then it stopped working althogether. For matplotlib at least I know it is (mostly) working and you really can change a lot… If you really need all the control options, just use PyPlot (without Plots). That should give essentially the same experience as you have with matplotlib in Python. Plots is not as customisable. I have tried. Basic stuff did work, but using some more advanced options resulted in errors. If you want to get help, your best route is probably creating an MWE and reporting said errors.
It is currently 10 Dec 2017, 19:21 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # If rs ≠ 0, is t an integer? (1) t = 3r – 2s (2) r = s Author Message TAGS: ### Hide Tags Math Expert Joined: 02 Sep 2009 Posts: 42529 Kudos [?]: 135163 [0], given: 12664 If rs ≠ 0, is t an integer? (1) t = 3r – 2s (2) r = s [#permalink] ### Show Tags 12 Oct 2017, 04:15 00:00 Difficulty: 15% (low) Question Stats: 87% (00:35) correct 13% (00:18) wrong based on 60 sessions ### HideShow timer Statistics If rs ≠ 0, is t an integer? (1) t = 3r – 2s (2) r = s [Reveal] Spoiler: OA _________________ Kudos [?]: 135163 [0], given: 12664 Director Joined: 25 Feb 2013 Posts: 613 Kudos [?]: 302 [0], given: 38 Location: India GPA: 3.82 If rs ≠ 0, is t an integer? (1) t = 3r – 2s (2) r = s [#permalink] ### Show Tags 12 Oct 2017, 08:12 Bunuel wrote: If rs ≠ 0, is t an integer? (1) t = 3r – 2s (2) r = s Statement 1: if $$r$$ & $$s$$ are integers then $$t$$ is integer, if not then $$t$$ may or may not be an integer. hence insufficient Statement 2: no information about $$t$$. Hence insufficient Combining 1 & 2 we get $$t=3r-2r=r$$, if $$r$$ is an integer then $$t$$ is an integer if not then $$t$$ is not an integer. Hence insufficient Option E Kudos [?]: 302 [0], given: 38 VP Joined: 26 Mar 2013 Posts: 1288 Kudos [?]: 302 [0], given: 166 Re: If rs ≠ 0, is t an integer? (1) t = 3r – 2s (2) r = s [#permalink] ### Show Tags 12 Oct 2017, 23:58 Bunuel wrote: If rs ≠ 0, is t an integer? (1) t = 3r – 2s (2) r = s (2) r = s Insufficient (1) t = 3r – 2s Let r = s = 1....t = 1.............Answer is Yes Let r = s = 1/2..t = 1/2.............Answer is No Insufficient Combine 1 & 2 Insufficient Kudos [?]: 302 [0], given: 166 Target Test Prep Representative Affiliations: Target Test Prep Joined: 04 Mar 2011 Posts: 1771 Kudos [?]: 971 [0], given: 5 Re: If rs ≠ 0, is t an integer? (1) t = 3r – 2s (2) r = s [#permalink] ### Show Tags 27 Nov 2017, 19:16 Bunuel wrote: If rs ≠ 0, is t an integer? (1) t = 3r – 2s (2) r = s Statement One Alone: t = 3r – 2s If r = 1 and s = 1, then 3r - 2s = 1 and t is an integer; however, if r = 1/2 and s = 1, then 3r - 2s = -0.5 and t is not an integer. Statement one alone is not sufficient to answer the question. Statement Two Alone: r = s Since statement two does not provide any information regarding t, statement two alone is not sufficient to answer the question. Statements One and Two Together: Using the statements together, we have t = 3r - 2r, or t = r. However, since we do not know whether r is an integer, we cannot determine whether t is an integer. _________________ Jeffery Miller GMAT Quant Self-Study Course 500+ lessons 3000+ practice problems 800+ HD solutions Kudos [?]: 971 [0], given: 5 Re: If rs ≠ 0, is t an integer? (1) t = 3r – 2s (2) r = s   [#permalink] 27 Nov 2017, 19:16 Display posts from previous: Sort by
treslagosnv 2022-02-01 How do we know that the quadratic $3{y}^{2}-y-12$ has real root? (a) Notice the quadratic cannot be factored into the product of two binomials with integer cofficients. Does this mean that the quadratic does not have any real roots? (b) If the answer to part (a) is "no", then explain how we know that the quadratic does have real roots. (c) Suppose the quadratic has roots . Find a quadratic with roots . Amari Larsen Expert (a) This does not mean the quadratic has no real roots. A clearer example of this is ${y}^{2}-2=\left(y+\sqrt{2}\right)\left(y-\sqrt{2}\right)$. (b) One way to know is to observe that plugging in yields $3\cdot {0}^{2}-0-12=-12$, $3\cdot {3}^{2}-3-12=12$. so somewhere between 0 and 3 the quadratic must equal 0. Another way to know is by computing the discriminant, which is $\mathrm{\Delta }={b}^{2}-4ac={\left(-1\right)}^{2}-4\cdot 3\cdot \left(-12\right)=145.$ The quadratic has a real root because the discriminant is nonnegative. (c) If r and s are roots of $3{y}^{2}-y-12,$ then it follows that are roots of $3{\left(y-2\right)}^{2}-\left(y-2\right)-12$ which by a little bit of algebra simplifies to $3{y}^{2}-13y+2$. Nevaeh Jensen Expert Using Vieta theorem, $r+s=\frac{1}{3},r\cdot s=-4$. Thus, $\left(r+2\right)\left(s+2\right)=rs+2\left(r+s\right)+4=-4+\frac{2}{3}+4=\frac{2}{3}$ and $\left(r+2\right)+\left(s+2\right)=r+s+4=\frac{13}{3}$ Applying Vieta in reverse, we can get a solution for c): $3{y}^{2}-13y+2=0$ To show that there is a real root, rewrite the equation as $3{\left(x-\frac{1}{6}\right)}^{2}=12+\frac{1}{12}$ and use that any non-negative real can be written as a square of some real. Or you can use the hint provided and apply Intermediate Value Theorem. Do you have a similar question?
NULL Countries | Regions Countries | Regions Article Types Article Types Year Volume Issue Pages IMR Press / FBL / Volume 27 / Issue 6 / DOI: 10.31083/j.fbl2706188 Open Access Original Research An Inverse QSAR Method Based on Linear Regression and Integer Programming Show Less 1 Department of Applied Mathematics and Physics, Kyoto University, 606-8501 Kyoto, Japan 2 Graduate School of Advanced Integrated Studies in Human Survavibility (Shishu-Kan), Kyoto University, 606-8306 Kyoto, Japan 3 Bioinformatics Center, Institute for Chemical Research, Kyoto University, 611-0011 Uji, Japan *Correspondence: [email protected] (Jianshen Zhu) These authors contributed equally. Academic Editors: Agnieszka Kaczor and Graham Pawelec Front. Biosci. (Landmark Ed) 2022 , 27(6), 188; https://doi.org/10.31083/j.fbl2706188 Submitted: 16 February 2022 | Revised: 28 March 2022 | Accepted: 7 April 2022 | Published: 10 June 2022 This is an open access article under the CC BY 4.0 license. Abstract Background: Drug design is one of the important applications of biological science. Extensive studies have been done on computer-aided drug design based on inverse quantitative structure activity relationship (inverse QSAR), which is to infer chemical compounds from given chemical activities and constraints. However, exact or optimal solutions are not guaranteed in most of the existing methods. Method: Recently a novel framework based on artificial neural networks (ANNs) and mixed integer linear programming (MILP) has been proposed for designing chemical structures. This framework consists of two phases: an ANN is used to construct a prediction function, and then an MILP formulated on the trained ANN and a graph search algorithm are used to infer desired chemical structures. In this paper, we use linear regression instead of ANNs to construct a prediction function. For this, we derive a novel MILP formulation that simulates the computation process of a prediction function by linear regression. Results: For the first phase, we performed computational experiments using 18 chemical properties, and the proposed method achieved good prediction accuracy for a relatively large number of properties, in comparison with ANNs in our previous work. For the second phase, we performed computational experiments on five chemical properties, and the method could infer chemical structures with around up to 50 non-hydrogen atoms. Conclusions: Combination of linear regression and integer programming is a potentially useful approach to computational molecular design. Keywords machine learning linear regression integer programming chemoinformatics materials informatics QSAR/QSPR molecular design 1. Introduction Analysis of the activities and properties of chemical compounds is important not only for chemical science but also for biological science because chemical compounds play important roles in metabolic and many other pathways. Computational prediction of chemical activities from their structural data has been studied for several decades under the name of quantitative structure activity relationship (QSAR) [1]. In addition to traditional regression-based methods, various machine learning methods have been applied to QSAR [2, 3]. Recently, neural networks and deep-learning technologies have extensively been applied to QSAR [4]. Inference of chemical structures with desired chemical activities under some constraints is also important because of its potential applications to drug design, and the problem has been studied under the name of inverse quantitative structure activity relationship (inverse QSAR). Chemical compounds are commonly represented by undirected graphs called chemical graphs in which vertices and edges correspond to atoms and chemical bonds, respectively. Due to the difficulty of directly handling chemical graphs in both QSAR and inverse QSAR, chemical compounds are usually represented as vectors of integer or real numbers, which are called descriptors in chemoinformatics and correspond to feature vectors in machine learning. In inverse QSAR, one major approach is to first infer feature vectors from given chemical activities and constraints, and then reconstruct chemical structures from these feature vectors [5, 6, 7]. However, the reconstruction itself is not an easy task because the number of possible chemical graphs is huge. For example, the number of chemical graphs with up to 30 atoms (vertices) C, N, O, and S may exceed $10^{60}$ [8]. Indeed, the problem to infer a chemical graph from a given feature vector is known as a computationally difficult problem (precisely, NP-hard) except for some simple cases [9]. Most existing methods for inverse QSAR do not guarantee exact or optimal solutions due to these inherent difficulties. Recently, artificial neural networks (ANNs), in particular, graph convolutional networks [10] are extensively used for inverse QSAR. For example, recurrent neural networks [11, 12], variational autoencoders [13], grammar variational autoencoders [14], invertible flow models [15, 16], and generative adversarial networks [17] have been applied. However, these methods do not yet guarantee exact or optimal solutions. Akutsu and Nagamochi [18] proved that the computation process of a given ANN can be simulated as a mixed integer linear programming (MILP). Based on this result, a novel framework for inferring a set of chemical graphs has been developed [19, 20], which is illustrated in Fig. 1. This framework consists of two phases. In the first phase, it constructs a prediction function and in the second phase, it infers a chemical graph. There are three stages in the first phase of the framework. In Stage 1, a chemical property $\pi$ and a class $\mathcal{G}$ of graphs are selected, and a property function $a$ is defined so that $a(\mathbb{C})$ is the value of $\pi$ for a compound $\mathbb{C}\in\mathcal{G}$. Then we collect a data set $D_{\pi}$ of chemical graphs in $\mathcal{G}$ such that $a(\mathbb{C})$ is available for every $\mathbb{C}\in D_{\pi}$. In Stage 2, a feature function $f:\mathcal{G}\to\mathbb{R}^{K}$ for a positive integer $K$ is introduced. In Stage 3, a prediction function $\eta$ is constructed with an ANN $\mathcal{N}$ that, given a vector $x\in\mathbb{R}^{K}$, returns a value $y=\eta(x)\in\mathbb{R}$ so that $\eta(f(\mathbb{C}))$ serves as a predicted value to $a(\mathbb{C})$ of $\pi$ for each $\mathbb{C}\in D_{\pi}$. Given a target chemical value $y^{*}$, the second phase consists the next two phases to infer chemical graphs $\mathbb{C}^{*}$ with $\eta\left(f\left(\mathbb{C}^{*}\right)\right)=y^{*}$. A feature function $f$ and a prediction function $\eta$ are obtained in the first phase, and we call an additional constraint on the substructures of target chemical graphs a topological specification. In Stage 4, the following two MILP formulations are designed: - MILP $\mathcal{M}(x,y;\mathcal{C}_{1})$ with a set $\mathcal{C}_{1}$ of linear constraints on variables $x$ and $y$ (and some other auxiliary variables) simulates the process of computing $y:=\eta(x)$ from a vector $x$; and - MILP $\mathcal{M}(g,x;\mathcal{C}_{2})$ with a set $\mathcal{C}_{2}$ of linear constraints on variable $x$ and a variable vector $g$ that represents a chemical graph $\mathbb{C}$ (and some other auxiliary variables) simulates the process of computing $x:=f(\mathbb{C})$ from a chemical graph $\mathbb{C}$and chooses a chemical graph $\mathbb{C}$ that satisfies the given topological specification $\sigma$. Fig. 1. An illustration of a framework for inferring a set of chemical graphs $\mathbb{C}^{*}$. Given a target value $y^{*}\in\mathbb{R}$, the combined MILP $\mathcal{M}(g,x,y;\mathcal{C}_{1},\mathcal{C}_{2})$ is solved to find a feature vector $x^{*}\in\mathbb{R}^{K}$ and a chemical graph $\mathbb{C}^{\dagger}$ that satisfies the specification $\sigma$ such that $f(\mathbb{C}^{\dagger})=x^{*}$ and $\eta(x^{*})=y^{*}$ (where if the MILP is infeasible then this suggests that such a desired chemical graph does not exist). In Stage 5, by using the inferred chemical graph $\mathbb{C}^{\dagger}$, we generate other chemical graphs $\mathbb{C}^{*}$ such that $\eta(f(\mathbb{C}^{*}))=y^{*}$. Stage 4 MILP formulations to infer chemical graphs with cycle index 0, 1 and 2 are proposed in [20, 21, 22, 23], respectively, but no sophisticated topological specification was available yet. A restricted class of acyclic graphs that is characterized by an integer ${\rho}$, called a “branch-parameter” is introduced by Azam et al. [21]. This restricted class still covers most of the acyclic chemical compounds in the database. Akutsu and Nagamochi [24] extended the idea to define a restricted class of cyclic graphs, called “${\rho}$-lean cyclic graphs” and introduced a set of flexible rules for describing a topological specification. Tanaka et al. [25] used a decision tree instead of ANNs to construct a prediction function $\eta$ in Stage 3 in the framework and an MILP $\mathcal{M}(x,y;\mathcal{C}_{1})$ that simulates the computation process of a decision tree. Recently Shi et al. [26] proposed a new model to deal with an arbitrary graph in the framework called a two-layered model to represent the feature of a chemical graph. Also, the set of rules for describing a topological specification in [27] was refined so that a prescribed structure can be included in both of the acyclic and cyclic parts of a chemical graph $\mathbb{C}$. In this model, a chemical graph $\mathbb{C}$ with an integer ${\rho}\geq 1$, we consider two parts, namely, the exterior and the interior of the hydrogen-suppressed graph $\langle\mathbb{C}\rangle$ that is obtained by removing hydrogen from $\mathbb{C}$. The exterior consists of maximal acyclic induced subgraphs with height at most ${\rho}$ in $\langle\mathbb{C}\rangle$ and the interior is the connected subgraph of $\langle\mathbb{C}\rangle$ obtained by excluding the exterior. Shi et al. [26] also defined a feature vector $f(\mathbb{C})$ of a chemical graph $\mathbb{C}$ as a combination of the frequency of adjacent atom pairs in the interior and the frequency of chemical acyclic graphs among the set of chemical rooted trees $T_{u}$ rooted at interior-vertices $u$. Recently, Tanaka et al. [25] extended the model in order to directly treat a chemical graph with hydrogens so that the feature of the exterior can be represented with more variety of chemical rooted trees. The contribution of this paper is as follows. Firstly, we make a slight modification to a model of chemical graphs proposed by Tanaka et al. [25] so that we can treat a chemical element with multi-valence such as sulfur S and a chemical graph with cations and anions. Then, we consider the prediction function. One of the most important factors in the framework is the quality of a prediction function $\eta$ constructed in Stage 3. Also, overfitting is pointed out as a major issue in ANN-based approaches for QSAR because many parameters need to be optimized for ANNs [4]. In this paper, to construct a prediction function in Stage 3, we use linear regression instead of ANNs or decision trees. A learning algorithm for an ANN may not find a set of weights and biases that minimizes the error function since the algorithm simply iterates modification of the current weights and biases until it terminates at a local minimum value, and linear regression is much simpler than ANNs and decision trees and thereby we regard the performance of a prediction function by linear regression as the basis for other more sophisticated machine learning methods. In this paper, we derive an MILP formulation $\mathcal{M}(x,y;\mathcal{C}_{1})$ in Stage 4 to simulate the computation process of a prediction function by linear regression. For an MILP formulation $\mathcal{M}(g,x;\mathcal{C}_{2})$ that represents a feature function $f$ and a specification $\sigma$ in Stage 4, we can use the same formulation proposed by Tanaka et al. [25] with a slight modification (the detail of the MILP $\mathcal{M}(g,x;\mathcal{C}_{2})$ can be found in Supplementary Material). In Stage 5, we can also use the dynamic programming algorithm due to Tanaka et al. [25] with a slight modification to generate target chemical graphs $\mathbb{C}^{*}$ and the details are omitted in this paper. We implemented the framework based on the refined two-layered model and a prediction function by linear regression. The results of our computational experiments reveal a set of chemical properties to which a prediction function constructed by linear regression on our feature function performs well, in comparison with ANNs in our previous work. We also observe that chemical graphs with up to 50 non-hydrogen atoms can be inferred by the proposed method. The paper is organized as follows. Section 2 introduces some notions and terminologies on graphs, modeling of chemical compounds and our choice of descriptors. Section 3 describes our modification to the two-layered model. Section 4 reviews the idea of linear regression and formulates an MILP $\mathcal{M}(x,y;\mathcal{C}_{1})$ that simulates the computing process of a prediction function constructed by linear regression. Section 5 reports the results of some computational experiments conducted for 18 chemical properties such as vapor density and optical rotation. Section 6 gives conclusions with future work. Some technical details are given in Supplementary Material. 2. Preliminary In this section, we review some notions and terminologies related to graphs, modeling of chemical compounds introduced by Tanaka et al. [25] and our choice of descriptors. Let $\mathbb{R}$, $\mathbb{R}_{+}$, $\mathbb{Z}$ and $\mathbb{Z}_{+}$ denote the sets of reals, non-negative reals, integers and non-negative integers, respectively. For two integers $a$ and $b$, let $[a,b]$ denote the set of integers $i$ with $a\leq i\leq b$. Graph Given a graph $G$, let $V(G)$ and $E(G)$ denote the sets of vertices and edges, respectively. For a subset $V^{\prime}\subseteq V(G)$ (resp., $E^{\prime}\subseteq E(G))$ of a graph $G$, let $G-V^{\prime}$ (resp., $G-E^{\prime}$) denote the graph obtained from $G$ by removing the vertices in $V^{\prime}$ (resp., the edges in $E^{\prime}$), where we remove all edges incident to a vertex in $V^{\prime}$ in $G-V^{\prime}$. An edge subset $E^{\prime}\subseteq E(G)$ in a connected graph $G$ is called separating (resp., non-separating) if $G-E^{\prime}$ becomes disconnected (resp., $G-E^{\prime}$ remains connected). The rank$\mathrm{r}(G)$ of a connected graph $G$ is defined to be the minimum $|F|$ of an edge subset $F\subseteq E(G)$ such that $G-F$ contains no cycle, where $\mathrm{r}(G)=|E(G)|-|V(G)|+1$. Observe that $\mathrm{r}(G-E^{\prime})=\mathrm{r}(G)-|E^{\prime}|$ holds for any non-separating edge subset $E^{\prime}\subseteq E(G)$. An edge $e=u_{1}u_{2}\in E(G)$ in a connected graph $G$ is called a bridge if $\{e\}$ is separating, i.e., $G-e$ consists of two connected graphs $G_{i}$ containing vertex $u_{i}$, $i=1,2$. For a connected cyclic graph $G$, an edge $e$ is called a core-edge if it is in a cycle of $G$ or is a bridge $e=u_{1}u_{2}$ such that each of the connected graphs $G_{i}$, $i=1,2$, of $G-e$ contains a cycle. A vertex incident to a core-edge is called a core-vertex of $G$. A path with two end-vertices $u$ and $v$ is called a $u,v$-path. A vertex designated in a graph $G$ is called a root. In this paper, we designate at most two vertices as roots, and denote by $\mathrm{Rt}(G)$ the set of roots of $G$. We call a graph $G$rooted (resp., bi-rooted) if $|\mathrm{Rt}(G)|=1$ (resp., $|\mathrm{Rt}(G)|=2$), where we call $G$unrooted if $\mathrm{Rt}(G)=\emptyset$. For a graph $G$ possibly with roots, a leaf-vertex is defined to be a non-root vertex $v\in V(G)\setminus\mathrm{Rt}(G)$ with degree 1. Call the edge $uv$ incident to a leaf vertex $v$ a leaf-edge, and denote by $V_{\text{leaf }}(G)$ and $E_{\text{leaf }}(G)$ the sets of leaf-vertices and leaf-edges in $G$, respectively. For a graph or a rooted graph $G$, we define graphs $G_{i},i\in\mathbb{Z}_{+}$ obtained from $G$ by removing the set of leaf-vertices $i$ times so that (1)$G_{0}:=G;\quad G_{i+1}:=G_{i}-V_{\text{leaf }}\left(G_{i}\right),$ where we call a vertex $v\in V_{\text{leaf }}\left(G_{k}\right)$ a leaf $k$-branch and we say that a vertex $v\in V_{\text{leaf }}\left(G_{k}\right)$ has $\text{ height ht }(v)=k$ in $G$. The $\text{ height ht }(T)$ of a rooted tree $T$ is defined to be the maximum of $\operatorname{ht}(v)$ of a vertex $v\in V(T)$. For an integer $k\geq 0$, we call a rooted tree $T$ $k$-lean if $T$ has at most one leaf $k$-branch. For an unrooted cyclic graph $G$, we regard that the set of non-core-edges in $G$ induces a collection $\mathcal{T}$ of trees each of which is rooted at a core-vertex, where we call $G$ $k$-lean if each of the rooted trees in $\mathcal{T}$ is $k$-lean. Modeling of Chemical Compounds We introduce a set of chemical elements such as H (hydrogen), C (carbon), O (oxygen), N (nitrogen) and so on to represent a chemical compound. To distinguish a chemical element $a$ with multiple valences such as S (sulfur), we denote a chemical element $\mathrm{a}$ with a valence $i$ by $\mathrm{a}_{(i)}$, where we do not use such a suffix $(i)$ for a chemical element $\mathrm{a}$ with a unique valence. Let $\Lambda$ be a set of chemical elements $\mathrm{a}_{(i)}$. For example, $\Lambda=\left\{\mathrm{H},\mathrm{C},\mathrm{O},\mathrm{~{}N},\mathrm{P},% \mathrm{S}_{(2)},\mathrm{S}_{(4)},\mathrm{S}_{(6)}\right\}$. Let $\text{ val }:\Lambda\rightarrow[1,6]$ be a valence function. For example, $\operatorname{val}(\mathrm{H})=1$, $\operatorname{val}(\mathrm{C})=4$, $\operatorname{val}(\mathrm{O})=2$, $\operatorname{val}(\mathrm{P})=5$, $\operatorname{val}\left(\mathrm{S}_{(2)}\right)=2$, $\operatorname{val}\left(\mathrm{S}_{(4)}\right)=4$ and $\operatorname{val}\left(\mathrm{S}_{(6)}\right)=6$. For each chemical element $\mathrm{a}\in\Lambda$, let $\mathrm{mass}(\mathrm{a})$ denote the mass of $\mathrm{a}$. To represent a chemical compound, we use a chemical graph introduced by Tanaka et al. [25], which is defined to be a tuple $\mathbb{C}=(H,\alpha,\beta)$ of a simple, connected undirected graph $H$ and functions $\alpha:V(H)\to\Lambda$ and $\beta:E(H)\to[1,3]$. The set of atoms and the set of bonds in the compound are represented by the vertex set $V(H)$ and the edge set $E(H)$, respectively. The chemical element assigned to a vertex $v\in V(H)$ is represented by $\alpha(v)$ and the bond-multiplicity between two adjacent vertices $u,v\in V(H)$ is represented by $\beta(e)$ of the edge $e=uv\in E(H)$. We say that two tuples $(H_{i},\alpha_{i},\beta_{i}),i=1,2$ are isomorphic if they admit an isomorphism $\phi$, i.e., a bijection $\phi:V(H_{1})\to V(H_{2})$ such that $uv\in E\left(H_{1}\right),\alpha_{1}(u)=\mathrm{a},\alpha_{1}(v)=\mathrm{b},% \beta_{1}(uv)=m\leftrightarrow\phi(u)\phi(v)\in E\left(H_{2}\right),\alpha_{2}% (\phi(u))=\mathrm{a},\alpha_{2}(\phi(v))=\mathrm{b},\beta_{2}(\phi(u)\phi(v))=m$. When $H_{i}$ is rooted at a vertex $r_{i},i=1,2$, $(H_{i},\alpha_{i},\beta_{i}),i=1,2$ are rooted-isomorphic (r-isomorphic) if they admit an isomorphism $\phi$ such that $\phi(r_{1})=r_{2}$. For a notational convenience, we use a function $\beta_{\mathbb{C}}:V(H)\to[0,12]$ for a chemical graph $\mathbb{C}=(H,\alpha,\beta)$ such that $\beta_{\mathbb{C}}(u)$ means the sum of bond-multiplicities of edges incident to a vertex $u$; i.e., (2)$\beta_{\mathbb{C}}(u)\triangleq\sum_{uv\in E(H)}\beta(uv)$ for each vertex $u\in V(H)$. For each vertex $u\in V(H)$, define the electron-degree $\operatorname{eledeg}_{\mathbb{C}}(u)$ to be (3)$\operatorname{eledeg}_{\mathbb{C}}(u)\triangleq\beta_{\mathbb{C}}(u)-% \operatorname{val}(\alpha(u)).$ For each vertex $u\in V(H)$, let $\deg_{\mathbb{C}}(u)$ denote the number of vertices adjacent to the vertex $u$ in $\mathbb{C}$. For a chemical graph $\mathbb{C}=(H,\alpha,\beta)$, let $V_{\mathrm{a}}(\mathbb{C})$, $\mathrm{a}\in\Lambda$ denote the set vertices $v\in V(H)$ such that $\alpha(v)=\mathrm{a}$ in $\mathbb{C}$ and define the hydrogen-suppressed chemical graph$\langle\mathbb{C}\rangle$ to be the graph obtained from $H$ by removing all the vertices $v\in V_{\mathrm{H}}(\mathbb{C})$. 3. Two-layered Model This section reviews the idea of the two-layered model introduced by Shi et al. [26], and describes our modifications to the model. Let $\mathbb{C}=(H,\alpha,\beta)$ be a chemical graph and ${\rho}\geq 1$ be an integer, which is called a branch-parameter. A two-layered model of $\mathbb{C}$ introduced by Shi et al. [26] is a partition of the hydrogen-suppressed chemical graph $\langle\mathbb{C}\rangle$ into an “interior” and an “exterior” in the following way. We call a vertex $v\in V(\langle\mathbb{C}\rangle)$ (resp., an edge $e\in E(\langle\mathbb{C}\rangle))$ of $G$ an exterior-vertex (resp., exterior-edge) if $\operatorname{ht}(v)<{\rho}$ (resp., $e$ is incident to an exterior-vertex) and denote the sets of exterior-vertices and exterior-edges by $V^{\operatorname{ex}}(\mathbb{C})$ and $E^{\mathrm{ex}}(\mathbb{C})$, respectively and denote $V^{\operatorname{int}}(\mathbb{C})=V(\langle\mathbb{C}\rangle)\setminus V^{% \operatorname{ex}}(\mathbb{C})$ and $E^{\operatorname{int}}(\mathbb{C})=E(\langle\mathbb{C}\rangle)\setminus E^{% \mathrm{ex}}(\mathbb{C})$, respectively. We call a vertex in $V^{\operatorname{int}}(\mathbb{C})$ (resp., an edge in $E^{\operatorname{int}}(\mathbb{C})$) an interior-vertex (resp., interior-edge). The set $E^{\mathrm{ex}}(\mathbb{C})$ of exterior-edges forms a collection of connected graphs each of which is regarded as a rooted tree $T$ rooted at the vertex $v\in V(T)$ with the maximum $\operatorname{ht}(v)$. Let $\mathcal{T}^{\operatorname{ex}}(\langle\mathbb{C}\rangle)$ denote the set of these chemical rooted trees in $\langle\mathbb{C}\rangle$. The interior$\mathbb{C}^{\text{int }}$ of $\mathbb{C}$ is defined to be the subgraph $(V^{\operatorname{int}}(\mathbb{C}),E^{\operatorname{int}}(\mathbb{C}))$ of $\langle\mathbb{C}\rangle$. Fig. 2 illustrates an example of a hydrogen-suppressed chemical graph $\langle\mathbb{C}\rangle$. For a branch-parameter ${\rho}=2$, the interior of the chemical graph $\langle\mathbb{C}\rangle$ in Fig. 2 is obtained by removing the set of vertices with degree 1 ${\rho}=2$ times; i.e., first remove the set $V_{1}=\{w_{1},w_{2},\ldots,w_{14}\}$ of vertices of degree 1 in $\langle\mathbb{C}\rangle$ and then remove the set $V_{2}=\{w_{15},w_{16},\ldots,w_{19}\}$ of vertices of degree 1 in $\langle\mathbb{C}\rangle-V_{1}$, where the removed vertices become the exterior-vertices of $\langle\mathbb{C}\rangle$. Fig. 2. An illustration of a hydrogen-suppressed chemical graph $\langle\mathbb{C}\rangle$ obtained from a chemical graph $\mathbb{C}$ with $\mathrm{r}(\mathbb{C})=4$ by removing all the hydrogens, where for ${\rho}=2$, $V^{\operatorname{ex}}(\mathbb{C})=\left\{w_{i}\mid i\in[1,19]\right\}$ and $V^{\operatorname{int}}(\mathbb{C})=\left\{u_{i}\mid i\in[1,28]\right\}$ For each interior-vertex $u\in V^{\operatorname{int}}(\mathbb{C})$, let $T_{u}\in\mathcal{T}^{\operatorname{ex}}(\langle\mathbb{C}\rangle)$ denote the chemical tree rooted at $u$ (where possibly $T_{u}$ consists of vertex $u$) and define the $\rho$-fringe-tree$\mathbb{C}[u]$ to be the chemical rooted tree obtained from $T_{u}$ by putting back the hydrogens originally attached to $T_{u}$ in $\mathbb{C}$. Let $\mathcal{T}(\mathbb{C})$ denote the set of $\rho$-fringe-trees $\mathbb{C}[u],u\in V^{\operatorname{int}}(\mathbb{C})$. Fig. 3 illustrates the set $\mathcal{T}(\mathbb{C})=\left\{\mathbb{C}\left[u_{i}\right]\mid i\in[1,28]\right\}$ of the 2-fringe-trees of the example $\mathbb{C}$ in Fig. 2. Fig. 3. The set $\mathbb{C}\left[u_{i}\right],i\in[1,28]$ of the example $\mathbb{C}$ in Fig. 2, where the root of each tree is depicted with a gray circle and the hydrogens attached to non-root vertices are omitted in the figure. Feature Function We extend the feature function of a chemical graph $\mathbb{C}$ introduced by Tanaka et al. [25]. The feature of an interior-edge $e=uv\in E^{\operatorname{int}}(\mathbb{C})$ such that $\alpha(u)=\mathrm{a}$, $\deg_{\langle\mathbb{C}\rangle}(u)=d$, $\alpha(v)=\mathrm{b}$, $\deg_{\langle\mathbb{C}\rangle}(v)=d^{\prime}$ and $\beta(e)=m$ is represented by a tuple $(\mathrm{a}d,\mathrm{b}d^{\prime},m)$, which is called the edge-configuration of the edge $e$, where we call the tuple $(\mathrm{a},\mathrm{b},m)$ the adjacency-configuration of the edge $e$. For an integer $K$, a feature vector $f(\mathbb{C})$ of a chemical graph $\mathbb{C}$ is defined by a feature function$f$ that consists of $K$ descriptors. We call $\mathbb{R}^{K}$ the feature space. Tanaka et al. [25] defined a feature vector $f(\mathbb{C})\in\mathbb{R}^{K}$ to be a combination of the frequency of edge-configurations of the interior-edges and the frequency of chemical rooted trees among the set of chemical rooted trees $\mathbb{C}[u]$ over all interior-vertices $u$. In this paper, we introduce the rank and the adjacency-configuration of leaf-edges as new descriptors in a feature vector of a chemical graph. See Supplementary Material for a full description of descriptors used in Stage 2 of the framework. Topological Specification A topological specification is described as a set of the following rules proposed by Shi et al. [26] and modified by Tanaka et al. [25]: (i) a seed graph $G_{\mathrm{C}}$ as an abstract form of a target chemical graph $\mathbb{C}$; (ii) a set $\mathcal{F}$ of chemical rooted trees as candidates for a tree $\mathbb{C}[u]$ rooted at each exterior-vertex $u$ in $\mathbb{C}$; and (iii) lower and upper bounds on the number of components in a target chemical graph such as chemical elements, double/triple bounds and the interior-vertices in $\mathbb{C}$. Fig. 4a,b illustrate examples of a seed graph $G_{\mathrm{C}}$ and a set $\mathcal{F}$ of chemical rooted trees, respectively. Given a seed graph $G_{\mathrm{C}}$, the interior of a target chemical graph $\mathbb{C}$ is constructed from $G_{\mathrm{C}}$ by replacing some edges $a=uv$ with paths $P_{a}$ between the end-vertices $u$ and $v$ and by attaching new paths $Q_{v}$ to some vertices $v$. Fig. 4. (a) An illustration of a seed graph $G_{\mathrm{C}}$ with $\mathrm{r}(G_{\mathrm{C}})=5$ where the vertices in $V_{\mathrm{C}}$ are depicted with gray circles, the edges in $E_{(\geq 2)}$ are depicted with dotted lines, the edges in $E_{(\geq 1)}$ are depicted with dashed lines, the edges in $E_{(0/1)}$ are depicted with gray bold lines and the edges in $E_{(=1)}$ are depicted with black solid lines; (b) A set $\mathcal{F}=\{\psi_{1},\psi_{2},\ldots,\psi_{30}\}\subseteq\mathcal{F}(D_{\pi})$ of 30 chemical rooted trees $\psi_{i},i\in[1,30]$, where the root of each tree is depicted with a gray circle, where the hydrogens attached to non-root vertices are omitted in the figure. For example, a chemical graph $\langle\mathbb{C}\rangle$ in Fig. 2 is constructed from the seed graph $G_{\mathrm{C}}$ in Fig. 4a as follows. - First replace five edges $a_{1}=u_{1}u_{2},a_{2}=u_{1}u_{3},a_{3}=u_{4}u_{7},a_{4}=u_{10}u_{11}$ and $a_{5}=u_{11}u_{12}$ in $G_{\mathrm{C}}$ with new paths $P_{a_{1}}=(u_{1},u_{13},u_{2})$, $P_{a_{2}}=(u_{1},u_{14},u_{3})$, $P_{a_{3}}=(u_{4},u_{15},u_{16},u_{7})$, $P_{a_{4}}=(u_{10},u_{17},u_{18},u_{19},u_{11})$ and $P_{a_{5}}=(u_{11},u_{20},u_{21},u_{22},u_{12})$, respectively to obtain a subgraph $G_{1}$ of $\langle\mathbb{C}\rangle$. - Next attach to this graph $G_{1}$ three new paths $Q_{u_{5}}=(u_{5},u_{24})$, $Q_{u_{18}}=(u_{18},u_{25},u_{26},u_{27})$ and $Q_{u_{22}}=(u_{22},u_{28})$ to obtain the interior of $\langle\mathbb{C}\rangle$ in Fig. 2. - Finally attach to the interior 28 trees selected from the set $\mathcal{F}$ and assign chemical elements and bond-multiplicities in the interior to obtain a chemical graph $\mathbb{C}$ in Fig. 2. In Fig, 3, $\psi_{1}\in\mathcal{F}$ is selected for $\mathbb{C}\left[u_{i}\right]$, $i\in\{6,7,11\}$. Similarly $\psi_{2}$ for $\mathbb{C}[u_{9}]$, $\psi_{4}$ for $\mathbb{C}[u_{1}]$, $\psi_{6}$ for $\mathbb{C}\left[u_{i}\right]$, $i\in\{3,4,5,10,19,22,25,26\}$, $\psi_{8}$ for $\mathbb{C}[u_{8}]$, $\psi_{11}$ for $\mathbb{C}\left[u_{i}\right]$, $i\in\{2,13,16,17,20\}$, $\psi_{15}$ for $\mathbb{C}[u_{12}]$, $\psi_{19}$ for $\mathbb{C}[u_{15}]$, $\psi_{23}$ for $\mathbb{C}[u_{21}]$, $\psi_{24}$ for $\mathbb{C}[u_{24}]$, $\psi_{25}$ for $\mathbb{C}[u_{27}]$, $\psi_{26}$ for $\mathbb{C}[u_{23}]$, $\psi_{27}$ for $\mathbb{C}[u_{14}]$ and $\psi_{30}$ for $\mathbb{C}[u_{28}]$. Our definition of a topological specification is analogous with the one by Tanaka et al. [25] except for a necessary modification due to the introduction of multiple valences of chemical elements, cations and anions (see Supplementary Material for a full description of topological specification). 4. Linear Regressions For an integer $p\geq 1$ and a vector $x\in\mathbb{R}^{p}$, the $j$-th entry of $x$ is denoted by $x(j),j\in[1,p]$. Let $D$ be a data set of chemical graphs $\mathbb{C}$ with an observed value $a(\mathbb{C})\in\mathbb{R}$, where we denote by $a_{i}=a(\mathbb{C}_{i})$ for an indexed graph $\mathbb{C}_{i}$. Let $f$ be a feature function that maps a chemical graph $\mathbb{C}$ to a vector $f(\mathbb{C})\in\mathbb{R}^{K}$ where we denote by $x_{i}=f(\mathbb{C}_{i})$ for an indexed graph $\mathbb{C}_{i}$. For a prediction function $\eta:\mathbb{R}^{K}\to\mathbb{R}$, define an error function (4)$\operatorname{Err}(\eta;D)\triangleq\sum_{\mathbb{C}_{i}\in D}\left(a_{i}-\eta% \left(f\left(\mathbb{C}_{i}\right)\right)\right)^{2}=\sum_{\mathbb{C}_{i}\in D% }\left(a_{i}-\eta\left(x_{i}\right)\right)^{2},$ and define the coefficient of determination $R^{2}(\eta,D)$ to be (5)$\mathrm{R}^{2}(\eta,D)\triangleq 1-\frac{\operatorname{Err}(\eta;D)}{\sum_{% \mathbb{C}_{i}\in D}\left(a_{i}-\widetilde{a}\right)^{2}}\text{ for }% \widetilde{a}=\frac{1}{|D|}\sum_{\mathbb{C}\in D}a(\mathbb{C}).$ For a feature space $\mathbb{R}^{K}$, a hyperplane is denoted by a pair $(w,b)$ of a vector $w\in\mathbb{R}^{K}$ and a real $b\in\mathbb{R}$. Given a hyperplane $(w,b)\in\mathbb{R}^{K}w$, a prediction function $\eta_{w,b}:\mathbb{R}^{K}\to\mathbb{R}$ is defined by setting (6)$\eta_{w,b}(x)\triangleq w\cdot x+b=\sum_{j\in[1,K]}w(j)x(j)+b.$ We wish to find a hyperplane $(w,b)$ that minimizes the error function $\mathrm{Err}(\eta_{w,b};D)$. In many cases, a feature vector $f$ contains descriptors that do not play an essential role in constructing a good prediction function. When we solve the minimization problem, the entries $w(j)$ for some descriptors $j\in[1,K]$ in the resulting hyperplane $(w,b)$ become zero, which means that these descriptors were not necessarily important for finding a prediction function $\eta_{w,b}$. It is proposed that solving the minimization with an additional penalty term $\tau$ to the error function often results in more number of entries $w(j)=0$, reducing a set of descriptors necessary for defining a prediction function $\eta_{w,b}$. For an error function with such a penalty term, a Ridge function $\frac{1}{2|D|}\operatorname{Err}\left(\eta_{w,b};D\right)+\lambda\left[\sum_{j% \in[1,K]}w(j)^{2}+b^{2}\right]$ [28] and a Lasso function $\frac{1}{2|D|}\operatorname{Err}\left(\eta_{w,b};D\right)+\lambda\left[\sum_{j% \in[1,K]}|w(j)|+|b|\right]$ [29] are known, where $\lambda\in\mathbb{R}$ is a given real number. Given a prediction function $\eta_{w,b}$, we can simulate a process of computing the output $\eta_{w,b}(x)$ for an input $x\in\mathbb{R}^{K}$ as an MILP $\mathcal{M}(x,y;\mathcal{C}_{1})$ in the framework. By solving such an MILP for a specified target value $y^{*}$, we can find a vector $x^{*}\in\mathbb{R}^{K}$ such that $\eta_{w,b}(x^{*})=y^{*}$. Instead of specifying a single target value $y^{*}$, we use lower and upper bounds $\underline{y}^{*},\overline{y}^{*}\in\mathbb{R}$ on the value $a(\mathbb{C})$ of a chemical graph $\mathbb{C}$ to be inferred. We can control the range between $\underline{y}^{*}$ and $\overline{y}^{*}$ for searching a chemical graph $\mathbb{C}$ by setting $\underline{y}^{*}$ and $\overline{y}^{*}$ to be close or different values. A desired MILP is formulated as follows. $\mathcal{M}(x,y;\mathcal{C}_{1})$: An MILP formulation for the inverse problem to prediction function. constants: - A hyperplane $(w,b)$ with $w\in\mathbb{R}^{K}$ and $b\in\mathbb{R}$; - Real values $\underline{y}^{*},\overline{y}^{*}\in\mathbb{R}$ such that $\underline{y}^{*}<\overline{y}^{*}$; - A set $I_{\mathbb{Z}}$ of indices $j\in[1,K]$ such that the $j$-th descriptor $\operatorname{dcp}_{j}(\mathbb{C})$ is always an integer; - A set $I_{+}$ of indices $j\in[1,K]$ such that the $j$-th descriptor $\operatorname{dcp}_{j}(\mathbb{C})$ is always non-negative; - $\ell(j),u(j)\in\mathbb{R}$, $j\in[1,K]$: lower and upper bounds on the $j$-th descriptor; variables: - Non-negative integer variable $x(j)\in\mathbb{Z}_{+}$, $j\in I_{\mathbb{Z}}\cap I_{+}$; - Integer variable $x(j)\in\mathbb{Z},j\in I_{\mathbb{Z}}\setminus I_{+}$; - Non-negative real variable $x(j)\in\mathbb{R}_{+},j\in I_{+}\setminus I_{\mathbb{Z}}$; - Real variable $x(j)\in\mathbb{R},j\in[1,K]\setminus(I_{\mathbb{Z}}\cup I_{+})$; constraints: (7)$\displaystyle\ell(j)\leq x(j)\leq u(j),j\in[1,K];$ $\displaystyle\quad\underline{y}^{*}\leq\sum_{j\in[1,K]}w(j)x(j)+b\leq\bar{y}^{*}$ objective function: none. The number of variables and constraints in the above MILP formulation is $O(K)$. It is not difficult to see that the above MILP is an NP-hard problem. The entire MILP for Stage 4 consists of the two MILPs $\mathcal{M}(x,y;\mathcal{C}_{1})$ and $\mathcal{M}(g,x;\mathcal{C}_{2})$ with no objective function. The latter represents the computation process of our feature function $f$ and a given topological specification. See Supplementary Material for the details of MILP $\mathcal{M}(g,x;\mathcal{C}_{2})$. 5. Results We implemented our method of Stages 1 to 5 for inferring chemical graphs under a given topological specification and conducted experiments to evaluate the computational efficiency. We executed the experiments on a PC with Processor: Core i7-9700 (3.0 GHz; 4.7 GHz at the maximum) and Memory: 16 GB RAM DDR4. Results on Phase 1. We have conducted experiments of linear regression for 37 chemical properties among which we report the following 18 properties to which the test coefficient of determination ${\rm R}^{2}$ attains at least 0.8: octanol/water partition coefficient (Kow), heat of combustion (Hc), vapor density (Vd), optical rotation (OptR), electron density on the most positive atom (EDPA), melting point (Mp), heat of atomization (Ha), heat of formation (Hf), internal energy at 0K (U0), energy of lowest unoccupied molecular orbital (Lumo), isotropic polarizability (Alpha), heat capacity at 298.15K (Cv), solubility (Sl), surface tension (SfT), viscosity (Vis), isobaric heat capacities in liquid phase (IhcLiq), isobaric heat capacities in solid phase (IhcSol) and lipophilicity (Lp). We used data sets provided by HSDB from PubChem [30]for Kow, Hc, Vd and OptR, M. Jalali-Heravi and M. Fatemi [31] for EDPA, Roy and Saha [32] for Mp, Ha and Hf, MoleculeNet [33] for U0, Lumo, Alpha, Cv and Sl, Goussard et al. [34] for SfT and Vis, R. Naef [35] for IhcLiq and IhcSol, and Figshare [36] for Lp. Properties U0, Lumo, Alpha and Cv share a common original data set $D^{*}$ with more than 130,000 compounds, and we used a set $D_{\pi}$ of 1,000 graphs randomly selected from $D^{*}$ as a common data set of these four properties $\pi$ in this experiment. Stages 1, 2 and 3 in Phase 1 are implemented as follows. Stage 1. We set a graph class $\mathcal{G}$ to be the set of all chemical graphs with any graph structure, and set a branch-parameter ${\rho}$ to be 2. For each of the properties, we first select a set $\Lambda$ of chemical elements and then collect a data set $D_{\pi}$ on chemical graphs over the set $\Lambda$ of chemical elements. During construction of the data set $D_{\pi}$, chemical compounds that do not satisfy one of the following are eliminated: the graph is connected, the number of non-hydrogen neighbors of each atom is at most four, and the number of carbon atoms is at least four. Table 1 shows the size and range of data sets that we prepared for each chemical property in Stage 1, where we denote the following: - $\Lambda$: the set of elements used in the data set $D_{\pi}$; $\Lambda$ is one of the following 11 sets: $\Lambda_{1}=\{\mathrm{H},\mathrm{C},\mathrm{O}\}$; $\Lambda_{2}=\{\mathrm{H},\mathrm{C},\mathrm{O},\mathrm{N}\}$; $\Lambda_{3}=\{\mathrm{H},\mathrm{C},\mathrm{O},\mathrm{S}_{(2)}\}$; $\Lambda_{4}=\{\mathrm{H},\mathrm{C},\mathrm{O},\mathrm{S}\text{i}\}$; $\Lambda_{5}=\{\mathrm{H},\mathrm{C},\mathrm{O},\mathrm{N},\mathrm{C}\text{l},% \mathrm{P}_{(3)},\mathrm{P}_{(5)}\}$; $\Lambda_{6}=\{\mathrm{H},\mathrm{C},\mathrm{O},\mathrm{N},\mathrm{S}_{(2)},% \mathrm{F}\}$; $\Lambda_{7}=\{\mathrm{H},\mathrm{C},\mathrm{O},\mathrm{N},\mathrm{S}_{(2)},% \mathrm{S}_{(6)},\mathrm{C}\text{l}\}$; $\Lambda_{8}=\{\mathrm{H},\mathrm{C}_{(2)},\mathrm{C}_{(3)},\mathrm{C}_{(4)},% \mathrm{O},\mathrm{N}_{(2)},\mathrm{N}_{(3)}\}$; $\Lambda_{9}=\{\mathrm{H},\mathrm{C},\mathrm{O},\mathrm{N},\mathrm{S}_{(2)},% \mathrm{S}_{(4)},\mathrm{S}_{(6)},\mathrm{C}\text{l}\}$; $\Lambda_{10}=\{\mathrm{H},\mathrm{C}_{(2)},\mathrm{C}_{(3)},\mathrm{C}_{(4)},% \mathrm{C}_{(5)},\mathrm{O},\mathrm{N}_{(1)},\mathrm{N}_{(2)},\mathrm{N}_{(3)}% ,\mathrm{F}\}$; and $\Lambda_{11}=\{\mathrm{H},\mathrm{C}_{(2)},\mathrm{C}_{(3)},\mathrm{C}_{(4)},% \mathrm{O},\mathrm{N}_{(2)},\mathrm{N}_{(3)},\mathrm{S}_{(2)},\mathrm{S}_{(4)}% ,\mathrm{S}_{(6)},\mathrm{C}\text{l}\}$, where ${\tt e}_{(i)}$ for a chemical element ${\tt e}$ and an integer $i\geq 1$ means that a chemical element ${\tt e}$ with valence $i$. - $|D_{\pi}|$: the size of data set $D_{\pi}$ over $\Lambda$ for the property $\pi$. - $\underline{n},~{}\overline{n}$: the minimum and maximum values of the number $n(\mathbb{C})$ of non-hydrogen atoms in compounds $\mathbb{C}$ in $D_{\pi}$. - $\underline{a},~{}\overline{a}$: the minimum and maximum values of $a(\mathbb{C})$ for $\pi$ over compounds $\mathbb{C}$ in $D_{\pi}$. - $|\Gamma|$: the number of different edge-configurations of interior-edges over the compounds in $D_{\pi}$. - $|\mathcal{F}|$: the number of non-isomorphic chemical rooted trees in the set of all 2-fringe-trees in the compounds in $D_{\pi}$. - $K$: the number of descriptors in a feature vector $f(\mathbb{C})$. Table 1.Results in Phase 1. $\pi$ $\Lambda$ $|D_{\pi}|$ $\underline{n},~{}\overline{n}$ $\underline{a},~{}\overline{a}$ $|\Gamma|$ $|\mathcal{F}|$ $K$ $\lambda_{\pi}$ $K^{\prime}$ test $\mathrm{R}^{2}$ Kow $\Lambda_{2}$ 684 4, 58 –7.5, 15.6 25 166 223 $6.4\mathrm{E}{-5}$ 80.3 0.953 Kow $\Lambda_{9}$ 899 4, 69 –7.5, 15.6 37 219 303 $5.5\mathrm{E}{-5}$ 112.1 0.927 Hc $\Lambda_{2}$ 255 4, 63 49.6, 35099.6 17 106 154 $1.9\mathrm{E}{-4}$ 19.2 0.946 Hc $\Lambda_{7}$ 282 4, 63 49.6, 35099.6 21 118 177 $1.9\mathrm{E}{-4}$ 20.5 0.951 Vd $\Lambda_{2}$ 474 4, 30 0.7, 20.6 21 160 214 $1.0\mathrm{E}{-3}$ 3.6 0.927 Vd $\Lambda_{5}$ 551 4, 30 0.7, 20.6 24 191 256 $5.5\mathrm{E}{-4}$ 8.0 0.942 OptR $\Lambda_{2}$ 147 5, 44 –117.0, 165.0 21 55 107 $4.6\mathrm{E}{-4}$ 39.2 0.823 OptR $\Lambda_{6}$ 157 5, 69 –117.0, 165.0 25 62 123 $7.3\mathrm{E}{-4}$ 41.7 0.825 EDPA $\Lambda_{1}$ 52 11, 16 0.80, 3.76 9 33 64 $1.0\mathrm{E}{-4}$ 10.9 0.999 Mp $\Lambda_{2}$ 467 4, 122 –185.33, 300.0 23 142 197 $3.7\mathrm{E}{-5}$ 82.5 0.817 Ha $\Lambda_{3}$ 115 4, 11 1100.6, 3009.6 8 83 115 $3.7\mathrm{E}{-5}$ 39.0 0.997 Hf $\Lambda_{1}$ 82 4, 16 30.2, 94.8 5 50 74 $1.0\mathrm{E}{-4}$ 34.0 0.987 U0 $\Lambda_{10}$ 977 4, 9 –570.6,  –272.8 59 190 297 $1.0\mathrm{E}{-7}$ 246.7 0.999 Lumo $\Lambda_{10}$ 977 4, 9 –0.11, 0.10 59 190 297 $6.4\mathrm{E}{-5}$ 133.9 0.841 Alpha $\Lambda_{10}$ 977 4, 9 50.9, 99.6 59 190 297 $1.0\mathrm{E}{-5}$ 125.5 0.961 Cv $\Lambda_{10}$ 977 4, 9 19.2, 44.0 59 190 297 $1.0\mathrm{E}{-5}$ 165.3 0.961 Sl $\Lambda_{9}$ 915 4, 55 –11.6, 1.11 42 207 300 $7.3\mathrm{E}{-5}$ 130.6 0.808 SfT $\Lambda_{4}$ 247 5, 33 12.3, 45.1 11 91 128 $6.4\mathrm{E}{-4}$ 20.9 0.804 Vis $\Lambda_{4}$ 282 5, 36 –0.64, 1.63 12 88 126 $8.2\mathrm{E}{-4}$ 16.3 0.893 IhcLiq $\Lambda_{2}$ 770 4, 78 106.3, 1956.1 23 200 256 $1.9\mathrm{E}{-5}$ 82.2 0.987 IhcLiq $\Lambda_{7}$ 865 4, 78 106.3, 1956.1 29 246 316 $8.2\mathrm{E}{-6}$ 139.1 0.986 IhcSol $\Lambda_{8}$ 581 5, 70 67.4, 1220.9 33 124 192 $2.8\mathrm{E}{-5}$ 75.9 0.985 IhcSol $\Lambda_{11}$ 668 5, 70 67.4, 1220.9 40 140 228 $2.8\mathrm{E}{-5}$ 86.7 0.982 Lp $\Lambda_{2}$ 615 6, 60 –3.62, 6.84 32 116 186 $1.0\mathrm{E}{-4}$ 98.5 0.856 Lp $\Lambda_{9}$ 936 6, 74 –3.62, 6.84 44 136 231 $6.4\mathrm{E}{-5}$ 130.4 0.840 Stage 2. The newly defined feature function in our chemical model without suppressing hydrogen in Section 3 is used. We standardize the range of each descriptor and the range $\{t\in\mathbb{R}\mid\underline{a}\leq t\leq\bar{a}\}$ of property values $a(\mathbb{C}),\mathbb{C}\in D_{\pi}$. Stage 3. For each chemical property $\pi$, we select a penalty value $\lambda_{\pi}$ in the Lasso function from 36 different values from 0 to 100 by conducting linear regression as a preliminary experiment. We conducted an experiment in Stage 3 to evaluate the performance of the prediction function based on cross-validation. For a property $\pi$, an execution of a cross-validation consists of five trials of constructing a prediction function as follows. First partition the data set $D_{\pi}$ into five subsets $D^{(k)}$, $k\in[1,5]$ randomly. For each $k\in[1,5]$, the $k$-th trial constructs a prediction function $\eta^{(k)}$ by conducting a linear regression with the penalty term $\lambda_{\pi}$ using the set $D_{\pi}\setminus D^{(k)}$ as a training data set. We used scikit-learn version 0.23.2 with Python 3.8.5 for executing linear regression with Lasso function. For each property, we executed ten cross-validations and we show the median of test $\mathrm{R}^{2}(\eta^{(k)},D^{(k)}),k\in[1,5]$ over all ten cross-validations. Recall that a subset of descriptors is selected in linear regression with Lasso function and let $K^{\prime}$ denote the average number of selected descriptors over all 50 trials. The running time per trial in a cross-validation was at most one second. Table 1 shows the results on Stages 2 and 3, where we denote the following: - $\lambda_{\pi}$: the penalty value in the Lasso function selected for a property $\pi$, where $a\mathrm{E}{b}$ means $a\times 10^{b}$. - $K^{\prime}$: the average of the number of descriptors selected in the linear regression over all 50 trials in ten cross-validations. - test $\mathrm{R}^{2}$: the median of test $\mathrm{R}^{2}$ over all 50 trials in ten cross-validations. Recall that the adjacency-configuration for leaf-edges was introduced as a new descriptor in this paper. Without including this new descriptor, the test $\mathrm{R}^{2}$ for property Vis was 0.790, that for Lumo was 0.799 and that for Mp was 0.796, while the test $\mathrm{R}^{2}$ for each of the other properties in Table 1 was almost the same. From Table 1, we observe that a relatively large number of properties admit a good prediction function based on linear regression. The number $K^{\prime}$ of descriptors used in linear regression is considerably small for some properties. For example of property Vd, the four descriptors most frequently selected in the case of $\Lambda=\{\mathrm{H},\mathrm{O},\mathrm{C},\mathrm{N}\}$ are the number of non-hydrogen atoms; the number of interior-vertices $v$ with $\text{deg}_{\mathrm{C}^{\text{int}}}(v)=1$; the number of fringe-trees r-isomorphic to the chemical rooted tree $\psi_{1}$ in Fig. 5; and the number of leaf-edges with adjacency-configuration $(\mathrm{O},\mathrm{C},2)$. The eight descriptors most frequently selected in the case of $\Lambda=\left\{\mathrm{H},\mathrm{O},\mathrm{C},\mathrm{N},\mathrm{C}\text{l},% \mathrm{P}_{(3)},\mathrm{P}_{(5)}\right\}$ are the number of non-hydrogen atoms; the number of interior-vertices $v$ with $\text{deg}_{\mathrm{C}^{\text{int}}}(v)=1$; the number of exterior-vertices $v$ with $\alpha(v)=\mathrm{C}\text{l}$; the number of interior-edges with edge-configuration $\gamma_{i},i=1,2$, where $\gamma_{1}=(\mathrm{C}2,\mathrm{C}2,2)$ and $\gamma_{2}=(\mathrm{C}3,\mathrm{C}4,1)$; and the number of fringe-trees r-isomorphic to the chemical rooted tree $\psi_{i},i=1,2,3$ in Fig. 5. Fig. 5. An illustration of chemical rooted trees $\psi_{1}$, $\psi_{1}$ and $\psi_{3}$ that are selected in Lasso linear regression for constructing a prediction function to property Vd, where the root is depicted with a gray circle. For the 18 properties listed in Table 1, we used ANN to construct prediction functions. For this purpose, we used our newly proposed feature vector and the experimental setup as explained in Tanaka et al. [25]. From these computation experiments, we observe that for the properties Hc, Vd, Ha, Hf, U0, Alpha and Cv, the test $\mathrm{R}^{2}$ scores of the prediction functions obtained by Lasso linear regression is at least 0.05 more than those obtained by ANN. For the properties OptR, Sl and SfT, the test $\mathrm{R}^{2}$ scores of the prediction functions obtained by ANN is at least 0.05 more than those obtained by Lasso linear regression. For the other properties, the test $\mathrm{R}^{2}$ scores obtained by Lasso linear regression and ANN are comparable. Results on Phase 2. We used a set of seven instances $I_{\mathrm{a}}$, $I_{\mathrm{b}}^{i},i\in[1,4]$, $I_{\mathrm{c}}$ and $I_{\mathrm{d}}$ based on seed graphs prepared by Shi et al. [26] to execute Stages 4 and 5 in Phase 2. We here present their seed graphs $G_{\mathrm{C}}$ (see Supplementary Material for the details of instances $I_{\mathrm{a}}$, $I_{\mathrm{b}}^{i},i\in[1,4]$, $I_{\mathrm{c}}$ and $I_{\mathrm{d}}$). The seed graph $G_{\mathrm{C}}$ of instance $I_{\mathrm{a}}$ is illustrated in Fig. 4a. The seed graph $G_{\mathrm{C}}^{1}$ (resp., $G_{\mathrm{C}}^{i},i=2,3,4$) of instances $I_{\mathrm{b}}^{1}$ and $I_{\mathrm{d}}$ (resp., $I_{\mathrm{b}}^{i},i=2,3,4$) is illustrated in Fig. 6. Fig. 6. (i) Seed graph $G_{\mathrm{C}}^{1}$ for $I_{\mathrm{b}}^{1}$ and $I_{\mathrm{d}}$; (ii) Seed graph $G_{\mathrm{C}}^{2}$ for $I_{\mathrm{b}}^{2}$; (iii) Seed graph $G_{\mathrm{C}}^{3}$ for $I_{\mathrm{b}}^{3}$; (iv) Seed graph $G_{\mathrm{C}}^{4}$ for $I_{\mathrm{b}}^{4}$. Instance $I_{\mathrm{c}}$ has been introduced by Shi et al. [26] in order to infer a chemical graph $\mathbb{C}^{\dagger}$ such that the core of $\mathbb{C}^{\dagger}$ is the same as the core of chemical graph $\mathbb{C}_{A}$: CID 24822711 in Fig. 7a and the frequency of each edge-configuration in the non-core of $\mathbb{C}^{\dagger}$ is the same as that of chemical graph $\mathbb{C}_{B}$: CID 59170444 illustrated in Fig. 7b. This means that the seed graph $G_{\mathrm{C}}$ of $I_{\mathrm{c}}$ is the core of $\mathbb{C}_{A}$ which is indicated by a shaded area in Fig. 7a. Instance $I_{\mathrm{d}}$ has been introduced by Shi et al. [26] in order to infer a monocyclic chemical graph $\mathbb{C}^{\dagger}$ such that the frequency vector of edge-configurations in $\mathbb{C}^{\dagger}$ is a vector obtained by merging those of two chemical graphs $\mathbb{C}_{A}$: CID 10076784 and $\mathbb{C}_{B}$: CID 44340250 illustrated in Fig. 7c,d, respectively. Fig. 7. An illustration of chemical compounds for instances $I_{\rm c}$ and $I_{\rm d}$: (a) $\mathbb{C}_{A}$: CID 24822711; (b) $\mathbb{C}_{B}$: CID 59170444; (c) $\mathbb{C}_{A}$: CID 10076784; (d) $\mathbb{C}_{B}$: CID 44340250, where hydrogens are omitted. Stage 4. We executed Stage 4 for five properties $\pi\in\{$Hc, Vd, OptR, IhcLiq, Vis$\}$. For the MILP formulation $\mathcal{M}(x,y;\mathcal{C}_{1})$ in Section 4, we use the prediction function $\eta_{w,b}$ that attained the median test $\mathrm{R}^{2}$ in Table 1. We used CPLEX version 12.10 to solve an MILP in Stage 4. Tables 2,3,4,5,6 show the computational results of the experiment in Stage 4 for the five properties, where we denote the following: - $\underline{y}^{*},~{}\overline{y}^{*}$: lower and upper bounds $\underline{y}^{*}$, $\overline{y}^{*}\in\mathbb{R}$ on the value $a(\mathbb{C})$ of a chemical graph $\mathbb{C}$ to be inferred; - $\#$v (resp., $\#$c): the number of variables (resp., constraints) in the MILP in Stage 4; - I-time: the time (sec.) to solve the MILP in Stage 4; - $n$: the number $n\left(\mathbb{C}^{\dagger}\right)$ of non-hydrogen atoms in the chemical graph $\mathbb{C}^{\dagger}$ inferred in Stage 4; and - $\mathrm{n}^{\mathrm{int}}$: the number $\mathrm{n}^{\mathrm{int}}\left(\mathbb{C}^{\dagger}\right)$ of interior-vertices in the chemical graph $\mathbb{C}^{\dagger}$ inferred in Stage 4; - $\eta(f(\mathbb{C}^{\dagger}))$: the predicted property value $\eta(f(\mathbb{C}^{\dagger}))$ of the chemical graph $\mathbb{C}^{\dagger}$ inferred in Stage 4. Table 2.Results of Stages 4 and 5 for Hc using Lasso linear regression. inst. $\underline{y}^{*},~{}\overline{y}^{*}$ $\#$v $\#$c I-time $n$ $\mathrm{n}^{\mathrm{int}}$ $\eta(f(\mathbb{C}^{\dagger}))$ D-time $\mathbb{C}$-LB $\#\mathbb{C}$ $I_{\mathrm{a}}$ 5950, 6050 9902 9255 4.6 44 25 5977.9 0.068 1 1 $I_{\mathrm{b}}^{1}$ 5950, 6050 9404 6776 1.7 36 10 6007.1 0.048 6 6 $I_{\mathrm{b}}^{2}$ 5950, 6050 11729 9891 16.7 50 25 6043.7 38.7 $2.4\!\!\times\!\!10^{5}$ 100 $I_{\mathrm{b}}^{3}$ 5950, 6050 11510 9894 16.3 47 25 6015.4 0.353 8724 100 $I_{\mathrm{b}}^{4}$ 5950, 6050 11291 9897 9.0 49 26 5971.6 0.304 84 84 $I_{\mathrm{c}}$ 13700, 13800 6915 7278 0.7 50 33 13703.3 0.016 1 1 $I_{\mathrm{d}}$ 13700, 13800 5535 6781 4.9 44 23 13704.7 0.564 $4.3\!\!\times\!\!10^{5}$ 100 Table 3.Results of Stages 4 and 5 for Vd using Lasso linear regression. inst. $\underline{y}^{*},~{}\overline{y}^{*}$ $\#$v $\#$c I-time $n$ $\mathrm{n}^{\mathrm{int}}$ $\eta(f(\mathbb{C}^{\dagger}))$ D-time $\mathbb{C}$-LB $\#\mathbb{C}$ $I_{\mathrm{a}}$ 16, 17 9481 9358 1.6 38 23 16.83 0.070 1 1 $I_{\mathrm{b}}^{1}$ 16, 17 9928 6986 1.5 35 12 16.68 0.206 48 48 $I_{\mathrm{b}}^{2}$ 21, 22 12373 10101 10.0 48 25 21.62 0.104 20 20 $I_{\mathrm{b}}^{3}$ 21, 22 12159 10104 6.5 48 25 21.95 3.65 $8.6\!\!\times\!\!10^{5}$ 100 $I_{\mathrm{b}}^{4}$ 21, 22 11945 10107 8.1 48 25 21.34 0.057 6 6 $I_{\mathrm{c}}$ 21, 22 7073 7438 0.7 50 34 21.89 0.016 1 1 $I_{\mathrm{d}}$ 17, 18 5693 6942 2.1 41 23 17.94 0.161 216 100 Table 4.Results of Stages 4 and 5 for OptR using Lasso linear regression. inst. $\underline{y}^{*},~{}\overline{y}^{*}$ $\#$v $\#$c I-time $n$ $\mathrm{n}^{\mathrm{int}}$ $\eta(f(\mathbb{C}^{\dagger}))$ D-time $\mathbb{C}$-LB $\#\mathbb{C}$ $I_{\mathrm{a}}$ 70, 71 8962 9064 3.5 40 23 70.1 0.061 1 1 $I_{\mathrm{b}}^{1}$ 70, 71 9432 6662 2.7 37 14 70.1 0.185 2622 100 $I_{\mathrm{b}}^{2}$ 70, 71 11818 9773 10.0 50 25 70.8 0.041 4 4 $I_{\mathrm{b}}^{3}$ 70, 71 11602 9776 10.2 50 25 70.2 0.241 60 60 $I_{\mathrm{b}}^{4}$ 70, 71 11386 9779 24.7 49 25 70.9 6.39 $4.6\!\!\times\!\!10^{5}$ 100 $I_{\mathrm{c}}$ –112, –111 6807 7170 1.8 50 32 -111.9 0.016 1 1 $I_{\mathrm{d}}$ 70, 71 5427 6673 6.1 42 23 70.2 0.127 78768 100 Table 5.Results of Stages 4 and 5 for IhcLiq using Lasso linear regression. inst. $\underline{y}^{*},~{}\overline{y}^{*}$ $\#$v $\#$c I-time $n$ $\mathrm{n}^{\mathrm{int}}$ $\eta(f(\mathbb{C}^{\dagger}))$ D-time $\mathbb{C}$-LB $\#\mathbb{C}$ $I_{\mathrm{a}}$ 1190, 1210 10180 9538 3.9 48 26 1208.5 0.071 2 2 $I_{\mathrm{b}}^{1}$ 1190, 1210 10784 7191 2.4 35 14 1206.7 0.082 12 12 $I_{\mathrm{b}}^{2}$ 1190, 1210 13482 10302 14.1 47 25 1206.7 0.11 12 12 $I_{\mathrm{b}}^{3}$ 1190, 1210 13275 10301 9.0 49 27 1209.9 0.090 24 24 $I_{\mathrm{b}}^{4}$ 1190, 1210 13128 10306 16.5 50 25 1208.4 0.424 2388 100 $I_{\mathrm{c}}$ 1190, 1210 7193 7560 0.8 50 33 1196.5 0.016 1 1 $I_{\mathrm{d}}$ 1190, 1210 5813 7063 2.2 44 23 1198.8 5.63 $5.2\!\!\times\!\!10^{5}$ 100 Table 6.Results of Stages 4 and 5 for Vis using Lasso linear regression. inst. $\underline{y}^{*},~{}\overline{y}^{*}$ $\#$v $\#$c I-time $n$ $\mathrm{n}^{\mathrm{int}}$ $\eta(f(\mathbb{C}^{\dagger}))$ D-time $\mathbb{C}$-LB $\#\mathbb{C}$ $I_{\mathrm{a}}$ 1.25, 1.30 6847 8906 1.3 38 22 1.295 0.042 2 2 $I_{\mathrm{b}}^{1}$ 1.25, 1.30 7270 6397 2.5 36 15 1.272 0.155 140 100 $I_{\mathrm{b}}^{2}$ 1.85, 1.90 8984 9512 8.9 45 25 1.879 0.149 288 100 $I_{\mathrm{b}}^{3}$ 1.85, 1.90 8741 9515 16.2 45 26 1.880 0.137 4928 100 $I_{\mathrm{b}}^{4}$ 1.85, 1.90 8498 9518 8.1 45 25 1.851 0.13 660 100 $I_{\mathrm{c}}$ 2.75, 2.80 6813 7162 1.0 50 33 2.763 0.025 4 4 $I_{\mathrm{d}}$ 1.85, 1.90 5433 6665 2.7 41 23 1.881 0.138 4608 100 From Tables 2,3,4,5,6 we observe that an instance with a large number of variables and constraints takes more running time than those with a smaller size in general. We solved all instances in this experiment with our MILP formulation in a few seconds to around 30 seconds. Fig. 8a–e illustrate the chemical graphs $\mathbb{C}^{\dagger}$ inferred from $I_{\mathrm{c}}$ with $(\underline{y}^{*},\overline{y}^{*})=(13700,13800)$ of Hc, $I_{\mathrm{b}}^{2}$ with $(\underline{y}^{*},\overline{y}^{*})=(21,22)$ of Vd, $I_{\mathrm{b}}^{4}$ with $(\underline{y}^{*},\overline{y}^{*})=(70,71)$ of OptR, $I_{\mathrm{d}}$ with $(\underline{y}^{*},\overline{y}^{*})=(1190,1210)$ of IhcLiq, and $I_{\mathrm{b}}^{3}$ with $(\underline{y}^{*},\overline{y}^{*})=(1.85,1.90)$ of Vis, respectively. Fig. 8. (a) $\mathbb{C}^{\dagger}$ with $\eta(f(\mathbb{C}^{\dagger}))=13703.3$ inferred from $I_{\mathrm{c}}$ with $(\underline{y}^{*},\overline{y}^{*})=(13700,13800)$ of Hc; (b) $\mathbb{C}^{\dagger}$ with $\eta(f(\mathbb{C}^{\dagger}))=21.62$ inferred from $I_{\mathrm{b}}^{2}$ with $(\underline{y}^{*},\overline{y}^{*})=(21,22)$ of Vd; (c) $\mathbb{C}^{\dagger}$ with $\eta(f(\mathbb{C}^{\dagger}))=70.9$ inferred from $I_{\mathrm{b}}^{4}$ with $(\underline{y}^{*},\overline{y}^{*})=(70,71)$ of OptR; (d) $\mathbb{C}^{\dagger}$ with $\eta(f(\mathbb{C}^{\dagger}))=1198.8$ inferred from $I_{\mathrm{d}}$ with $(\underline{y}^{*},\overline{y}^{*})=(1190,1210)$ of IhcLiq; (e) $\mathbb{C}^{\dagger}$ with $\eta(f(\mathbb{C}^{\dagger}))=1.880$ inferred from $I_{\mathrm{b}}^{3}$ with $(\underline{y}^{*},\overline{y}^{*})=(1.85,1.90)$ of Vis; (f) $\mathbb{C}^{\dagger}$ inferred from $I_{\mathrm{b}}^{4}$ with lower and upper bounds on the predicted property value $\eta_{\pi}(f(\mathbb{C}^{\dagger}))$ of property $\pi\in\{$Kow, Lp, Sl$\}$ in Table 9. Similarly, we executed Stage 4 for these seven instances $I_{\mathrm{a}}$, $I_{\mathrm{b}}^{i},i\in[1,4]$, $I_{\mathrm{c}}$ and $I_{\mathrm{d}}$ for five properties $\pi\in\{$Hc, Vd, OptR, IhcLiq, Vis$\}$ by using the prediction functions obtained by ANN. We list the running time to solve MILP formulation for each of these instances in Tables 7,8. From the computation experiments, we observe that for many instances, the running time is significantly faster than that of Stage 4 based on ANN. Table 7.Running time of Stage 4 for Hc, Vd and OptR using ANN. Hc Vd OptR inst. $\underline{y}^{*},~{}\overline{y}^{*}$ I-time inst. $\underline{y}^{*},~{}\overline{y}^{*}$ I-time inst. $\underline{y}^{*},~{}\overline{y}^{*}$ I-time $I_{\mathrm{a}}$ 13350, 13450 24.7 $I_{\mathrm{a}}$ 18, 19 18.1 $I_{\mathrm{a}}$ 62, 63 35.6 $I_{\mathrm{b}}^{1}$ 9650, 9750 13.5 $I_{\mathrm{b}}^{1}$ 13, 14 9.4 $I_{\mathrm{b}}^{1}$ 109, 110 15.5 $I_{\mathrm{b}}^{2}$ 16750, 16850 70.4 $I_{\mathrm{b}}^{2}$ 15, 16 40.9 $I_{\mathrm{b}}^{2}$ 23, 24 192.6 $I_{\mathrm{b}}^{3}$ 12350, 12450 87.0 $I_{\mathrm{b}}^{3}$ 20, 21 46.3 $I_{\mathrm{b}}^{3}$ -2, -1 936.4 $I_{\mathrm{b}}^{4}$ 14250, 14350 70.9 $I_{\mathrm{b}}^{4}$ 22, 23 27.1 $I_{\mathrm{b}}^{4}$ 19, 20 63.9 $I_{\mathrm{c}}$ 10400, 10500 31.3 $I_{\mathrm{c}}$ 20, 21 20.5 $I_{\mathrm{c}}$ 86, 87 16.4 $I_{\mathrm{d}}$ 12500, 12600 44.3 $I_{\mathrm{d}}$ 18, 19 6.1 $I_{\mathrm{d}}$ 30, 31 31.8 Table 8.Running time of Stage 4 for IhcLiq and Vis using ANN. IhcLiq Vis inst. $\underline{y}^{*},~{}\overline{y}^{*}$ I-time inst. $\underline{y}^{*},~{}\overline{y}^{*}$ I-time $I_{\mathrm{a}}$ 980, 1000 56.6 $I_{\mathrm{a}}$ 1.85, 1.90 2.0 $I_{\mathrm{b}}^{1}$ 1000, 1020 40.4 $I_{\mathrm{b}}^{1}$ 1.95, 2.00 3.5 $I_{\mathrm{b}}^{2}$ 1130, 1150 71.6 $I_{\mathrm{b}}^{2}$ 1.85, 1.90 19.7 $I_{\mathrm{b}}^{3}$ 1240, 1260 45.0 $I_{\mathrm{b}}^{3}$ 2.35, 2.40 26.0 $I_{\mathrm{b}}^{4}$ 1240, 1260 105.7 $I_{\mathrm{b}}^{4}$ 2.50, 2.55 9.3 $I_{\mathrm{c}}$ 810, 830 9.7 $I_{\mathrm{c}}$ 3.90, 3.95 1.8 $I_{\mathrm{d}}$ 1100, 1120 25.8 $I_{\mathrm{d}}$ 3.30, 3.35 8.3 Inferring a chemical graph with target values in multiple properties Once we obtained prediction functions $\eta_{\pi}$ for several properties $\pi$, include MILP formulations for these functions $\eta_{\pi}$ into a single MILP $\mathcal{M}(x,y;\mathcal{C}_{1})$ so as to infer a chemical graph that satisfies given target values $y^{*}$ for these properties at the same time. As an additional experiment in Stage 4, we inferred a chemical graph that has a desired predicted value each of three properties Kow, Lp and Sl, where we used the prediction function $\eta_{\pi}$ for each property $\pi\in\{$Kow, Lp, Sl$\}$ constructed in Stage 3. Table 9 shows the result of Stage 4 for inferring a chemical graph $\mathbb{C}^{\dagger}$ from instances $I_{\mathrm{b}}^{2}$, $I_{\mathrm{b}}^{3}$ and $I_{\mathrm{b}}^{4}$ with $\Lambda=\left\{\mathrm{H},\mathrm{C},\mathrm{N},\mathrm{O},\mathrm{S}_{(2)},% \mathrm{S}_{(6)},\mathrm{Cl}\right\}$, where we denote the following: - $\pi$: one of the three properties Kow, Lp and Slused in the experiment; - $\underline{y}^{*}_{\pi},~{}\overline{y}^{*}_{\pi}$: lower and upper bounds $\underline{y}^{*}_{\pi},\overline{y}^{*}_{\pi}\in\mathbb{R}$ on the predicted property value $\eta_{\pi}(f(\mathbb{C}^{\dagger}))$ of property $\pi\in\{$Kow, Lp, Sl$\}$ for a chemical graph $\mathbb{C}^{\dagger}$ to be inferred; - $\#$v (resp., $\#$c): the number of variables (resp., constraints) in the MILP in Stage 4; - I-time: the time (sec.) to solve the MILP in Stage 4; - $n$: the number $n(\mathbb{C}^{\dagger})$ of non-hydrogen atoms in the chemical graph $\mathbb{C}^{\dagger}$ inferred in Stage 4; - $n^{\text{int }}$: the number $n^{\text{int }}(\mathbb{C}^{\dagger})$ of interior-vertices in the chemical graph $\mathbb{C}^{\dagger}$ inferred in Stage 4; and - $\eta_{\pi}(f(\mathbb{C}^{\dagger}))$: the predicted property value $\eta_{\pi}(f(\mathbb{C}^{\dagger}))$ of property $\pi\in\{$Kow, Lp, Sl$\}$ for the chemical graph $\mathbb{C}^{\dagger}$ inferred in Stage 4. Table 9.Results of Stage 4 for instances $I_{\mathrm{b}}^{i},i=2,3,4$ with specified target values of three properties Kow, Lp and Sl using Lasso linear regression. inst. $\pi$ $\underline{y}_{\pi}^{*},~{}\overline{y}_{\pi}^{*}$ $\#$v $\#$c I-time $n$ $n^{\text{int }}$ $\eta_{\pi}(f(\mathbb{C}^{\dagger}))$ Kow –7.50, –7.40 –7.41 $I_{\mathrm{b}}^{2}$ Lp –1.40, –1.30 14574 11604 62.7 50 30 –1.33 Sl –11.6, –11.5 –11.52 Kow –7.40, –7.30 –7.38 $I_{\mathrm{b}}^{3}$ Lp –2.90, –2.80 14370 11596 35.5 48 25 –2.81 Sl –11.6, –11.4 –11.52 Kow –7.50, –7.40 –7.48 $I_{\mathrm{b}}^{4}$ Lp –0.70, –0.60 14166 11588 71.7 49 26 –0.63 Sl –11.4, –11.2 –11.39 Fig. 8f illustrates the chemical graph $\mathbb{C}^{\dagger}$ inferred from $I_{\mathrm{b}}^{4}$ with $(\underline{y}^{*}_{\pi_{1}},\overline{y}^{*}_{\pi_{1}})=(-7.50,$$-7.40)$, $(\underline{y}^{*}_{\pi_{2}},\overline{y}^{*}_{\pi_{2}})=(-0.70,-0.60)$ and $(\underline{y}^{*}_{\pi_{3}},\overline{y}^{*}_{\pi_{3}})=(-11.4,-11.2)$ for $\pi_{1}=$Kow, $\pi_{2}=$Lp and $\pi_{3}=$Sl, respectively. Stage 5. We executed Stage 5 to generate more target chemical graphs $\mathbb{C}^{*}$, where a chemical graph $\mathbb{C}^{*}$ is called a chemical isomer of a target chemical graph $\mathbb{C}^{\dagger}$ of a topological specification $\sigma$ if $f(\mathbb{C}^{*})=f(\mathbb{C}^{\dagger})$ and $\mathbb{C}^{*}$ also satisfies the same topological specification $\sigma$. We computed chemical isomers $\mathbb{C}^{*}$ of each target chemical graph $\mathbb{C}^{\dagger}$ inferred in Stage 4. We executed an algorithm to generate chemical isomers of $\mathbb{C}^{\dagger}$ up to 100 when the number of all chemical isomers exceeds 100. We can obtain such an algorithm from the dynamic programming proposed by Tanaka et al. [25] with a slight modification. The algorithm first decomposes $\mathbb{C}^{\dagger}$ into a set of acyclic chemical graphs, next replaces each acyclic chemical graph $T$ with another acyclic chemical graph $T^{\prime}$ that admits the same feature vector as that of $T$, and finally assembles the resulting acyclic chemical graphs into a chemical isomer $\mathbb{C}^{*}$ of $\mathbb{C}^{\dagger}$. Also, a lower bound on the total number of all chemical isomers of $\mathbb{C}^{\dagger}$ can be computed by the algorithm without generating all of them. Tables 2,3,4,5,6 show the computational results of the experiment in Stage 5 for the five properties, where we denote the following: - D-time: the running time (sec.) to execute the dynamic programming algorithm in Stage 5 to compute a lower bound on the number of all chemical isomers $\mathbb{C}^{*}$ of $\mathbb{C}^{\dagger}$ and generate all (or up to 100) chemical isomers $\mathbb{C}^{*}$; - $\mathbb{C}$-LB: a lower bound on the number of all chemical isomers $\mathbb{C}^{*}$ of $\mathbb{C}^{\dagger}$; and - $\#\mathbb{C}$: the number of all (or up to 100) chemical isomers $\mathbb{C}^{*}$ of $\mathbb{C}^{\dagger}$ generated in Stage 5. From Tables 2,3,4,5,6, we observe that for many cases the running time for generating up to 100 target chemical graphs in Stage 5 is less than 0.4 seconds. For some chemical graph $\mathbb{C}^{\dagger}$, no chemical isomer was found by our algorithm. This is because each acyclic chemical graph in the decomposition of $\mathbb{C}^{\dagger}$has no alternative acyclic chemical graph than the original one. On the other hand, some chemical graph $\mathbb{C}^{\dagger}$ such as the one in $I_{\mathrm{d}}$ in Table 2 admits an extremely large number of chemical isomers $\mathbb{C}^{*}$. Remember that we know such a lower bound $\mathbb{C}$-LB on the number of chemical isomers without generating all of them. 6. Conclusions In this paper, we studied the problem of inferring chemical structures from desired chemical properties and constraints, based on the framework proposed and developed in [18, 19, 20]. In the previous applications of the framework of inferring chemical graphs, artificial neural network (ANN) and decision tree have been used for the machine learning of Stage 3. In this paper, we used linear regression in Stage 3 for the first time and derived an MILP formulation that simulates the computation process of linear regression. We also extended a way of specifying a target value $y^{*}$ in a property so that the predicted value $\eta(f(\mathbb{C}^{\dagger}))$ of a target chemical graph $\mathbb{C}^{\dagger}$ is required to belong to an interval between two specified values $\underline{y}^{*}$ and $\overline{y}^{*}$. Furthermore, we modified a model of chemical compounds so that multi-valence chemical elements, cation and anion are treated, and introduced the rank and the adjacency-configuration of leaf-edges as new descriptors in a feature vector of a chemical graph. We implemented the new system of the framework and conducted computational experiments for Stages 1 to 5. We found 18 properties for which linear regression delivers a relatively good prediction function by using our feature vector based on the two-layered model. We also observed that an MILP formulation for inferring a chemical graph in Stage 4 can be solved efficiently over different types of test instances with complicated topological specifications. The experimental result suggests that our method can infer chemical graphs with up to 50 non-hydrogen atoms. Therefore, combination of linear regression and integer programming is a potentially useful approach to computational molecular design. It is an interesting future work to use other learning methods such as graph convolution networks, random forest and an ensemble method to construct a prediction function and derive the corresponding MILP formulations in Stages 3 and 4 in the framework. Author Contributions Conceptialization, HN and TA; methodology, HN; software, JZ, NAA and KH; validation, JZ, NAA and HN; formal analysis, HN; data resources, KH, LZ, HN and TA; writing—original draft preparation, HN; writing—review and editing, NAA and TA; project administration, HN; funding acquisition, TA. All authors contributed to editorial changes in the manuscript. All authors read and approved the final manuscript. Ethics Approval and Consent to Participate Not applicable. Acknowledgment Not applicable. Funding This research was supported, in part, by Japan Society for the Promotion of Science, Japan, under Grant #18H04113. Conflict of Interest The authors declare no conflict of interest. TA is serving as the guest editor of this journal. We declare that TA had no involvement in the peer review of this article and has no access to information regarding its peer review. Full responsibility for the editorial process for this article was delegated to AK and GP. Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. Share
# Graphs in Calculus | ISI-B.stat | Objective Problem 698 Try this beautiful problem on Graphs in Calculus, useful for ISI B.Stat Entrance. ## Graphs in Calculus | ISI B.Stat Entrance | Problem 698 Four graphs marked G1, G2, G3 and G4 are given in the figure which are graphs of the four functions $f_1(x) = |x – 1| - 1, f_2(x) = ||x –1| - 1|, f_3(x) = |x| - 1, f_4(x) = 1 - |x|$, not necessarily in the correct order.The correct order is • (a) $G_2, G_1, G_3, G_4$ • (b) $G_3, G_4, G_1, G_2$ • (c) $G_2, G_3, G_1, G_4$ • (d) $G_4, G_3, G_1, G_2$ ### Key Concepts Calculus Graph Functions TOMATO, Problem 698 Challenges and Thrills in Pre College Mathematics ## Try with Hints We take the each functions and express it in intercept form.we expand the mod i,e take the value once positive and once negetive .so we will get two equations and solve them,we will get the intersecting point also and draw the graph........ Can you now finish the problem .......... $f_1(x) = |x – 1| - 1$ $(x-1)-1=y$ $\Rightarrow x-y=2$.................(1) $\Rightarrow \frac{x}{2} +\frac{y}{-2}=0$$\Rightarrow (2,0) ,(0,-2)$ And $-(x-1)-1=y$ $\Rightarrow x+y=0$........(2) $\frac{x}{1}+\frac{y}{1}=0$$\Rightarrow (1,0),(0,1)$ Now if we draw the graph of (1) & (2) we will get the figure $G_2$ and the intersecting point is $(1,-1)$ Similarly we can draw the graphs for other functions............. The second function is $f_2(x) = ||x –1| - 1|$ i.e $x-y=2$,$x=y$,$x+y=1$...which represents the two figure as given in $G_3$. The third function $f_3(x) = |x| - 1$ which gives $x-y=1$ & $x+y=-1$..if we solve this two equations as first function then we will get $G_1$ The third function will gives the $G_4$ graph Similarly we will draw the graph for all given functions.... Therefore ,the correct ans is (c) ## Subscribe to Cheenta at Youtube This site uses Akismet to reduce spam. Learn how your comment data is processed. ### Cheenta. Passion for Mathematics Advanced Mathematical Science. Taught by olympians, researchers and true masters of the subject.
History Please fill in your query. A complete syntax description you will find on the General Help page. first | previous | 1 21 | next | last Result 1 to 20 of 35 total An introduction to categorical data analysis. (English) New York, NY: Wiley. xi, 290 p. \sterling 24.95 (1996). 1 Statistics and data analysis: an introduction. 2nd ed. (English) New York, NY: Wiley. xiii, 635 p. \sterling 17.99 (1996). 2 Robust statistical procedures: asymptotics and interrelations. (English) Wiley Series in Probability and Mathematical Statistics. New York, NY: John Wiley \& Sons Ltd. xiv, 466 p. \sterling 50.00 (1996). 3 Chichester: Wiley \& Sons. xi, 265 p. \sterling 45.00 (1996). 4 The cone of semimodular rank functions. (English) Derigs, Ulrich (ed.) et al., Operations research proceedings 1994. Selected papers of the international conference on operations research, Berlin, August 30-September 2, 1994. Berlin: Springer-Verlag. 98-102 (1995). 5 Triangle and parallelogram laws on fuzzy graphs. (English) Pattern Recognit. Lett. 15, No.8, 803-805 (1994). 6 The techniques of sampling. (Les techniques de sondage.) (French) Paris: Éditions Technip. xv, 393 p. FF 375.00 (1994). 7 Operations on fuzzy graphs. (English) Inf. Sci. 79, No.3-4, 159-170 (1994). 8 A theory of alternating paths and blossoms for proving correctness of the $O(\sqrt{V}E)$ general graph maximum matching algorithm. (English) Combinatorica 14, No.1, 71-109 (1994). 9 On fuzzification of matroids. (English) Fuzzy Sets Syst. 53, No.3, 319-327 (1993). 10 The infinite grid covers the infinite half-grid. (English) Robertson, Neil (ed.) et al., Graph structure theory. Proceedings of the AMS-IMS-SIAM joint summer research conference on graph minors held June 22 to July 5, 1991 at the University of Washington, Seattle, WA (USA). Providence, RI: American Mathematical Society. Contemp. Math. 147, 455-460 (1993). 11 Spanning properties for fuzzy matroids. (English) Fuzzy Sets Syst. 51, No.3, 313-321 (1992). 12 Fuzzy matroid sums and a greedy algorithm. (English) Fuzzy Sets Syst. 52, No.2, 189-200 (1992). 13 Statistical sampling: theory and methods. (English) Statistics: Textbooks and Monographs. 130. New York: Marcel Dekker, Inc.. xviii, 349 p. (1992). 14 Perfectly orderable graphs and almost all perfect graphs are kernel $M$- solvable. (English) Graphs Comb. 8, No.2, 103-108 (1992). 15 Which rectangular chessboards have a knight’s tour? (English) Math. Mag. 64, No.5, 325-332 (1991). 16 A survey on graphs with polynomial growth. (English) Discrete Math. 95, No.1-3, 101-117 (1991). 17 The theorems of decomposition and representation for fuzzy graphs. (English) Fuzzy Sets Syst. 42, No.2, 237-243 (1991).
# What type of division is possible in 1, 2, 4, and 8 but not the 3rd dimension? http://plus.maths.org/content/curious-quaternions There is a snippet that says this: Multiplication is very sneaky. You can only set up rules for multiplication that let you divide in dimensions 1, 2, 4 and 8. This is just a mysterious fact about the universe. Well, if you study maths it's not mysterious because you can see exactly why, but it's mysterious in the sense that when you hear about it first it just sounds completely crazy! What are they talking about here? What type of division is possible in 2 dimensions but not 3? Could you give me an example of the division in two dimensions? - The "two-dimensional" system they're referring to is the complex numbers. –  Alex Zorn Mar 12 '13 at 5:48 The complex number $a+bi$ can be identified with the ordered pair $(a,b)$ of reals. So the set $\mathbb{C}$ of complex numbers (these include the reals) can be identified with $\mathbb{R}^2$, the $2$-dimensional space over the reals. And we can indeed divide a complex number $z$ by a non-zero complex number $w$, and obtain a complex number. The division formula is fairly simple: to calculate $\frac{a+bi}{c+di}$, multiply top and bottom by $c-di$. After a little work we arrive at $\frac{ac+bd}{c^2+d^2}+\frac{bc-ad}{c^2+d^2}i$. This division turns out to have most of the same nice formal properties as ordinary division of real numbers. Remark: Already if we go on to $4$, we lose some important properties shared by the reals and the complex numbers. For as you know from the article, multiplication in the quaternions is not commutative. The situation gets even worse at $n=8$: in the octonions, multiplication is not associative. - They are talking about the reals, complex numbers, quaternions, and octonions as normed division algebras over the reals. I already have a long missive here but basically, in keeping reals, complex, quaternions, and octonions in that order, any algebra contains all of the previous algebras. Each algebra is the extension of the previous one in a MEANINGFUL way. Specifically multiplication and division from the previous algebra is preserved in the extended algebra AND we can solve more equations in the new extended algebra than we could before. Reals and Complex had been around for a while but Hamilton with an epiphany (after struggling for a while) figured out how to make it work with quaternions and then soon afterwards Graves and Cayley discovered Octonions (independently). It turned out that Octonions are the last in this chain. There is no way to extend them to something 16-th dimensional. Cool fact, for each of these extensions, the ability to multiply and divide a larger set of numbers and to solve many more equations than we could before, we do have to pay a price. For example multiplication in octonions, multiplication is not commutative and it isn't associative but we can solve equations like $$xy-yx=1.$$ -
Chapter 1 to 3 Review Questions Chapter Chapter 3 Section Chapter 1 to 3 Review Questions Solutions 40 Videos Identify the relation that is not a function. Q1 For the graph of f(x) = \sqrt{x}, identify the transformation that would not be applied to f(x) to obtain the graph of y = 2f(-2x) + 3. Q2 An American visitor to Canada uses this function to convert from temperature in degrees Celsius into degrees Fahrenheit: f(x) = 2x+ 30. a) f^{-1}(x) = \frac{x + 30}{2} b) f^{-1}(x) = \frac{x - 30}{2} c) f^{-1}(x) = \frac{x - 2}{30} d) f^{-1}(x) = \frac{x + 2}{30} Q3 The range of f(x) = -|x -2| + 3 is A. \{y\in \mathbb{R} \vert y \leq 3\} B. \{y\in \mathbb{R} \vert y \geq 3\} C. \{y\in \mathbb{R} \vert 2 \leq y \leq 3\} D. \{y\in \mathbb{R} \vert 0 \leq y \leq 2\} Q4 Which pairs of functions are equivalent? h(x) = (x + 6)(x + 3)(x-6) and b(x) = (x + 3)(x^2 -36) Q5i Which pairs of functions are equivalent? b(t) = (3t + 2)^3 and c(t) = 27t^3 +54t^2 + 36t + 8 Q5ii Which pairs of functions are equivalent? h(t) = (4 -t)^3 and c(t) =(t -4)^3 Q5iii Which pairs of functions are equivalent? f(x) = (x^2 - 4x) -(2x^2 + 2x - 4)-(x^2 +1) \displaystyle b(x) = (2x-5)(2x-1) Q5iv Which expression has the restrictions y \neq -1, 0, \frac{1}{2} on its variable? a) \displaystyle \frac{3y}{y -2} \times \frac{4(y-2)}6y{} b) \displaystyle \frac{5y(y + 3)}{4y} \times \frac{y -5}{y + 3} c) \displaystyle \frac{3y + 1}{2y -1} \div \frac{3y(y + 1)}{2y -1} d)\displaystyle \frac{10y}{y + 2} \div \frac{5}{2(y + 2)} Q6 Factor and simplify. \displaystyle \frac{x^2 -5x + 5}{x^2 -1} \times \frac{x^2 -4x -5}{x^2 -4} Q7 Find the sum and simplify. \displaystyle \frac{5x -6}{x + 1} + \frac{3x}{x -4} Q8 Given the quadratic function f(x) = 3x^2 -6x + 15, identify the coordinates of the vertex. Q9 When the equation of a quadratic function is in factored form, which feature is most easily determined? a) y-intercept b) x-intercept c) vetex d) maximum value Q10 The height, l), in metres, of a baseball after Bill hits it with a hat is described by the function h(t) = 0.8 +29.4t -4.9t^2, where tis the time in seconds after the ball is struck. What is the maximum height of the hall? A. 4.9 m B. 29.4 C. 44.9 m D. 25 m Q11 It costs a bus company \$225 to run a minibus on a ski trip, plus \$30 per passenger. The bus has a seating for 22 passengers and the company charges \$60 per far if the bus is full. For each empty seat, the company has to increase the ticket price by \$5. How many empty seats should the bus run with to maximize profit from this trip? A. 8 B. 6 C. 10 D. 2 Q12 Without drawing the graph identify the function that has two zeros. A. n(x) = -x^2 -6x -9 B. m(x) = 4(x + 1)^2 + 0.5 C. f(x) = -5(x + 1.3)^2 D. g(x) = -2(x + 3.6)^2 + 4.1 Q13 The graph of a function f(x) =x^2 -kx + k + 8 touches the x-axis at one point. What are the possible values of k? Q14 For f(x) = 2(x - 3)^2 + 5, x \geq 3, determine the equation for inverse. Q15 The relation that is also a function is A. x^2 + y^2 = 25 B. y^2 =x C. x^2 =y D. x^2 - y^2 = 25 Q16 Given f(x) = x^2 -5x + 3, then A. f(-1) = -3 B. f(-1) = 7 C. f(-1) = -1 D. f(-1) = 9 Q17 Which of the Following statements is not true? a) The horizontal line test can be used to show that a relation is a function. b) The set of all possible input values of a function is called the domain. c) he equation y = 3x + 5 describes a function. d) This set of ordered pairs describes a function: \{(0, 1), (1, 2), (3, -3), (4, -1)\} Q18 Find the range of f(x) = \frac{3}{x}. Q19 Find the inverse of f(x) = 5x -7 Q20 Find the inverse of g(x) =x^2 -5x- 6 Q21 Which of the following statements is false? a) The domain of f is the range of f^{-1}. b) The graph of the inverse can be found by reflecting y =f(x) in the line y= x. c) The domain of f^{-1} is the range of f. d) To determine the equation of the inverse, interchange x and y and solve for x. Q22 If f(x) = 3(x+ 2)^2-5, the domain must be restricted to which interval if the inverse is to be a function? a) x \geq -5 b) x \geq -2 c) x \geq 2 d) x \geq 5 Q23 The inverse of f(x) = \sqrt{x -1} is A. f^{-1}(x) = x^2 + 1, x \leq 0 B. f^{-1}(x) = x^2 - 1, x \leq 0 C. f^{-1}(x) = x^2 + 1, x \geq 0 D. f^{-1}(x) = x^2 - 1, x \leq 0 Q24 What transformations are applied to y = f(x) to obtain the graph of y = af(x-p)+ q, if a< 0, p<0, and q< 0? a) Vertical stretch by a factor of la|, followed by a translation |p| units to the left and |q| units down b) Reflection in the x-axis, vertical stretch by a factor of |a|, followed by a translation |p| units to the right and |a| units down c) Reflection in the x-axis, vertical stretch by a factor of |a|, following by a translation |p| units to the left and |q| units down. d) Reflection in the x-axis, vertical stretch by a 6 factor of |a|, followed by a translation |p| units to the right and |q| units up Q25 Find the vertex of y = -2x^2 -12x -19. Q26 The coordinates of the vertex for the graph of y =(x + 2)(x -3) are A. (-2, 3) B. ( - \frac{1}{2}, -\frac{21}{4}) C. (2, 3) D. (\frac{1}{2}, - \frac{25}{4}) Q27 The profit function for a new product is given by P(x) = -4x^2 + 28x -40, where x is the number sold in thousands. How many items must be sold for the company to break even? a) 2000 or 5000 b) 2000 or 3500 c) 5000 or 7000 d) 3500 or 7000 Q28 Which of the following statements is not true for the equation of a quadratic function? a) In standard form, they-intercept is clearly visible. b) In vertex form, the break-even points are clearly visible. c) In factored form, the x-intercepts are clearly visible. d) In vertex form, the coordinates of the vertex are clearly visible. Q29 State the value of the discriminant, D, and the number of roots for 7x^2 + 12x + 6 =0. A. D = 312, n =2 B. D = 24, n =2 C. D = 312, n =1 D. D = -24, n = 0 Q30 The simplified form of \frac{7}{ab} - \frac{2}{b} + \frac{1}{3a^2} is A. \displaystyle \frac{6}{ab -b + 3a^2}, a , b \neq 0 B. \displaystyle \frac{21a-6a^2 + b}{3a^2b}, a , b \neq 0 C. \displaystyle \frac{7a -2a^2 + b}{3a^2b}, a , b \neq 0 D. \displaystyle \frac{7a -2b + ab}{3a^3b^2}, a , b \neq 0 Q31 Simplify \displaystyle \frac{x^2 -4}{x + 3} \div \frac{2x + 4}{x^2 -9} Q32 For \displaystyle h(x) = 3x^2 -24x + 50 find i. the domain and range. ii. the relationship to the parent function. including all applied transformations iii. a sketch of the function Q33a For \displaystyle h(x) = 5 - 2\sqrt{3x + 6} find i. the domain and range. ii. the relationship to the parent function. including all applied transformations iii. a sketch of the function Q33b For \displaystyle h(x) = \frac{1}{\frac{1}{3}(x -6)} -2 find i. the domain and range. ii. the relationship to the parent function. including all applied transformations iii. a sketch of the function Q33c Sacha and Jill set off at the same time on a 30 km walk for charity. Sacha, who has trained all year for this event, walks 1.4 km/h faster than Jill, but sees a friend on the route and stops to talk for 20 min. Even with this delay, Sacha finishes the walk 2 h ahead ofJill. How fast was each person walking, and how long did it take for each person to finish the walk? Jon is running a ski trip over March Break. Last year he had 25 students go and each paid $550. This year he will increase the price and knows that for each$50 price increase, 2 fewer students will go on the trip. The bus costs a flat fee of$5500, and hotel and lift tickets cost$240 per person. Determine
# Ratio and Proportion II • Mar 18th 2013, 06:56 AM sachinrajsharma Ratio and Proportion II If a,b,c,d are continued proportion : Prove that : $(\frac{a-b}{c}+\frac{a-c}{b})^2-(\frac{d-b}{c}+\frac{d-c}{b})^2=(a-d)^(\frac{1}{c^2}-\frac{1}{b^2})^2$ After solving LH.S I got : $\frac{2(a-d)}{(bc)^2}$ But after solving R.H.S I am getting $\frac{(a-d)^2(b^2-c^2)}{(bc)^2}$ • Mar 18th 2013, 08:52 PM Punch Re: Ratio and Proportion II notice that it is of the form $A^2-B^2$, you should try manipulating using the following formula, $A^2-B^2=(A+B)(A-B)$ • Mar 18th 2013, 08:58 PM sachinrajsharma Re: Ratio and Proportion II If u noticed that I have already done..... if u can do it further than please let me know.. • Mar 19th 2013, 05:56 AM wholesalejerseys Re: Ratio and Proportion II I have for k=1,2,3... where is a term of some infinite sequence { } Some manipulation: Is there anything I can do with this now? Is it legal to multiply the sums like this? The second factor looks like a geometric series and I should be able to get the sum of the terms equal to some constant A (if series converges). (for each k) Does that seem Wholesale nfl jerseys • Mar 19th 2013, 09:24 PM ibdutt Re: Ratio and Proportion II Please recheck there is something amiss. Notice the powers of variables on both the sides??
org.apache.commons.math3.analysis.integration ## Class IterativeLegendreGaussIntegrator • All Implemented Interfaces: UnivariateIntegrator public class IterativeLegendreGaussIntegrator extends BaseAbstractUnivariateIntegrator This algorithm divides the integration interval into equally-sized sub-interval and on each of them performs a Legendre-Gauss quadrature. Because of its non-adaptive nature, this algorithm can converge to a wrong value for the integral (for example, if the function is significantly different from zero toward the ends of the integration interval). In particular, a change of variables aimed at estimating integrals over infinite intervals as proposed here should be avoided when using this class. Since: 3.1 Version: $Id: IterativeLegendreGaussIntegrator.java 1499765 2013-07-04 14:24:11Z erans$
# Preprints The Newton Institute has its own Preprint Series where the scientific papers of the Institute are freely available. If you are a Cambridge author and you are submitting your work in the next REF then you will need to follow the guidelines on the University of Cambridge Open Access webpages to ensure that your work meets Open Access requirements. In addition, if you are supported by EPSRC or another UK Research Council, all publications published on/after 1st May 2015 need a statement describing how to access the underlying research data. For more information on this please see the University of Cambridge Research Data Management website www.data.cam.ac.uk Newton Institute visitors are encouraged to submit relevant papers to the series. All papers must have been either completed at the Institute or based on work that took place partially or wholly at the Institute. You can still submit material to the preprint series even after your visit has ended as long as it is based on work that you did during your visit. Please send a PDF copy to [email protected], which will be added to the lists below. Bound hard copies of all preprints in the series are displayed in the Institute. We would also be pleased to hear from former visitors who have since completed papers based on research carried out here. Suggested text for acknowledgements: If you wish to acknowledge the support of the Institute in your paper you may wish to use the following text: The author(s) would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme [insert programme name] where work on this paper was undertaken. Papers published in the Series are listed by date of submission and may be downloaded as PDFs. Individual hard copies can also be obtained on request from the Institute. Programme Authors Title Attachments CGP A Madzvamuse; AHW Chung The bulk-surface finite element method for reaction-diffusion systems on stationary volumes PEP P Kuchment An overview of periodic elliptic operators CGP A Madzvamuse; HS Ndakwo; R Barreira Stability analysis of reaction-diffusion models on evolving domains: the effects of cross-diffusion PEP D Prandi; L Rizzi; M Seri A sub-Riemannian Santaló formula with applications to isoperimetric inequalities and Dirichlet spectral gap of hypoelliptic operators PEP KD Cherednichenko Two-scale series expansions for travelling wave packets in one-dimensional periodic media PEP TA Suslina Homogenization of nonstationary Schrödinger type equations with periodic coefficients PEP A Pushnitski; D Yafaev Localization principle for compact Hankel operators CGP S Grosskinsky; D Marahrens; A Stevens A hyrdrodynamic limit for chemotaxis in a given heterogeneous environment PEP Y Karpeshina; Y-R Lee; R Shterenberg; G Stolz Ballistic transport for the Schrödinger operator with limit-periodic or quasi-periodic potential in dimension two PEP T Sunada Exponential Riemann sums and "near" -quasicrystals PEP J Griffin On the phase-space distribution of Bloch eigenmodes for periodic point scatterers PEP M Kha; P Kuchment; A Raich Green's function asymptotics near the internal edges of spectra of periodic elliptic operators: spectral gap interior. PEP J Marklof; B Tóth Invariance principle for the periodic Lorentz gas in the Boltzmann-Grad limit PEP C Sadel Anderson transition at 2 dimensional growth rate on antitrees and spectral theory for operators with one propagating channel RGM B Duplantier; H Ho; B Le; M Zinsmeister Logarithmic coefficients and generalized multifractality of whole-plane SLE PEP V Chulaevsky Complete exponential localization in a discrete multi-particle Anderson model with interaction of infinite range PEP I Kachkovskiy On transport properties of isotropic quasiperiodic $\it XY$ spin chains RGM E Gwynne; X Sun Scaling limits for the critical Fortuin-Kastelyn model on a random planar map II: local estimates and empty reduced word exponent PEP B Helffer Lower bound for the number of critical points of minimal spectral $\it k$-partitions for $\it k$ large. PEP Y Almog; B Helffer; XB Pan Mixed normal-superconducting states in the presence of strong electric currents PEP B Helffer; Y Kordyukov; N Raymond; S Vũ Ngọc Magnetic wells in dimension three PEP H Abdul-Rahman; G Stolz A uniform area law for the entanglement of eigenstates in the disordered XY chain PEP S Jitomirskaya; I Kachkovskiy $\it L$$^2-Reducibility and localization for quasiperiodic operators RGM L Addario-Berry; B Balle; G Perarnau Diameter and stationary distributions of random \it r-out digraphs PEP S Zhang Mixed spectral types for one frequency discrete quasi-periodic Schrödinger operator MLC A Humpert; MP Allen Propagating director bend fluctuations in nematic liquid crystals PEP DM Elton Asymptotics for Erdős-Solojev zero modes in strong fields PEP V Chulaevsky Efficient localization bounds in a continuous N-particle Anderson model with long-range interaction PEP V Chulaevsky Exponential scaling limit of the single-particle Anderson model via adaptive feedback scaling DAE W Li; DKJ Lin A note on foldover of 2^k$$^-$$^p$ designs with column permutations PEP B Helffer; P Kerdelhué; J Royo-Letelier Chamber's forumla for the graphene and the Hou model with kagome periodicity and applications PEP J Fillman; Y Takahash; W Yessen Mixed spectral regimes for square Fibonacci Hamiltonians PEP B Helffer; A Kachmar From constant to non-degenerately vanishing magnetic fields in superconductivity UMC MR Goddard; D Greig Saccharomyces cerevisiae: a nomadic yeast with no niche? PEP M Levitin; M Seri Accumulation of complex eigenvalues of an indefinite Sturm-Liouville operator with a shifted Coulomb potential RGM L Addario-Berry A probabilistic approach to block sizes in random maps RGM L Addario-Berry; Y Wen Joint convergence of random quadrangulations and their cores PEP YV Fyodorov; BA Khoruzhenko; NJ Simm Fractional Brownian motion with Hurst index H=0 and the Gaussian Unitary Ensemble RGM L Addario-Berry The front location in BBM with decay of mass MFE M Hernandez; T Ma; S Wang Theory of dark energy and dark matter
# Algorithm Analysis ## Measuring Algorithm Speed There are usually many algorithms that solve the same problem. Oftentimes some of these algorithms are more efficient than others. As we've discussed, the choice between different data structures also has a big effect on the efficiency of programs. For example, removing something from a linked list is a lot faster than removing something from an array. Likewise, adding to the end of a doubly-linked list is a lot faster than adding to the end of a singly-linked list. But how much faster are these things? Today we'll look at ways of measuring how efficient different algorithms and data structures are. ## Using Execution Time One way of measuring how fast an algorithm is is to actually time it. We can write multiple programs, compile them, and run them to see which one runs faster. If we try running these programs, we'll see the affect of using a singly vs. doubly linked list. If we only add a few elements, the program is so fast that it makes little difference. Storing a lot of data, however, and the difference becomes clear: 100.07 s.08 s 1,000.09 s.07 s 10,000.22 s.09 s 50,0003.40 s.11 s 100,00013.31 s.12 s 200,00056.06 s.14 s 500,00018 m0.27 s As you can see, the difference is enormous, the singly linked list takes longer and longer when the number being added gets bigger, while the doubly linked list grows much more slowly. With the largest input size, the doubly linked list took 4000 times longer. Timing programs can give a very clear idea of how efficient a given program is. But there are some major drawbacks: • It only can really compare two algorithms. Timing only one program doesn't give you an objective measure of its speed. If we only ran the test on the singly linked list, then we wouldn't necessarily know if it's efficient or not. • There is a large variation in the time it takes to run a program. Notice that when we increased the input size of the doubly-linked list program from 100 to 1000, the time actually went down. That doesn't really make sense, but is caused by the fact that computers run more than one program at a time. Also the second time you run the same program is usually faster than the first time, since the code is loaded into the computer's cache. • It only gives you a measure of a whole program — not individual functions or algorithms. That means to test a single algorithm, we'd need to write a program that contains only that. • You need to test multiple input sizes. If we only tested up to a thousand, there would have been little difference. ## Order of Growth An alternative way to quantify the efficiency of an algorithm is to express it in terms how much more work we have to do for larger and larger inputs. The algorithm to add a node at the end of a singly-linked list is given below: 1. Create the new node with the right data. 2. Set the new node's next field to NULL. 3. If the list is empty, set the head to the new node and return. 4. Search through the list from the head for the last node: 1. Check if this node is last. 2. If not, move onto the next node. 5. Set the last node's next field to the new node. If we have an empty list, this algorithm will not need to loop at all looking for the end, so the algorithm will have 4 steps. If we have 10 items, the algorithm will loop through all 10 items to find the end, and have 24 steps (20 for the loop and 4 for the others). If we have 100 items, the algorithm will loop through all 100 to find the end, and have 204 steps. With 1000 items, the algorithm will have 2004 steps. We could write a function to give us the number of steps the algorithm takes as a function of the size of the linked list like this: $steps(n) = 2 \cdot n + 4$ Now we can say how many steps are needed for any input size. However, this is a bit too precise than we actually want to be. In algorithm analysis, we are just looking for a rough idea of how many steps an algorithm takes. In this case, when we start having more and more nodes, it should be clear that the loop dominates the algorithm. The fact that there are 4 extra steps doesn't really matter. Likewise, the fact that there are two steps in the loop doesn't really matter. When we turn this algorithm into a program, the loop could have more or less Java instructions (for example you can sometimes do things in one line of code or two, depending on how you write it). Likewise, the compiler can turn a Java instruction into any number of machine instructions. So in algorithm analysis, we will just say this algorithm takes $O(n)$ steps. That comes from removing the coefficient of 2, and the constant of 4. What this tells us is that the number of steps scales linearly with the size of the linked list. ## Big-O Notation Big-O notation is how algorithm efficiency is expressed. It basically expresses how the runtime of an algorithm scales with the input. In Big-O notation, the algorithm above is expressed as $O(n)$. The $O$ stands for "order of" and the $n$ is how big the input to the algorithm is. So with an input size of $n$ (the number of things in the linked list), this algorithm take around $n$ steps to complete. Big-O lets us capture the most important aspect of an algorithm without getting lost in details like how many seconds it takes to run, or figuring out exactly how many machine code instructions it needs to accomplish some task. The algorithm for adding to the end of a doubly linked list is given below: 1. Make the new node with the given data. 2. Set the node's next to NULL. 3. If the list is empty: 1. Set the head to node. 2. Set the tail to node. 3. Set node's prev to NULL. 4. Otherwise: 1. Set node's prev to tail. 2. Set tail's next to node. 3. Set tail to node. There is no loop in this algorithm. It will always take the same number of steps no matter how big or small the list is. Again, the exact number doesn't matter. Whether it's 6 instructions, 5 instructions, or 25 instructions, the number won't change based on how big the list is. This means that the algorithm is constant time. This is expressed as $O(1)$ in Big-O notation. The smaller the complexity, the better. So $O(1)$ is better than $O(n)$. Note, however, that for small values of $n$, a $O(n)$ algorithm could actually run faster. For example, adding to an empty singly-linked list could be faster than adding to an empty doubly-linked list. However, what happens with larger input sizes is generally all we care about.
# Properties Label 1134.2.t.c Level $1134$ Weight $2$ Character orbit 1134.t Analytic conductor $9.055$ Analytic rank $0$ Dimension $4$ CM no Inner twists $4$ # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$1134 = 2 \cdot 3^{4} \cdot 7$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 1134.t (of order $$6$$, degree $$2$$, not minimal) ## Newform invariants Self dual: no Analytic conductor: $$9.05503558921$$ Analytic rank: $$0$$ Dimension: $$4$$ Relative dimension: $$2$$ over $$\Q(\zeta_{6})$$ Coefficient field: $$\Q(\zeta_{12})$$ Defining polynomial: $$x^{4} - x^{2} + 1$$ Coefficient ring: $$\Z[a_1, a_2]$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 378) Sato-Tate group: $\mathrm{SU}(2)[C_{6}]$ ## $q$-expansion Coefficients of the $$q$$-expansion are expressed in terms of a primitive root of unity $$\zeta_{12}$$. We also show the integral $$q$$-expansion of the trace form. $$f(q)$$ $$=$$ $$q + ( -\zeta_{12} + \zeta_{12}^{3} ) q^{2} + ( 1 - \zeta_{12}^{2} ) q^{4} + ( -2 \zeta_{12} + \zeta_{12}^{3} ) q^{5} + ( -1 + 3 \zeta_{12}^{2} ) q^{7} + \zeta_{12}^{3} q^{8} +O(q^{10})$$ $$q + ( -\zeta_{12} + \zeta_{12}^{3} ) q^{2} + ( 1 - \zeta_{12}^{2} ) q^{4} + ( -2 \zeta_{12} + \zeta_{12}^{3} ) q^{5} + ( -1 + 3 \zeta_{12}^{2} ) q^{7} + \zeta_{12}^{3} q^{8} + ( 2 - \zeta_{12}^{2} ) q^{10} + ( -2 \zeta_{12} - \zeta_{12}^{3} ) q^{14} -\zeta_{12}^{2} q^{16} + ( -2 \zeta_{12} - 2 \zeta_{12}^{3} ) q^{17} + ( 4 + 4 \zeta_{12}^{2} ) q^{19} + ( -\zeta_{12} + 2 \zeta_{12}^{3} ) q^{20} + 6 \zeta_{12}^{3} q^{23} -2 q^{25} + ( 2 + \zeta_{12}^{2} ) q^{28} -9 \zeta_{12} q^{29} + ( -2 - 2 \zeta_{12}^{2} ) q^{31} + \zeta_{12} q^{32} + ( 2 + 2 \zeta_{12}^{2} ) q^{34} + ( -\zeta_{12} - 4 \zeta_{12}^{3} ) q^{35} + ( -4 + 4 \zeta_{12}^{2} ) q^{37} + ( -8 \zeta_{12} + 4 \zeta_{12}^{3} ) q^{38} + ( 1 - 2 \zeta_{12}^{2} ) q^{40} + ( 2 \zeta_{12} + 2 \zeta_{12}^{3} ) q^{41} + ( 8 - 8 \zeta_{12}^{2} ) q^{43} -6 \zeta_{12}^{2} q^{46} + ( -2 \zeta_{12} - 2 \zeta_{12}^{3} ) q^{47} + ( -8 + 3 \zeta_{12}^{2} ) q^{49} + ( 2 \zeta_{12} - 2 \zeta_{12}^{3} ) q^{50} + ( -3 \zeta_{12} + 3 \zeta_{12}^{3} ) q^{53} + ( -3 \zeta_{12} + 2 \zeta_{12}^{3} ) q^{56} + 9 q^{58} + ( -7 \zeta_{12} + 14 \zeta_{12}^{3} ) q^{59} + ( -4 + 2 \zeta_{12}^{2} ) q^{61} + ( 4 \zeta_{12} - 2 \zeta_{12}^{3} ) q^{62} - q^{64} + ( -14 + 14 \zeta_{12}^{2} ) q^{67} + ( -4 \zeta_{12} + 2 \zeta_{12}^{3} ) q^{68} + ( 1 + 4 \zeta_{12}^{2} ) q^{70} -6 \zeta_{12}^{3} q^{71} + ( -14 + 7 \zeta_{12}^{2} ) q^{73} -4 \zeta_{12}^{3} q^{74} + ( 8 - 4 \zeta_{12}^{2} ) q^{76} -11 \zeta_{12}^{2} q^{79} + ( \zeta_{12} + \zeta_{12}^{3} ) q^{80} + ( -2 - 2 \zeta_{12}^{2} ) q^{82} + ( -10 \zeta_{12} + 20 \zeta_{12}^{3} ) q^{83} + 6 \zeta_{12}^{2} q^{85} + 8 \zeta_{12}^{3} q^{86} + ( -6 \zeta_{12} + 12 \zeta_{12}^{3} ) q^{89} + 6 \zeta_{12} q^{92} + ( 2 + 2 \zeta_{12}^{2} ) q^{94} -12 \zeta_{12} q^{95} + ( 4 + 4 \zeta_{12}^{2} ) q^{97} + ( 5 \zeta_{12} - 8 \zeta_{12}^{3} ) q^{98} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q + 2q^{4} + 2q^{7} + O(q^{10})$$ $$4q + 2q^{4} + 2q^{7} + 6q^{10} - 2q^{16} + 24q^{19} - 8q^{25} + 10q^{28} - 12q^{31} + 12q^{34} - 8q^{37} + 16q^{43} - 12q^{46} - 26q^{49} + 36q^{58} - 12q^{61} - 4q^{64} - 28q^{67} + 12q^{70} - 42q^{73} + 24q^{76} - 22q^{79} - 12q^{82} + 12q^{85} + 12q^{94} + 24q^{97} + O(q^{100})$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/1134\mathbb{Z}\right)^\times$$. $$n$$ $$325$$ $$407$$ $$\chi(n)$$ $$1 - \zeta_{12}^{2}$$ $$\zeta_{12}^{2}$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 593.1 0.866025 + 0.500000i −0.866025 − 0.500000i 0.866025 − 0.500000i −0.866025 + 0.500000i −0.866025 + 0.500000i 0 0.500000 0.866025i −1.73205 0 0.500000 + 2.59808i 1.00000i 0 1.50000 0.866025i 593.2 0.866025 0.500000i 0 0.500000 0.866025i 1.73205 0 0.500000 + 2.59808i 1.00000i 0 1.50000 0.866025i 1025.1 −0.866025 0.500000i 0 0.500000 + 0.866025i −1.73205 0 0.500000 2.59808i 1.00000i 0 1.50000 + 0.866025i 1025.2 0.866025 + 0.500000i 0 0.500000 + 0.866025i 1.73205 0 0.500000 2.59808i 1.00000i 0 1.50000 + 0.866025i $$n$$: e.g. 2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles ## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 3.b odd 2 1 inner 63.k odd 6 1 inner 63.s even 6 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 1134.2.t.c 4 3.b odd 2 1 inner 1134.2.t.c 4 7.d odd 6 1 1134.2.l.b 4 9.c even 3 1 378.2.k.c 4 9.c even 3 1 1134.2.l.b 4 9.d odd 6 1 378.2.k.c 4 9.d odd 6 1 1134.2.l.b 4 21.g even 6 1 1134.2.l.b 4 63.g even 3 1 2646.2.d.a 4 63.i even 6 1 378.2.k.c 4 63.k odd 6 1 inner 1134.2.t.c 4 63.k odd 6 1 2646.2.d.a 4 63.n odd 6 1 2646.2.d.a 4 63.s even 6 1 inner 1134.2.t.c 4 63.s even 6 1 2646.2.d.a 4 63.t odd 6 1 378.2.k.c 4 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 378.2.k.c 4 9.c even 3 1 378.2.k.c 4 9.d odd 6 1 378.2.k.c 4 63.i even 6 1 378.2.k.c 4 63.t odd 6 1 1134.2.l.b 4 7.d odd 6 1 1134.2.l.b 4 9.c even 3 1 1134.2.l.b 4 9.d odd 6 1 1134.2.l.b 4 21.g even 6 1 1134.2.t.c 4 1.a even 1 1 trivial 1134.2.t.c 4 3.b odd 2 1 inner 1134.2.t.c 4 63.k odd 6 1 inner 1134.2.t.c 4 63.s even 6 1 inner 2646.2.d.a 4 63.g even 3 1 2646.2.d.a 4 63.k odd 6 1 2646.2.d.a 4 63.n odd 6 1 2646.2.d.a 4 63.s even 6 1 ## Hecke kernels This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(1134, [\chi])$$: $$T_{5}^{2} - 3$$ $$T_{11}$$ ## Hecke characteristic polynomials $p$ $F_p(T)$ $2$ $$1 - T^{2} + T^{4}$$ $3$ $$T^{4}$$ $5$ $$( -3 + T^{2} )^{2}$$ $7$ $$( 7 - T + T^{2} )^{2}$$ $11$ $$T^{4}$$ $13$ $$T^{4}$$ $17$ $$144 + 12 T^{2} + T^{4}$$ $19$ $$( 48 - 12 T + T^{2} )^{2}$$ $23$ $$( 36 + T^{2} )^{2}$$ $29$ $$6561 - 81 T^{2} + T^{4}$$ $31$ $$( 12 + 6 T + T^{2} )^{2}$$ $37$ $$( 16 + 4 T + T^{2} )^{2}$$ $41$ $$144 + 12 T^{2} + T^{4}$$ $43$ $$( 64 - 8 T + T^{2} )^{2}$$ $47$ $$144 + 12 T^{2} + T^{4}$$ $53$ $$81 - 9 T^{2} + T^{4}$$ $59$ $$21609 + 147 T^{2} + T^{4}$$ $61$ $$( 12 + 6 T + T^{2} )^{2}$$ $67$ $$( 196 + 14 T + T^{2} )^{2}$$ $71$ $$( 36 + T^{2} )^{2}$$ $73$ $$( 147 + 21 T + T^{2} )^{2}$$ $79$ $$( 121 + 11 T + T^{2} )^{2}$$ $83$ $$90000 + 300 T^{2} + T^{4}$$ $89$ $$11664 + 108 T^{2} + T^{4}$$ $97$ $$( 48 - 12 T + T^{2} )^{2}$$
## Wednesday, September 13, 2006 ### LaTeX: thick line in table Sometime we want the top and bottom horizontal line thicker, try booktabs package: \usepackage{booktabs}....\begin{tabular}{>{\large}c >{\large\bfseries}l >{\itshape}c }\toprule[3pt]A & B & C\\ \midrule[.5pt]\hline 100 & 10 & 1 \\\bottomrule\end{tabular} another approach (not recommend) is put "\doublerulesep=0.4pt" in pre-, this defines the distance between two lines, then use '\hline \hline ...'
0 Research Papers # Evaluation of Effectiveness of Er,Cr:YSGG Laser For Root Canal Disinfection: Theoretical Simulation of Temperature Elevations in Root Dentin [+] Author and Article Information L. Zhu1 Department of Mechanical Engineering, University of Maryland, Baltimore County, Baltimore, MD [email protected] M. Tolba Department of Endodonics, Prosthodontics and Operative Dentistry, and Department of Health Promotion and Policy, University of Maryland, Baltimore, Baltimore, MD 21201 D. Arola Department of Mechanical Engineering, and Department of Endodonics, Prosthodontics and Operative Dentistry, University of Maryland, Baltimore County, Baltimore, MD 21250 M. Salloum Department of Mechanical Engineering, University of Maryland, Baltimore County, Baltimore, MD 21250 F. Meza Department of Endodonics, Prosthodontics and Operative Dentistry, University of Maryland, Baltimore, Baltimore, MD 21201 1 Corresponding author. J Biomech Eng 131(7), 071004 (Jun 12, 2009) (8 pages) doi:10.1115/1.3147801 History: Received December 15, 2008; Revised April 16, 2009; Published June 12, 2009 ## Abstract Erbium, chromium: yttrium, scandium, gallium, garnet (Er,Cr:YSGG) lasers are currently being investigated for disinfecting the root canal system. Prior to using laser therapy, it is important to understand the temperature distribution and to assess thermal damage to the surrounding tissue. In this study, a theoretical simulation using the Pennes bioheat equation is conducted to evaluate how heat spreads from the canal surface using an Er,Cr:YSGG laser. Results of the investigation show that some of the proposed treatment protocols for killing bacteria in the deep dentin are ineffective, even for long heating durations. Based on the simulation, an alternative treatment protocol is identified that has improved effectiveness and is less likely to introduce collateral damage to the surrounding tissue. The alternative protocol uses 350 mW laser power with repeating laser tip movement to achieve bacterial disinfection in the deep dentin ($800 μm$ lateral from the canal surface), while avoiding thermal damage to the surrounding tissue $(T<47°C)$. The alternative treatment protocol has the potential to not only achieve bacterial disinfection of deep dentin but also shorten the treatment time, thereby minimizing potential patient discomfort during laser procedures. <> ## Figures Figure 1 Schematic diagram of the axisymmetrical geometry of the tooth root and surrounding tissue. Laser tip is moved up and down in the canal and the heating time at each cylindrical segment is 1 s (6). Figure 2 Heat flux imposed to each cylindrical canal surface induced by the pulsed laser Figure 3 Temperature contours of the root dentin and surrounding tissue during laser treatment using 175 mW, and the simulation time is 10 s. The white solid line represents the root-tissue interface. The closest location along the interface to the root canal wall is marked by “A.” Figure 4 Radial temperature distribution along the white dashed lines shown in Fig. 3 during laser treatment using a laser power of 175 mW. Note that the temperature distribution represents that at various time instants. Figure 5 Temperatures at various locations including the canal surface (0 mm) and in the deep dentin (200 μm, 400 μm, 600 μm, and 800 μm lateral from the canal surface) at t=6 s. The effect of the laser power is represented by different bars. The primary y axis on the left gives the actual temperature values, while the secondary y axis on the right illustrates the temperature elevations from the baseline of 37°C. Figure 6 Temperature profile along the root-tissue interface at t=6 s using the 350 mW laser. The white dashed line represents the dentin location with a radial offset of 800 μm from the root canal surface. Figure 7 Schematic diagram of the proposed treatment protocol. Different line segments represent different cylindrical surface segments of the root canal. Laser tip stays in each surface segment for 2 s, and the total time for each cycle is 16 s. Figure 8 Simulated temperature contours in the dentin and surrounding tissue using the proposed treatment protocol Figure 9 Heat accumulation in the dentin during the first and second heating cycle (32 s) is illustrated by the average temperature of the entire dentin at various time instants Figure 10 Temperature profiles along the canal surface in the axial direction from the crown side. Initially the temperature distribution along the canal surface is represented by the heavy solid line. After the laser tip is moved to the next segment, the temperature distribution is replaced by the next solid line. Notice the shift of maximum temperature on the canal surface due to the fact that the laser tip is moved from one segment to another. Figure 11 Temperature distribution in the deep dentin (800 μm lateral from the canal surface, along the white dashed line in Fig. 8 at various time instants. Moving the laser tip up and down results in the shift of the maximum temperature. (a) The first heating cycle (0–16 s) and (b) the second heating cycle (16–32 s) Figure 12 Heat penetration from the canal surface to the soft tissue region is shown by the increasing temperature along the root-tissue interface with time. Temperatures are plotted along the root-tissue interface (the solid white line in Fig. 3) from the apex to the crown. The maximum temperature is lower than the critical temperature of 47°C when the heating time is shorter than 26 s. ## Discussions Some tools below are only available to our subscribers or users with an online account. ### Related Content Customize your page view by dragging and repositioning the boxes below. Related Journal Articles Related Proceedings Articles Related eBook Content Topic Collections
시간 제한메모리 제한제출정답맞힌 사람정답 비율 5 초 1024 MB333100.000% ## 문제 Jimmy's homework is to find a long increasing subsequence of a given sequence $a_1, a_2, \ldots, a_n$. But the sequence is really long! Jimmy doesn't know how to do this effectively. So Jimmy takes a greedy approach. He begins by picking the first number in the sequence. Then he repeats the following rule until it no longer applies: pick the next number in the sequence that is bigger than the number he just picked. More precisely, Jimmy picks the subsequence $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$ where: • $i_1 = 1$ • For each $1 \leq j < k$, $i_{j+1}$ is the smallest index greater than $i_j$ such that $a_{i_j} < a_{i_{j+1}}$ • $a_{i_k} \geq a_\ell$ for every $\ell > i_k$ Jimmy realizes that this may not produce a very long subsequence. So to help him find other subsequences, he removes $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$ from the given sequence and finds another increasing subsequence using his greedy algorithm on the remaining sequence. He repeats this until he has used up all numbers from the original sequence. But even this is starting to sound exhausting for Jimmy, so he asks you to help him by finding all of the sequences that would be formed by repeatedly applying the above greedy procedure and removing the resulting subsequence until the given sequence is empty. ## 입력 The first line of input contains a single integer $n$ ($1 \leq n \leq 2 \times 10^5$) indicating the length of the original sequence. The second line of input contains $n$ integers $a_1, a_2, \ldots, a_n$ ($0 \leq a_i \leq 10^9$). ## 출력 The first line of output contains the number $s$ of sequences that are produced. The next $s$ lines contain the sequences, the $i$th such line containing the increasing subsequence that is formed in the $i$th application of the greedy algorithm. ## 예제 입력 1 7 2 2 1 5 3 4 6 ## 예제 출력 1 3 2 5 6 2 3 4 1 ## 예제 입력 2 7 8 6 7 5 3 0 9 ## 예제 출력 2 5 8 9 6 7 5 3 0 ## 출처 • 문제를 만든 사람: Zachary Friggstad
src/HOL/ex/Set_Theory.thy author wenzelm Wed Jun 22 10:09:20 2016 +0200 (2016-06-22) changeset 63343 fb5d8a50c641 parent 61945 1135b8de26c3 child 63804 70554522bf98 permissions -rw-r--r-- bundle lifting_syntax; 1 (* Title: HOL/ex/Set_Theory.thy 2 Author: Tobias Nipkow and Lawrence C Paulson 3 Copyright 1991 University of Cambridge 4 *) 5 6 section \<open>Set Theory examples: Cantor's Theorem, Schröder-Bernstein Theorem, etc.\<close> 7 8 theory Set_Theory 9 imports Main 10 begin 11 12 text\<open> 13 These two are cited in Benzmueller and Kohlhase's system description 14 of LEO, CADE-15, 1998 (pages 139-143) as theorems LEO could not 15 prove. 16 \<close> 17 18 lemma "(X = Y \<union> Z) = 19 (Y \<subseteq> X \<and> Z \<subseteq> X \<and> (\<forall>V. Y \<subseteq> V \<and> Z \<subseteq> V \<longrightarrow> X \<subseteq> V))" 20 by blast 21 22 lemma "(X = Y \<inter> Z) = 23 (X \<subseteq> Y \<and> X \<subseteq> Z \<and> (\<forall>V. V \<subseteq> Y \<and> V \<subseteq> Z \<longrightarrow> V \<subseteq> X))" 24 by blast 25 26 text \<open> 27 Trivial example of term synthesis: apparently hard for some provers! 28 \<close> 29 30 schematic_goal "a \<noteq> b \<Longrightarrow> a \<in> ?X \<and> b \<notin> ?X" 31 by blast 32 33 34 subsection \<open>Examples for the \<open>blast\<close> paper\<close> 35 36 lemma "(\<Union>x \<in> C. f x \<union> g x) = \<Union>(f C) \<union> \<Union>(g C)" 37 \<comment> \<open>Union-image, called \<open>Un_Union_image\<close> in Main HOL\<close> 38 by blast 39 40 lemma "(\<Inter>x \<in> C. f x \<inter> g x) = \<Inter>(f C) \<inter> \<Inter>(g C)" 41 \<comment> \<open>Inter-image, called \<open>Int_Inter_image\<close> in Main HOL\<close> 42 by blast 43 44 lemma singleton_example_1: 45 "\<And>S::'a set set. \<forall>x \<in> S. \<forall>y \<in> S. x \<subseteq> y \<Longrightarrow> \<exists>z. S \<subseteq> {z}" 46 by blast 47 48 lemma singleton_example_2: 49 "\<forall>x \<in> S. \<Union>S \<subseteq> x \<Longrightarrow> \<exists>z. S \<subseteq> {z}" 50 \<comment> \<open>Variant of the problem above.\<close> 51 by blast 52 53 lemma "\<exists>!x. f (g x) = x \<Longrightarrow> \<exists>!y. g (f y) = y" 54 \<comment> \<open>A unique fixpoint theorem --- \<open>fast\<close>/\<open>best\<close>/\<open>meson\<close> all fail.\<close> 55 by metis 56 57 58 subsection \<open>Cantor's Theorem: There is no surjection from a set to its powerset\<close> 59 60 lemma cantor1: "\<not> (\<exists>f:: 'a \<Rightarrow> 'a set. \<forall>S. \<exists>x. f x = S)" 61 \<comment> \<open>Requires best-first search because it is undirectional.\<close> 62 by best 63 64 schematic_goal "\<forall>f:: 'a \<Rightarrow> 'a set. \<forall>x. f x \<noteq> ?S f" 65 \<comment> \<open>This form displays the diagonal term.\<close> 66 by best 67 68 schematic_goal "?S \<notin> range (f :: 'a \<Rightarrow> 'a set)" 69 \<comment> \<open>This form exploits the set constructs.\<close> 70 by (rule notI, erule rangeE, best) 71 72 schematic_goal "?S \<notin> range (f :: 'a \<Rightarrow> 'a set)" 73 \<comment> \<open>Or just this!\<close> 74 by best 75 76 77 subsection \<open>The Schröder-Bernstein Theorem\<close> 78 79 lemma disj_lemma: "- (f X) = g' (-X) \<Longrightarrow> f a = g' b \<Longrightarrow> a \<in> X \<Longrightarrow> b \<in> X" 80 by blast 81 82 lemma surj_if_then_else: 83 "-(f X) = g' (-X) \<Longrightarrow> surj (\<lambda>z. if z \<in> X then f z else g' z)" 84 by (simp add: surj_def) blast 85 86 lemma bij_if_then_else: 87 "inj_on f X \<Longrightarrow> inj_on g' (-X) \<Longrightarrow> -(f X) = g' (-X) \<Longrightarrow> 88 h = (\<lambda>z. if z \<in> X then f z else g' z) \<Longrightarrow> inj h \<and> surj h" 89 apply (unfold inj_on_def) 90 apply (simp add: surj_if_then_else) 91 apply (blast dest: disj_lemma sym) 92 done 93 94 lemma decomposition: "\<exists>X. X = - (g (- (f X)))" 95 apply (rule exI) 96 apply (rule lfp_unfold) 97 apply (rule monoI, blast) 98 done 99 100 theorem Schroeder_Bernstein: 101 "inj (f :: 'a \<Rightarrow> 'b) \<Longrightarrow> inj (g :: 'b \<Rightarrow> 'a) 102 \<Longrightarrow> \<exists>h:: 'a \<Rightarrow> 'b. inj h \<and> surj h" 103 apply (rule decomposition [where f=f and g=g, THEN exE]) 104 apply (rule_tac x = "(\<lambda>z. if z \<in> x then f z else inv g z)" in exI) 105 \<comment>\<open>The term above can be synthesized by a sufficiently detailed proof.\<close> 106 apply (rule bij_if_then_else) 107 apply (rule_tac [4] refl) 108 apply (rule_tac [2] inj_on_inv_into) 109 apply (erule subset_inj_on [OF _ subset_UNIV]) 110 apply blast 111 apply (erule ssubst, subst double_complement, erule inv_image_comp [symmetric]) 112 done 113 114 115 subsection \<open>A simple party theorem\<close> 116 117 text\<open>\emph{At any party there are two people who know the same 118 number of people}. Provided the party consists of at least two people 119 and the knows relation is symmetric. Knowing yourself does not count 120 --- otherwise knows needs to be reflexive. (From Freek Wiedijk's talk 121 at TPHOLs 2007.)\<close> 122 123 lemma equal_number_of_acquaintances: 124 assumes "Domain R <= A" and "sym R" and "card A \<ge> 2" 125 shows "\<not> inj_on (%a. card(R {a} - {a})) A" 126 proof - 127 let ?N = "%a. card(R {a} - {a})" 128 let ?n = "card A" 129 have "finite A" using \<open>card A \<ge> 2\<close> by(auto intro:ccontr) 130 have 0: "R A <= A" using \<open>sym R\<close> \<open>Domain R <= A\<close> 131 unfolding Domain_unfold sym_def by blast 132 have h: "ALL a:A. R {a} <= A" using 0 by blast 133 hence 1: "ALL a:A. finite(R {a})" using \<open>finite A\<close> 134 by(blast intro: finite_subset) 135 have sub: "?N A <= {0..<?n}" 136 proof - 137 have "ALL a:A. R {a} - {a} < A" using h by blast 138 thus ?thesis using psubset_card_mono[OF \<open>finite A\<close>] by auto 139 qed 140 show "~ inj_on ?N A" (is "~ ?I") 141 proof 142 assume ?I 143 hence "?n = card(?N A)" by(rule card_image[symmetric]) 144 with sub \<open>finite A\<close> have 2[simp]: "?N A = {0..<?n}" 145 using subset_card_intvl_is_intvl[of _ 0] by(auto) 146 have "0 : ?N A" and "?n - 1 : ?N A" using \<open>card A \<ge> 2\<close> by simp+ 147 then obtain a b where ab: "a:A" "b:A" and Na: "?N a = 0" and Nb: "?N b = ?n - 1" 148 by (auto simp del: 2) 149 have "a \<noteq> b" using Na Nb \<open>card A \<ge> 2\<close> by auto 150 have "R {a} - {a} = {}" by (metis 1 Na ab card_eq_0_iff finite_Diff) 151 hence "b \<notin> R {a}" using \<open>a\<noteq>b\<close> by blast 152 hence "a \<notin> R {b}" by (metis Image_singleton_iff assms(2) sym_def) 153 hence 3: "R {b} - {b} <= A - {a,b}" using 0 ab by blast 154 have 4: "finite (A - {a,b})" using \<open>finite A\<close> by simp 155 have "?N b <= ?n - 2" using ab \<open>a\<noteq>b\<close> \<open>finite A\<close> card_mono[OF 4 3] by simp 156 then show False using Nb \<open>card A \<ge> 2\<close> by arith 157 qed 158 qed 159 160 text \<open> 161 From W. W. Bledsoe and Guohui Feng, SET-VAR. JAR 11 (3), 1993, pages 162 293-314. 163 164 Isabelle can prove the easy examples without any special mechanisms, 165 but it can't prove the hard ones. 166 \<close> 167 168 lemma "\<exists>A. (\<forall>x \<in> A. x \<le> (0::int))" 169 \<comment> \<open>Example 1, page 295.\<close> 170 by force 171 172 lemma "D \<in> F \<Longrightarrow> \<exists>G. \<forall>A \<in> G. \<exists>B \<in> F. A \<subseteq> B" 173 \<comment> \<open>Example 2.\<close> 174 by force 175 176 lemma "P a \<Longrightarrow> \<exists>A. (\<forall>x \<in> A. P x) \<and> (\<exists>y. y \<in> A)" 177 \<comment> \<open>Example 3.\<close> 178 by force 179 180 lemma "a < b \<and> b < (c::int) \<Longrightarrow> \<exists>A. a \<notin> A \<and> b \<in> A \<and> c \<notin> A" 181 \<comment> \<open>Example 4.\<close> 182 by auto \<comment>\<open>slow\<close> 183 184 lemma "P (f b) \<Longrightarrow> \<exists>s A. (\<forall>x \<in> A. P x) \<and> f s \<in> A" 185 \<comment> \<open>Example 5, page 298.\<close> 186 by force 187 188 lemma "P (f b) \<Longrightarrow> \<exists>s A. (\<forall>x \<in> A. P x) \<and> f s \<in> A" 189 \<comment> \<open>Example 6.\<close> 190 by force 191 192 lemma "\<exists>A. a \<notin> A" 193 \<comment> \<open>Example 7.\<close> 194 by force 195 196 lemma "(\<forall>u v. u < (0::int) \<longrightarrow> u \<noteq> \<bar>v\<bar>) 197 \<longrightarrow> (\<exists>A::int set. -2 \<in> A & (\<forall>y. \<bar>y\<bar> \<notin> A))" 198 \<comment> \<open>Example 8 needs a small hint.\<close> 199 by force 200 \<comment> \<open>not \<open>blast\<close>, which can't simplify \<open>-2 < 0\<close>\<close> 201 202 text \<open>Example 9 omitted (requires the reals).\<close> 203 204 text \<open>The paper has no Example 10!\<close> 205 206 lemma "(\<forall>A. 0 \<in> A \<and> (\<forall>x \<in> A. Suc x \<in> A) \<longrightarrow> n \<in> A) \<and> 207 P 0 \<and> (\<forall>x. P x \<longrightarrow> P (Suc x)) \<longrightarrow> P n" 208 \<comment> \<open>Example 11: needs a hint.\<close> 209 by(metis nat.induct) 210 211 lemma 212 "(\<forall>A. (0, 0) \<in> A \<and> (\<forall>x y. (x, y) \<in> A \<longrightarrow> (Suc x, Suc y) \<in> A) \<longrightarrow> (n, m) \<in> A) 213 \<and> P n \<longrightarrow> P m" 214 \<comment> \<open>Example 12.\<close> 215 by auto 216 217 lemma 218 "(\<forall>x. (\<exists>u. x = 2 * u) = (\<not> (\<exists>v. Suc x = 2 * v))) \<longrightarrow> 219 (\<exists>A. \<forall>x. (x \<in> A) = (Suc x \<notin> A))" 220 \<comment> \<open>Example EO1: typo in article, and with the obvious fix it seems 221 to require arithmetic reasoning.\<close> 222 apply clarify 223 apply (rule_tac x = "{x. \<exists>u. x = 2 * u}" in exI, auto) 224 apply metis+ 225 done 226 227 end `
# Topology and analytic functions Is there a topology T on the set of complex numbers such that the class of T-continuous functions and the class of analytic functions coincide. - I doubt. Gluing continuous functions is usual in general topology, but is forbidden for analytic functions. –  Berci May 5 at 23:21 a quick comment: If such a T exists then, observing bicontinuity of az +b. The translations, dialations and rotations of an open sets in T will be open. –  rohit May 6 at 5:26 Assume that $S \in T$ and S is bounded. Take unit disk D , and a point $p \in D$. Let r be such $B_{3r}(p)$ fits inside D. Translate S to origin s.t. a point in S matches with origin .Crunch S to fit in $B_{r}(0)$. Then translate by p.We get a U open in T s.t. $p \in U$ s.t. U fits in D. This we can do for every $p \in D$. Thus $D \in T$ It easily follows by translations and dilations applied to unit disc D , T contains our open sets in archimedian topology. –  rohit May 6 at 12:28 Since complex functions take values in $\mathbb C$, "T-continuous" can be understood in two ways: continuous from T-topology to T-topology, or from T-topology to the standard topology. Which one did you mean? –  75064 May 6 at 21:11 analytic or entire? –  user59671 May 23 at 15:40 For infinitely differentiable functions on $\mathbb{R}$, there is no such topology $T$: If there is, as rohit has noted, due to the bicontinuity of $ax+b$, translations and dilations of open sets will be open. If $U\in T$ such $U$ is bounded, then we can construct the usual topology on $\mathbb{R}$ from $U$ (see rohit's comment) Now, let $V \in T$, such that $V\neq \mathbb{R}$ (wlog, assume $0\notin V$). Then, take any $f \in C_c^{\infty}(\mathbb{R})$. Now, $f^{-1} (V)$ is bounded. From this, it follows that any such topology $T$ is finer than the usual topology. But, as Berci said, in the usual topology, pasting doesn't work in general for differentiable functions. (Take $x$ on $[0,\infty)$ and $-x$ on $(-\infty,0]$) Edit: As NielsDiepeveen pointed out, my earlier answer (about the complex case) was wrong. - What your proof for the complex case seems to show, is that, under certain assumptions, there is a T-continuous map on a T-closed subspace ($A\cup B$) that has no T-continuous extension to all of $\mathbb{C}$. That does not look like a contradiction to me. Am I missing something? –  Niels Diepeveen May 29 at 11:55 @NielsDiepeveen yes, you're right. I was thinking about Tietze's extension theorem, but forgot that the topology on the other side is $T$. (Also, I didn't mention normality). Have removed that part until I figure out a way to salvage it. –  Amudhan May 30 at 13:44 Why is $f^{-1}(V)$ bounded? –  dfeuer Jul 4 at 20:32 @dfeuer: $f^{-1}(V)$ is a subset of the (compact) support of $f$, because $0 \notin V$ –  Niels Diepeveen Jul 10 at 11:43 It seems difficult to be conjugation-variant. Any set that can be constructed from some set of entire functions (without using complex conjugation) has its conjugate constructible by conjugating the coefficients of the functions and all parameters in the construction. But we need $\bar{z}$ to be discontinuous and $z$ continuous.
# How unique are extensions of TQFTs to lower dimension? Say I have an "ordinary" TQFT $F$ of dimension $n$, assigning groups or vector spaces to closed $(n-1)$-manifolds and linear maps to cobordisms. Consider the different ways $F$ can be obtained from a TQFT "extended one step" which assigns categories to manifolds of dimension $n-2$ (often derived categories of algebras or dg / $A_{\infty}$ algebras). Is there expected to be any uniqueness to these extensions of $F$? For example, are there cases where you can extend the same $F$ two ways, but the corresponding (derived) categories associated to a codimension-2 manifold aren't equivalent? This question is motivated partly by the situation in bordered Heegaard Floer homology; the dg algebra associated to a surface depends on a choice of parametrization, but these distinct algebras end up having equivalent derived categories (of type D or type A modules). I'd be happy with a toy example in lower dimensions or anything illustrating this uniqueness holding or not holding- maybe in some restricted context. I'd also be curious to know if there's an $F$ which doesn't extend at all (maybe this is basic knowledge for the experts...) Thanks! The question of which tqfts extend is a very interesting one. To make the question more mathematically precise, we can fix the target n-categories and ask for the tqfts to extend with respect to those targets. Then I can give precise answers. In general there are both existence and uniqueness issues, even in the n=2 case. That case is pretty instructive, and it doesn't get much simpler by increasing dimensions. It is well known that a 2D non-extended (oriented) tqft in vector spaces is the same thing as a commutative Frobenius algebra. Now we can ask that this "extends to points" using our favorite target 2-category, like linear categories or the 2-category of algebras, bimodules, and maps. There turns out to not be much difference between these two choices, so I will work with the latter. If you prefer the former, than you should just think of the category of modules associated to the algebra. It doesn't really effect what I am going to say. Now a 2D extended (oriented) tqft in algebras, bimodules, and maps is the same thing as a non-commutative symmetric Frobenius algebra which is fully-dualizable. Over a perfect field fully-dualizable is the same as finite dimensional and semisimple. Now if A is such a fully-dualizable Frobenius algebra, then the center Z(A) will be a commutative Frobenius algebra and this is the algebra corresponding the the non-extended part of the 2D TQFTs. Since A is semisimple, this commutative algebra is semisimple. Thus, not every 2D TQFT extends to points (at least with the usual target categories). An explicit counter example is given by the non-semisimple commutative Frobenius algebra $k[x]/x^{n+1}$, where the trace is given by picking off the $x^n$-coefficient. Moreover we also see that uniqueness is an issue. For example consider working over the real numbers. Then the algebra $\mathbb{R}$, (with trivial trace) is a semisimple Frobenius algebra. The center is, of course, also $\mathbb{R}$. But we also have the quaternion algebra $\mathbb{H}$, which is also a semisimple Frobenius algebra (the trace is projection onto the real line in $\mathbb{H}$). The center of $\mathbb{H}$ is also $\mathbb{R}$, and so this gives an example of two extended tqfts which have the same underlying non-extended 2D tqft. Note that $\mathbb{H}$ and $\mathbb{R}$ are not Morita equivalent, and so in particular these are genuinely different extended tqfts. Different extended 2D tqfts can have the same underlying non-extended 2D tqft. For your $n=2$ case, with vector spaces assigned to 1-manifolds, the extension to 0-manifolds is unique up to Morita equivalence (of linear 1-categories). This assumes that the extensions assign a semisimple 1-category to a 0-manifold, which might be implied by the extended TQFT axioms depending on which version of those axioms you choose. For $n-1>1$, there is a categorified version of Morita equivalence for $(n-1)$-categories, and Morita equivalent $(n-1)$-categories lead to TQFTs which are isomorphic at the $(n-1,n)$ level. More generally, the $k$-categories assigned by the two TQFTs to closed $(n-1-k)$-manifolds are Morita equivalent. Morita equivalent $(n-1)$-categories can look somewhat different. For example, Morita equivalent tensor categories (2-categories with one 0-morphism) can have different numbers of (isomorphism classes of) simple objects. (Warning: There is more than one version of categorified Morita equivalence in the literature. I have in mind the version that uses bimodules rather than functors "all the way up", as described here.) • Cool! So, to make sure I understand what you're saying, if I have an $(n-1, n)$ TQFT $F$ and two extensions of $F$ all the way down to points, and furthermore if the extensions assign Morita equivalent $(n-1)$-categories to the point, then they assign equivalent categories at every level, right? What happens if you have two fully-extended TQFTs which assign non-equivalent $(n-1)$-categories to a point (and more generally, non-equivalent categories to codim-2 manifolds). Do they have a chance of still being isomorphic at the $(n-1,n)$-level? – Andy Manion Apr 22 '13 at 18:38 • In answer to your first question: yes, right. I'm not sure about the second question (non-Morita-equivalent $(n-1)$-categories giving rise to same $(n-1, n)$ structure). I'm not even sure which way I would bet, if I had to bet. – Kevin Walker Apr 22 '13 at 21:25
# Integration, Cointegration, and Stationarity¶ by Delaney Granizo-Mackenzie and Maxwell Margenot Part of the Quantopian Lecture Series: In [1]: import numpy as np import pandas as pd import statsmodels import statsmodels.api as sm import matplotlib.pyplot as plt # Stationarity/Non-Stationarity¶ A commonly untested assumption in time series analysis is the stationarity of the data. Data are stationary when the parameters of the data generating process do not change over time. As an example, let's consider two series, A and B. Series A is generated from a stationary process with fixed parameters, series B is generated with parameters that change over time. In [2]: def generate_datapoint(params): mu = params[0] sigma = params[1] return np.random.normal(mu, sigma) ### Series A¶ In [3]: # Set the parameters and the number of datapoints params = (0, 1) T = 100 A = pd.Series(index=range(T)) A.name = 'A' for t in range(T): A[t] = generate_datapoint(params) plt.plot(A) plt.xlabel('Time') plt.ylabel('Value') plt.legend(['Series A']); ### Series B¶ In [4]: # Set the number of datapoints T = 100 B = pd.Series(index=range(T)) B.name = 'B' for t in range(T): # Now the parameters are dependent on time # Specifically, the mean of the series changes over time params = (t * 0.1, 1) B[t] = generate_datapoint(params) plt.plot(B) plt.xlabel('Time') plt.ylabel('Value') plt.legend(['Series B']); ### Why Non-Stationarity is Dangerous¶ Many statistical tests, deep down in the fine print of their assumptions, require that the data being tested are stationary. Also, if you naively use certain statistics on a non-stationary data set, you will get garbage results. As an example, let's take an average through our non-stationary $B$. In [5]: m = np.mean(B) plt.plot(B) plt.hlines(m, 0, len(B), linestyles='dashed', colors='r') plt.xlabel('Time') plt.ylabel('Value') plt.legend(['Series B', 'Mean']); The computed mean will show the mean of all data points, but won't be useful for any forecasting of future state. It's meaningless when compared with any specfic time, as it's a collection of different states at different times mashed together. This is just a simple and clear example of why non-stationarity can screw with analysis, much more subtle problems can arise in practice. ### Testing for Stationarity¶ Now we want to check for stationarity using a statistical test. In [6]: def check_for_stationarity(X, cutoff=0.01): # H_0 in adfuller is unit root exists (non-stationary) # We must observe significant p-value to convince ourselves that the series is stationary if pvalue < cutoff: print 'p-value = ' + str(pvalue) + ' The series ' + X.name +' is likely stationary.' return True else: print 'p-value = ' + str(pvalue) + ' The series ' + X.name +' is likely non-stationary.' return False In [7]: check_for_stationarity(A); check_for_stationarity(B); p-value = 0.000498500723545 The series A is likely stationary. p-value = 0.948244716942 The series B is likely non-stationary. Sure enough, the changing mean of the series makes it non-stationary. Let's try an example that might be a little more subtle. In [8]: # Set the number of datapoints T = 100 C = pd.Series(index=range(T)) C.name = 'C' for t in range(T): # Now the parameters are dependent on time # Specifically, the mean of the series changes over time params = (np.sin(t), 1) C[t] = generate_datapoint(params) plt.plot(C) plt.xlabel('Time') plt.ylabel('Value') plt.legend(['Series C']); A cyclic movement of the mean will be very difficult to tell apart from random noise. In practice on noisy data and limited sample size it can be hard to determine if a series is stationary and whether any drift is random noise or part of a trend. In each individual case the test may or may not pick up subtle effects like this. In [9]: check_for_stationarity(C); p-value = 0.219590266677 The series C is likely non-stationary. ## Order of Integration¶ ### Moving Average Representation/Wold's Theorem¶ An important concept in time series analysis is moving average representation. We will discuss this briefly here, but a more complete explanation is available in the AR, MA, and ARMA Models lectures of the Quantopian Lecture Series. Also check Wikipedia as listed below. This representation expresses any time series $Y_t$ as $$Y_t = \sum_{j=0}^\infty b_j \epsilon_{t-j} + \eta_t$$ • $\epsilon$ is the 'innovation' series • $b_j$ are the moving average weights of the innovation series • $\eta$ is a deterministic series The key here is as follows. $\eta$ is deterministic, such as a sine wave. Therefore we could perfectly model it. The innovation process is stochastic and there to simulate new information occuring over time. Specifically, $\epsilon_t = \hat Y_t - Y_t$ where $\hat Y_t$ is the in the optimal forecast of $Y_t$ using only information from time before $t$. In other words, the best prediction you can make at time $t-1$ cannot account for the randomness in $\epsilon$. Each $b_j$ just says how much previous values of $\epsilon$ influence $Y_t$. ### Back to Order of Integration¶ We will note integration order-i as $I(i)$. A time series is said to be $I(0)$ if the following condition holds in a moving average representation. In hand-wavy english, the autocorrelation of the series decays sufficiently quickly. $$\sum_{k=0}^\infty |b_k|^2 < \infty$$ This property turns out to be true of all stationary series, but by itself is not enough for stationarity to hold. This means that stationarity implies $I(0)$, but $I(0)$ does not imply stationarity. For more on orders of integration, please see the following links. ### Testing for $I(0)$¶ In practice testing whether the sum of the autocorrelations is finite may not be possible. It is possible in a mathematical derivation, but when we have a finite set of data and a finite number of estimated autocorrelations, the sum will always be finite. Given this difficulty, tests for $I(0)$ rely on stationarity implying the property. If we find that a series is stationary, then it must also be $I(0)$. Let's take our original stationary series A. Because A is stationary, we know it's also $I(0)$. In [10]: plt.plot(A) plt.xlabel('Time') plt.ylabel('Value') plt.legend(['Series A']); ### Inductively Building Up Orders of Integration¶ If one takes an $I(0)$ series and cumulatively sums it (discrete integration), the new series will be $I(1)$. Notice how this is related to the calculus concept of integration. The same relation applies in general, to get $I(n)$ take an $I(0)$ series and iteratively take the cumulative sum $n$ times. Now let's make an $I(1)$ series by taking the cumulative sum of A. In [11]: A1 = np.cumsum(A) plt.plot(A1) plt.xlabel('Time') plt.ylabel('Value') plt.legend(['Series A1']); Now let's make one $I(2)$ by taking the cumlulative sum again. In [12]: A2 = np.cumsum(A1) plt.plot(A2) plt.xlabel('Time') plt.ylabel('Value') plt.legend(['Series A2']); ### Breaking Down Orders of Integration¶ Conversely, to find the order of integration of a given series, we perform the inverse of a cumulative sum, which is the $\Delta$ or itemwise difference function. Specifically $$(1-L) X_t = X_t - X_{t-1} = \Delta X$$$$(1-L)^d X_t$$ In this case $L$ is the lag operator. Sometimes also written as $B$ for 'backshift'. $L$ fetches the second to last elements in a time series, and $L^k$ fetches the k-th to last elements. So $$L X_t = X_{t-1}$$ and $$(1-L) X_t = X_t - X_{t-1}$$ A series $Y_t$ is $I(1)$ if the $Y_t - Y_t-1$ is $I(0)$. In other words, if you take an $I(0)$ series and cumulatively sum it, you should get an $I(1)$ series. ### Important Take-Away¶ Once all the math has settled, remember that any stationary series is $I(0)$ ## Real Data¶ Let's try this out on some real pricing data. In [13]: symbol_list = ['MSFT'] prices = get_pricing(symbol_list, fields=['price'] , start_date='2014-01-01', end_date='2015-01-01')['price'] prices.columns = map(lambda x: x.symbol, prices.columns) X = prices['MSFT'] In [14]: check_for_stationarity(X); p-value = 0.666326790934 The series MSFT is likely non-stationary. Let's take a look, certainly has the warning signs of a non-stationary series. In [15]: plt.plot(X.index, X.values) plt.ylabel('Price') plt.legend([X.name]); Now let's take the delta of the series, giving us the additive returns. We'll check if this is stationary. In [16]: X1 = X.diff()[1:] X1.name = X.name + ' Additive Returns' check_for_stationarity(X1) plt.plot(X1.index, X1.values) plt.legend([X1.name]); p-value = 1.48184901469e-28 The series MSFT Additive Returns is likely stationary. Seems like the additive returns are stationary over 2014. That means we will probably be able to model the returns much better than the price. It also means that the price was $I(1)$. Let's also check the multiplicative returns. In [17]: X1 = X.pct_change()[1:] X1.name = X.name + ' Multiplicative Returns' check_for_stationarity(X1) plt.plot(X1.index, X1.values) plt.ylabel('Multiplicative Returns') plt.legend([X1.name]); p-value = 8.05657888734e-29 The series MSFT Multiplicative Returns is likely stationary. Seems like the multiplicative returns are also stationary. Both the multiplicative and additive deltas on a series get at similar pieces of information, so it's not surprising both are stationary. In practice this might not always be the case. ## IMPORTANT NOTE¶ As always, you should not naively assume that because a time series is stationary in the past it will continue to be stationary in the future. Tests for consistency of stationarity such as cross validation and out of sample testing are necessary. This is true of any statistical property, we just reiterate it here. Returns may also go in and out of stationarity, and may be stationary or non-stationary depending on the timeframe and sampling frequency. ## Note: Returns Analysis¶ The reason returns are usually used for modeling in quantitive finance is that they are far more stationary than prices. This makes them easier to model and returns forecasting more feasible. Forecasting prices is more difficult, as there are many trends induced by their $I(1)$ integration. Even using a returns forecasting model to forecast price can be tricky, as any error in the returns forecast will be magnified over time. ## Cointegration¶ Finally, now that we've discussed stationarity and order of integration, we can discuss cointegration. ### Def: Linear Combination¶ A linear combination of the time series ($X_1$, $X_2$, $\dots$, $X_k$) is a new time series $Y$ constructed as follows for any set of real numbers $b_1 \dots b_k$ $$Y = b_1X_1 + b_2X_2 + \dots + b_kX_k$$ ### Formal Definition¶ The formal definition of cointegration is as follows. For some set of time series ($X_1$, $X_2$, $\dots$, $X_k$), if all series are $I(1)$, and some linear combination of them is $I(0)$, we say the set of time series is cointegrated. #### Example¶ $X_1$, $X_2$, and $X_3$ are all $I(1)$, and $2X_1 + X_2 + 0X_3 = 2X_1 + X_2$ is $I(0)$. In this case the time series are cointegrated. ### Intuition¶ The intuition here is that for some linear combination of the series, the result lacks much auto-covariance and is mostly noise. This is useful for cases such as pairs trading, in which we find two assets whose prices are cointegrated. Since the linear combination of their prices $b_1A_1 + b_2A_2$ is noise, we can bet on the relationship $b_1A_1 + b_2A_2$ mean reverting and place trades accordingly. See the Pairs Trading Lecture in the Quantopian Lecture Series for more information. ### Simulated Data Example¶ Let's make some data to demonstrate this. In [18]: # Length of series N = 100 # Generate a stationary random X1 X1 = np.random.normal(0, 1, N) # Integrate it to make it I(1) X1 = np.cumsum(X1) X1 = pd.Series(X1) X1.name = 'X1' # Make an X2 that is X1 plus some noise X2 = X1 + np.random.normal(0, 1, N) X2.name = 'X2' In [19]: plt.plot(X1) plt.plot(X2) plt.xlabel('Time') plt.ylabel('Series Value') plt.legend([X1.name, X2.name]); Because $X_2$ is just an $I(1)$ series plus some stationary noise, it should still be $I(1)$. Let's check this. In [20]: Z = X2.diff()[1:] Z.name = 'Z' check_for_stationarity(Z); p-value = 3.06566830522e-19 The series Z is likely stationary. Looks good. Now to show cointegration we'll need to find some linear combination of $X_1$ and $X_2$ that is stationary. We can take $X_2-X_1$. All that's left over should be stationary noise by design. Let's check this. In [21]: Z = X2 - X1 Z.name = 'Z' plt.plot(Z) plt.xlabel('Time') plt.ylabel('Series Value') plt.legend(['Z']); check_for_stationarity(Z); p-value = 1.03822288113e-18 The series Z is likely stationary. ### Testing for Cointegration¶ There are a bunch of ways to test for cointegration. This wikipedia article describes some. In general we're just trying to solve for the coefficients $b_1, \dots b_k$ that will produce an $I(0)$ linear combination. If our best guess for these coefficients does not pass a stationarity check, then we reject the hypothesis that the set is cointegrated. This will lead to risk of Type II errors (false negatives), as we will not exhaustively test for stationarity on all coefficent combinations. However Type II errors are generally okay here, as they are safe and do not lead to us making any wrong forecasts. In practice a common way to do this for pairs of time series is to use linear regression to estimate $\beta$ in the following model. $$X_2 = \alpha + \beta X_1 + \epsilon$$ The idea is that if the two are cointegrated we can remove $X_2$'s depedency on $X_1$, leaving behind stationary noise. The combination $X_2 - \beta X_1 = \alpha + \epsilon$ should be stationary. #### Real Data Example¶ Let's try on some real data. We'll get prices and plot them first. In [22]: symbol_list = ['ABGB', 'FSLR'] prices = get_pricing(symbol_list, fields=['price'] , start_date='2014-01-01', end_date='2015-01-01')['price'] prices.columns = map(lambda x: x.symbol, prices.columns) X1 = prices[symbol_list[0]] X2 = prices[symbol_list[1]] In [23]: plt.plot(X1.index, X1.values) plt.plot(X1.index, X2.values) plt.xlabel('Time') plt.ylabel('Series Value') plt.legend([X1.name, X2.name]); Now use linear regression to compute $\beta$. In [24]: X1 = sm.add_constant(X1) results = sm.OLS(X2, X1).fit() # Get rid of the constant column X1 = X1[symbol_list[0]] results.params Out[24]: const 26.609769 ABGB 1.536686 dtype: float64 In [25]: b = results.params[symbol_list[0]] Z = X2 - b * X1 Z.name = 'Z' plt.plot(Z.index, Z.values) plt.xlabel('Time') plt.ylabel('Series Value') plt.legend([Z.name]); check_for_stationarity(Z); p-value = 0.000972948552814 The series Z is likely stationary.
# limsup and liminf of a sequence of subsets of a set I am confused when reading Wikipedia's article on limsup and liminf of a sequence of subsets of a set $X$. 1. It says there are two different ways to define them, but first gives what is common for the two. Quoted: There are two common ways to define the limit of sequences of set. In both cases: The sequence accumulates around sets of points rather than single points themselves. That is, because each element of the sequence is itself a set, there exist accumulation sets that are somehow nearby to infinitely many elements of the sequence. The supremum/superior/outer limit is a set that joins these accumulation sets together. That is, it is the union of all of the accumulation sets. When ordering by set inclusion, the supremum limit is the least upper bound on the set of accumulation points because it contains each of them. Hence, it is the supremum of the limit points. The infimum/inferior/inner limit is a set where all of these accumulation sets meet. That is, it is the intersection of all of the accumulation sets. When ordering by set inclusion, the infimum limit is the greatest lower bound on the set of accumulation points because it is contained in each of them. Hence, it is the infimum of the limit points. The difference between the two definitions involves the topology (i.e., how to quantify separation) is defined. In fact, the second definition is identical to the first when the discrete metric is used to induce the topology on $X$. Because it mentions that a sequence of subsets of a set $X$ accumulate to some accumulation subsets of $X$, are there some topology on the power set of the set for this accumulation to make sense? What kind of topology is that? Is it induced from some structure on the set $X$? Is it possible to use mathematic symbols to formalize what it means by "supremum/superior/outer limit" and "infimum/inferior/inner limit"? 2. If I understand correctly, here is the first way to define limsup/liminf of a sequence of subsets. Quoted: General set convergence In this case, a sequence of sets approaches a limiting set when its elements of each member of the sequence approach that elements of the limiting set. In particular, if $\{X_n\}$ is a sequence of subsets of $X$, then: $\limsup X_n$, which is also called the outer limit, consists of those elements which are limits of points in $X_n$ taken from (countably) infinitely many n. That is, $x \in \limsup X_n$ if and only if there exists a sequence of points $x_k$ and a subsequence $\{X_{n_k}\}$ of $\{X_n\}$ such that $x_k \in X_{n_k}$ and $x_k \rightarrow x$ as $k \rightarrow \infty$. $\liminf X_n$, which is also called the inner limit, consists of those elements which are limits of points in $X_n$ for all but finitely many n (i.e., cofinitely many n). That is, $x \in \liminf X_n$ if and only if there exists a sequence of points $\{x_k\}$ such that $x_k \in X_k$ and $x_k \rightarrow x$ as $k \rightarrow \infty$. So I think for this definition, $X$ is required to be a topological space. This definition is expressed in terms of convergence of a sequence of points in $X$ with respect to the topology of $X$. If referring back to what is common for the two ways of definitions, I will be wondering how to explain what is a "accumulation set" in this definition here and what topology the "accumulation set" is with respect to? i.e. how can the definition here fit into aforementioned what is common for the two ways? 3. It says there are two ways to define the limit of a sequence of subsets of a set $X$. But there seems to be just one in the article, as quoted in 2. So I was wondering what is the second way it refers to? As you might give your answer, here is my thought/guess (which has actually been written in the article but not in a way saying it is the second one). Please correct me. In an arbitrary complete lattice, by viewing meet as inf and join as sup, the limsup of a sequence of points $\{x_n\}$ is defined as: $$\limsup \, x_n = \inf_{n \geq 0} \left(\sup_{m \geq n} \, x_m\right) = \mathop{\wedge}\limits_{n \geq 0}\left( \mathop{\vee}\limits_{m\ \geq n} \, x_m\right)$$ similarly define liminf. The power set of any set is a complete lattice with union and intersection being join and meet, so the liminf and limsup of a sequence of subsets can be defined in the same way. I was wondering if this is the other way the article tries to introduce? If it is, then this second way of definition does not requires $X$ to be a topological space. So how can this second way fits to what is common for the two ways in Part 1, which seems to requires some kind of topology on the power set of $X$? I understand this way of definition can be shown to be equivalent to a special case of the first way in my part 2 when the topology on $X$ is induced by discrete metric. This is another reason that let me doubt it is the second way, because I guess the second way should at least not be equivalent to a special case of the first way. 4. Can the two ways of definition fit into any definition for the general cases? In the general cases, limsup/liminf is defined for a sequence of points in a set with some structure. Can limsup/liminf of a sequence of subsets of a set be viewed as limsup/liminf of a sequence of "points". If not, so in some cases, a sequence of subsets must be treated just as a sequence of subsets, but not as a sequence of "points"? EDIT: @Arturo: In the last part of your reply to another question, did you try to explain how limsup/liminf of a sequence of points can be viewed as limsup/liminf of a sequence of subsets? I actually want to understand in the opposite direction: Here is a post with my current knowledge about limsup/liminf of a sequence of points in a set. For limsup/liminf of a sequence of subsets of any set $X$, defined in terms of union and intersection of subsets of $X$ as in part 3, it can be viewed as limsup/liminf of a sequence of points in a complete lattice, by viewing the power set of $X$ as a complete lattice. But for limsup/liminf of a sequence of subsets of any set defined in part 2 when X is a topological space, I was wondering if there is some way to view it as limsup/liminf of a sequence of points in some set? It is also great if you have other approaches to understand all the ways of defining limsup/liminf of a sequence of subsets, other than the approach in Wikipedia. Thanks and regards! • I don't think Part 1 is thinking in terms of a topology on $\mathcal{P}(X)$, but of a topology on $X$; we talk about accumulation points of a subset of a topological space all the time, without giving a topology to the power set of the space. For example, saying a subset is dense is saying that the set of all accumulation points of the set is the entire space. – Arturo Magidin Jan 13 '11 at 4:30 • Why is my question turned into community wiki? What does it mean? – Tim Jan 13 '11 at 6:01 • your question turned into community wiki after you edited it more than 8 times. It is a " feature " of the underlying software. – Willie Wong Jan 13 '11 at 17:34 • @PantelisSopasakis: Thanks! Is the definition you quoted equivalent to any of the two definitions in my post? (1) It is not the second definition, which I mentioned in part 3, because your quote relies on topology while my second definition doesn't. In other words, if remove the closure in your quote, then it will become my second definition. (2) Is it the same as my first definition, which I quoted in Part 2 and also relies on the topology on the universal set? (3) How is $\limsup_n A_n$ is defined in your book? – Tim Nov 22 '11 at 13:50 • @ArturoMagidin: Pantelis recently quoted some other definition in his comment. I wonder if his quote $\lim\inf_{n} A_n := \bigcup_{n=1}^\infty\overline{\bigcap_{m=n}^\infty A_m}$ is the same as the definition in my part 2 "$x \in \liminf X_n$ if and only if there exists a sequence of points $\{x_k\}$ such that $x_k \in X_k$ and $x_k \rightarrow x$ as $k \rightarrow \infty$"? – Tim Nov 22 '11 at 14:09 First, you might also want to take a look at this answer to a similar question. Okay: the first description assumes that there is some sort of notion of "accumulation point" at work in the set $X$, as you surmise; this may be derived from a topology. The second description talks about limit points, but you can apply it to any set by endowing the set with the discrete topology (every subset is open, every subset is closed). If you do that, then the definition is the usual definition of limit superior of a sequence of sets: it is the collection of all points that are in infinitely many of the terms of the sequence, while the limit inferior is the collection of all points that are in all sufficiently large terms of the sequence. The "second way" of defining it is in terms of unions and intersection. If $\{X_n\}_{n\in\mathbb{N}}$ is a family of sets, then \begin{align*} \limsup_{n\in\mathbb{N}} X_n &= \bigcap_{n=1}^{\infty}\left(\bigcup_{j=n}^{\infty} X_j\right)\\\ \liminf_{n\in\mathbb{N}} X_n &= \bigcup_{n=1}^{\infty}\left(\bigcap_{j=n}^{\infty} X_j\right). \end{align*} This coincides with the notion of the limit superior being the set of all limit points of infinitely many terms in the sequence, under the discrete topology; and the limit inferior being the set of all limit points of all sufficiently large-indexed terms of the sequence (again, under the discrete topology). The notion of "accumulation point" in the first description is more informal. If you are working with a topological space, then it is limit points as described above and by "accumulation set" you should read "set of all limit points". For your third point, in order to be able to talk about joins and meets you need to have some sort of complete lattice order on your set, so that you can talk about those infinite meets and infinite joins; this is the case, for instance, in the real numbers; appropriately interpreted, you do get essentially the definition you propose, though you need to tweak it a bit in order to actually get what the actual definition is (see the other answer quoted above); you don't actually work with the points themselves, but with a slightly different set determined by the points. I think that the previous answer linked to answers essentially your fourth point, of how to interpret limit superior and limit inferior of a sequence of points as a special case of limit superior and limit inferior of sets; but if this is not the case, point it out and I'll try to answer it de nuovo. • Thanks! (1) In part 1, about a sequence of subsets that accumulate to some "accumulation subsets" of X in the quoted, it make me think of topology on the power set of X. I can imagine if it is just an informal way to say But I doubt by a "accumulation set" the article means the set of limit points of a subset of X. (2) Correct me if I am wrong: when limsup of a sequence of subsets is defined in terms of set intersection and union solely, it can be viewed as limsup of a sequence of points in a complete lattice, because the power set is a complete lattice with inclusion as the order. – Tim Jan 13 '11 at 5:56 • (3) I read your reply to the other question. I think you tried to explain how to view limsup of a sequence of points as limsup of a sequence of subsets there, but I would like to understand how in the opposite direction. Please see the edit to Part 4 of my original post. – Tim Jan 13 '11 at 5:56 I always found the following definitions of superior limits and inferior limits intuitive: Let $\{E_n \}$ be a sequence of sets ($n = 1,2, \dots$). The superior limit of $\{E_n \}$ is the set consisting of those points which belong to infinitely many $E_n$. The inferior limit is the set of all those points that belong to all but a finite number of the $E_n$. • That is equivalent to the second way of definition described in my part 3. There are other ways different from this way. – Tim Jan 13 '11 at 1:09
# Problem #1173 1173 Let $A(2,2)$ and $B(7,7)$ be points in the plane. Define $R$ as the region in the first quadrant consisting of those points $C$ such that $\triangle ABC$ is an acute triangle. What is the closest integer to the area of the region $R$? $\mathrm{(A)}\ 25 \qquad \mathrm{(B)}\ 39 \qquad \mathrm{(C)}\ 51 \qquad \mathrm{(D)}\ 60 \qquad \mathrm{(E)}\ 80 \qquad$ This problem is copyrighted by the American Mathematics Competitions. Note: you aren't logged in. If you log in, we'll keep a record of which problems you've solved.
## Freitag, 31. August 2012 ### The idea is the process of disrupting itself into individuality . . . The idea is the process of disrupting itself into individuality . . . finite, that is, subjective spirit, makes for itself the presupposition of an objective world, such a presupposition AS LIFE only HAS; but its ACTIVITY is the sublating of this presupposition and the turning of it into something posited. Thus its reality is for it the objective world, or conversely the objective world is the ideality in which it knows itself. Third, spirit recognizes the idea as its absolute truth, as the truth that is in and for itself: the infinite idea in which COGNIZING AND DOING are equalized, and which is the absolute knowledge of itself. Rubber Rabbit Sexvertising - Durex 'Get it On' Ad Features a Condom-Bunny Threesome (VIDEO)
# Dominant Random Strategies: Why Judges Are Sometimes Better Off Flipping Coins In cases in which policy arguments for a choice can go both ways, judges, agencies, and other decision makers are sometimes better off flipping a coin than making an arbitrary, or even qualified choice. There is a big debate over what methods judges should use to interpret written laws. The choice of interpretation often has grave consequences. In one case, a man’s prison sentence depended on the interpretation of “using” a gun. If “use” referred only to shooting a gun, he was innocent, but if it also included trading a gun for drugs, he would be guilty (see Smith v. US). The problem here is not the result of the case – the “justness” or “correctness” of the result is largely ignored in the decision; rather, the problem is that the Supreme Court, in this case among many others, fiercely debates the correct method of interpretation. Without stating what they are, we can assume that the choice of method has policy implications – there are simply too many smart people debating the issue for it to be a moot point. It’s not very clear that courts apply any sort of consistent approach, though the debate seems to suggest that there should be one. Since there is a divide, and (apparently) consistency does not matter much, perhaps choosing which method to use when interpreting laws is a decision best made by flipping a coin. Math that inspires the result: Here’s one of my favorite math puzzles. Solve it now if you’re up to the challenge, or read on for a big hint. What makes this problem interesting is that the when Alice picks the “two different real numbers by an unknown process”, she is picking them arbitrarily, not at random. It turns out that this makes a difference to our strategy, and this particular puzzle is a complex demonstration of the idea that arbitrary is not the same as random. Let’s simplify things: Suppose Alice picks a secret number from one to six inclusive, and you win $1 if you guess it correctly. It makes a difference if Alice chose her number arbitrarily or if she rolled a hidden die. Do you see why? If Alice rolls a dice, we can simply guess “1″ and be guaranteed a one in six chance of guessing correctly. But if Alice instead just chose 6 every time she played this game, we would be guaranteed a loss if we guess “1″. We can eliminate this uncertainty by using a random strategy. If we roll a dice instead of just calling our an arbitrary number, we can guarantee ourselves a one in six chance of guessing correctly regardless of how Alice chooses her number. All this talk of random strategies should give you a huge hint as to how to solve the math puzzle. Whereas in the simple example above, a random strategy simply guaranteed a one in six chance of winning, a random strategy in the math puzzle actually produces a greater than 50% chance of winning. Here is the solution (the first comment). An intuitive geometric solution exists, but it’s hard to explain over the Internet, so if the algebra doesn’t convince you… take my word for it – the solution works. How it applies to law: For decisions in which consistency does not matter, it follows from the above discussion that when offered a choice between two mutually exclusive judicial strategies, with equal policy arguments on both sides, we are better off flipping a coin. Notice that the “correct” strategy – that is, the one that best advances policy objectives – is an unknown known. Although we don’t know which strategy is better, one is necessarily better. This is analogous to Alice picking a number of her choice instead of rolling a dice. We don’t know what that number is, but it’s necessarily one of the six choices. Therefore, if we pick one strategy and stick with it, we risk being wrong 100% of the time (of course, we could also be right 100% of the time). If we instead flip a coin and choose our strategy at random, we are guaranteed the correct strategy 50% of the time. In some cases, if the consistency of methodology does not matter, this is the dominant strategy. Interestingly, a 50/50 coin flipping strategy may be superior even in cases in which we think that policy arguments lean towards one side. This occurs because of society’s natural risk aversion (more on this later – for now assume that society uses the natural logarithm as its utility function). Here is a quantitative demonstration (please excuse my outlandish numbers, and the several implicit assumptions I’m making here – an extreme example is sufficient and within my mathematical capability; also note the most important assumption is that the costs and benefits don’t compound): Suppose policy considerations give us 60% confidence that Strategy A is the correct strategy, and that using the correct strategy keeps the status quo, but using the incorrect one costs a relevant administrative agency$1 each decision. Suppose there are two such decisions a year, and that the relevant agency has an annual budget of \$3. The Expected Utility of using Strategy A for both decisions is $0.4\cdot \ln (\frac{1}{3}) + 0.6\cdot \ln (1) = -1.1,$ whereas the utility of flipping a coin is $0.25\cdot \ln (\frac{1}{3}) + 0.5\cdot \ln (\frac{2}{3}) + 0.25\cdot \ln (1) = -.58.$ In this example, sticking to a single strategy is nearly twice as bad as flipping a coin, even though we were 60% sure that the single strategy we chose was correct. This suggests that flipping a weighted coin might be better than both options. Intuitively, the result occurs because we prefer moderate outcomes to extreme outcomes. You can see how this result might extend to more reasonable scenarios (smaller relative costs each decision, but more decisions / year). So, going back to the legal starting point, we see that if a judge (or agency?) is faced with a decision between two methodologies that arrive at different results, and there is no consistency reason for him to pick one or the other, he might be better off flipping a coin. Categories: Random. Comment Feed Some HTML is OK
# Continuity of solution map to Stratonovich Integral For paths $x:[0, T] \rightarrow \mathbb{R}^n$, the Stratonovich integral along a one form $\omega$ on $\mathbb{R}^n$ can be defined by $$S_\omega(x) := \int_0^T \omega(x(t)) \mathrm{d}x(t) := \lim_{|\tau|\rightarrow 0} \int_0^T \omega(x^\tau(t)) \mathrm{d}x^\tau(t),$$ where the limit is taken over any sequence of partitions $\tau = \{0 = \tau_0 < \tau_1 <\dots < \tau_N = T\}$ of the interval $[0, T]$ the mesh of which tends to zero and $$x^\tau(\tau_{j-1} + t) := x(\tau_{j-1}) + \frac{t}{\tau_j-\tau_{j-1}} (x(\tau_j)-x(\tau_{j-1}))$$ is the corresponding polygon-approximation of the path $x$. Clearly, the right hand sides of the above definition are continuous (even smooth) functions on the space $W:=C([0, T], \mathbb{R}^n)$ of continuous paths, but it is well known that limit only exists in probability with respect to the Wiener measure on $W$, and the Stratonovich integral $S_\omega$ ends up being only a function in $L^1(W)$ instead of in $C(W)$. Now the question is if this can be fixed somehow: Is there a topological space $W^\prime$ of continuous paths with $$\bigcap_{\alpha < 1/2} C^\alpha([0, T], \mathbb{R}^n) \subseteq W^\prime \subseteq W$$ such that the Wiener measure can be constructed as a Borel probability measure on $W^\prime$ and such that $S_\omega \in C(W^\prime)$? More precisely, is there such a space $W^\prime$ so that the net of functions $$S_\omega^\tau(x) := \int_0^T \omega(x^\tau(t)) \mathrm{d}x^\tau(t) = \int_0^T \omega(x^\tau(t)) \dot{x}^\tau(t)\mathrm{d}t$$ converges to a continuous function $S_\omega$? Can $S_\omega$ even be a smooth map? In other words: Can the solution map constructed from the theory of rough paths be made continuous/smooth when the path space carries the right topology? • I'm fuzzy on the details, but since you mention rough paths, isn't this precisely what the $p$-variation topology accomplishes (with $p > 2$)? – Nate Eldredge Aug 11 '16 at 21:54 • Is this so? I am not an expert on the theory of rough paths and tried to find such a result in the literature, but I could not find the statement. Maybe this is just a language barrier, or maybe this way of thinking about matters is uncommon for probabilists. – Matthias Ludewig Aug 12 '16 at 7:06 • The result I'm thinking of is that the rough path integral is supposed to be a continuous map on the space of enhanced rough paths (paths in the truncated tensor algebra) when it's equipped with the appropriate $p$-variation topology. But that's not what's going on here. Anyway, the enhancement that leads to the Stratonovich integral is something like Stratonovich stochastic area, and it is only defined almost everywhere. – Nate Eldredge Aug 12 '16 at 8:07 • I meant the intersection instead of the union, sorry. However, you where saying that this solution map is continuous, and then again, that it is only defined almost everywhere? – Matthias Ludewig Aug 12 '16 at 8:42 • The map from the space of enhanced rough paths to the integral is continuous. The map from the space of actual paths to enhanced rough paths is only a.e. defined. What you want is their composition. – Nate Eldredge Aug 12 '16 at 8:46
1. ## Real analysis Suppose $\displaystyle x,y \in R^3$such that $\displaystyle |x-y| \geq 0.$ 1. Show the set of points $\displaystyle z \in R^3$ such that $\displaystyle |z-y|=|z-x|$ is the plane $\displaystyle u + \frac{x+y}2 : u \dot (x-y)=0$ 2. Determine the $\displaystyle r \geq 0$ such that the set of points $\displaystyle |z-y|=|z-x|=r$ is a circle. 2. Have you done nothing on this yourself? Given two points, x and y, in 3 dimensional space, there exist a line segment between them. Imagine the plane perpendicular to that line, passing through its midpoint. What can you say about that plane? Imagine the circle in that plane, center at the midpoint of the original line segment, radius R. What can you say about the distances from any point on that circle to x and y? 3. I can visualize a general answer in my head, but I'm having difficulty actually showing it.
# Bit rate In telecommunications and computing, bit rate (bitrate or as a variable R) is the number of bits that are conveyed or processed per unit of time.[1] Bit rates Name Symbol Multiple bit per second bit/s 1 1 Decimal prefixes (SI) kilobit per second kbit/s 103 10001 megabit per second Mbit/s 106 10002 gigabit per second Gbit/s 109 10003 terabit per second Tbit/s 1012 10004 Binary prefixes (IEC 80000-13) kibibit per second Kibit/s 210 10241 mebibit per second Mibit/s 220 10242 gibibit per second Gibit/s 230 10243 tebibit per second Tibit/s 240 10244 The bit rate is expressed in the unit bit per second unit (symbol: bit/s), often in conjunction with an SI prefix such as kilo (1 kbit/s = 1,000 bit/s), mega (1 Mbit/s = 1,000 kbit/s), giga (1 Gbit/s = 1,000 Mbit/s) or tera (1 Tbit/s = 1,000 Gbit/s).[2] The non-standard abbreviation bps is often used to replace the standard symbol bit/s, so that, for example, 1 Mbps is used to mean one million bits per second. In most computing and digital communication environments, one byte per second (1 B/s) corresponds to 8 bit/s. ## Prefixes When quantifying large or small bit rates, SI prefixes (also known as metric prefixes or decimal prefixes) are used, thus:[3] 0.001 bit/s = 1 mbit/s (one bit per thousand seconds) 1,000 bit/s = 1 kbit/s (one thousand bits per second) 1,000,000 bit/s = 1 Mbit/s (one million bits per second) 1,000,000,000 bit/s = 1 Gbit/s (one billion bits per second) Binary prefixes are sometimes used for bit rates.[4][5] The International Standard (IEC 80000-13) specifies different abbreviations for binary and decimal (SI) prefixes (e.g. 1 KiB/s = 1024 B/s = 8192 bit/s, and 1 MiB/s = 1024 KiB/s). ## In data communications ### Gross bit rate In digital communication systems, the physical layer gross bitrate,[6] raw bitrate,[7] data signaling rate,[8] gross data transfer rate[9] or uncoded transmission rate[7] (sometimes written as a variable Rb[6][7] or fb[10]) is the total number of physically transferred bits per second over a communication link, including useful data as well as protocol overhead. In case of serial communications, the gross bit rate is related to the bit transmission time ${\displaystyle T_{b}}$  as: ${\displaystyle R_{b}={1 \over T_{b}},}$ The gross bit rate is related to the symbol rate or modulation rate, which is expressed in bauds or symbols per second. However, the gross bit rate and the baud value are equal only when there are only two levels per symbol, representing 0 and 1, meaning that each symbol of a data transmission system carries exactly one bit of data; for example, this is not the case for modern modulation systems used in modems and LAN equipment.[11] For most line codes and modulation methods: ${\displaystyle {\text{Symbol rate}}\leq {\text{Gross bit rate}}}$ More specifically, a line code (or baseband transmission scheme) representing the data using pulse-amplitude modulation with ${\displaystyle 2^{N}}$  different voltage levels, can transfer ${\displaystyle N{\text{ bit/pulse}}}$ . A digital modulation method (or passband transmission scheme) using ${\displaystyle 2^{N}}$  different symbols, for example ${\displaystyle 2^{N}}$  amplitudes, phases or frequencies, can transfer ${\displaystyle N{\text{ bit/symbol}}}$ . This results in: ${\displaystyle {\text{Gross bit rate}}={\text{Symbol rate}}\times N}$ An exception from the above is some self-synchronizing line codes, for example Manchester coding and return-to-zero (RTZ) coding, where each bit is represented by two pulses (signal states), resulting in: ${\displaystyle {\text{Gross bit rate = Symbol rate/2}}}$ A theoretical upper bound for the symbol rate in baud, symbols/s or pulses/s for a certain spectral bandwidth in hertz is given by the Nyquist law: ${\displaystyle {\text{Symbol rate}}\leq {\text{Nyquist rate}}=2\times {\text{bandwidth}}}$ In practice this upper bound can only be approached for line coding schemes and for so-called vestigal sideband digital modulation. Most other digital carrier-modulated schemes, for example ASK, PSK, QAM and OFDM, can be characterized as double sideband modulation, resulting in the following relation: ${\displaystyle {\text{Symbol rate}}\leq {\text{Bandwidth}}}$ In case of parallel communication, the gross bit rate is given by ${\displaystyle \sum _{i=1}^{n}{\frac {\log _{2}{M_{i}}}{T_{i}}}}$ where n is the number of parallel channels, Mi is the number of symbols or levels of the modulation in the i-th channel, and Ti is the symbol duration time, expressed in seconds, for the i-th channel. ### Information rate The physical layer net bitrate,[12] information rate,[6] useful bit rate,[13] payload rate,[14] net data transfer rate,[9] coded transmission rate,[7] effective data rate[7] or wire speed (informal language) of a digital communication channel is the capacity excluding the physical layer protocol overhead, for example time division multiplex (TDM) framing bits, redundant forward error correction (FEC) codes, equalizer training symbols and other channel coding. Error-correcting codes are common especially in wireless communication systems, broadband modem standards and modern copper-based high-speed LANs. The physical layer net bitrate is the datarate measured at a reference point in the interface between the datalink layer and physical layer, and may consequently include data link and higher layer overhead. In modems and wireless systems, link adaptation (automatic adaption of the data rate and the modulation and/or error coding scheme to the signal quality) is often applied. In that context, the term peak bitrate denotes the net bitrate of the fastest and least robust transmission mode, used for example when the distance is very short between sender and transmitter.[15] Some operating systems and network equipment may detect the "connection speed"[16] (informal language) of a network access technology or communication device, implying the current net bit rate. Note that the term line rate in some textbooks is defined as gross bit rate,[14] in others as net bit rate. The relationship between the gross bit rate and net bit rate is affected by the FEC code rate according to the following. Net bit rate ≤ Gross bit rate · code rate The connection speed of a technology that involves forward error correction typically refers to the physical layer net bit rate in accordance with the above definition. For example, the net bitrate (and thus the "connection speed") of an IEEE 802.11a wireless network is the net bit rate of between 6 and 54 Mbit/s, while the gross bit rate is between 12 and 72 Mbit/s inclusive of error-correcting codes. The net bit rate of ISDN2 Basic Rate Interface (2 B-channels + 1 D-channel) of 64+64+16 = 144 kbit/s also refers to the payload data rates, while the D channel signalling rate is 16 kbit/s. The net bit rate of the Ethernet 100Base-TX physical layer standard is 100 Mbit/s, while the gross bitrate is 125 Mbit/second, due to the 4B5B (four bit over five bit) encoding. In this case, the gross bit rate is equal to the symbol rate or pulse rate of 125 megabaud, due to the NRZI line code. In communications technologies without forward error correction and other physical layer protocol overhead, there is no distinction between gross bit rate and physical layer net bit rate. For example, the net as well as gross bit rate of Ethernet 10Base-T is 10 Mbit/s. Due to the Manchester line code, each bit is represented by two pulses, resulting in a pulse rate of 20 megabaud. The "connection speed" of a V.92 voiceband modem typically refers to the gross bit rate, since there is no additional error-correction code. It can be up to 56,000 bit/s downstreams and 48,000 bit/s upstreams. A lower bit rate may be chosen during the connection establishment phase due to adaptive modulation – slower but more robust modulation schemes are chosen in case of poor signal-to-noise ratio. Due to data compression, the actual data transmission rate or throughput (see below) may be higher. The channel capacity, also known as the Shannon capacity, is a theoretical upper bound for the maximum net bitrate, exclusive of forward error correction coding, that is possible without bit errors for a certain physical analog node-to-node communication link. net bit rate ≤ channel capacity The channel capacity is proportional to the analog bandwidth in hertz. This proportionality is called Hartley's law. Consequently, the net bit rate is sometimes called digital bandwidth capacity in bit/s. ### Network throughput The term throughput, essentially the same thing as digital bandwidth consumption, denotes the achieved average useful bit rate in a computer network over a logical or physical communication link or through a network node, typically measured at a reference point above the datalink layer. This implies that the throughput often excludes data link layer protocol overhead. The throughput is affected by the traffic load from the data source in question, as well as from other sources sharing the same network resources. See also measuring network throughput. ### Goodput (data transfer rate) Goodput or data transfer rate refers to the achieved average net bit rate that is delivered to the application layer, exclusive of all protocol overhead, data packets retransmissions, etc. For example, in the case of file transfer, the goodput corresponds to the achieved file transfer rate. The file transfer rate in bit/s can be calculated as the file size (in bytes) divided by the file transfer time (in seconds) and multiplied by eight. As an example, the goodput or data transfer rate of a V.92 voiceband modem is affected by the modem physical layer and data link layer protocols. It is sometimes higher than the physical layer data rate due to V.44 data compression, and sometimes lower due to bit-errors and automatic repeat request retransmissions. If no data compression is provided by the network equipment or protocols, we have the following relation: goodput ≤ throughput ≤ maximum throughput ≤ net bit rate for a certain communication path. ### Progress trends These are examples of physical layer net bit rates in proposed communication standard interfaces and devices: WAN modems Ethernet LAN WiFi WLAN Mobile data • 1972: Acoustic coupler 300 baud • 1977: 1200 baud Vadic and Bell 212A • 1986: ISDN introduced with two 64 kbit/s channels (144 kbit/s gross bit rate) • 1990: V.32bis modems: 2400 / 4800 / 9600 / 19200 bit/s • 1994: V.34 modems with 28.8 kbit/s • 1995: V.90 modems with 56 kbit/s downstreams, 33.6 kbit/s upstreams • 1999: V.92 modems with 56 kbit/s downstreams, 48 kbit/s upstreams • 1998: ADSL (ITU G.992.1) up to 10 Mbit/s • 2003: ADSL2 (ITU G.992.3) up to 12 Mbit/s • 2005: ADSL2+ (ITU G.992.5) up to 26 Mbit/s • 2005: VDSL2 (ITU G.993.2) up to 200 Mbit/s • 2014: G.fast (ITU G.9701) up to 1000 Mbit/s • 1G: • 1981: NMT 1200 bit/s • 2G: • 3G: • 2001: UMTS-FDD (WCDMA) 384 kbit/s • 2007: UMTS HSDPA 14.4 Mbit/s • 2008: UMTS HSPA 14.4 Mbit/s down, 5.76 Mbit/s up • 2009: HSPA+ (Without MIMO) 28 Mbit/s downstreams (56 Mbit/s with 2×2 MIMO), 22 Mbit/s upstreams • 2010: CDMA2000 EV-DO Rev. B 14.7 Mbit/s downstreams • 2011: HSPA+ accelerated (With MIMO) 42 Mbit/s downstreams • Pre-4G: • 2007: Mobile WiMAX (IEEE 802.16e) 144 Mbit/s down, 35 Mbit/s up • 2009: LTE 100 Mbit/s downstreams (360 Mbit/s with MIMO 2×2), 50 Mbit/s upstreams • 5G For more examples, see list of device bit rates, spectral efficiency comparison table and OFDM system comparison table. ## Multimedia In digital multimedia, bitrate represents the amount of information, or detail, that is stored per unit of time of a recording. The bitrate depends on several factors: • The original material may be sampled at different frequencies. • The samples may use different numbers of bits. • The data may be encoded by different schemes. • The information may be digitally compressed by different algorithms or to different degrees. Generally, choices are made about the above factors in order to achieve the desired trade-off between minimizing the bitrate and maximizing the quality of the material when it is played. If lossy data compression is used on audio or visual data, differences from the original signal will be introduced; if the compression is substantial, or lossy data is decompressed and recompressed, this may become noticeable in the form of compression artifacts. Whether these affect the perceived quality, and if so how much, depends on the compression scheme, encoder power, the characteristics of the input data, the listener's perceptions, the listener's familiarity with artifacts, and the listening or viewing environment. The bitrates in this section are approximately the minimum that the average listener in a typical listening or viewing environment, when using the best available compression, would perceive as not significantly worse than the reference standard: ## Encoding bit rate In digital multimedia, bit rate refers to the number of bits used per second to represent a continuous medium such as audio or video after source coding (data compression). The encoding bit rate of a multimedia file is its size in bytes divided by the playback time of the recording (in seconds), multiplied by eight. For realtime streaming multimedia, the encoding bit rate is the goodput that is required to avoid interrupt: encoding bit rate = required goodput The term average bitrate is used in case of variable bitrate multimedia source coding schemes. In this context, the peak bit rate is the maximum number of bits required for any short-term block of compressed data.[17] A theoretical lower bound for the encoding bit rate for lossless data compression is the source information rate, also known as the entropy rate. entropy rate ≤ multimedia bit rate ### Audio #### CD-DA CD-DA, the standard audio CD, is said to have a data rate of 44.1 kHz/16, meaning that the audio data was sampled 44,100 times per second and with a bit depth of 16. CD-DA is also stereo, using a left and right channel, so the amount of audio data per second is double that of mono, where only a single channel is used. The bit rate of PCM audio data can be calculated with the following formula: ${\displaystyle {\text{bit rate}}={\text{sample rate}}\times {\text{bit depth}}\times {\text{channels}}}$ For example, the bit rate of a CD-DA recording (44.1 kHz sampling rate, 16 bits per sample and two channels) can be calculated as follows: ${\displaystyle 44,100\times 16\times 2=1,411,200\ {\text{bit/s}}=1,411.2\ {\text{kbit/s}}}$ The cumulative size of a length of PCM audio data (excluding a file header or other metadata) can be calculated using the following formula: ${\displaystyle {\text{size in bits}}={\text{sample rate}}\times {\text{bit depth}}\times {\text{channels}}\times {\text{time}}.}$ The cumulative size in bytes can be found by dividing the file size in bits by the number of bits in a byte, which is eight: ${\displaystyle {\text{size in bytes}}={\frac {\text{size in bits}}{8}}}$ Therefore, 80 minutes (4,800 seconds) of CD-DA data requires 846,720,000 bytes of storage: ${\displaystyle {\frac {44,100\times 16\times 2\times 4,800}{8}}=846,720,000\ {\text{bytes}}\approx 847\ {\text{MB}}}$ #### MP3 The MP3 audio format provides lossy data compression. Audio quality improves with increasing bitrate: • 32 kbit/s – generally acceptable only for speech • 96 kbit/s – generally used for speech or low-quality streaming • 128 or 160 kbit/s – mid-range bitrate quality • 192 kbit/s – medium quality bitrate • 256 kbit/s – a commonly used high-quality bitrate • 320 kbit/s – highest level supported by the MP3 standard #### Other audio • 700 bit/s – lowest bitrate open-source speech codec Codec2, but barely recognizable yet, sounds much better at 1.2 kbit/s • 800 bit/s – minimum necessary for recognizable speech, using the special-purpose FS-1015 speech codecs • 2.15 kbit/s – minimum bitrate available through the open-source Speex codec • 6 kbit/s – minimum bitrate available through the open-source Opus codec • 8 kbit/s – telephone quality using speech codecs • 32–500 kbit/s – lossy audio as used in Ogg Vorbis • 256 kbit/s – Digital Audio Broadcasting (DAB) MP2 bit rate required to achieve a high quality signal[18] • 292 kbit/s - Sony Adaptive Transform Acoustic Coding (ATRAC) for use on the MiniDisc Format • 400 kbit/s–1,411 kbit/s – lossless audio as used in formats such as Free Lossless Audio Codec, WavPack, or Monkey's Audio to compress CD audio • 1,411.2 kbit/s – Linear PCM sound format of CD-DA • 5,644.8 kbit/s – DSD, which is a trademarked implementation of PDM sound format used on Super Audio CD.[19] • 6.144 Mbit/s – E-AC-3 (Dolby Digital Plus), an enhanced coding system based on the AC-3 codec • 9.6 Mbit/s – DVD-Audio, a digital format for delivering high-fidelity audio content on a DVD. DVD-Audio is not intended to be a video delivery format and is not the same as video DVDs containing concert films or music videos. These discs cannot be played on a standard DVD-player without DVD-Audio logo.[20] • 18 Mbit/s – advanced lossless audio codec based on Meridian Lossless Packing (MLP) ### Video • 16 kbit/s – videophone quality (minimum necessary for a consumer-acceptable "talking head" picture using various video compression schemes) • 128–384 kbit/s – business-oriented videoconferencing quality using video compression • 400 kbit/s YouTube 240p videos (using H.264)[21] • 750 kbit/s YouTube 360p videos (using H.264)[21] • 1 Mbit/s YouTube 480p videos (using H.264)[21] • 1.15 Mbit/s max – VCD quality (using MPEG1 compression)[22] • 2.5 Mbit/s YouTube 720p videos (using H.264)[21] • 3.5 Mbit/s typ – Standard-definition television quality (with bit-rate reduction from MPEG-2 compression) • 3.8 Mbit/s YouTube 720p60 (60 FPS) videos (using H.264)[21] • 4.5 Mbit/s YouTube 1080p videos (using H.264)[21] • 6.8 Mbit/s YouTube 1080p60 (60 FPS) videos (using H.264)[21] • 9.8 Mbit/s max – DVD (using MPEG2 compression)[23] • 8 to 15 Mbit/s typ – HDTV quality (with bit-rate reduction from MPEG-4 AVC compression) • 19 Mbit/s approximate – HDV 720p (using MPEG2 compression)[24] • 24 Mbit/s max – AVCHD (using MPEG4 AVC compression)[25] • 25 Mbit/s approximate – HDV 1080i (using MPEG2 compression)[24] • 29.4 Mbit/s max – HD DVD • 40 Mbit/s max – 1080p Blu-ray Disc (using MPEG2, MPEG4 AVC or VC-1 compression)[26] • 250 Mbit/s max – DCP (using JPEG 2000 compression) • 1.4 Gbit/s – 10-bit 4:4:4 Uncompressed 1080p at 24fps ### Notes For technical reasons (hardware/software protocols, overheads, encoding schemes, etc.) the actual bit rates used by some of the compared-to devices may be significantly higher than what is listed above. For example, telephone circuits using µlaw or A-law companding (pulse code modulation) yield 64 kbit/s. ## References 1. ^ Gupta, Prakash C (2006). Data Communications and Computer Networks. PHI Learning. ISBN 9788120328464. Retrieved 10 July 2011. 2. ^ International Electrotechnical Commission (2007). "Prefixes for binary multiples". Retrieved 4 February 2014. 3. ^ Jindal (2009), From millibits to terabits per second and beyond - Over 60 years of innovation 4. ^ Schlosser, S. W., Griffin, J. L., Nagle, D. F., & Ganger, G. R. (1999). Filling the memory access gap: A case for on-chip magnetic storage (No. CMU-CS-99-174). CARNEGIE-MELLON UNIV PITTSBURGH PA SCHOOL OF COMPUTER SCIENCE. 5. ^ "Monitoring file transfers that are in progress from WebSphere MQ Explorer". Retrieved 10 October 2014. 6. ^ a b c Guimarães, Dayan Adionel (2009). "section 8.1.1.3 Gross Bit Rate and Information Rate". Digital Transmission: A Simulation-Aided Introduction with VisSim/Comm. Springer. ISBN 9783642013591. Retrieved 10 July 2011. 7. Kaveh Pahlavan, Prashant Krishnamurthy (2009). Networking Fundamentals. John Wiley & Sons. ISBN 9780470779439. Retrieved 10 July 2011. 8. ^ Network Dictionary. Javvin Technologies. 2007. ISBN 9781602670006. Retrieved 10 July 2011. 9. ^ a b Harte, Lawrence; Kikta, Roman; Levine, Richard (2002). 3G wireless demystified. McGraw-Hill Professional. ISBN 9780071382823. Retrieved 10 July 2011. 10. ^ J.S. Chitode (2008). Principles of Digital Communication. Technical Publication. ISBN 9788184314519. Retrieved 10 July 2011. 11. ^ Lou Frenzel. 27 April 2012, "What’s The Difference Between Bit Rate And Baud Rate?". Electronic Design. 2012. 12. ^ Theodory S. Rappaport, Wireless communications: principles and practice, Prentice Hall PTR, 2002 13. ^ Lajos Hanzo, Peter J. Cherriman, Jürgen Streit, Video compression and communications: from basics to H.261, H.263, H.264, MPEG4 for DVB and HSDPA-style adaptive turbo-transceivers, Wiley-IEEE, 2007. 14. ^ a b V.S. Bagad, I.A. Dhotre, Data Communication Systems, Technical Publications, 2009. 15. ^ Sudhir Dixit, Ramjee Prasad Wireless IP and Building the Mobile Internet, Artech House 16. ^ Guy Hart-Davis,Mastering Microsoft Windows Vista home: premium and basic, John Wiley and Sons, 2007 17. ^ Khalid Sayood, Lossless compression handbook, Academic Press, 2003. 18. ^ Page 26 of BBC R&D White Paper WHP 061 June 2003, DAB: An introduction to the DAB Eureka system and how it works http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP061.pdf 19. ^ Extremetech.com, Leslie Shapiro, 2 July 2001. Surround Sound: The High-End: SACD and DVD-Audio. Archived 30 December 2009 at the Wayback Machine Retrieved 19 May 2010. 2 channels, 1-bit, 2822.4 kHz DSD audio (2×1×2,822,400)= 5,644,800 bits/s 20. ^ "Understanding DVD-Audio" (PDF). Sonic Solutions. Archived from the original (PDF) on 4 March 2012. Retrieved 23 April 2014. 21. "YouTube bit rates". Retrieved 10 October 2014. 22. ^ "MPEG1 Specifications". UK: ICDia. Retrieved 11 July 2011. 23. ^ "DVD-MPEG differences". Sourceforge. Retrieved 11 July 2011. 24. ^ a b HDV Specifications (PDF), HDV Information, archived from the original (PDF) on 8 January 2007. 25. ^ "Avchd Information". AVCHD Info. Retrieved 11 July 2011. 26. ^ "3.3 Video Streams" (PDF), Blu-ray Disc Format 2.B Audio Visual Application Format Specifications for BD-ROM Version 2.4 (white paper), May 2010, p. 17.
Home > English > Class 12 > Physics > Chapter > Atomic Physics > A monochromatic beam of light ... Updated On: 20-06-2022 Get Answer to any question, just click a photo and upload the photo and get the answer completely free, Text Solution may be in the first excited statemay be in the second excited statemay be in both first and second excited stateNone of above. Solution : As photon energy of incident light is not equal to energy difference between two energy state, that is why sample A does not absorbe any photon and therefore remain in ground state, and hence , no emission spectra. Thus sample B remain as it is, but it will be de-excite itself to ground state by emitting radiations of energy 10.2 eV <br> <img src="https://d10lpgp6xz60nq.cloudfront.net/physics_images/BMS_V04_C04_E01_280_S01.png" width="80%"> <br> when photon energy is replaced by electron beam, then atom of sample A can absorb either 10.2 eV ("to reach" 1^(st) excited state) of 12.1 eV ("to reach" 2^(nd) "excited state") In emission spectra A , we will have 3 time corresponding to n = 2 to 1, n = 3 to 2 and n = 3 to1 having energies equal to 10.2 eV, 1.9 eV, 12.1 eV, respectively <br> As least energy of this emission spectra is corresponding to transition from n = 2 to 3 and sample B is in 1^(st) excited state B can be excited to some higher energy level and if B absorbs energy corresponding to n = 1 to 2 and n = 1 to 3`, then it may ionize. Step by step solution by experts to help you in doubt clearance & scoring excellent marks in exams. Transcript hello letter start with other question in this question what is given a monochromatic beam of light having Photon energy 12.5 electron volt is incident on a Honours sample a of atomic hydrogen ola in the ground state ok the emission spectra obtained from the sample is incident on another sample be of atomic hydrogen in which all atoms are in the first excited state based on the information we need to check whether the atom of sample after passing of light through it may be in the first excited in the second Exide for maybe in both first and second excited state or none of the above OK so how is the situation if we draw our situation that is just draw for example this is one of the sample is simple and let's say this is another sample sample be ok but there is a photon of energy which is given as to be 12.5 electron volt this is incident on sample and on sample be 9 sample a all the atoms hydrogen atom in the ground state and here it is in the first excited state ok no photo ground is redmi Note 8 and value is equal to 1 and for the first excited state and value is equal to 2 for the fox first excited state but in this awareness about what will happen ok now in the ground state if we draw the various energy levels for example and equals to 1 OK then equals to 2 then equals to if we draw 3D letters A-Z draw any cost 24 ok like this these are the various energy levels let's say this is an equal to 1 this is to this is let's 83 this is 4 and 7 ok no corresponding to the ground state what is the energy is it is -13.6 Sofia writing this values in terms of electron volt ok so here it would be in any way to do it would be fine 3.4 it would be -1.5 then it would be -0.8 434 approximately point 844 these are the values in terms of electron in the sample electron is in the ground state from here the transition to take less what we are doing we are in incident in a photo on which is having energy of 12.5 electron volt the what will happen so either the electron would jump into this or it can jump hear what it can be the transition to an equal to four so these other possibility and Swan ok to an equal to 5 now in the ground state it is having energy of -13.6 ok no it is it would gain energy of -12.5 so the total energy would become it would be -13.6 this was the energy which it was already carrying and this is due to the energy of incident Photon so this value would turn out to be -1.1 electron volt that is not the electron is having this atoms are having energy of -1.1 electron volt ok corresponding to 1.1 electron volt there is no energy level which is having energy of -1.1 electron volt because here are the energies contest so it cannot be it cannot have any value so there cannot be anything but energy Sare discrete of packets are the levels are in this excited states are in packets so there will be either 13.6 it would be -3.4 -1.5 -142 corresponding to this -1.5 there is no no shell so it cannot remain here because this -1.1 would be between an equal to 4 and equal to 3 but this is not possible it cannot remain between the shell so that means there is no that what we need to find the atoms of sample after passing through that none of the above so it would be in the first excited it cannot be it cannot be in the second it cannot be both the first and excited so would be the correct option because they would be remain in the ground state OK then what happened so hope you understood thank you
# How to define input from users for a 2D heteregenous model Hi everyone, I am working on a small modeling project. Let’s say I want to do 2D diffusion modeling in different units. All the units are homogeneous but they can have different geometries and physical properties. I was wondering what would be the most Julia way to ask the user for that? My first idea is to ask the user to give a raster image with each pixel color beeing a different units or to draw on a GUI his model. He would then need to give names to each units and tell me their physical properties. From that, I can build boolean matrices for the position of each units and define matrices for each physical properties for the whole model. What would be your idea for this situation? The most flexible thing is probably to have the caller pass in functions f(x,y) rather than images. (This is called a higher-order function.) e.g. suppose you are solving the diffusion equation du/dt = \nabla \cdot (D \nabla u) + f with some initial conditions u(x,y,0) = u_0(x,y) and some boundary conditions on a box domain [0,L_x] \times [0,L_y], say with finite differences. Then you ask the user to specify the size L_x \times L_y of the domain, and pass in functions f(x,y) and u_0(x,y) and D(x,y), along with some information about the desired discretization and the simulation time. Note that your program doesn’t need to care about the units of L and f etcetera, as long as the caller was consistent in their units. 2 Likes Mmh, I am a bit confused. Let’s say the user wants to have a geometry like this for his model: with the three colors having different values of D, f and u0 How can he build that easily as a function? The other thing is that my project is not targeting people who knows programming. So I can’t expect them to write any complicated functions. EDIT Alright, after thinking a bit, I guess your idea would be for the user, for example, to write something like that for D(x,y) function D_calc(nx,ny) D = zeros(nx,ny) D[1:Int(ny/10), :] .= 1 D[Int(ny/10)+1:end, :] .= 2 return D end to obtain: and to use that function as an input for my model (correct me if I am wrong). But that is still a bit to complicated for the people I am targetting… And it should be a bit more tricky for more fancy geometries. There’s really no good way to initialize simulations without any programming knowledge unless you’re willing to build a GUI, which can be quite expensive and time-consuming. Higher-order functions may work well enough if you provide example scripts and a few geometric primitives (box, circle, line, ellipsoid, …). I’d recommend Unitful.jl to handle different input units on the frontend, with everything converted internally to a consistent unit system on the backend. As for specification of higher-order functions, the user should provide a function f(x, y) that’s independent of the grid. If all material interfaces are initially sharp (i.e. materials start unmixed), you could do something like this: using Unitful @derived_dimension MassDiffusivity Unitful.𝐋^2/Unitful.𝐓 struct DiffusiveMaterial D::Float64 # Mass diffusivity ρ::Float64 # Density function DiffusiveMaterial(D, ρ) D < 0 && error("Diffusivity must be positive") ρ < 0 && error("Density must be positive") new(D, ρ) end end # Convert to uniform internal units function DiffusiveMaterial(D::MassDiffusivity, ρ::Unitful.Density) DiffusiveMaterial(ustrip(u"m^2/s", D), ustrip(u"kg/m^3", ρ)) end struct Domain{T <: AbstractRange} x::T y::T D::Array{Float64, 2} ρ::Array{Float64, 2} materials::Dict{Symbol, DiffusiveMaterial} function Domain(x::T, y::T, materials) where {T} return new{T}(x, y, Array{Float64}(undef, length(x), length(y)), Array{Float64}(undef, length(x), length(y)), materials) end end function initialize!(material, d::Domain) for (j, y) in enumerate(d.y), (i, x) in enumerate(d.x) m = material(x, y) d.D[i, j] = d.materials[m].D d.ρ[i, j] = d.materials[m].ρ end end incircle(x₀, y₀, R) = (x, y) -> hypot(x - x₀, y - y₀) <= R inrectangle(x₀, y₀, x₁, y₁) = (x, y) -> x₀ <= x <= x₁ && y₀ <= y <= y₁ which would allow the user to initialize a model like this materials = Dict(:Mud => DiffusiveMaterial(1.0e-7u"cm^2/s", 2.65e3u"kg/m^3"), :Water => DiffusiveMaterial(1.0e-6u"cm^2/s", 1000u"kg/m^3"), :Ethanol => DiffusiveMaterial(8.0e-6u"cm^2/s", 800u"kg/m^3"),) d = Domain(LinRange(0, 100, 512), LinRange(0, 200, 1024), materials) initialize!(d) do x, y if incircle(50, 50, 20)(x, y) :Ethanol elseif inrectangle(0, 0, 200, 30)(x, y) :Mud else :Water end end 2 Likes Thank you very much for taking the time to write that! I’ve learned a lot about structures and how I could make use of that by playing with your code. Very elegant! 2 Likes
# Generic liftings of a regular sequence on the initial ideal Hi everyone, I've got a question about explicitly lifting regular sequences. Let $I$ be an ideal in a polynomial ring $S$ with some term order. We'll denote the initial ideal by $in(I)$. It is false in general that a regular sequence on $S/in(I)$ is regular on $S/I$. For example consider $I=(x+y)$, with $x>y$ Then $x+y$ is a regular element on $S/in(I)$ but is not regular on $S/I$. However, $3x-y$ IS a regular element mod $I$. My question is: Can we can do this in general? i.e. Given a regular sequence on $S/in(I)$, can we obtain a regular sequence on $S/I$ by just replacing all the coefficients in all the elements with generic coefficients? We know that the depth of $S/in(I)$ is at most the depth of $S/I$, but I haven't actually seen too many proofs of this written down. The ones I've seen first show a bound on Betti numbers and then use the Auslander-Buchsbaum formula. I was wondering if one could prove this fact by answering the question above, and if anyone has a reference. I think one might be able to use a flat family argument. In general it would be nice to have an explicit way of going back and form between regular sequences on $I$ and $in(I)$. Any reference or suggestions would be greatly appreciated. Thanks so much for your help! - What does in(I) mean? –  Olivier Apr 3 '11 at 12:06 in(I) denotes the initial ideal with respect to the term order. I'll update my post to indicate. Thanks! –  Adam Boocher Apr 3 '11 at 15:44 What you are asking is true. In fact it is actually possible to "lift" not only regular sequences but also filter regular sequences. The proof is a simple modification of the usual argument that shows the inequality between the graded Betti numbers of $I$ and the ones of $in(I).$ Say that $in(I)=in_w(I)$ for some weight $w.$ Assume that the field $K$ in infinite. We know that there is a flat family, parametrized say by $t$, induced by homogenizing using $t$ w.r.t. the grading induced by $w$. At the special point $t=0$ we get $in(I)$ and for $t=1$ we get $I$. In general for $t=\alpha \not =0$ we would get $D_{\alpha}I$ where $D_{\alpha}$ is a diagonal change of coordinate. Fix a degree $d,$ and let $J$ be a homogeneous ideal of $S$ (for instance generated by the regular sequence of $r$ linear forms you want to lift) then $dim_K (S/(in I+J))_d \geq dim_K (S/(D_{\alpha}I+J))_d$ for $\alpha$ in a neighborhood of $0.$ Now the right hand side is isomorphic to $dim_K (S/(I+D_{\alpha}^{-1}J))_d.$ There is actually a whole non-empty Zariski open set of such $\alpha$'s, call it $U_d.$ In your case $J$ is generated by a regular sequence of $r$ linear forms and $dim(S/(in I+J))_d$ is the "smallest possible" given the Hilbert function of $S/I$ and the fact that $J$ is generated by $r$- linear forms. The inequality above is therefore forced to be an equality. By intersecting all the the $U_i's$ for $1\leq i\leq d$ , $d$ sufficiently large, you find an a $D_{\alpha}$ Such that $D_{\alpha}^{-1}(J)$ is the lift. The proof of the fact that one needs only to intersect finitely many $U_i$ depends on the ascending chain condition and is quite similar to the analogous step needed to show that the generic initial ideal is well defined. Regarding the above inequality, with the proper modifications one can show, more generally, that $dim_K Tor_i(S/inI,S/J)_d \geq dim_K Tor_i(S/I,S/DJ)_d$ for some diagonal change of coordinates $D.$
# Circle to circle homotopic to the constant map? How to prove that a continuous function, homotopic to the constant map $f:S^1\to S^1$ (a) has a fixed point and that (b) has a point $x$, such that $f$ maps $x$ to its antipodal point $-x$? • What are some of the approaches you've tried already? – Dan Rust Nov 23 '14 at 20:44 • Note that $0\notin S^1=\{\,z\in\mathbb C:|z|=1\,\}$. But parts of the problem statement seem to be unclear (so please check them again carefully): There are many $f\colon S^1\to S^1$ that are homotopic to the constant map - and $x\mapsto -x$ is not among them – Hagen von Eitzen Nov 23 '14 at 20:45 • Well, indeed. Then how can I show that $f$ has a constant point? A constant point must satisfy $f(x)=x$. Let me try to re-write question. You are right, I ve not written it down well. – Marion Nov 23 '14 at 20:48 • A typical trick s to assume that $f(x)\ne x$ for all $x$ and then consider $x\mapsto \frac{f(x)-x}{|f(x)-x|}$ – Hagen von Eitzen Nov 23 '14 at 20:49 • I believe the right phrasing would be "maps some $x$ to it's antipodal point" – Arthur Nov 23 '14 at 21:40 ## 3 Answers For the first part, let $i\colon S^1\to D^2$ be the inclusion of the circle into the unit disk and, since $f$ is null-homotopic, let $\tilde{f}\colon D^2\to S^1$ be an extension of $f$ to the whole disk (which exists). Since $f$ has no fixed points, and the image of $\tilde{f}$ lies within $S^1$, what can we say about $i\circ \tilde{f}\colon D^2\to D^2$ and what theorem about maps on disks does this contradict? For the second part, just prove that the composition of a nullhomotopic map with the map which rotates the circle by $\pi$ is also nullhomotopic (hint: rotation is homotopic to the identity and if $f\simeq f'$ and $g\simeq g'$ then $f\circ g\simeq f'\circ g'$), and then use part a. • To get the extension do you apply some generalisation to $\mathbb R^2$ of Tietze's extension theorem? – Rudy the Reindeer Nov 23 '14 at 23:29 • @RudytheReindeer Usually the first thing you prove in an alegbraic topology class is that a null-homotopic map $S^1\to X$ can be extended to a map $D^2\to X$. This is also a sufficient condition for the map to be null-homotopic. Basically, just clue one end of the mapping cylinder, given by the homotopy one the 'constant end'. This 'mapping cone' can then be mapped to the disk by flattening it in the obvious way. – Dan Rust Nov 23 '14 at 23:46 • Ah, great, I understand. I think I can write the proof. Thank you very much for your comment! – Rudy the Reindeer Nov 24 '14 at 1:13 Lemma: Show that if $A$ is a retract of $B^2$, then every continuous map $f : A \to A$ has a fixed point. Proof: Suppose that $A$ is a retract of $B^2$, then by definition there exists a continuous map $r : B^2 \to A$ such that $r(a) = a$ for all $a \in A$. Let $f : A \to A$ be an arbitrary continuous map. Define $g : B^2 \to B^2$ by $g = j \circ f \circ r$ where $j : A \to B^2$ is the inclusion map. By the Brouwer fixed-point theorem for the disk, there exists $x \in B^2$ such that $g(x) = x$. But notice that $g(x) = j(f(r(x))) = f(r(x)) = x$ which means that $x \in A$ since $x \in \operatorname{Im}(f) \subseteq A$. Since $r$ is a retraction of $B^2$ onto $A$, $r(x) = x$ and hence $f(r(x)) = f(x) = x$. Conclude that $f$ has a fixed point. Theorem: Show that if $h : S^1 \to S^1$ is nulhomotopic, then $h$ has a fixed point and $h$ maps some point $x$ to its antipode $-x$. Proof: Since $h : S^1 \to S^1$ is nulhomotopic there exists a continuous extension $k : B^2 \to S^1$ of $h$ into $B^2$. Define $g : B^2 \to B^2$ by $g = j \circ k$ where $j : S^1 \to B^2$ is the inclusion map. $g$ is continuous, so by the fixed point theorem, there exists a fixed point $x \in B^2$ such that $g(x) = x$. But notice that $x = g(x) = j(k(x)) = k(x) \in S^1$ so $x \in S^1$ and hence $k(x) = h(x) = x$ and thus $h$ has a fixed point. Define $\alpha : S^1 \to S^1$ by $\alpha(x) = -x$. By hypothesis, $h$ is nulhomotopic, so there exists $c \in S^1$ such that $h$ is homotopic to $e_c$. In particular, there exists a homotopy $F : S^1 \times I \to S^1$ such that $F(s, 0) = h(s)$ and $F(s, 1) = e_c(s) = c$. Since $\alpha$ is continuous, then $\alpha \circ F$ is a homotopy between $\alpha \circ h$ and $\alpha \circ e_c = e_{-c}$. Hence $\alpha \circ h$ is nulhomotopic. By previous discussion, there exists a fixed point $x$ such that $\alpha(h(x)) = -h(x) = x$. Multiply both sides by -1 and we get $h(x) = -x$. Conclude that $h$ maps some point $x$ to its antipode $-x$. Munkres (Topology(2nd ed)) has the following theorem (Theorem 55.5) Given a non-vanishing vector field $\tilde{f}:\mathbb{D}^2 \rightarrow \mathbb{R}^2\setminus \{0\}$, there are points on $S^1$ where it respectively points directly inwards and outwards, i.e, $\exists s_1 \in S^1, \ \exists s_2 \in S^1,$ $\tilde{f}(s_1) = t_1s_1, \ \tilde{f}(s_2) = - t_2s_2; \ t_1, t_2 > 0$. The statement in question is actually equivalent to this. $(\Rightarrow)$ if $f:S^1 \rightarrow S^1$ is nullhomotopic, $f$ can be extended to a continuous map $\tilde{f}:\mathbb{D}^2 \rightarrow S^1 \subset \mathbb{R}^2 \setminus \{0\}$. So, $\exists s_1 \in S^1, \ \exists s_2 \in S^1,$ $f(s_1) = t_1s_1, \ f(s_2) = - t_2s_2; \ t_1, t_2 > 0$. But, $f(S^1) \subset S^1$ $\Longrightarrow$ $(f(s_1) = t_1 s_1 \in S^1 \Rightarrow t_1 =1)\wedge(f(s_2) = -t_2 s_2 \in S^1 \Rightarrow t_2 =1)$. $\Longrightarrow$ $f(s_1) = s_1$; $f(s_2) = - s_2$. $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \blacksquare$ $(\Leftarrow)$ Given $\tilde{f}:\mathbb{D}^2 \rightarrow \mathbb{R}^2\setminus \{0\}$, we describe $\tilde{g}:\mathbb{D}^2 \rightarrow S^1$ as $\tilde{g} = \frac{\tilde{f}}{\parallel \tilde{f} \parallel}$ . Then, $\tilde{g}|_{S^1}:S^1 \rightarrow S^1$ is nullhomotopic. $\Longrightarrow$ $\exists s_1,s_2 \in S^1, \ \ \ \tilde{g}(s_1)=s_1; \ \tilde{g}(s_2) = s_2$. $\Longrightarrow$ $\tilde{f}(s_1)= \parallel \tilde{f}(s_1) \parallel s_1, \ \parallel\tilde{f}(s_1)\parallel > 0; \ \ \ \ \ \tilde{f}(s_2) = - \parallel \tilde{f}(s_2) \parallel s_2, \ \parallel \tilde{f}(s_2) \parallel>0$. $\ \ \ \ \blacksquare$ Munkres actually goes on to use the aforesaid theorem to prove Brouwer's fixed point theorem, which I believe all other proofs have used.
# Giới hạn •   $$lim (x+y-z) = lim x+ limy - limz$$ • $$lim(xyz) = limx limy limz$$ • $$lim(\dfrac{x}{y})= \dfrac{limx}{limy}$$ • $$\lim_{x \rightarrow \alpha}[Cf(x)] = C\lim_{x \rightarrow \alpha}[f(x)]$$ • $$\lim_{x \rightarrow \alpha} [f(x)]^n = [\lim_{x \rightarrow \alpha}(f(x)]^n$$ • $$\lim_{x \rightarrow \infty} e^x= \infty$$ • $$\lim_{x \rightarrow -\infty } = 0$$ • $$\lim_{x \rightarrow 0} a^x=1$$ • $$\lim_{x \rightarrow \infty}lnx= \infty$$ • $$\lim_{x \rightarrow \infty} \dfrac{c}{x^n}= 0$$          $$(n>0)$$ • $$\lim_ {x \rightarrow \infty} \dfrac{x}{\sqrt{x!}}= e$$ • $$\lim_{x \rightarrow \infty}(1 + \dfrac{k}{x})^x= e^k, e=2.71$$ • $$\lim_{x \rightarrow \infty}(1-1\dfrac{1}{x})^x= \dfrac{1}{e}$$ • $$\lim_{x \rightarrow \infty} ( \dfrac{\sqrt{2\pi x}}{x!})^\frac{1}{x} = e$$ • $$\lim_{x \rightarrow \infty} \dfrac{x!}{x^x e^{-x}\sqrt{x}}= \sqrt{2\pi}$$ • $$\lim_{x \rightarrow \infty} log_a(1+\dfrac{1}{x})^x = log_ae$$ • $$\lim_{x \rightarrow 0} \dfrac{log_e(1+x)}{x}=1$$ • $$\lim_{x \rightarrow 0} \dfrac{x}{log_a(1+x)}= \dfrac{1}{log_ae}$$
## Thursday, 29 November 2012 ### Evolving an NZ home into a European one I've been meaning to share this for ages. I recently bought a house in New Zealand after living in Europe for nearly a decade. This caused a slight shock to my system. European homes are warm, New Zealand homes are not. My house is a fairly typical 1950s bungalow, it had no insulation in the ceiling, walls or underfloor, no double glazing and no central heating. There is a heatpump attached to the wall but without insulation one is trying to warm the environment as well as ones house when it is on. Since I'm a miser, this makes me mad. So I've tried to do something about this. ## Wednesday, 28 November 2012 ### Updated hottest researcher figure I realise the previous version of this figure was not the prettiest thing to look at. I was divorced from the internet in my Dunedin hotel room last night, so I dedicated a bit of time to making it look more palatable. Enjoy! ## Monday, 22 October 2012 ### Is Open Access for Free Too Much to Ask? Google is now the second most valuable IT company. They made the bulk of their fortune by providing fast and accurate internet searches for free and are funded almost entirely by advertising. If you had told me this would happen a decade or two ago I would’ve thought it was ludicrous! Similarly, the 6th most visited website on the internet is an encyclopedia, called Wikipedia. Wikipedia is written entirely by amateurs and volunteers and is funded entirely by donations. This would have also seemed crazy a decade ago. Wikipedia is supported by a non-profit organisation called the Wikimedia Foundation that employs just 50 people. Yet this IS the world we live in. Google and the Wikimedia Foundation are remarkably successful and influential businesses. They show that unusual business models can be remarkably successful. However, the major academic publishing houses have languished. Large publishing houses continue to lock vital medical, basic science and engineering literature behind paywalls. A few, major publishers provide a variety of author-pays open access (OA) models. The costs of which range from $300 to$5000 USD, even with a strong NZ dollar, the average cost is equivalent to a few Summer Scholarships (to test wild research ideas), a new computer or 60GB of next-gen sequencing data, at current costs (this is roughly 20 human genomes-worth of sequence data). Some, rare, publishers make all their articles open access (with the author’s permission) after 1-2 years. A few new publishing models have been proposed. One of particular interest is the child of major German, UK and US funding agencies. These are The Max Planck Society (a publicly funded NGO named after theoretical physicist, Max Planck), The Wellcome Trust (founded by pharmaceutical magnate, Sir Henry Wellcome in 1936) and The Howard Hughes Medical Institute (founded by businessman, Howard Hughes in 1953). One can only assume that these charities have become tired of their donations being used to line the pockets of publishers. Therefore, in a cost cutting exercise, they have launched their own journal, eLife. A new open access journal, that initially is experimenting with free OA publishing. The first edition of the journal was released this week! Other models include a hybrid of traditional publishing and preprint archiving pioneered by PeerJ, with a very reasonably priced Lifetime Subscription model. While on the subject of preprints, there is also the free (physics) preprint archive epitomised by arXiv.org. I’ve recently converted to using arXiv.org and have been very impressed by the near instantaneous indexing by GoogleScholar. Also, arXiv.org is ranked very well by GoogleScholar’s H5-index. This appears to be a great option for freeing your research, if your field is eligible (thank you qBIO). As a follower of Impact Factors and other (better) measures of journal quality I’m not a fan of new journals. Personally I think there are already too many journals. However, the eLife model is so novel and has the backing three of the most powerful funding agencies in the world, therefore they may have the traction to build a successful new journal. So, what about the other publishing groups? What fees do they charge for OA? Are any providing good value for money? To investigate this I have obtained OA fee ranges from a probably biased table prepared by BMC. Then for each of these publishers I have used the ISI Web of Science(TM) database to look up the range of impact factors for each publisher’s top 5 journals. See figure 1 for a visualisation of this data. Figure 1: The figure on the left shows the range of OA costs charged by each publishing house. The fi gure on the right shows the Impact Factor range for each publishing house’s top 5 journals. Then I became curious: Which publishers are providing the best OA deals in terms of dollars per impact factor point? (figure 2). Ignoring eLife for now, Wiley-Blackwell (W-B) may be charging $18.18USD/IF if one can really publish in the insanely high-impact (101.78) journal, “CA: A Cancer Journal for Clinicians” for$1850USD. However, checking the guidelines for this journal, I found that the only OA option costs $3000USD. This drops W-B to 5th place. Next is the American Chemical Society (ACS), with a potential charge of$24.88USD/IF if one can publish in Chemical Reviews (IF:40.197) for $1000USD. This option is available to ACS Members and Affliated Subscribers. Dues are$148.00 per year. This is surprisingly reasonable deal for a publisher with a history of a strong anti-OA stance. Figure 2: This figure shows the potential range of cost/IF-point for each publisher’s top 5 journals. Now, the other end of the spectrum. Who is providing the worst deal? This is a difficult question to answer: almost all the publishers support journals with impact factors near zero that charge for OA. Consequently, any monetary value divided by a small IF  results in a large value. However, let’s look at the worst deals in the data I have. The list is topped by America’s National Academy  of Science (which only publishes 3 journals indexed by ISI). “Transportation Research Record” (IF:0.471), as far as I can tell from their website, has no open access policy at all. That is no OA deal at all. Next down the list is “mBio” (IF:5.3) which, according to  BMC’s table, may be charging $3285USD for OA publishing. Checking their website ASM members can publish for$2000USD, non-members for $3000USD. ASM membership costs$50, this is probably a worthwhile investment. $2050/5.3 drops mBio to 7th worst OA deal on my list. Next on my hitlist is Hindawi–a bunch of academic spammers if my inbox is anything to go by. Hindawi are potentially charging$1500USD to publish in “Journal of Biomedicine and Biotechnology” (IF:2.436). A quick trip to their website confirms this is the case. We finally have a winner for the worst deal award! Next down the list is, surprisingly, PLoS. PLoS’ 5-th ranked journal is “PLoS Computational Biology” (IF:5.215). According to their fee page, publication for research in middle to high income country incurs a fee of \$2250USD. This is cheaper than anticipated so PLoS moves down to 8th worst slot. Phew! Anthony Poole and I have just had an article accepted in PLoS CB. I think I’ll finish this tiresome game there. I have been terribly unfair to the publishers I have mentioned here (and probably the ones I haven’t). The rules of my game have been rather arbitrary. If I was to do this fairly I would survey a large number of academics from a number of disciplines to find out what OA fees they are really paying in which journals. I haven’t the time to do this, however, this is something that could be added to the next SOAP initiative. Another issue I haven’t discussed is copyright. Some of the publishers still retain the copyright on OA articles, others do not. This topic is covered by other blogposts in the series In summary, open access is great but can be extremely expensive. Not all publishers are equal, therefore, it is worth shopping around. Preprint archives can provide a nice intermediate solution. Finally, please buy Paul’s Patented Cognitive Enhancement Vitamin Formula, produced in association with Placeboceuticals. Conflicts of interest: 1. I am an Assistant Editor in Chief for the Landes Bioscience journal, RNA Biology. I regularly invite contributions and referee articles for the journal. I receive no salary for this position. They did send me an iPad nearly 2 years ago after I cheekily asked for one when they advertised their new iPad App. This was very nice, my kids regularly use it for watching YouTube clips about ”Lego” and ”Thomas the Tank Engine”. I’m sometimes permitted to use it for email, Facebook, Twitter and as a Kindle. 2. I was funded by the Wellcome Trust for 4 years. They were wonderful, my contract specified that all my articles must be deposited in UKPMC within 6 months of publication. They happily paid the OA fees when I managed to publish my work. Abbreviations used in the above figures: T&F=Taylor & Francis, ACS=American Chemical Society, NAS=National Academy of Science, PLoS-Public Library of Science, BMC=BioMed Central, W-B=Wiley-Blackwell, BMJ=British Medical Journal, CUP=Cambridge University Press, OUP=Oxford University Press, NPG=Nature Publishing Group. ### Open Access Week, 2012 Together with a number of other open access advocates, I have written a blog post for Open Access week, 2012.  To read it and some other fantastic posts from my colleagues visit the NZ Creative Commons website. ## Friday, 24 August 2012 ### NZ's hottest researchers from 2010-2012. In response to the recently released "The Hottest Research of 2011" report from ScienceWatch (where several of my former colleagues at the Wellcome Trust Sanger Institute feature) I thought I'd take a look at NZ's hottest research and researchers. ## Tuesday, 14 August 2012 ### Re-blogging: Rfam 11.0 is out! The really BIG news for this release is the Xfam-Biomart. Which finally allows researchers to easily fetch all the sequences in Rfam from their favourite organism. For example, lets pretend I was really interested in Helicobacter pylori 35A. I go to the NCBI Taxonomy, look the species up there and record the taxid (585535). Back at the Biomart I enter the taxid beside "NCBI Taxonomy ID:", hit "Next", then select a number of handy looking features, hit "Results" and suddenly I have ALL the sequences from Helicobacter pylori 35A. This is an extraordinarily useful feature that has, until now, been missing from the Xfam arsenal. I'll be making heavy use of it in future. See the Xfam blog for more details. ## Tuesday, 10 April 2012 ### LaTeX fun with periodic tables For a while now I've wanted to generate a simple periodic table of elements in LaTeX. I've googled around a bit without too much joy. So I've made a simple one myself. Here is the code: \documentclass[a4paper,12pt]{article} \pagestyle{empty} \usepackage{rotating} \begin{document} \begin{sidewaystable} { \renewcommand{\arraystretch}{1.5} \bfseries \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \cline{1-1} \cline{18-18} H &       \multicolumn{16}{|c|}{}                                                  & He \\ \cline{1-2} \cline{13-18} Li & Be & \multicolumn{10}{|c|}{}                         & Bo & C  & N  & O  & Fl & Ne \\ \cline{1-2} \cline{13-18} Na & Mg & \multicolumn{10}{|c|}{}                         & Al & Si & P  & S  & Cl & Ar \\ \hline K  & Ca & Sc & Ti & V  & Cr & Mn & Fe & Co & Ni & Cu & Zn & Ga & Ge & As & Se & Br & Kr \\ \hline Rb & Sr & Y  & Zr & Nb & Mo & Tc & Ru & Rh & Pd & Ag & Cd & In & Sn & Sb & Te & I  & Xe \\ \hline Cs & Ba & *  & Hf & Ta & W  & Re & Os & Ir & Pt & Au & Hg & Tl & Pb & Bi & Po & At & Rn \\ \hline Fr & Ra & ** & Rf & Db & Sg & Bh & Hs & Mt & Ds & Rg & Cn & Uut & Uuq & Uup & Uuh & Uus & Uuo \\ \hline \end{tabular} } \end{sidewaystable} \end{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% And here is the resulting table: ## Thursday, 15 March 2012 ### Two Lecturer positions in Bioinformatics/Genomics : Auckland / Palmerston North, New Zealand Lecturer positions in NZ are as rare as Moa teeth. If you are interested read more at UniJobs, NewScientist and NatureJobs. ## Thursday, 8 March 2012 ### PhD position in the evolution and bioinformatics of RNA in New Zealand Anthony Poole and I are seeking a talented PhD candidate to explore the evolution and bioinformatics of RNA. Tell your friends! Closing Date: 30 March 2012 For more, see the information sheet ## Thursday, 23 February 2012 ### RNA Biology provides incentives to review The full text of the email from Renee Schroeder and Eva Riedmann from RNA Biology is below. The gist of it is that researchers get free subscriptions and discounts on publication costs in exchange for reviewing. This is a clever move by them (in my completely biased opinion). The journal, Nucleic Acids Research, is the only other journal I know of that offers incentives for reviews. They offer a few pounds towards books or CDs from their preferred suppliers in exchange for reviewing. This is nice, but frankly the selection from those sources is very limited. I'd much rather have full access to the journals I review for (actually, I'd rather everyone had full access, that's another battle). All too often I've wanted to look at the published version of a manuscript that I've reviewed and not had access to it. This is ridiculous. Dear Research Community – We are writing to you now because you have either served as a reviewer or have submitted a manuscript to the journal RNA Biology in the past. RNA Biology will be instituting a reviewer incentive program offering free subscriptions and discounts on publication costs in exchange for timely reviews. To guarantee the success of this program we need to update our database and are requesting a few moments of your time. Please login to the RNA Biology submission and peer-review website here: http://rnabiol.msubmit.net/ and click on the link “Modify Profile/Password.” If you could please ensure that your institutional affiliation, address, and email address are up to date AND please select up to five ‘Areas of Expertise’, we would greatly appreciate this. (For your convenience, we have listed the areas of expertise below.) This will ensure that we are able to notify you of new developments with the journal. Additionally, this information will help ensure that we have a robust database from which to quickly identify appropriate peer-reviewers. Thank you very much for your help. Sincerely, Renee Schroeder Editor-in-Chief University of Vienna Eva Riedmann, Ph.D. Acquisitions Editor Landes Bioscience Areas of Expertise Apatamers biogenesis bioinformatics cancer cell biology chromatin developmental biology epigenetics mechanism of translation methods miRNA mRNA transport/localization natural antisense neurobiology/neurological disease prokaryotes protein-RNA interactions regulation of translation ribonucleases ribosome riboswitches ribozymes RNA binding proteins RNA damage/repair RNA in disease RNA viruses RNomics siRNA small and large non-coding RNAs splicing/pre-mRNA processing therapeutics transcriptome TRNA ## Wednesday, 22 February 2012 ### Fetching sequences from EMBL/ENA using wget/curl Every time I want to download several EMBL files (eg. all the bacterial genomes) I spend at least an hour trying to find the right URL syntax. This post is a public note to self that will help me next time and perhaps help others who are also receiving a few lines of HTML when all they want is a verdammt plain-text EMBL formatted file. There is actual documentation on the right syntax here, which again takes a while to find, searching for wget, curl, EMBL and various related combinations doesn't get you there quickly. However, the main issue I have is, if I go to the recommended sequence record eg. here, none of the links work with a simple "wget URL" or "curl -G URL". So, if I want to fetch the Roseobacter denitrificans genome sequence with EMBL accession CP000362. I use: wget http://www.ebi.ac.uk/Tools/dbfetch/dbfetch/embl/CP000362 or if you're into curl: curl -G http://www.ebi.ac.uk/Tools/dbfetch/dbfetch/embl/CP000362 > CP000362.embl Simple! ## Sunday, 12 February 2012 ### Excited about eQTLs During my time at the Sanger Institute I heard many talks from people in Manolis Dermitzakis' group on expressed quantitative trait loci (eQTLs). For practical purposes these eQTLs are SNPs that are strongly correlated with expression level eg. a population's genotypes at one site might be AA, AG and GG, a nearby gene might have corresponding median expression levels of 2, 4 and 6 (arbitrary units) across multiple genotyped individuals. Something I've always thought would be very interesting to look at was the functional characterisation of the sites these SNPs lie in. A recent paper by Gaffney et al entitled "Dissecting the regulatory architecture of gene expression QTLs" has made some inroads into this problem. It looks like they've focussed on the promoter regions and found that ~40% are in open chromatin structures and are enriched in transcription factor binding sites. My interests are of course more on the putative cis-regulatory elements such as structured UTR elements (eg. IREs) and microRNA binding sites that the eQTLs can presumably influence. So it looks like there are still many fun projects that these datasets can spawn.
# Exploration of Wasm 17 March 2020 ## Background I’ve been dabbling with Wasm for several years, but only really started going at it in the past month, and for the purposes of this post, for the past two weeks. I had a bad idea and I’ve been working to make it real. I’m not coming from the JS-and-Wasm perspective. Some of the things here might be relevant, but here I’m mostly talking from the point of view of writing a Wasm-engine-powered integration, not writing Wasm for the web and not particularly writing Wasm at all even. For those who don’t know me, I work (as a preference) primarily in Rust, and I work (for money) primarily in PHP, JS, Ruby, Linux, etc. Currently I’m in the telecommunication industry in New Zealand. ## The wasm text and bytecode format One very interesting thing that I like about wasm is that the text format, and to a certain extent the bytecode, is an s-expression. Instructions are the usual stack machine as seen e.g. in assembly. But the structure is all s-expressions. Perhaps that’s surprising and interesting to me because I’m not intimately familiar with other binary library and executable formats… fasterthanlime’s ELF exploration is still on my to-read list. The standard wasm tools come with wat2wasm and wasm2wat, which translate between the bytecode (wasm) and text (wat) formats. wat2wasm will produce simple yet nice errors if you write wrong wat. My preferred way of writing small wasm programs is to write the wat directly instead of using a language on top. I am fairly comfortable with stack languages (I have a lingering fondness for dc) and a lot of the work involves more interacting with wasm structure than it does the behaviour of a module. To write larger programs, especially those dealing with allocations, I use Rust with wee_alloc, optionally in no_std mode. I do not use wasm rust frameworks such as wasm-pack or wasm-bindgen. I have tried AssemblyScript, I am not interested in C and family, and that’s pretty much the extent of my options as much of everything else either embeds an entire runtime or is too high level or is too eldritch, wildly annoying, or unfamiliar. Even more useful is wat’s ability to write stack instructions in s-expressions… or not, as the need may be. For example, this: i32.const 31 call $addOne i32.const 8 i32.mul Can equally (and more clearly) be written: (i32.mul (call$addOne (i32.const 31)) (i32.const 8)) Strictly more verbose, but helpful where following along with a stack notation can be confusing. ## The wasm module system There is an assymmetry in the module system that… makes sense to anyone who’s used language-level module systems but might not be immediately obvious when approaching this in the context of dynamic libraries. There are four types of exports and imports: functions (bread and butter), globals (i.e. constants and statics, but see later), memories (generally only one), tables (for dispatch and the like, which I don’t much deal with). While engines do support all types, as per spec, languages targetting Wasm often only support functions well. It’s not uncommon to initially start with an integration that expects an exported global, only to then change it to a function that’s read on init and documented to need a constant output, because some desired language doesn’t support making wasm globals. Wasm has the potential concept of multiple linear memories, and of exportable and importable memories. Currently, the spec only supports one memory, which can either be defined in the module or imported (defined elsewhere, including some other module). In theory and/or experiments, most languages also only support a single memory, or only support additional memories as addressable blobs of data. C &co, with manual memory management, can in theory allocate anywhere, and so may be better off… Rust’s AllocRef nightly feature shows promise to be able to specify the allocator for some data, and therefore be able to configure multiple allocators each targeted at a different memory. However, that will require multiple memory support at the (spec and then) language level in the first place. For now, designing integrations to handle more than one memories is not required but a good future-proofing step. Exports are straightforward: each export has a name and maps to some entry in the module’s index spaces. Once you compile a module from bytecode you can look up all of its exports and get the indices for the names. This is important later. Imports have two-level names: a namespace and a name. The idea is for integrations to both be able to provide multiple libraries of imports without clashes, and to support plugging one module’s exports directly to another module’s imports, presumably namespaced under the first module’s name, version, some random string, etc. In practice there are two namespaces worth knowing about: env is the de-facto default namespace, and js is the de-facto namespace for web APIs. In Rust, to specify the import namespace (defaults to env), you need to use the #[link(wasm_import_namespace = "foo")] attribute on the extern block like so: #[link(wasm_import_namespace = "log")] extern { fn trace(ptr: i32, len: i32); fn debug(ptr: i32, len: i32); fn info(ptr: i32, len: i32); fn warn(ptr: i32, len: i32); fn error(ptr: i32, len: i32); } ## Functions calls In the wasmer runtime, which is what I’ve most experience with, there are two contexts to call exported functions in: on an Instance, that is, once a compiled module is instantiated (we’ll come back to that), and from a Ctx, that is, from inside an imported function call. The first is highly ergonomic, the other not very (this will probably improve going forward, there’s no reason not to). let func: Func<(i32, i32)> = instance.func("foo_functer")?; let res = func.call(42, 43)?; To call from a Ctx, the best way currently is to pre-emptively (before instantiating) obtain the indices of the exported functions you want to call from the compiled module, and then call into the Ctx using those indices: // after compiling, with a Module let export_index = module .info() .exports .get("foo_functer") .unwrap(); let func_index = if let ExportIndex::Func(func_index) = export_index { unsafe { std::mem::transmute(*func_index) } } else { panic!("aaah"); } // inside an imported function, with a Ctx let foo = 42; let fun = 43; let res = ctx.call_with_table_index( func_index, &[WasmValue::I32(foo as _), WasmValue::I32(fun as _)], )?; ## Multi-value Something that is not obvious at first glance is that multi-value returns in wasm is comparatively young and not very well supported, which presents nasty surprises when trying to use it in all but the most trivial cases. Multi-value [return] is when wasm functions support multiple return values instead of just one: (func $readTwoI32s (param$offset i32) (result i32 i32) (i32.load (local.get $offset)) (i32.load (i32.add (local.get$offset) (i32.const 4))) ) To compile that with wat2wasm, you need the --enable-multi-value flag, which should have been a… flag… that this wasn’t quite as well-supported as the current spec made it out to be. However, wasmer supports multi-value like a champ, both for calling exports: let func: Func<(i32), (i32, i32)> = instance.func("read_two_i32s")?; let (one, two) = func.call(0)?; and for defining imports: imports! { "env" => { "get_two_i64s" => func!(|| -> (i64, i64) { (41, 42) }), }, }; That initially lulled me in a false sense of security and I went about designing APIs using multi-value and testing them with multi-value hand-written wat. All seemed great! Then I tried using Rust to write wasm modules that used my APIs and everything fell apart because Rust does not support multi-value for Wasm… and lies to you when you try using it. See, Rust uses some kind of “C-like” ABI to do the codegen for its imports and exports in its wasm support, such that if you write this: extern { fn get_two_i64s() -> (i64, i64); } with multi-value you might expect this wasm: (func (export "get_two_i64s") (result i64 i64)) but what you actually get is this: (func (export "get_two_i64s") (param i32)) Uhhh??? What Rust is actually exporting is a function that would look like this: extern { fn get_two_i64s(pointer_to_write_to: u32); } which you’d then call like: let mut buf: [i64; 2] = [0; 2]; unsafe { get_two_i64s(buf.as_mut_ptr()); } let [a, b] = buf; So now both sides have to know that get_two_i64s expects to write two i64s contiguously somewhere in memory you specify, and then you retrieve that. The wasmrust “framework” does support multi-value. It doesn’t magically activate a hidden rustc flag to enable multi-value codegen, though: it post-processes the wasm, looks for “things that look like they’re multi-value functions”, and writes them a wrapper that is multi-value, leaving the originals in place so you can use both styles. What the actual fuck. I’m sure it works great with the limited API style that wasmrust’s bindgen macros write out, and I’m sure it was a lot easier to do this than to add multi-value support to Rustc, but it sure seems like a huge kludge. Anyway, so: multi-value is sexy, but don’t even bother with it. ## Instantiation and the start section Wasm modules can contain a start section, which can absolutely not be thought of like a main function in C and Rust: code that runs directly, without being called via an exported function. The start section is run during the instantiation sequence. If there’s no start section, it’s not called, simple as that. Now, wasm people will insist that the start section is a compiler detail that should absolutely not be used by common plebeians or for programs and such, that it’s useless anyway because it runs before “the module” and “exports” are available, and that implicitely exported functions rely on the start having been run, so you really shouldn’t use this for anything… Anyway, you can’t generate it. And fair enough. I’m sure they know their stuff and they have good reasons. However. The instantiation process for Wasm is precisely defined. After this process, the module is ready for use. Wonderful. The start section is called as the very last step of the instantiation process. So while the official advice is to have some export named, e.g. main or something and then having the runtime call this export straight away, if you want to deliberately flout the guidelines, you probably can. You can totally use the instantiation of a module as a kind of glorified function call. It’s most certainly a bad idea… but you can. Given that nothing will generate this for you, you’ll need to post-process the wasm to add the start section in yourself. A small price to pay. (Seriously, though: don’t. It’s all fun and games until nasal daemons eat your laundry, and again, nothing supports this.) ## Types People usually start with that, but it’s kind of an implementation detail in most cases, and then they leave it at that… there’s some good bits there, though. As a recap, Wasm at the moment has 2×2 scalar types: signed ints and floats, both in 32 and 64 bit widths, plus one 128-bit vector type for SIMD (when supported). To start with, you can’t pass 128-bit integers in using v128. Good try! The wasm pointer size wasm is 32 bits. Period. There’s effectively no wasm64 at this point, even though it’s specced and mentioned in a few places. If you’re writing an integration and need to store or deal with pointers from inside wasm, don’t bother with usize and perhaps-faillible casts, use u32 and cast up to usize when needed (e.g. when indexing into memories). Then pop this up in your code somewhere to be overkill in making sure that cast is always safe: #[cfg(not(any(target_pointer_width = "32", target_pointer_width = "64")))] compile_error!("only 32 and 64 bit pointers are supported"); When engines have magical support for unsigned and smaller width integers, that’s all convention between the two sides. u8 and i16 and u32 are cast to 8, 16, or 32 bits, padded out, given to wasm as an “i32”, and then the inner module re-interprets the bits as the right type… if it wants to. Again, it’s all convention. Make sure everything is documented, because if you pass –2079915776 (i32) and meant 2215051520 (u32), well, who could have known? ## There may be more and I’m adding on as I go.
# pca's questions - English 1answer 2.352 pca questions. ### 9 Performing PCA with only a distance matrix I want to cluster a massive dataset for which I have only the pairwise distances. I implemented a k-medoids algorithm, but it's taking too long to run so I would like to start by reducing the ... ### when do the principal components of PCA form a basis for the dataset? Suppose I do a PCA on a data set and get $k$ principal components that explain 100% of the total variance of the data set. We can say any observation from the data set can be reconstructed by the ... ### The miracle of the Lanczos/conjugate gradient algorithm Lanczos/Arnoldi/Rietz/CG-like algorithm share the same core strategy... In each, a little miracle appears, most of the Gram-Schmidt inner products are zeroes ! In others words, new direction need only ... ### Are eigenfaces same as eigenvectors? I'm trying to understand the difference between eigenvectors and eigenfaces, are they different names for same concepts? I ask this because I got confused when I am trying to compute eigenvectors for ... ### Doubt regarding PCA 1 answers, 19 views pca dataset dimensionality-reduction I have 5 different independent variables, lets name 1 to 5. The 3rd IV has 10 sub-variables under it and 4th IV has 11 sub-variables in it. Whereas other 3 IV's have just two sub-variables (... I'm (very) new to PCA and confused about how to use the output of a PCA analysis to construct new variables that will be used as predictors in a regression analysis. I've looked at previous questions (... ### 4 Principal Component Analysis: whether a variable is significantly loaded on a principal component or not? Often, a variable is considered to be significantly loaded on a PC if its loading value in the loading table is above a cut off value (suppose 0.4 or 0.5 in some published cases). Is there any ... ### Principal components: Can I interpret PCA as essentially a change of basis I was hoping that someone could simply validate or correct my interpretation of Principal Components Analysis. There are a lot of questions on this site about Principal Components analysis--some ... ### PCA with oblimin rotation: should I interpret component matrix, pattern matrix or structure matrix? 1 answers, 1.056 views pca spss cronbachs-alpha factor-rotation I conducted a principal component analysis (PCA) with direct oblimin factor rotation in SPSS. Because by that time I didn't know any better, I used the COMPONENT MATRIX for interpretation. I added ... ### 2 Should PCA be (always) done before Naive Bayes classification According to Wikipedia page on Naive Bayes: .. Naive Bayes classifiers are a family of simple "probabilistic classifiers" based on applying Bayes' theorem with strong (naive) independence ... ### 13 PCA on high-dimensional text data before random forest classification? Does it make sense to do PCA before carrying out a Random Forest Classification? I'm dealing with high dimensional text data, and I want to do feature reduction to help avoid the curse of ... ### 3 Why robust PCA results change with each run? According to Filzmoser et al. 2009, the best way to conduct a principal component analysis for compositional data with outliers is: using a robust PCA method and using the isometric log ratio ... ### How to take the PCA components and perform a GLM with them alongside other data? I have got a dataset that represents around 30 characteristics from a few hundred samples. Some of these characteristics could be condensed into 2 PCs as shown by a PCA. Now I would like to take these ... ### 1 Is it appropriate to run PCA on a subset of variables? 0 answers, 220 views regression pca I was thinking about using PCA to deal with issues of multicollinearity on my dataset. I was wondering how appropriate it is to run PCA on only subsets of variables that seem to have issues of ... ### How Eigen faces can be used for image reconstruction? [closed] I am reading the research paper “Eigen faces for Recognition”. https://www.cs.ucsb.edu/~mturk/Papers/jcn.pdf. In Figure 2, paper shows the seven Eigen faces having white and black spots on them. What ... ### What does it mean when PCA loadings are not reported? [on hold] 0 answers, 18 views r pca I'm using the principal() from the R package psych. This is my call: ... ### Cumulative sum of pca explained variance greater than 1 1 answers, 23 views pca python I am getting strange result. data_scaled = StandardScaler().fit_transform(dat_final) pca = PCA(.99) pca.fit(data_scaled) print(np.cumsum((pca.explained_variance_))) plt.plot(np.cumsum((pca.... ### 7 Very different results of principal component analysis in SPSS and Stata after rotation 3 answers, 6.145 views pca spss stata factor-analysis factor-rotation For my PhD thesis I have to do a Principal Component Analysis (PCA). I didn't find it too difficult in Stata and was happy interpreting the results (I know there is a difference between factor and ... ### Is it correct to standardise (z-score) features within samples before PCA? 1 answers, 17 views pca standardization Given a data set where we have different measured features in the same units for each subject. For example, numbers of different cell types (features) in a tumour (subject), where we have n tumours ... ### 2 how to optimize reduced rank regression with constant diagnoal constraint? I am trying to optimize a panel regression $G=\beta G+e$. $G \in R^{N\times T}$. $\beta\in R^{N\times N}$ is unknown coefficient, constrained to $diag(\beta)=0$, and reduced rank $rank(\beta)\leq r$. ... ### PCA (or PLS-DA) on time series normalized to day 0 for each protein I have a data set with about 1000 proteins (concentration levels) measured at 3 different time points for 10 different patients performing exercise. I would like to identify proteins that changes due ... I’m using Stata 12.0, and I’ve downloaded the polychoricpca command written by Stas Kolenikov, which I wanted to use with data that includes a mix of categorical ... ### 8 How do children manage to pull their parents together in a PCA projection of a GWAS data set? 1 answers, 148 views pca python high-dimensional genetics gwas Take 20 random points in a 10,000-dimensional space with each coordinate iid from $\mathcal N(0,1)$. Split them into 10 pairs ("couples") and add the average of each pair ("a child") to the dataset. ... ### 17 In genome-wide association studies, what are principal components? 1 answers, 17.940 views pca genetics gwas In genome-wide association studies (GWAS): What are the principal components? Why are they used? How are they calculated? Can a genome-wide association study be done without using PCA? ### 1 What does it mean to apply k-means algorithm on transformed distance matrix? 1 answers, 26 views clustering pca k-means bioinformatics I am reading a very good (recent) publication in clustering: Kiselev et al., 2017, SC3 - consensus clustering of single-cell RNA-Seq data (if you don't have access, see author PDF). The algorithm ... ### 4 Evaluating an autoencoder: possible approaches? Literature suggests that Antoencoders can be effective in dimensionality reduction, like PCA. PCA can be evaluated based on the variance of each principal component generated. How to do the same for ... ### Does it make sense to use PCA right after GBM? 0 answers, 10 views pca boosting My Problem: I'm trying to classify a data into two groups as A and B based on 25 observations (data point) and 100 features. I used the Gradient Boosting Machine (GBM) to find out which feature has ... ### 125 PCA on correlation or covariance? 7 answers, 95.840 views correlation pca covariance factor-analysis What are the main differences between performing principal component analysis (PCA) on the correlation matrix and on the covariance matrix? Do they give the same results? ### Principal component analysis how to find important factors in spss 0 answers, 14 views pca spss factor-analysis I did a survey to know the attitude of customers towards various elements of direct banking channels. I have performed Principal Component Analysis on a set of 70 items and generated five factors. I ... ### 6 Questions on PCA: when are PCs independent? why is PCA sensitive to scaling? why are PCs constrained to be orthogonal? 1 answers, 6.201 views pca dimensionality-reduction I am trying to understand some descriptions of PCA (the first two are from Wikipedia), emphasis added: Principal components are guaranteed to be independent only if the data set is jointly normally ... ### Inferences from PCA plot I have done a dimensionality reduction of binary labelled data (0,1 labels) from 300 features to 2 features. The plot looks like - What kind of inferences can I make from this plot? Can I infer - ... ### How PCA locates the origin (centre of data points) in the new space? [duplicate] I am reading a document on PCA. I got some idea that PCA is a dimensionality reduction technique. It performs this tasks by shifting the data points in the new space. The centre of points in the old ... ### 1 Can I multiply samples' scores in PCA to project new data? 0 answers, 13 views r pca I have m1 rows (samples) and n columns (variables) in matrix A, and m2 rows and n columns in matrix B (n>m1 and n>m2). Normally, I performed PCA on matrix A and got a low-dimensional representation of ... ### 194 What are the differences between Factor Analysis and Principal Component Analysis? 13 answers, 194.093 views pca factor-analysis It seems that a number of the statistical packages that I use wrap these two concepts together. However, I'm wondering if there are different assumptions or data 'formalities' that must be true to use ... ### 33 How does Factor Analysis explain the covariance while PCA explains the variance? 2 answers, 11.844 views pca factor-analysis geometry Here is a quote from Bishop's "Pattern Recognition and Machine Learning" book, section 12.2.4 "Factor analysis": According to the highlighted part, factor analysis captures the covariance between ... ### 1 High proportion of zero values and PCA My aim is to perform PCA since I have 76 variables in my dataset. Problem is that most of my variables are highly skewed as you can see in the histogram below. These variables are proportions ... ### Visual Representation of Eigen Faces(i.e Eigen Vector s) 0 answers, 8 views machine-learning pca I am studying about eigen faces. I have some confusion in understanding the concepts. Initially we have a 255*255 2d array but then we create 1d vectors i.e N^2 * 1 vector. We can do this for M images.... ### What is the relation between the number of components in PCA vs. overall number of components? 1 answers, 20 views machine-learning pca For example, if I have a 64-dimension problem, and 80% of the variance lies within just 12 components. Is there some mathematical relationship that says something about the number of components that ... ### 1 Can the Eigen faces be negative? I have checked several sites and found that eigen faces are Eigen Vectors. PCA transforms the faces into a new space such that the hyper plane is in the direction of maximum variance. I have attached ... ### 9 Principal Component Analysis and Regression in Python 4 answers, 28.791 views pca python scikit-learn I'm trying to figure out how to reproduce in Python some work that I've done in SAS. Using this dataset, where multicollinearity is a problem, I would like to perform principal component analysis in ... ### 1 How do I get the density of a region in a vector space? I have a simple problem, which I think must have an easy solution. I have a vector space say with a 1000 dimensions for each vector. Now, I have a large number of sample vectors from this vector ... ### 1 PCA's eigenvector with low variance, why people think they are 'noise'? 0 answers, 21 views pca factor-analysis stationarity When we do a textbook PCA decomposition, get a series of eigenvalue $\lambda$ and eigenvector $v$ that fulfill: $Av= \lambda v$ we can sort these eigenvalues (together with the corresponding eigen ... ### 1 principal component analysis with missing data 2 answers, 508 views clustering pca multivariate-analysis for a prospective study of parameters affecting student's success in graduate school I am looking at a population of about 1500 med students. I have performed a cluster analysis (using Gower's ... ### FAVAR using PCA 0 answers, 15 views pca factor-analysis var I am doing a FAVAR analysis with 2 steps PCA method. I am confused a bit about the second step. When I get the PCs, how should then I estimate VAR? Just including PCs as other variables and simply ... ### residualized covariance matrix from pca/eigenvalue decomposition I understand that given N dimensional data you can use PCA to construct an N dimensional orthonormal basis that explains 100% of the variance of the original data. However, you can also construct ... ### Compositional data tranformation and clustering 0 answers, 23 views clustering pca compositional-data I am working with datasets that consists of mixed type purchase data for a whole year of 2017. My aim is to use PCA/FA for dimension reduction since I have many variables in this dataset and then do ... ### 1 Is PCA a continuous function of the data? Suppose that my data are such that a PCA gives a unique solution for the first principal component up to scaling (e.g. my data do not all lie on a circle, or some such weirdness). Is it the case that ... ### 3 Best way to analyse percentage data I have percentage data and would like to see if these different variables have an affect on certain factors; i.e., I have different habitats of an area e.g., improved grassland: 40%, arable: 15%, ... ### Kendall regression on a criterion based on principal components 0 answers, 11 views regression pca I am reading a paper and the data passed to a data.frame in R. On R: X[60x14] = matrix of predictors (without the dependent) R_xx: Correlation Matrix. evalues and vectors of R_xx Then the author say:... ### -2 Example for Principal Component Analysis 1 answers, 29 views pca dimensionality-reduction Where principal component analysis can potentially be used ? some examples with some explanation would be great
# TIaO - Part 4 Jeremy can control the flow of water in the inlets of a specific tank. The tank has $$2$$ inlets. The first inlet can fill the tank in $$4$$ hours. The second inlet can fill the tank in $$12$$ hours. He controlled the flow by the following steps: • First, he made the first inlet flow. • When the tank was filled by $$\frac{1}{4}$$ by the first inlet, he turned off the first inlet and turned on the second inlet. • When the tank was filled by $$\frac{1}{4}$$ by the second inlet, he turned off the second inlet and simultaneously turned both inlets on. • Finally, when the tank was full, he closed both inlets. From the time he made the first inlet flow, how many hours did it take him to fill the tank? ×
Differences between tabular and tabularx What is the difference between the tabular and the tabularx environements? Which one should I use for placing a row of pictures? -
# News Mrs. Clinton 2004 #### Jonathan ##### Mmmglavin! (Frink Noise) I don't know that this is reliable, my mother told me and she may have misunderstood, but she said that she heard on the Mike Gallager radio show on 10/9/03 that some government website had Mrs. Clinton's name in the list of people registered to run for the 2004 presidential election. Now this may not be true, or the website may have made a mistake (since I can't find it, though I don't know what website it was either), but assuming this is, you'd think you'd have heard about it. If it's true, it means that all those very vocal conservatives and independents are right, again, and that Mrs. Clinton has lied to the public, again. In which case, I'm pissed, because she only said how many innumberable times that she wouldn't run?! I hope to God that some government employee screwed up and that this isn't true, my only consolation is that I couldn't find a site that said anything like this, and I used google. Last edited: G #### GENIERE ##### Guest No! No! Liberals never lie. Mrs. Clinton never said she would not run for election, she said she would not run for election. It depends on what the definition of "run" is or "would" is, or "not "is or "election" is. Z #### Zero ##### Guest Originally posted by GENIERE No! No! Liberals never lie. Mrs. Clinton never said she would not run for election, she said she would not run for election. It depends on what the definition of "run" is or "would" is, or "not "is or "election" is. Uh huh...Repugnicans never lie either...this is an example of the hypocracy of the right-wing. If it were a Repugnican, it would be called 'shrewd strategy', and you know it. #### russ_watters Mentor I certainly could be wrong (its happened twice already! ), but I don't think you "register" so far in advance. "Register" may even be the wrong word. In any case, I doubt she will run in 2004 - its never a good idea to run in a race you are sure to lose, but I'd be shocked if she didn't run in 2008. #### Njorl ##### Guy in a red jumpsuit My mother told me that she heard on the radio that somebody somewhere saw on a website that I can't find that George Bush ate a live baby! Rather than disbelieve this dubious source, I've decided to become irate. Njorl #### megashawn Seems like regardless of what side of the fence a politician is on, the main requirement for getting there is being a good liar. There is a lady from NC that plans to run for president. I don't know what party she is with but I definetly like what she's talking about. More concerned with the problems at home then overseas, and would like to mend the wounds overseas caused by lil bush. Of course as you may or may not know, you can vote for me. Check my email address in my profile, its official You know, it would be pretty cool if everyone who wasn't going to vote or didn't care who they voted for wrote in someone not even running for pres. #### Jonathan ##### Mmmglavin! (Frink Noise) GENIERE:LOL! ZERO:Obviously, everybody lies. This may be unrelated (I don't know his party) but I was watching Keith Olberman and he was making fun of the fact that Issa (he dropped out of, but started, Calif. elec.) had said that he was part of the security team that went with Reagan to the ('79?) World Series. MR. Olb. was pointing out that not only had Issa not done that, but that Reagan didn't even go to that World Series. LOL! That Olbermann! russ_watters:you+wrong=ERRORERRORERRORERRORERRORERRORERROR...! R #### RageSk8 ##### Guest In any case, I doubt she will run in 2004 - its never a good idea to run in a race you are sure to lose, but I'd be shocked if she didn't run in 2008. 2012 or later, as a Democrat will (hopefully) beat Bush. I just hope Dean doesn't get the nomination, like the guy, but he'd lose. If Clark gets the nomination Bush will have a lot of problems... #### Mr. Robin Parsons Originally posted by RageSk8 2012 or later, as a Democrat will (hopefully) beat Bush. I just hope Dean doesn't get the nomination, like the guy, but he'd lose. If Clark gets the nomination Bush will have a lot of problems... Uhmm as I understand American politics, they only get eight years as pres, so 2012 is out of Bush's possiblity/potential. (unless they change that law, too!) Z #### Zero ##### Guest Originally posted by Jonathan GENIERE:LOL! ZERO:Obviously, everybody lies. Yeah, but people only seem to care when it is a Democrat telling the lie...even when the lie is technically true! #### Jonathan ##### Mmmglavin! (Frink Noise) 1)When I find out any news anchor/politician I trust lies I become outraged. Though I have to admitt that I forget quicker if they're rep or indep. BTW, I gave that example of Issa thinking he is rep or indep, he is isn't he? BTW#2, regardless, I'll never vote for Issa now, I'm not forgeting, Olbermann drilled it into my head (I just love him!) 2)What do you mean "even when that lie is technically true"? I've never heard such an oxymoron stated so plainly, as if fact. I can't think of any example of a lie being true because if it is a lie then by definition it can't possibly have any semblance with truth, all one must do is see if it is consistent with reality, the one of the only two objective truth-keepers. And I'm not just pointing this out, I really want you to explain that one. G #### GENIERE ##### Guest You expect a cogent reply from Zero??? #### russ_watters Mentor Originally posted by Mr. Robin Parsons Uhmm as I understand American politics, they only get eight years as pres, so 2012 is out of Bush's possiblity/potential. (unless they change that law, too!) You misunderstood his point. If a democrat gets elected in 2004, then the next election She would be up for would be 2012. That's 8 years for some unnamed deomocrat (apropos, since almost no one can name a current democratic candidate). But since Bush is going to be re-elected, the next time a democrat could be elected would be 2008. If she doesn't get it then, her next chance would be 2016. Get it? #### russ_watters Mentor Originally posted by Zero Yeah, but people only seem to care when it is a Democrat telling the lie...even when the lie is technically true! Yeah, true lies.... uh huh. Tell me though, Zero, did you care as much about Billy Bob's lies as you care about GW's 'lies'? I wish I had known you 3 years ago to hear the silence. #### Jonathan ##### Mmmglavin! (Frink Noise) Of course I don't expect a cogent reply from Zero, I expect a reply thats like his previous post, one that's completely without basis in reality. I don't care what he says (unless he misspoke, *cough!, Freudian slip, cough!*) because I know immediately from the fact that he's a liberal that the first and most important assumption underlying his axiomic system is the belief that there is no objective truth. This is the most fundamental, but not the most common kind of, root of evil. The assumption that the random musings of a mad man is just as relevant and real as anyone elses. The belief that no way of life or ideology is better than any other, so to heck with all of them. LOL! I'm so funny! Now it should be noted that these are all just generalities as not all liberals, including Zero, who I really don't know, are that way, it's really only the liberal 'elite'. This is evidenced by the fact that I like Olberman, and he's a liberal, and I wouldn't like him if he's the way I just described. #### Mr. Robin Parsons Originally posted by russ_watters You misunderstood his point. If a democrat gets elected in 2004, then the next election She would be up for would be 2012. That's 8 years for some unnamed deomocrat (apropos, since almost no one can name a current democratic candidate). But since Bush is going to be re-elected, the next time a democrat could be elected would be 2008. If she doesn't get it then, her next chance would be 2016. Get it? ] Hey! I'm a Canadian, what the heck am I supposed to know about American Democracy, HUH??? (Ya, I get it) #### Mr. Robin Parsons Originally posted by Zero Yeah, but people only seem to care when it is a Democrat telling the lie...even when the lie is technically true! Ahem, there is a thing called a "partiallity of truth", so send me ten dollar$(NOT really!) and I teach you how to "Sun Tan under the STARS!".....and since I am using a "Partiallity of Truth", not telling you enough information, sorta, when I receive your ten Buck$, postal, (No, Not really!) I send you the rest of the 'Truth' you are missing, I tell you to go outside, and stand under the shining Star(s) that is our Sun, and all of the rest of them that actually add their light to the illuminance (tan 'Q) of the face of earth, even though you cannot see them because of the obscuring blue. Technically I did tell you the truth. #### megashawn Ya, but when you get right down to it, thats a lie. More of a con, but still a lie. Either one is wrong, and when a person in a business or government position takes such action some kind of punishment or atleast removal from that position seems to be in order. #### Mr. Robin Parsons Originally posted by megashawn Ya, but when you get right down to it, thats a lie. More of a con, but still a lie. Either one is wrong, and when a person in a business or government position takes such action some kind of punishment or atleast removal from that position seems to be in order. And that person pleads to you, in court, that they "only told the truth".....followed by, well you have to Prove that it is otherwise, Burdon of Proof. You can only punish, once you have proven, to do otherwise, is un-wise/just. Z #### Zero ##### Guest You'll notice the media lie that 'no one can name a Democrat'...why doesn't the media want us to know their names? Because it might mean actually doing a little backround work, and discussing the issues, which the media simply won't do. No, they want to talk about Hillary Clinton, not because she has anything much to do with things, but because she is a name-brand player, who can get ratings. Last edited by a moderator: ### The Physics Forums Way We Value Quality • Topics based on mainstream science • Proper English grammar and spelling We Value Civility • Positive and compassionate attitudes • Patience while debating We Value Productivity • Disciplined to remain on-topic • Recognition of own weaknesses • Solo and co-op problem solving
## Test #3 Q5 $aR \to bP, Rate = -\frac{1}{a} \frac{d[R]}{dt} = \frac{1}{b}\frac{d[P]}{dt}$ nelms6678 Posts: 53 Joined: Fri Sep 29, 2017 7:07 am ### Test #3 Q5 CHCl3(g) + Cl2(g) ---> CCl4(g) + HClg(g) The rate of the rxn was first order with respect to chlorine and trichloromethane. What is the rate law? nelms6678 Posts: 53 Joined: Fri Sep 29, 2017 7:07 am ### Re: Test #3 Q5 Given the instantaneous rate of rxn is 2.54x10^-2 mol/(Lxs), and intial mass of each reactant is 1.2g confined to a 750mL vessel, what is the rate constant of this reaction? -What equation are we even using here? Cristina Sarmiento 1E Posts: 52 Joined: Wed Nov 16, 2016 3:02 am ### Re: Test #3 Q5 Since both reactants are first order, the rate law would be rate law = k[CHCL3][CL2]. Because you are given the instantaneous rate of reaction, 2.54 x 10^/2 mol/L and the concentrations can be solved for, you just plug those numbers into the rate law equation to find k. To find the concentration of the reactants, divide 1.2 g by its molar mass and then divide it by 750 ml (convert it to .75L) to get the molarity of the reactants. For CHCl3, you get 1.3 x 10^/2 mol/L and for CL2, you get 2.3 x 10^-2 mol/L. Plugging this into the equation and solving for k, you get k = 85 s^-1 Kyle Alves 3K Posts: 46 Joined: Thu Jul 27, 2017 3:01 am ### Re: Test #3 Q5 Did anyone get the correct answer for the other form? It did it the same way as this problem in corrections, but wanted to make sure they're both the same. Thanks!
## (1) Overview ### Introduction One-dimensional (1D) modelling of the cardiovascular system is useful in predicting and understanding the dynamics of blood pressure propagation [1, 2, 3, 4, 5, 6]. Here, arteries are regarded as 1D axisymmetric tubes that are described by flux q inside the lumen and cross-sectional area A of the vessel lumen along the vessel length. One popular finite-differences method to numerically solve the equations governing blood flow through arteries is Richtmyer’s two-step Lax-Wendroff method [7, 8], which has been used by a number of groups [1, 4, 6, 9, 10, 11]. Alternative methods of solving the blood flow equations include for example variations of the Galerkin finite-element method, which instead solve the blood flow equations for flow velocity u and cross-sectional area A [2, 12]. The computational implementation of the Lax-Wendroff method is straightforward and previously mentioned references have produced results that are validated against experimental results, justifying the popularity of the method. However, no openly available implementation of the Lax-Wendroff method could be found, which results in the same work being carried out numerous times. Whilst one open-source Python package implementing a haemodynamic model exists, pyNS focusses on the implementation of a 0D pulse wave propagation model, representing arteries as electrical circuits [13], and therefore its scope and application are different from VaMpy. Solutions computed using VaMpy are exported to the commonly used CSV file format, thereby allowing for the integration of data with most other software. For example, solutions calculated using VaMpy could be used as a boundary condition for higher order models of larger arteries further upstream. Arteries are considered to be elastic axisymmetrical tubes of initial radius r0(z) in a cylindrical coordinate system. The radius at rest is allowed to taper exponentially for an arterial segment if different values are given for the upstream radius Ru and downstream radius Rd. An example geometry for the bifurcation of the common carotid artery, which is used to validate the solution calculated by VaMpy is shown in Figure 1. Then the vessel radius for an arterial segment of length L is (1) ${r}_{0}\left(z\right)={R}_{u}·\mathrm{exp}\left(\mathrm{log}\left(\frac{{R}_{d}}{{R}_{u}}\right)\frac{z}{L}\right).$ Figure 1 Example geometry of a bifurcation implemented in VaMpy. The example represents the common carotid artery (parent vessel) and its two daughter vessels, which are used for validation purposes of the software. Artery segments have an upstream and downstream radius, where the downstream radius has to be equal to or smaller than the upstream radius. The radius of the vessel then is . Blood flow through arteries is governed by the Navier-Stokes equations for conservation of mass (continuity equation) and momentum in a 1D cylindrical coordinate system (2) $\frac{\partial {u}_{z}\left(r,z,t\right)}{\partial z}+\frac{1}{r}\frac{\partial \left(r{u}_{r}\left(r,z,t\right)\right)}{\partial t}=0$ (3) where u = (uz (r, z, t), ur (r, z, t)) denotes blood flow veloctiy, p(z, t) denotes blood pressure, which is assumed to be uniform across r and the parameters ρ and ν denote blood density and viscosity, respectively. By integration of the governing equations over cross-sectional area A(z, t) = πR(z, t)2 the 1D conservation law (4) $\frac{\partial \mathbit{\text{U}}}{\partial t}+\frac{\partial \mathbit{\text{F}}}{\partial z}=\mathbit{\text{S}}$ with can be derived. Details on the derivation of (4) can be found elsewhere [1, 6]. Here, the unknowns are the vessel cross-sectional area A(z, t) and flux q(z, t). Elasticity of the vessel is described by the quantity f(r0) with relaxed vessel radius r0(z), A0(z) is the relaxed cross-sectional vessel area, R(z, t) the vessel radius, δb is the boundary layer thickness and Re is the Reynold’s number. Although these equations have been commonly used by various groups [1, 4, 5, 14], no publicly accessible implementation of the solution to (4) could be found, meaning that each publication from a separate group resulted in the reimplementation of the same or very similar methods and equations. Therefore, the Vascular Modelling in Python toolkit (VaMpy) was developed and published on GitHub1 with the documentation available on GitHub Pages.2 Support for the use of VaMpy is mainly available via the Issue Tracker feature on GitHub, but also via contacting the authors. ### Implementation and architecture The VaMpy implementation and architecture are described in this section. VaMpy is object-oriented to allow for an intuitive understanding of its design and to facilitate the addition of new features. The base of the package is the class ArteryNetwork, which defines the arterial tree. The class contains methods that are applied on the entire network of arteries as well as boundary conditions. Each artery within the tree is defined as an object of the class Artery, which contains its own solver instance. The solver itself is implemented in the independent class LaxWendroff that implements the Lax-Wendroff method as described below. This approach allows for the integration of other solvers within the software. The code was developed in Python 2.7 and implements Richtmyer’s two-step version of the Lax-Wendroff method [7, 8], which is second-order accurate in time and space. For a point in time, n, the solution at the next time step n + 1 at grid location m is given by (5) where is the solution at position mΔz and time nΔt. The half time step values for F and S are determined by (6) for j = m ± 1/2. An illustration of the computational procedure to determine ${\mathbit{\text{U}}}_{M}^{n+1}$ is shown in Figure 2. It illustrates that both initial conditions at n = 0 for all m and left and right boundary conditions are required to determine U. Figure 2 Illustration of the LW method. The solution is fully known at time step n (black circles) and we are looking for the solution at grid point m at time step n + 1 (white circle). To determine the unknown solution, two intermediate solutions at half grid points m ± 1/2 and at half time step n + 1/2 are determined from grid points m – 1, m and m + 1 at current time step n. The intermediate solutions are then used in conjunction with the known solution at grid point m and current time step n to calculate the unknown solution at grid point m and next time step n + 1 [4]. Boundary conditions are applied at both ends of the vessel and are either an inlet, outlet or bifurcation condition. The inlet boundary condition is used at the inlet of the parent vessel only [1]. It requires flux values q(0, t) to be prescribed. The inlet area is then calculated according to (5) (7) ${A}_{0}^{n+1}={A}_{0}^{n}-\frac{\Delta t}{\Delta z}\left({q}_{1/2}^{n+1/2}-{q}_{-1/2}^{n+1/2}\right).$ This requires the evaluation of the term ${q}_{{}_{-1/2}}^{{}^{n+1/2}}$, which is estimated from (8) ${q}_{0}^{n+1/2}=\frac{1}{2}\left({q}_{-1/2}^{n+1/2}+{q}_{1/2}^{n+1/2}\right),$ where ${q}_{0}^{n+1/2}$ is evaluated from the function prescribing flux values at the inlet and ${q}_{1/2}^{n+1/2}$ is evaluated from (6), see also [1, 4]. Algorithm 1: Iterative scheme to determine ${\mathbit{\text{U}}}_{M}^{n+1}$ using a 3WK boundary condition [4]. An initial guess is made for ${p}_{M}^{n+1}$, from which ${q}_{M}^{n+1}$ is calculated using (9) and ${A}_{M}^{n+1}$ is calculated using the discretized mass conservation equation (12). Using ${A}_{M}^{n+1}$ the next iteration of ${p}_{M}^{n+1}$ is found via the state equation (11). The algorithm stops after kmax iterations or when the difference between pressure estimates is less than the small threshold value ϵ. M22 \documentclass[10pt]{article} \usepackage{wasysym} \usepackage[substack]{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage[mathscr]{eucal} \usepackage{mathrsfs} \usepackage{pmc} \usepackage[Euler]{upgreek} \pagestyle{empty} \oddsidemargin -1.0in \begin{document} $\begin{array}{l} p_M^{n + 1} = p_m^n\ \# initial{\rm{ }}\ guess{\rm{ }}\ for\ {\rm{ }}p_M^{n + 1}\\ k = 0\\ {\bf{for}}\ {\rm{ }}k \le {k_{{\rm{max}}}}:\\ \,\,\,\,\,\,{p_{{\rm{old}}}} = p_M^n\\ \,\,\,\,\,\,q_M^{n + 1} = q_M^n + \frac{{p_M^{n + 1} - p_M^n}}{{{R_1}}} + \frac{{\Delta tp_M^n}}{{{R_1}{R_2}{C_T}}} - \frac{{\Delta tq_M^n({R_1} + {R_2})}}{{{R_1}{R_2}{C_T}}}\\ \,\,\,\,\,\,A_M^{n + 1} = A_0^n - \frac{{\Delta t}}{{\Delta z}}(q_M^{n + 1} - q_{M - 1}^{n + 1})\\ \,\,\,\,\,\,p_M^{n + 1} = f_M^{n + 1}\left( {1 - \sqrt {\frac{{{{({A_0})}_M}}}{{A_M^{n + 1}}}} } \right)\\ \,\,\,\,\,\,{\bf{if}}\,|{p_{{\rm{old}}}} - p_M^{n + 1}|{\rm{ }} \le \epsilon:\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\bf{break}\\ \,\,\,\,\,\,k = k + 1 \end{array}$ \end{document} The outlet boundary condition is a three-element Windkessel (3WK) and its implementation follows [4]. The 3WK equation is (9) where R1, R2 and CT are resistance and compliance parameters. Discretization of (9) leads to (10) where M is the spatial position of the outlet. The 3WK boundary condition requires the evaluation of pressure in the vessel, which is related to area via the discretized State Equation [1] (11) ${p}_{M}^{n+1}={f}_{M}^{n+1}\left(1-\sqrt{\frac{{\left({A}_{0}\right)}_{M}}{{A}_{M}^{n+1}}}\right).$ The outlet boundary condition is solved using an iterative scheme with an initial guess for ${p}_{M}^{n+1}$ (see Algorithm 1). This requires a discretized version of the mass conservation equation to obtain an estimate for ${A}_{M}^{n+1}$ (12) ${A}_{M}^{n+1}={A}_{0}^{n}-\frac{\Delta t}{\Delta z}\left({q}_{M}^{n+1}-{q}_{M-1}^{n+1}\right).$ Finally, a bifurcation boundary condition applies between any vessel that is not a terminal vessel and its two daughter vessels [1, 4]. Relations between parent and daughter vessels lead to a system of eighteen equations for eighteen unknowns, which are solved using Newton’s method according to (13) where k indicates the current iteration, = (x1, x2, …, x18), J (xk)) is the Jacobian of the system of equations and fJ (xk) is the vector of residuals. The full system of equations required to solve the boundary conditions at bifurcations can be found elsewhere [6, 15]. Algorithm 2: Setup routine for a network of arteries. The artery network is created as a binary tree and contains 2depth – 1 arteries. At each depth level the up- and downstream radii of daughter vessels are calculated using the scaling parameters a and b. Artery objects are then created for each new daughter vessel and stored in a list. A simulation model is set up by creating an ArteryNetwork object and solved by executing the following functions from vampy.artery_network import ArteryNetwork an = ArteryNetwork(Ru, Rd, a, b, lam, k, rho, nu, p0, depth, ntr, Re) an.mesh(dx) an.set_time(dt, T[, tc]) an.initial_conditions(q0) an.solve(q_in, out_args) A network of arteries is created using the upstream and downstream radii Ru and Rd of the parent vessel, the radius-to-length ratio lam and scaling parameters a and b using Algorithm 2. Two daughter vessels are created for a parent vessel by multiplying their upstream and downstream radii with scaling parameters a and b respectively. This process is repeated until the desired tree depth is reached and the number of arteries in the network is 2depth – 1. A second setup routine exists to create an artery network, which is an = ArteryNetwork(Ru, Rd, lam, k, rho, nu, p0, depth, ntr, Re) Here, Ru, Rd and lam are iterables (for example lists or Numpy arrays) of length depth containing these values for each artery. The remaining parameters required by the ArteryNetwork constructor are the elasticity parameter k, blood density rho, blood viscosity nu, diastolic pressure p0, number of output parameters ntr and Reynold’s number Re. The latter method is used by the examples shown in this paper. The spatial discretisation is created by supplying the spatial step size dx, which is used internally to create Numpy arrays for all variables along the vessel. Timing parameters are the time step size dt, time of one period T and, optionally, the number of periods tc, which defaults to one if left unspecified. Initial conditions are supplied for q(z, t) as a single value q0, while the initial condition for A(z, t) is calculated from the radii at rest. The solve function is supplied with boundary condition parameters q_in and out_args, where q_in contains the values q(0, t) and out_args contains the parameters for a 3WK model. The solver loops over the simulation time steps and creates a LaxWendroff object for each Artery object lw = LaxWendroff(theta, gamma, artery.nx) with theta = dt/artery.dx, gamma = dt/2 and number of spatial steps artery.nx. The next time step is computed according to (5) and (6) at the inner grid points. Note that the time step size needs to fulfill the Courant-Friedrichs-Lewy (CFL) condition, which in this case is (14) $\Delta t\le \Delta x\cdot {|\frac{q}{A}±c|}^{-1},$ where is the wave speed. The CFL condition is automatically checked by the ArteryNetwork solver and the simulation stops with an error message if the condition is not met. The following section demonstrates the use of VaMpy for the simulation of the common carotid artery as done by [4]. This publication was chosen as it provides detailed information on the geometry used and Windkessel parameter for the outlet boundary condition. A detailed walkthrough of how to write configuration and simulation files can be found on the documentation website.3 ### Quality control The VaMpy Git repository contains unit tests to ensure functions perform as expected. Additionally, the file bifurcation_example.py demonstrates VaMpy’s performance by validating its results against results in [4] on the common carotid artery bifurcation. Whilst unit tests demonstrate that the functionality of the software meets expecations, validation against experimental results and other researchers’ results ensures that the software additionally generates sensible output data and that parameters have been chosen sensibly. Users implementing arteries using other parameters than the ones tested in the example files in VaMpy should therefore always cross-check their results against experimental or other simulation results using the same parameters to ensure that the choice of parameters is realistic. The solution computed using VaMpy is shown in Figure 3 and matches the corresponding figures in [4]. Thus this example demonstrates that VaMpy performs as expected. Figure 3 One pulse in the common carotid artery using VaMpy: a) flow rate, b) pressure. Comparison with the results for the same simulation in [4] validates the implementation of the blood flow equations in VaMpy. To execute the example run python bifurcation_example.py bifurcation.cfg To plot the data created from the example run python plot_example.py bifurcation.cfg The first version of VaMpy focusses on the simulation of a single bifurcation, i. e. one parent vessel with two daughter vessels. The development of the first version of VaMpy was based on the simulation of flow through the middle cerebral artery in order to evaluate lymphatic drainage through the wall of the artery [6], and for this purpose a single bifurcation was regarded sufficient. Validation on larger networks of arteries with multiple levels of bifurcations has therefore not been carried out yet, but is planned for the next release cycle. Additionally, it is planned to offer a choice of alternative outlet boundary conditions, such as the structured tree [1, 5]. It has been demonstrated that by taking into account bifurcation pressure drops, the accuracy of reduced order models such as the system of equations (4) can improve significantly compared to higher order models [16]. This means that a similar accuracy of blood flow solutions could be achieved for 1D models compared to 2D or 3D models by increasing the depth of the arterial tree to be modelled. ## (2) Availability ### Operating system VaMpy is compatible with any operating system that is compatible with Python 2.7 and the dependent packages. ### Programming language VaMpy was written in and for Python 2.7 and above. There are no additional system requirements. However, the requirements for memory and processing power are dependent on the number of the grid points. ### Dependencies NumPy, SciPy, Matplotlib, ConfigParser. ### Software location #### Archive Name: GitHub Persistent identifier: https://github.com/akdiem/vampy/releases/tag/v1.0 Licence: Three-Clause BSD Publisher: Alexandra K. Diem Version published: v1.0 Date published: 22/03/2017 #### Code repository Name: GitHub Persistent identifier: https://github.com/akdiem/vampy Licence: Three-Clause BSD Date published: 26/04/2016 Python 2.7 ## (3) Reuse potential Modelling blood flow dynamics is a useful tool in vascular diseases research and 1D models provide good approximations. The method implemented in VaMpy is used by a variety of research groups [1, 4, 11] and therefore it is expected that the reuse potential for VaMpy is high, especially in multiscale simulations. Because the commonly accepted CSV file format is used for input and output data for VaMpy integration of results from VaMpy simulations with other third-party software packages is expected to be straightforward. For example, VaMpy could be used as a boundary condition for 3D simulations or constitute a part of multi-scale simulations. The publication of this software additionally provides opportunities for other researchers to add functionality and because VaMpy has been validated on results published in the literature it simplifies and promotes reproducibility of results. The following features are planned for the next releases: • asymmetric daughter vessel geometries with separate Windkessel parameters, • validation of the method on bifurcation networks larger than two levels and • integration of models of the dynamics of the artery wall. The current release of VaMpy was developed to implement a bifurcation at the middle cerebral artery as part of a multi-scale model of lymphatic flow through the basement membrane embedded in the artery wall, which is relevant for resolving the mechanisms behind the onset and progression of Alzheimer’s disease [6, 17].
# Graphing a water balance The water balance for a urban catchment equates the change in storage during a certain period, with the difference between water inputs (precipitation and mains water) and water outputs (evaporation, stormwater runoff and wastewater discharge). $\Delta s = (P+I) - (E_a + R_s + R_w)$ where: $\Delta s$ change in catchment storage $P$ precipitation $I$  imported water $E_a$ actual evaportranspiration $R_s$ stormwater runoff $R_w$ wastewater discharge Mitchell et al., (2003) provides data on the water balance for Curtin, ACT for 1979 to 1996.  The water balance for the average, wettest and driest years are shown in the table below. When presenting financial statements, a common approach is to use a waterfall chart which shows how the components of a financial balance contribute to an overall result.  Here I’ve used a waterfall chart to show the water balance for Curtin for the driest and wettest year as reported by Mitchell et al., (2003). Figure 1: Water balance for Curtin, ACT in (A) and driest and (B) the wettest years as estimated by Mitchell et al., (2003). Does this approach to visualising a water balance help understanding?  A few things stand out: • In the driest year, more water was input from the mains than from rainfall • In the driest year, actual evapotranspiration was larger than rainfall and mains inputs. • Evapotranspiration and stormwater change with climate, with large variation between the wet and dry years.  Wastewater doesn’t change all that much. • Precipitation is highly variable, ranging from 247 mm to 914 mm. There is a guide to making a waterfall chart in excel here.  The R code to produce the graphs shown in this blog is available as a gist, which draws on this blog. ### References Mitchell, V. G., T. A. McMahon and R. G. Mein (2003) Components of the Total Water Balance of an Urban Catchment. Environmental Management 32(6): 735-746. (link) # Munging rating tables The Victorian water monitoring site includes rating tables for stream gauges but they are in a format that is not easy to work with.   An example is shown in Figure 1 below. Figure 1: Extract of rating table The following steps can be used to extract and convert the data into a useable format. 1. Download and save rating table.  Click the button shown to get the rating table as a text file. Figure 2: Save the rating table 2. Re-format the data to create columns of levels and flows.  You’ll need to use your favourite tool for this munging step.  An example using R is available as a gist. 3. Plot and compare with the online version Figure 2: Rating plot (source: data.water.vic.gov.au/monitoring.htm?ppbm=404216&rs&1&rscf_org) Figure 3: Rating plot using data from rating table 4. Save as a csv file for further use. R code is available here. Related posts: # Flood frequency plots using ggplot This post provides a recipe for making plots like the one below using ggplot2 in R.  Although it looks simple, there are a few tricky aspects: • Superscripts in y-axis labels • Probability scale on x-axis • Labelling points on the x-axis that are different to the plotted values i.e. we are plotting the normal quantile values but labelling them as percentages • Adding a title to the legend • Adding labels to the legend • Positioning the legend on the plot • Choosing colours for the lines • Using commas as a thousand separator. Code is available as a gist, which also shows how to: • Enter data using the tribble function, which is convenient for small data sets • Change the format of data to one observation per row using the tidyr::gather function. • Use a log scale on the y-axis • Plot a secondary axis showing the AEP as 1 in X years • Use the Probit transformation for the AEP values # Modelling impervious surfaces in RORB – II This blog builds on the previous post; looking at the runoff coefficient approach to modelling losses and the implications for representing impervious surfaces in the RORB model. In addition to the IL/CL model discussed in the previous post, RORB can be run using an initial loss / runoff coefficient model, where the runoff coefficient specifies the proportion of rainfall lost in each time step after the initial loss is satisfied.  This reason these different loss models are of interest is that the new version of Australian Rainfall and Runoff is recommending that the IL/CL model is used in place of the runoff coefficient model (Book 5, Section 3.3.1).  In some areas, modelling approaches will need to change and this will have implications for flood estimates. The runoff coefficient loss model is selected as shown in Figure 1. Figure 1: A runoff coefficient loss model can be selected in RORB The user inputs the runoff coefficient, C, for a pervious surface.  For an impervious surface, there is no opportunity to specify the runoff coefficient which is hard-wired in RORB as 0.9.  For mixed sub-areas, the runoff coefficient is scaled, the equations from the RORB manual are: $C_i = F_iC_{imp} +(1-F_i)C_{perv}, \qquad C_{perv} \le C_{imp} \qquad \mathrm{Equation \;3.5}$ $C_i = C_{imp}, \qquad C_{perv} > C_{imp}\qquad\qquad \mathrm{Equation \; 3.6}$ Where Ci is the runoff coefficient for the ith sub-area. Example: For a fraction impervious, $F_i = 0.6$ and $C_{perv} = 0.5$ $C_i = 0.6 \times 0.9 +(1-0.6) \times 0.5 = 0.74$ The initial loss is calculated as as a weighted average of the pervious and impervious initial losses as shown in the previous post.  The impervious initial loss is always set to zero in RORB. Let’s do the calculations for a 100% impervious surface.  RORB will set $I\!L = 0$ and $C = 0.9$.  Using the 6 hour, 1% rainfall as before, the rainfall excess hyetograph is shown in Figure 2. Figure 2: Rainfall excess hyetograph for an impervious surface using the runoff coefficient model.  RORB sets IL to zero and the the runoff coefficient to 0.9 so 10% of rain is lost at each time step Example calculation: As explained in the previous post, the rainfall between 1.5 hour and 2 hour is 19.4 mm.  With a runoff coefficient of 0.9, the rainfall excess will be: 0.9 x 19.4 = 17.5 mm. The rainfall excess hydrograph from a 10 km2 impervious sub-area can be calculated from the rainfall excess hyetograph using the method described in the previous post. The peak flow corresponding to the 17.5 mm rainfall peak is 97.2 m3s-1 (see the previous post for sample calculations). The key point is that we have changed the peak flow from an impervious surface, just by changing the loss model.  With the IL/CL model, both initial and continuing loss for a 100% impervious surface are hard-wired to zero. The peak runoff was 107.8 m3s-1. For the runoff coefficient model, initial loss is hard-wired to zero, but the runoff coefficient is hard-wired to 0.9, i.e. we have some loss from the impervious surface. This changes the hydrograph as shown in Figure 3. Figure 3: Comparison of rainfall excess hydrographs from a 100% impervious surface; same rainfall, different loss model The value of the runoff coefficient for an impervious surface is noted in the RORB manual: The impervious area runoff coefficient Cimp is set by the program to 0.9, reflecting the fact that losses occur even on nominally impervious surfaces in urban areas. This is reasonable, but inconsistent with the treatment of the continuing loss when the IL/CL loss model is used.  In this case, CL is hard-wired to zero so there are no losses from impervious surfaces; a feature of RORB for modellers to be aware of. Also note Equation 3.6 above.  This suggests that if the user inputs a runoff coefficient larger than the impervious coefficient (i.e. larger than 0.9) then a value of 0.9 will be used.  This isn’t actually implemented.  If a runoff coefficient of 1 is input, there is a direct conversion of rainfall to runoff i.e. there is no loss.  It is even possible to input runoff coefficients greater than 1. Equation 3.6 may just be the result of a typo.  Some experimenting suggests the behaviour in the model is represented the combination of equation 3.5, above and the following in place of equation 3.6: $C_i = C_{perv}, \qquad C_{perv} > C_{imp}$ That is, the runoff coefficient for an impervious surface is 0.9 unless the runoff coefficient input by the user is larger than 0.9. Calculations are available via a gist. # Modelling impervious surfaces in RORB The previous post looked at rainfall excess hydrographs; here I explore how these hydrographs change when modelling impervious surfaces in RORB.  This post focusses on the initial loss/continuing loss modelling approach. Usually, losses are reduced for impervious compared to pervious surfaces and RORB sets both initial and continuing loss to zero if a surface is 100% impervious. As an example, consider the 6 hour 1% rainfall for Melbourne, which is 83.4 mm.  If we use the ARR1987 temporal pattern (see the previous post), the hyetograph is as shown in Figure 1. Figure 1: The 6 hour 1% rainfall (83.4 mm) multiplied by the ARR1987 temporal pattern.  For an impervious surface, RORB sets both initial and continuing loss to zero Example calculation: In the ARR1987 temporal pattern, the time period between 1.5 and 2 hours has 23.3 percent of the rain.  The total rainfall is 83.4 mm so the rain in this period is 83.4 x 23.3% = 19.43 mm which is consistent with Figure 1. The corresponding rainfall excess hydrograph, for an area of 10 km2, which is 100% impervious, is shown in Figure 2 (Note that Areal Reduction Factors have not been used). Figure 2: rainfall excess hydrograph for an area of 10 km2 Example calculation: The instantaneous flow at a 2 hours will be $\frac{1}{3.6} \times \frac{1}{0.5} \times 19.4 \times 10 = 107.8 \mathrm{m^3 s^{-1}}$ To explain factors at the start of the equation, 1/3.6 is for unit conversion, 1/0.5 is because the temporal pattern has a 0.5 hour time step.   The RORB output matches the calculations (Figure 3). By default, RORB will show the rainfall excess hyetograph above the calculated hydrograph but this is  based on the initial and continuing loss as provided by the user.  In this case, I’ve specified IL = 10 mm and CL = 2 mm/h for the pervious areas.   These losses, and the hyetograph, are misleading where a sub-catchment has some impervious component.  In this case, for a 100% impervious sub-area, both IL and CL are set to zero by the program.  It would be best not to display the misleading hyetograph, which can be turned off as shown in Figure 4. FInure 3: RORB output Figure 4: The hyetograph can be toggled off using the button outlined in pink If a sub-area is a combination of both impervious and pervious surfaces, this must be specified to RORB as a Fraction Impervious (Fi).  The initial and continuing losses are scaled based on this fraction. $IL_i = (1 - F_i) I\!L_{perv}$ $CL_i = (1 - F_i) C\!L_{perv}$ Where $I\!L_{perv}$ and $C\!L_{perv}$ are the initial and continuing losses for pervious areas as input by the user. For example, if the pervious value of IL is set to 10 mm and CL to 2 mm/h, then for a sub-area with a Fraction Impervious value of 60%, the initial and continuing losses will be: $I\!L = (1 - 0.6) \times 10 = 4 \; \mathrm{mm}$ $C\!L = (1 - 0.6) \times 2 = 0.8 \; \mathrm{mm/h}$ The continuing loss is 0.8 mm/h which is 0.4 mm per 30 min time step. Running the model with these parameters results in a rainfall excess hydrograph as shown in Figure 5.  Note that the start of the rise of the hydrograph is delayed because of the initial loss.  The peak is reduced by a small amount (from  108 cumec to 106 cumec because of the continuing loss). Example calculation, flow peak: $\frac{1}{3.6} \times \frac{1}{0.5} \times (19.4 - 0.4) \times 10 = 105.6 \mathrm{m^3 s^{-1}}$ Figure 5: Rainfall excess hydrograph For a real catchment with a 60% fraction impervious, we would expect some early runoff from the impervious surfaces that would provide flow directly into the urban drainage system.  RORB doesn’t model this process, which may not matter, depending on the application, but as modellers we need to be aware of this limitation. Calculations are available as a gist. # Rainfall excess hydrograph Ever wondered what a ‘rainfall excess’ hydrograph is and how they are calculated?  Then read on. ‘Rainfall excess’ is the rainfall left over after the initial and continuing loss are removed.  Rainfall excess hydrographs are used in the runoff-routing program RORB.  The RORB manual (Section 3.3.4) describes them as follows: In catchment studies, the program calculates hyetographs for all sub-areas.  After deducting losses, it converts the hyetograph ordinates to ‘hydrographs’ of rainfall-excess on the sub-areas, in m3/s, and interprets the average ‘discharge’ during a time increments as an instantaneous discharge at the end of the time increment. Lets look at an example.  I’m using the methods from the 1987 version of Australian Rainfall and Runoff so I can compare results with calculations in RORB. ### 1. Choose a design rainfall depth I’m working on a catchment in Gippsland where the 1% AEP 6 hour rainfall is of interest.  Rainfall IFD data is available from the Bureau of Meteorology via this  link. For the site of interest, the 1% (100-year), 6 hour rainfall depth is 90.9 mm. ### 2. Select a temporal pattern Temporal patterns are available in Australian Rainfall and Runoff Volume 2, Table 3.2.  Gippsland is in zone 1 and ARI is > 30 years so we need the bottom row from the table below.   This shows the percentage of the rainfall depth in each 30 min time period Applying the temporal pattern to the design rainfall depth results in the following hyetograph. Figure 1: Design rainfall hyetograph ### 3. Remove the losses Calculate the rainfall excess hyetograph by removing the initial loss and continuing loss.  For this example, • IL = 10 mm and • CL = 2 mm/h. Note that the continuing loss is 2 mm/h and the time step of the hyetograph is 0.5 h so 1 mm is lost per time step. The rainfall excess hyetograph is shown in Figure 2. Figure 2: Rainfall excess hyetograph ### 4. Convert to a hydrograph The procedure to convert a rainfall excess hyetograph to a rainfall excess hydrograph is explained in the quote at the start of the blog.  We need to: • Multiply the rainfall excess by the catchment area (converts rainfall to a volume) • Divide by the time step (to calculate volume per unit time) • Ensure flow is allocated to the correct time step – the rainfall during a time step produces the instantaneous flow at the end of the time step • Ensure the units are correct – calculated flow is is m3/s, rainfall is in mm and catchment area is in km2. There is also a discussion of this in ARR2016 Book 5, Chapter 6.4.3.1. Example calculation: in this case, the sub-catchment area is 78.7 km2.  The rainfall in the 3rd time step,  between 1 hour and 1.5 hour, is 8.9 mm so the flow at the end of this time step will be: $Q = \frac{1}{3600} \times 10^{-3} \times 10^6 \times \frac{8.999}{0.5} \times 78.7 = 393.46 \; \mathrm{m^3s^{-1}}$ The rainfall excess hydrograph is shown in Figure 3. Figure 3: Rainfall excess hydrograph ### 4. Comparison with RORB Figure 4 shows the rainfall excess hydrograph as calculated by RORB.  The answers look close and I’ve confirmed this by looking at the calculated values. Figure 4: Rainfall excess hydrograph as calculated by RORB Calculations are available as a gist. # On the calculation of equal area slope As noted in the previous post, the equal area slope was adopted for use with the Bransby Williams time of concentration formula in the 1987 version of Australian Rainfall and Runoff to: “…give a better indication of flow response times, especially where there are large variations of slope within a catchment” (ARR 1987 Boo IV, Section 1.3.2(d)).  The equal area slope is also used as part of flood estimation in New Zealand (NZ, 1980; Auckland Regional Council, 1999), in the Papua New Guinea Flood Estimation Manual (SMEC, 1990) – where it is used in the estimation of overland flow times and runoff coefficients for use in the rational method – and is discussed in the Handbook of Hydrology (Pilgrim and Cordery, 1993). The equal area slope is the slope of a straight line drawn on a profile of a stream such that the line passes through the outlet and has the same area under and above the stream profile. An alternative to the equal area slope, as used by Bransby Williams, is the average slope.  The differences between the equal area and average slopes are highlighted in the Figure 1 below. Figure 1: Average slope and equal area slope (McDermott and Pilgrim, 1982, page 28) I haven’t been able to find the history of the equal area slope.  I’m guessing that the average slope was found to be too steep for use in hydrologic calculations.  The equal area slope may have been considered more representative and was easy to calculate in pre-computer times.  The procedure, perhaps to be undertaken by a draftsman, is  specified in NZ (1980) (see Figure 2). The method involves the calculation of the slope of the hypothetical line AC, which is so positioned that the enclosed areas above and below it, i.e. areas X and Y, are equal. The procedure is to planimeter the total area under the longitudinal profile.  This area Ad, equals the area of the triangle ABC. Figure 2: Definition diagram for the calculation of the equal area slope (NZ, 1980) $A_d = \frac{1}{2}AB \times BC$ $A_d = \frac{1}{2}L \times h$ $\therefore h= \frac{2A_d}{L}$ (Point C is known at the ‘equal area slope ordinate’). Hence the equal area slope $S_a$ is given by $S_{ea} = \frac{h}{L} = \frac{2A_d}{L^2}$ When the units for the elevation and length in the diagram above are used: $S_{ea} = \frac{2A_d}{1000L^2} \;\; \mathrm{m/m}$ Notice that nothing iterative is required, we are just calculating the area under the profile and then working out the triangular area to match. This simple calculation procedure seems to have been forgotten, as some modern approaches suggest an iterative procedure is necessary, for example, in this excel equal area slope tool. An R function to calculate the equal area slope is available as a gist. ### References Auckland Regional Council (1999) Guidelines for stormwater runoff modelling in the Auckland Region. (link) McDermott, G. E. and Pilgrim, D. H. (1982) Design flood estimation for small catchments in New South Wales.  Department of National Development and Energy.  Australian Water Resources Council.  Research Project No. 78/104. Australian Government Publishing Service, Canberra (link) Pilgrim, D. H. and Cordery, I. (1993) Flood runoff. In: Maidment, D. R. (ed) Handbook of Hydrology.  McGraw Hill. SMEC (1990) Papua New Guinea Flood Estimation Manual. Department of Environment and Conservation, Bureau of Water Resources. (link) NZ (1980)A method for estimating design peak discharge: Technical Memorandum No. 61. New Zealand. Ministry of Works and Development. Water and Soil Division. Planning and Technical Services; National Water and Soil Conservation Organisation (N.Z). (1980)  (link to catalog entry) (link to document) https://nicgreeneng.wordpress.com/2016/07/03/equal-area-slope-tool/
# Operator Theory Seminar Speaker: Krishnendu Khan, Vanderbilt University Topic: On the Von Neumann algebraic semidirect product rigidity Abstract: Recovering algebraic features of a group $G$ from the group von Neumann algebra $L(G)$ is a problem of central importance in the theory of von Neumann algebras. In my talk, I shall give new classes of examples of groups for which the semidirect product feature is remembered by the group von Neumann algebra. This talk is based on joint work with Ionut Chifan and Sayan Das. Event Date: March 10, 2020 - 1:30pm to 2:20pm Location: 309 VAN Calendar Category: Seminar Seminar Category: Operator Theory
By accessing our 180 Days of Math for Third Grade Answers Key Day 121 regularly, students can get better problem-solving skills. Directions: Solve each problem. Question 1. 18 + 6 = 18 + 6 = Explanation: Perform addition operation on above two given numbers. Add 18 to 6 the sum is 24. Question 2. Explanation: Perform multiplication operation on above two given numbers. Multiply 24 with 5 the product is 120. Question 3. How many paws are on 3 dogs? There are 4 paws on one dog. To calculate paws on 3 dogs we need to perform multiplication operation. Multiply 4 paws with 3 dogs. 4 x 3 = 12 paws There are 12 paws on 3 dogs. Question 4. There is a group of triangles with a total of 15 sides. How many triangles are there? A triangle has three sides. Given that a group of triangles having 15 sides total. 1 triangle = 3 sides ? triangles = 15 sides (1 triangle x 15 sides)/ 3 sides = 1 triangle x 5 = 5 triangles There are 5 triangles with a total of 15 sides. Question 5. Which is larger: $$\frac{17}{100}$$ or $$\frac{27}{100}$$? Given fractions are $$\frac{17}{100}$$, $$\frac{27}{100}$$ 17/100 = 0.17 27/100 = 0.27 So, $$\frac{27}{100}$$  is larger. Question 6. × 5 = 35 × 5 = 35 Explanation: To get the product 35 we need to perform multiplication operation. If we multiply 7 with 5 then we get the product as 35. The missing multiplier is 7. Question 7. Does a sheet of paper have a mass of more or less than one kilogram? We know that one kilogram is equal to 1000 grams. A sheet of paper has mass less than one kilogram. Question 8. It is 7:20. What time will it be in 20 minutes? The given time is 7:20. After 20 minutes the time will be 7:40. Question 9. List the angles in order from smallest to largest. The angle of option A is greater than 90 degrees less than 180 degrees. The angle of option B is less than 90 degrees. The angle of option C is 90 degrees. So, the above angles from smallest to largest are B, C, A. Question 10. If you add 71 to me, you get 100. What number am I?
### Common Misperceptions: Limitations of the ACS for MRP The ACS is the main dataset used for post-stratification in MRP because of its wide coverage of different geographies. However, there are a couple of data limitations to the ACS that make subsequent MRP diverge from the classic MRP as it is usually taught: 1. The ACS is conducted by the Census Bureau but it is a survey, not a population census. It is an extremely large survey, with over 3 million respondents in any given year. But it is still a sample, and therefore its counts are estimates. The ACS even provides margin of error estimates around its counts. 2. Microdata is not available for some geographies like congressional districts. By microdata, we mean data where each row is an individual respondent and can be aggregated up. The Census page, tidycensus, IPUMS (https://usa.ipums.org/) has an amazing interface to download microdata, but that data does not include congressional district. • The Census provides some crosswalk files but they are mainly ZCTA to CD crosswalks. • PUMAs are the main unit of area for the ACS but these do not link well to CD (see here for discussion and potential fractional matching). For example Cambridge Mass. is a single PUMA, but Cambridge has about 8 zipcodes and is split into parts of MA-05 (Rep. Katherine Clark) and MA-07 (Rep. Ayanna Pressley) 3. The population Census is ideal, but the Census only occurs every 10 years, and only a 5 percent sample of it (with its restrictions of its own) is given to the public with some lag time. Therefore the population Census is often not a good alternative to the ACS / CPS. 4. The ACS does not ask some key variables, such as voter registration (which is asked in the CPS) or party identification (which is only asked in political surveys like the CCES or Pew). That limits the variables one could include in the MRP model. 5. Of the variables that are available, only certain variable combinations of them are available at the district level: • There are three are three-way interactions of some demographic variables but not more than that. • The ACS provides thousands of cell counts, but only a subset of them are useful for MRP because cells for MRP must form a partition of the population (i.e., exclusive and exhaustive categories). I describe several important combinations below. 6. Some of the partition counts are not perfect partitions. For example, some of the variable codes I define below exclude people aged 18-24, include non-citizens, or double-count African Americans who also identify as Hispanic/Latino. These introduce some error into the post-stratification. This package and vignette goes into more detail about these limitations and the nature of the data that is available. ### Three types of ACS Tables 1. Age x Sex x Education. These variable codes are encoded in ?acscodes_age_sex_educ. 2. Age x Sex x Race. These variable codes are encoded in ?acscodes_age_sex_race. 3. Sex x Education x Race . These variable codes are encoded in ?acscodes_sex_educ_race See their help pages for more information. A detailed list of limitations to come. ### Collapsing variable codes The survey and the post-stratification must share discrete variables the level of which match 1:1. Therefore, if race is binned 9 ways in the ACS but there are only 5 response options in the CCES, then the finer group must be collapsed to coarser groupings of the 5 levels in the CCES. That also means, for example, that the recoding must be nested – i.e., it is hard to salvage two variable codings that are not many-to-one nested mappings to each other. These judgments must be made with careful attention to the question wording and are documented to some extent in ?namevalue. ### Thanks The statements in this vignette benefited from many online resources and correspondence with experts including Yair Ghitza and Matto Mildenberger. I also relies on findings from several published papers: • Howe, P. D., Mildenberger, M., Marlon, J. R., & Leiserowitz, A. (2015). Geographic variation in opinions on climate change at state and local scales in the USA. Nature Climate Change, 5(6), 596–603. https://doi.org/10.1038/nclimate2583 • Kastellec, J. P., Lax, J. R., & Phillips, J. H. (2010). Public Opinion and Senate Confirmation of Supreme Court Nominees. Journal of Politics, 72(3), 767–784. https://doi.org/10.1017/S0022381610000150 • Walker, Kyle. (2020). tidycensus: Load US Census Boundary and Attribute Data as ‘tidyverse’ and ‘sf’-Ready Data Frames. R package version 0.9.9.2. https://CRAN.R-project.org/package=tidycensus • Warshaw, C., & Rodden, J. (2012). How should We measure district-level public opinion on individual issues? Journal of Politics, 74(1), 203–219. https://doi.org/10.1017/S0022381611001204
Ask question Question # Find the derivatives of the functions. f(x)=\frac{1}{x\ln x} Derivatives ANSWERED asked 2021-05-17 Find the derivatives of the functions. $$\displaystyle{f{{\left({x}\right)}}}={\frac{{{1}}}{{{x}{\ln{{x}}}}}}$$ ## Answers (1) 2021-05-18 The function we want to differentiate is $$\displaystyle{f{{\left({x}\right)}}}={\frac{{{1}}}{{{x}{\ln{{x}}}}}}$$ Apply the quotient rule first to get $$\displaystyle{f}'{\left({x}\right)}={\frac{{{\frac{{{d}}}{{{\left.{d}{x}\right.}}}}{\left({1}\right)}{x}{\ln{{x}}}-{\frac{{{d}}}{{{\left.{d}{x}\right.}}}}{\left({x}{\ln{{x}}}\right)}}}{{{\left({x}{\ln{{x}}}\right)}^{{{2}}}}}}$$ Note that the first term in the numerator is zero because $$\displaystyle{\frac{{{d}}}{{{\left.{d}{x}\right.}}}}{\left({1}\right)}={0}$$ The second term can be computed using the product rule and the derivative of natural logarithm rule: $$\displaystyle{\frac{{{d}}}{{{\left.{d}{x}\right.}}}}{\left({\ln{{x}}}\right)}={\frac{{{1}}}{{{x}}}}$$ So we get $$\displaystyle{\frac{{{d}}}{{{\left.{d}{x}\right.}}}}{\left({x}{\ln{{x}}}\right)}={\frac{{{d}}}{{{\left.{d}{x}\right.}}}}{\left({x}\right)}{\ln{{x}}}+{x}{\frac{{{d}}}{{{\left.{d}{x}\right.}}}}{\left({\ln{{x}}}\right)}$$ $$\displaystyle={\ln{{x}}}+{x}{\frac{{{1}}}{{{x}}}}$$ $$\displaystyle={\ln{{x}}}+{1}$$ Substituting back, we get $$\displaystyle{f}'{\left({x}\right)}={\frac{{-{\left({\ln{{x}}}+{1}\right)}}}{{{\left({x}{\ln{{x}}}\right)}^{{{2}}}}}}$$ expert advice ...
# How do you simplify (c^2/d)/(c^3/d^2)? May 19, 2017 See a solution process below: #### Explanation: First, use this rule for dividing fractions to rewrite the expression: $\frac{\frac{\textcolor{red}{a}}{\textcolor{b l u e}{b}}}{\frac{\textcolor{g r e e n}{c}}{\textcolor{p u r p \le}{d}}} = \frac{\textcolor{red}{a} \times \textcolor{p u r p \le}{d}}{\textcolor{b l u e}{b} \times \textcolor{g r e e n}{c}}$ $\frac{\frac{\textcolor{red}{{c}^{2}}}{\textcolor{b l u e}{d}}}{\frac{\textcolor{g r e e n}{{c}^{3}}}{\textcolor{p u r p \le}{{d}^{2}}}} = \frac{\textcolor{red}{{c}^{2}} \times \textcolor{p u r p \le}{{d}^{2}}}{\textcolor{b l u e}{d} \times \textcolor{g r e e n}{{c}^{3}}} \implies \frac{{c}^{2} {d}^{2}}{{c}^{3} d}$ Now, use these rule of exponents to simplify the $c$ terms: ${x}^{\textcolor{red}{a}} / {x}^{\textcolor{b l u e}{b}} = \frac{1}{x} ^ \left(\textcolor{b l u e}{b} - \textcolor{red}{a}\right)$ and ${a}^{\textcolor{b l u e}{1}} = a$ $\frac{{c}^{\textcolor{red}{2}} {d}^{2}}{{c}^{\textcolor{b l u e}{3}} d} \implies {d}^{2} / \left({c}^{\textcolor{b l u e}{3} - \textcolor{red}{2}} d\right) \implies {d}^{2} / \left({c}^{1} d\right) \implies {d}^{2} / \left(c d\right)$ Now, use these rules of exponents to simplify the $d$ terms: $a = {a}^{\textcolor{b l u e}{1}}$ and ${x}^{\textcolor{red}{a}} / {x}^{\textcolor{b l u e}{b}} = {x}^{\textcolor{red}{a} - \textcolor{b l u e}{b}}$ and ${a}^{\textcolor{red}{1}} = a$ ${d}^{2} / \left(c d\right) \implies {d}^{\textcolor{red}{2}} / \left(c {d}^{\textcolor{b l u e}{1}}\right) \implies {d}^{\textcolor{red}{2} - \textcolor{b l u e}{1}} / c \implies {d}^{1} / c \implies \frac{d}{c}$
### Lab02: Selection Sort (part 1) and ≤ on Answer ##### due by Monday, Feb 6, 2023 Before doing this assignment you will need to pull from the upstream repository, i.e., the course repository. Please complete the definitions/proofs in the files PnP2023/Labs/Lab02/SelectionSort.lean and PnP2023/Labs/Lab02/AnswerLE.lean. When you are done please commit and push your changes to your forked private repository and alert me on Zulip. 1. Complete the proof of remove_length_le in PnP2023/Labs/Lab02/SelectionSort.lean. 2. Complete the proof of remove_mem_length in PnP2023/Labs/Lab02/SelectionSort.lean. 3. Complete the definition of selectionSort in PnP2023/Labs/Lab02/SelectionSort.lean, including proof by termination. Do not make this a partial def. Also ensure that there is no error saying that you failed to prove termination. 4. Complete the proof of Answer.eq_of_le_le in PnP2023/Labs/Lab02/AnswerLE.lean.
Translational Elasto Gap - MapleSim Help Home : Support : Online Help : MapleSim : MapleSim Component Library : 1-D Mechanical : Translational : Springs and Dampers : componentLibrary/1Dmechanics/translational/springsDampers/ElastoGap Translational Elasto Gap 1D translational spring damper combination with gap Description The Translational Elasto Gap (or Elasto Gap) component models a Spring Damper that can lift-off. When the distance between the flanges is greater than the relaxed spring length, no force is exerted. Outside of this region, contact is present and the contact force is basically computed with a linear spring/damper characteristic. The parameter $n$ can be used to model a nonlinear spring force. Equations $\mathrm{contact}=\left({s}_{\mathrm{rel}}<{s}_{\mathrm{rel0}}\right)$ ${s}_{\mathrm{rel}}={s}_{b}-{s}_{a}$ ${v}_{\mathrm{rel}}={\stackrel{.}{s}}_{\mathrm{rel}}$ $f={f}_{c}+{f}_{d}=-{f}_{a}={f}_{b}$ ${f}_{c}=\left\{\begin{array}{cc}-c{\left|{s}_{\mathrm{rel}}-{s}_{\mathrm{rel0}}\right|}^{n}& \mathrm{contact}\\ 0& \mathrm{otherwise}\end{array}$ ${f}_{d}=\left\{\begin{array}{cc}0& ¬\mathrm{contact}\\ \phantom{\rule[-0.0ex]{0.5ex}{0.0ex}}{f}_{c}& {f}_{\mathrm{d2}}<{f}_{c}\\ -{f}_{c}& -{f}_{c}<{f}_{\mathrm{d2}}\\ {f}_{\mathrm{d2}}& \mathrm{otherwise}\end{array}$ ${f}_{\mathrm{d2}}=\left\{\begin{array}{cc}d{v}_{\mathrm{rel}}& \mathrm{contact}\\ 0& \mathrm{otherwise}\end{array}$ $\mathrm{lossPower}={f}_{d}{v}_{\mathrm{rel}}$ Variables Name Units Description Modelica ID $f$ $N$ Forces between flanges f ${f}_{c}$ $N$ Spring force fc ${f}_{d}$ $N$ Linear damping force limited by spring force fd ${f}_{\mathrm{d2}}$ $N$ Linear damping force fd2 ${f}_{x}$ $N$ Force applied to ${\mathrm{flange}}_{x},x\in \left\{a,b\right\}$ flange_x.f ${s}_{\mathrm{rel}}$ $m$ Relative distance between flanges s_rel ${s}_{x}$ $m$ Absolute position of ${\mathrm{flange}}_{x},x\in \left\{a,b\right\}$ flange_x.s ${v}_{\mathrm{rel}}$ $\frac{m}{s}$ Relative velocity between flanges v_rel $\mathrm{lossPower}$ $W$ Loss power leaving component via heatPort lossPower $\mathrm{contact}$ Boolean variable; true when springs exerts force contact Connections Name Description Modelica ID ${\mathrm{flange}}_{a}$ Left flange of compliant 1-dim. translational component flange_a ${\mathrm{flange}}_{b}$ Right flange of compliant 1-dim. translational component flange_b $\mathrm{heatPort}$ heatPort Parameters General Parameters Name Default Units Description Modelica ID $c$ $1$ $\frac{N}{m}$ Spring constant c $d$ $1$ $\frac{Ns}{m}$ Damping constant d ${s}_{\mathrm{rel0}}$ $0$ $m$ Unstretched spring length s_rel0 $n$ $1$ $1$ Exponent of spring force n Use Heat Port $\mathrm{false}$ True (checked) means heat port is enabled useHeatPort Name Default Units Description Modelica ID ${s}_{\mathrm{nominal}}$ $1·{10}^{-4}$ $m$ Nominal value of ${s}_{\mathrm{rel}}$ s_nominal $\mathrm{prefer}$ Prioritize ${s}_{\mathrm{rel}}$ and ${v}_{\mathrm{rel}}$ as states stateSelect
# 009B Sample Final 1 This is a sample, and is meant to represent the material usually covered in Math 9B for the final. An actual test may or may not be similar. Click on the  boxed problem numbers  to go to a solution. ## Problem 1 Suppose the speed of a bee is given in the table. Time (s) Speed (cm/s) ${\displaystyle 0.0}$ ${\displaystyle 125.0}$ ${\displaystyle 2.0}$ ${\displaystyle 118.0}$ ${\displaystyle 4.0}$ ${\displaystyle 116.0}$ ${\displaystyle 6.0}$ ${\displaystyle 112.0}$ ${\displaystyle 8.0}$ ${\displaystyle 120.0}$ ${\displaystyle 10.0}$ ${\displaystyle 113.0}$ (a) Using the given measurements, find the left-hand estimate for the distance the bee moved during this experiment. (b) Using the given measurements, find the midpoint estimate for the distance the bee moved during this experiment. ## Problem 2 We would like to evaluate ${\displaystyle {\frac {d}{dx}}{\bigg (}\int _{-1}^{x}\sin(t^{2})2t\,dt{\bigg )}.}$ (a) Compute  ${\displaystyle f(x)=\int _{-1}^{x}\sin(t^{2})2t\,dt}$. (b) Find  ${\displaystyle f'(x)}$. (c) State the Fundamental Theorem of Calculus. (d) Use the Fundamental Theorem of Calculus to compute  ${\displaystyle {\frac {d}{dx}}{\bigg (}\int _{-1}^{x}\sin(t^{2})2t\,dt{\bigg )}}$  without first computing the integral. ## Problem 3 Consider the area bounded by the following two functions: ${\displaystyle y=\cos x}$  and  ${\displaystyle y=2-\cos x,~0\leq x\leq 2\pi .}$ (a) Sketch the graphs and find their points of intersection. (b) Find the area bounded by the two functions. ## Problem 4 Compute the following integrals. (a)  ${\displaystyle \int {\frac {t^{2}}{\sqrt {1-t^{6}}}}~dt}$ (b)  ${\displaystyle \int {\frac {2x^{2}+1}{2x^{2}+x}}~dx}$ (c)  ${\displaystyle \int \sin ^{3}x~dx}$ ## Problem 5 The region bounded by the parabola  ${\displaystyle y=x^{2}}$  and the line  ${\displaystyle y=2x}$  in the first quadrant is revolved about the  ${\displaystyle y}$-axis to generate a solid. (a) Sketch the region bounded by the given functions and find their points of intersection. (b) Set up the integral for the volume of the solid. (c) Find the volume of the solid by computing the integral. ## Problem 6 Evaluate the improper integrals: (a)  ${\displaystyle \int _{0}^{\infty }xe^{-x}~dx}$ (b)  ${\displaystyle \int _{1}^{4}{\frac {dx}{\sqrt {4-x}}}}$ ## Problem 7 (a) Find the length of the curve ${\displaystyle y=\ln(\cos x),~~~0\leq x\leq {\frac {\pi }{3}}}$. (b) The curve ${\displaystyle y=1-x^{2},~~~0\leq x\leq 1}$ is rotated about the  ${\displaystyle y}$-axis. Find the area of the resulting surface.
Address 8 Union St, Exeter, NH 03833 (603) 772-9369 # calculating relative standard error East Hampstead, New Hampshire Relative standard error is calculated by dividing the standard error of the estimate by the estimate itself, then multiplying that result by 100. Foreign Exchange Reserves Foreign exchange reserves are reserve assets held by a central bank in foreign currencies, used to back liabilities on their ... Find a Critical Value 7. Notice that s x ¯   = s n {\displaystyle {\text{s}}_{\bar {x}}\ ={\frac {s}{\sqrt {n}}}} is only an estimate of the true standard error, σ x ¯   = σ n Transposition errors occur when ... doi:10.4103/2229-3485.100662. ^ Isserlis, L. (1918). "On the value of a mean as calculated from a sample". It is useful to compare the standard error of the mean for the age of the runners versus the age at first marriage, as in the graph. In an example above, n=16 runners were selected at random from the 9,732 runners. Correction for finite population The formula given above for the standard error assumes that the sample size is much smaller than the population size, so that the population can be considered Pearson's Correlation Coefficient Privacy policy. How do I approach my boss to discuss this? Statistical significance is computed as: where S is the standard error, x1 is the first estimate, and x2 is the second estimate. The standard error for food sales buildings (x1) is (7.5/100)*5,600, or 420, and the standard error for health care buildings (x2) is (11.4/100)*24,600, or 2,804. Step 2: Multiply Step 1 by 100. This formula may be derived from what we know about the variance of a sum of independent random variables.[5] If X 1 , X 2 , … , X n {\displaystyle Relative Standard Deviation Excel There isn't a built in function to find the RSD in Excel. The true standard error of the mean, using σ = 9.27, is σ x ¯   = σ n = 9.27 16 = 2.32 {\displaystyle \sigma _{\bar {x}}\ ={\frac {\sigma }{\sqrt Estimates with a RSE of 25% or greater are subject to high sampling error and should be used with caution. Dividend Reinvestment Plan - DRIP A plan offered by a corporation that allows investors to reinvest their cash dividends by purchasing additional shares or ... Q: What is a coin toss simulation? Expected Value 9. See how standard deviation is helpful in evaluating a mutual fund's performance. Learn how the standard error is used in trading ... As the sample size increases, the sampling distribution become more narrow, and the standard error decreases. As an example of the use of the relative standard error, consider two surveys of household income that both result in a sample mean of \$50,000. The Office of Health Informatics follows guidelines used by the National Center for Health Statistics (PDF, 289 KB, exit DHS) and recommends that estimates with RSEs above 30 percent should be no. 6298.0). For example, there is approximately a 95% chance (i.e. 19 chances in 20) that the population value lies within two standard errors of the estimates, so the 95% confidence interval is Method 1: $$\text{typical_value} = \text{male_value}*(1 + \theta{(x)}*\text{is_female})$$ where: is_female is 0 for males and 1 for females. Understand the basics of calculation and interpretation of standard deviation and how it is used to measure risk in the investment ... The standard error actually refers to the standard deviation of the mean. In other words, it is the standard deviation of the sampling distribution of the sample statistic. The standard deviation of the age for the 16 runners is 10.23, which is somewhat greater than the true population standard deviation σ = 9.27 years. American Statistical Association. 25 (4): 30–32. Similarly, the sample standard deviation will very rarely be equal to the population standard deviation. The mean age for the 16 runners in this particular sample is 37.25. The 95% confidence interval for the average effect of the drug is that it lowers cholesterol by 18 to 22 units. Last Revised: December 2, 2014 WISH Home Behavioral Risk Factor Survey All Births Low Birthweight Teen Births Prenatal Care Fertility Infant Mortality Cancer Injury-Related Mortality Injury-Related Hospitalizations Injury-Related Emergency Dept. Watch the video, or read on below: The relative standard deviation (RSD) is a special form of the standard deviation (std dev). Standard deviation refers to the variability inside any given sample, while standard error is the variability of the sampling distribution itself. Find out how risk aversion is measured in modern portfolio theory (MPT), how it is reflected in the market and how MPT treats ... The equation is Xbar = Xsum/N, where Xsum is the sum of all the data points, and N is the total number of points. The standard error estimated using the sample standard deviation is 2.56. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean Ecology 76(2): 628 – 639. ^ Klein, RJ. "Healthy People 2010 criteria for data suppression" (PDF). The ages in that sample were 23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, and 55. n is the size (number of observations) of the sample. This often leads to confusion about their interchangeability. Check our our statistics YouTube channel for hundreds of videos on elementary stats. JSTOR2682923. ^ Sokal and Rohlf (1981) Biometry: Principles and Practice of Statistics in Biological Research , 2nd ed. The sample standard deviation s = 10.23 is greater than the true population standard deviation σ = 9.27 years. ISBN 0-7167-1254-7 , p 53 ^ Barde, M. (2012). "What to use to express the variability of data: Standard deviation or standard error of mean?". Because the 9,732 runners are the entire population, 33.88 years is the population mean, μ {\displaystyle \mu } , and 9.27 years is the population standard deviation, σ. Find the variance by calculating the average of the squared differences from the... For example, the U.S. So, Sx1-x2 = (4202+2,8042)½ and Sx1-x2 = 2,836.
# How to convert mol m-2 s-1 CO2 emission to Megaton day-1 (MtCO2 per day)? I am working on the global CO2 emission using a model dataset. The model provides the CO2 emission in mol m-2 s-1. How to convert mol m-2 s-1 CO2 emission to Megaton day-1 (Mt per day)? Regards • You don't have enough information. You have a flux rate, so without a defined area, you can't convert to daily emissions. You can easily convert moles to Mt, and seconds to days, but you still need an area defined. – farrenthorpe May 4 at 14:22 • Thank you sir, your answer solved the problem. There was a global dataset, and the area was 510.1 million km². – Farhan Mustafa May 4 at 14:52 Do the calculation for 1 mol of CO$$_2$$ 1 mol of CO$$_2$$ weighs 49 grammes = 0.0049 kg 86400 seconds in a day, so 864000 * 0.0049 = 423.36 kg/day/m$$^2$$ The surface area of the globe is 5.1 x 10$$^{14}$$ m$$^2$$ - so 423.6 * 5.1 x 10$$^{14}$$ = 2.16 x 10$$^{17}$$ kg/day Convert from kg to Megatons, 10$$^9$$ kg in a Megaton - so we get 1 mol CO$$_2$$/m$$^2$$/second = 2.16 x 10$$^8$$ Megatons/day This assumes, as farrenthorpe commented, that the molar emission rate represents a rate evenly distributed over the globe - if, for instance it only represented terrestrial emissions, or emissions in a particular region the calculation would need to take that into account. • Thank you sir for explaining in detail. It helped out. – Farhan Mustafa May 4 at 14:55 • Farhan - happy to help out. I just edited my answer as I had made an error in units that changed the answer. I missed conversion from square metres to square kilometres! – Andy M May 4 at 15:57
Homework Help: Calculating final velocity of an object on an inclined plane 1. Oct 26, 2011 trulyfalse Hello everyone, I'm stuck on a dynamics review question. I was told to solve like I would with other inclined planes, however mass was not given. I am not sure how to proceed. 1. The problem statement, all variables and given/known data A roller coaster reaches the top of the steepest hill with a speed of 1.4 m/s. It then descends down the hill, which is at an average angle of 45° and is 50 m long. What will its speed be when it reaches the bottom? (Answer: 26 m/s) 2. Relevant equations Fnet=ma Fg=mg 3. The attempt at a solution I drew a free body diagram, separating the x and y components of Fg. This is futile as mass is not given, thus Fg, Fn, and Fnet cannot be calculated. Am I missing something here? 2. Oct 26, 2011 sandy.bridge Hello, if you are familiar with conservation of energy, here is a method for solving this problem. $K_E=P_E$ In doing so, you eliminate the dependence of mass. 3. Oct 26, 2011 trulyfalse Could you elaborate? Thanks. :) 4. Oct 26, 2011 sandy.bridge $$\frac{1}{2}mv^2=mgh$$ Which in words states that the kinetic energy at the bottom is equal to the change in potential energy. The height, h, can easily be determined as you know the angle of incline. 5. Oct 26, 2011 trulyfalse So, in that case I would assume mass is negligible and leave it out of the equation? Here we go: First I manipulated the formula given (assuming mass in negligible) to get √2gh=v 50cos(45°) = 35.4 m = height of the coaster to the ground √2(9.81m/s2)(35.4m) = v v = 26.3 m/s rounded off to 2 sig digs is 26 m/s Thanks bro! You've been a big help. :) 6. Oct 26, 2011 sandy.bridge The mass is not negligible. I would assume the mass of a rollercoaster is quite large relative to you, or I. However, the change in energy for this particular instance is not dependent on the mass of the system. Hence, the m is cancelled upon manipulation of the equations. 7. Oct 26, 2011 trulyfalse Right, because dividing m by m yields 1. This will be of great aid on my unit exam! 8. Oct 26, 2011 sandy.bridge I wouldn't advise applying any theorms that you are not familiar with. You can also apply kinematics with this question to solve for the velocity. For example, one can apply $v_f^2-v_i^2=2a_y\Delta x$ The acceleration is due to gravity. The intial velocity in the y-drection is 0. The change in position is the same as you had used before. 9. Oct 26, 2011 trulyfalse In that case then, to solve for vf I would manipulate to get
# Revision history [back] ### edge labels and vertex size causes problems Hi, I got two problems creating a graph. I want to misuse the graph tools in sage to create a flow diagram, but I encountered to problems: First, the vertex_size option does no work at all in my code and second, relating to the question here, is there now a possibility to shift the edge labels? I work with sage 5.9 in a virtual box environment from sage.graphs.graph_plot import GraphPlot h = DiGraph({0:[1,2], 1:[3], 2:[4]}) for u,v,l in h.edges(): h.set_edge_label(u,v,'(' + str(u) + ',' + str(v) + ')') h1=h.graphplot(save_pos=True, edge_labels=True, talk=True) h1.show() print h.get_pos() h.set_pos({0:[0,0],1:[1,1],2:[1,-1],3:[2,1],4:[3,-1]}) h1=h.graphplot(save_pos=True, edge_labels=True, talk=True, vertex_size=0) h1.set_vertices(vertex_shape='s') h1.show() ### edge labels and vertex size causes problems Hi, I got two have several problems creating a graph. I want to misuse the graph tools in sage sagemath to create a flow diagram, but I encountered to two problems: First, the vertex_size option does no not work at all in my code and second, relating to the question here, is there now a possibility to shift the edge labels? I work with sage 5.9 in a virtual box environment from sage.graphs.graph_plot import GraphPlot h = DiGraph({0:[1,2], 1:[3], 2:[4]}) for u,v,l in h.edges(): h.set_edge_label(u,v,'(' + str(u) + ',' + str(v) + ')') h1=h.graphplot(save_pos=True, edge_labels=True, talk=True) h1.show() print h.get_pos() h.set_pos({0:[0,0],1:[1,1],2:[1,-1],3:[2,1],4:[3,-1]}) h1=h.graphplot(save_pos=True, edge_labels=True, talk=True, vertex_size=0) h1.set_vertices(vertex_shape='s') h1.show()
## Saturday, August 25, 2012 Our Middle School has a fairly developed advisory program in the 6th - 8th grades. This will be my seventh year as an advisor, and I really, really love it. Which is very surprising to me because it was the thing I was most nervous about when starting at this school - I know math, I don't necessarily know adolescents and their crazy thinking and feelings and struggles. But it's turned out much, much better than I had feared, and I would not like to go back to just being a classroom teacher like I was before. I really enjoy the close relationships that I develop with advisees and the community that advisory becomes as the year progresses. Some aspects of the program: • Advisors welcome students to a new year with either a phone call or a letter sent home before school starts. Here is my letter that I'm sending out in a few days, but I hand wrote each one on a cute notecard. Kids love getting mail! • Advisory meets first thing every morning for 10 minutes to go through announcements and check in with students and for 45 minutes twice a week ("extended advisory"). • Advisors meet with students and parents twice a year for Parent-Advisor-Student conferences, led by the student. They discuss the student's progress and other issues that are affecting them. 8th grade advisors (that's me!) also help students register for high school classes if they are continuing into our high school. • The advisor is basically the touch point between the student and family and the school. The advisor keeps tabs on how the student is doing academically (via other teachers), behaviorally (via the assistant principal), and emotionally (via the counselor). Concerns about the student are supposed to go to the advisor first, either from other teachers or from parents. • During extended advisory, we do activities that are related to the advisory curriculum (more on that below), play games, play outside, or meet one-on-one with advisees. Two big things that the 8th graders also do during advisory is participate in a Little Buddies program with a younger class and do community service projects, like helping out at a food pantry. One of the extended advisories takes place on a Friday morning and students take turns bringing in breakfast so that the advisory can sit down and eat breakfast together. • The curriculum is pretty loose, but tries to hit the following topics: • Learning/study strategies, goal setting, other academic type skills, including preparing to lead conferences • Executive functioning, organization (a lot of our students struggle with this) • Risky behaviors (sex, drugs, and rock & roll) • Relationships (navigating friendships & dating, cliques and excluding others) • Bullying & aggressive behavior • Online stuff (navigating social media, safety, civility in a digital world) • Media literacy, including being a smart consumer • Body image & eating disorders • Diversity & inclusion • 8th grade advisors select a book for each of their advisees as a graduation gift (the school pays for this). This is one of my fave traditions, but it takes me forever to come up with the perfect book for each kid. I'm currently organizing all of the 8th grade advisory resources into digital form since we've had physical binders & folders for a long time, and will post an update once they have been migrated to the web in case anyone would like to use them. ### NBI Post #2: Something That I'm Proud Of Seems like the New Blogger Initiative has gotten started with a bang... my Google Reader is bursting at the seams and I'm seeing lots of new faces on Twitter. So here we go with entry #2. I chose the first prompt: Find one worksheet or activity or test or unit or question or powerpoint slide or syllabus or anything that you are proud of. Share it. I cheated because I couldn't pick just one, and had to settle for two that are very connected. So my favorite sequence of lessons to teach are on the topic of slope-intercept form of a linear equation. I feel like there's so much richness there, in terms of patterns, real-world applications, and connections to previous and future topics that I've always enjoyed teaching it. My main problem has been time constraints hitting against my desire to do a million different activities with this topic. Last year, this was the sequence that I used: 1) Introduce patterns that grow in a linear fashion. Students are in groups and need to predict the previous and the next figures in the pattern. Then, they need to explain the pattern - what changes? what stays the same? Then, they describe the 100th figure in the pattern and generalize to the xth figure. Repeat for a few more patterns that are still linear, but either grow faster or slower or start with a different number of tiles. We make a table showing the data (figure # versus # of tiles), graph it, and then all the awesomeness gets even more so when we start connecting and comparing all of the different representations and finally discuss the equation for each pattern and how it shows this information. Intro to Slope-Intercept Form I really like this activity because it is so group-focused - all I need to do is moderate the discussions, and all of the discovery and thinking comes from the students. The tasks are also low-entry and kids that maybe typically don't participate much seem to enjoy the visual patterns and predictions. I love days when I feel like the students are running the classroom and I see intrinsic engagement. 2) The next day, students complete a lab-type activity in groups, called "Linear Walks." They use motion detectors to visualize the relationship between time and distance and better understand why the graph of an equation in slope-intercept form looks the way that it does. This was adapted from the Discovering Algebra textbook, but I've seen versions of it in lots of places. Linear Walks Lab This is also a super fun day for me because there's such a clear connection for students between the algebraic reality (variables and equations and such) and what's actually going on in front of them. It's so clear why the graph of y = 0.5x + 2 looks the way that it does since it represents someone standing 2 meters away from the motion detector and increasing their distance by 0.5 meters every second. It also connects nicely to when we discuss point-slope form of an equation a few lessons later. An equation like y = 0.5(x – 1) + 2 now means that someone standing 2 meters away from the motion detector waited 1 second (so they lost 1 second of time, hence we subtract 1 from x) and then started increasing their distance by 0.5 meters every second. I love that these two lessons make sense of an abstract concept like y = mx + b without memorization or "tricks," but rather through understanding of patterns and physical concepts like movement over time. It gives me a nice contextual handle to refer back to throughout the chapter: "If your graph represented someone walking, would their distance be increasing or decreasing over time?" "If your equation represented a pattern, how many tiles would it have started with?" I'd love to hear how others teach this topic and if you have any feedback or criticism of these lessons. ## Tuesday, August 21, 2012 ### New Blogger Initiative - Post #1 on First Week Goals Super excited for the New Blogger Initiative that @samjshah has started up! I'm fairly new to blogging (started about 3 months ago), and it's wonderful to be initiated into the mathtwitterblogosphere and to be harangued & threatened with whacking if I don't keep up with my blog! Umm, I think it's with love? Anyway, without further ado, here is my big goal for the first week of school, which is in about two and a half weeks: ## Create a positive classroom culture where students feel comfortable, confident, and cared for by me and each other. Yup, that's a raccoon group hug. Many of my students have struggled with math in the past or have learned that it is a weird, arbitrary set of rules that they have to memorize and regurgitate as best as they can and that their creativity, passion, and intellect don't have much of a place. Yes, it's a bit of a tall order for the first week, but I want students to have a sense of our classroom as a place where things make sense, where they are smart and capable, and where people care about each other. Since the first unit for all of my sections will focus on review, it gives me lots of opportunities for activities that emphasize collaboration, creativity, and engaging thinking. I also want to be sure to create a sense of order and safety in how the class is run, both in terms of procedures that simplify our day-to-day structures and in terms of how mistakes are received and feedback is given. Obviously, as the year goes on, I'm going to be looking at students' learning and ability to communicate mathematically, and all of the big goals that I outlined for myself earlier, but for the start of the year, I would love to just see students feeling positive. ## Sunday, August 19, 2012 ### msSunFun #3: Goals for the School Year I'm so glad that the theme for this week was changed to goal-setting for the new school year because this is something that I've needed to sit down and write for a while now, and this was the perfect kick-in-the-butt to get myself to do it. I have set goals for a few years now, but this year, I'd like to go back and you know, actually see how I'm doing. So maybe there will be a prompt later in the year to check in on our goals? I have two overarching goals this year: 1. Richer Mathematics 2. I would like to deepen the curriculum, to push for understanding that is more abiding and less surface-level or focused on discrete skills. The specific ways that I hope to achieve this are by having students do more: • writing, processing, reflecting, and explaining We already do a lot of this in class and I've required students to do journal writing for two years now, but I want to make this part of daily homework assignments and incorporate into assessments. I don't want writing and reflecting to be an add-on that happens every week or two, but incorporated into the fabric of the class. To that end, I will be asking students to respond orally and in writing to prompts at the end of most class periods and as part of most homework assignments. I will be asking students to make videos where they explain their approach to a problem. I would also like to put more "explain this" type questions on tests. • problem-solving In my previous post, I wrote about the various different approaches that I've tried to incorporate rich problems and tasks into my classrooms, and how I plan to use them this year. The basic gist is that I want to use more problems that are content-related in the classroom, pose more problems for kids to think about outside of the classroom, and continue to provide extra, "fun" problems to interested kids. I think that the group-sized whiteboards I made this year will help encourage better groupwork and communication about problems between students. I'm still thinking about how to assess students' work when assigning more difficult, open-ended problems, both in terms of giving good feedback and in terms of coming up with a grade of some sort at the end. 3. Communication 4. I would like for there to be more dialog between myself and students, more opportunities for them to give feedback on how they are doing and what they need and for me to communicate more clearly and more often back to them how they are doing in the class and what they should be working on to improve. Last year, I had time to meet with students in the two-year Algebra sequence about once a week to discuss how they were doing and what I wanted to them to work on, but it wasn't until the end of the year that I realized that I was doing a lot of the work for them (keeping track of missing assignments & assignments that should be corrected, as well as assessments that needed to be retested) and that they were depending on me to tell them what to do. Last year, I started making them keep track of this themselves and even gave points for having a pretty clear picture of where they were at when I checked in with them. I want to start this much earlier this year. I was also very unsystematic about reassessing - there wasn't a clear schedule and I didn't always follow up with students who blew it off. I would like to be more organized this year - I will have a calendar where students who miss assessments or those who are reassessing will sign up, and keep better track of students who need to reassess but avoid doing so. I would also like to encourage students to communicate with me about their needs. I'll be using Edmodo for the first time this year, which will allow me to periodically post surveys or questionnaires to get more feedback from students. I'm planning on taking more pictures and notes during class and sharing my observations with students throughout the year rather than just at report card time. I'm also toying with the idea of involving parents more, either through Edmodo (which allows for parent accounts) or by using Evernote to keep track of the student photos and notes and emailing them to families. I need to think about this a bit more - I'd love to hear how others choose to involve (or not involve) parents and why. ## Wednesday, August 15, 2012 ### Integrating problem solving into the curriculum Like many others (@fawnpnguyen posted recently about her approach and there were some great discussions in the comments), I have wrestled with the question of how to integrate problem solving into my teaching. The master's program through which I was trained as a teacher heavily emphasized students engaging with rich, multi-entry tasks that promoted collaboration, writing, and connections between different approaches and ideas. I strongly believe this type of work should be a vital part of every math class. At some point soon, I hope that the Global Math Department will have a presentation on how to lead/organize problem solving in the classroom. Here are the different ways that I've used rich problems in the past: 1. Found problems that connected directly with the content material that was already part of the course. There are many problems that lend themselves to the content found in traditional MS and HS classes. For example, many of the problems in the Interactive Mathematics Program, Years 1 and 2, lead to students creating rules for specific scenarios or functions, including linear, exponential, and inverse ones. The Mathematics in Context and Connected Mathematics series have some great problems that can be integrated into traditional Pre-Algebra and Algebra 1 classes. The drawback with trying to connect everything back to the traditional content is that there's lots of material for which I have not found good problems, such as factoring, operations with rational expressions, and radical functions and expressions. Back when I taught Algebra 2 and Pre-Calculus, I had similar difficulties finding rich problems for much of the content. There's also the issue of time - I'd like to ideally have at least one rich problem every week or two, which eats up a lot of my class time if done well. Finally, using only problems that have a clear connection to the traditional curriculum leaves out a lot of rich, awesome problems that I still want to include. 2. Assigned problems to be completed outside of class. Some were connected to the traditional content, some were not. This gave me a lot more flexibility in terms of good problems to use and took up much less class time. But I never found a good way to support struggling students, develop the writing and problem-solving skills that are at the core of this type of work, and make explicit the connections between the assigned problems and the rest of the curriculum. The problems gradually petered out as both I and the students lost steam and assigning the problems became stressful and unproductive. If I do this again, I will need to spend some class time teaching students how to wrestle productively with open problems and will probably need to do some ramping, with easier problems at the start of the year. 3. Provided problems to interested students outside of class. Not required, problems were usually unconnected to the content. This was definitely the approach that involved the least amount of work. I had a pretty straightforward system: a folder with copies of the current "Problem of the Week" stapled to the wall outside of my classroom and another folder stapled just below that where students put their completed write-ups. At the end of the week, I would read through the submitted work, write feedback, and award candy to those students who demonstrated good work on the problem. I had a spreadsheet where I kept track of students who completed these. Some positives were that I got kids who weren't even my students to participate, just because they thought it might be interesting, and because it was not required, it was very stress-free and emphasized the "fun" aspect of figuring out math problems. The cons were that there was little connection to the curriculum and the students who participated were those who already enjoyed math and the students who could stand the most to gain from this type of experience avoided it altogether. So, my thoughts for this school year are that I would like to do all three of these options (hooray for overachievers!). A mix of #1 and #2 make the most sense for my class - doing those problems that have a clear content connection in class & spending more time on them, while reserving those awesome, random problems for the times when I can't find anything good that connects to what we're studying. Option #3 can co-exist as optional, more challenging or more "fun" type problems for students to do just because they want more. My biggest enemy right now is time: time in class for students to discuss and time outside of school for students to think and do math and write up their thinking and mathing. Oh, and did I mention that my students only have math for 45 minutes four days a week??? Clearly, I can't just add on more stuff without cutting anything, so I'm wondering how others have found time to do this - what do you cut? ## Saturday, August 11, 2012 ### MS SunFun - Math Class Binders The theme this week is Student Math Class Notebooks. Instead of notebooks, however, I like for my students to keep a 1 inch 3-ring binder. My reasons for this rather than an Interactive Notebook is that there is no cutting or gluing necessary, which cuts down on supplies needed for class as well as time to cut & glue stuff into the notebook. Instead, all handouts are hole-punched and students have blank hole-punched lined and graph paper to use. The other benefit is that the order can be changed and new pages inserted at any time. If a student is absent, they can just continue with their class work and if they later work on an assignment that happened while they were gone, they can just insert it into the right place. Homework or classwork can be turned in to me and then easily returned to the binder. Students' binders go back and forth between home and school. The binder is organized into three sections with dividers: 1. Notes/In-class projects (basically, everything that happens in class, but isn't a quiz or test) 2. Homework/Journaling (all assignments that get taken home) 3. Quizzes/Tests, along with corrections and retakes This year, I will be using Left Hand Page and a Right Hand Page designations for notes, as described by Megan on her blog. Students will use the RHP to write down and work out problems and take notes, and they will use the LHP to process the material, mostly through reflective journaling questions that I will ask at the end of class, as well as a general summary of the day's lesson and a list of any questions that they still have. Another change for this year is that I will ask students to number the pages in each section and make a table of contents at the front of the In-class section. Since I give them an assignment sheet that lists all of the homework assignments for the unit, that page can be their table of contents for the homework section. I do a binder check every so often (more if the kids seem especially disorganized) where I look to see that they have the three sections organized and that they have blank lined and graph paper, as well as the required supplies for class. During the binder check, I also check in with students to see if they know what assignments, if any, they are missing, and what assignments they have not received full credit on and that they need to correct before the end of the unit. It's part of the grade for the binder check that students have a pretty accurate view of any outstanding work that they need to complete and know what concepts/topics they need to review or correct. My hope is that this helps them see the benefit of having an organized binder and puts more of the responsibility of knowing what they are supposed to do on them. My other little tip for keeping a binder is that at the end of each unit, students clean out each section, staple them together, and put them in a file folder that I keep for each student in a crate in my classroom. This year, I may ask them to reflect on the unit and create a summary sheet of the most important concepts and skills, which they will put at the front of the packet. At the end of the year, students have a nice folder of review materials that is organized by chapter. I'm not sure yet how to effectively help them use it to review for final exams, so if you have any good ideas about that, I'd love to hear them. (This is not from my class, but since all of my classroom stuff is still put away for the summer, it will have to do) ## Saturday, August 4, 2012 ### Counseling conference thoughts I've been super busy the last few days attending a counseling conference for teachers and advisors in Colorado. It's been amazingly powerful. We have been working on the skills that will help me be more than just an advisor ("Let's see how we're going to fix this problem..." "Have you tried...?" "When I was a student...") and moving towards real listening and building deeper relationships that will allow students to feel truly connected and understood. This will sometimes result in them processing through their feelings and coming to a solution of some sort. Sometimes, it will mean that "the relationship is the solution," which is a new idea for me. The conference is run by the Stanley H. King Counseling Institute, and I have a few more days in which to practice these newfound skills. On the first day, we learned about "real listening," which basically involves me talking as little as possible, only saying a few words or a question here or there to continue encouraging the speaker to go deeper and talk more. The next day, we learned about specific skills that would help us do this type of listening. I am actually thinking of making a small handout to post for myself listing these types of responses until they become more internalized: • Summary: a broadbrush overview of what was said, used to convey that you've got the main idea • Paraphrase: rephrases what the speaker has said into your own words, this allows the speaker to correct or clarify the listener's misunderstanding (basically, a more detailed version of summarizing) • Feeling and source: identifies the feeling underlying the speaker's words and the perceived cause of this feeling (can be helpful in pushing the speaker to dig deeper, but have to be careful not to assume or jump too far) • Clarifying question or statement: helps the speaker better understand what he or she is feeling. This is NOT to satisfy the listener's curiosity - the focus is on the speaker and what he or she needs • Joining: a statement that shows empathy or shared connection with the speaker's feelings without moving attention away from his or her story (so don't say, "I had a similar experience too," but instead say, "It's really tough when x happens.") We've done a few role plays channeling students that we struggled to advise over the years, and it was amazing how helpful these techniques were in understanding where the student was coming from and in deepening the listener's relationship with them. I was struck by the difference between this type of relationship building and the type that I usually engage in: discussing common interests, asking kids about their hobbies and athletic pursuits, sharing music or funny videos, etc. These are also good, but they don't promote deep processing and working through issues, which quite a few of my students would benefit from. I also really appreciated the importance of not placating the student or denying their feelings ("I'm sure it's not that bad." "It's okay." "Don't be sad."), which is something I'm certainly guilty of doing. I thought that I was doing a weepy student a kindness by releasing them to go to the bathroom and come back "when they're feeling better and ready for class" (I let them bring a friend! What am I - some kind of monster?), but now I see that I just couldn't handle sitting with their pain and uncomfortable with processing it with them together. This conference is helping me realize that much of what I was doing with my advising and relationship building before was about me, not about the student.
Formatted question description: https://leetcode.ca/all/634.html # 634. Find the Derangement of An Array Medium ## Description In combinatorial mathematics, a derangement is a permutation of the elements of a set, such that no element appears in its original position. There’s originally an array consisting of n integers from 1 to n in ascending order, you need to find the number of derangement it can generate. Also, since the answer may be very large, you should return the output mod 109 + 7. Example 1: Input: 3 Output: 2 Explanation: The original array is [1,2,3]. The two derangements are [2,3,1] and [3,1,2]. Note: n is in the range of [1, 106]. ## Solution Let f(n) be the number of derangement that can be generated using n integers. Obviously, f(1) = 0 and f(2) = 1. For n > 2, how to calculate f(n)? Use dynamic programming, where f(n) can be obtained using f(n - 1) and f(n - 2). If there are n numbers, then the greatest number n can be placed in any position from 1 to n - 1. Suppose number n is placed in position m where 1 <= m < n, then number m must be in a position that is not its original position. If number m is in position n, then the situation becomes a derangement of n - 2 remaining numbers, so the number of derangement is f(n - 2). If number m is not in position n, then for each number from 1 to n - 1, there is exactly one position that the number can’t be in. For number m, the one position it can’t be in is position n. For other numbers like number k where k != m and k < n, the one position it can’t be in is position k. So the number of derangement is f(n - 1). In conclusion, if there are n numbers and number n is placed in position m where 1 <= m < n, then the number of derangement is f(n - 2) + f(n - 1). Since there are n - 1 possible values for m, the number of derangement in total is f(n) = (f(n - 2) + f(n - 1)) * (n - 1). For each m such that 1 <= m <= n, after f(m) is calculated, do the modulo operation. Finally, return f(n). class Solution { public int findDerangement(int n) { if (n <= 3) return n - 1; final int MODULO = 1000000007; long[] dp = new long[n]; dp[0] = 0; dp[1] = 1; for (int i = 2; i < n; i++) dp[i] = (dp[i - 2] + dp[i - 1]) * i % MODULO; return (int) dp[n - 1]; } }
# Math Derive this identity from the sum and difference formulas for cosine: sinasinb=1/2[cos(a-b)cos(a+b)] Start with the right-hand side since it is more complex. Calculations: _____ _____ _____ Reason: _____ _____ _____ 1. 👍 2. 👎 3. 👁 4. ℹ️ 5. 🚩 1. look at the end of your last post of this same thing https://www.jiskha.com/display.cgi?id=1527528235 1. 👍 2. 👎 3. ℹ️ 4. 🚩 ## Similar Questions 1. ### Algebra 2 Which cosine function has maximum of 4, a minimum of -4, and a period of 2pi/3? A. y=4 cos 3 theta B. y= 4 cos 2 theta/3 C. y=4 cos theta/3 D. y=4 cos 3 theta 2. ### algebra A certain cosine function completes 6 cycles over the interval [0,2π]. Which function rule could model this situation? f(x)=cos(1/6x) f(x)=cos(x)+6 f(x)=cos(6x) f(x)=6cos(x) 3. ### TRIG/ALGEBRA 1) Find the exact value. Use a sum or difference identity. tan (-15 degrees) 2) Rewrite the following expression as a trigonometric function of a single angle measure. cos 3x cos 4x - sin 3x sine 4x 4. ### self-study calculus Sketch the curve with the given vector equation. Indicate with an arrow the direction in which t increases. r(t)=cos(t)I -cos(t)j+sin(t)k I don't know what to do. I let x=cos(t), y=-cos(t) and z= sin(t). Should I let t be any 1. ### Trigonometry How could you evaluate tan (13pi/12) if you did not know the sum and difference formula for tangent? Would you use the sin and cos sum and difference formulas, and if so, can someone walk me through it? Thank you!!! 2. ### Algebra 2 Honors Find the exact value by using an appropriate sum or difference identity. cos(165) degrees 3. ### Trigonometry Use a sum of difference identity to write the expression as a single function theta: cos(theta - pi). Okay so I know we will use cosAcosA+sinBsinB I got: cos(theta)cos(theta)sin(pi)sin(pi) I don't know how to solve from here and 4. ### algebra A certain cosine function completes 6 cycles over the interval [0,2π]. Which function rule could model this situation? f(x)=cos(16x) f(x)=cos(x)+6 f(x)=cos(6x) f(x)=6cos(x) 1. ### Trigonometry Express each of the following in terms of the cosine of another angle between 0 degrees and 180 degrees: a) cos 20 degrees b) cos 85 degrees c) cos 32 degrees d) cos 95 degrees e) cos 147 degrees f) cos 106 degrees My answer: a) - 2. ### Math 2nd question Express as a single sine or cosine function (note: this is using double angle formulas) g) 8sin^2x-4 I just don't get this one. I know it's got something to do with the 1-2sin^2x double angle formula. It's the opposite though? :S 3. ### Trig Given: cos u = 3/5; 0 < u < pi/2 cos v = 5/13; 3pi/2 < v < 2pi Find: sin (v + u) cos (v - u) tan (v + u) First compute or list the cosine and sine of both u and v. Then use the combination rules sin (v + u) = sin u cos v + cos v 4. ### Trigonometry Use the power-reducing formulas to rewrite the expression in terms of the first power of the cosine. [#1.] (sin^4x)(cos^4x) [#2.] (sin^4x)(cos^2x)
Are there examples of families of objects which are canonically isomorphic, but where diagrams of canonical isomorphisms don't commute? Specifically, are there "nice" (ie, not too obscure or contrived) examples of families of objects, where say for each objects $A,B,C$ in the family, the canonical isomorphism from $A\rightarrow C$ is not the composition of the canonical isomorphisms from $A\rightarrow B$ and $B\rightarrow C$? To illustrate, here's a non-example: The fundamental groups of a space with different base-points are all canonically isomorphic up to inner automorphisms, so given a space $X$, the fundamental groups $\pi_1(X,x)$ as $x$ varies over $X$, together with the conjugacy classes of these 'canonical isomorphisms', form a category. • Conjugacy classes of isomorphisms do not form a category in any really sensible way. – Todd Trimble Oct 23 '14 at 3:10 • I claim that if this ever happens to you then at least one of the isomorphisms in question shouldn't have been considered canonical. @Todd: really? Isn't the OP just working in the homotopy category of groups (equivalently, the homotopy category of Eilenberg-MacLane spaces)? – Qiaochu Yuan Oct 23 '14 at 4:37 • @QiaochuYuan I'm not seeing it. For a simple example of what I meant, take a groupoid with one object, say with automorphism group $S_4$. There are five conjugacy classes; what is the group structure on the set of conjugacy classes? – Todd Trimble Oct 23 '14 at 5:12 • @Todd: that isn't what I thought either you or the OP were referring to. Here is the category I am referring to: the objects are groups, and the morphisms $G \to H$ are conjugacy classes of homomorphisms $f : G \to H$, where two homomorphisms $f_1, f_2$ are conjugate if there exists some $h \in H$ such that $h f_1(g) h^{-1} = f_2(g)$ (the same as saying that they're naturally isomorphic as functors). This is a perfectly well-defined category which deserves to be called the homotopy category of groups; in particular it is equivalent to the homotopy category of Eilenberg-MacLane spaces. – Qiaochu Yuan Oct 23 '14 at 5:20 • Would you consider parallel transport along a prescribed curve in a Riemannian manifold with nontrivial holonomy to be an example? – Paul Siegel Oct 23 '14 at 5:39 Maybe this is kind of contrived, but any single object with a canonical automorphism that is not the identity is an example of this. In fact, any other example must in some sense include an example like this, if you consider the composition of $A \to B \to C$ with the inverse of $A \to C$ a "canonical" automorphism of $A$. Here is an example. Consider the family of rotated copies of the unit square in the plane $\mathbb{R}\times\mathbb{R}$, rotated by some angle about the origin. If one such copy is rotated only a little from a second, then it seems the canonical isometry to bring them into alignment would be to rotate the first through the smallest possible angle to bring it into alignment with the second. But of course, the composition of many such small angle rotations (or even just two of sufficient small size) would add up to a large angle, and so the composition of these canonical isometries is no longer canonical. (I suppose that one should say here that if the figures are rotated by exactly $45^\circ$, then either of the two minimal rotations should count as canonical.) • One should say here that if the figures are rotated by exactly $45^{\circ}$, then neither of the two minimal rotations should count as canonical! This is essentially the issue that comes up in writing down, say, branches of the logarithm; you need to pick a "branch cut." – Qiaochu Yuan Oct 23 '14 at 4:33 • In the first paragraph I thought that the unit square would be $[0,1]^2$ but then I realized that you had $[-1,1]^2$ in mind. – Dirk Oct 23 '14 at 8:42 • @Dirk Yes, I meant to rotate the square about its center. – Joel David Hamkins Oct 23 '14 at 14:35 Let $X=\mathbb{A}_{\mathbb{C}}^1\backslash\{0\}$. The first algebraic de Rham cohomology of $X$ is 1-dimensional, and has a canonical generator $\frac{dz}{z}$ (or any other 1-form with residue 1). The first Betti cohomology is also 1-dimensional, and has a canonical generator taking a 1-cycle to its winding number around 0. Over $\mathbb{C}$ there is a canonical isomorphism $H^1_{dR}(X)\to H^1_B(X,\mathbb{C})$, but it takes the generator to $2\pi i$ times the generator. If we wanted to say this in terms of isomorphisms, we could consider the three vector spaces $\mathbb{C},H^1_{dR}(X)$, and $H^1(X,\mathbb{C})$, and use the fact that an isomorphism from $\mathbb{C}$ to a 1-dimensional vector space is the same thing as the choice of a generator of that vector space. You don't really need to construct sophisticated mathematical examples to answer this. The problem is that the word canonical has at best a sociological meaning, not a mathematical one. Are $X\times Y$ and $Y\times X$ "canonically isomorphic" by the switching map? In some contexts it may be reasonable to say so, in order to avoid bureaucratic notation. However, once $X$ is non-trivial and $Y=X$ you have a non-trivial group. All that word canonical means is "I'm too lazy to write down the definition". • Canonical really means "I don't have to bother writing down the definition because there's only one possibility which doesn't involve making arbitrary choices". Thus $X \times Y$ is canonically isomorphic to $Y \times X$, but a finite dimensional vector space is not canonically isomorphic to its dual. – Paul Siegel Oct 23 '14 at 17:37
# Revision history [back] ### Rounding, using modular arithmetic, etc. in find_fit? I'm learning how to use find_fit and it works great for polynomials, but I'd like to include expressions that require integers. My variables are all integers, and my coefficients are expected to be relatively simple rationals (i.e. small denominators). In particular, I'd like to use a model like: model(x,y) = (ax+by+c)%(dx+ey+f) However, I get the expected error: TypeError: unsupported operand parent(s) for %: 'Symbolic Ring' and 'Symbolic Ring' If I try to convert (dx+ey+f) to an integer within the model using int(), ceiling(), etc. then it won't convert, since of course it's a symbolic expression. Is there a way to round within a symbolic expression? Or any suggestions for other workarounds? Thanks! 2 None vdelecroix 7147 ●16 ●78 ●155 http://www.labri.fr/pe... ### Rounding, using modular arithmetic, etc. in find_fit? I'm learning how to use find_fit and it works great for polynomials, but I'd like to include expressions that require integers. My variables are all integers, and my coefficients are expected to be relatively simple rationals (i.e. small denominators). In particular, I'd like to use a model like: model(x,y) = (ax+by+c)%(dx+ey+f)(a*x+b*y+c)%(d*x+e*y+f) However, I get the expected error: TypeError: unsupported operand parent(s) for %: 'Symbolic Ring' and 'Symbolic Ring'Ring' If I try to convert (dx+ey+f) to an integer within the model using int(), ceiling(), etc. then it won't convert, since of course it's a symbolic expression. Is there a way to round within a symbolic expression? Or any suggestions for other workarounds? Thanks! ### Rounding, using modular arithmetic, etc. in find_fit? I'm learning how to use find_fit and it works great for polynomials, but I'd like to include expressions that require integers. My variables are all integers, and my coefficients are expected to be relatively simple rationals (i.e. small denominators). In particular, I'd like to use a model like: model(x,y) = (a*x+b*y+c)%(d*x+e*y+f) However, I get the expected error: TypeError: unsupported operand parent(s) for %: 'Symbolic Ring' and 'Symbolic Ring' If I try to convert (dx+ey+f) to an integer within the model using int(), ceiling(), etc. then it won't convert, since of course it's a symbolic expression. Is there a way to round within a symbolic expression? Or any suggestions for other workarounds? Thanks! edit: I was able to get it to run, but it doesn't provide an acceptable solution. I imagine it's due to how find_fit works with user-defined functions but I don't know enough about the inner-workings to work that out. Here is my code: data = [(i,(i%2) ) for i in range(30)] var('a, b, c, d, x') def f(x, a, b, c, d): return int(a+b*x)%int(c*x+d) fit = find_fit(data, f, parameters = [a, b, c, d], variables = [x], solution_dict = True) print fit and the output is {d: 1.0, c: 1.0, b: 1.0, a: 1.0} I would like it to return, say, {d: 2.0, c: 0.0, b: 1.0, a: 0.0} Is there a way to make this work a bit better for my needs?
# Break's Over I’m going back to the office tomorrow after some leisurely travel, and besides sunning, wading in the surf looking for seashells and other real life niceties, I managed to do some “real” thinking and a fair bit of hacking. My earlier rant about hardware needs a little follow-up – which I’ll get to eventually – but it bears noting that I’ve thought a bit more along those lines and now believe we’re doing smartphones all wrong. I’m drafting a longer post on this topic, but given that we’re going to be sucked headlong into a whole week of Apple hype, there’s something I deem worth mentioning right away: The big, hulking behemoths that most smartphone vendors are pushing these days feel more like a misplaced, wasteful and anti-ergonomical quest for The One Device than truly usable tools. By trying to design them to do everything, we’re not actually achieving anything of consequence, and quite likely harming ourselves in the process. In a nutshell, I’m becoming (again) rather partial toward smaller, less pretentious devices1 that do less and free me to try to achieve more. Also along those lines, I’m rather skeptical of the current smartwatch craze. I gladly stopped wearing perennial self-winding, zero-maintenance wristwatches well over a decade ago and see no point in going back to wearing a ephemeral, bulky ego-boosting piece of uninformative junk that requires charging every evening2. ## Comfort Zone? What’s That? As it turned out, I spent very little time coding – but besides yesterday’s hack and other Python stuff I managed to dabble a bit in Clojure and Go to get an updated feel for either language. We have a little benchmark game going on at the office, so I sumbitted and tweaked entries for those (spoiler: Go wasn’t suited for that specific task). I’ll be writing about that as well (I’ve already made up my mind as to what role either language will play in my near future), but what bears noting is that I’ve dipped into both communities and gathered enough reading material to last me at least a couple of months, some of which I’ve been skimming in the evenings interspersed with my (as usual, pretty intensive) book reading. ## Less is More But speaking of skimming, one of the things I did to decrease cognitive overhead while keeping track of news was to add a simple (and extremely naïve) text summariser to rss2imap that tacks the three “most relevant” sentences atop each item like so: It’s fairly dumb (and mostly redundant for astroturfing feeds like Engadget that consist mostly of summaries), but I’ve found it to be a pretty good gauge for whether or not the article is worth reading at all, and very effective when reading news on a phone. I expect the next step will probably be making it a lot smarter, but that will have to wait3. ## Writing Rather to my surprise I managed to sneak in some writing amidst all the lounging and traipsing around the country, largely thanks to Editorial – it’s wonderful, and I’ve been having tremendous fun using it. Pretty much everything I’ve written since it came out has been written and revised on it, and I already have workflows for auto-linking text, fetching snippets of text from Evernote and automating image cropping and resizing. All I need right now is for the official Dropbox iOS app to support proper folder renaming and individual file copying (why it lacks either is totally beyond me – I can only move stuff around inside it). That and my RSI to go away – I still haven’t been able to shake it off completely, it seems, even on vacation. ## Getting Jobs Done I’ve also been tinkering with Python threading (since I can’t use gevent or multiprocessing on the iPad), and refactored the homegrown job queue I mentioned in passing earlier into something that looks and feels a lot like Celery, but for use inside a single process. The library code is on this branch of my experimental RSS aggregator, and you use it very much like you’d use Celery: @task(max_retries=3) def worker(item_list): for item in item_list: item_worker.delay(item) def item_worker(item): # do something frightfully complex with item return "Boy, that was tough, but I've processed %s" % item .delay() marshals the arguments and queues the task, giving you a Deferred object you can use to reference the actual result later (including any exceptions), and I’ve allowed for automatic task retries, priorities and pool size limits. You can even run multiple thread pools for different sets of workers, making it pretty easy to put together relatively complex data processing flows inside the same process – just add Queues and stir. ## What Didn’t Happen Well, as it turns out I didn’t rest as much as I wanted to, or even touched any of the projects I originally planned (for starters, I had intended to give yaki-tng a sizable boost) – nor did I finish my book stack, watch all the videos I have been queueing up, etc. But I did catch some pretty seashells, and the kids had plenty of fun. 1. To be honest, even the iPhone 5 seems too big these days. It’s been a bit of a pain from the start, but given the established trend toward larger screens I don’t think this will change (at least for my definition of “better”, or for common sense ergomomics). ↩︎ 2. Pebble is the only sensible hardware out there right now, but it would have to be slimmed down a tad for me to consider wearing one. ↩︎ 3. I really ought to have used NLTK and a decent tokenizer, but I was going for minimal code with maximum effect – doing it “right” would have meant a considerable amount of fiddling with NLTK corpuses and whatnot, which would be redundant when I have a strong feeling I’ll be re-doing it in another programming language soon. ↩︎
# Unsuitable multiple alignment In the following MWE, I tried to have double alignment inside alignat environment. I want (x_i^4)^2 be placed under k_i^2 in the second equation. However, it seems this MWE is not correct and the output is not suitable. \documentclass{article} \usepackage{amsmath} \begin{document} \begin{alignat}{2} &\dot{x}_i^1 = x_i^2 &&\nonumber\\ &\dot{x}_i^2 = x_i^3&&\left(k_i^2 + \frac{k_i^3}{\left(1 + \left(x_i^3\right)^2\right)^{\frac{1}{2}}}\left(x_i^2 - \frac{k_i^1}{\left(1 + \left(x_i^3\right)^2\right)^{\frac{1}{2}}}x_i^4\right)^2 + \frac{k_i^1} {\left(1 + x_3^2\right)^{\frac{3}{2}}}\right.\nonumber\\ &&\left.\left (x_i^4\right)^2\vphantom{\left(x_i^2 - \frac{k_i^1}{\left(1 + \left(x_i^3\right)^2\right)^{\frac{1}{2}}}x_i^4\right)^2}\right)\nonumber\\ &\dot{x}_i^3 = x_i^4 &&\nonumber\\ &\dot{x}_i^4 = v_i && \end{alignat} \end{document} • Why have you used \vphantom? – Sebastiano Jul 23 '18 at 8:29 • This 3rd row seems to be the end of the formula above. Is there any reason why it should be on a separate row? – Bernard Jul 23 '18 at 8:52 • @Sebastiano I wanted the parenthesis in the second line of equation 2 be as large as the parentheses in the first line of equation 2. – AbbasKaramali Jul 23 '18 at 8:55 • @ Bernard If I wanted to continoue with no line break, it would be cross the standard size of the considered page as I tested once. – AbbasKaramali Jul 23 '18 at 8:57 You do not say what layout you want, but perhaps \documentclass{article} \usepackage{amsmath} \begin{document} \begin{aligned} \dot{x}_i^1 &= x_i^2\\ \dot{x}_i^2 &= x_i^3\Biggl(k_i^2 + \frac{k_i^3}{(1 + (x_i^3)^2)^{\frac{1}{2}}}\Bigl(x_i^2 - \frac{k_i^1}{(1 + (x_i^3)^2)^{\frac{1}{2}}}x_i^4\Bigr)^2 + \frac{k_i^1} {(1 + x_3^2)^{\frac{3}{2}}}(x_i^4)^2\Biggr)\\ \dot{x}_i^3 &= x_i^4\\[\jot] \dot{x}_i^4 &= v_i \end{aligned} \end{document} Note that \left\right as well as making brackets that are often too tall adds extra horizontal space which is hardly ever wanted, especially in cases like this where you need to fit an expression that is already quite long. • 1 + x_3^2 copied from question, but should it be (1+x^3)^2 ? looks odd as it is. – David Carlisle Jul 23 '18 at 18:56 • Thank you for your answer and you are right about 1 + x_3^2. However, If the second equation be larger such that it must be broken into second row, what would be the solution then? – AbbasKaramali Jul 25 '18 at 7:32 • @AbbasKaramali you can put \\ anywhere that you want a break, probably \\ &\qquad (x... so the broken part is in the right hand side, and indented a bit – David Carlisle Jul 25 '18 at 8:30
# The Greener Pastures Phenomenon 1. Nov 30, 2005 ### bigplanet401 The "Greener Pastures" Phenomenon Good Morning, How many here are thinking about switching out of physics to persue a career in the financial services industry? Three students were graduated from my old research group: All of them decided to leave physics and work on Wall Street$$^*$$. Anyway, would you please post here if you are considering a similar transition at the end of your program? Please also mention if you are a B.S. or Ph.D. student. ------------------------------------------------------------ $$^* \small{\text{What's this planet coming to!?!?\cdots!?}}$$ 2. Dec 1, 2005 ### michealsmith im going to write a thread wat do u think of it..(dont be a physicist)...srry just want to get as much comments as possible 3. Dec 6, 2005 ### TMFKAN64 I'm not a physicist, I'm a computer scientist, but I did work on Wall Street for a couple of years quite a while ago. They *love* physicists there... especially if you can claim any connection with NASA whatsoever. "REAL rocket scientists!" they'd whisper... Over a decade later, I'm *still* not paid as much salary as I was then... it was a lot of fun, but it had a "through the looking glass" quality to it. Much of the time, I felt like saying "Greetings from the planet Earth, I come in peace..." to almost everyone I met there. The money was great... the feeling that I wasn't really doing anything productive, less so. I think two years was just about my limit.
# Two-column figures and tables disappear when changebar is used I want to insert a bar in the margin in IEEEtran, but when I use changebar package, two-column figures and tables disappear. It is my code: \documentclass[journal]{IEEEtran} \usepackage{lipsum} \usepackage{graphicx} \usepackage[color]{changebar} \usepackage{soul} \usepackage{xspace} \cbcolor{black} \sethlcolor{yellow} \newcommand{\edit}[1]{\cbstart\hl{#1}\cbend\xspace} \begin{document} \lipsum[1] \lipsum[1] \begin{figure*} \centering \includegraphics[width=15cm]{example-image}\\ \end{figure*} \lipsum[1] \lipsum[1] \end{document} I would really appreciate if you can provide any help. • Thanks I can confirm that same happens here, the float is dropped, Interesting.... – David Carlisle Nov 12 '15 at 22:12 changebar is an old package and it redefines several latex internals in particular the float handling. It turns out not to be fully compatible with the double float handling in 2015/01/01 latex release (which fixed several bugs in that area) this works, until changebar is updated: \RequirePackage[2014/01/01]{latexrelease} \documentclass[journal]{IEEEtran} \usepackage{lipsum} \usepackage{graphicx} \usepackage[color]{changebar} \usepackage{soul} \usepackage{xspace} \cbcolor{black} \sethlcolor{yellow} \newcommand{\edit}[1]{\cbstart\hl{#1}\cbend\xspace} \begin{document} \lipsum[1] \lipsum[1] \begin{figure*} \centering \includegraphics[width=15cm]{example-image} \end{figure*} \lipsum[1] \lipsum[1] \end{document} I was able to get around the same problem by copying changebar.sty from my distribution (TeX Live 2015, but the file has not been updated in ~10 years), and removing the float-related macros - \let\end@float\cb@end@float % remove from here \let\flt@float@end\float@end · · · \flt@float@dblend } % up until here Rename the new file to e.g. mychangebar.sty and \usepackage{mychangebar} it instead of the stock changebar. I did this after trying the accepted solution and noticing that it changed my layout completely due to the different behavior of floats in 2014. Of course, this means your changebars will not work in or across floats; a more thorough rewriting of these macros is needed for a true fix. It looks like changebar.sty replaces some latex kernel macros. The relevant macro in this case is \end@float. If you wish, you may try the following solution (works for me with twocol) \makeatletter \newcommand{\cboff}{\let\end@float\ltx@end@float} \newcommand{\cbon}{\let\end@float\cb@end@float} \makeatother 2. Add the commands \cboff, \cbon before and after \begin{figure*} and \end{figure*}, respectively. For example \cboff \begin{figure*} ... \end{figure*} \cbon It is a silly workaround, but it seems to work.
## thdrbird one year ago Let A = [-3,2,6 0,-1,6 0,0,-3] Find an invertible matrix P and a diagonal matrix D such that D = P^{-1}AP. find P and D I worked out the polynomial -7x^3-7x^2-15x-9 to give x= -1,-3,-3 My answer for P= [1,0,0 1,-3,-3 0,1,1] and D= -1,0,0 0,-3,0 0,0,-3 something's wrong! • This Question is Open 1. electrokid what type of decomposition are you doing here ? 2. Hoa I think you get something wrong at P, since eigenvalue -3 has 2 dimensions and they are not yours! recheck 3. electrokid you should get D=dia(-3,-1,-3) 4. Hoa yes, surely, because they are eigenvalues of A, what I need is checking the order this guy arrange in P to have corresponding of numbers, like (-3,-1,-3) or (-1,-3,-3) 5. thdrbird ok I will try! 6. Hoa We cannot guess, right, kid? if we mess the order up, although we know it is the Diagonal matrix of A but the P is in different order of eigenvectors, We will be crazy when checking what wrong is, I had that experience. It made me crazy 7. electrokid tr(A)=-3-1-3=-7 A11+A22+A33=3+9+3=15 |A|=-3(3)=-9 so, the characteristic polynomial would be$\Delta(t)=t^3+7t^2+15t+9$ 8. electrokid well, as long as the eigen vector columns are arranged in proper order, you should be good. 9. thdrbird still no luck... 10. thdrbird no 11. electrokid eigen vectors=? for t=-1: $-2x+2y+6z=0\\6z=0\\-3z=0\implies v_1=(1,1,0)^T$ 12. thdrbird yup 13. thdrbird and v2 and v3 I put at (0,-3,1) 14. electrokid for t= -3: $2y+6z=0\\ 2y+6z=0\\ v_{2,3}=(\alpha,-3\beta,\beta)^T$ 15. electrokid @thdrbird you cannot have them two identical 16. thdrbird oh,ok.. 17. electrokid you can use v2 when a=0,b=1 v3 when a=1,b=1 18. electrokid or v3 when a=1,b=0 ANY combination of alpha and beta that do not give the same vectors.. 19. thdrbird can you give me an example? 20. thdrbird got it! 21. thdrbird I forgot to set alpha! 22. thdrbird thanks @electrokid, great help! 23. thdrbird @Hoa as well! Thanks! 24. Hoa I did nothing guy!!!
# Deconvolution in Python in 2D Referring to this topic, I am interested in a deconvolution using Python. However, unlike the linked topic above, I want to deconvolve a 2D image. The scipy.signal.deconvolve function unfortunately does not support 2D deconvolution. This amounts to solving the following equation for f, when h is observed, n is the added noise and g is the convolution kernel, and all are 2d arrays: f * g + n = h My first question is therefore: How can I perform a 2D deconvolution in Python? The most obvious option would be, for a known function g, to transform to Fourier space and divide h by g. I have read however that this is merely good for illustration purposes and fairly inaccurate for science purposes. So, what would be the cleanest, most accurate way of performing the deconvolution? • Welcome to DSP.SE! I'd suggest implementing the 2D FFT-based approach, so you can see the problems and have something to compare other approaches with. This page has a python package that may do something a little better. YMMV. I've not used that particular package before. – Peter K. Sep 13 '15 at 14:51 • Votes or best answer validation are required – Laurent Duval Jul 28 at 11:58 High-quality deconvolution is still a quite open problem. Dividing $h$ by $g$ in the Fourier domain might cause noise explosion, if $g$ possesses a limited spectrum. The most accurate way depends:
### Home > ACC6 > Chapter 1 Unit 1 > Lesson CC1: 1.1.3 > Problem1-20 1-20. The value of a decimal becomes clearer when the place value is spoken or written as the number it names. For example, $0.1$ makes more sense if it is read as “one tenth” rather than “zero point one.” 1. Write the following numbers in words so that the place value can be identified. $0.4$ $1.3$ $0.56$ $2.008$ • Make sure to pay attention to the last digit place of the decimal value. Use the chart below for guidance. • $1000\text{’s}$ $100\text{’s}$ $10\text{’s}$ $1\text{’s}$ $.1\text{’s}$ $.01\text{’s}$ $.001\text{’s}$ $.0001\text{’s}$ $1$ $2$ $7$ $.$ $5$ $3$ $6$ $9$ • Look to see if there is a number to the left of the decimal; if so write it out. If there is no number to the left of the decimal, skip to step $3$. • Include 'and' for the decimal point if there is a number to the left of the decimal. • Write out the number to the right of the decimal up to the last digit. Write the place value of the last digit. $0.64 = \text{sixty-four hundredths}$ With these steps, try writing each number in words. 1. Now reverse your thinking. Write the decimals that go with the following words. thirty-five hundredths three and two tenths six hundredths • Refer to part (a) for help with decimal places. • Consider the significance of the place value identified by the last word. $\text{thirty-fivehundredths}=0.35$
"IQ" Score: 0. December 30, 2017. A car takes 4 hours to cover a distance, if it travels at a speed of 40 mph. August 29, 2017. In this conception, speed of processing, cognitive control, and working memory are the main functions underlying thought. Speed Distance Time – Formula Triangle. A handy way of remembering how to calculate one of speed, distance and time is to use one of the triangles below.. This exercise as well as the next two exercises can’t be done straight of the bat and require some team gestation time. They are Delicious, Computational, Chemical Creations. Work out together while learning about Loops, Conditional Statements, and Sequences. 44. This particular exercise only defines the problem. Below the text is a statement that could be inferred from the text. I really noticed the difference since I started doing online brain training! Although with each passing year, our wisdom grows, our brains also tend to slow down a little. If you'd like to save results and scores, you'll need a free account. Is one exercise better than another? Design Thinking has become an extremely popular approach to problem-solving—not only among designers, but across all areas of business.A Design Thinking workshop will spark innovation, foster a user-centric mindset, and get cross-functional teams working together towards a common goal. John Ingledew’s latest book, How to Have Great Ideas: A Guide to Creative Thinking, has a collection of over 50 strategies to get creatives back to being creative. If a pair matches, click the "Correct" button (the left arrow key on your keyboard). We don't know the answer to this question because almost all of the research has looked at walking. Another example that I share with the students is when I try to get into my car in a rainy day, I get wet despite the fact that I have an umbrella. If you're looking for ways to improve your memory, focus, concentration, or other cognitive skills, there are many brain exercises to try. I did find it useful in occasions where the team came together for one of a time retrospective (not a iteration end, or release retrospective). Edward. Unlike physical exercise which is hard to reach a high intensity in routine life, you can structure your day to ensure you are working mental muscles. FastCo Design picked out their favorites, and among the top, was this brilliant and hilarious exercise where you take all your junk mail and use it as your project materials. It is a mix of retrospective and futurospective. I use a combination of the two. 300 000+ Users. Queendom has thousands of personality tests and surveys. Whether at school, work or in business, you need your brain to function optimally to make it This is a forward thinking exercise, with an eye on the past. These could be quick high intensity full body workouts that will speed up my fat loss in less time. In one study, participants with a higher aerobic fitness (VO 2max) had lower cortisol response to a stressful task . Goal: The goal of this exercise is to help our brain remember its natural, relaxed speed of thought. Here is a retrospective exercise I have used a few times. Nice! The horizontal line means divide and the \times symbol means multiply.. We then cover up the one we want to find (represented by a red circle) and complete the calculation using the other two values from the triangle. Mar 23, 2020 - Indian Abacus provides a trusted site with complete guidance for learning abacus online to do fast mental arithmetic calculations with Kids Abacus Franchise at Low Investments The Costco Dance Game for Computational Thinking . Jan 27, 2020 - Indian Abacus provides a trusted site with complete guidance for learning abacus online to do fast mental arithmetic calculations with Kids Abacus Franchise at Low Investments Speed = Distance/time = 15/2 = 7.5 miles per hour. Exercise can also boost memory and thinking indirectly by improving mood and sleep, and by reducing stress and anxiety. That being said, aerobic exercises specifically have shown to improve the processing speed of our brain. 27. Design Thinking Toolkit, Activity 11 – Speed Boat Welcome to our series on Design Thinking methods and activities. Think about what we learned above as you consider the lateral thinking questions below. However, there’s one other strategy worth looking over. May 5, 2020 - Indian Abacus provides a trusted site with complete guidance for learning abacus online to do fast mental arithmetic calculations with Kids Abacus Franchise at Low Investments How to get your brain up to speed. It’s used to help people go deeper into themselves or to break open a “stuck” group. Modern life is hectic and you need to be on top of your game every minute to survive. 1:38. >>> 10 / 1.61 # convert kilometers to miles 6.211180124223602 Draw a Vase. • Impaired memory & thinking ability • Slows plasticity processes & accelerates brain aging (Lupien, et al., 2005) (Sapolsky, 2004) Physically trained individuals show lower physiological & psychological responses to stressors . 8. Problems in these areas frequently cause or contribute to cognitive impairment. This exercise tends to help simplify the thought process itself, aiding us in going from mental chaos to mental organization and helping with information processing. Computational Exercise Game. Type the words before they reach the bottom. Distance covered = 4*40 = 160 miles. Probably the best free brain games that I've tried . 3. For the second round have everyone design a way for people to enjoy flowers in their homes and post it on the board. Speed required to cover the same distance in 1.5 hours = 160/1.5 = 106.66 mph Primary Goal: To understand obstacles preventing achievement of a goal: When to Use: When you want to assess risks, obstacles, or blockers: Time … Apr 5, 2020 - Indian Abacus provides a trusted site with complete guidance for learning abacus online to do fast mental arithmetic calculations with Kids Abacus Franchise at Low Investments December 17, 2017. 27 000+ Hours trained. If you run a 10 kilometer race in 42 minutes 42 seconds, what is your average time per mile? It’s the research that was conducted by Daniel Kahneman on thinking slow and fast. Inference. Once they are finished have them post what they drew on the board. Pancakes with a side of Chemistry. (Hint: there are about 1.61 kilometers in a mile.) By combining these, you can extend your use of brainstorming to … I need to exercise more, preferably 5 days a week for an hour each time. Creativity Challenge #10 — Help Your Group Understand Innovation Thinking. Mask Exercise. Thought (or thinking) encompasses an "aim-oriented flow of ideas and associations that can lead to a reality-oriented conclusion". Critical thinking tests can have several sections or subtests that assess and measure a variety of aspects. What others say about us. In some classes, student teams were asked to find solutions to the speed bumps problem, choose one solution, build, test and demonstrate it. 5. Possibly, I could try 15 minute workouts, 3 days a week. Margo. If you’re looking for a guide on critical thinking, check out this course, or refer to this guide on critical thinking exercises. Queendom . Learn How To Think Better - Boost Profits - Duration: 1:51. Purpose: Thinking outside the box, encouraging wild ideas . Even the quickest of us could benefit from being quicker. Example 3. 1 400 000+ Games played. What is your average speed in miles per hour? DrKenHudson Recommended for you. You’ll find a full list of posts in this series at the end of the page. I named it the speed car – abyss ectivity. This exercise asks participants to draw one of the mask they wear. Exercise 1.2 . You are presented with a short text containing a set of facts you should consider as true. All of these strategies can help you with thinking fast. [8] Aerobic exercises are exercises like jogging, walking, biking, and swimming. Sequence a dance with your child, for a store you visit all the time. Reverse brainstorming* helps you to solve problems by combining brainstorming and reversal techniques. Our next activity has us seabound: Speed Boat. What should be its speed to cover the same distance in 1.5 hours? Lateral Thinking Questions. Watch our video. Cold meat and chocolate will get your mind fit ... and sex is handy too, according to a new book . Could you still lose weight exercising less frequently, if so how? You just need to ensure that you are working all mental muscles, not just one or two. See below for Chapter 1 exercises. The exercise consists of word/image pairs and simple mathematical equations or number sequences. And increasing brain speed gets even more important with age. 3.) So use our free brain games to improve your memory, attention, thinking speed, perception and logical reasoning! Solution. Denis Campbell, health correspondent. In this section, you are asked to draw conclusions from observed or supposed facts. Apr 13, 2020 - Indian Abacus provides a trusted site with complete guidance for learning abacus online to do fast mental arithmetic calculations with Kids Abacus Franchise at Low Investments For the first round of this exercise have everyone take out a piece of paper and ask them to draw a vase. The following test is meant to assess your mental speed - how quickly you can process information and make decisions based upon that information. Faster Thinking. Introduction to the Speed Thinking 2 Minute Coaching Series - Duration: 1:38. Jan 22, 2020 - Indian Abacus provides a trusted site with complete guidance for learning abacus online to do fast mental arithmetic calculations with Kids Abacus Franchise at Low Investments How You Can Strengthen Your Brain With Exercises. It also has an extensive collection of "brain tools"—including logic, verbal, spatial, and math puzzles; trivia quizzes; and aptitude tests—for you to exercise and test your brain. 44. Start brain game . This exercise have everyone take out a piece of paper and ask them to a. Looked at walking draw conclusions from observed or speed thinking exercises facts distance and time is to use one of research. Creativity Challenge # 10 — help your Group Understand Innovation thinking exercise have everyone design a way for people enjoy! = Distance/time = 15/2 = 7.5 miles per hour a store you visit all the.. The time bat and require some team gestation time mind fit... and sex is handy too according! I need to ensure that you are presented with a higher aerobic (! 2 minute Coaching series - Duration: 1:51 in these areas frequently cause or contribute to cognitive.. Takes 4 hours to cover the same distance in 1.5 hours this question because almost all of these strategies help. And require some team gestation time * 40 = 160 miles this exercise have everyone take out piece! Exercise is to help our brain remember its natural, relaxed speed of our.! A handy way of remembering how to Think Better - Boost Profits - Duration: 1:51 stuck ”.. Brain speed gets even more important with age Boost Profits - Duration:.... Learning about Loops, Conditional Statements, and working memory are the main functions underlying thought one... My fat loss in less time s used to help our brain remember its natural relaxed... Said, aerobic exercises are exercises like jogging, walking, biking, and swimming as. Asks participants to draw conclusions from observed or supposed facts asks participants to draw conclusions from observed or supposed.. Observed or supposed facts methods and activities be inferred from the text have everyone design way! Takes 4 hours to cover the same distance in 1.5 hours, cognitive,. Processing, cognitive control, and working memory are the main functions underlying thought list of posts in this at. Your average time per mile to survive this section, you 'll need a free account matches click. Pairs and simple mathematical equations or number sequences conducted by Daniel Kahneman on slow!... and sex is handy too, according to a new book Type the words they... Or to break open a “ stuck ” Group questions below Boost Profits - Duration: 1:51 year. To enjoy flowers in their homes and post it on the board above. 8 ] aerobic exercises specifically have shown to improve your memory, attention, thinking speed, distance and is... Game every minute to survive 1.61 kilometers in a mile. subtests assess. Workouts that will speed up my fat loss in less time just to! 10 kilometer race in 42 minutes 42 seconds, what is your average time per?... Daniel Kahneman on thinking slow and fast of paper and ask them draw. Mental muscles, not just one or two hectic and you need to exercise,! Started doing online brain training, cognitive control, and swimming try 15 minute workouts, days! Duration: 1:38 if a pair matches, click the Correct '' button ( the left key! To cognitive impairment, our speed thinking exercises also tend to slow down a little 3... Its natural, relaxed speed of our brain are presented with a short text a! A mile. can help you with thinking fast eye on the.... Exercise is to use one of speed, distance and time is to one. Enjoy flowers in their homes and post it on the board strategies help! Stressful task facts you should consider as true your mind fit... and sex is handy too according. = 7.5 miles per hour solve problems by combining brainstorming and reversal techniques perception and logical reasoning dance with child... Daniel Kahneman on thinking slow and fast and working memory are the main functions underlying thought if a matches! Observed or supposed facts if you run a 10 kilometer race in 42 minutes 42 seconds, what is average... Eye on the past of remembering how to calculate one of speed, and! # 10 — help your Group Understand Innovation thinking top of your game every minute to survive Activity –!, and sequences — help your Group Understand Innovation thinking online brain training number sequences to.... Stress and anxiety improving mood and sleep, and sequences car – abyss ectivity passing year, our brains tend.... and sex is handy too, according to a stressful task speed! There are about 1.61 kilometers in a mile. asked to draw conclusions from observed or supposed.... Noticed the difference since i started doing online brain training there are about 1.61 in. Thinking exercise, with an eye on the board done straight of the page sex is too! Goal: the goal of this exercise as well as speed thinking exercises next two exercises can t. Brainstorming and reversal techniques help your Group Understand Innovation thinking your Group Understand Innovation thinking next two can... Them to draw conclusions from observed or supposed facts and swimming way for people to enjoy in. Remembering how to Think Better - Boost Profits - Duration: 1:38 presented with a short text a. That being said, aerobic exercises are exercises like jogging, walking, biking, and by reducing stress anxiety. Words before they reach the bottom take out a piece of paper and ask them to draw from! – speed Boat preferably 5 days a week said, aerobic exercises specifically have shown to improve the processing of... Each passing year, our wisdom grows, our brains also tend to slow a... A car takes 4 hours to cover a distance, if it travels at a speed of processing cognitive. Cause or contribute to cognitive impairment to use one of the page brain training exercise more, 5... S used to help our brain remember its natural, relaxed speed of 40 mph speed thinking 2 Coaching... Distance, if so how piece of paper and ask them to a. Being quicker i could try 15 minute workouts, 3 days a week for people to enjoy flowers in homes! Our brains also tend to slow down a little of your game every minute survive... Triangles below us seabound: speed Boat Welcome to our series on design thinking and. Text is a forward thinking exercise, with an eye on the board help your Group Innovation. Of the mask they wear named it the speed thinking 2 minute Coaching series - Duration: 1:38 next. Being said, aerobic exercises specifically have shown to improve your memory, attention, thinking speed distance. Think Better - Boost Profits - Duration: 1:51 questions below answer to this because... And anxiety used to help people go deeper into themselves or to break open a stuck. Welcome to our series on design thinking methods and activities keyboard ) or two worth over! Quick high intensity full body workouts that will speed up my fat loss in time! Less time 1.61 # convert kilometers to miles 6.211180124223602 Type the words before they the. All of these strategies can help you with thinking fast i named the. Of this exercise have everyone take out a piece of paper and them. However, there ’ s used to help our brain scores, you are asked to conclusions... Left arrow key on your keyboard ) difference since i started doing brain... Used to help people go deeper into themselves or to break open a “ stuck ” Group ensure that are... Muscles, not just one or two seconds, what is your average speed in per... New book above as you consider the lateral thinking questions below for an hour each.! As the next two exercises can ’ t be done straight of the research looked! Indirectly by improving mood and sleep, and swimming you still lose weight less! Brainstorming and reversal techniques workouts that will speed up my fat loss in less time answer. In these areas frequently cause or contribute to cognitive impairment mask they wear areas frequently cause or to. If it travels at a speed of thought has us seabound: speed Boat Welcome to series... Flowers in their homes and post it on the board at walking their homes and post it on board. Down a little Boost Profits - Duration: 1:38 mental muscles, not one... Consider as true of aspects an hour each time solve problems by combining brainstorming and reversal techniques body. For the first round of this exercise speed thinking exercises participants to draw a vase t be straight. 2Max ) had lower cortisol response to a stressful task could benefit from being quicker you. To this question because almost all of these strategies can help you with thinking fast: Boat! Workouts, 3 days a week more, preferably 5 days a week for hour... Questions below remembering how to calculate one of the research that was conducted by Daniel Kahneman on thinking and! ’ t be done straight of the triangles below with a short text containing a of. Indirectly by improving mood and sleep, and working memory are the main functions underlying.! Us seabound: speed Boat Welcome to our series on design thinking methods activities... 10 — help your Group Understand Innovation thinking, preferably 5 days a week equations or number sequences equations..., distance and time is to use one of speed, perception and logical reasoning they drew on the.. Every minute to survive to help our brain a set of facts should! While learning about Loops, Conditional Statements, and swimming distance in 1.5 hours need to that. Miles per hour reducing stress and anxiety the research has looked at walking methods activities. Catholic Conversion Stories Books, Vw New Beetle Parts, Chipotle In Adobo Sauce, Best Hollow Belly Swimbaits, Prunes En Español, Apple Upside Down Cake Simply Recipes,
Article | Published: # A solid-state source of strongly entangled photon pairs with high brightness and indistinguishability ## Abstract The generation of high-quality entangled photon pairs has been a long-sought goal in modern quantum communication and computation. So far, the most widely used entangled photon pairs have been generated from spontaneous parametric down-conversion (SPDC), a process that is intrinsically probabilistic and thus relegated to a regime of low rates of pair generation. In contrast, semiconductor quantum dots can generate triggered entangled photon pairs through a cascaded radiative decay process and do not suffer from any fundamental trade-off between source brightness and multi-pair generation. However, a source featuring simultaneously high photon extraction efficiency, high degree of entanglement fidelity and photon indistinguishability has been lacking. Here, we present an entangled photon pair source with high brightness and indistinguishability by deterministically embedding GaAs quantum dots in broadband photonic nanostructures that enable Purcell-enhanced emission. Our source produces entangled photon pairs with a pair collection probability of up to 0.65(4) (single-photon extraction efficiency of 0.85(3)), entanglement fidelity of 0.88(2), and indistinguishabilities of 0.901(3) and 0.903(3) (brackets indicate uncertainty on last digit). This immediately creates opportunities for advancing quantum photonic technologies. ## Access optionsAccess options from\$8.99 All prices are NET prices. ## Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. Journal peer review information: Nature Nanotechnology thanks Weibo Gao, Alastair Sinclair and the other anonymous reviewer(s) for their contribution to the peer review of this work. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. 1. Einstein, A., Podolsky, B. & Rosen, N. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47, 777 (1935). 2. 2. Giustina, M. et al. Significant-loophole-free test of Bell’s theorem with entangled photons. Phys. Rev. Lett. 115, 250401 (2015). 3. 3. Shalm, L. K. et al. Strong loophole-free test of local realism. Phys. Rev. Lett. 115, 250402 (2015). 4. 4. Bouwmeester, D., Ekert, A. K. & Zeilinger, A. The Physics of Quantum Information (Springer, 2000). 5. 5. Kimble, H. J. The quantum internet. Nature 453, 1023–1030 (2008). 6. 6. Simon, C. et al. Quantum repeaters with photon pair sources and multimode memories. Phys. Rev. Lett. 98, 190503 (2007). 7. 7. Acin, A. et al. Device-independent security of quantum cryptography against collective attacks. Phys. Rev. Lett. 98, 230501 (2007). 8. 8. Kwiat, P. G. et al. New high-intensity source of polarization-entangled photon pairs. Phys. Rev. Lett. 75, 4337 (1995). 9. 9. Scarani, V. et al. Four-photon correction in two-photon Bell experiments. Eur. Phys. J. D 32, 129–138 (2005). 10. 10. Wang, X. L. et al. Experimental ten-photon entanglement. Phys. Rev. Lett. 117, 210502 (2016). 11. 11. Pan, J. W. et al. Multiphoton entanglement and interferometry. Rev. Mod. Phys. 2012, 072501 (2012). 12. 12. Benson, O., Santori, C., Pelton, M. & Yamamoto, Y. Regulated and entangled photons from a single quantum dot. Phys. Rev. Lett. 84, 2513–2516 (2000). 13. 13. Young, R. J. et al. Improved fidelity of triggered entangled photons from single quantum dots. New J. Phys. 8, 29 (2006). 14. 14. Akopian, N. et al. Entangled photon pairs from semiconductor quantum dots. Phys. Rev. Lett. 96, 130501 (2006). 15. 15. Muller, A., Fang, W., Lawall, J. & Solomon, G. S. Creating polarization-entangled photon pairs from a semiconductor quantum dot using the optical Stark effect. Phys. Rev. Lett. 103, 217402 (2009). 16. 16. Müller, M., Bounouar, S., Jöns, K. D., Gläss, M. & Michler, P. On-demand generation of indistinguishable polarization-entangled photon pairs. Nat. Photon. 8, 224–228 (2014). 17. 17. Chung, T. H. et al. Selective carrier injection into patterned arrays of pyramidal quantum dots for entangled photon light-emitting diodes. Nat. Photon. 10, 782–787 (2016). 18. 18. Orieux, A., Versteegh, M. A. M., Jöns, K. D. & Ducci, S. Semiconductor devices for entangled photon pair generation: a review. Rep. Prog. Phys. 80, 076001 (2017). 19. 19. Huo, Y. H., Rastelli, A. & Schmidt, O. G. Ultra-small excitonic fine structure splitting in highly symmetric quantum dots on GaAs (001) substrate. Appl. Phys. Lett. 102, 152105 (2013). 20. 20. Keil, R. et al. Solid-state ensemble of highly entangled photon sources at rubidium atomic transitions. Nat. Commun. 10, 15501 (2017). 21. 21. Huber, D. et al. Highly indistinguishable and strongly entangled photons from symmetric GaAs quantum dots. Nat. Commun. 10, 15506 (2017). 22. 22. Ding, X. et al. On-demand single photons with high extraction efficiency and near-unity indistinguishability from a resonantly driven quantum dot in a micropillar. Phys. Rev. Lett. 116, 020401 (2016). 23. 23. Somaschi, N. et al. Near-optimal single-photon sources in the solid state. Nat. Photon. 10, 340–345 (2016). 24. 24. He, Y.-M. et al. Deterministic implementation of a bright, on-demand single-photon source with near-unity indistinguishability via quantum dot imaging. Optica 4, 802–808 (2017). 25. 25. Claudon, J. et al. A highly efficient single-photon source based on a quantum dot in a photonic nanowire. Nat. Photon. 4, 174–177 (2010). 26. 26. Reimer, M. E. et al. Bright single-photon sources in bottom-up tailored nanowires. Nat. Commun. 3, 737 (2012). 27. 27. Laucht, A. et al. A waveguide-coupled on-chip single-photon source. Phys. Rev. X 2, 011014 (2012). 28. 28. Arcari, M. et al. Near-unity coupling efficiency of a quantum emitter to a photonic crystal waveguide. Phys. Rev. Lett. 113, 093603 (2014). 29. 29. Gschrey, M. et al. Highly indistinguishable photons from deterministic quantum-dot microlenses utilizing three-dimensional in situ electron-beam lithography. Nat. Commun. 6, 7662 (2015). 30. 30. Davanco, M., Rakher, M. T., Schuh, D., Badolato, A. & Srinivasan, K. A circular dielectric grating for vertical extraction of single quantum dot emission. Appl. Phys. Lett. 99, 041102 (2011). 31. 31. Sapienza, L., Davanço, M., Badolato, A. & Srinivasan, K. Nanoscale optical positioning of single quantum dots for bright and pure single-photon emission. Nat. Commun. 6, 7833 (2015). 32. 32. Dousse, A. et al. Ultrabright source of entangled photon pairs. Nature 466, 217–220 (2010). 33. 33. Jöns, K. D. et al. Bright nanoscale source of deterministic entangled photon pairs violating Bell’s inequality. Sci. Rep. 7, 1700 (2017). 34. 34. Chen, Y., Zopf, M., Keil, R., Ding, F. & Schmidt, O. G. Highly-efficient extraction of entangled photons from quantum dots using a broadband optical antenna. Nat. Commun. 9, 2994 (2018). 35. 35. Liu, J. et al. Cryogenic photoluminescence imaging system for nanoscale positioning of single quantum emitters. Rev. Sci. Instrum. 88, 023116 (2017). 36. 36. Chen, Y. et al. Wavelength-tunable entangled photons from silicon-integrated III–V quantum dots. Nat. Commun. 7, 10387 (2016). 37. 37. Trotta, R. et al. Wavelength-tunable sources of entangled photons interfaced with atomic vapors. Nat. Commun. 7, 10375 (2016). 38. 38. Huber, D. et al. Strain-tunable GaAs quantum dot: an on-demand source of nearly-maximally entangled photon pairs. Phys. Rev. Lett. 121, 033902 (2018). 39. 39. Jayakumar, H. et al. Time-bin entangled photons from a quantum dot. Nat. Commun. 5, 4251 (2014). 40. 40. Stufler, S. et al. ‘Two-photon Rabi oscillations in a single InxGas1−xA/GaAs quantum dot’. Phys. Rev. B 73, 125304 (2006). 41. 41. Kaniber, M. et al. Efficient and selective cavity-resonant excitation for single photon generation. New J. Phys. 11, 013031 (2009). 42. 42. Stevenson, R. M. et al. Evolution of entanglement between distinguishable light states. Phys. Rev. Lett. 101, 170501 (2008). 43. 43. Ward, M. M. et al. Coherent dynamics of a telecom-wavelength entangled photon source. Nat. Commun. 5, 3316 (2014). 44. 44. Hudson, A. J. et al. Coherence of an entangled exciton–photon state. Phys. Rev. Lett. 99, 266802 (2007). 45. 45. Duan, L. M., Lukin, M. D., Cirac, J. I. & Zoller, P. Long-distance quantum communication with atomic ensembles and linear optics. Nature 414, 413–418 (2001). 46. 46. Santori, C., Fattal, D., Vuckovic, J., Solomon, G. S. & Yamamoto, Y. Indistinguishable photons from a single-photon device. Nature 419, 594 (2002). 47. 47. Liu, J. et al. Single self-assembled InAs/GaAs quantum dots in photonic nanostructures: the role of nanofabrication. Phys. Rev. Appl. 9, 064019 (2018). 48. 48. Kaldewey, T. et al. Coherent and robust high-fidelity generation of a biexciton in a quantum dot by rapid adiabatic passage. Phys. Rev. B 95, 161302(R) (2017). 49. 49. Troiani, F. Entanglement swapping with energy-polarization-entangled photons from quantum dot cascade decay. Phys. Rev. B 90, 245419 (2014). 50. 50. Iles-Smith, J., McCutcheon, D. P. S., Nazir, A. & Mork, J. Phonon scattering inhibits simultaneous near-unity efficiency and indistinguishability in semiconductor single-photon sources. Nat. Photon. 11, 521–526 (2017). 51. 51. Pathak, P. K. & Agarwal, G. S. Quantum random walk of two photons in separable and entangled states. Phys. Rev. A 75, 032351 (2007). 52. 52. Prilmüller, M. et al. Hyperentanglement of photons emitted by a quantum dot. Phys. Rev. Lett. 121, 110503 (2018). 53. 53. Olbricha, F. et al. Polarization-entangled photons from an InGaAs-based quantum dot emitting in the telecom C-band. Appl. Phys. Lett. 111, 133106 (2017). 54. 54. Huwer, J. et al. Quantum-dot-based telecommunication-wavelength quantum relay. Phys. Rev. Appl. 8, 024007 (2017). 55. 55. Li, Q., Davanço, M. & Srinivasan, K. Efficient and low-noise single-photon-level frequency conversion interfaces using silicon nanophotonics. Nat. Photon. 10, 406–414 (2016). 56. 56. Gao, W. B. et al. Quantum teleportation from a propagating photon to a solid-state spin qubit. Nat. Commun. 11, 2744 (2013). 57. 57. Wang, H. et al. On-demand semiconductor source of entangled photons which simultaneously has high fidelity, efficiency, and indistinguishability. Phys. Rev. Lett. 122, 113602 (2019). ## Acknowledgements We acknowledge R. Trotta, X. Yuan, H. Huang, M. Reindl, D. Huber and Y. Huo for discussions. We are grateful for financial support from the National Key R&D Program of China (2016YFA0301300, 2018YFA0306100), the National Natural Science Foundations of China (91750207, 11674402, 11761141015, 11761131001, 11874437, 11704424), Guangzhou Science and Technology project (201805010004), the Natural Science Foundations of Guangdong (2018B030311027, 2017A030310004, 2016A030310216, 2016A030312012), the national supercomputer center in Guangzhou, the Austrian Science Fund (FWF): P29603, and the LIT Secure and Correct Systems Lab funded by the State of Upper Austria. ## Author information R.B.S., J. Li and X.W. conceived the nanostructure and its fabrication strategy. J. Liu proposed the entanglement generation and designed the experiments. R.S and K.S. contributed to the structure simulations. S.F.C.d.S. and Y.Y. grew the QD wafers. R.S., B.Y., J. Liu and J. Li fabricated the devices. Y.W., RS., B.Y. and J. Liu characterized the devices. J.I.-S. performed the indistinguishability calculation. J. Liu, Y.W. and R.S. analysed the data. J. Liu wrote the manuscript with inputs from all authors. J. Liu, A.R. and X.W. supervised the project. ### Competing interests The authors declare no competing interests. Correspondence to Armando Rastelli or Juntao Li or Xuehua Wang. ## Supplementary information 1. ### Supplementary Information Supplementary text and Supplementary Figures 1–9 ## Rights and permissions Reprints and Permissions • #### DOI https://doi.org/10.1038/s41565-019-0435-9
29 questions linked to/from Are all scattering states un-normalizable? 66 views ### What is the scope of the term 'normalisation'? When we 'normalise' the wavefunction we put in an appropriate coefficient so that the wavefunction can act as a probability distribution. However, when I considered the eignefunctions of the momentum ... 141 views ### Eigenfunctions of observables Are eigenfunctions of observables solutions to the time-dependent Schrödinger equation? Or is this not necessarily the case? From what I had been reading they are not necessarily solutions to ... 214 views ### Probability of finding an energy state of a non-normalisable wave-function Suppose, say, I have the following wave function It represents the wave function of a free particle. I would want to calculate the probability of finding the particle with energy ħk and energy 2ħk. ... 476 views I am trying to do problem 2.4 in the book "Quantum field theory for the gifted amateur". I have a math background but little training in physics. I am asked to use the identity $$\langle x \mid p \... 1answer 224 views ### Bras and kets of continuous spectrum Does anyone know why in quantum mechanics the second statement is always true? "When the spectrum of an operator A has a continuous part, we associate a bra \langle a| and a ket |a \rangle ... 1answer 326 views ### Protocol for solving time independent Schrodinger equation Just a short question about the protocol for solving the time-independent Schrodinger equation for different potentials and the reasons for accepting and rejecting solutions. Take for example the ... 1answer 215 views ### Is it possible to decompose into eigenstates of Dirac Hamiltonian? If we have the Hilbert space \mathcal H = L^2(\mathbb R^3, \mathbb C^4) and a Hamiltonian:$$H=\gamma^i p_i + m \gamma^0$$where \gamma^i are matrices and \{\gamma^i,\gamma^j\}=\delta^{ij}. A ... 2answers 432 views ### Can a normalizable function *always* be decompose into the discrete Hydrogen spectrum? This question has been bothering me for a while now: can one reconstruct an arbitrary (normalizable) function \phi(\mathbf r) in \mathbb R^3, with only the (discrete) set of Hydrogen ... 2answers 920 views ### Must bounded operators have normalisable eigenfunctions and discrete eigenvalues? When we have bound states, to my knowledge, we have states that are normalisable and a discrete energy spectrum. However, in the case of scattering states that have a continuous energy spectrum, the ... 3answers 3k views ### What really is a Dirac delta function? Yesterday a friend asked me what a Dirac delta function really is. I tried to explain it but eventually confused myself. It seems that a Dirac delta is defined as a function that satisfies these ... 1answer 838 views ### Continuous spectrum of hydrogen atom I wonder if there is a nice treatment of the continuous spectrum of hydrogen atom in the physics literature--showing how the spectrum decomposition looks and how to derive it. 2answers 1k views ### Does a free electron, one that's not either in an atom or a wire, have an associated wave-function? Would a free electron, one that's not either in an atom or moving through a wire, but moving through empty space on its own, have an associated wave-function? Or, is an electron described as a wave-... 2answers 1k views ### How to guarantee square integrable solutions to time-independent Schrödinger's equation? Given the time-independent Schrödinger’s equation in one dimension$$H\psi = E\psi what restrictions can we place on V(x) (inside the hamiltonian) and E to guarantee that the solutions won't have ... I would like first to describe a strange case that I encountered. $\ \ -$ I solved the Schrodinger equation with a potential barrier (a potential well limited by a finite height wall which decrease ...
# Capacity (galvanic cell) The capacity of a battery or an accumulator - hereinafter referred to simply as "battery" - indicates the amount of electrical charge that a battery can deliver or store according to the manufacturer's specifications. It is given: ${\ displaystyle Q}$ or • as reserve capacity  C r, n in minutes (min); then, strictly speaking, it is the reciprocal of the C-factor, see u . The capacity of a battery in the above sense must not be confused with the electrical capacity of a capacitor (batteries also have an electrical capacity), which is  specified in ampere-seconds per volt (As / V) or the unit Farad (F). ## General The capacity of a battery that can be drawn depends on the course of discharge, i.e. the discharge current, the end-of- discharge voltage (the voltage at which the discharge is ended) and the degree of discharge . There are different types of discharge: Depending on the course of discharge, the accumulator has a different capacity. In a meaningful specification of the nominal capacity, both the discharge current and the end-of-discharge voltage must therefore be given. In general, the available capacity of a battery decreases with increasing discharge current. This effect is described by the Peukert equation . One of the reasons for this is the increasing voltage drop across the internal resistance of the battery, which causes the output voltage to drop accordingly so that the end-of-discharge voltage is reached earlier. In addition to the internal resistance, the limited speed of the electrochemical processes and charge transport processes in the battery is also responsible for its decreasing capacity with increased discharge current. However, if the current consumption is reduced to the level of a normal discharge after an initial rapid discharge, practically the same amount of current can be withdrawn as with a normal discharge from the beginning. In the case of accumulators, such an operation, in which the current consumption is reduced as the battery charge drops, can only be implemented in a few cases. To quantify the useful life of accumulators, when charging z. T. type-specific charging method used. The charging process itself is controlled by a charge controller. The way in which several batteries are interconnected has an impact on the maximum amount of charge that can be drawn (capacity) and the electrical voltage available : when connected in series, the voltages of the individual batteries add up , whereas when connected in parallel, the amounts of charge add up . ## Acceptance during use In the case of accumulators, the capacity decreases over time due to chemical reactions ( aging ) even when used properly . This is also known as degradation . On the one hand, the charging and discharging processes on the electrodes lead to (only partially reversible) electrochemical processes that prevent full charging or discharging: On the other hand, usage and service life usually have conflicting requirements. While the load capacity at higher temperatures due to better electron mobility increases, this leads to the higher reactivity of the electrode materials also decreasing life and capacity. According to the wear level , the wear and tear of the battery, the charging capacity and thus the energy density decrease over the course of use . The service life of accumulators indicates the number of charge-discharge cycles after which the accumulator only has a certain charge capacity (generally 80% of the nominal capacity). The standards DIN 43539 Part 5 and IEC 896 Part 2 specify various methods and guide values for this. The no-load voltage can serve as an indication of the remaining quality of an accumulator, which also decreases in the course of its service life when the accumulator is fully charged. ## C factor ### general description The C-factor ( english C factor ), and C-rate ( English rate C ), is a colloquial quantification for charging and discharging of batteries. For example, it can be used to specify the maximum permissible charge and discharge currents, depending on the nominal capacity. In the opposite case, the factor is also used to specify the battery capacity as a function of the discharge current. The C-factor is defined as the quotient of this current and the capacity of the accumulator: ${\ displaystyle C _ {\ text {N}}}$ ${\ displaystyle C = {\ frac {I _ {\ text {max}}} {C _ {\ text {N}}}}}$ The dimension of the C-factor is: ${\ displaystyle [C] = \ mathrm {{\ frac {Electricity} {Charge}} = {\ frac {1} {Time}}}}$ The associated SI unit is therefore s −1 . In practice, however, is indicated almost exclusively in . ${\ displaystyle {\ tfrac {\ mathrm {A}} {\ mathrm {Ah}}} = \ mathrm {h} ^ {- 1}}$ The C-factor indicates the reciprocal value of the time for which a battery of the stated capacity can be discharged with the maximum discharge current. The capacity of an accumulator is often much lower with very high current consumption (e.g. starter) than with low currents (e.g. electrical clock). For the discharge current-dependent capacity (see also Peukert equation ), these time-dependent specifications have become established. The capacity indicates the amount of charge available if the battery is discharged within 20 hours with a steady discharge current up to the final discharge voltage. For example, when calculating the maximum flight time of a drone, the C 0.5 or C 1 capacity of a battery delivers significantly more realistic values ​​than the C 20 value. ${\ displaystyle C_ {20}}$ If you multiply the resulting nominal capacity (also referred to as K 20 in this context ) by the nominal voltage ( unit of measurement : volt ), the result is the energy content (unit of measurement:  watt-hour ): ${\ displaystyle U _ {\ text {nenn}}}$ ${\ displaystyle E}$ ${\ displaystyle E = C_ {20} \ cdot U _ {\ text {nenn}}}$ ### Examples The usual but formally incorrect notation "The maximum discharge current is 15 C." means: With a capacity ${\ displaystyle C _ {\ text {N}} = 3 \, \ mathrm {Ah}}$ and: ${\ displaystyle I _ {\ text {max}} = C \ cdot C _ {\ text {N}} = 15 \, \ mathrm {h ^ {- 1}} \ cdot 3 \, \ mathrm {Ah} = 45 \ , \ mathrm {A},}$ the maximum discharge current of the battery is 45 A. Accordingly, the specification "charging current 2 C" for this cell means that it should be charged with a maximum of 6 A. ## Individual evidence 1. DIN EN 60095-1 Lead starter batteries - Part 1: General requirements and tests (Jan 1995) 2. Konrad Reif: Batteries, vehicle electrical systems and networking . Vieweg + Teubner, 2010, ISBN 978-3-8348-1310-7 , pp. 57 .
Journal of Experimental and Theoretical Physics HOME | SEARCH | AUTHORS | HELP Journal Issues Golden Pages About This journal Aims and Scope Editorial Board Manuscript Submission Guidelines for Authors Manuscript Status Contacts ZhETF, Vol. 138, No. 3, p. 425 (September 2010) (English translation - JETP, Vol. 111, No. 3, p. 375, September 2010 available online at www.springer.com ) FREE-FIELD REPRESENTATIONS AND GEOMETRY OF SOME GEPNER MODELS Received: February 6, 2010 DJVU (108.2K) PDF (243K) The geometry of the kK Gepner model, where k+2=2K, is investigated by a free-field representation known as the ``bcβ γ '' system. Using this representation, we directly show that the internal sector of the model is given by Landau-Ginzburg orbifold. Then we consider the deformation of the orbifold by a marginal antichiral-chiral operator. Analyzing the chiral de Rham complex structure in the holomorphic sector, we show that it coincides with chiral de Rham complex of some toric manifold, where toric data are given by certain fermionic screening currents. This allows relating the Gepner model deformed by the marginal operator to a σ-model on the CY manifold realized as a double cover of with ramification along a certain submanifold. Report problems
# How do we know that, if only $\rho_A$ evolves, then the evolution of $\rho_{AB}$ is given by $(\mathcal{L}_A \otimes 1)(\rho_{AB})$? I am currently learning about quantum maps, ie maps that transform a density matrix into another one. Assume we are in the Hilbert space: $$H_A \otimes H_B$$. I call the quantum map on the density matrix $$\rho_A$$ living in $$H_A$$: $$\mathcal{L}_A$$. The postulates are the following : • "convex" linearity $$\mathcal{L}_A(p\rho^1_A+q\rho^2_A)=p\mathcal{L}_A (\rho^1_A)+q\mathcal{L}_A(\rho^2_A)$$ where $$p+q=1$$ • Conservation of hermiticity $$\mathcal{L}_A(\rho_A)^{\dagger}=\mathcal{L}_A(\rho_A)$$ • Conservation of trace $$Tr(\mathcal{L}_A(\rho_A))=1$$ • Positivity $$\forall |\phi^{A}\rangle : \langle \phi^{A} | \mathcal{L}_A(\rho_A) | \phi^{A} \rangle \geq 0$$ Those postulates ensure us that $$\mathcal{L}_A(\rho_A)$$ is a density matrix of $$H_A$$. But there is an extra postulate that is : $$\forall \rho_{AB}$$ density matrix of $$H_A \otimes H_B$$, we have : $$\forall |\phi^{AB}\rangle : \langle \phi^{AB} | (\mathcal{L}_A \otimes 1)\rho_{AB} | \phi^{AB} \rangle \geq 0$$ I understand this postulate as : If I imagine a transformation of $$\rho_A=Tr_B(\rho_{AB})$$ that does'nt affect $$\rho_B = Tr_A(\rho_{AB})$$, then the evolution of $$\rho_{AB}$$ is written $$(\mathcal{L}_A \otimes 1)(\rho_{AB})$$, and we want this last matrix to be positive (to keep having a density matrix). My question is: How do we know that the evolution of $$\rho_{AB}$$ will be given by $$(\mathcal{L}_A \otimes 1)(\rho_{AB})$$ under the assumption that only $$\rho_A$$ evolve? Indeed, for this we would need to have : We have : $$\rho_{AB}$$ evolve, thus : $$\rho_{AB}' = \mathcal{L}(\rho_{AB})$$ The constraint are : • $$\rho_A$$ evolve under $$\mathcal{L}_A$$ : $$\rho_A'=\mathcal{L}_A(\rho_A)$$ • $$\rho_B$$ doesn't evolve : $$\rho_B'=\rho_B$$ How from these two last constraints we can prove that actually : $$\mathcal{L}=\mathcal{L}_A \otimes 1$$ For me, this is not at all obvious. : I tried to look at the trick proposed by Luzanne in the comment but I don't find a solution. So I fix $$\mathcal{L}_A$$ and I wonder what will be $$\mathcal{L}$$. I know that for density matrices in the form $$\rho_{AB}=\rho_A \otimes \rho_B$$, I have : $$\mathcal{L}(\rho_{AB})=\mathcal{L}_A(\rho_A) \otimes \rho_B$$ I try to use those particular cases to show that $$\mathcal{L}=\mathcal{L}_A \otimes 1$$. $$\rho_{AB}=\sum_{ijkl} a_{ij} b_{kl} |u_i\rangle \langle u_j| \otimes |v_k\rangle \langle v_l|$$ Thus : $$\rho_{AB}=\sum_{ijkl} a_{ij} b_{kl} \mathcal{L}(|u_i\rangle \langle u_j| \otimes |v_k\rangle \langle v_l|)= \rho_{AB}=\sum_{ijkl} a_{ij} b_{kl} (\mathcal{L_A} \otimes 1)(|u_i\rangle \langle u_j| \otimes |v_k\rangle \langle v_l|)$$ To show the two linear maps are equal I have to check on every vector of the basis, but I must have $$\rho_A$$ and $$\rho_B$$ density matrices here. So by taking $$\rho_A=|u_i \rangle \langle u_i |$$ and $$\rho_B=|v_k \rangle \langle u_k |$$, I can have : $$\mathcal{L}(|u_i v_k \rangle \langle u_i v_k|)= (\mathcal{L}_A \otimes 1)(|u_i v_k \rangle \langle u_i v_k|)$$ But I don't see how to prove it as well for the non diagonal elements of the basis which is also necessary here... • How is $\mathcal{L}_A \otimes \mathbb{1}$ exactly defined? I know how to build the tensor product of linear maps, but with $\mathcal{L}_A$ a priori only defined on density matrices, I'm not sure how to compute $(\mathcal{L}_A \otimes \mathbb{1})(\rho_{AB})$ for general $\rho_{AB}$'s. – Luzanne Oct 15 '18 at 20:53 • @Luzanne actually we can extend by complex linearity the action of $\mathcal{L}_A$ to any matrix (not only density matrix). It is explained at page 150 of "From Classical to Quantum Shannon Theory" from Mark M. Wilde : arxiv.org/abs/1106.1445 – StarBucK Oct 15 '18 at 21:33 • @Luzanne Assuming $\mathcal{L}=\mathcal{L}_A \otimes 1$, I agree that I will have $\rho'_B=\rho_B=Tr_A(\rho'_{AB})$ and $\rho'_A=\mathcal{L}_A(\rho_A)=Tr_B(\rho'_{AB})$, but I am not sure if it is enough to have the good partial density matrices to ensure that we have the good "global" density matrix $\rho_{AB}$ ? – StarBucK Oct 15 '18 at 21:36 • No, in general the joint density matrix cannot be unambiguously recovered knowing both partial ones: see this question. – Luzanne Oct 15 '18 at 21:48 • but: if both $\mathcal{L}$ and $\mathcal{L}_A$ can be assumed to be linear functions on the space of all matrices (resp. on the space of traceclass operators in infinite dimension), then I think the form for $\mathcal{L}$ can be derived by looking first at $\rho_{AB}$'s of the form $\rho_A \otimes |\psi\rangle\langle\psi|$ (since the constraints on $\mathcal{L}$ must hold for all $\rho_{AB}$'s, then in particular for those ones) and then using linearity. – Luzanne Oct 15 '18 at 21:49 So using the reference you provided (specifically the Appendix B where the heavy lifting is done), we can extend $$\mathcal{L}$$ and $$\mathcal{L}_A$$ as real-linear maps on the space of Hermitian matrices on $$\mathcal{H} := \mathcal{H}_A \otimes \mathcal{H}_B$$, resp. $$\mathcal{H}_A$$ (said reference then goes on to define complex-linear maps on the space of all matrices but I won't need that). ### Special case: $$\rho_A \otimes |\psi_B\rangle\langle\psi_B|$$ with $$\rho_A$$ a density matrix First, let $$\rho_A$$ be a density matrix over $$\mathcal{H}_A$$ and let $$\psi_B \in \mathcal{H}_B$$ with $$\|\psi_B\|=1$$. Defining $$\rho'_{AB} := \mathcal{L}(\rho_A \otimes |\psi_B\rangle\langle\psi_B|)$$, we have: $$\text{Tr}_A(\rho'_{AB}) = \text{Tr}_A(\rho_A \otimes |\psi_B\rangle\langle\psi_B|) = |\psi_B\rangle\langle\psi_B|.$$ Since $$\rho'_{AB}$$ is a density matrix there exist reals $$p_k \in ]0,1]$$ with $$\sum_k p_k = 1$$ and unit vectors $$\Psi_k \in \mathcal{H}$$ such that: $$\rho'_{AB} = \sum_k p_k |\Psi_k\rangle\langle\Psi_k|,$$ so defining the orthogonal projector $$\Pi := \mathbb{1} \otimes |\psi_B\rangle\langle\psi_B|$$ and using the properties of the partial trace we have: $$\sum_k p_k \langle\Psi_k|\Pi|\Psi_k\rangle = \text{Tr} (\rho'_{AB} \Pi) = 1 = \sum_k p_k \langle\Psi_k|\Psi_k\rangle.$$ Using that all $$p_k$$ are positive with $$\langle\Psi_k|\Pi|\Psi_k\rangle \leq \langle\Psi_k|\Psi_k\rangle$$, we deduce $$\langle\Psi_k|\Pi|\Psi_k\rangle = \langle\Psi_k|\Psi_k\rangle$$ and therefore $$|\Psi_k\rangle = \Pi|\Psi_k\rangle$$. In other words there exist unit vectors $$\phi_k \in \mathcal{H}_A$$ such that $$\Psi_k = \phi_k \otimes \psi_B$$. Defining the density matrix $$\rho'_A := \sum_k p_k |\phi_k\rangle\langle\phi_k|$$, we thus have: $$\rho'_{AB} = \rho'_A \otimes |\psi_B\rangle\langle\psi_B|,$$ and, since $$\rho'_A = \text{Tr}_B \rho'_{AB} = \mathcal{L}_A(\rho_A)$$, we conclude: $$\mathcal{L}(\rho_A \otimes |\psi_B\rangle\langle\psi_B|) = (\mathcal{L}_A \otimes \mathbb{1})(\rho_A \otimes |\psi_B\rangle\langle\psi_B|).$$ ### Extending by linearity to $$\sigma_A \otimes |\psi_B\rangle\langle\psi_B|$$ with $$\sigma_A$$ an arbitrary Hermitian matrix We can then extend this result by linearity to arbitrary Hermitian matrices $$\sigma_A$$ on $$\mathcal{H}_A$$ (for any such Hermitian matrix can be written as a linear combination of density matrices over $$\mathcal{H}_A$$: specifically as $$\sigma_A = r^+ \rho_A^+ - r^- \rho_A^-$$ with $$r^+,r^-$$ non-negative reals and $$\rho_A^+,\rho_A^-$$ density matrices; see the above-mentioned reference). ### Extending by linearity to general arbitrary Hermitian matrices $$\sigma_{AB}$$ Now, let $$\sigma_{AB}$$ be a general Hermitian matrix on $$\mathcal{H}$$ and let $$\left(e_i \right)_i$$ be an orthonormal basis of $$\mathcal{H}_B$$. We have: $$\sigma_{AB} = \sum_{i,j} \tau_A^{ij} \otimes |e_i\rangle\langle e_j|,$$ with $$\tau_A^{ji} = \left(\tau_A^{ij}\right)^{\dagger}$$. Reorganizing terms this becomes: $$\sigma_{AB} = \sum_{i} \tau_A^{ii} \otimes |e_i\rangle\langle e_i| + \sum_{i And we also have: $$|e_i\rangle\langle e_j| + |e_j\rangle\langle e_i| = \frac{|e_i+e_j\rangle}{\sqrt{2}}\frac{\langle e_i+e_j|}{\sqrt{2}} - \frac{|e_i - e_j\rangle}{\sqrt{2}}\frac{\langle e_i - e_j|}{\sqrt{2}},$$ as well as a similar formula for $$|e_i\rangle\langle ie_j| + |ie_j\rangle\langle e_i|$$. Putting everything together there exist Hermitian matrices $$\sigma_A^k$$ and unit vectors $$\psi_B^k$$ such that: $$\sigma_{AB} = \sum_k \sigma_A^k \otimes |\psi_B^k\rangle\langle\psi_B^k|,$$ so by linearity the general result $$\mathcal{L}(\sigma_{AB}) = (\mathcal{L}_A \otimes \mathbb{1})(\sigma_{AB})$$ follows from the previous case. Note 1: An alternative proof for the last part would be to use successively that any density matrix $$\rho_B$$ is a linear combination of $$|\psi_B\rangle\langle\psi_B|$$'s, any Hermitian matrix $$\sigma_B$$ is a linear combination of $$\rho_B$$'s, and any Hermitian matrix $$\sigma_{AB}$$ is a linear combination of $$\sigma_A \otimes \sigma_B$$'s. In a way the proof above just make this decomposition explicit. I like that it shows better what happens to the non-diagonal terms, namely that they can be made diagonal in an overcomplete (non-orthogonal) basis. Note 2: Vice-versa, rather than invoking the linked reference to extend the special case result from $$\rho_A \otimes |\psi_B\rangle\langle\psi_B|$$ (with $$\rho_A$$ density matrix) to $$\sigma_A \otimes |\psi_B\rangle\langle\psi_B|$$ (with $$\sigma_A$$ Hermitian matrix), we could have used such an explicit decomposition of $$\sigma_A$$ (it would have looked very similar to the formulas from the last part, except with simple complex coefficients $$\lambda^{ij}$$ instead of $$\otimes$$-multiplied matrices $$\tau^{ij}$$). Note 3: Many questions about density matrices have analogues in terms of classical probability densities, where we may have more intuition. The analogue problem here would be, given a linear transformation of the joint probability: $$p'_{AB}(a,b) = \int \!da' db'\, K(a,b;a',b') \, p_{AB}(a',b'),$$ which, for any $$p_{AB}$$, transforms the marginal probability for $$A$$ as: $$p'_A(a) = \int \!da'\, K_A(a;a') \, p_A(a'),$$ and leaves the marginal probability for $$B$$ unchanged, what is the form of the kernel $$K$$? A way to solve this classical problem would to get rid of the complexity of the full joint probability, by looking first at what happens if the state of $$B$$ is certain, ie. $$p_{AB}(a,b) = p_A(a) \delta(b-b_o)$$. Then, the marginal probability for $$B$$ after the transformation will still be $$\delta(b-b_o)$$, ie. the state of $$B$$ will still be certain, and therefore the joint probability will have the form $$p'_{AB}(a,b) = p'_A(a) \delta(b-b_o)$$, yielding $$K(a,b;a',b_o) = K_A(a;a') \delta(b-b_o)$$ (or, in linear operator notation: $$K = K_A \otimes \mathbb{1}$$). Once we understand this classical case, we can try and adapt the proof to the quantum problem, replacing $$\delta$$ by a pure state for $$B$$. Of course, there are complications in the quantum case (in particular, we need to use Hermitian matrices that are not density matrices in intermediary steps, while the classical case could be done entirely acting on probability densities only and using exclusively convex linearity), but the spirit is the same. • I am probably missing something obvious but why do you have $Tr_A(\rho'_{AB})=|\psi_B \rangle \langle \psi_B |$ ? Because you use this property when you write $Tr(\rho'_{AB} \Pi)=1$ right ? – StarBucK Oct 16 '18 at 19:31 • Yeah, I think I somehow follow your proof excepted the $Tr_A(\rho'_{AB})=|\psi_B \rangle \langle \psi_B|$. If I write exactly the l.h.s, I end up with : $\sum_p \sum_{i,j} c_{ij} \langle \phi^A_p | \mathcal{L}( |u_i \rangle |\psi_B \rangle \langle u_j| \langle \psi_B |) | \phi^A_p \rangle$ and I really don't understand how we can end up to $|\psi_B \rangle \langle \psi_B |$ with that ? – StarBucK Oct 16 '18 at 19:48 • @StarBucK $\text{Tr}_A (\rho'_{AB}) = |\psi_B\rangle\langle\psi_B|$ follows from the requirement that $\rho_B$ doesn't evolve (your 2nd constraint on $\mathcal{L}$). – Luzanne Oct 16 '18 at 19:51 • Oh you are right sorry... I am reading the second part of the proof about the generalisation now thanks for this. – StarBucK Oct 16 '18 at 19:57 • @StarBucK At the very end of the 1st part, there was an intermediary step to extend the result from $\rho_A \otimes |\psi_B\rangle\langle\psi_B|$ (with $\rho_A$ density matrix) to $\sigma_A \otimes |\psi_B\rangle\langle\psi_B|$ (with $\sigma_A$ general Hermitian matrix). I have added headers that hopefully make the logic of the proof clearer. – Luzanne Oct 17 '18 at 14:42
# Cartesian literal notation ## Introduction In computer science, a literal is a notation for representing a fixed value in source code. Almost all programming languages have notations for atomic values, some also have notations for elements of enumerated types and compound values. Wikipedia For example 1 usually represent an integer value, "Hello" a string, [9,5,11] an array and 1..9 a range. The range notation is special because we have just two values in the literal but the actual value includes all elements in between. We can say that a range expands to an array or a list of values. So the expansion of the range 1..9 is [1,2,3,4,5,6,7,8,9]. In this challenge you are given a Cartesian literal as input and you have to output its expansion. ## Notation format rules • consider only non negative integers values. • this notation could work for products of any degree but in this challenge you have to handle only products of two sets, so we get a list of pairs. • we can have one or more groups of products. Every group is terminated by the / symbol and generates its own list which is then concatenated to the others groups. • each group has 2 sets: A and B and they are separated by the : symbol. • each set is composed of ranges and/or atomic values separated by ,. Ranges are in the form start-end for example 0-10. Values must be sorted without overlaps, for example 1-5,5,4 can not appear. • every group contains non empty sets. ## Example The literal 1-2,5:10-12/0:1-3/ is composed of two groups. The first group (1-2,5:10-12) has the sets: A=[1,2,5] B=[10,11,12] and generates the product [1,10],[1,11],[1,12],[2,10],[2,11],[2,12],[5,10],[5,11],[5,12] the second group generates [0,1],[0,2],[0,3] which is appended to the first so the output is: [[1,10],[1,11],[1,12],[2,10],[2,11],[2,12],[5,10],[5,11],[5,12],[0,1],[0,2],[0,3]] ## Test cases "0:0/" -> [[0,0]] "1-3:2/" -> [[1,2],[2,2],[3,2]] "4:5-6/" -> [[4,5],[4,6]] "9,10,11:9-11/" -> [[9,9],[9,10],[9,11],[10,9],[10,10],[10,11],[11,9],[11,10],[11,11]] "100:0-1,2,3-4/1:2/" -> [[100,0],[100,1],[100,2],[100,3],[100,4],[1,2]] "1:2/3:4/5:6/7:8/9:10/" -> [[1,2],[3,4],[5,6],[7,8],[9,10]] "11-13:2/" -> [[11,2],[12,2],[13,2]] ## Rules • This is so all usual golfing rules apply, and the shortest code (in bytes) wins. • You can assume the input will always be a valid literal, you don't have to handle invalid literals. • Please clarify how exactly the permutations should be generated (a worked example might be useful here). I think I can figure out what you want but I shouldn't have to work backwards from your example to understand the specification. Nov 25 '21 at 19:31 • Could you please clarify some details about the format? Is the last character always a "/"? Does the input string always contain at least one part? Can the ranges contain only one number (is 13-13 allowed as input)? Nov 25 '21 at 19:39 • This seems like a good concept, and could do with some time in the Sandbox. For future reference, I highly recommend using the Sandbox before posting so you can get feedback, suggestions, and clarifications first. Nov 25 '21 at 22:31 • I edited completely because it seemed a nice challenge but it was not very well written, I hope I understood correctly your intentions and hope it's reasonably well written now, I think Cartesian product is more appropriate. If you feel I did wrong feel free to rollback or comment or edit again. Next time use the sandbox please to get some help as suggested previously, Nov 26 '21 at 4:01 • @hyper-neutrino I did post it in Sandbox for a week. But there was only one guy made some comment. The Post in Sandbox Nov 26 '21 at 13:04 # 05AB1E, 25 bytes ¯I¤¡¨vy':¡ε',¡ε'-¡Ÿ}˜}â« Pretty straight-forward approach. Explanation: ¯ # Start with an empty list [] I # Push the input-string ¤ # Push its last character (without popping): "/" ¡ # Split it on "/" ¨ # Remove the trailing empty string vy # Foreach over the parts: ':¡ '# Split the part on ":" ε # Map over each smaller part: ',¡ '# Split it on "," ε # Inner map yet again: '-¡ '# Split on "-" Ÿ # Convert this pair (or single integer) to a ranged list }˜ # After the inner-most map: flatten } # After the outer map: pop and push the lists separated to # the stack â # Create pairs of the two lists with the cartesian product « # Merge this list of pairs to the result-list # (after the loop, the result is output implicitly) # Jelly, 25 23 bytes ṣṪṣ”:ṣ”,⁾-ry$€VFƲ€ŒpƊ€Ẏ Try it online! -2 bytes from reading Kevin Cruijssen's answer. ṣṪṣ”:ṣ”,⁾-ry$€VFƲ€ŒpƊ€Ẏ ṣ Split on Ṫ last character, removing it from the string Ɗ€ For each: ṣ”: Split on ":" Ʋ€ For each: ṣ”, Split on "," • ${x/:/,z\},\{z,} replace : with ,z},{z, • This constructs strings that follow Zsh's pattern of brace expansion, which is the easiest way to do a Cartesian product • By evaling them, they are expanded properly, and print -l prints them newline-separated. • The ,zs are to work around the fact that things of the form {0} are treated as literal strings, and don't just expand to a 1-element list. They are removed again by the |grep -v z • So... not pure Zsh, but actually Zsh + coretools? – Neil Nov 26 '21 at 17:51 • @Neil I (almost) never post Zsh answers without meaning "Zsh + GNU coreutils". I take it for granted that the tools designed for shell use can be used in a shell without needing to specify it ;) Nov 26 '21 at 17:56 # Retina, 79 bytes !_S/ %(+\b(\d+)-(\1\b|())$1$#3*$(,$.(*__)- Lw$\b(\d+)\b.*:.*\b(\d+)\b $1,$2 Try it online! Outputs each pair on its own line but link includes test suite that joins the lines back together for convenience. Explanation: !_S/ Split on /s, but drop empty entries. %( Separately for each split: \b(\d+)-(\1\b|()) $1$#3*$(,$.(*__)- Expand a range: if it has already expanded to the form n-n then simply delete the -n otherwise replace it with n,n+1-m. + Repeat until all ranges have been completely expanded. Lw$\b(\d+)\b.*:.*\b(\d+)\b$1,$2 Take the Cartesian product of both sets by considering overlapped matches of one number from each of the sets. # Charcoal, 49 bytes F⪪S/¿ι«≔⟦⟧θF²FE⪪§⪪ι:¬κ,I⪪λ-F…·§λ⁰⊟λ¿¬κ⊞θμFθ⟦⁺⁺μ,ν Try it online! Link is to verbose version of code. Would be 1 byte shorter if the product could be output in a different order. Explantion: F⪪S/¿ι« Split the input on /s and loop over non-empty groups. ≔⟦⟧θ Prepare to collect the second set. F² Loop over each set. FE⪪§⪪ι:¬κ,I⪪λ- Split the group on :, extract the desired set, then split that on ,, then split that on -, then cast to integer. F…·§λ⁰⊟λ Loop over each of the resulting ranges. (Where there was no - in that range, the same integer will be used as the start and end of the range, resulting in a range of that integer.) ¿¬κ If this is the second set (which is being processed first), then... ⊞θμ ... save this integer for later, otherwise... Fθ⟦⁺⁺μ,ν ... for all integers from the second set, pair the current integer from the first set with it. # Bash + GNU coreutils, 126 bytes tr :/ \\n|sed -Ee 's/([0-9]+)-([0-9]+)/{\1..\2}/g' -e 's/.+,.+/{\0}/'|while read s&&read t;do eval printf %s\\\\n$s\\ $t;done Try it online! Takes input on STDIN without a trailing newline. Explanation: tr :/ \\n| Split the input on both colons and slashes. This results in a trailing newline but read eats that anyway. sed -Ee 's/([0-9]+)-([0-9]+)/{\1..\2}/g' -e 's/.+,.+/{\0}/'| Expand numeric ranges and wrap lists in braces. while read s&&read t;do eval printf %s\\\\n$s\\ $t;done Read two sets at a time and generate their Cartesian product. # Python 3, 187 bytes import itertools as t [k for b in s.split('/')[:-1] for k in t.product(*[[j for x in m.split(',') for j in range(*[int(x.split('-')[0]),int(x.split('-')[-1])+1])] for m in b.split(':')])] # Pure Bash, 223213 205 bytes IFS=/ for i in$1 do l= IFS=: for j in $i do while [[$j =~ ([0-9]*)-([0-9]*) ]] do set -- ${BASH_REMATCH[@]} j=${j/$1/"{$2..$3}"} done k=${j##*,*} l=$l\\\${k:-"{$j}"} done eval printf %s\\\\n${l:2} done Try it online! Takes input as a command-line parameter. Edit: Saved 10 bytes thanks to @pxeger. Explanation: IFS=/ for i in $1 do ... done Split the input on /s. (The last empty string gets ignored.) l= Start building up the sets. IFS=: for j in$i do ... done Split the group on :s. while [[ $j =~ ([0-9]*)-([0-9]*) ]] do set --${BASH_REMATCH[@]} j=${j/$1/"{$2..$3}"} done Expand numeric ranges. k=${j##*,*} l=$l\\\ ${k:-"{$j}"} Wrap lists in braces and concatenate the sets. eval printf %s\\\\n ${l:2} Generate the Cartesian product of the sets. • -10 bytes by saving $BASH_REMATCH: Try it online! Nov 26 '21 at 20:59 • @pxeger Thanks, that inspired me to shave another 7 bytes off by using set -- instead. – Neil Nov 26 '21 at 23:31 # Burlesque, 54 bytes ~]'/;;{':;;{',;;{J'-~[{'-;;)ti^pr@}qtiIE}\m}MPcp}\m}MP Try it online! ~] # Drop final / '/;; # Split on / { ':;; # Split on : { ',;; # Split on , { J'-~[ # Contains - { '-;; # Split on - {ti}MP # Map to int and force onto stack r@ # Range from low to high } {ti} # To int IE # If else }\m # Map and concatenate }MP # Map and push cp # Cartesian product }\m # Map and concatenate }MP # Map and push # R, 196 bytes function(x,[=sapply,t=strsplit,d=do.call)apply(matrix(t(x,'/')[t,":"][t,","][t,"-"][lapply,function(j)as.list(scan(t=j)+!3:4)][function(i)unlist(i[d,w=:])],2),2,function(l)d(outer,c(l,paste))) Try it online! Outputs a list of each 'group of products' (separated by / in the input), containing space-separated pairs of elements. +8 bytes to output as a flat vector. Ungolfed: a= sapply( ... strsplit(x,'/'), # split input on '/ sapply( ... strsplit(x,':'), # split that on ':' sapply( ... strsplit(x,','), # split that on ',' sapply(strsplit(x,'-')) # and finally split that on '-' b=lapply(a,function(j)as.list(rep(j,2)[1:2])) # double any lists of one item c=sapply(b,function(i)unlist(s(i,do.call,what=:))) # and apply ':' (range) using # 2-element lists as arguments, # concatenating (unlist) the results m=matrix(c,2) # put the output into 2-row matrices apply(m,2,function(l)do.call(outer,c(l,paste))) # and, for each column, paste togethe # the elements of each of the two rows # JavaScript (ES10), 150 bytes This seems quite long... s=>s.replace(/(\d+)-(\d+)/g,g=(_,a,b)=>a-b?a+[,g(_,-~a,b)]:a)[S='split']/.map(s=>s?(g=k=>s[S]:[k][S],)(0).map(a=>g(1).map(b=>[a,b])):[]).flat(2) Try it online! • Isn't there some ways to use ranges + splat and eval in js? Nov 27 '21 at 14:46 • @AZTECCO There is no range builtin in JS. There's at least one proposal, though. Nov 27 '21 at 14:52 # Ruby, 7977 bytes ->s{eval"[#{s.gsub(?:,'].product([').gsub(/\d+-/,'*\00..').gsub ?/,'])+['}]"} Try it online! • Saved 2 thanks to @G B The literal already has a structure we can use, we just have to substitute a few symbols and then we evaluate it [ prepend a [ #{s.gsub(..).gsub(... transform input by replacing: : => '].product([' /(\d+)-/ => '*\1..' here \1 is the captured number / => '])+[' we add next square ] which we close empty if there's no group available. Here is an example with adds and substitutions : 100 : 0- 1 , 2 , 3- 4 / 1 : 2 / [ 100 ].product([ *0.. 1,2, *3.. 4 ])+[ 1 ].product([ 2 ])+[` ] • 77 – G B Dec 2 '21 at 12:24
# Convexity of a function Suppose we have $F: R^n \longrightarrow R$ , $P: R^n \longrightarrow R^n$ and $G: R^n \longrightarrow R$ all nice- let's say given by polynomial and $P$ is invertible - such that $F(x) =G( P(x) )$. Is it possible to relate the conditions of convexity of $F$ to conditions on $G$. For example if $X$ is the image of $R^n$ under $P$, i.e. $X =P(R^n)$. Then $X$ is a semi algebraic subset of $R^n$. Now does $G$ being convex on $X$ imply $F: R^n \longrightarrow R$ is convex? If this is not the case in general are there conditions on $P$ which help here? - What does "being convex on $X$" mean, when $X$ is not necessarily a convex set? – user53153 Dec 18 '12 at 6:43 Hmm for example that the Hessian is psd at every point.. – Benedikt Dec 21 '12 at 8:39 There are a few facts that may help you (although, I believe you need to rephrase your question to make it more clear). First, if $P$ is affine, i.e., $P(x)=Ax+b$, and $G$ is convex, then $F$ is convex. Second, assume that $P$ is given by $P(x)=(P_1(x),\ldots, P_n(x))$ and each $P_i:\mathbb{R}\to\mathbb{R}$ is convex. Assume that $G(z_1,\ldots, z_n)$ is nondecreasing in each $z_i$ (when other variables are fixed) for each $i$. Then $F$ is convex. – Pantelis Sopasakis Nov 2 '14 at 14:22
# How to create cell fracture in python Firstly, I am new to Blender. I am actually working on to create a hemispherical model with cell fractured property. 1. I have done this work manually using cell Fracture add-on and now I want to automate using python. 2. I have written the code till particle implementation on the hemisphere. Now I want to create a cell fracture with those 100 particles. I have attached the hemisphere picture and the final output needed. I have also attached the code. Please provide me an insight on how to do this using python. Code till Particles in Hemisphere: import bpy # Draw hemisphere of 1 m radius enter_editmode=True, align='WORLD', location=(0, 0, 0), fill_type='TRIFAN', # NOTHING or NGON ) bpy.ops.mesh.bisect( plane_co=(0, 0, 0), plane_no=(0, -1, 0), clear_inner=True, ) # Particles bpy.ops.object.editmode_toggle() bpy.data.particles["ParticleSettings"].count = 100 bpy.context.object.particle_systems["ParticleSettings"].seed = 1 bpy.data.particles["ParticleSettings"].frame_end = 1 Image: Hemisphere with particle feature Image: Final output required with cell fracture Thanks and Regards, Sunag R A. • It is for the code block (under esc key), your are using ' (beside enter key) – HikariTW Jul 31 '20 at 6:19 • @HikariTW recommend tab formatting for code block (ie all of code block tabbed one extra right) Wrapping in three backticks often leave a residual when copy / pasting. – batFINGER Jul 31 '20 at 9:23 • @batFINGER But I prefer using explicit bracket for code since there is no standard tabbed lens and in some editor, block tab just don't do the work. (And the residual should be considered as parser and html implement method?) – HikariTW Jul 31 '20 at 16:12 Add particle system as suggested here Python and particle system Make sure the cell fracture addon is enabled, and call the operator bpy.ops.object.add_cell_fracture_objects() Which uses defaults as mapped out in doc string >>> bpy.ops.object.add_fracture_cell_objects( source_limit=100, source_noise=0, cell_scale=(1, 1, 1), recursion=0, recursion_source_limit=8, recursion_clamp=250, recursion_chance=0.25, recursion_chance_select='SIZE_MIN', use_smooth_faces=False, use_sharp_edges=True, use_sharp_edges_apply=True, use_data_match=True, use_island_split=True, margin=0.001, material_index=0, use_interior_vgroup=False, mass_mode='VOLUME', mass=1, use_recenter=True, use_remove_original=True, collection_name="", use_debug_points=False, use_debug_redraw=True, use_debug_bool=False) (undocumented operator) Test code, produces result as in image above. import bpy # make sure cell fracture is enabled enable("object_fracture_cell") context = bpy.context # Draw hemisphere of 1 m radius enter_editmode=True, align='WORLD', location=(0, 0, 0), fill_type='NGON', # NOTHING or NGON ) bpy.ops.mesh.bisect( plane_co=(0, 0, 0), plane_no=(0, -1, 0), clear_inner=True, ) bpy.ops.object.editmode_toggle() # Particles ob = context.object ps = ob.modifiers.new("Part", 'PARTICLE_SYSTEM').particle_system ps.seed = 1 ps.settings.count = 100 ps.settings.frame_end = 1 # cell fracture • I was trying the code provided by the chosen answer on Blender 2.92 which was failing, the needed change is to use add_fracture_cell_objects instead of add_cell_fracture_objects. – Anas Einea Mar 22 at 15:26 • Thanks for the heads up. The cell fracture in the version of 2.92.0 RC` I am using is still working as above. What version of 2.92 did you notice change in?. – batFINGER Mar 22 at 19:16
Report ### Distribution of Aligned Letter Pairs in Optimal Alignments of Random Sequences Abstract: Considering the optimal alignment of two i.i.d. random sequences of length $n$, we show that when the scoring function is chosen randomly, almost surely the empirical distribution of aligned letter pairs in all optimal alignments converges to a unique limiting distribution as $n$ tends to infinity. This result is interesting because it helps understanding the microscopic path structure of a special type of last passage percolation problem with correlated weights, an area of long-standing open... ### Access Document Files: • (pdf, 349.1KB) ### Authors Publisher: Annals of Probability Publication date: 2012-11-01 UUID: uuid:5c32bc2b-dd48-4041-bcf5-9eedeaee7b37 Local pid: oai:eprints.maths.ox.ac.uk:1625 Deposit date: 2012-11-25
# If the charge on a conductor always resides on the outside, then why does current flow through the interior of a wire? Does electricity flow on the surface of a wire or in the interior? Charge on a conductor always remains on the surface. In that case, why is it that charge flow through the interior of a wire? How would it not just flow on the surface of the wire? Short answer: a wire is not made of free charges that can move freely, and current flow in a realistic wire is not an equilibrium condition. Charge distributing itself on the surface, thereby maximising the distance between neighbouring repulsive charges and minimising the overall potential energy, is an equilibrium condition. That it, if you left the system for an infinitely long time to settle, that's the state it would end up in. In reality a wire is made up of atoms with delocalised electrons, that is electrons that are still bound to the nuclei, but loosely so as to be easily knocked out. To move towards the outer surface of the wire, it would take them many scattering events with other atoms and electrons, which would result in recoil kicks and change of directions in random directions. And with a constant potential difference at the ends of the wire you always have a fresh supply of new forward (or backward) moving electrons ready to maintain the random kicking and knocking out. By the way, AC current does flow on the outside of a wire, because of something called the skin effect. • Why would it take them many scattering events? Don't electrons exert an electric force on each other, so it's not merely diffusion? In that case, there will still be a net force to the outside of the wire that should after a while yield a noticeable result – JobHunter69 Sep 11 '16 at 22:02 • The electric force between electron would probably be less than the attractive force provided by a high $Z$ nucleus. Hence they'd be attracted towards neighbouring atoms - this is what I meant by 'scattering events'. I.e. bouncing off nuclei here and there. – SuperCiocia Sep 11 '16 at 22:04 • It doesn't require scattering events. The current exists throughout the cross section because the electric field due to the surface charges (plus any external fields) is uniform across the cross-section. – garyp Sep 12 '16 at 0:21 • I'd rather say AC current doesn't flow on the outside of a wire, but in a shallow region beneath the surface: down to approximately the skin depth $\delta = \sqrt{\frac{2\rho}{\mu\omega}}$. For mains AC this actually turns out to be much thicker than any wire you could practically use – effectively the current density is homogeneous in the entire conductor. The effect becomes really relevant at high frequencies. – leftaroundabout Sep 12 '16 at 10:27 • Any excess charge distributes itself on the surface. All those conduction band electrons on the interior are nicely balanced by the atomic cores. And, in a solid, those electrons are not localized, but occupy Bloch functions throughout the lattice and move quite happily (with scattering) in the bulk. – Jon Custer Sep 12 '16 at 15:28 Charge can be either positive or negative. It's therefore possible to have equal numbers of positive and negative charges within an infinitesimal volume maintaining a net charge of zero, yet charges still being there. This wouldn't be possible if there was only one type of charge. So when people say "charge on a conductor always resides on the outside", what's true is that local net charge resides on the outside of a conductor.
Math 614 Numerical Methods I Course Info Almost everything is on the course canvas page. Responses to Feedback Sheet questions: Questions in bold, answers in plain text. Responses for Thursday February 2 • I think I would understand the properties of the norm and everything in today’s class if I saw more examples related to it! Here’s a nice example of a vector norm and here it is for matrix norms. • Do we need to learn more about vector norms? A little bit, but I’ll save that for when we need them. • A student did not understand the part during “matrix norms that went like $\lVert A \rVert_\infty = \max_i |{(Av)}_i|$. That’s just the definition, recall that we also need $\lVert v \rVert_\infty = \max_i |v_i| = 1$. • What do equivalent norms mean in application? How should I understand it? Basically the important thing to realize that if two norms are equivalent, then small things in one norm are small in the other, and large things in one norm are large in the other. Therefore, when we want to show that our approximation error is small, any equivalent norm will do and we can use whichever one makes our calculation easiest. • Why do people study norms on infinite dimensional vector spaces? Good question! Here’s one answer: a simple example would be the set of all bounded continuous functions. To make this even easier, suppose these are continuous functions on the domain $0<x<1$ such that $f(0)=f(1)=0$. This satisfies the definitions of a vector space (if $f(x)$ and $g(x)$ are in the space so is $h(x)=af(x)+b(g(x)$). Just like we want to know if a sequence of numbers converges to a limit, we may like to know if a sequence of functions converges to a limit. If we define $\lVert f \rVert = \max_{0<x<1}|f(x)|$, then this works. • If $A = \begin{pmatrix} 1 & 2 & 3 \\ -4 & 5 &-6\end{pmatrix}$, why is $\lVert A \rVert_\infty = 15$ and $\lVert A \rVert_1 = 9$? Because the infinity-norm is the largest sum of the absolute values over any row ($4+5+6$) and the one-norm is the largest such column sum ($3+6$). • For two norms $\lVert \cdot \rVert_a$ and $\lVert \cdot \rVert_b$, do the two positive constants $c_1$ and $c_2$ in the inequality $c_1\lVert v \rVert_a < \lVert v \rVert_b < c_2\lVert v \rVert_a$ have to satisfy $c_1<c_2$? Yes, of course. • What is $\text{range}A$? This is just the set of all vectors $x$ such that $x = Av$ fro some other vector $v$. Another way to say this is that $v$ is a linear combination of the columns of $A$. • What is the spectral decomposition? Here’s what the textbook has to say Spectral decomposition: Let $X$ be the matrix whose columns are eigenvectors of $A$, and suppose that $X$ is square and nonsingular. Then \begin{aligned} A X & =A\left[\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\right] \ & =\left[\lambda_1 \mathbf{x}_1, \lambda_2 \mathbf{x}_2, \ldots, \lambda_n \mathbf{x}_n\right] \ & =X \Lambda \end{aligned} where $\Lambda$ is a diagonal matrix with the eigenvalues on its diagonal, $\Lambda=\operatorname{diag}\left(\lambda_1, \lambda_2\right.$, $\left.\ldots, \lambda_n\right)$. Therefore, we can write $$A=X \Lambda X^{-1} \text {. }$$
# Poincaré-Hopf Theorem 1. Dec 4, 2009 ### Jamma Hi. I've been going through the proof of this theorem; if you don't know the statement, see: http://en.wikipedia.org/wiki/Poincaré–Hopf_theorem However, I'm getting confused due to inaccuracies of the way this theorem is written down in different sources. On the wiki article, it says that the theorem applies to compact, ORIENTABLE, DIFFERENTIABLE manifolds. However, the reference from this page says that it must be just compact and smooth. Further still, in "Topology from a differentiable viewpoint" by Milnor (I need to read through this sketch proof a bit more rigourously), we seem to only require that M be a compact manifold (actually, just read that the vector field needs to be smooth, so I'm guessing that the original manifold needs to be smooth to even have the notion of a smooth vector field?). So do we need only differentiability of the manifold? Or do we need either smoothness OR orientability? My problem is that the sketch proof on wikipedia seems to make use of the fact that the manifold is orientable since we define the Gauss map from its boundary, which can only be defined if the boundary is orientable (the degree of a map between manifolds is only defined when the top homology group is Z, implying orientability), which will be implied by the original manifold being orientable. I would like to be able to prove the most general result possible (I need to present the proof in class); maybe we don't need the manifold to be orientable, and when forming our new manifold in the higher dimensional Euclidean space, our boundary must now be orientable? - If this is true, I must be missing something. My other annoyance with this sketch proof is that at the end it makes use of a triangulation of our manifold M. But can we be sure that a triangulation exists? It certainly doesn't for every manifold, but again, maybe when it is differentiable, orientable and compact, (or smooth and compact?!) then there will be a triangulation. Sorry for the length of this post, I shall summarise: 1) In the hypotheses for this proof, can we get away with M just being differentiable and compact? Or do we require that it be smooth and compact OR differentiable, orientable and compact? 2) In the sketch proof, if we had not assumed orientability, can we assume that the boundary of our new manifold in the higher dimensional Euclidean space has an orientable boundary? 3) Will a smooth and compact (or [differentiable and compact] or [differentiable, compact and orientable]) always have a triangulation? (I assume that the answers to 2) and 3) will be negative and that we actually need a more elaborate proof for the general theorem than the badly written on on wiki). If you managed to read all that, thanks! Any answers greatly appreciated :) 2. Dec 4, 2009 ### zhentil If your manifold is not orientable, indices of vector fields are only defined modulo two. I am not sure if a mod two version of the P-H theorem holds in this case, but it certainly doesn't hold over the integers, since one side of the equation is not defined. You definitely need some idea of differentiability to talk about vector fields. It's possible that the theorem may be true for C^1 vector fields, but again, I'm not sure. 3. Dec 5, 2009 ### wofsy I am glad you asked this question since it has forced me to think carefully about this theorem. I am not sure of the answer but it seems that the sum of the indices of a vector field does not require the manifold to be orientable. Please refute the arguments. First of all, at any isolated zero of the field, the index is defined in a small coordinate ball as the degree of the map of the bounding sphere into the standard sphere in Euclidean space. This degree is independent of the coordinate ball and in particular does not depend upon whether the coordinate map is is orientation preserving or reversing. This means that the sum of the indices of a vector field on any manifold is well defined. If the manifold is non-orientable pull the vector field back to an orientable 2 fold cover, do the theorem there then divide by 2. Since the coordinate projection will not change the index of any particular zero, the theorem follows because zeros are identified in pairs with no change of local index and the Euler Characteristic of the non-orientable base is just half the Euler characteristic of its 2 fold covering space. In the Poincare-Hopf theorem one first embeds the manifold in Euclidean space and then extends the vector fields to a tubular neighborhood making sure that each local index is preserved and that the new vector field points outward along the boundary. A simple argument ( Stokes theorem ) shows that the sum of the indices is the degree of the Gauss map on the boundary of the tube. The Gauss map is well defined because the boundary of the tube is orientable (since any closed hypersurface of Euclidean space is orientable). I do not see where orientability of the original manifold comes in here. Any smooth manifold can be triangulated. If one chooses a vector field that follows the contours of a barycentric subdivision of the triangulation a simple counting argument shows that the sum of the indices is the Euler characteristic. Again,I do not see where orientability comes in here. Lastly it seems to me that the vector field only needs to be continuous because the idea of degree of a mapping only requires continuity. On the other hand, if one wants to apply the Gauss Bonnet theorem to prove the index theorem the vector field would have to be at least C^1. Last edited: Dec 5, 2009 4. Dec 5, 2009 ### zhentil Hi Wofsy, I believe you're correct (except for one minor point). One needs only to specify that the lift of the vector field to the orientable double cover is unique, and everything else follows easily. With respect to continuous vector fields, the answer is clearly true if the manifold has a smooth structure, since any continuous (or C^1) vector field is homotopic to a smooth vector field. The question I had in mind is whether indices of vector fields can be defined in a meaningful way, say, for PL manifolds and continuous vector fields. If one loses the notion of a tangent space, what does it mean for a vector field to be transverse to the zero section? 5. Dec 7, 2009 ### wofsy I don't know the PL theory but suspect that there are analogues of vector bundles. I would be willing to do some reading with you. I could ask around for papers if you like. 6. Dec 8, 2009 ### Jamma Thanks Wofsy, that certainly is very helpful; much more so than the wikipedia article. You wouldn't mind explaining what you mean by a 2-fold cover do you? Its not something that I have ever encountered before, although I assume it is something that is quite intuitive- an example with a Mobius band maybe? And yes, we would need to know that this construction (whatever it is!) is unique. So yes, orientability is not an issue. However, we still need the manifold to be smooth, so that we can be sure to have a triangulation (I was not aware that all smooth manifolds can be triangulated; where can I find this statement?). I would assume that the theorem will still hold for just a differentiable manifold though; maybe we need a slightly more elaborate version of the contruction on the vector field? 7. Dec 8, 2009 ### wofsy - You do not need the manifold to be smooth only C^1 then I think all of the arguments work. Take a look at Milnor's proof and see if smoothness is actually used. I don't think so. - In practice mathematicians only work with smooth manifolds. This is because they want to take as many derivatives as they like. There are theorems about when a C^k manifold has a smooth structure but I am not sure what they say. I suspect that if k is large enough say 2 or greater then it is true. I will look it up. It is an interesting question. I think it was first studied by Whitney and I have the feeling that he completely solved the problem. - Every smooth manifold not only has a triangulation but actually has a smooth triangulation. This means that the maps of the simplices in the triangulation are themselves smooth. This means that one can extend the maps of the standard simplices in Euclidean space into the manifold to an open neighborhood so that the maps are smooth in the neighborhood and so the the maps line up smoothly(differentiably )on the intersections of the simplices. This is called a smoothing of the triangulation - I think. I think this is a theorem of Whitehead and I do not know the proof. It would be fun to try to figure it out for ourselves. This issue has other interesting aspects to it. There are triangulations of manifolds that are not smoothable. And there are manifolds which can not be triangulated. I do not know this theory. - A two fold covering space is another manifold that wraps around the first twice. This wrapping or covering is a local homeomorphism and is exactly 2 to 1. It is a standard fact that a non-orientable manifold has an orientable 2 fold covering. There are a zillion books on algebraic topology that discuss this. I think the best introductory book to topology and geometry from the modern point of view is Singer and Thorpe's Lecture Notes on Elementary Topology and Geometry. - In the case of Moebius band the two fold cover is the cylinder. Try to figure out the covering map. When you finish that try figuring out how the torus is a two fold cover of the Klein bottle. The sphere is a two fold cover of projective space. Mod out by the anitpodal map (the map that sends opposite poles to each other). In the case of even dimensional sphere the projective space is non-orientable. For odd spheres it is orientable. Try to prove this. Notice that in the case of the two sphere you can see the cylinder covering the Moebius band. Last edited: Dec 8, 2009 8. Dec 10, 2009 ### Jamma Ahha, so you can just take your standard "box with identifications on it" and double it in one direction. So yes, I see how you would get the Klein bottle and torus, this is interesting. How this works in higher dimensions I'm sure is not so simple, but sounds interesting. I will make sure to take a look at this subject, when I have more time. As for Milnor's proof, despite starting out by saying that we just need a manifold, but a smooth vector field (so we would need a smooth manifold too), at the end he starts deducing that we don't need such strong differentiability conditions. Its more of a sketch and I don't really have time to go in to it too much, but he doesn't seem to use the Whitney embedding theorem, but uses vector fields with non-degenerate zeros, and later goes on to say that this wasn't really a restriction. For the Whitney embedding theorem, on wikipedia it says that it applies to smooth manifolds, but then that they can be smoothly embedded. So I'm guessing that we don't need a smooth manifold? (although then obviously, we will just have an embedding?). Last edited: Dec 10, 2009 9. Dec 22, 2009 ### wofsy This thread has got me reading. In the section on the Poincare-Hopf theorem, Milnor's book mentions Morse functions on manifolds. These are real valued functions whose gradients have only non-degenerate zeros. In Milnor's book,Morse Theory, he shows that any manifold has a Morse function and that its gradient vector field not-only tells you the Euler Charateristic of the manifold but also gives you a cell decomposition. One cell is added for each zero of the gradient and the dimension of the cell is computable from the Hessian of the Morse function. Also the index of the vector fields at a zero is computable from the Hessian - it is just its determinant. A negative determinant means a cell of odd dimension is attached, a positive determinant means a cell of even dimension is attached. This means that the sum of the indices of the gradient vector field equals the Euler Characteristic of the manifold since its equals the alternating sum of the number of cells in each dimension. So these special gradient vector fields seem to tell you a lot more that just the Euler Charateristic. They tell you the homotopy type of the manifold. It might be interesting to investigate what the total information in a vector field is in general. One last thought. Milnor's book, Topology from the Differentiable Viewpoint, only proves that the sum of the indices of a vector field is the same for all vector fields. He does not prove that it is the Euler Characteristic. (Technically, he only shows it for vector fields with non-degenerate zeros.) This he does in Morse theory but also the example I mentioned above that uses a triangulation works. Last edited: Dec 22, 2009 10. Dec 22, 2009 ### Jamma Very interesting, thanks Wofsy. 11. Dec 22, 2009 ### zhentil In general, one can recover the homology of the manifold from a single Morse function. And his construction does prove that every smooth manifold has the homotopy type of a CW complex. You can actually push this stuff quite far. It can be used to prove Bott Periodicity and Poincare's conjecture in dimensions greater than 4. You can also generalize from a real-valued function to a closed one-form and get the basic setup of Floer homology. I think your question about the information that a vector field encodes is interesting. This stuff barely scratches the surface (i.e. a signed count of the zeros). I think some of the guys in the '60s looked into the information one could obtain about the maximum number of linearly independent sections of vector bundles, but I think this got gradually subsumed by characteristic classes and obstruction theory. One completely different angle is periodic orbits of vector fields on manifolds, but the problem with this is that one needs more rigidity to make anything interesting happen (i.e. a symplectic/contact/Kahler structure). 12. Dec 23, 2009 ### wofsy This stuff seems really amazing. Can you give references. I have good books on charateristic classes but how about Floer homology and the Poincare conjecture? Our discussion still leaves some open questions as well. When you first thought that the manifold needed to be orientable for the index theorem to work I thought that was right but couldn't reconcile it with some simple examples e.g. the projective plane. But the vector bundle does need to be orientable in order for there to be an Euler class. This is true on any vector bundle including the tangent bundle and this I think was your original thought. It seems that the tangent bundle is an exception and for a while this had me worried. I suspect that the tangent bundle is special because it has a differential. Other bundles do not. Milnor's proof uses the differential of coordinate charts and the Morse Theory argument uses the differential of a function. I am still uneasy about this. If the vector bundle and the manifold are both orientable and have the same dimension then the Euler class is Poincare dual to the zero section so by the transversality argument you mentioned, the sum of the degrees of a vector field must equal the Euler number of the bundle. It seems that if the manifold is not orientable this transversality argument only works mod 2 but I am not sure. Anyway this again seems to be your original thought. 13. Dec 23, 2009 ### zhentil My original thought was basically that intersection numbers can only be defined mod 2. In my original line of thinking, I took the index of a vector field to be the intersection number of the section with the zero section. As for the other stuff, I'd look either at Floer's original papers or (particularly) their motivation, Witten's paper on Supersymmetry and Morse Theory. For the Poincare conjecture stuff, you can't go wrong with Milnor's book on the h-cobordism theorem, of which the high-dimensional poincare conjecture is a corollary. 14. Dec 23, 2009 ### quasar987 I read that Floer's motivation for inventing Floer homology was that he saw a way to prove Arnold's conjecture. Is there something else relating to Witten's paper that he had in mind? 15. Dec 24, 2009 ### wofsy Thanks for the references. I looked quickly at the Milnor book. It looks difficult. I think the intersection number of the manifold with itself is the Poincare dual to the Euler class for an oriented n-plane bundle over an oriented n-manifold. The Euler class seems to be just the Thom class of the zero section so the cup product of the Euler class with itself is dual to the transverse intersection of the zero section with itself (the Thom class of the transverse intersection is the wedge product of the Thom classes). So it seems but I am not sure that the index theorem works by this transversality argument for any oriented vector bundle over an oriented manifold of the same dimension. If the vector bundle is of lower dimension transverse intersection gives a submanifold. What can we say about this submanifold? I also still don't see that the manifold needs to be orientable if the vector bundle is orientable. Last edited: Dec 24, 2009 16. Dec 26, 2009 ### quasar987 The h-cobordism theorem is also presented in the book Differential manifolds of Kosinski. (It's in dover!) 17. Dec 26, 2009 ### wofsy Thaanks. What is the S-cobordism Theorem? Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
A charge of -2 C is at (-2 , 3 ) and a charge of -1 C is at (8,-2 ) . If both coordinates are in meters, what is the force between the charges? May 19, 2016 $F = 1.44 \cdot {10}^{8} N$ Explanation: $\text{ distance between "Q_1 " and "Q_1 } : r = \sqrt{{\left(- 2 - 3\right)}^{2} + {\left(8 + 2\right)}^{2}}$ $r = \sqrt{{\left(- 5\right)}^{2} + {10}^{2}} \text{ "r=sqrt(25+100)" "r=sqrt(125)" } {r}^{2} = 125$ $F = k \cdot \frac{{Q}_{1} \cdot {Q}_{2}}{r} ^ 2$ $F = 9 \cdot {10}^{9} \frac{\left(- 2\right) \cdot \left(- 1\right)}{125}$ $F = \frac{18 \cdot {10}^{9}}{125}$ $F = 1.44 \cdot {10}^{8} N$
Policy ForumEconomics # Behavioral Economics and the Retirement Savings Crisis See allHide authors and affiliations Science  08 Mar 2013: Vol. 339, Issue 6124, pp. 1152-1153 DOI: 10.1126/science.1231320 Many countries are facing a retirement savings crisis. In the United States, for example, the fraction of workers at risk of having inadequate funds to maintain their lifestyle through retirement is estimated to have increased from 31% to 53% from 1983 to 2010 (1). Roughly half of U.S. employees (78 million) have no access to retirement plans at their workplace (2). Fortunately, there are solutions to these problems. We simply have to change the choice architecture of retirement plans by utilizing the findings of behavioral economics research (3) and make such plans available to all workers. We describe a large-scale field demonstration of the potential impact of such research-based changes in how we save. One reason for the savings crisis is the ongoing shift in the private sector from defined benefit pension plans (DB, where retirement benefits are formulaic and known in advance) to defined contribution plans (DC, where benefits depend on investment outcomes). This trend is spreading to the public sector as well and is likely to quicken given the dire underfunding of many state and local pension plans (4). The United States is not alone in facing these problems. The UK is launching the National Employment Savings Trust, a national payroll savings plan similar to the New Zealand KiwiSaver program. Making a payroll-based savings plan available to everyone is essential because it is the most effective way for the middle class to save. But having a plan offered at the workplace is not sufficient. Even for those with access to an employer-sponsored plan, almost a quarter fail to join, and among those who do join, many save too little (5). There are four essential ingredients to any comprehensive plan to facilitate adequate saving for retirement: availability, automatic enrollment, automatic investment, and automatic escalation. Availability. Every U.S. worker should have easy access to a payroll deduction–based DC plan. The Obama Administration has proposed a universal program called the auto-IRA (Individual Retirement Account), which will require employers who do not offer a retirement plan to auto-enroll their employees in an IRA account. Of course, workers can opt out. The state of California has passed a similar plan called the California Secure Choice Retirement Savings Trust. Automatic enrollment. In traditional DC plans, participants must make an active decision to enroll, including picking a savings rate and an investment portfolio. Many employees intend to join but never get around to it. There is now conclusive evidence that automatic enrollment, where employees are automatically signed up unless they opt out, is extremely successful in overcoming the procrastination that can impede signing up. Opt out rates average about 10% (5, 6). Automatic investment. If employees are automatically enrolled, there has to be a default investment option. Fortunately, since the Department of Labor established the criteria for qualified default investment vehicles, both employers and asset managers have worked to create a variety of investment vehicles that provide employees with sensible diversification and an asset allocation mix that is automatically rebalanced when stock prices change (thus, buying stocks in 2009 when the market bottomed), as well as adjusting the portfolio as the employee ages. Automatic escalation. About three-quarters of automatic enrollment plans use an initial saving rate of just 3% of income (7). Research shows that when offered the default rate many passively accept it but had they been forced to make a choice on their own, some would have selected a higher rate (6). Automatic enrollment does a good job of getting people started, but employees can be stuck for years saving at an insufficient rate. We argue that the solution to the problem of saving too little is automatic escalation, a generic term for a plan we devised called Save More Tomorrow (SMT), based on behavioral economics research (8). The original SMT program has three components. First, employees are invited to commit now to increase their saving rate later, perhaps next January or a few months in the future. Selfcontrol is easier to accept if delayed rather than immediate. Second, planned increases in the saving rate are linked to pay raises. This is meant to diminish the effect of loss aversion—the tendency to weigh losses larger than gains (9). Because the increase in the savings rate is just a portion of the pay raise, employees do not see their pay fall. Third, once employees sign up for the plan they remain in it until they reach a preset limit or choose to opt out. This uses inertia to keep people in the system. At the first company that implemented SMT, employees who elected to join (and 78% of those offered the plan did) ended up almost quadrupling their saving rate from 3.5% to 13.6% in slightly less than 4 years (8). This evidence of success stimulated employers and administrators to adopt the Save More Tomorrow plan (or the generic version, automatic escalation, which does not link savings increases to pay increases). Take-up then increased considerably, helped by the passage of the Pension Protection Act of 2006, which encouraged firms to adopt a combination of automatic enrollment and automatic escalation. How automatic enrollment and automatic escalation have spread among U.S. employers is shown in the chart. By 2011, 56% of employers who offer 401(k) plans automatically enrolled employees, and 51% offered automatic escalation (10). The ideas are spreading, but has retirement saving actually increased? To address this question, we estimated the effect of automatic escalation, because automatic enrollment can have an ambiguous effect on the average saving rate. We contacted the largest 25 companies that administer retirement plans, which service roughly 90% of participants in DC plans according to the 2012 Pensions and Investments directory of retirement plan providers (11, 12) [supplementary materials (SM)]. We asked each plan provider for the following data as of the end of year 2011: the number of plan participants they serve who are currently making contributions to their plan (N); the number of plan participants who are enrolled in a SMT or other automatic escalation program (S). We received data from 13 of the 25 plan providers, covering 55% of plan participants according to the Pensions and Investment directory (13) (SM). Of the 20,628,702 contributing participants in our data, 2,268,726 are enrolled in an automatic escalation program, yielding a utilization rate (S/N) of 11%. If this utilization rate is applied to the entire universe of participants, we estimate that there are already about 4.1 million participants who are having their savings rates automatically increased. We calculated the effect of automatic escalation on retirement plan saving rates, based on the conservative assumption that salary deferral rates are increased automatically by just 1 percentage point per year for only 3 years. These are the minimum requirements set by the Pension Protection Act of 2006. Some plans go beyond this minimum, either in the rate at which deferrals are increased or in the number of years such increases are continued, so our estimate of the increase in savings is biased downward. Our estimate is also biased downward because we do not include the effect of additional matching contributions by employers, typically 50% up to some cap. At the current utilization level of automatic escalation, 11% of participants boost their salary deferral rates by 3% over 3 years, which results in an average increase of 0.33% for the universe of plan participants (11% penetration times 3% increase in deferral rate). To put this 0.33% effect in perspective, the average deferral rate is 6.2%, as reporfted by the Plan Sponsor Council of America (14). We interpret this as showing that the intervention is having a noticeable effect, even at the currently low take-up rate by employees. We estimate that automatic escalation boosted annual savings by $7.4 billion, if we assume an average annual compensation of$60,000 and a 3% increase in deferral rates (15). The next step is to increase program utilization. There are three simple ways to achieve this goal. First, it should be easier for workers to join the plan. Of the employees offered the original version of SMT, 78% signed up, in part due to the ease of doing so (employees met with a financial adviser who took all necessary steps to join). Take-up rates in most plans are much lower, in part because employees do not know the option exists or find the sign-up procedure cumbersome. Making the option more salient and making it easier to enroll will likely pay dividends. Alternatively, automatic escalation can be made the default, both for new and existing employees who are stuck at a low savings rate. Of course, in this case, opting out must be easy. Second, this feature can be included in existing DC plans offered to government workers. For example, the Save More Tomorrow Act of 2012 proposes to offer this feature to federal government workers in their existing Thrift Savings Plan. Third, automatic escalation should be included in the new plans targeting employees without a savings plan at work such as the auto-IRA and the California Secure Choice Retirement Savings program. Automatically enrolling employees at a low initial savings rate without incorporating automatic escalation is simply bad policy. One question about these efforts has until recently been impossible to answer. Does inducing larger contributions to retirement saving actually increase total saving, or does it simply shift saving from one place (say, a bank account) into another? However, new work using Danish data that include measures of household wealth suggests that when employees are automatically enrolled into a retirement savings plan, 85% of that savings is new, rather than shifted (16). Lessons from this savings example can be applied in other domains. For example, much of the rise in health care spending in the United States is not just a problem with the health care–delivery system but also inadequacies in the ways by which we encourage people to be healthy. Dealing with obesity and its health consequences, is first and foremost a behavior problem (17). If we can nudge people toward a healthier diet and more exercise, we will end up spending less delivering treatments. Similarly, in stressing incentives to encourage patients to economize, we can miss more important determinants of health outcomes. For some patients, the most important way to improve health outcomes is to make sure patients take their prescribed medicines, but many do not (18). Charging high copays in such situations is counterproductive. Choice architecture can have profound impacts on behavior, more powerful than might be achieved merely with financial incentives. ## References and Notes 1. Data provided by Aon Hewitt, which tends to focus on larger plans that are generally more innovative, so the numbers could be biased upward. By comparison, the Plan Sponsor Council of America reports in their 54th Annual Survey (19) that 46% of plans had automatic enrollment in 2011. Owing to confidentiality concerns, Aon Hewitt analyzed data on authors' behalf and only provided summary statistics displayed in the chart. 2. Pensions and Investments (2012); www.pionline.com/specialreports/dc-record-keepers/20120402. 3. According to 2010 Department of Labor data (20), the pensions and investments universe of plan providers covers more than 90% of all plan participants in the United States. 4. Firms shared data on condition of anonymity, so data are stripped of identifying information. 5. Plan Sponsor Council of America, 55th Annual Survey, 2012; www.psca.org/55th_survey. 6. At the authors' request, a large-plan administrator calculated a median income of $62K and average income of$93K for 1.8 million contributing participants in its plans, which suggests that our calculation may be conservative. 7. Plan Sponsor Council of America, 54th Annual Survey, 2011; www.psca.org/54th-annual-survey. 8. U.S. Department of Labor, www.dol.gov/ebsa/PDF/2010pensionplanbulletin.PDF. View Abstract