1. Nov 9, 2007

### corny1355

1. The problem statement, all variables and given/known data
The random variable X has a double-exponential distribution with parameter p > 0 if its density is given by f_X(x) = (1/2)e^(-p|x|) for all x. Show that the expected value of X is 0.

2. Relevant equations
I know that the expected value of a random variable X is ∫ x · f(x) dx.

3. The attempt at a solution
We are told that f_X(x) = (1/2)e^(-p|x|), so I'm guessing you have to do the following integral from 0 to infinity: ∫ x · (1/2)e^(-p|x|) dx. But I'm unsure how to compute this integral.

Last edited: Nov 9, 2007

2. Nov 9, 2007

### Galileo

If your sample space were $[0,\infty)$, how could the average value of X be 0? Also, there would be no need for absolute values if x couldn't be negative. I'm sure the problem implicitly assumes that X can take all values in R. You could evaluate the integral by splitting it into two pieces. There's a faster way, though: maybe drawing the graph of f will help.
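Galileo's hint amounts to noticing that the integrand x·f(x) is an odd function, so its integral over a symmetric interval vanishes. A quick numerical sketch of that argument (written with the normalized density (p/2)e^(-p|x|); the constant out front does not affect the symmetry, so the same conclusion holds for the density as posted):

```python
import math

def integrand(x, p=1.0):
    # x * f(x) with f(x) = (p/2) * exp(-p|x|); this is an odd function of x,
    # so its integral over a symmetric interval is zero
    return x * (p / 2.0) * math.exp(-p * abs(x))

def trapezoid(f, a, b, n=100000):
    # plain trapezoidal rule, no libraries needed
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# [-40, 40] is effectively (-inf, inf) here since the tails decay exponentially
print(abs(trapezoid(integrand, -40, 40)) < 1e-9)  # symmetric nodes cancel pairwise
```

Splitting the integral into (-∞, 0] and [0, ∞) and substituting x → -x in the first piece shows the cancellation analytically.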
# Slow (145RPM) balance specifications

(OP)
I am trying to find some "slow" speed balance specifications. It seems to me that the standard ISO or API specs are perhaps not valid for rotors in the 100 RPM or lower speed ranges. One can validate this, to a degree, by doing some math and calculating what kind of actual forces are developed at operating speed for various balance specifications. However, the formula I use,

0.062 × (N/1000)² × W (in grams) × distance (in inches) = force in pounds,

is not, I suspect, valid for ALL speed ranges. When used for a rotor rotating at 5 RPM, for example, the result does not make much sense. On a horizontal rotor I use, as a minimum, a balance that will prevent the rotor from finding its own heavy spot; typically, on our Schenck HB5 setup, this works out to a static API 4W/N calculation. However, this is more of a safety-driven specification than an operational one.

The rotor that initiated this question is a vertical application: a 16-foot shaft turning an impeller/agitator at 145 RPM, with a combined weight of 3600 pounds (1600 and 2000). So my "safety" induced spec is not required, and I need to justify an operational one. Anyone with any thoughts? This is a bit of an ongoing job, and so far I have picked an ISO 1940 spec of G16 for the impeller and G6.3 for the shaft.
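The rule-of-thumb formula above is just F = m·r·ω² with the unit conversions (grams to kg, inches to metres, RPM to rad/s, newtons to lbf) folded into the 0.062 constant. A short sketch makes the low-speed point concrete; the 38 kg·in of unbalance (38,000 g at 1 in) is used purely as an illustration:

```python
def unbalance_force_lbf(rpm, grams, inches):
    # OP's rule of thumb: 0.062 x (N/1000)^2 x W(g) x r(in) ~ lbf of
    # centrifugal force (the constant folds all the unit conversions)
    return 0.062 * (rpm / 1000.0) ** 2 * grams * inches

f145 = unbalance_force_lbf(145, 38000, 1)  # ~50 lbf at operating speed
f5 = unbalance_force_lbf(5, 38000, 1)      # well under 0.1 lbf at 5 rpm
```

At 5 RPM the same unbalance produces a force that is negligible next to the rotor's static weight, which is exactly why a pure force criterion "does not make much sense" there.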
Still, that leaves 38 kg·in of unbalance in the impeller when I do the standard math.

### RE: Slow (145RPM) balance specifications

I did a quick check of your numbers, and 38 kg·in for the impeller looks about right. This corresponds to a radial force of 52 lbf at a speed of 145 rpm. Since the shaft and impeller form a single rotating unit, why have you chosen different G values? Apply G6.3 to the impeller and the radial force reduces to just over 20 lbf. I would suggest you get these components onto the balancing machine and get the residual unbalance value down to the lowest resolution that the machine can deal with.

### RE: Slow (145RPM) balance specifications

If the 16-foot shaft is mounted to a face at the upper end, the accuracy of the face (axial runout) and the pilot diameter (radial runout) WILL combine with the accuracy of the shaft features to move the impeller off center. ISO G6.3 @ 145 rpm equates to less than 0.015 inch eccentricity, but that is also equal to a small change in centering: moving, say, from 0.007" offset at 0 degrees to 0.007" offset at 180 degrees. If the installed runout and clearance of the impeller centering feature of the shaft are not much better than 0.007" TIR, then the tag that says "balanced to G6.3 (when centered in the balance machine)" should disappear like smoke.

### RE: Slow (145RPM) balance specifications

(OP)
Thanks to both. The different ISO specs were picked for purely practical reasons. The "shaft" is actually a 12" diameter pipe with ordinary (gusseted) pipe flanges (16" OD). It is easy to balance by welding weights inside the pipe. Both faces and pilots are machined to be true; this was one of the initial problems, as the fab shop that made these actually thought they were just sections of pipe. The impeller looks very much like the juicer you use to get orange juice from an orange: 5 vanes cast into a top plate (44" diameter), with openings from the center.
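For reference, the G-grade arithmetic behind figures like these follows the standard ISO 1940-1 relation e_per · ω = G (with G in mm/s): the permissible residual eccentricity is e_per = G/ω, and multiplying by rotor mass gives the permissible residual unbalance. A short sketch, using nothing but unit conversions:

```python
import math

def iso1940_permissible(G_mm_s, rpm, rotor_mass_kg):
    # ISO 1940-1: e_per * w = G, with G in mm/s and w in rad/s
    w = rpm * 2.0 * math.pi / 60.0            # rad/s
    e_per_mm = G_mm_s / w                      # permissible eccentricity, mm
    U_g_mm = e_per_mm * rotor_mass_kg * 1000.0 # permissible unbalance, g·mm
    return e_per_mm, U_g_mm

# G6.3 at 145 rpm for a 2000 lb (~907 kg) impeller
e, U = iso1940_permissible(6.3, 145, 2000 * 0.4536)
```

For G6.3 at 145 rpm this gives e ≈ 0.41 mm (≈ 0.016 in), the same order as the eccentricity figure quoted above.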
A large quantity of air is blown down the pipe/shaft; this, coupled with the rotation, helps separate the sand from the liquid (froth flotation in a tar sands operation). To balance, I can cut off a chord (from the top plate) but must stay ~2 inches from the vanes to prevent the air from short-circuiting, so I am limited to removing ~3 kg per segment. The spec ISO G16 was picked more to enable a "balanced" status than as an engineered balance. Some of these castings are very badly centered; the worst one so far required a 6.47 kg correction at 22 inches. That one required grinding 3 chords back almost to the maximum to achieve an "in tolerance" status, even though it still required a correction of 900 grams. Because of these large corrections that are still within tolerance, I was hoping to find some engineering data/specs on relatively slow-speed rotors: an area where, in my opinion, the traditional formulas do not apply as readily.

### RE: Slow (145RPM) balance specifications

"It seems to me that the standard ISO or API specs are perhaps not valid for rotors in the 100 RPM or lower speed ranges."
Could you explain why? ISO 1940 is as good as anything you will find, since it takes account of operating speed.

"this works out to a static API 4W/N calculation... however this is more of a safety-driven specification than an operational one"
4W/N works out at about 67% of the G1 allowance, so it is a very good balance spec.

"The spec ISO G16 was picked more to enable a 'balanced' status than an engineered balance"
What's the difference? You could have picked ISO G4000 and made the same statement. I suspect that you are not doing this regularly, and would suggest that you take the opportunity to get the balance as good as you possibly can before the machine is built up. TMoose's comments are very valid: how are the shaft and impeller joined together?
I would suggest that the best results will be obtained by balancing as a built unit, possibly dowelled to maintain relative positions if you have to disassemble. I am not sure what the concerns are here: balancing is carried out to reduce the forces acting (principally) on the bearings, and large slow-moving machines can have larger allowable unbalance since the resulting force is lower.

"Some of these castings are very badly centered; the worst one so far required a 6.47 kg correction at 22 inches..."
Surely that's a problem with the casting. Could you consider changing the process so that the castings are better centred, or change some of the casting dimensions to allow better scope for balancing?

### RE: Slow (145RPM) balance specifications

(OP)
I am not so sure that the ISO 1940 specs adequately deal with slow-speed rotors, and they specifically do not deal with slow vertical rotors. One can certainly use them, but I am questioning their validity. And, as my original post asked, is anyone aware of any "other" standards one could consider? API 4W/N is overkill for many jobs, and is addressed in API 610 (and others) as being essentially non-repeatable and thus somewhat a waste of time. Meeting a customer's spec of ISO G2.5 will leave enough imbalance in a large bull gear rotating at 30 RPM for it to find its own heavy spot, and thus it becomes a safety issue (my opinion). My own personal spec, if time and conditions allow, is to balance all rotors that run at less than 1000 RPM so that at least statically they meet 4W/N. The balance report states ISO G2.5, but I have done the math and know that statically it meets 4W/N. True. But among the applications listed in the ISO 1940 charts, this application comes closer to ISO G16 than to the crankshaft of a steam-propelled frigate.
I have been balancing on a Schenck CAB 690 on a frame HB5 machine since 1993, full time, for a large jobbing shop. I consider myself quite knowledgeable on the technical end of balancing: self-taught, but with an inquiring mind. Thus my request for any additional thoughts from my peers on slow-speed vertical specifications.

But to get back to why I quit where I do (at ISO G16): it's strictly practical; there is just not the material available to go all the way. Picking ISO G16 allows me to balance the impeller/agitator and show the customer a balance report that reads "in tolerance". How the assembly is put together, and the subsequent runouts, are beyond our control. I would be very surprised if the runouts (TIR) are less than 0.125" at the impeller mounting flange in a free vertical state. Once the rotor is influenced by the hydraulic forces, it is anyone's guess. Even if balanced perfectly, I calculated that once a runout of 0.041" is reached (0.082" TIR) the rotor has induced 38 kg·in of imbalance. So much for the balance!

Back to my original question. There is a difference in the forces generated (and restrained by the bearings) between a vertical and a horizontal slow-speed application. Consider a large disk turning at, say, 10 RPM with a large imbalance. Vertically, the unbalance weight will want to bend the shaft or cause it to lean off vertical; this will apply a side load on the bearing, but the torque required to turn the rotor will be constant. Now consider this same rotor turning horizontally: the load on the (overhung) bearings will be essentially equal as the shaft is rotated, but the torque required to turn the rotor will vary considerably. At 10 RPM, centrifugal forces are not sufficient to influence either case, yet the horizontal rotor clearly needs balancing while the vertical one does not. At some speed, clearly, the centrifugal forces become an issue. I was hoping to find some information and rationale behind that point.

Yes, the casting issue needs addressing...
Hard to tell a customer, though, that the $20,000 casting he had shipped from Brazil is junk.

### RE: Slow (145RPM) balance specifications

For my money, acceptable bearing loads are the first criterion, followed by how the machine "feels" when running. The variable torque of a horizontal shaft is first rank too, but, as you said, it is normally removed on higher speed machines. If more impellers are in production, perhaps a mass-centering operation could be added before machining the locating features. Could you introduce an eccentric "adapter" between the impeller and the "shaft" to shift the impeller CG closer to the rotating center? Similarly, if the "high spot" of a circular feature is marked on the impeller, comparison with the as-installed high spot would help determine where you really are, balance-wise. If the installed centerline runout of a beautifully balanced 125 rpm rotor is 0.125" (0.062" eccentricity), you are back up over G40.

### RE: Slow (145RPM) balance specifications

"I am not so sure that the ISO 1940 specs adequately deal with slow-speed rotors... is anyone aware of any 'other' standards one could consider?"
Let's get right back to basics: all you are trying to achieve when balancing a machine is to get the centre of mass to coincide with the centre of rotation. You cannot achieve perfect balance for any machine, so there are guidelines such as ISO 1940 which suggest acceptable levels of unbalance for machines based on their mass, speed and duty. If you choose not to believe the guidelines, then you must surely accept that unbalance is measured in units of mass × distance (g·mm or oz·in or even kg·in). So, if ISO 1940 doesn't float your boat, you can invent your own standard, but ultimately you have to go for the smallest value of kg·in that fits your scenario.
"API 4W/N is overkill for many jobs, and is addressed in API 610 (and others) as being essentially non-repeatable and thus somewhat a waste of time."
This is what API 610 actually says:

API Standard 610, Centrifugal Pumps for Petroleum, Petrochemical and Natural Gas Industries:
5.9.4.4 If specified, impellers, balancing drums and similar rotating components shall be dynamically balanced to ISO 1940-1 grade G1 (equivalent to "4W/n" in US Customary terminology). ... With modern balancing machines, it is feasible to balance components mounted on their arbors to U = 4W/n (nominally equivalent to ISO grade G1), or even lower depending upon the weight of the assembly, and to verify the unbalance of the assembly with a residual unbalance check. However, the mass eccentricity, e, associated with unbalance less than U = 8W/n (nominally equivalent to ISO grade G2.5) is so small (e.g. U = 4W/n gives e = 0.000 070 in for an assembly intended to run at 3600 r/min) that it cannot be maintained if the assembly is dismantled and remade. Balance grades below G2.5 (8W/n) are, therefore, not repeatable for components.

API 610 does not say that 4W/N is overkill, and neither does it say that it is a waste of time. What it does say, and explain, is that balance grades below G2.5 are not repeatable for components or assemblies that are balanced and then disassembled for rebuild. What this is telling you, and what you are ignoring, is that you should either be balancing your impeller and shaft as a single assembled unit or improving the build.

"Picking ISO G16 allows me to balance the impeller / agitator and show the customer a balance report that shows 'in tolerance'."
Isn't this somewhat dishonest? You are simply selecting the G value to meet your capability and then using this value, taking advantage of the customer's naivety.
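The eccentricity figure in the API quote is easy to verify: U = 4W/N is an allowance in oz·in per pound of rotor weight, so the implied mass eccentricity is e = (4/N)/16 inches, with the factor of 16 converting ounces to pounds. A one-line check:

```python
def api_4w_over_n_eccentricity_in(rpm):
    # U = 4W/N gives oz·in of unbalance per lb of rotor weight; dividing by
    # 16 oz/lb converts this per-pound allowance to inches of eccentricity
    return (4.0 / rpm) / 16.0

e = api_4w_over_n_eccentricity_in(3600)
# e comes out at about 0.000069 in, matching API 610's rounded 0.000 070 in
```

This is the number that makes the repeatability argument: no assembly fit can hold its centre of mass to within seventy millionths of an inch through a teardown and rebuild.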
"I would be very surprised if the runouts (TIR) are less than 0.125" at the impeller mounting flange in a free vertical state... the rotor has induced 38 kg·in of imbalance. So much for the balance!"
What has this got to do with the balance? If you incorrectly assemble already-balanced components, what on earth do you expect? This is really bad engineering.

"There is a difference in the forces generated (and restrained by the bearings) between a vertical and horizontal slow-speed application... At 'some' speed clearly the centrifugal forces become an issue."
This is just techno-babble. There is no difference in the force generated by a horizontal unbalance and a vertical one; the equation is quite simple: force = mass × radius × (angular velocity)². The bearings of a horizontally mounted machine simply also have to take account of gravity (i.e. weight).
Let's look at your 2000 lb impeller at 145 rpm: its static weight is 8909 N. You have suggested 38 kg·in as the residual unbalance; this corresponds to 950 kg·mm, or 0.950 kg·m (this is the m × r in force = m·r·ω², and you need to keep it in consistent units). Doing the sums shows that at 145 rpm the impeller unbalance produces a radial force of 215 N (49 lbf), which is a tiny fraction of the static weight; at 10 rpm, the unbalance force would be around 0.25 lbf.

"hard to tell a customer though that the $20,000 casting he had shipped from Brazil is junk"
Of course it isn't. Maybe you should try working with your customer so that he can request better castings from his supplier, and you and he can work together to get a better final product.

### RE: Slow (145RPM) balance specifications

(OP)
Thanks to both for your thoughts. This application is "new technology" and the customer is having numerous issues and ongoing design changes. However, for the moment we have what we have and must try to accommodate the customer as best we can.

"Let's get right back to basics: all you are trying to achieve when balancing a machine is to get the center of mass to coincide with the center of rotation."
This is true, and to determine the location of this axis the balance machine relies on the forces generated by centrifugal force. So, in the absence of sufficient centrifugal force, how then to balance? Given a horizontal application, one can at least do a static balance without even running the balance machine. In a vertical application there is no "static". Given that difference, I was hoping there were some other standards.

"API 610 does not say that 4W/N is overkill, and neither does it say that it is a waste of time."
I agree, it does not. "Overkill" and "waste of time" are not the kind of words one would find in an API manual.
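The sums quoted here are straightforward to reproduce; the only subtleties are converting kg·in to kg·m and rpm to rad/s before applying F = U·ω². Working from 38 kg·in without the intermediate rounding to 950 kg·mm lands a few percent above the 215 N quoted:

```python
import math

def unbalance_force_N(U_kg_in, rpm):
    # F = U * w^2, with the unbalance U = m*r in kg·m and w in rad/s
    U_kg_m = U_kg_in * 0.0254          # kg·in -> kg·m
    w = rpm * 2.0 * math.pi / 60.0     # rpm -> rad/s
    return U_kg_m * w * w

f145 = unbalance_force_N(38, 145)      # roughly 220 N, i.e. about 50 lbf
f10 = unbalance_force_N(38, 10)        # roughly 1 N, i.e. about 0.24 lbf
```

Either way the conclusion stands: the unbalance force at these speeds is a small fraction of the 8909 N static weight.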
"What this is telling you, and what you are ignoring, is that you should either be balancing your impeller and shaft as a single assembled unit or improving the build."
I agree, if we were trying to achieve better than ISO grade G2.5, which we are not.

"Isn't this somewhat dishonest? You are simply selecting the G value to meet your capability..."
No. I selected a G value based on what is achievable from a practical perspective, coupled with the application and the general rotor types listed in the ISO 1940 nomogram. However, these do not address low-speed applications (the chart starts at 100 RPM) or vertical ones.

"This is just techno-babble. There is no difference in the force generated by a horizontal unbalance and a vertical one..."
I agree, but only when the speed of rotation generates enough centrifugal force to mask the forces of gravity. That speed is where a vertical application and a horizontal one would see a difference. Consider a rotor, horizontal between bearings, with a large unbalance: the weight on each bearing will be constant, due to gravity. Now consider this same rotor vertical: gravity will attempt to move the axis off vertical, putting a side load on the bearings, and as the rotor turns this force on the bearings rotates with it. This is a force not seen in the horizontal application so long as the RPM is low.

"Doing the sums shows that at 145 rpm, the impeller unbalance produces a radial force of 215 N (49 lbf)... at 10 rpm, the unbalance force would be around 0.25 lbf."
Interesting, given that the ultimate goal of balancing is to keep the centrifugal forces within an amount the bearings can manageably restrain.
Even with the crude ISO G16 spec, your calculated 49 lbf is only about 2.5% of the static journal load, which seems reasonably acceptable.
# LaTeX Chapter Style Example Report

LaTeX's standard document classes determine which sectioning commands are available. `article` is the most commonly used class for papers; `report` and `book` are variants for longer documents, and \chapter is available only in the report and book classes. The sectioning commands structure your text into units: \part, \chapter (report and book only), \section, \subsection and \subsubsection. You do not specify section numbers yourself; LaTeX numbers the units automatically, and the number given to a \section depends on the chapter it sits in. The book class has some advantages over the report class for long, highly structured documents divided into parts and chapters, and templates such as the KOMA-script based novoid/LaTeX-KOMA-template offer a generic starting point for midsize and larger documents.

A thesis or large report is typically split across multiple TeX files (one each for the abstract, acknowledgments, and every chapter), pulled together by a master file, so that each chapter takes its input from its own file. Where you want a chapter heading, put \chapter{Your Chapter Title Here}.

Several tools change how chapter headings and page furniture are rendered:

- The fncychap package, loaded as \usepackage[style]{fncychap}, provides predefined chapter-heading designs; its documentation illustrates each option on both numbered and starred chapters (Sonny being one of the styles).
- The memoir class ships with a collection of chapter styles of its own, and fancy chapter headings can also be drawn from scratch with TikZ.
- The fancyhdr package customizes page headers and footers, for example putting the running chapter name in the left header. LaTeX's built-in page styles (selected with \pagestyle) can likewise be overridden: the headings style puts running section and chapter names in the header, while chapter title pages default to the plain style. With \documentclass[12pt, twoside, openany]{report} you can give chapter title pages a different page style, for example by patching the \chapter command.

Other common formatting tasks covered by the references collected here: creating a title page for a book, report or degree thesis; customizing the numbering or lettering style of chapters, including documents that contain both chapters and appendixes under different numbering systems; and choosing a bibliography style (an alternative to the plain style is unsrt, which lists references in the order they are cited).
More generally you can configure the depth of the chapter and section A well customized latex style can be a Example 2: Put the chapter number in 25/04/2017 · Describes how to use several different numbering systems in documents that contain both chapter both chapter and appendix style level. Example 1 LaTeX/Title Creation. Sample LaTeX documents; Index; A title page for a book or a report to get a university degree {Bachelor, Master, If, for example, the style you use causes incollection (for a chapter in an edited It requires the LaTeX style file natbib.sty to produce citations in IEEE article templates help you IEEE Article Templates. Easily format your Visit the conference publishing website to access Word and LaTeX Home → General LaTeX tips and tricks → Fancy chapter titles 13. Example usage. In the preamble: The ‘Lenny’ style is one of the predefined styles. With report or article classes, (in the LaTeX preamble): Specifically, set the style to "enumerate", enter the brackets in ERT, change the environment Using LATEX for report writing Hans Fangohr (for example the pdf file provided on (this is type writer style) Write scientific documents in LaTeX and perform BibTeX Style Examples , volume = 4, series = 5, chapter = 8 , pages LaTeX Page Styles The \documentstyle for example, to give the , you can use them to override the normal headings in the headings style, since LaTeX uses these LaTeX: Sample Template for a Report. The following is a template for a report using LaTeX. If you don't know how to use LaTeX, you can search the Knowledge Base for Write scientific documents in LaTeX and perform BibTeX Style Examples , volume = 4, series = 5, chapter = 8 , pages The titlesec, titleps and titletoc Packages* Chapter Example, 23. 1. Introduction †For bug reports, I am distributing sample Latex files for the Report, For chapters it will take input from different Latex files meant for that chapter only. For an example, LaTeX/Title Creation. 
Sample LaTeX documents; Index; A title page for a book or a report to get a university degree {Bachelor, Master, I am distributing sample Latex files for the Report, For chapters it will take input from different Latex files meant for that chapter only. For an example, If, for example, the style you use causes incollection (for a chapter in an edited It requires the LaTeX style file natbib.sty to produce citations in LaTeX/Title Creation. Sample LaTeX documents; Index; A title page for a book or a report to get a university degree {Bachelor, Master, 4 ♦ Chapter 1. LATEX Basics 1.4.3 Special Characters Certain characters have special meaning to LATEX. An example is the % sign, which indicates that the remainder Using LATEX for report writing Hans Fangohr (for example the pdf file provided on (this is type writer style) \usepackage[style]{fncychap} If the option, style, example 9 1 Figure 3.1: The stared chapter style sonny CHAPTER 1 P ack age description I n this c hapter a short in A Sample Thesis Report, Showing the Reader the Wonder of Formatting Documents Using LATEX Claire Connelly \chapter book & report only \section \subsection
Recent questions tagged two

• Calculating Probability Using Two Dice
• How do I calculate the length between two points in $R^2$ and $R^3$?
• What is the dot (Euclidean inner) product of two vectors?
• Interactive Quiz: Intraoperative Nerve Monitoring Two
• How do I compute the probability of the union of two events?
• How do I add two complex numbers?
• Explain why all the other even numbers after 2 are not prime.
• How can we prove that 2+2 always equals 4?
• Why does binary only need the two digits 0 and 1?
• Describe two methods of direct marketing
• What is half of two plus two?
• What is the sum of the whole numbers between 100 and 1000 which are divisible by 11?
• What does it mean that two shapes are similar?
• What is the name given to all the numbers that can be expressed as the sum of two consecutive whole numbers?
• Give the symbol for two halogens
• How can I sync two local directories?
• Simplify $(x^{2} - 9)(x^{4} - 16)$ using the difference of two squares method.
# Oracle Publishes Report on the State of Java’s Module System

Oracle's Java Platform Group chief architect Mark Reinhold has published a report on the State of the Module System, with an emphasis on what the modularization objectives are (and aren't). The publication has triggered comments among users due to the apparent overlap with existing frameworks, particularly OSGi.

As explained in the report, and fully detailed in JSR-376 and in the Module System project page, the module system is meant to address two omissions in the current Java accessibility model:

• Reliable configuration: the current way components access classes from other components through the class path is considerably error-prone, particularly when attempting to use classes that aren't on the class path or that are present multiple times.
• Strong encapsulation: there is no way to restrict which classes a particular component exposes to other components; every class declared public is accessible from outside.

Full details can be found both in the report and in previous InfoQ articles, but to summarize: each component is typically (but not necessarily) represented by a jar file that includes a module descriptor file called module-info.java with the following structure:

module com.foo.bar {
    requires com.foo.baz;
    exports com.foo.bar.alpha;
    exports com.foo.bar.beta;
}

One or more exports lines indicate the packages that are to be accessible from other components, while zero or more requires lines indicate the modules this module depends on. This system provides a method for assessing at compile time whether the accessed types have the right visibility (i.e.
they are public and exported by the required component), and at run time whether the necessary modules are available, without having to inspect the full class path. It is here that the similarities with OSGi become manifest.

## OSGi Background

OSGi is a modularization system and service platform for Java that implements a complete and dynamic component model. First proposed in 1998 with JSR-8, and with subsequent reviews published over time (the last in 2014), OSGi allows the definition of bundles (akin to modules), which take the form of a JAR file with the following MANIFEST.MF file:

Bundle-Name: Hello World
Bundle-SymbolicName: org.wikipedia.helloworld
Bundle-Description: A Hello World bundle
Bundle-ManifestVersion: 2
Bundle-Version: 1.0.0
Bundle-Activator: org.wikipedia.Activator
Export-Package: org.wikipedia.helloworld;version="1.0.0"
Import-Package: org.osgi.framework;version="1.3.0"

(Sample taken from Wikipedia.)

It is apparent that, in spite of the format differences, the intent expressed is similar to that of the Java Platform Module System. Indeed, the similarities between the Java Platform Module System and OSGi have been noticed since the initial attempts to modularise Java started in 2005 with JSR-277, the "Java Module System". Initially aiming at Java 7, JSR-277 focused on easing distribution and execution of Java artefacts. Despite having a nearly identical name to JSR-376, that initiative had slightly different objectives; although it was tasked with fixing the problem of "reliable configuration", it did not attempt to tackle the issue of "strong encapsulation". And in contrast to JSR-376, it also tried to add a versioning model to Java artefacts. The similarities between these objectives and the functionality provided by OSGi were stark enough for the authors to initially consider OSGi as a solution, later discarding it on the grounds that OSGi's version control was too weak.
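To make the strong-encapsulation objective described earlier concrete, here is a sketch of two module descriptors. All module and package names are hypothetical (following the report's com.foo.* convention), and each declaration would live in its own module-info.java file; this is an illustration, not code from the report:

```java
// Hypothetical producer module: only the api package is exported.
module com.foo.baz {
    exports com.foo.baz.api;
    // com.foo.baz.internal is deliberately NOT exported: its public classes
    // cannot be referenced from other modules, at compile time or run time.
}

// Hypothetical consumer module: the dependency must be declared explicitly.
module com.foo.bar {
    requires com.foo.baz;   // grants access to com.foo.baz's exported packages only
}
```

With descriptors like these, code in com.foo.bar can reference types in com.foo.baz.api, while a reference to anything in com.foo.baz.internal is rejected by the compiler even though the classes there are public.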
JSR-294 was created shortly after that, with the objective of implementing "Improved Modularity Support in the Java Programming Language". Also aiming at Java 7, this JSR was created to add the concept of modules (called "superpackages") to fix the strong-encapsulation problem; this concept aligns with the current Java Platform Module System project. Both JSR-277 and JSR-294 have been dormant since 2012, when they were labeled as such and the Java 7 target was dropped, and both are superseded by JSR-376. Another link between OSGi and the modularisation of Java can be found in JSR-291, "Dynamic Component Support for Java SE" (essentially an implementation of OSGi Service Platform Release 4). That JSR referenced JSR-277, the original Java Module System, to clarify the difference in scope between the two initiatives: JSR-277 would focus on the static module definition to be used by Java, while JSR-291 focused on dynamic components that can be loaded and unloaded at runtime. Finally, JSR-376 itself also makes a reference to OSGi, mainly to discard it as a valid solution because its scope is far greater than that of the JSR-376 spec. Given all the above, it seems reasonable that a number of users would have difficulty differentiating the new Module System from OSGi. However, the conclusion is that the module system and OSGi are complementary systems serving different purposes: OSGi is a framework built on top of Java to create an environment where bundles can be managed dynamically in an always-running application, while the module system is a new capability of Java itself that allows tighter and easier control of statically managed modules.

## Differences between Java Platform Module System and OSGi

To understand this a bit better, InfoQ talked to Holly Cummins, co-author of Enterprise OSGi in Action.
While the following doesn't intend to be a thorough description of the differences between the Java Platform Module System and OSGi, it gives the reader a basic understanding of how different their objectives are. On one hand, the new Module System in Java will provide an easy way to check the visibility of packages and classes at compile time; however, when we asked whether OSGi's bundles can be used in the same way, Holly declared that "the answer for this is surprisingly complex". OSGi dependencies are expressed in a bundle's manifest, and there are two basic approaches to creating the manifest: "code-first" and "manifest-first". With the code-first approach (used in the bnd tool and the Maven bundle plugin), the list of dependencies isn't enforced at compile time; it's actually generated at compile time. The compilation goes ahead in the usual way, and then the tooling works out what's needed at runtime based on what was needed at compile time. The alternate approach is manifest-first, which is used by the Eclipse PDE tooling. In this approach, dependencies are declared in the manifest, and the Eclipse IDE will use the manifest to work out what classes your code can see and highlight cases where a dependency is missing. There's a PDE command-line build and a Maven plugin called Tycho, and both of those enforce the dependencies at compile time. However, it's important to note that this visibility isn't enforced by OSGi itself but by the tools accompanying PDE; since not all teams using PDE use one of those tools, there's a possibility to miss a check at compile time. Another key aspect of the new Module System is the ability to restrict the modules that a particular package is exposed to, useful in situations where a set of related modules need to access each other but shouldn't be accessed beyond that. As indicated in Mark Reinhold's report, this can be achieved using the following syntax:

module java.base {
    ...
    exports sun.reflect to java.corba,
                           java.logging,
                           java.sql,
                           java.sql.rowset,
                           jdk.scripting.nashorn;
}

OSGi didn't initially have this capability, but when it was added it again went further than the objectives of the Module System. As Holly explains, "Any bundle can register a resolver hook, and that can be used to filter matches, so that packages are only exposed to specific bundles. You could use the same mechanism to do things as sensible as exposing packages to bundles which declare certain metadata, or as crazy as exposing a package only on Tuesdays".

## Comments

##### A Tad Confused, by Peter Kriens

The history of JSRs is long and confusing, but I am not sure this article helps to clarify the situation; I was especially puzzled by the last section, and not sure how to relate to it. To summarize: the JSR 376 proposal consists of a new accessibility mode for modules, a dependency model, and a service model. The dependency model is limited to wiring versionless modules within a given module path (a class path for modules). The real dependencies (with versions) will have to come from a build system using proprietary conventions. The service model is based on the Java service loader model. Since resources will no longer be accessible between modules, a modular extension was needed. The new module accessibility will be eagerly used by OSGi to protect the runtime and probably enforce compile visibility.
The service loader model will be transparently supported as it is already today. However, I am not sure what to do with the dependency model. OSGi provides a much richer dependency model and many patterns (e.g. whiteboard, extender) that support popular application techniques far outside the ambition horizon of JSR 376. JSR 376 will not provide any backward compatibility for existing applications except to run as a classic class path application. Applications that want to move to JSR 376 will have to go through the painful process of modularizing, because JSR 376 will punish any illegal crossing of module boundaries. (A lot of people will discover that their babies are not as modular as they thought.) Many people claim OSGi is hard without acknowledging that modularizing applications is the hard part, mostly because Java (EE) applications have so many unmodular practices. JSR 376 will demonstrate that OSGi was just the messenger and actually not the cause. If you go through the pain of modularizing, then OSGi seems a much better choice because of its maturity, support, tooling, and feature richness. We've got 15 years of experience and have tested solutions to virtually all the problems that you will encounter when you modularize. Since this article even confused me about OSGi, you might be attracted to JSR 376 because it looks so much simpler. Heck, it is simpler! However, when you traverse down the modularization hole of your application you will quickly realize that those additional features OSGi provides actually solve your problems.

##### Re: A Tad Confused, by Anthony

I'm not sure what you meant to say with "JSR 376 will not provide any backward compatibility for existing applications except to run as a classic class path application", but isn't backward compatibility provided by the concept of "unnamed modules"?
As for OSGi: about 3 years ago I wanted to learn it, so I did the Apache Felix tutorials ( felix.apache.org/documentation/tutorials-exampl... ). Given that Felix is one of the major implementations, I thought that would be a good starting point, but I found the tutorials to be quite poor (and even today, several examples are "Coming soon...", just as they were back then). Moreover, OSGi struck me as rather chaotic: e.g. there were iPOJO, Blueprint, Declarative Services, ... and I had a hard time trying to figure out what the current best practices were. In contrast, after reading "The State of the Module System", it comes across as a natural extension to the Java platform and it all makes sense to me. In combination with the quick start guide, I already feel like I "got it". Of course it's simpler and doesn't provide all the features OSGi offers, but I'm wondering: why would I need those features? Maybe it's, as you say, because I haven't traversed down the modularization hole yet, but at this point I feel JSR 376 will meet the needs of most applications. What I would like to read a year from now is a book with the following starting point: a JSR 376-modularized Java SE application which uses ServiceLoader & CDI 2. It would then go on to explain the application's problems and how OSGi allows one to solve them.

##### Re: A Tad Confused, by Peter Kriens

Portability: since modules will be encapsulated, things like annotation processing will not work out of the box. Applications and spec providers will have to be adapted to take modules into account (and this is the way it should be). About the tutorials: yes, we are aware of it. Since OSGi has been around so long, there is just a lot of stale and bad stuff from loads of different people. The OSGi enRoute project (enroute.osgi.org) is an attempt to provide an up-to-date, easy entry into OSGi; hopefully we can get this to the top of Google searches.
Though I would love to accept the challenge to write your proposed book, I lack the funds to pursue this. What I could do is take a simple but typical application and port it to OSGi enRoute to show how powerful OSGi is. Pet store, or do you have a better example?

##### Re: A Tad Confused, by Anthony

Thanks for your reply and for mentioning the OSGi enRoute project in particular. It looks promising and has surely renewed my interest in learning OSGi. As for getting it to the top of Google searches: searching for "OSGi tutorial" lists www.osgi.org/Technology/HowOSGi as the second result. However, there's no mention of the enRoute project there, so giving it a prominent place on that page would already help a lot in increasing its visibility. An OSGi enRoute pet store example which showcases how powerful it is would definitely be nice. This would allow me to invert the idea of the book I proposed: take the example OSGi application and port it to a non-OSGi application myself (by trying to replace it with JSR 376, ServiceLoader and such). Then I'd see for myself what advantages OSGi has and where JSR 376 falls short.

##### Re: A Tad Confused, by Peter Kriens

Thanks for the tips; I will work with OSGi marketing to correct this. OK, I will look at the pet shop, but it will take some time; I am not full-time on this.

##### Project Jigsaw might be the best thing that could have happened to OSGi, by Victor Grazi

Modularization is a new concept to most Java developers. OSGi may be light-years ahead of Project Jigsaw, but most Java developers still struggle to get their arms around it (Anthony above expressing some common emotions), and OSGi has accordingly struggled to break in.
That said, once modularization becomes part of the Java core tool set, developers will begin to embrace it en masse, and as they do so, they will seek more robust and more mature solutions. Enter OSGi!
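Peter Kriens notes above that JSR 376's service model is based on the Java service loader. As a minimal, self-contained sketch of that java.util.ServiceLoader mechanism: the Greeter interface below is a hypothetical example, and since no provider is registered here, the lookup simply finds nothing.

```java
import java.util.ServiceLoader;

public class ServiceLookup {
    // Hypothetical service interface. Real providers would be listed in
    // META-INF/services/... on the class path, or declared with a `provides`
    // directive in module-info.java on the module path.
    public interface Greeter {
        String greet(String name);
    }

    static long countProviders() {
        long n = 0;
        // ServiceLoader discovers implementations at run time; no provider is
        // registered in this self-contained example, so the loop finds zero.
        for (Greeter g : ServiceLoader.load(Greeter.class)) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countProviders()); // prints 0
    }
}
```

The same lookup code works unchanged whether the providers come from the class path or, under JSR 376, from `provides ... with ...` clauses in module descriptors.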
Tag Info

Takeoff is when an aircraft lifts off the ground, and is the first phase of flight. Takeoff techniques vary depending on the aircraft and the runway or other surface, but a typical takeoff in an airplane has four stages:

• Lining up (or positioning): taxiing onto the runway and lining up with the centerline
• Rolling: applying power and increasing speed to the point where the aircraft is creating enough lift to fly
• Rotation: lifting the nose of the aircraft to initiate the actual lift-off from the ground
• Climb-out: climbing to an initial minimum altitude before making any maneuvers such as turning

Aircraft always take off into the wind if possible, because it reduces the groundspeed required to become airborne; e.g. if an aircraft requires an airspeed of 60 knots to take off and there is a headwind of 20 knots, then the groundspeed at takeoff is reduced to 40 knots. A lower groundspeed at takeoff means that the aircraft can lift off sooner, requiring less runway, and also that less braking is required in the event of a rejected takeoff.

A departing aircraft lines up on the runway centerline before beginning the takeoff roll, to make sure that in case of crosswinds or control issues there is maximum space available on both sides of the aircraft. The aircraft may start rolling immediately or it may wait on the runway before starting the roll (in an ATC environment, the pilot may be instructed to "line up and wait"). The takeoff roll begins when the aircraft starts accelerating to reach takeoff speed. The required speed can vary depending on the aircraft's configuration (e.g. with or without flaps). When the aircraft reaches takeoff speed, the pilot rotates (lifts the nose) to start the actual flight.
Larger aircraft also have a "decision speed" referred to as $$V_1$$: a speed at which the aircraft is not yet able to rotate, but beyond which it is no longer possible to abort the takeoff within the remaining runway length. After rotation the aircraft starts its initial climb, and during this phase of takeoff the pilot configures the aircraft for the climb by setting an appropriate airspeed, retracting flaps and undercarriage (if applicable) and performing any other required actions.

Takeoffs in special aircraft types require different techniques, and all aircraft may require special takeoff procedures for specific conditions such as short runways.

This tag is appropriate for all questions about takeoff techniques and procedures, legal regulations on takeoffs, and so on.
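The headwind arithmetic above (a 60-knot required airspeed minus a 20-knot headwind gives a 40-knot groundspeed at lift-off) can be sketched in code. This is only an illustration; the class and method names are made up:

```java
// Illustration of the headwind arithmetic from the text: the groundspeed an
// aircraft needs at lift-off is its required airspeed minus the headwind.
public class TakeoffGroundspeed {
    static double liftoffGroundspeedKnots(double requiredAirspeedKnots, double headwindKnots) {
        return requiredAirspeedKnots - headwindKnots;
    }

    public static void main(String[] args) {
        // The example from the text: 60 kt airspeed with a 20 kt headwind.
        System.out.println(liftoffGroundspeedKnots(60, 20)); // prints 40.0
    }
}
```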
# Re: Proposed math coding scheme >Karl Berry writes: >As you imply, Humanist-Geometric is a continuum, not a dichotomy, and >along several dimensions. Very true unfortunately, but politics and font encodings are the art of the possible... There are two fonts which contain non-extensible math symbols: currently these are cmsy and cmmi, and it's pretty arbitrary which glyph is in which font. I'm not suggesting the Humanist/Geometric divide is a platonic division of glyphs, but a useful hack to make life slightly easier for font designers: a geometric CM symbol font should be adaptable for use with other fonts. >I don't understand why we want to put any non-mathematical characters >into the math fonts. Neither do I, but we have to find somewhere to put glyphs such as \dagger, \P, etc. Either we put them in a separate font, containing 71 glyphs (29 greek + 23 Greek + 10 alt. numerals + 3 music + 8 from cmsy) or we cram them into the math fonts (the Greek and numerals need to be in math anyway). I'm a bit worried about the amount of space math encoded character fonts would take up, without every text font having an extension font. >There aren't nearly enough positions in a 256-character encoding to >specify even all the useful math characters. Why even consider adding >the bullet and dagger and so forth? I agree totally---but these characters aren't in the Cork encoding, and we need to find homes for them. Either they should be in the math fonts (the current messy solution) or in a separate encoding (eating up even more font memory). Alan. Alan Jeffrey Tel: +46 31 72 10 59 [email protected] Department of Computer Sciences, Chalmers University, Gothenburg, Sweden
All Questions 166 views Why does iron consume more energy in the fusion process than it produces? I understand that once a star starts fusing iron, it's doomed to collapse because iron fusion requires more energy than it releases in the process, allowing the opposing gravity of the star to cause ... 49 views How certain are we about the universe's flatness? The universe is thought to be flat: $\Omega = 1 \pm 1\%$. As I understand it we can determine this by measuring triangles against the CMB. Yet during inflation dark energy made the universe grow ... 69 views Which kind of properties can we get for cosmic ray particles hitting on an optical ccd? It is very common that we meet cosmic ray particles in optical images recorded by CCDs. You can see the "snowflakes" in the hubble images below: Generally we should remove them in order to get ... 87 views How massive can a star be at birth? [duplicate] We have discovered some incredibly massive stars. R136a1, the most massive known star, is estimated to have 265 times the mass of our Sun. Yet it has been burning for at least a million years, and must ... 51 views Is there an astronomy exam I can take? I am currently living in Bangkok and I love astronomy. However, I haven't been able to find an official astronomy exam that I can do to gain college credit. Could someone help me out please? 53 views Long term effect on rocks of high pressure and temperature? The mean surface temperature of Venus is about 460 C and the pressure is about 92 atmospheres. Assuming this situation (or similar) has prevailed for millions of years what changes (if any) would ... 108 views Why are explosions always used to represent the Big Bang? In documentaries the Big Bang is often represented as blackness followed by a big explosion with loud noises. This always feels cheap and falling short of explaining the origin of the Universe. Is ... 111 views What is the galaxy M87 doing these days?
The massive elliptical galaxy M87 in the Virgo cluster is 53,490,000 light years away. It also contains one of the largest, heaviest supermassive black holes in the known Universe. But it's also my ... 163 views Calculate telescope orientation based on RA, DEC and Lat/Long I'm parsing FITS files for a project based on data from a telescope. These files include 'DEC', 'RA' and lat long values. I understand roughly the concept of celestial coordinates and I assume that ... 103 views Sun from SuperNova I have read that our sun was created from older star(s) which had exploded in a supernova. If all the matter is travelling away from the central point of explosion, how does it coalesce back into a ... 127 views How to use a telescope to find a specific celestial body? I am a beginner interested in astronomy. I bought a Celestron AstroMaster 130EQ telescope. It's a Newtonian Reflector and it's not computerised. I did manage to see the Moon in some magnifications but ... 219 views Stellar mass of galaxies Given the magnitudes (in the i-band) of certain galaxies, I would like to calculate their stellar mass (in terms of solar masses). So far, I have calculated their absolute magnitudes and gotten to ... 151 views Are there standard algorithms and procedures for creating unique sky maps based on latitude/longitude/date/time? I am trying to build an astronomy app that will use the user's latitude and longitude, along with the current date and time, to create a 3D view of the celestial sphere as seen based on that data. ... 74 views Movement of the satellites of the planets . . . Is the movement of the satellites (moons) of a planet coplanar, like the planets being coplanar around the local Sun? 58 views Is there online data on asteroid axial tilts? I am hoping to find axial tilts for asteroids and also their spring and fall equinox. 
Some of the asteroids I'm interested in are: 4 Vesta, 1 Ceres, 24 Themis, 65 Cybele, 153 Hilda, 624 Hektor 89 views Do planetary surface temperatures change in unison in a solar system? Are there any known correlations between the changes in planetary surface temperatures in a solar system? If so, do the farthest planets have smaller albeit correlated changes? 135 views What's the origin and culture of funny astronomical terminology? I'm not in the industry myself, but as an interested member of the public the terminology of astronomy seems a bit funny. Astronomers who today talk publicly about the interstellar medium say that the ... 86 views How to estimate age of asteroid family Erigone I have one problem: On the picture below we can see Erigone family asteroids in the plane semimajor axis - absolute magnitude. Based on V-type as a consequence of Yarkovsky effect, estimate age of this ... 65 views Rotation and relativity When a planet is spinning around its own axis, it has an effect on the trajectory of its satellites. I believe it is called frame dragging. Spin increases the kinetic energy of an object, ... 21 views Is the Astronomy community still concerned about the lumpiness of matter distribution in the universe? A decade or so ago, when I was still a science undergrad, one of the open questions in astrophysics was to explain the uneven distribution of galaxies in the observable universe. That is, why did the ... 82 views How did the moon's orbit become eccentric? The Moon's orbit is more eccentric, 0.0549, than most planets. I can understand that planets get eccentric by disturbing each other like under the late heavy bombardment. And likewise for multiple ... 120 views How can we be 13.2 billion light years from another galaxy? How did we get 13.2 billion light years from the faraway galaxies being discovered with Hubble? If we all started near each other at the Big Bang, and we all travelled slower than the speed of light, ...
71 views Why is Earth's atmosphere so thin? Venus is somewhat lighter than Earth, yet has a much thicker atmosphere. One would imagine that the following should be true: During the formation phase, all inner planets had captured as much gas ... 22 views How thick can planetary rings be? This arose from a comment on Worldbuilding. We have data from four planets in the Solar System with rings, which doesn't make for a very good sample size. Observations of exoplanets could change ... 157 views Why are there no stars on New Horizons images of Pluto? I followed the New Horizons Mission a little, and saw among others this image of Pluto: I wonder why you can't see any stars on it. As far as my very basic knowledge in astronomy goes, I think you ... 107 views Is expanding universe adding potential energy? A system with two massive bodies has potential energy proportional to their separation. Since the universe is expanding, is the potential (and total?) energy of such a system slowly increasing? What ... 72 views Regarding solar system dynamics, i.e. planets in stellar systems and moons in planetary systems, this is often mentioned in the literature, but it is difficult to find a good analysis/explanation of ... 32 views Are trojans in L5 more likely than in L4? Mars has many more trojans in L5. Does it reflect a common pattern? 136 views Image sets for testing stacking algorithms? I am looking for sets of astronomical images for testing different kinds of stacking algorithms. The idea is simple: if one has $N$ images of the same object, the signal-to-noise ratio of the ... 77 views How Does Black Hole Evaporation Look From the Inside? Imagine you're inside a black hole event horizon. The curvature of spacetime is such that the direction towards the singularity becomes timelike, as reverse movement becomes literally impossible. How ... 46 views How can you determine the initial volume of a planet's atmosphere?
Since the surface pressure of a planet is determined by the mass of the column of gasses above it one would surmise that to determine the pressure you must know the volume and mass of the atmosphere. ... 87 views Total solar eclipse, supermoon, and spring equinox all happening at the same time: anything special about this? Today (March 20, 2015) is seeing a rare combination of the spring equinox, a total solar eclipse, and a supermoon. I am wondering if there is anything special astronomically about all three of these ... 147 views Sky view from Stellarium software vs. Sky view with naked eye I'm slowly starting to interest in astronomy. Currently I'm enjoying in stargazing but unfortunately place where I live is very light polluted. I'm planing my vacation in a couple of months and one ... 21 views Estimates for how many Trans-Neptunian Objects there are [duplicate] The Minor Planet Center shows that we know of about 1350 Trans-Neptunian Objects (TNO's). I think it is safe to assume that we have not found all the TNO's there are to find, even the ones that we ... 81 views Is there a cosmic, rather than technological, upper limit to what a telescope can resolve? Space radio interferometers could have a baseline of millions of kilometers, but is there a point where a larger baseline doesn't improve the resolution anymore because the photons observed are ... 90 views Questions about a fictional binary system, and habitability Note: Questions are at the bottom. The rest had simply been written in a format to better organize my thoughts. The formulas used to construct this fictional solar system, had been borrowed from ... 36 views What is the cause of the variation from high and low mean obliquity periods of Mars? It is reasonably well known that Mars has a greater obliquity range than Earth, due to Mars lacking a stabilising influence of a large moon. However, the Martian obliquity seems to have gone through ... 
120 views What range of exit pupils work for observing the full moon? I'm observing the full moon, from a major city, with heavy light pollution and dust. The objective size is fixed, for this comparison, something between 4 and 6 inches. The question is about the ... 147 views What are the azimuths of the planets' orbits? I am creating a virtual solar system model and I want it to be as realistic as possible (e.g. orbits are ellipses, not circles, and orbits are oriented correctly, not all coplanar). In order for me ... 172 views What would happen if a Black Hole and White Hole Collided? Given our understanding of both black holes and white holes, what would be the outcome if they were both to suddenly collide? Black Holes: A black hole is a region of spacetime from which gravity ... 67 views How to determine scalar-to-tensor ratio r from CMB polarization spectrum? The CMB polarization spectrum can tell us about the primordial scalar and tensor perturbations. By analyzing the B- and E-mode angular power spectra and the temperature power spectrum we can determine the ... 57 views Mapping selenographic coordinates onto a sphere I have a 3D rendering of the moon, which is represented as a sphere with a radius of N pixels. I also have selenographic coordinates. For example, the coordinates of the Apollo 11 landing site are ... 26 views Does the radius of the Universe correspond to its total entropy? I heard a claim that due to holographic principle, the surface area of the cosmic horizon corresponds to the universe's total entropy. As such the initial state had zero surface area and later ... 22 views When do Mercury/Venus reach greatest elevation at sunset/twilight for a given location? On what day does Mercury reach its greatest elevation (in degrees from the horizon) at sunset for a given location? The obvious answer is the day of Mercury's greatest elongation from the Sun, but, ...
113 views Is there a general term for epicycles, deferents, and eccentrics in Ptolemaic astronomy? According to Ptolemy's (c. 150 CE) account of the motions of planets, planets moved in circular paths ("epicycles") around center points that in turn moved around the center of the earth along a path ... 52 views What are the biggest problems about the numerical, finite-element GR models? As I know, for example the modelling of the collapse of a neutron star (to a black hole) wasn't done correctly until now. Why? Yes, I know, the Einstein Field Equations aren't really easy to solve. ... 241 views Milky Way: How do we know its appearance? [duplicate] How can we know what the Milky Way looks like if we are in it? Sorry if the answer is evident, I am not an expert. 53 views What is that void that the universe is growing into? [duplicate] We understand that this is the universe and it is essentially full of space and time, but what is on the outside of the universe, and isn't that nothing, something, and how far does that reach. I.e. ...
# Zero-fuel weight The zero fuel weight (ZFW) of an aircraft is the total weight of the airplane and all its contents, minus the total weight of the usable fuel on board (unusable fuel is included in ZFW). For example, if an aircraft is flying at a weight of 5,000 kg and the weight of fuel on board is 500 kg, the ZFW is 4,500 kg. Some time later, after 100 kg of fuel has been used, the total weight of the airplane is 4,900 kg, the weight of fuel is 400 kg, and the ZFW is unchanged at 4,500 kg. As a flight progresses and fuel is consumed, the total weight of the airplane reduces, but the ZFW remains constant (unless some part of the load, such as parachutists or stores, is jettisoned in flight). For many types of airplane, the airworthiness limitations include a maximum zero fuel weight.[Note 1] ## Maximum zero fuel weight The maximum zero fuel weight (MZFW) is the maximum weight allowed before usable fuel and other specified usable agents (engine injection fluid, and other consumable propulsion agents) are loaded in defined sections of the aircraft as limited by strength and airworthiness requirements. It may include usable fuel in specified tanks when carried in lieu of payload. The addition of usable and consumable items to the zero fuel weight must be in accordance with the applicable government regulations so that airplane structure and airworthiness requirements are not exceeded. ### Maximum zero fuel weight in aircraft operations When an aircraft is being loaded with crew, passengers, baggage and freight it is most important to ensure that the ZFW does not exceed the MZFW. When an aircraft is being loaded with fuel it is most important to ensure that the takeoff weight will not exceed the maximum permissible takeoff weight. 
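The bookkeeping described above is simple arithmetic; the following sketch reproduces the article's numerical example in Python (the function names are ours, not from any regulation; all weights in kg):

```python
# Illustrative zero-fuel-weight bookkeeping (hypothetical helper names).

def zero_fuel_weight(total_weight, usable_fuel_on_board):
    """ZFW: total aircraft weight minus the usable fuel on board."""
    return total_weight - usable_fuel_on_board

def max_payload(mzfw, oew):
    """Maximum payload = MZFW - OEW (operational empty weight)."""
    return mzfw - oew

# The article's example: a 5,000 kg aircraft carrying 500 kg of fuel.
print(zero_fuel_weight(5000, 500))   # 4500
# After burning 100 kg of fuel, the ZFW is unchanged:
print(zero_fuel_weight(4900, 400))   # 4500
```

The second call illustrates the key point of the section: as fuel burns off, total weight and fuel weight fall together, so their difference, the ZFW, stays constant.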
MZFW: the maximum weight of an aircraft prior to fuel being loaded.[Note 2]

$ZFW + FOB = TOW$

For any aircraft with a defined MZFW, the maximum payload can be calculated as the MZFW minus the OEW (operational empty weight):

$MaxPayload = MZFW - OEW$

### Maximum zero fuel weight in type certification

The maximum zero fuel weight is an important parameter in demonstrating compliance with gust design criteria for transport category airplanes.[3]

## Wing bending relief

In fixed-wing aircraft, fuel is usually carried in the wings. Weight in the wings does not contribute as significantly to the bending moment in the wing as does weight in the fuselage. This is because the lift on the wings and the weight of the fuselage bend the wing tips upwards and the wing roots downwards; but the weight of the wings, including the weight of fuel in the wings, bend the wing tips downwards, providing relief to the bending effect on the wing.

When an airplane is being loaded, the capacity for extra weight in the wings is greater than the capacity for extra weight in the fuselage. Designers of airplanes can optimise the maximum takeoff weight and prevent overloading in the fuselage by specifying a MZFW. This is usually done for large airplanes with cantilever wings. (Airplanes with strut-braced wings achieve substantial wing bending relief by having the load of the fuselage applied by the strut mid-way along the wing semi-span. Extra wing bending relief cannot be achieved by particular placement of the fuel. There is usually no MZFW specified for an airplane with a strut-braced wing.)

Most small airplanes do not have a MZFW specified among their limitations. For these airplanes with cantilever wings, the loading case that must be considered when determining the maximum takeoff weight is the airplane with zero fuel and all disposable load in the fuselage. With zero fuel in the wing the only wing bending relief is due to the weight of the wing.

## Notes

1.
^ 14CFR 110.2 "Maximum zero fuel weight means the maximum permissible weight of an aircraft with no disposable fuel or oil."[1] 2. ^ 14CFR 125.9(c) "For the purposes of this part, maximum zero fuel weight means the maximum permissible weight of an aircraft with no disposable fuel or oil." [2]
# hyperref with implicit=false gives a warning

I am getting the following warning when I use beamer

    Package hyperref Warning: Option `pdftitle' has already been used

I can recreate the problem with the following MWE based on the article class

```latex
\documentclass{article}
\usepackage[implicit=false]{hyperref}
\AtBeginDocument{\hypersetup{pdftitle=mytitle}}
\begin{document}
Hello world
\end{document}
```

For reasons that are beyond my understanding (and beyond my desire to know) beamer requires hyperref to be loaded with implicit=false. The \AtBeginDocument{\hypersetup{pdftitle=mytitle}} occurs in a package written by me. Given that I need to load hyperref with implicit=false and I need to set pdftitle after I load hyperref, is there a way to avoid this warning (without using the silence package)? Yes, I know it is only a warning and doesn't really matter. Note that

```latex
\documentclass{article}
\AtBeginDocument{\hypersetup{pdftitle=mytitle}}
\AtBeginDocument{\hypersetup{pdftitle=mytitle}}
\usepackage[implicit=false]{hyperref}
\begin{document}
Hello world
\end{document}
```

doesn't give the warning even though pdftitle is given twice.

-

The setting of option pdftitle is used in \PDF@FinishDoc. Usually it is called at the output of the first page, but with option implicit it is done via \AtBeginDocument. Thus, either you can use your call of \AtBeginDocument before package hyperref is loaded, or package etoolbox helps with hook \AtEndPreamble:

```latex
\documentclass{article}
\usepackage[implicit=false]{hyperref}
\usepackage{etoolbox}
\AtEndPreamble{\hypersetup{pdftitle=mytitle}}
\begin{document}
Hello world
\end{document}
```

-

A little more digging and I think I have an answer. The issue is that implicit causes \PDF@FinishDoc to run earlier than usual. The default behavior of \PDF@FinishDoc is to disable the ability of \hypersetup to set pdftitle and instead cause it to produce a warning. A simple work around is to use \pdfinfo{/Title (mytitle)} instead of \hypersetup{pdftitle=mytitle}.
My guess is that this has some drawbacks, but I don't know what they are yet. Another possibility is to redefine \PDF@FinishDoc so it runs later. I am hesitant to make it run later, since my guess is that there is a reason for it to run earlier with implicit. It might also be possible to either prevent pdftitle from being disabled, or to re-enable pdftitle, set it, and then disable it again.
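For reference, a minimal sketch of the \pdfinfo work around mentioned above (this assumes the pdfTeX engine, where \pdfinfo is a primitive that writes straight into the PDF /Info dictionary and so bypasses hyperref's key handling entirely):

```latex
\documentclass{article}
\usepackage[implicit=false]{hyperref}
% \pdfinfo is not routed through \hypersetup, so the
% "Option `pdftitle' has already been used" warning never fires.
\AtBeginDocument{\pdfinfo{/Title (mytitle)}}
\begin{document}
Hello world
\end{document}
```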
# Illustration Light Clock: Light Paths in Right Triangle

Sharing and adapting of the illustration is allowed with indication of the link to the illustration (it may be copied, redistributed, and adapted in any medium or format, even commercially).

In a moving light clock, the photon travels a different distance $$L'$$ than the distance $$L$$ in a stationary light clock. The moving light clock moves to the right with velocity $$v$$. The photon always has the speed of light $$c$$ and covers the distance $$L'$$ within the time $$\Delta t'$$ and the distance $$L$$ within the time $$\Delta t$$.
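From the right triangle formed by the two light paths, the time-dilation relation follows directly (a standard derivation, spelled out here for completeness rather than quoted from the page):

```latex
% Legs: clock length L and horizontal displacement v * Delta t';
% hypotenuse: the moving photon's path L'.
L'^{2} = L^{2} + \left(v\,\Delta t'\right)^{2},
\qquad L = c\,\Delta t, \quad L' = c\,\Delta t'

% Substituting both distances:
c^{2}\,\Delta t'^{2} = c^{2}\,\Delta t^{2} + v^{2}\,\Delta t'^{2}
\quad\Longrightarrow\quad
\Delta t' = \frac{\Delta t}{\sqrt{1 - v^{2}/c^{2}}}
```

So the moving clock's tick $$\Delta t'$$ is longer than the stationary clock's tick $$\Delta t$$ by the Lorentz factor.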
Mat. Sb., 1996, Volume 187, Number 5, Pages 59–64 (Mi msb127)

Topology of domains of possible motions of integrable systems

V. V. Kozlov, V. V. Ten

M. V. Lomonosov Moscow State University

Abstract: A study is made of analytic invertible systems with two degrees of freedom on a fixed three-dimensional manifold of level of the energy integral. It is assumed that the manifold in question is compact and has no singular points (equilibria of the initial system). The natural projection of the energy manifold onto the two-dimensional configuration space is called the domain of possible motion. In the orientable case it is a sphere with $k$ holes and $p$ attached handles. It is well known that for $k=0$ and $p\geqslant 2$, the system possesses no non-constant analytic integrals on the corresponding level of the energy integral. The situation in the case of domains of possible motions with a boundary turns out to be very different. The main result can be stated as follows: there are examples of analytically integrable systems with arbitrary values of $p$ and $k\geqslant 1$.

DOI: https://doi.org/10.4213/sm127

Full text: PDF file (197 kB)
References: PDF file, HTML file
English version: Sbornik: Mathematics, 1996, 187:5, 679–684

UDC: 517.9+531.01
MSC: Primary 58F05; Secondary 70G25, 70F10

Citation: V. V. Kozlov, V. V. Ten, “Topology of domains of possible motions of integrable systems”, Mat. Sb., 187:5 (1996), 59–64; Sb.
Math., 187:5 (1996), 679–684

Citation in format AMSBIB:

\Bibitem{KozTen96}
\by V.~V.~Kozlov, V.~V.~Ten
\paper Topology of domains of possible motions of integrable systems
\jour Mat. Sb.
\yr 1996
\vol 187
\issue 5
\pages 59--64
\mathnet{http://mi.mathnet.ru/msb127}
\crossref{https://doi.org/10.4213/sm127}
\mathscinet{http://www.ams.org/mathscinet-getitem?mr=1400352}
\zmath{https://zbmath.org/?q=an:0871.58043}
\transl
\jour Sb. Math.
\yr 1996
\vol 187
\issue 5
\pages 679--684
\crossref{https://doi.org/10.1070/SM1996v187n05ABEH000127}
\scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-0030497053}

• http://mi.mathnet.ru/eng/msb127
• https://doi.org/10.4213/sm127
• http://mi.mathnet.ru/eng/msb/v187/i5/p59

This publication is cited in the following articles:

1. Bolotin, SV, “A variational construction of chaotic trajectories for a Hamiltonian system on a torus”, Bollettino Della Unione Matematica Italiana, 1B:3 (1998), 541
2. Bertotti, ML, “Chaotic trajectories for natural systems on a torus”, Discrete and Continuous Dynamical Systems, 9:5 (2003), 1343
3. Rudnev, M, “Integrability versus topology of configuration manifolds and domains of possible motions”, Archiv der Mathematik, 86:1 (2006), 90
# Basis Function Models

Oftentimes we want to model data $y$ that emerges from some underlying function $f(x)$ of independent variables $x$, such that for some future input we'll be able to accurately predict the future output values. There are various methods for devising such a model, all of which make particular assumptions about the types of functions the model can emulate. In this post we'll focus on one set of methods called Basis Function Models (BFMs).

## Basis Sets and Linear Independence

The idea behind BFMs is to model the complex target function $f(x)$ as a linear combination of a set of simpler functions, for which we have closed form expressions. This set of simpler functions is called a basis set, and works in a similar manner to bases that compose vector spaces in linear algebra. For instance, any vector in the 2D spatial coordinate system (which is a vector space in $\mathbb R^2$) can be composed of linear combinations of the $x$ and $y$ directions. This is demonstrated in the figures below:

*Illustration of basis vectors along the x (blue) and y (red) directions, along with a target vector (black)*

Above we see a target vector in black pointing from the origin (at xy coordinates (0,0)) to the xy coordinates (2,3), and the coordinate basis vectors $b^{(x)}$ and $b^{(y)}$, each of which points one unit along the x- (in blue) and y- (in red) directions. We can compose the target vector as a linear combination of the x- and y- basis vectors. Namely, the target vector can be composed by adding (in the vector sense) 2 times the basis $b^{(x)}$ to 3 times the basis $b^{(y)}$:

*Composing the target vector as a linear combination of the basis vectors*

One thing that is important to note about the bases $b^{(x)}$ and $b^{(y)}$ is that they are linearly independent. This means that no matter how hard you try, you can't compose the basis vector $b^{(x)}$ as a linear combination of the other basis vector $b^{(y)}$, and vice versa.
In the 2D vector space, we can easily see this because the red and blue lines are perpendicular to one another (a condition called orthogonality). But we can formally determine if two (column) vectors are independent by calculating the (column) rank of a matrix $A$ that is composed by concatenating the two vectors.

$A = [b^{(x)},b^{(y)}] = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}$

The rank of a matrix is the number of linearly independent columns in the matrix. If the rank of $A$ has the same value as the number of columns in the matrix, then the columns of $A$ form a linearly independent set of vectors. The rank of $A$ above is 2. So is the number of columns. Therefore the basis vectors $b^{(x)}$ and $b^{(y)}$ are indeed linearly independent. We can use this same matrix rank-based test to verify if vectors of much higher dimension than two are independent. Linear independence of the basis set is important if we want to be able to define a unique model.

```matlab
%% EXAMPLE OF COMPOSING A VECTOR OF BASIS VECTORS
figure;
targetVector = [0 0; 2 3];
basisX = [0 0; 1 0];
basisY = [0 0; 0 1];
hv = plot(targetVector(:,1),targetVector(:,2),'k','Linewidth',2);
hold on;
hx = plot(basisX(:,1),basisX(:,2),'b','Linewidth',2);
hy = plot(basisY(:,1),basisY(:,2),'r','Linewidth',2);
xlim([-4 4]); ylim([-4 4]);
xlabel('x-direction'), ylabel('y-direction')
axis square
grid
legend([hv,hx,hy],{'Target','b^{(x)}','b^{(y)}'},'Location','bestoutside');

figure
hv = plot(targetVector(:,1),targetVector(:,2),'k','Linewidth',2);
hold on;
hx = plot(2*basisX(:,1),2*basisX(:,2),'b','Linewidth',2);
hy = plot(3*basisY(:,1),3*basisY(:,2),'r','Linewidth',2);
xlim([-4 4]); ylim([-4 4]);
xlabel('x-direction'), ylabel('y-direction');
axis square
grid
legend([hv,hx,hy],{'Target','2b^{(x)}','3b^{(y)}'},'Location','bestoutside')

% TEST TO SEE IF basisX AND basisY ARE LINEARLY INDEPENDENT
A = [1 0; 0 1];
isIndependent = rank(A) == size(A,2)
```

## Modeling Functions with Linear Basis Sets

In a similar fashion to
creating arbitrary vectors with vector bases, we can compose arbitrary functions in "function space" as a linear combination of simpler basis functions (note that basis functions are also sometimes called kernels). One such set of basis functions is the set of polynomials:

$b^{(i)} = x^i$

Here each basis function is a polynomial of order $i$. We can then compose a basis set of $D$ functions, where the $D$-th function is $b^{(D)}$, then model the function $f(x)$ as a linear combination of these $D$ polynomial bases:

$f(x) = \beta_0 b^{(0)} + \beta_1 b^{(1)} + ... + \beta_D b^{(D)}$

where $\beta_i$ is the weight on the $i$-th basis function. In matrix format this model takes the form

$f(x) = A \beta$

Here, again, the matrix $A$ is the concatenation of each of the polynomial bases into its columns. What we then want to do is determine all the weights $\beta$ such that $A\beta$ is as close to $f(x)$ as possible. We can do this by using Ordinary Least Squares (OLS) regression, which was discussed in earlier posts. The optimal solution for the weights under OLS is:

$\hat \beta = (A^T A)^{-1}A^T y$

Let's take a look at a concrete example, where we use a set of polynomial basis functions to model a complex data trend.

## Example: Modeling $f(x)$ with Polynomial Basis Functions

In this example we model a set of data $y$ whose underlying function $f(x)$ is:

$f(x) = \cos(x/2) + \sin(x)$

In particular we'll create a polynomial basis set of degree 10 and fit the $\beta$ weights using OLS. The Matlab code for this example, and the resulting graphical output, are below:

*Left: Basis set of 10 (scaled) polynomial functions. Center: estimated model weights for basis set. Right: Underlying model f(x) (blue), data sampled from the model (black circles), and the linear basis model fit (red).*
```matlab
%% EXAMPLE: MODELING A TARGET FUNCTION
x = [0:.1:20]';
f = inline('cos(.5*x) + sin(x)','x');

% CREATE A POLYNOMIAL BASIS SET
polyBasis = [];
nPoly = 10;
px = linspace(-10,10,numel(x))';
for iP = 1:nPoly
    polyParams = zeros(1,nPoly);
    polyParams(iP) = 1;
    polyBasis = [polyBasis,polyval(polyParams,px)];
end

% SCALE THE BASIS SET TO HAVE MAX AMPLITUDE OF 1
polyBasis = fliplr(bsxfun(@rdivide,polyBasis,max(polyBasis)));

% CHECK LINEAR INDEPENDENCE
isIndependent = rank(polyBasis) == size(polyBasis,2)

% SAMPLE SOME DATA FROM THE TARGET FUNCTION
randIdx = randperm(numel(x));
xx = x(randIdx(1:30));
y = f(xx) + randn(size(xx))*.2;

% FIT THE POLYNOMIAL BASIS MODEL TO THE DATA (USING polyfit.m)
basisWeights = polyfit(xx,y,nPoly);

% MODEL OF TARGET FUNCTION
yHat = polyval(basisWeights,x);

% DISPLAY BASIS SET AND MODEL
subplot(131)
plot(polyBasis,'Linewidth',2)
axis square
xlim([0,numel(px)])
ylim([-1.2 1.2])
title(sprintf('Polynomial Basis Set\n(%d Functions)',nPoly))

subplot(132)
bar(fliplr(basisWeights));
axis square
xlim([0 nPoly + 1]);
colormap hot
xlabel('Basis Function')
ylabel('Estimated Weight')
title('Model Weights on Basis Functions')

subplot(133);
hy = plot(x,f(x),'b','Linewidth',2);
hold on
hd = scatter(xx,y,'ko');
hh = plot(x,yHat,'r','Linewidth',2);
xlim([0,max(x)])
axis square
legend([hy,hd,hh],{'f(x)','y','Model'},'Location','Best')
title('Model Fit')
hold off;
```

First off, let's make sure that the polynomial basis is indeed linearly independent. As above, we'll compute the rank of the matrix composed of the basis functions along its columns. The rank of the basis matrix has a value of 10, which is also the number of columns of the matrix (line 19 in the code above). This proves that the basis functions are linearly independent. We fit the model using Matlab's internal function polyfit.m, which performs OLS on the basis set matrix.
We see that the basis set of 10 polynomial functions (including the zeroth-order bias term) does a pretty good job of modeling a very complex function $f(x)$. We essentially get to model a highly nonlinear function using simple linear regression (i.e. OLS).

## Wrapping up

Though the polynomial basis set works well in many modeling problems, it may be a poor fit for some applications. Luckily we aren't limited to using only polynomial basis functions. Other basis sets include Gaussian basis functions, sigmoid basis functions, and finite impulse response (FIR) basis functions, just to name a few (in a future post, we'll demonstrate how the FIR basis set can be used to model the hemodynamic response function (HRF) of an fMRI voxel measured from the brain).
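For readers without Matlab, the same rank check and OLS fit can be sketched in Python with NumPy (our variable names, not the post's; the basis is built on a rescaled variable in [0, 1] purely for numerical stability):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function from the post: f(x) = cos(x/2) + sin(x)
f = lambda x: np.cos(0.5 * x) + np.sin(x)

x = np.linspace(0, 20, 201)
t = x / 20.0                        # rescale to [0, 1] for conditioning

# Polynomial basis matrix A: column i holds t**i for i = 0..10
D = 10
A = np.vander(t, D + 1, increasing=True)

# Full column rank <=> the basis functions are linearly independent
print(np.linalg.matrix_rank(A) == A.shape[1])   # True

# Sample 30 noisy observations of the target function
idx = rng.permutation(x.size)[:30]
y = f(x[idx]) + 0.2 * rng.standard_normal(idx.size)

# OLS weights beta_hat = (A^T A)^{-1} A^T y, via a least-squares solver
beta, *_ = np.linalg.lstsq(A[idx], y, rcond=None)

yhat = A @ beta                     # basis-model fit over the whole range
```

`np.linalg.lstsq` solves the same normal equations as the closed-form OLS expression above, but in a numerically safer way than forming $(A^T A)^{-1}$ explicitly.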
# 1.5.3 - Measures of Variability

To introduce the idea of variability, consider this example. Two vending machines A and B drop candies when a quarter is inserted. The number of pieces of candy one gets is random. The following data are recorded for six trials at each vending machine:

Vending Machine A

Pieces of candy from vending machine A: 1, 2, 3, 3, 5, 4

mean = 3, median = 3, mode = 3

Vending Machine B

Pieces of candy from vending machine B: 2, 3, 3, 3, 3, 4

mean = 3, median = 3, mode = 3

The dot plot for the pieces of candy from vending machine A and vending machine B is displayed in figure 1.4. They have the same center, but what about their spreads?

### Measures of Variability

There are many ways to describe variability or spread including:

• Range
• Interquartile range (IQR)
• Variance and Standard Deviation

Range

The range is the difference between the maximum and minimum values of a data set. The maximum is the largest value in the dataset and the minimum is the smallest value. The range is easy to calculate but it is very much affected by extreme values.

$Range = maximum - minimum$

Like the range, the IQR is a measure of variability, but you must find the quartiles in order to compute its value.

Interquartile Range (IQR)

The interquartile range is the difference between the upper and lower quartiles and is denoted as IQR.

\begin{align} IQR &=Q3 -Q1\\&=upper\ quartile - lower\ quartile\\&= 75th\ percentile - 25th\ percentile \end{align}

Note! The IQR is not affected by extreme values. It is thus a resistant measure of variability.

## Try it!

Find the IQR for the final exam scores example.

24, 58, 61, 67, 71, 73, 76, 79, 82, 83, 85, 87, 88, 88, 92, 93, 94, 97

$IQR=Q3-Q1=89-70=19$

## Variance and Standard Deviation

One way to describe spread or variability is to compute the standard deviation. In the following section, we are going to talk about how to compute the sample variance and the sample standard deviation for a data set.
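The IQR from the Try it! exercise above can be reproduced with Python's standard library; `statistics.quantiles` with the default `'exclusive'` method uses the (n+1) positioning convention, which is what yields the quoted Q1 = 70 and Q3 = 89 (a quick verification sketch, not part of the course materials):

```python
import statistics

scores = [24, 58, 61, 67, 71, 73, 76, 79, 82, 83, 85, 87, 88, 88,
          92, 93, 94, 97]

# 'exclusive' places the i-th quartile at position i*(n+1)/4 and
# interpolates, matching the text's hand calculation
q1, q2, q3 = statistics.quantiles(scores, n=4, method='exclusive')

iqr = q3 - q1
print(q1, q3, iqr)   # 70.0 89.0 19.0
```

Note that other quartile conventions (e.g. NumPy's default linear interpolation over (n-1)) give slightly different Q1 and Q3 for the same data, so the convention matters when checking textbook answers.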
The standard deviation is the square root of the variance.

Variance
the average squared distance from the mean

Population variance

$\sigma^2=\dfrac{\sum_{i=1}^N (x_i-\mu)^2}{N}$

where $\mu$ is the population mean, the summation is over all possible values of the population, and $N$ is the population size. $\sigma^2$ is often estimated by using the sample variance.

Sample Variance

$s^2=\dfrac{\sum_{i=1}^n (x_i-\bar{x})^2}{n-1}=\dfrac{\sum_{i=1}^n x_i^2-n\bar{x}^2}{n-1}$

where $n$ is the sample size and $\bar{x}$ is the sample mean.

Why do we divide by $n-1$ instead of by $n$? When we calculate the sample variance, we estimate the population mean with the sample mean; dividing by $n-1$ rather than $n$ compensates for this and gives $s^2$ a special property we call being an "unbiased estimator". Therefore $s^2$ is an unbiased estimator for the population variance.

The sample variance (and therefore the sample standard deviation) are the common default calculations used by software. When asked to calculate the variance or standard deviation of a set of data, assume - unless otherwise instructed - that this is sample data and therefore calculate the sample variance and sample standard deviation.

## Example 1-8

Calculate the variance for these final exam scores.

24, 58, 61, 67, 71, 73, 76, 79, 82, 83, 85, 87, 88, 88, 92, 93, 94, 97

Answer

First, find the mean:

$\bar{x}=\dfrac{24+58+61+67+71+73+76+79+82+83+85+87+88+88+92+93+94+97}{18}=\dfrac{233}{3}$

Next, use a table to sum the squared distances.

| $x_i$ | $(x_i-\bar{x})$ | $(x_i-\bar{x})^2$ |
|---|---|---|
| 24 | -161/3 | 25921/9 |
| 58 | -59/3 | 3481/9 |
| 61 | -50/3 | 2500/9 |
| 67 | -32/3 | 1024/9 |
| 71 | -20/3 | 400/9 |
| 73 | -14/3 | 196/9 |
| 76 | -5/3 | 25/9 |
| 79 | 4/3 | 16/9 |
| 82 | 13/3 | 169/9 |
| 83 | 16/3 | 256/9 |
| 85 | 22/3 | 484/9 |
| 87 | 28/3 | 784/9 |
| 88 | 31/3 | 961/9 |
| 88 | 31/3 | 961/9 |
| 92 | 43/3 | 1849/9 |
| 93 | 46/3 | 2116/9 |
| 94 | 49/3 | 2401/9 |
| 97 | 58/3 | 3364/9 |
| Sum | 0 | 46908/9 |

Finally,

$s^2=\dfrac{\sum_{i=1}^n (x_i-\bar{x})^2}{18-1}=\dfrac{46908/9}{17}=\dfrac{5212}{17}\approx 306.588$

## Try it!
Calculate the sample variances for the data sets from vending machines A and B yourself and check that the variance for data set B is smaller than that for data set A. Work out your answer first, then compare.

a. 1, 2, 3, 3, 4, 5

$\bar{y}_A=\dfrac{1}{6}(1+2+3+3+5+4)=\dfrac{18}{6}=3$

$s^2_A=\dfrac{(1-3)^2+(2-3)^2+(3-3)^2+(3-3)^2+(4-3)^2+(5-3)^2}{6-1}=2$

b. 2, 3, 3, 3, 3, 4

$\bar{y}_B=\dfrac{1}{6}(2+3+3+3+3+4)=\dfrac{18}{6}=3$

$s^2_B=\dfrac{(2-3)^2+(3-3)^2+(3-3)^2+(3-3)^2+(3-3)^2+(4-3)^2}{6-1}=0.4$

## Standard Deviation

The standard deviation is a very useful measure. One reason is that it has the same unit of measurement as the data itself (e.g. if a sample of student heights were in inches then so, too, would be the standard deviation; the variance would be in squared units, for example $inches^2$). Also, the empirical rule, which will be explained later, makes the standard deviation an important yardstick for finding out approximately what percentage of the measurements fall within certain intervals.

Standard Deviation
approximately the average distance the values of a data set are from the mean, or the square root of the variance

Population Standard Deviation

$\sigma=\sqrt{\sigma^2}$

It has the same unit as the $x_i$'s. This is a desirable property since one may think about the spread in terms of the original unit. $\sigma$ is estimated by the sample standard deviation $s$:

Sample Standard Deviation

$s=\sqrt{s^2}$

A rough estimate of the standard deviation can be found using $s\approx \frac{\text{range}}{4}$

### Adding and Multiplying Constants

What happens to measures of variability if we add or multiply each observation in a data set by a constant? We learned previously about the effect such actions have on the mean and the median, but do variation measures behave similarly? Not really. When we add a constant to all values we are basically shifting the data upward (or downward if we subtract a constant).
This has the result of moving the middle but leaving the variability measures (e.g. range, IQR, variance, standard deviation) unchanged.

On the other hand, if one multiplies each value by a constant, this does affect measures of variation. The result on the variance is that the new variance is multiplied by the square of the constant, while the standard deviation, range, and IQR are multiplied by the constant. For example, if the observed values of Machine A in the example above were multiplied by three, the new variance would be 18 (the original variance of 2 multiplied by 9). The new standard deviation would be approximately 4.243 (the original standard deviation of about 1.414 multiplied by 3). The range and IQR would also change by a factor of 3.

### Coefficient of Variation

Above we considered three measures of variation: the range, the IQR, and the variance (and its square root counterpart, the standard deviation). These are all measures we can calculate from one quantitative variable, e.g. height or weight. But how can we compare the dispersion (i.e. variability) of data from two or more distinct populations that have vastly different means?

A popular statistic to use in such situations is the Coefficient of Variation or CV. This is a unit-free statistic, and the higher the value, the greater the dispersion. The calculation of CV is:

Coefficient of Variation (CV)

$CV = \dfrac{\text{Standard Deviation}}{\text{Mean}}$

To demonstrate, think of prices for luxury and budget hotels. Which do you think would have the higher average cost per night? Which would have the greater standard deviation? The CV would allow you to compare this dispersion in costs in relative terms by accounting for the fact that the luxury hotels would have a greater mean and standard deviation.

## Example 1-9: Comparing Prices

You are shopping for toilet tissue. As you compare prices of various brands, some offer price per roll while others offer price per sheet.
You are interested in determining which pricing method has less variability, so you sample several of each and calculate the mean and standard deviation for the sampled items that are priced per roll, and the mean and standard deviation for the sampled items that are priced per sheet. The table below summarizes your results.

| Item | Mean | Standard Deviation |
|---|---|---|
| Price per Roll | 0.9196 | 0.4233 |
| Price per Sheet | 0.01134 | 0.00553 |

Answer

Comparing the standard deviations, Price per Sheet appears to have much less variability in pricing. However, its mean is also much smaller. The coefficient of variation allows us to make a relative comparison of the variability of these two pricing schemes:

$CV_{roll}=\dfrac{0.4233}{0.9196}=0.46$

$CV_{sheet}=\dfrac{0.00553}{0.01134}=0.49$

Relatively speaking, the variation for Price per Sheet is greater than the variability for Price per Roll.
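The worked numbers in this section (Example 1-8's variance, the vending-machine variances, the effect of multiplying by a constant, and Example 1-9's CVs) can all be checked with a short script. A sketch in plain Python, not part of the original lesson:

```python
# Example 1-8: sample variance of the 18 exam scores (expect 5212/17 ≈ 306.588)
scores = [24, 58, 61, 67, 71, 73, 76, 79, 82, 83, 85, 87, 88, 88, 92, 93, 94, 97]

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(round(sample_var(scores), 3))  # 306.588

# Vending machines A and B (expect 2 and 0.4)
a, b = [1, 2, 3, 3, 5, 4], [2, 3, 3, 3, 3, 4]
print(sample_var(a), sample_var(b))

# Multiplying machine A's values by 3 multiplies the variance by 9 (2 -> 18)
print(sample_var([3 * x for x in a]))

# Example 1-9: coefficients of variation
cv_roll, cv_sheet = 0.4233 / 0.9196, 0.00553 / 0.01134
print(round(cv_roll, 2), round(cv_sheet, 2))  # 0.46 0.49
```

Running it confirms each of the hand calculations above.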
### POWER AND SAMPLE SIZE CALCULATIONS:

#### STATISTICAL POWER CALCULATION:

If we know $$\sigma$$ and $$n$$ is large, then, with $$\beta$$ being the type II error rate, the power is $$1-\beta$$:

\begin{align} 1-\beta &= \Pr\left(\frac{\bar X -\mu_0}{\sigma/\sqrt{n}} > z_{1-\alpha} \mid \mu = \mu_a \right)\\[3ex] &= \Pr\left(\frac{\bar X-\mu_a +\mu_a -\mu_0}{\sigma/\sqrt{n}} > z_{1-\alpha} \mid \mu = \mu_a \right)\\[3ex] &=\Pr\left(\frac{\bar X -\mu_a}{\sigma/\sqrt{n}} > z_{1-\alpha} - \frac{\mu_a-\mu_0}{\sigma/\sqrt{n}} \mid \mu = \mu_a \right)\\[3ex] &= \Pr\left(Z > z_{1-\alpha} - \frac{\mu_a-\mu_0}{\sigma/\sqrt{n}} \mid \mu = \mu_a \right) \end{align}

Suppose that we wanted to detect an increase in the mean RDI (respiratory disturbance index), in the context of sleep apnea, of at least $$2\small \text{ events/hour}$$ above $$30$$. Assume normality and that the sample in question has a standard deviation of $$4$$. What would be the power if we took a sample of $$16?$$

$Z_{1-\alpha}=1.645$

or…

    qnorm(0.95)
    ## [1] 1.644854

and with $$\mu_a$$ being the true mean under the alternative hypothesis (i.e. sleep apnea carries along a higher number of RDI events, with a mean of 32):

$\frac{\mu_a - 30}{\sigma/\sqrt{n}}=\frac{2}{4/\sqrt{16}}=2$

Therefore,

$\Pr(Z>1.645-2)=\Pr(Z>-0.355)=64\%$

or…

    1 - pnorm(qnorm(0.95) - 2/(4 / sqrt(16)))
    ## [1] 0.63876

#### STATISTICAL SAMPLE SIZE CALCULATION:

What sample size $$n$$ would be required to get a power of $$80\,\%$$ (a common benchmark in the sciences)? For a one-sided test ($$H_a: \mu_a > \mu_0$$):

$0.8=\Pr\left(Z> \, z_{1-\alpha} -\frac{\mu_a -\mu_0}{\sigma/\sqrt{n}} \mid \mu=\mu_a\right)$

which implies that

$z_{1-\alpha} - \frac{\mu_a -\mu_0}{\sigma/\sqrt{n}} = z_{0.20}$

Solving for $$n$$ (for any value of $$\mu_a$$):

$n=\left( \sigma \, \frac{z_{1-\alpha} - z_{0.20}}{\mu_a -\mu_0} \right)^2$

We pick $$\mu_a$$ as the smallest effect that we would reasonably like to detect.
In the case of $$H_a:\mu_a \neq \mu_0$$ we can just take one of the sides, but with $$\alpha/2$$.

For the example above:

    (n <- (4*(qnorm(0.95)-qnorm(0.2))/2)^2)
    ## [1] 24.73023

which would indeed carry an $$80\%$$ power:

    1 - pnorm(qnorm(0.95) - 2/(4 / sqrt(n)))
    ## [1] 0.8
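The same numbers can be reproduced outside R; a sketch using only Python's standard library, where `statistics.NormalDist` plays the role of `qnorm`/`pnorm`:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal
mu0, mua, sigma, alpha = 30.0, 32.0, 4.0, 0.05

def power(n):
    # Pr(Z > z_{1-alpha} - (mua - mu0) / (sigma / sqrt(n)))
    z_crit = Z.inv_cdf(1 - alpha)
    shift = (mua - mu0) / (sigma / n ** 0.5)
    return 1 - Z.cdf(z_crit - shift)

print(round(power(16), 5))  # ~0.63876, matching the R output

# Closed-form n for 80% power, from the derivation above
n_exact = (sigma * (Z.inv_cdf(1 - alpha) - Z.inv_cdf(0.2)) / (mua - mu0)) ** 2
print(round(n_exact, 5))    # ~24.73023
print(round(power(n_exact), 2))  # 0.8
```

In practice one would round `n_exact` up to the next whole subject, giving slightly more than 80% power.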
# Signal Intensities

When taking measurements, the signal intensity is an important factor. Automatic gain control (auto-gain) can be used to automatically set pulse_length and intensity to receive a good signal intensity.

## Defining an Auto-Gain

The first step is to set up an auto-gain control in a protocol. The following parameters have to be defined:

| Parameter | Type | Description |
|---|---|---|
| index | int | Select an index number to access the gain settings in the protocol (0-9) |
| led | int | Select the LED used for auto-gain |
| detector | int | Select the detector used for auto-gain |
| pulse length | int | Define the duration of the pulse (in µs) |
| target intensity | int | Define the target intensity (0-65,535) |

    "autogain": [
      [ <index>, <led>, <detector>, <pulse length>, <target intensity> ]
    ]

TIP: Find the protocol command documentation here.

### Defining Multiple Auto-Gains

Multiple auto-gains can be used inside a protocol, depending on the number of light and detector combinations used. Up to 10 auto-gain settings can be defined.

    {
      "autogain": [
        [ 1, 3, 1, 30, 3000 ],
        [ 2, 4, 1, 30, 5000 ],
        ...
      ]
    }

Note: Always make sure that the index used for each gain is unique. There is no error checking on the instrument side.

## Applying Auto-Gain Values

When autogain is defined in a protocol, for each auto-gain the variables auto_duration<index> and auto_bright<index> will be returned and can be used to set the pulse_length and pulsed_lights_brightness. To access the output for the auto-gain with the index 2, the variables would be auto_duration2 and auto_bright2.

    "pulse_length": [
      [ "auto_duration<index>" ],
      ...
    ],
    "pulsed_lights_brightness": [
      [ "auto_bright<index>" ],
      ...
    ]

## Code Example

    {
      "autogain": [
        [ 1, 3, 1, 30, 3000 ]
      ],
      "pulse_length": [
        [ "auto_duration1" ]
      ],
      "pulsed_lights": [
        [ 3 ]
      ],
      "pulsed_lights_brightness": [
        [ "auto_bright1" ]
      ],
      "detectors": [
        [ 1 ]
      ]
    }
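If protocols are generated programmatically, keeping the index consistent between the `autogain` entry and the `auto_duration<index>`/`auto_bright<index>` references is easy to get wrong by hand. A sketch in Python that builds the code example above from one set of parameters (the field names follow the JSON shown here; any other protocol keys your instrument expects are not covered):

```python
import json

# Hypothetical auto-gain parameters: index 1, LED 3, detector 1,
# 30 µs pulse, target intensity 3000
index, led, detector, pulse_len, target = 1, 3, 1, 30, 3000

protocol = {
    "autogain": [[index, led, detector, pulse_len, target]],
    # Reference the auto-gain outputs by the same index, so they can't drift apart
    "pulse_length": [[f"auto_duration{index}"]],
    "pulsed_lights": [[led]],
    "pulsed_lights_brightness": [[f"auto_bright{index}"]],
    "detectors": [[detector]],
}
print(json.dumps(protocol, indent=2))
```

Because the index appears only once, the `auto_duration1`/`auto_bright1` variable names always match the defined auto-gain.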
# Set material of a vertex group of a certain face

I have a mesh and I want to know how I can color the whole vertex group of a certain face. I currently can color a face from a list as shown below, but I want to set the material of the whole vertex group of each of these faces in the list instead of just the faces.

    for poly in bpy.data.objects['Mymesh'].data.polygons:
        # set material 1 for the whole mesh
        poly.material_index = 0

    for a in list:
        # set material 2 for some faces - I want it instead to set the material of the vertex groups
        # of these faces in the list instead of just the faces
        bpy.data.objects['Mymesh'].data.polygons[a].material_index = 1

You can modify a previous answer to do this:

    import bpy

    o = bpy.context.object
    material_index = 1  # the material slot we want to use to shade polygons in this VG

    for p in o.data.polygons:
        # Get all the vertex groups of all the vertices of this polygon
        verts_vertexGroups = [ g.group for v in p.vertices for g in o.data.vertices[ v ].groups ]

        # Find the most frequent (mode) of all vertex groups
        counts = [ verts_vertexGroups.count( idx ) for idx in verts_vertexGroups ]
        modeIndex = counts.index( max( counts ) )
        mode = verts_vertexGroups[ modeIndex ]

        groupName = o.vertex_groups[ mode ].name

        # If this polygon belongs to a specific VG, change its material
        if groupName == "VGwereInterestedIn":
            p.material_index = material_index

### EDITED:

I revised the code to answer the actual question, while leaving the above answer in case it interests other users. The code below takes a list of face indices and a material index, and assigns the material index to all the polygons that belong to these faces' vertex group/s.
    import bpy

    o = bpy.context.object

    # List of face indices that represent vertex groups we want to shade with a specific material
    myFaceList = [ 0, 100, 52, 32 ]

    # Index of the material for shading the vertex group/s of the faces above
    materialIndex = 1

    def find_polygons_vertexgroup( p ):
        # Get all the vertex groups of all the vertices of this polygon
        verts_vertexGroups = [ g.group for v in p.vertices for g in o.data.vertices[ v ].groups ]

        # Find the most frequent (mode) of all vertex groups
        counts = [ verts_vertexGroups.count( idx ) for idx in verts_vertexGroups ]
        modeIndex = counts.index( max( counts ) )
        mode = verts_vertexGroups[ modeIndex ]

        return mode

    for pi in myFaceList:
        # Find the current polygon's vertex group index
        vgIndex = find_polygons_vertexgroup( o.data.polygons[ pi ] )

        # Iterate over all polygons and change their material if their VG is the same
        for p in o.data.polygons:
            p_vgIndex = find_polygons_vertexgroup( p )
            if p_vgIndex == vgIndex:
                p.material_index = materialIndex

• Thanks for your answer. I think you misunderstood my question. I don't want to color a specific VG by name. Each vertex in my mesh is assigned to one and only one vertex group, where a vertex group will have its unique vertices, so no vertex sharing among vertex groups. I then have a list of faces which I want to color the whole vertex groups connected to these faces with a certain material. Can you leave the answer above as I may use it in the future and it may be helpful to others and just edit it with another approach to fit my case? Many thanks – Tak Dec 13 '16 at 23:04

• I've tried doing it but I did it with a couple of nested for loops which made it very slow, so I look forward to your efficient solution, and as I mentioned previously, please don't remove or update the script in your answer as it may be helpful to others, I'd just appreciate it if you could edit the answer and include another script for my problem. I've already upvoted your answer.
– Tak Dec 13 '16 at 23:25 • @Tak, revised answer included in edited answer. Let me know if this works for you. – TLousky Dec 14 '16 at 9:39 • Hello, sorry to bother you again. I'm currently using this method to color the whole vertex groups of faces in a list. The problem that my mesh is very high poly and blender hangs for ages during the animation every frame (about 20 sec per frame) so too slow. There is no better way to set the material of a vertex group at once? so if vertex a has weights in both vg1 and vg2 straight away set the color or material of these groups to red without having to loop on all vertices? – Tak Feb 27 '17 at 9:03 • @Tak, sorry, unfortunately not really available to try solve this at the moment (workload). I'd try using operators to mass select the entire vertex group (bpy.ops.object.vertex_group_select()) and mass assign materials (blender.stackexchange.com/questions/27190/…) – TLousky Feb 28 '17 at 8:37
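The slowness reported in the comments comes from recomputing each polygon's dominant vertex group once per face in the list (nested loops over all polygons). Caching the per-polygon result in a single pass makes the work linear. A sketch of the idea on plain Python data structures (hypothetical polygon vertex-index lists and a vertex-to-groups mapping standing in for the `bpy` data, so it runs outside Blender):

```python
from collections import Counter

# Hypothetical stand-ins for bpy data: each polygon is a list of vertex
# indices, and each vertex maps to the vertex-group indices it belongs to.
polygons = [[0, 1, 2], [2, 3, 4], [4, 5, 6], [0, 6, 5]]
vertex_groups = {0: [0], 1: [0], 2: [0], 3: [1], 4: [1], 5: [1], 6: [1]}
face_list = [0]          # faces whose whole vertex group we want to recolor
material_index = 1

# Pass 1: compute each polygon's dominant (mode) vertex group exactly once.
poly_vg = [
    Counter(g for v in poly for g in vertex_groups[v]).most_common(1)[0][0]
    for poly in polygons
]

# Pass 2: collect the target groups from the face list, then assign materials.
target_vgs = {poly_vg[i] for i in face_list}
materials = {i: material_index for i, vg in enumerate(poly_vg) if vg in target_vgs}
print(materials)  # {0: 1} -- only polygon 0 shares face 0's vertex group
```

Inside Blender the same two-pass structure applies: build `poly_vg` once from `o.data.polygons`, then set `material_index` in a single final loop.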
## Friday, December 27, 2013

### Technocracy. It's new. No. It's not.

I've read a number of recent books and articles about how technology, particularly computers and robots, will change everything and create a bipartite society, where "there will be those who tell computers what to do, and those who are told what to do by computers" – in a compact form. (As a computer engineer, I sort of approve of this message. :-)

This idea of a bipartite society with a small elite lording over the undifferentiated masses is not new (really, not new at all). That it's a result of technology instead of divine intervention or application of force is also not new, but, since most people have an "everything that happened before me is irrelevant because my birth was the most important event in the totality of space-time" attitude towards the past, this is ignored.

There are a few reasons contributing to the popularity of this idea:

It's mostly right, and in a highly visible way. Technological change makes life harder for those who fail to adapt to it. In the case of better robotics and smarter computers, adaptation is more difficult than it was for other changes like the production line or electricity. One way to see this is to see how previously personalized services have been first productized (ex: going from real customer service representatives to people following an interactive script on a computer) and then the production processes were automated (ex: from script-following humans to voice-recognition speech interfaces to computers). Technological change is real, it's important, and it's been a constant for a long time now.

(Added Jan. 7, 2014: Yes, I understand that the economics of technology adoption have a lot to do with things other than technology, namely broader labor and economic policies. I have teaching exercises for the specific purpose of making that point to execs and MBAs.
Because discussion of these topics touches the boundaries of politics, I keep them out of my blog.)

It's partially wrong, but in a non-obvious way. People adapt and new careers appear that weren't possible before; there are skilled jobs available, only the people who write books/punditize/etc don't understand them; and humans are social animals. The reason why these are non-obvious, in order: it's hard to forecast evolution of the use of a technology; people with "knowledge work" jobs don't get Mike Rowe's point about skilled manual labor; most people don't realize how social they are.

(On top of these sociological reasons there's a basic point of product engineering that most authors/pundits/etc don't get, as they're not product engineers themselves: a prototype or technology demonstrator working in laboratory conditions or very limited and specific circumstances is a far cry from a product fitting with the existing infrastructure at large and usable by an average customer. Ignoring this difference leads authors/pundits/etc to over-estimate the speed of technological change and therefore the capacity of regular people to adapt to it.)

Change sells. There's really a very small market for "work hard and consume less than you produce" advice, for two reasons. First, people who are likely to take that advice already know it. Second, most people want a shortcut or an edge; if all that matters is change, that's a shortcut (no need to learn what others have spent time learning) and gives the audience an edge over other people who didn't get the message.

It appeals to the chattering classes. The chattering classes tend to see themselves as the elite (mostly incorrectly, in the long term, especially for information technologies) and therefore the idea that technology will cement their ascendancy over the rest of the population appeals to them. That they don't, in general, understand the technologies, is beyond their grasp.

It appeals to the creators of these technologies.
Obviously so, as they are hailed as the creators of the new order. And since these tend to be successful people whom some/many others want to understand or imitate, there's a ready market for books/tv/consulting. Interestingly enough, most of the writers, pundits, etc, especially the more successful ones, are barely conversant with the technical foundations of the technologies. Hence the constant reference to unimportant details and biographical information.

It appeals to those who are failing. It suggests that one's problems come from outside, from change that is being imposed on them. Therefore failure is not the result of goofing off in school, going to work under the influence of mind-altering substances, lack of self-control, the uselessness of a degree in Narcissism Studies from Givusyourstudentloans U. No, it's someone else's fault. Don't bother with STEM, business, or learning a useful skill. Above all, don't do anything that might harm your self-esteem, like taking a technical MOOC with grades.

It appeals to those in power. First, it justifies the existence of a class of people who deserve to have power over others. Second, it describes a social problem that can only be solved by the application of power: since structural change creates a permanent underclass, not by their fault, wealth must be redistributed for the common good. Third, it readily identifies the class of people who must be punished/taxed: the creators of these technologies, who also create new sources of wealth to be taxed. Fourth, it absolves those in power from responsibility, since it's technology, not policy that is to blame. Fifth, it suggests that technology and other agents of change should be brought under the control of the powerful, since they can wreak such havoc in society.
To be clear, technology changes society and has been doing so since fire, the wheel, agriculture, writing, – skipping ahead – printing press, systematic experiments, the production line, electricity, DNA testing, selfies... The changes these technologies have brought are now integrated in the way we view the world, making them so "obvious" that they don't really count.

Or do they? Maybe "we" should do some research. If these changes were obvious, certainly they were accurately predicted at the time. Like "we" are doing now with robots and AI. You can find paper books about these changes on your local sky library dirigible, which you reach with your nuclear-powered Plymouth flying car, wearing your metal fabric onesie with a zipper on your shoulder, right after getting your weekly nutrition pill. You can listen to one of three channels bringing you music via telephone wires, from the best orchestras in Philadelphia and St. Louis while you read. Or you can look up older predictions using Google on your iPhone, while you walk in wool socks and leather shoes to drink coffee brewed in the same manner as in 1900. The price changed, though. It's much cheaper to make, but you pay a lot more for the ambiance.

## Saturday, December 21, 2013

### Books I read in 2013

At the beginning of 2013 I decided to keep a book log (including Audible audiobooks). These are the non-work books I read in 2013, by author. Some are re-readings, and there's still enough time for a few more. I'll be adding notes later.

✏ Chris Anderson: Makers: The New Industrial Revolution
✏ Julian Assange et alii: Cypherpunks: Freedom and the Future of the Internet
✏ Walter Bagehot: Lombard Street: A Description of the Money Market (reread; free)
✏ Albert-Laszlo Barabasi: Bursts: The Hidden Pattern Behind Everything We Do
✏ Gregory Benford: Foundation and Fear (reread on Dec 31st.)

Screenshot; I'm impressed by how careful Prof.
Benford is to make sure that none of his personal feelings about being an academic in America comes across in his SciFi writing. The Foundation series is a good illustration of the preachiness and neoteny of most science fiction; it's mostly amateur sociology with minimal exploration of the real changes that technology creates. As a former aficionado, I have some residual interest in the genre, but you get better futurism from a McKinsey or Bain conjectural report than from most SciFi, even Cyberpunk.

✏ Gregory Benford and Larry Niven: Bowl of Heaven

The only new sci-fi book I read this year. Hard sci-fi took a hit after 2000, when some authors decided to join the culture wars and write metaphors for the american political system.

In July I decided to reduce the amount of stuff I owned, so I replaced a number of paper books with electronic copies. This led to an assessment of which scifi books I wanted to reread. Earth was one of them, Brin's best book in my opinion. Some of the other scifi books to get replaced by eBooks are mentioned below. In the end, I donated or recycled almost two thousand paper books.

✏ Sean Carroll: The Particle at the End of the Universe: How the Hunt for the Higgs Boson Leads Us to the Edge of a New World
✏ Phillip Dennis Cate et alii: Impressionists on the Water (FAMSF Exhibition Catalog)
✏ Arthur C Clarke: Childhood's End (reread)
✏ Daniel Dennett: Intuition Pumps and Other Tools for Thinking
✏ Edward Dolnick: The Forger's Spell

Few things describe the arts world as precisely as the end of chapter 11: "Van Meegeren fooled the world with a seventeenth-century painting made of plastic."

✏ Niall Ferguson: Civilization: The West and the Rest
✏ Niall Ferguson: The Great Degeneration

Like Civilization, you can get most of the content of the book from Niall Ferguson's talks. But I wanted the notes and details so I read the books.
✏ Seth Godin: The Icarus Deception
✏ Rose-Marie Hagen and Ranier Hagen: Masterpieces in Detail (Art book)
✏ Chip Heath and Dan Heath: Decisive: How to Make Better Choices in Life and Work
✏ Robert Heinlein: The Cat Who Walks Through Walls (reread)
✏ Daniel Kahneman: Thinking Fast and Slow (reread)
✏ Walter Lewin: For the Love of Physics

As autobiographies of scientists go, this one is more educative than the Feynman pair (Surely you jest, Mr Feynman and What do you care what other people think?). Lewin is a superstar Physics professor from MIT, who would be the first to say that students learn Physics only when they solve the problem sets, not in the lectures. Screenshot.

✏ William Manchester and Paul Reid: The Last Lion: Winston Spencer Churchill: Defender of the Realm, 1940-1965

Vol. III of Manchester's biography of Churchill, written by Reid based on Manchester's notes. Hard on the French. Read in one day plus two evenings. 1200 pages, but the last 130 are notes and references.

✏ Michael Moss: Salt Sugar Fat: How the Food Giants Hooked Us
✏ Larry Niven and Jerry Pournelle: Lucifer's Hammer (reread)

One of my favorite sci-fi books (and the only Pournelle to make my top 10). I reread parts of it often (notes and highlights help). I bought it 5 times in different languages and formats.

✏ Donald Norman: The Design of Everyday Things: Revised and Expanded Edition (added Dec 25)

There are enough changes from the previous edition to merit purchasing it anew, but in my case I get the added benefit of moving from a paper edition to a Kindle book, reducing the need for physical storage space. Subsumes Living With Complexity as well.

✏ Iain Pears: The Bernini Bust (reread)
✏ Iain Pears: The Titian Committee (reread)
✏ Terry Pratchett and Neil Gaiman: Good Omens (reread)
✏ Mark Sisson: The Primal Blueprint

Good book, though I wouldn't want to give up resistant starch altogether and the high-impact exercise recommendation is better ignored.
But worth reading as motivation for life changes.

✏ Benn Steil: The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order

Reread for the 10th or 20th time, despite being over 1000 pages long (read it in one very long reading marathon when it came out); possibly Stephenson's best book. Screenshot.

✏ Nassim Nicholas Taleb: Antifragile: Things That Gain from Disorder

Best non-fiction book I read in 2013. I think I'll be rereading my highlights and notes for years to come. Sometimes NNT's style may be a little over the top, but the substance is worth it.

✏ Barbara Tuchman: The Guns of August (reread on Remembrance Day; screenshot)
✏ Barbara Tuchman: The March of Folly
✏ Mark Twain: The Innocents Abroad

Mark Twain takes on tourism, Americans, and foreigners. For some reason I had never read it before. It's available for free, since it predates the Mickey Mouse copyright rules.

✏ Lea Van Der Vinde et alii: Girl with a Pearl Earring: Dutch Paintings from the Mauritshuis (FAMSF Exhibit Catalog)
✏ Ingo Walther and Norbert Wolf: Masterpieces of Illumination: The World's Most Famous Manuscripts 400 To 1600 (Art book)

It's a book about class, friendship, religion, and growing up. The movie was a distortion of the book as bad as Starship Troopers was of the Heinlein original; the ITV series was acceptable, but the writing itself is a major part of the value of the book, and cannot be appreciated from video.

✏ Evelyn Waugh: Sword of Honor (reread)
✏ Evelyn Waugh: Vile Bodies (reread)
✏ P.G. Wodehouse: Big Money (reread)
✏ P.G. Wodehouse: Carry On Jeeves (reread)
✏ P.G. Wodehouse: The Code of the Woosters (reread, for the 20th time or so...)
✏ P.G. Wodehouse: A Man of Means

Found a Wodehouse I hadn't read before. The year was worth it. Huzzah!

✏ P.G. Wodehouse: Mulliner Nights (reread)
✏ William Zinsser: On Writing Well (reread)

Best book on writing ever, IMNSHO.
I reread parts of it often; read the whole book at least once a year; and reread my notes about it before starting any writing project. Technically it's a work book for me, but I like to read it for pleasure as well.

The secret to reading this many books: watching very little television. Most of these books take only a few hours to read (though some may take a lot more), so an evening or two without television is enough to read a book. By that metric, I read a lot less than my potential, and that's not considering the multitasking afforded by audiobooks during walks or repetitive exercise like Concept II rowing.

## Sunday, December 8, 2013

### How strong must evidence be to reverse belief?

I've seen this quote attributed to Carl Sagan and to Christopher Hitchens, but I think Rev. Thomas Bayes may have called dibs on it a few centuries ago: "Extraordinary claims demand extraordinary evidence."

As I've written before [at length, poorly, and desperately in need of an editor], I find the attitude of most people who use this phrase counterproductive. But instead of pointlessly arguing back and forth like they do in certain disciplines, we'll dig into the numbers involved and see what we can learn.

I know a super-genius, an Einstein-grade mind, who for decades believed that "this is the year that the Red Sox will come back and start a long series of victories," a belief unfounded in reality. Yes, very smart people can have strong beliefs that appear nonsensical to others.

Let's say that the claim is about some proposition ("God exists," "Red Sox are a great team") which we'll call $G$. The prior belief in $G$ we'll denote $q \doteq \Pr[G]$; so a person may be a strong believer if $q = .99$ or a moderate believer if $q = .80$. Let's call the evidence against $G$, $E$, which is a binary observable ("no Rapture," "loss against the Chicago Cubs"), with the probability of observing the evidence given that $G$ is false denoted by $p \doteq \Pr[E| \neg G]$.
We'll consider evidence that has symmetric error probabilities, $\Pr[E|G] = 1 - \Pr[E|\neg G]$, so the probability that we get a false positive is equal to that of a false negative, $1-p$. For example, if $p=0.90$ there's a 10% chance of no Rapture even if God exists; if $p = 0.99$, then there's only a 1% probability of the faithful burning up in Hellfire on Earth with the rest of us sinners, when there is a God. Note that with symmetric errors, $p=0.90$ has the interesting characteristic that there's a 10% chance of Rapture with no God at all, which probably would say something about the design of the experiment (psilocybin would be my guess).

Now this is the question we want to ask: for any given prior belief $q$ in $G$, how good would the evidence against it have to be (meaning how big would $p$ have to be) to convince the believer to flip her beliefs, i.e. to believe against $G$ with the probability $q$, or formally, to have $\Pr[G|E] = 1-q$. The reason to go for a flip of beliefs is empirical: no zealot like a convert.

(Really trying to goose up page views here. Was it Stephen Hawking who said a book's potential audience is halved by each formula in it? This blog must be down to individual quarks...)

For example, if JohnDCL believes in the greatness of the Red Sox with $q= 0.99$, how strong a piece of evidence of Sox suckage would be necessary for JohnDCL to think that the probability of the Red Sox being great is only 1%?

The result is $p \approx 0.9999$; in other words, JohnDCL would have to believe that the evidence only gives a false positive (it's evidence against $G$, remember) once every 10,000 tries. Let's say the evidence is losing against the Chicago Cubs. For JohnDCL to flip his beliefs based on observing such a defeat, he'd have to believe that, were the Sox a great team, they could play the Cubs 10,000 times and lose only once. (Recall that we're assuming symmetric errors, for simplicity.)
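The arithmetic behind this flip fits in a few lines. With symmetric errors, Bayes' rule gives $\Pr[G|E] = q(1-p)/(q(1-p)+(1-q)p)$; setting this equal to $1-q$ and solving yields $(1-p)/p = ((1-q)/q)^2$. A quick check (not part of the original post):

```python
# Evidence strength p needed to flip a prior q down to 1-q, assuming
# symmetric error probabilities. From Bayes' rule, the flip condition
# reduces to (1-p)/p = ((1-q)/q)**2.
def required_p(q):
    r = ((1 - q) / q) ** 2   # false-positive odds the believer must accept
    return 1 / (1 + r)

for q in (0.99, 0.999, 0.9999):
    p = required_p(q)
    # prior, required evidence strength, "one false positive in ~N tries"
    print(q, p, round(1 / (1 - p)))
```

For $q = 0.99$ this returns $p \approx 0.9999$, i.e. roughly one false positive in ten thousand tries, matching the JohnDCL example.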
Here are a few other values for $q$ and corresponding $p$: $q = 0.999 \Rightarrow p \approx 0.999 999$ one false positive in a million tries; $q = 0.9999 \Rightarrow p \approx 0.999 999 99$ one false positive in one hundred million tries; $q = 0.99999 \Rightarrow p \approx 0.999 999 999 9$ one false positive in ten billion tries. $q = 0.999999 \Rightarrow p \approx 0.999 999 999 999$ one false positive in one trillion tries. (How strong is faith in the Red Sox? In God? In Quantitative Easing Forever And Ever?) In other words, it's true that to reverse a strong belief you need extraordinary evidence. What is equally true is that the beliefs and the evidence aren't perceived equally by all participants in a conversation. People who proselytize for a cause will not be able to convince anyone else until they see the probabilities from the other person's point of view. Of course, those who say "extraordinary claims demand extraordinary evidence" typically see the world from their own point of view only. ## Friday, December 6, 2013 ### Word salad of scientific jargon "The scientists that I respect are scientists who work hard to be understood, to use language clearly, to use words correctly, and to understand what is going on. We have been subjected to a kind of word salad of scientific jargon, used out of context, inappropriately, apparently uncomprehendingly." – Richard Dawkins, in the video Dangerous Ideas - Deepak Chopra and Richard Dawkins, approximately 27 minutes in. That's how I feel about a lot of technical communications: conference panels, presentations, and articles.  An observed regularity: the better the researchers, the less they tend to go into "word salad of scientific jargon" mode. ## Thursday, December 5, 2013 ### Identifying the problem: innumeracy or science ignorance? 
In previous posts I said that many people who believe in Science™ (as opposed to people who know science) can't answer simple questions, like "what is the kinetic energy of a 2-ton SUV going 65MPH?" An insightful person suggested that the problem might be due to innumeracy (which is bad in itself; read the linked book) so here's another version that requires no computation: which has more kinetic energy, the aforementioned SUV or a 1-ton car going 130MPH?

A sample of three people who believe in Science™ showed 100% inability to answer with explanation. (Explanation is necessary because a random pick will be right about half of the time.) Two of three picked the wrong answer (SUV) and the third "felt" the car was the right answer.

The question can be answered without calculation, as long as one knows how mass and speed relate to kinetic energy. We're not talking advanced science here: this used to be taught in the ~~seventh~~ ninth grade.

-- -- -- --

Note: these posts have nothing to do with the wrong idea that scientists have faith in science in the same sense that religious people have faith in a deity. This is about people who don't know any science but like to invoke Science™ as a talisman or a prop.

Postscript: I'm compiling a list of questions to ask when faced with a Science™ believer, tagged by fashionable intellectual pursuit; after a round of testing, I'll probably post it.
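The SUV-versus-car comparison can be settled with one line of arithmetic; a quick sketch:

```python
# Kinetic energy is (1/2) m v^2. Halving the mass while doubling the
# speed doubles the energy, so the 1-ton car at 130 MPH wins.

def kinetic_energy(mass, speed):
    """(1/2) m v^2; any consistent units will do for a comparison."""
    return 0.5 * mass * speed ** 2

suv = kinetic_energy(2, 65)   # 2 tons at 65 MPH
car = kinetic_energy(1, 130)  # 1 ton at 130 MPH
print(car / suv)  # 2.0 -- the lighter, faster car has twice the energy
```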
## Algebra 2 Common Core

$3$ vans and $2$ sedans.

Let $v =$ number of vans and $s =$ number of sedans.

Since there are five vehicles: $v + s = 5$ (Equation 1)

Since there are $31$ people in total, with vans carrying seven and sedans five: $7v + 5s = 31$ (Equation 2)

Thus, the system that models the situation is:
$v+s=5$ (Equation 1)
$7v+5s=31$ (Equation 2)

Solve for $v$ by subtracting $s$ from both sides of Equation 1: $v = 5-s$

Substitute $5 - s$ for $v$ in Equation 2: $7v+5s=31 \\7(5-s) + 5s = 31$

Distribute $7$: $35 - 7s + 5s = 31$

Collect like terms: $35 - 2s = 31$

Subtract $35$ from both sides of the equation: $-2s = -4$

Divide both sides by $-2$: $s = 2$

Substitute $s = 2$ into Equation 1: $v+s=5 \\v+2=5 \\v=5-2 \\v=3$

Thus, $v = 3$ and $s = 2$.

Therefore there should be $3$ vans and $2$ sedans.
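The answer is easy to verify by substituting back into both equations; a quick check in Python:

```python
# Check the worked answer: v = 3 vans and s = 2 sedans.
v, s = 3, 2
assert v + s == 5            # five vehicles (Equation 1)
assert 7 * v + 5 * s == 31   # 31 people (Equation 2)
print("3 vans and 2 sedans check out")
```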
# Reference request for an identity for tangent numbers

The tangent numbers $(T_{2n+1})=(1,2,16,272,7936,...)$ (cf. OEIS: A000182) satisfy many recurrences. I would be interested to find references for the following which I think must be very old: $T_3 -2T_1=0$, $T_5 -8T_3 =0,$ $T_7 -18T_5 +8T_3 =0,...$ or more generally $${T_{2n + 1}} = \sum\limits_{j \ge 1} {}{(-1)}^{j-1}{2^{2j}} {\binom{n+1}{2j}} {\frac{n+1-j}{n+1}}T_{2n - 2j + 1}.$$

-

Using the standard formula $$T_{2k-1}=(-1)^{k-1}2^{2k}(2^{2k}-1)\frac{B_{2k}}{2k},$$ your formula can be rewritten as $$(2^{2n+2}-1)B_{2n+2}=\sum_{j\ge1}(-1)^{n-j+1}\binom{n+1}{2j}(2^{2n-2j+2}-1)B_{2n-2j+2},$$ which looks "simpler", and also might be more recognisable by specialists, since identities for Bernoulli numbers are usually more "popular".

@Vladimir: Thank you for the idea to reformulate the identity with other known numbers. I now tried it with the Genocchi numbers $G_{2n}$. They satisfy $nT_{2n-1}=2^{2n-2}G_{2n}$. Then the identity reduces to $\sum{(-1)^j}{\binom{n}{2j}}G_{2n-2j}$. This is Seidel's identity for the Genocchi numbers. So I accept your comments as answer to my question. – Johann Cigler Jul 25 '12 at 20:17

Ah, that's nice. I noticed that the identity for Bernoulli numbers I ended up with becomes especially simple if one considers the sequence of numbers $\{(2^{2n}-1)B_{2n}\}$ (which is very close to the sequence of Genocchi numbers as I now see), but I did not explore it further. Glad I could help you to figure it all out! – Vladimir Dotsenko Jul 25 '12 at 20:28
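For anyone who wants to check the recurrence numerically before chasing references, a quick sketch in Python (not a proof; it uses the tangent numbers quoted from OEIS A000182):

```python
# Verify T_{2n+1} = sum_{j>=1} (-1)^(j-1) 2^(2j) C(n+1, 2j)
#                   * (n+1-j)/(n+1) * T_{2n-2j+1}   for small n,
# using exact rational arithmetic.
from fractions import Fraction
from math import comb

T = {1: 1, 3: 2, 5: 16, 7: 272, 9: 7936}  # T_{2n+1}, n = 0..4 (OEIS A000182)

def rhs(n):
    total = Fraction(0)
    for j in range(1, (n + 1) // 2 + 1):  # all j with 2j <= n+1
        total += ((-1) ** (j - 1) * 2 ** (2 * j) * comb(n + 1, 2 * j)
                  * Fraction(n + 1 - j, n + 1) * T[2 * n - 2 * j + 1])
    return total

for n in range(1, 5):
    assert rhs(n) == T[2 * n + 1]
print("recurrence holds for n = 1..4")
```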
emerge is a tool to build the KDE sources and its third-party requirements on MS Windows. It is the easy way to build KDE on MS Windows.

## Setting up emerge

### Setting up a compiler

Currently emerge supports both the MinGW and Microsoft Visual Studio (msvc) compilers. While MinGW is provided by emerge, Visual Studio must be installed by the user.

### DirectX SDK

In order to compile the Qt5 qtbase package with MinGW, you will also need to install the Microsoft DirectX SDK; make sure to open a new command line window after the installation.

### Installing Emerge

• Start a powershell environment.
• Allow execution of powershell scripts: Set-ExecutionPolicy RemoteSigned
• Install emerge and follow the instructions

## Using emerge

To use emerge you need to start a Powershell window, point it to KDEROOT\emerge and run the initialization script. For example:

C:\KDEROOT\emerge\kdeenv.ps1

This tells emerge about your environment settings (e.g. paths). It will load your configuration from KDEROOT\etc\kdesettings.ini. It should not give any error messages, otherwise emerge will not work as expected. The output should look similar to this one (of course with your paths):

PS C:\kderoot\emerge>.\kdeenv.ps1
KDEROOT  : C:\kderoot\emerge
KDECOMPILER : msvc2015
PYTHONPATH  : C:\kderoot\python
PS C:\kderoot\emerge>

### Installing the base system

You are now ready to start building KDE. It is recommended to do so progressively, relying on emerge to automatically resolve the required dependencies at each step:

• Enter emerge qt5. This will fetch and install Windows versions of numerous UNIX-like utilities and libraries, then check out, compile and install Qt. This will take up to several hours.
• Enter emerge frameworks. This will check out, compile and install the KDE Frameworks 5 modules.

You will now have successfully installed a base KDE system and can now install other KDE modules as required.
Every time you want to update or install a package, you should first update your emerge checkout to ensure you are using the latest package recipes. Simply run:

cd C:\kderoot\emerge
git pull

### Common emerge commands

• Installing a package and its dependencies: Simply run emerge packagename
• Updating an installed package: Once you have packagename built, type emerge -i packagename to update packagename.

Content is available under Creative Commons License SA 4.0 unless otherwise noted.
# dBZ (meteorology)

The scale of dBZ values can be seen along the bottom of the image.

dBZ stands for decibels relative to Z. It is a meteorological measure of the equivalent reflectivity (Z) of a radar signal reflected off a remote object.[1] The reference level for Z is 1 mm⁶ m⁻³, which is equal to 1 μm³. It is related to the number of drops per unit volume and the sixth power of drop diameter.

Reflectivity of a cloud is dependent on the number and size of reflectors (hydrometeors), which include rain, snow, graupel, and hail. A large number of small hydrometeors will reflect the same as one large hydrometeor. The signal returned to the radar will be equivalent in both situations, so a group of small hydrometeors is virtually indistinguishable from one large hydrometeor on the resulting radar image. A meteorologist can determine the difference between one large hydrometeor and a group of small hydrometeors, as well as the type of hydrometeor, using the polarization and phase shifting of the Doppler radar. The reflectivity image is just one type of image produced by the radar; from it alone a meteorologist cannot tell the difference between nickel-sized hail and heavy rain. In combination with other images gathered by the radar during the same scan (dual polarization products), they can distinguish between hail, rain, snow, biologicals (birds, bugs), and other atmospheric phenomena.
One dBZ scale of rain:

• >65 Extreme
• 46-65 heavy
• 24-45 moderate
• 8-23 light
• 0-8 Barely anything

dBZ values can be converted to rainfall rates in millimetres per hour using this formula:

$\frac{\mathrm{mm}}{\mathrm{hr}} = \left( \frac{10^{(dBZ/10)}}{200} \right)^{5 \over 8}$[2]

| dBZ | R (mm/h) | Rate (in/hr) | Intensity |
|----:|---------:|-------------:|:----------|
| 5 | 0.07 | < 0.01 | Hardly Noticeable |
| 10 | 0.15 | < 0.01 | Light Mist |
| 15 | 0.3 | 0.01 | Mist |
| 20 | 0.6 | 0.02 | Very Light |
| 25 | 1.3 | 0.05 | Light |
| 30 | 2.7 | 0.1 | Light to Moderate |
| 35 | 5.6 | 0.22 | Moderate Rain |
| 40 | 11.53 | 0.45 | Moderate Rain |
| 45 | 23.7 | 0.92 | Moderate to Heavy |
| 50 | 48.6 | 1.90 | Heavy |
| 55 | 100 | 4 | Very Heavy / Small Hail |
| 60 | 205 | 8 | Extreme / Moderate Hail |
| 65 | 421 | 16.6 | Extreme / Large Hail |
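The conversion formula above is the common $Z = 200R^{1.6}$ (Marshall-Palmer) relation inverted; a quick sketch:

```python
# dBZ -> rainfall rate in mm/hr, as given above:
# mm/hr = (10**(dBZ/10) / 200) ** (5/8)
def dbz_to_mm_per_hr(dbz):
    return (10 ** (dbz / 10) / 200) ** (5 / 8)

print(round(dbz_to_mm_per_hr(50), 1))  # 48.6, matching the "Heavy" row
```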
# Is it possible to deduce a function from its Fourier series?

1. Oct 3, 2011

### zahero_2007

Is it possible to deduce a function from its Fourier series?

2. Oct 3, 2011

### lurflurf

You can recover (1/2)(f(x-)+f(x+)) from the Fourier series of f. Among other things, the Fourier series does not distinguish between functions that differ on a set of measure zero. This is not a major problem, as we often either know what f does on sets of measure zero or do not care.

If you mean can we find an equation of a certain form for f: generally yes in principle, but it can be very difficult. For example, if you were given the Fourier series of, say, (3x^3+2x+4)e^(3x^2+9x)cos(3x^2+5x-7)sin(9x^2+4x+3), you would have a hard time guessing that.
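lurflurf's first point, that the series converges to (1/2)(f(x-)+f(x+)) at a jump, is easy to see numerically with a square wave; a sketch in plain Python:

```python
import math

# Square wave: f = -1 on (-pi, 0), +1 on (0, pi). Its Fourier series is
# (4/pi) * sum_{k>=0} sin((2k+1)x) / (2k+1). At the jump x = 0 the
# partial sums converge to the midpoint (f(0-) + f(0+)) / 2 = 0,
# regardless of what value f was assigned at 0.
def partial_sum(x, n_terms):
    return (4 / math.pi) * sum(math.sin((2 * k + 1) * x) / (2 * k + 1)
                               for k in range(n_terms))

print(partial_sum(0.0, 1000))          # 0.0 -- the midpoint at the jump
print(partial_sum(math.pi / 2, 1000))  # ~1.0 -- recovers f away from jumps
```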
## Čech analytic and almost $$K$$-descriptive spaces. (English) Zbl 0806.54030

Recall that a topological space $$X$$ is Čech-complete if it is a $$G_\delta$$ in some compact (Hausdorff) space, and Čech-analytic if it can be obtained from the Borel sets of $$X$$ by the Suslin operation (equivalently, Čech-analytic spaces are the completely regular projections of Čech-complete spaces along a separable metrizable factor). A family $${\mathcal U}$$ in $$X$$ is called $$\text{sb}_d$$-$$\sigma$$-decomposable if $$U = \bigcup\{U_n : n < \omega\}$$ for each $$U \in {\mathcal U}$$ such that each family $$\{U_n : U \in {\mathcal U}\}$$ is a disjoint family with a scattered base (equivalently, if $${\mathcal U}$$ is point-countable with $$\sigma$$-scattered base). Finally, a space $$X$$ is almost $$K$$-descriptive (resp. almost descriptive) if there is a completely metrizable $$M$$ and an upper semi-continuous (resp. continuous) compact-valued map $$f:M \to X$$ which preserves $$\text{sb}_d$$-$$\sigma$$-decomposable families.

The authors show that every Čech-analytic space is almost $$K$$-descriptive, and that almost $$K$$-descriptive and Čech-analytic coincide with each other (and with analyticity) in metric spaces. Furthermore, they show that the class of almost $$K$$-descriptive spaces shares various properties with the Čech-analytic spaces: it is closed under countable unions and intersections, the Suslin operation, closed subspaces, and open subspaces; and an almost $$K$$-descriptive completely regular space is $$\sigma$$-scattered or contains a compact perfect set. Finally, it is shown that almost $$K$$-descriptive and almost descriptive coincide for subspaces of Banach spaces with the weak topology.

### MSC:

54H05 Descriptive set theory (topological aspects of Borel, analytic, projective, etc. sets)

Full Text:

### References:

[1] G. Fodor: On stationary sets and regressive functions. Acta Sci. Math. (Szeged) 27, 105-110. · Zbl 0199.02102
[2] D.H.
Fremlin: Čech-analytic spaces. [3] Z. Frolík: Topologically complete spaces. Comm. Math. Univ. Carol. 1 (1960), 3-15. · Zbl 0100.18702 [4] Z. Frolík: On the topological product of paracompact spaces. Bull. Acad. Polon. Sci. Ser. Sci. Math. Astr. Phys. VIII (1960), 747-750. · Zbl 0099.38601 [5] Z. Frolík: Absolute Borel and Souslin sets. Pac. J. Math. 32 (1970), no. 3, 663-683. · Zbl 0215.24002 [6] Z. Frolík and P. Holický: Analytic and Luzin spaces (non-separable case). Topology and its Appl. 19 (1985), 129-156. · Zbl 0579.54026 [7] Z. Frolík and P. Holický: Applications of Luzinian separation priciples (non-separable case). Fund. math. CXVII (1983), 165-185. · Zbl 0543.54035 [8] G. Gruenhage and J. Pelant: Analytic spaces and paracompactness of $$X^2\backslash \Delta$$. Topology and its Appl. 28 (1988), 11-15. · Zbl 0636.54025 [9] R.W. Hansell: Descriptive sets and the topology of nonseparable Banach spaces. · Zbl 0982.46012 [10] R.W. Hansell: On characterizing nonseparable analytic and extended Borel sets as types of continuous images. Proc. London Math. Soc. 28 (1974), no. 3, 683-699. · Zbl 0313.54044 [11] J.E. Jayne, I. Namioka and C.A. Rogers: Properties like the Radon-Nikodým property. [12] G. Koumoullis: Topological spaces containing compact perfect sets and Prohorov spaces. Topology and its Appl. 21 (1985), 59-71. · Zbl 0574.54041 [13] I. Namioka: Radon-Nikodým compact spaces and fragmentability. Mathematika 34 (1987), 258-281. · Zbl 0654.46017 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
# How do you solve 2sqrtx + 3 = 10?

Jul 24, 2015

You isolate $\sqrt{x}$ on one side, then square both sides of the equation.

#### Explanation:

The first thing you need to do in order to solve this equation is isolate $\sqrt{x}$ on one side of the equation. To do this, first add $-3$ to both sides of the equation

$2 \sqrt{x} + \cancel{3} - \cancel{3} = 10 - 3$

$2 \sqrt{x} = 7$

Next, divide both sides of the equation by $2$

$\frac{\cancel{2} \cdot \sqrt{x}}{\cancel{2}} = \frac{7}{2}$

$\sqrt{x} = \frac{7}{2}$

To get the value of $x$, square both sides

$\left(\sqrt{x}\right)^2 = \left(\frac{7}{2}\right)^2$

$x = \textcolor{green}{\frac{49}{4}}$
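A one-line numerical check of the result:

```python
import math

# x = 49/4 should satisfy the original equation 2*sqrt(x) + 3 = 10.
x = 49 / 4
print(2 * math.sqrt(x) + 3)  # 10.0
```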
# How do I solve algebraic fractions with two fractions on one side?

How do I solve algebraic fractions with two fractions on one side? $$\frac{a}{4}-\frac{a+2}{3}=9$$ Just like that one, what would I do? Please explain why you did what you did. I can't show working out for something I don't know how to do.

• Multiply either sides by LCM$(4,3)$ – lab bhattacharjee May 31 '14 at 6:15
• @labbhattacharjee do I multiply everything on both sides by the LCM? – mahat May 31 '14 at 6:15
• by doing that you should learn why LCM is used – lab bhattacharjee May 31 '14 at 6:17
• Alternatively break $\frac{a+2}3$ to $\frac a3+\frac23$. – peterwhy May 31 '14 at 6:27
• $$\frac {a}{4} = \frac{1} {4}a$$ $$\frac {a+2} {3} = \frac {1}{3}a + \frac {2} {3}$$ – Adola Jan 26 '17 at 5:17

Step 1: Find the common denominator: $3\cdot 4 = 12$

Step 2: Multiply both sides by the common denominator $12$. $12\cdot \left(\dfrac{a}{4} - \dfrac{a+2}{3}\right) = 12\cdot 9$.

Step 3: Distribute $12$ into the parentheses on the left side. $12\cdot \dfrac{a}{4} - 12\cdot \dfrac{a+2}{3} = 108$, and simplify. $3a - 4(a+2) = 108$, and simplify again. $3a - 4a - 8 = 108$, and simplify again. $-a - 8 = 108$, add $8$ to both sides. $-a - 8 + 8 = 108 + 8$, simplify. $-a = 116$, and finally multiply both sides by $-1$ to get the answer. $a = -116$.

Multiply the entire equation by all of the denominators, and then solve for $a$ as if there were no fractions. So in this case, multiply the entire equation by 4. Then multiply by 3.

$$\frac {a}{4} = \frac{1} {4}a$$ $$\frac {a+2} {3} = \frac {1}{3}a + \frac {2} {3}$$ $$\frac {1} {4}a-(\frac {1} {3}a + \frac {2} {3}) = 9$$ $$\frac{1}{4}a - \frac {1} {3}a - \frac {2}{3} = 9$$

Let's find the value of $\frac {1}{4}a -\frac {1}{3}a$.
Then we will subtract $\frac {2} {3}$ from the result (that is, add $-\frac {2}{3}$): $$\frac {1} {4}a -\frac{1}{3}a = \frac {3 - 4}{12}a = -\frac {1} {12}a$$ So: $$-\frac {1} {12}a - \frac {2} {3} = 9$$ $$-\frac {1} {12}a = 9\frac {2} {3}$$ $$-\frac {1} {12}a = \frac {29} {3}$$ $$a = \frac {29} {3} * -\frac {12} {1}$$ $$a = \frac {29 * -12}{3 * 1}$$ $$a = \frac {-348} {3}$$ $$a = -116$$
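The answer can be double-checked with exact rational arithmetic; a quick sketch:

```python
# a = -116 should satisfy the original equation a/4 - (a+2)/3 = 9.
from fractions import Fraction

a = Fraction(-116)
print(a / 4 - (a + 2) / 3)  # 9
```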
A Day In The Life III This year #paperOTD (or paper of the day for any readers not on Twitter) did not go well for me. I’ve been busy with lots of things and I’m now reviewing more grants than last year because I am doing more committee work. This means I am finding less time to read one paper per day. Nonetheless I will round up the stats for this year. I only managed to read a paper on 59.2% of available days… The top ten journals that published the papers that I read: • 1 Nat Commun • 2 J Cell Biol • 3 Nature • 4= Cell • 4= eLife • 4= Traffic • 7 Science • 8= Dev Cell • 8= Mol Biol Cell • 8= Nat Cell Biol Nature Communications has published some really nice cell biology this year and I’m not surprised it’s number one. Also, I read more papers in Cell this year compared to last. The papers I read are mainly recent. Around 83% of the papers were published in 2015. Again, a significant fraction (42%) of the papers have statistical errors. Funnily enough there were no preprints in my top ten. I realised that I tend to read these when approving them as an affiliate (thoroughly enough for #paperOTD if they interest me) but I don’t mark them in the database. I think my favourite paper was this one on methods to move organelles around cells using light, see also this paper for a related method. I think I’ll try again next year to read one paper per day. I’m a bit worried that if I don’t attempt this, I simply won’t read any papers in detail. I also resolved to read one book per month in 2015. I managed this in 2014, but fell short in 2015 just like with #paperOTD. The best book from a limited selection was Matthew Cobb’s Life’s Greatest Secret. A tale of the early days of molecular biology, as it happened. I was a bit sceptical that Matthew could bring anything new to this area of scientific history. Having read Eighth Day of Creation, and then some pale imitations, I thought that this had pretty much been covered completely. 
This book, however, takes a fresh perspective and it's worth reading. Matthew has a nice writing style, animating the dusty old main characters with insightful detail as he goes along. Check it out.

This blog is going well, with readership growing all the time. I have written about this progress previously (here and here). The most popular posts are those on publishing: preprints, impact factors and publication lag times, rather than my science, but that's OK. There is more to come on lag times in the New Year, stay tuned.

I am a fan of year-end lists as you may be able to tell. My album of the year is Battles – La Di Da Di which came out on Warp in September. An honourable mention goes to Air Formation – Were We Ever Here EP which I bought on iTunes since the 250 copies had long gone by the time I discovered it on AC30. Since I don't watch TV or go to the cinema, I don't have a pick of the year for that.

When it comes to pro-cycling, of course I have an opinion. My favourite stage race was the Critérium du Dauphiné Libéré, which was won by Chris Froome in a close contest with Tejay van Garderen. The best one-day race was a tough pick between E3 Harelbeke, won by Geraint Thomas, and Omloop Het Nieuwsblad, won by Ian Stannard. Although E3 was a hard man's race in tough conditions, I have to go for Stannard outfoxing three(!) Etixx Quick Step riders to take the win in Nieuwsblad. I'm a bit annoyed that those three picks all involve Team Sky and British riders…

I won't bore everyone with my own cycling (and running) exploits in 2015. Just to say that I've been more active this year than in any year since 2009.

I shouldn't need to tell you where the post title comes from. If you haven't heard Sgt. Pepper's Lonely Hearts Club Band by The Beatles, you need to rectify this urgently. The greatest album recorded on 4-track equipment, no question. 🙂

I read this article on the BBC recently about alcohol consumption in the UK.
In passing it mentions how many people in the UK are teetotal. I found the number reported – 21% – unbelievable so I checked out the source for the numbers. Sure enough, ~20% of the UK population are indeed teetotal (see plots). The breakdown by gender and age is perhaps to be expected. There are fewer teetotal men than women. Older women (65+) in particular are more likely to be teetotal. There has been a slight tendency in recent years for more abstinence across the board, although last year is an exception. The BBC article noted that young people are pushing up the numbers with high rates of sobriety. There are more interesting stats in the survey which you can check out and download. For example, London has the highest rate of teetotallers in the UK (32%). I thought this post would make a fun antidote in the run up to the holidays, which in the UK at least is strongly linked with alcohol consumption. The post title is taken from “Lemonade Secret Drinker” by Mansun, which featured on their first EP (One). It’s a play on “Secret Lemonade Drinker” the theme from R Whites Lemonade TV commercial in the 70s/80s (which I believe was written and sung by Elvis Costello’s father). Where You Come From: blog visitor stats It’s been a while since I did some navel-gazing about who reads this blog and where they come from. This week, quantixed is close to 25K views and there was a burst of people viewing an old post, which made me look again at the visitor statistics. Where do the readers of quantixed come from? Well, geographically they come from all around the world. The number of visitors from each country is probably related to: population of scientists and geographical spread of science people on Twitter (see below). USA is in the lead, followed by UK, Germany, Canada, France, Spain, Australia, etc. Where do they click from? This is pretty interesting. 
Most people come here from Twitter (45%), around 20% come via a search on Google (mainly looking for eLife’s Impact Factor) and another ~20% come from the blog Scholarly Kitchen(!). Around 3% come from Facebook, which is pretty neat since I don’t have a profile and presumably people are linking to quantixed on there. 1% come from people clicking links that have been emailed to them – I also value these hits a lot. I guess these links are sent to people who don’t do any social media, but somebody thought the recipient should read something on quantixed. I get a few hits from blogs and sites where we’ve linked to each other. The remainder are a long list of single clicks from a wide variety of pages. The traffic is telling me that quantixed doesn’t have “readers”. I think most people are one-time visitors, or at least occasional visitors. I do know which posts are popular: 1. Strange Things 2. Wrong Number 4. Publication lag times I and II 5. Violin plots 6. Principal Component Analysis Just like my papers, I’ve found it difficult to predict what will be interesting to lots of people. Posts that took a long time to prepare and were the most fun to think about, have received hardly any views. The PCA post is most surprising, because I thought no-one would be interested in that! I thoroughly enjoy writing quantixed and I really value the feedback that I get from people I talk to offline about it. I’m constantly amazed who has read something on here. The question that they always ask is “how do you find the time?”. And I always answer, “I don’t”. What I mean is I don’t really have the free time to write this blog. Between the lab, home life, sleep and cycling, there is no time for this at all. The analyses you see on here take only three hours or less. If anything looks tougher than this, I drop it. If draft posts aren’t interesting enough to get finished, they get canned. Writing the blog is a nice change from writing papers, grants and admin. 
So I don’t feel it detracts from work. One aim was to improve my programming through fun analyses; and I’ve definitely learnt a lot about that. The early posts on coding are pretty cringe-worthy. I also wanted to improve my writing which is still a bit dry and scientific… My favourite type of remark is when people tell me about something that they’ve read on here, not realising that I actually write this blog! Anyway, whoever you are, wherever you come from; I hope you enjoy quantixed. If you read something here and like it, please leave a comment, tweet a link or email it to a friend. The encouragement is appreciated. The post title is taken from “Where You Come From” by Pantera. This was a difficult one to pick, but this song had the most apt title, at least. Your Favorite Thing: Algorithmically Perfect Playlist I’ve previously written about analysing my iTunes library and about generating Smart Playlists in iTunes. This post takes things a bit further by generating a “perfect playlist” outside of iTunes… it is exclusively for nerds. How can you put together a perfect playlist? What are your favourite songs? How can you tell what they are? Well, we could look at how many times you’ve played each song in your iTunes library (assuming this is mainly how you consume your music)… but this can be misleading. Songs that have been in there since the start (in my case, a decade ago) have had longer to accumulate plays than songs that were added last year. This problem was covered nicely in a post by Mr Science Show. He suggests that your all-time greatest playlist can be modelled using $$\frac{dp}{dt}=\frac{A}{Bt+N_0} + Ce^{-Dt}$$ Where $$N_0$$ is the number of tracks in the library at $$t_0$$, time zero. A and B are constants and the collection growing linearly over time. 
The second component is an additional correction for the fact that songs added more recently are likely to have garnered more plays, and as they age, they relax back into the general soup of the library. I used something similar to make my perfect playlist. Calculating something like this is well beyond the scope of iTunes and so we need to do something more heavy duty. The steps below show how this can be achieved. Of course, I used IgorPro to do almost all of this. I tried to read in the iTunes Music Library.xml directly in Igor using the udStFiLrXML package, but couldn’t get it to work. So there’s a bit of ruby followed by an all-Igor workflow. You can scroll to the bottom to find out a) whether this was worth it and b) for other stuff I discovered along the way. All the code to do this is available here. I’ll try to put quantixed code on github from now on. Once the data is in Igor, the strategy is to calculate the expected number of plays a track should have received if iTunes was simply set to random. We can then compare this number to the actual number of plays. The ratio of these numbers helps us to discriminate our favourite tracks. To work out the expected plays, we calculate the number of tracks in the library over time and the inverse of this gives us the probability that a given track, at that moment in the lifetime of the library, will be played. We know the total number of plays and the lifetime of the library, so if we assume that play rate is constant over time (fair assumption), this means we can calculate the expected number of plays for each track. As noted above, there is a slight snag with this methodology, because tracks added in the last few months will have a very low number of expected plays, yet are likely to have been played quite a lot. To compensate for this I used the modelling method suggested by Mr Science Show, but only for recent songs. Hopefully that all makes sense, so now for a step-by-step guide. 
Step 1: Extract data from iTunes xml file to tsv After trying and failing to write my own script to parse the xml file, I stumbled across this on the web. #!/usr/bin/ruby require 'rubygems' require 'nokogiri' list = [] doc = Nokogiri::XML(File.open(ARGV[0], 'r')) doc.xpath('/plist/dict/dict/dict').each do |node| hash = {} last_key = nil node.children.each do |child| next if child.blank? if child.name == 'key' last_key = child.text else hash[last_key] = child.text end end list << hash end p list This script was saved as parsenoko.rb and could be executed from the command line find . -name "*.xml" -exec ruby parsenoko.rb {} > playlist.csv \; after cd to appropriate directory containing the script and a copy of the xml file. Step 2: A little bit of cleaning The file starts with [ and ends with ]. Each dictionary item (dict) has been printed enclosed by {}. It’s easiest to remove these before importing to IgorPro. For my library the maximum number of keys is 38. I added a line with (ColumnA<tab>ColumnB<tab>…<tab>ColumnAL), to make sure all keys were imported correctly. Step 3: Import into IgorPro Import the tsv. This is preferable to csv because many tracks have commas in the track title, album title or artist name. Everything comes in as text and we will sort everything out in the next step. LoadWave /N=Column/O/K=2/J/V={"\t"," \$",0,0} Step 4: Get Igor to sort the key values into waves corresponding to each key This is a major type of cleaning. What we’ll do is read the key and its value. The two are separated by => and so this is used to parse and resort the values. This will convert the numeric values to numeric waves. This is done by executing iTunes() Step 5: Convert timestamps to date values iTunes stores dates in a UTC timestamp with this format 2014-10-02T20:24:10Z. It does this for Date Added, Date Modified, Last Played etc. To do anything meaningful with these, we need to convert them to date values. 
IgorPro uses the time in seconds from Midnight on 1st Jan 1904 as a date system. This requires double precision FP64 waves. We can parse the string containing this time stamp and convert it using DateRead() Step 6: Discover your favourite tracks! We do all of this by running Predictor() The way this works is described above. Note that you can run whatever algorithm you like at this point to generate a list of tracks. Step 7: Make a playlist to feed back to iTunes The format for playlists is the M3U file. This has a simple layout which can easily be printed to a Notebook in Igor and then saved as a file for importing back into iTunes. To do this we run WritePlaylist(listlen) Where the Variable listlen is the length of the playlist. In this example, listlen=50 would give the Top 50 favourite tracks. So what did I find out? My top 50 songs determined by this method were quite different to the Smart Playlist in iTunes of the Most Played tracks. The tracks at the top of the Most Played list in iTunes have disappeared in the new list and these are the ones that have been in the library for a long time and I suppose I don’t listen to that much any more. The new algorithmically designed playlist has a bunch of fresher tracks that were added in the last few years and I have listened to quite a lot. Looking through I can see music that I should explore in more detail. In short, it’s a superior playlist and one that will always change and should not go stale. Other useful stuff There are quite a few parsing tools on the web that vary in their utility and usefulness. Some that deserve a mention are: • The xml file should be readable as a plist by cocoa which is native to OSX • Visualisation of what proportion of an iTunes library is by a given artist – bdunagan’s blog • itunes-parser on github by phiggins • Really nice XSLT to move the xml file to html – moveable-type • Comprehensive but difficult to follow method in ruby. 
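The expected-plays calculation of Step 6 can also be sketched outside Igor. Below is an illustrative Python version assuming a constant play rate and a linearly growing library (the recent-track correction term $$Ce^{-Dt}$$ is omitted, and all names and numbers here are invented for the example):

```python
import math

# Plays happen at a constant rate r = total_plays / t_now; at any moment
# the chance a random play lands on a given track is 1 / N(t), where the
# library grows linearly, N(t) = n0 + b*t. Integrating from the track's
# add date to now gives a closed form for its expected play count.
def expected_plays(t_added, t_now, total_plays, n0, b):
    r = total_plays / t_now
    if b == 0:  # library size constant
        return r * (t_now - t_added) / n0
    return (r / b) * math.log((n0 + b * t_now) / (n0 + b * t_added))

# The "favourite" score is then actual plays relative to expected plays.
def favourite_score(actual_plays, t_added, t_now, total_plays, n0, b):
    return actual_plays / expected_plays(t_added, t_now, total_plays, n0, b)

old = expected_plays(0, 3650, 20000, 1000, 2)     # added on day 0
new = expected_plays(1825, 3650, 20000, 1000, 2)  # added halfway through
print(old > new)  # True -- older tracks should have accumulated more plays
```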
The post title comes from “Your Favorite Thing” by Sugar from their LP “File Under: Easy Listening”.

I saw this great tweet (fairly) recently:

I thought this was such a great explanation of when to submit your paper. It reminded me of a diagram that I sketched out when talking to a student in my lab about a paper we were writing. I was trying to explain why we don’t exaggerate our findings. And conversely why we don’t undersell our results either. I replotted it below:

Getting out to review is a major hurdle to publishing a paper. Therefore, convincing the Editor that you have found out something amazing is the first task. This is counterbalanced by peer review, which scrutinises the claims made in a paper for their experimental support. So, exaggerated claims might get you over the first hurdle, but they will give you problems during peer review (and afterwards if the paper makes it to print). Conversely, underselling or not interpreting all your data fully is a different problem. It’s unlikely to impress the Editor as it can make your paper seem “too specialised”, although if it made it to the hands of your peers they would probably like it!

Obviously at either end of the spectrum no-one likes a dull/boring/incremental paper and everyone can smell a rat if the claims are completely overblown, e.g. genome sequence of Sasquatch. So this is why we try to interpret our results fully but are careful not to exaggerate our claims. It might not get us out to review every time, but at least we can sleep at night. I don’t know if this is a fair representation. Certainly depending on the journal the scale of the y-axis needs to change!

The post title is taken from “Middle of the Road” by Teenage Fanclub, a B-side from their single “I Don’t Want Control of You”.

Science songs

I thought I’d compile a list of songs related to biomedical science. These were all found in my iTunes library. I’ve missed off multiple entries for the same kind of thing, as indicated.
Neuroscience

• Grand Mal – Elliott Smith from XO Sessions
• She’s Lost Control – Joy Division from Unknown Pleasures (Epilepsy)
• Aneuryism – Nirvana from Hormoaning EP
• Serotonin – Mansun from Six
• Serotonin Smile – Ooberman from Shorley Wall EP
• Brain Damage – Pink Floyd from Dark Side of The Moon
• Paranoid Schizophrenic – The Bats from How Pop Can You Get?
• Headacher – Bear Quartet from Penny Century
• Headache – Frank Black from Teenager of the Year
• Manic Depression – Jimi Hendrix Experience and lots of other songs about depression
• Paranoid – Black Sabbath from Paranoid (thanks to Joaquin for the suggestion!)

Medical

• Cancer (interlude) – Mansun from Six
• Hepatic Tissue Fermentation – Carcass or pretty much any song in this genre of Death Metal
• Whiplash – Metallica from Kill ‘Em All
• Another Invented Disease – Manic Street Preachers from Generation Terrorists
• Broken Nose – Family from Bandstand
• Ana’s Song – Silverchair from Neon Ballroom (Anorexia Nervosa)
• 4st 7lb – Manic Street Preachers from The Holy Bible (Anorexia Nervosa)
• November Spawned A Monster – Morrissey from Bona Drag (disability)
• Castles Made of Sand – Jimi Hendrix Experience from Axis: Bold As Love (disability)
• Cardiac Arrest – Madness from 7
• Blue Veins – The Raconteurs from Broken Boy Soldiers
• Vein Melter – Herbie Hancock from Headhunters
• Scoliosis – Pond from Rock Collection (curvature of the spine)
• Taste the Blood – Mazzy Star… lots of songs with blood in the title.

Pharmaceutical

• Biotech is Godzilla – Sepultura from Chaos A.D.
• Luminol – Ryan Adams from Rock N Roll
• Feel Good Hit Of The Summer – Queens of The Stone Age from Rated R (prescription drugs of abuse)
• Stars That Play with Laughing Sam’s Dice – Jimi Hendrix Experience (and hundreds of other songs about recreational drugs)
• Tramazi Parti – Black Grape from It’s Great When You’re Straight…
• Z is for Zofirax – Wingtip Sloat from If Only For The Hatchery
• Goldfish and Paracetamol – Catatonia from International Velvet
• L Dopa – Big Black from Songs About Fucking

Genetics and molecular biology

• Genetic Reconstruction – Death from Spiritual Healing
• Genetic – Sonic Youth from 100%
• Hair and DNA – Hot Snakes from Audit in Progress
• DNA – Circle from Meronia
• Biological – Air from Talkie Walkie
• Gene by Gene – Blur from Think Tank
• My Selfish Gene – Catatonia from International Velvet
• Sheer Heart Attack – Queen (“it was the DNA that made me this way”)
• Mutantes – Os Mutantes
• The Missing Link – Napalm Death from Mentally Murdered E.P.
• Son of Mr. Green Genes – Frank Zappa from Hot Rats

Cell Biology

• Sweet Oddysee Of A Cancer Cell T’ Th’ Center Of Yer Heart – Mercury Rev from Yerself Is Steam
• Dead Embryonic Cells – Sepultura from Arise
• Cells – They Might Be Giants from Here Comes Science (songs for kids about science)
• White Blood Cells LP by The White Stripes
• Anything by The Membranes
• Soma – Smashing Pumpkins from Siamese Dream
• Golgi Apparatus – Phish from Junta
• Cell-scape LP by Melt Banana

Album covers with science images

Godflesh – Selfless. Scanning EM image of some cells growing on a microchip?
Circle – Meronia. Photograph of an ampoule?

Insane In The Brain

Back of the envelope calculations for this post. An old press release for a paper on endocytosis by Tom Kirchhausen contained this fascinating factoid:

The equivalent of the entire brain, or a football field of membrane, is turned over every hour

If this is true it is absolutely staggering. Let’s check it out.
A synaptic vesicle is ~40 nm in diameter. So the surface area of 1 vesicle is $$4 \pi r^2$$ which is 5026 nm², or 5.026 x 10⁻¹⁵ m². Now, an American football field is 5350 m² (including both endzones); this is the equivalent of 1.065 x 10¹⁸ synaptic vesicles. It is estimated that the human cortex has 60 trillion synapses. This means that each synapse would need to internalise 17742 vesicles to retrieve the area of membrane equivalent to one football field. The factoid says this takes one hour. This membrane load equates to each synapse turning over 296 vesicles in one minute, which is 4.93 vesicles per second. Tonic activity of neurons differs throughout the brain and actually 5 Hz doesn’t sound too high (feel free to correct me on this). We’ve only considered cortical neurons, so the factoid seems pretty plausible!

For an actual football field, i.e. Association Football, the calculation is slightly more complicated. This is because there is no set size for football pitches. In England, the largest is apparently Manchester City’s (7598 m²) while the smallest actually belongs to the greatest football team in the world, Crewe Alexandra (5518 m²). A brain would hoover up Man City’s ground in an hour if each synapse turned over 7 vesicles per second, while Gresty Road would only take 5 vesicles per second.

What is less clear from the factoid is whether a football field really equates to an “entire brain”. Bionumbers has no information on this. I think this part of the factoid may come from a different bit of data, which is that clathrin-mediated endocytosis in non-neuronal cells can internalise the equivalent of the entire surface area of the cell in about an hour. I wonder whether this has been translated to neurons for the purposes of the quote. Either way, it is an amazing factoid that the brain can turn over this huge amount of membrane in such a short space of time.

So there you have it: quanta quantified on quantixed.
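The arithmetic above is easy to reproduce. A quick Ruby version of the back-of-the-envelope sums, with the numbers taken straight from the post:

```ruby
# Back-of-the-envelope check of the membrane-turnover factoid.
r = 20e-9                          # vesicle radius in metres (40 nm diameter)
area = 4 * Math::PI * r**2         # surface area of one vesicle, ~5.03e-15 m^2
field = 5350.0                     # American football field in m^2
vesicles = field / area            # vesicles equivalent to one field
per_synapse = vesicles / 60e12     # spread across 60 trillion synapses
per_second = per_synapse / 3600.0  # turned over in one hour

puts per_second.round(2)           # ~4.93 vesicles per synapse per second
```

Swapping `field` for 7598 or 5518 reproduces the Man City and Crewe Alexandra figures in the same way.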
The post title is from “Insane In The Brain” by Cypress Hill from the album Black Sunday.

My Favorite Things

I realised recently that I’ve maintained a consistent iTunes library for ~10 years. For most of that time I’ve been listening exclusively to iTunes, rather than to music in other formats. So the library is a useful source of information about my tastes in music. It should be possible to look at who my favourite artists are, what bands need more investigation, or just to generate some interesting statistics based on my favourite music.

Play count is the central statistic here as it tells me how often I’ve listened to a certain track. It’s the equivalent of a +1/upvote/fave/like or maybe even a citation. Play count increases by one if you listen to a track all the way to the end. So if a track starts and you don’t want to hear it and you skip on to the next song, there’s no +1. There’s a caveat here in that the time a track has been in the library influences the play count to a certain extent – but that’s for another post*.

The second indicator for liking a track or artist is the fact that it’s in the library. This may sound obvious, but what I mean is that artists with lots of tracks in the library are more likely to be favourite artists compared to a band with just one or two tracks in there. A caveat here is that some artists do not have long careers for a variety of reasons, which can limit the number of tracks actually available to load into the library. Check the methods at the foot of the post if you want to do the same.

What’s the most popular year?

Firstly, I looked at the most popular year in the library. This question was the focus of an earlier post that found that 1971 was the best year in music. The play distribution per year can be plotted together with a summary of how many tracks and how many plays in total from each year are in the library.
There’s a bias towards 90s music, which probably reflects my age, but could also be caused by my habit of collecting CD singles, which peaked as a format in this decade. The average number of plays is actually pretty constant for all years (median of ~4); the mean is perhaps slightly higher for late-2000s music.

Favourite styles of music: I also looked at Genre. Which styles of music are my favourite? I plotted the total number of tracks versus the total number of plays for each Genre in the library. Size of the marker reflects the median number of plays per track for that genre. Most Genres obey a rule where total plays is a function of total tracks, but there are exceptions. Crossover, Hip-hop/Rap and Power-pop are highlighted as those with an above average number of plays. I’m not lacking in Power-pop with a few thousand tracks, but I should probably get my hands on more Crossover or Hip-Hop/Rap.

Using citation statistics to find my favourite artists: Next, I looked at who my favourite artists are. It could be argued that I should know who my favourite artists are! But tastes can change over a 10 year period and I was interested in an unbiased view of my favourite artists rather than who I think they are. A plot of Total Tracks vs Mean plays per track is reasonably informative. The artists with the highest plays per track are those with only one track in the library, e.g. Harvey Danger with Flagpole Sitta. So this statistic is pretty unreliable. Equally, I’ve got lots of tracks by Manic Street Preachers but evidently I don’t play them that often. I realised that the problem of identifying favourite artists based on these two pieces of information (plays and number of tracks) is pretty similar to assessing scientists using citation metrics (citations and number of papers). Hirsch proposed the h-index to meld these two bits of information into a single metric.
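For anyone wanting to replicate this without Igor, both metrics are straightforward to compute from a list of play counts. A minimal Ruby sketch (the function names are mine; the post's own calculation was done in IgorPro, with plays standing in for citations):

```ruby
# h-index: the largest h such that at least h tracks
# have been played h or more times each.
def h_index(plays)
  h = 0
  plays.sort.reverse.each_with_index { |count, i| h = i + 1 if count >= i + 1 }
  h
end

# g-index (also used later in the post): the largest g such that
# the top g tracks have at least g^2 plays between them.
def g_index(plays)
  total = 0
  g = 0
  plays.sort.reverse.each_with_index do |count, i|
    total += count
    g = i + 1 if total >= (i + 1)**2
  end
  g
end
```

For example, `h_index([10, 8, 5, 4, 3])` gives 4 (four tracks with at least 4 plays each), while `g_index([10, 8, 5, 4, 3])` gives 5, since all 30 plays together exceed 5².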
It’s easily computed and I already had an Igor procedure to calculate it en masse, so I ran it on the library information. Before doing this, I consolidated multiple versions of the same track into one. I knew that I had several versions of the same track, especially as I have multiple versions of some albums (e.g. Pet Sounds = 3 copies = mono + stereo + a capella); the top offending track was “Baby’s Coming Back” by Jellyfish, 11 copies! Anyway, these were consolidated before running the h-index calculation.

The top artist was Elliott Smith with an h-index of 32. This means he has 32 tracks that have been listened to at least 32 times each. I was amazed that Muse had the second highest h-index (I don’t consider myself a huge fan of their music) until I remembered a period where their albums were on an iPod Nano used during exercise. Amusingly (and narcissistically) my own music – the artist names are redacted – scored quite highly with two out of three bands in the top 100, which are shown here.

These artists with high h-indices are the most consistently played in the library and probably constitute my favourite artists, but is the ranking correct? The procedure also calculates the g-index for every artist. The g-index is similar to the h-index but takes into account very highly played tracks (very highly cited papers) over the h threshold. For example, The Smiths have h=26. This could be 26 tracks that have been listened to exactly 26 times each, or they could have been listened to 90 times each. The h-index cannot reveal this, but the g-index gets to this by assessing average plays for the ranked tracks. The Smiths have g=35. To find the artists that are most-played-of-the-consistently-most-played, I subtracted h from g and plotted the Top 50. This ranked list, I think, most closely represents my favourite artists, according to my listening habits over the last ten years.

Track length: Finally, I looked at the track length.
I have a range of track lengths in the library, from “You Suffer” by Napalm Death (iTunes has this at 4 s, but Wikipedia says it is 1.36 s), through to epic tracks like “Blue Room” by The Orb. Most tracks are in the 3-4 min range. Plays per track indicates that this track length is optimal with most of the highly played tracks being within this window. The super-long tracks are rarely listened to, probably because of their length. Short tracks also have higher than average plays, probably because they are less likely to be skipped, due to their length. These were the first things that sprang to mind for iTunes analysis. As I said at the top, there’s lots of information in the library to dig through, but I think this is enough for one post. And not a pie-chart in sight! Methods: the library is in xml format and can be read/parsed this way. More easily, you can just select the whole library and copy-paste it into TextEdit and then load this into a data analysis package. In this case, IgorPro (as always). Make sure that the interesting fields are shown in the full library view (Music>Songs). To do everything in this post you need artist, track, album, genre, length, year and play count. At the time of writing, I had 21326 tracks in the library. For the “H-index” analysis, I consolidated multiple versions of the same track, giving 18684 tracks. This is possible by concatenating artist and the first ten characters of the track title (separated by a unique character) and adding the play counts for these concatenated versions. The artist could then be deconvolved (using the unique character) and used for the H-calculation. It’s not very elegant, but seemed to work well. The H-index and G-index calculations were automated (previously sort-of-described here), as was most of the plot generation. The inspiration for the colour coding is from the 2013 Feltron Report. * there’s an interesting post here about modelling the ideal playlist. 
I worked through the ideas in that post but found that it doesn’t scale well to large libraries, especially if they’ve been going for a long time, i.e. mine.

The post title is taken from John Coltrane’s cover version of My Favorite Things from the album of the same name. Excuse the US English spelling.

Belly Button Window

A bit of navel gazing for this post. Since moving the blog to wordpress.com in the summer, it has recently accrued 5000 views. Time to analyse what people are reading…

The most popular post on the blog (by a long way) is “Strange Things“, a post about the eLife impact factor (2824 views). The next most popular is a post about a Twitter H-index, with 498 views. The Strange Things post has accounted for ~50% of views since it went live (bottom plot) and this fraction seems to be creeping up. More new content is needed to change this situation.

I enjoy putting blog posts together and love the discussion that follows from my posts. It’s also been nice when people have told me that they read my blog and enjoy my posts. One thing I didn’t expect was the way that people can take away very different messages from the same post. I don’t know why I found this surprising, since this often happens with our scientific papers! Actually, in the same way as our papers, the most popular posts are not the ones that I would say are the best.

Wet Wet Wet: I have thought about deleting the Strange Things post, since it isn’t really what I want this blog to be about. An analogy here is the Scottish pop-soul outfit Wet Wet Wet, who released a dreadful cover of The Troggs’ “Love is All Around” in 1994. In the end, the band deleted the single in the hope of redemption, or so they said. Given that the song had been at number one for 15 weeks, the damage was already done. I think the same applies here, so the post will stay.

Directing Traffic: Most people coming to the blog are clicking on links on Twitter. A smaller number come via other blogs which feature links to my posts.
A very small number come to the blog via a Google search. Google has changed the way it formats the clicks and so most of the time it is not possible to know what people were searching for. For those that I can see, the only search term is… yes, you’ve guessed it: “elife impact factor”.

Methods: WordPress stats are available for blog owners via URL formatting. All you need is your API key and (obviously) your blog address. Instructions are found at http://stats.wordpress.com/csv.php

A basic URL format would be:

http://stats.wordpress.com/csv.php?api_key=yourapikey&blog_uri=yourblogaddress

replacing yourapikey with your API key (this can be retrieved at https://apikey.wordpress.com) and yourblogaddress with your blog address e.g. quantixed.wordpress.com

Various options are available from the first page to get the stats in which you are interested. For example, the following can be appended to the second URL to get a breakdown of views by post title for the past year:

&table=postviews&days=365&limit=-1

The format can be csv, json or xml depending on your preference for what you want to do next with the information.

The title is from “Belly Button Window” by Jimi Hendrix, a posthumous release on the Cry of Love LP.

Tips from the Blog II

An IgorPro tip this week. The default font for plots is Geneva. Most of our figures are assembled using Helvetica for labelling. The default font can be changed in Igor Graph Preferences, but Preferences need to be switched on in order to be implemented. Anyway, I always seem to end up with a mix of Geneva plots and Helvetica plots. This can be annoying as the fonts are pretty similar yet the spacing is different and this can affect the plot size. Here is a quick procedure Helvetica4All() to rectify this for all graph windows.
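The Helvetica4All() listing itself hasn't survived in this extract. As a rough sketch of the shape such a procedure could take (my reconstruction, not the original code), an Igor function along these lines would loop over every graph window and switch its font:

```igor
Function Helvetica4All()
	String list = WinList("*", ";", "WIN:1")	// WIN:1 restricts the list to graph windows
	String win
	Variable i
	for(i = 0; i < ItemsInList(list); i += 1)
		win = StringFromList(i, list)
		ModifyGraph/W=$win font="Helvetica"	// retarget each graph without bringing it to the front
	endfor
End
```

The /W flag on ModifyGraph lets the change be applied to each window in turn while the experiment's window order is left alone.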
# KdV hierarchy via Abelian coverings and operator identities

@article{Eichinger2019KdVHV,
  title={KdV hierarchy via Abelian coverings and operator identities},
  author={Benjamin Eichinger and Tom Vandenboom and Peter Yuditskii},
  journal={Transactions of the American Mathematical Society, Series B},
  year={2019}
}

• Published 31 January 2018 • Mathematics, Physics • Transactions of the American Mathematical Society, Series B

We establish precise spectral criteria for potential functions $V$ of reflectionless Schrödinger operators $L_V = -\partial_x^2 + V$ to admit solutions to the Korteweg–de Vries (KdV) hierarchy with $V$ as an initial value. More generally, our methods extend the classical study of algebro-geometric solutions for the KdV hierarchy to noncompact Riemann surfaces by defining generalized Abelian integrals and analogues of the Baker…

11 Citations

#### Figures from this paper

Stahl–Totik regularity for continuum Schrödinger operators • 2020
We develop a theory of regularity for continuum Schrödinger operators based on the Martin compactification of the complement of the essential spectrum. This theory is inspired by Stahl–Totik…

Local Existence and Uniqueness of Spatially Quasi-Periodic Solutions to the Generalized KdV Equation
In this paper, we study the existence and uniqueness of spatially quasi-periodic solutions to the generalized KdV equation (gKdV for short) on the real line with quasi-periodic initial data whose…

Stahl–Totik regularity for continuum Schrödinger operators • Mathematics, Physics • 2020
We develop a theory of regularity for continuum Schrödinger operators based on the Martin compactification of the complement of the essential spectrum.
This theory is inspired by Stahl–Totik…

Absence of Absolutely Continuous Spectrum for Generic Quasi-Periodic Schrödinger Operators on the Real Line • Mathematics, Physics • 2019
We show that a generic quasi-periodic Schrödinger operator in $L^2(\mathbb{R})$ has purely singular spectrum. That is, for any minimal translation flow on a finite-dimensional torus, there is a…

Construction of KdV flow - a unified approach
A KdV flow is constructed on a space whose structure is described in terms of the spectrum of the underlying Schrödinger operators. The space includes the conventional decaying functions and ergodic…

Ergodic Schrödinger operators and its related topics • 2019
The first half of this lecture provides a basic knowledge of ergodic one-dimensional Schrödinger operators, which will be needed for the study of the KdV equation starting from ergodic initial…

Invariance of white noise for KdV on the line • Mathematics • 2019
We consider the Korteweg–de Vries equation with white noise initial data, posed on the whole real line, and prove the almost sure existence of solutions. Moreover, we show that the solutions obey…

The Deift Conjecture: A Program to Construct a Counterexample • Physics, Mathematics • 2021
We describe a program to construct a counterexample to the Deift conjecture, that is, an almost periodic function whose evolution under the KdV equation is not almost periodic in time. The approach…

Direct Cauchy theorem and Fourier integral in Widom domains
We derive the Fourier integral associated with the complex Martin function in the Denjoy domain of the Widom type with the Direct Cauchy Theorem (DCT).
As an application we study canonical systems…

A surprising connection between quantum mechanics and shallow water waves
Quantum mechanics grew out of attempts to understand the structure of microscopic systems (like atoms and molecules) and to explain observed phenomena in electromagnetism (such as blackbody…

#### References

SHOWING 1-10 OF 63 REFERENCES

Construction of KdV flow I. Tau-Function via Weyl Function • S. Kotani • Mathematics, Physics • Zurnal matematiceskoj fiziki, analiza, geometrii • 2018
Sato introduced the tau-function to describe solutions to a wide class of completely integrable differential equations. Later Segal-Wilson represented it in terms of the relevant integral operators…

The absolutely continuous spectrum of Jacobi matrices
I explore some consequences of a groundbreaking result of Breimesser and Pearson on the absolutely continuous spectrum of one-dimensional Schrödinger operators. These include an Oracle Theorem that…

Almost periodicity in time of solutions of the KdV equation • Mathematics, Physics • Duke Mathematical Journal • 2018
We study the Cauchy problem for the KdV equation $\partial_t u - 6 u \partial_x u + \partial_x^3 u = 0$ with almost periodic initial data $u(x,0)=V(x)$. We consider initial data $V$, for which the…

Soliton Equations and their Algebro-Geometric Solutions • Mathematics, Physics • 2003
As a partner to Volume 1: Dimensional Continuous Models, this monograph provides a self-contained introduction to algebro-geometric solutions of completely integrable, nonlinear, partial…

On the Direct Cauchy Theorem in Widom Domains: Positive and Negative Examples • Peter Yuditskii
We discuss several questions which remained open in our joint work with M.
Sodin “Almost periodic Jacobi matrices with homogeneous spectrum, infinite-dimensional Jacobi inversion, and Hardy spaces of…

Counterexamples to the Kotani-Last Conjecture for Continuum Schrödinger Operators via Character-Automorphic Hardy Spaces • Mathematics, Physics • 2014
The Kotani-Last conjecture states that every ergodic operator in one space dimension with non-empty absolutely continuous spectrum must have almost periodic coefficients. This statement makes sense…

Almost periodic Jacobi matrices with homogeneous spectrum, infinite dimensional Jacobi inversion, and hardy spaces of character-automorphic functions • Mathematics • 1997
All three subjects reflected in the title are closely intertwined in the paper. Let J_E be a class of Jacobi matrices acting in l^2(ℤ) with a homogeneous spectrum E (see Definition 3.2) and with diagonal…

Almost periodic Sturm-Liouville operators with Cantor homogeneous spectrum • Mathematics • 1995
Being based on an infinite-dimensional real version of the Jacobi inversion problem [24], we establish the direct generalization of the well-known properties of finite-band Sturm-Liouville operators…

Integrals of Nonlinear Equations of Evolution and Solitary Waves
In Section 1 we present a general principle for associating nonlinear equations of evolution with linear operators so that the eigenvalues of the linear operator are integrals of the nonlinear equation. A…

Kotani–Last problem and Hardy spaces on surfaces of Widom type • Mathematics, Physics • 2012
This is a small theory of non almost periodic ergodic families of Jacobi matrices with purely (however) absolutely continuous spectrum. The reason why this effect may happen is that under our…
# User talk:Javascap/Archive 2

## Sorry to bother you...

Can I ask you to rename me "Alexa" and leave off the "J"? (P.S.) The "new messages" on your talk page logged me out, second, how did you get the font on this page? AlexaJ 23:23, 18 September 2008 (EDT)

AlexaJ, soon to be Alexa, I can indeed rename you. In answer to your P.S, the "new messages" box is a prank that I set up. Gotcha =). Second, to get this text, I used <font color="black" face="Monotype Corsiva" size = "3"> Just wait while I rename you. JÁνąŞ₡Ωp Talk to ME BAN SOMEONE! MOWSE!

JASLEKSA, then? OK... ħuman 00:03, 19 September 2008 (EDT)

## Not cool

Why did you delete my user page? http://rationalwiki.com/wiki/index.php?title=Special%3ALog&type=&user=&page=User%3ABeastmasterGeneral BeastmasterGeneral 09:03, 19 September 2008 (EDT)

Hm? Oh crap O_O. Well, I suppose the short story would be this: I was on a mop and dust spree cleaning up RW, and I suppose I accidentally wiped your page. It did say in the edit summary "Someone delete this page please" so... I followed through. Javasca₧ and his Sword of Wiki-Editing!

## OOB

Okay, lets talk, whats the evidence? I am intrigued because you brought it up in terms of a cognitive psych class. tmtoulouse 14:50, 22 September 2008 (EDT)

## winning a debate with a creationist

I moved your winning a debate with a creationist thingy to the essay namespace. Could you delete the redirect when you're ready? Cheers. --JeevesMkII 10:31, 24 September 2008 (EDT)

Will do, just adding a few touches before I remove it. Javasca₧ and his Sword of Wiki-Editing!

## Announcement

Allow me to be the first to congratulate you on your announcement. Welcome to the Dark Side, oh newly deconverted one! CSimpacted with knowledge 22:03, 25 September 2008 (EDT)

I would think that, naturally, in the complete absence of a "light side" to compare to the "dark side" it would be more of a "semi-gray side"? Javasca₧ and his Sword of Wiki-Editing!
I struggled for many years with how to present my beliefs to the outside world, particularly family. I started by slowly introducing the concept of skepticism into my conversations with them, sometimes not about religion at all. Once they were used to thinking about my general skepticism towards the paranormal, mysticism and general spirituality, the revelation of atheism wasn't so harsh for them to take. Beyond my family the whole "outside" world presents a conundrum. The easiest thing to do to get along is just not talk about it at all, even if pressed. If really pressed I used to only admit to a form of agnosticism. It was just easier than the debate or whatever you want to call it that ensues in day to day encounters. I enjoy structured debate on my own time. Hence the internet, it is great. But I really do not feel like getting into a debate with my barber or some guy at a bar. Within the last 1-2 years though I have started to shift from that perspective. It is more about wanting to stand up and demonstrate to the world that atheism is far more common than they are led to believe. I am much more open about it, and more willing to engage in conversation about it. It works great in Canada, back home though in the States it has been a mixed result. --Tmtoulouse 23:38, 25 September 2008 (EDT)

Thanks for the advice, Tmtoulouse. If I may point out, I generally don't debate my barber, as such a debate can leave you in a somewhat compromised position... Javasca₧ and his Sword of Wiki-Editing!

Hm. Interesting, Tmtoulouse. I'm kind of surprised you chose to share this story so out of the blue. I would've liked to have heard it earlier; I enjoy hearing such things. Don't take that the wrong way. Radioactive afikomen Please ignore all my awful pre-2014 comments. 23:57, 25 September 2008 (EDT)

## Powerz?

I can haz powerz back now plz? DickTurpis 22:44, 25 September 2008 (EDT)

I was about to give them back to you when I finished banhammering everyone...
but you already went to Kels, so... meh. Javasca₧ and his Sword of Wiki-Editing!

Yes Kels is better person than U. And iz grrl too. DickTurpis 23:25, 25 September 2008 (EDT)

And haz no weird fonts and floating boxen. DickTurpis 23:25, 25 September 2008 (EDT)

Would you like me to create a floating box that follows you everywhere? Javasca₧ and his Sword of Wiki-Editing!

I says "Thanx but no thanx on that box to nowhere." DickTurpis 23:28, 25 September 2008 (EDT)

Hehe he. Fair enough. By the way, your complaint about the font has been observed, and is now a vivid colour of Green. Enjoy =) Javasca₧ and his Sword of Wiki-Editing!

## Intestinal blockages

You do realize I already blocked all sysops in alphabetical (and ascending numerical, by minutes) order a few months ago, yes? Yer stealin' my bit! --Kels 23:04, 25 September 2008 (EDT)

Well, I didn't know you blocked everyone... but as you did, I guess it was overdue for another sysop to remove everyone ;) Javasca₧ and his Sword of Wiki-Editing!

## Deconversion

Do your Real Life friends and family know yet? Radioactive afikomen Please ignore all my awful pre-2014 comments. 23:31, 25 September 2008 (EDT)

Nope, none of them yet. I am planning on telling them around Thanksgiving, but I don't think they will take it lightly. My closest friend does know, as does my roommate, but aside from that, nobody else. Javasca₧ and his Sword of Wiki-Editing!

## Random words

[1] You know, we also have a template:location to say things like "about". Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:58, 28 September 2008 (EDT)

And I am tickled pink that you like my random word templates. : ) Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:58, 28 September 2008 (EDT)

And I am glad to see you like the userbox I made with the templates you made =). I did take a look at the location template, and I honestly cannot think of a way to squeeze it in without the userbox becoming shoddily long.
(While it is random, every now and then I do get a string of words that extends the userbox quite a bit.) I am thinking of making "User Insane2", which would have "location". Javasca₧ and his Sword of Wiki-Editing!

## Moar deconversion

You said on my talk page, "Well, Human, I have just joined the ranks of atheists... now how do I manage this thing? Any hints or words of advice from an experienced atheist?" In reality you're on your own. We don't have a dogma, or rules, we only have in common that we don't believe in any gods. How we move beyond the time when we make that known to ourselves is pretty much a personal process. For me, I still enjoy stuff that can't be explained - but I don't think I have an explanation for it. Does that make any sense? Like, let's say for the silly, I hear rocks or trees talking to me. To me, that is personal, and not part of "how the world works". In other words, it's my own trip, and there is no god or gods behind it, it's just a "wow" experience. I don't try to explain it in supernatural ways, mostly I think it's just part of being a chunk of meat. Do I worry about "death" or other things like that? No, I just hope that my life is worth living, and that I enjoy of it what I can. Then the worms eat me ;) Any more questions I will gladly mislead you with answers for. ħuman 02:21, 28 September 2008 (EDT)

I know the flashing mowse is cool and all, but it irritates the hell out of me on talk pages where you have commented several times. No complaint about your comments, but that fucking flashing mowse gets on my last fucking nerve. No offense intended with my flashy language ;) ħuman 23:57, 28 September 2008 (EDT)

Dear Human, as per your suggestion, I have gathered that you do not see enough of the mowse to have learned to love him. Hence, he is taking a prominent space on my user talk page. Look at him go!!! Javasca₧ and his Sword of Wiki-Editing!

## Town Crier?

What's that? PFoster 22:55, 30 September 2008 (EDT)

That...
is an excellent question. And it deserves an excellent answer I am either unable or unwilling to provide at this time. Javasca₧ and his Sword of Wiki-Editing! (P.S, I don't know what a town crier is either) Ha Ha. Very Funny. Vandal. I'll Vandalize You. PFoster 23:06, 30 September 2008 (EDT) ## Deletion Could you tell me why you deleted my articles? They did not exist so I created them. Thanks.--Herbert the Hamster 17:48, 3 December 2008 (EST) I deleted the articles as they had little to do with our mission statement in this wiki. We are not wikipedia, so for the most part, we stick to science and politics. You can find more information in the Newcomers Guide. Hope this helps. Javasca₧ and his Sword of Wiki-Editing! ## Herbert the Hamster Java: you've upset a harmless little Hampster without telling him why.and marmite 02:32, 4 December 2008 (EST) Hello Javascap. Thank you for pointing out the guide, it explains many things. It would have been nice if somebody had pointed it out before removing my articles. I'm still not sure why you used the vandal brake on me though. I'm not looking for a fight, I'd just like to understand the rules. Could you also explain why you (or somebody) deleted, for example, "Chlorophyll"? It was only a stub, but I would have expanded it if you'd asked. I see there is an article "carbon".--Herbert the Hamster 03:30, 4 December 2008 (EST) I do deeply apologise for my mistake in vandal breaking you (probably had something to do with my remarkable lack of sleep that day), so I jumped the gun on the vandal break. As for the Chlorophyll, I do not know if I deleted that... let me check that out, be right back. Javasca₧ and his Sword of Wiki-Editing! ## Block I missed it! Off brewing coffee - wanna do it again so I can be all annoyed? and marmite 10:39, 6 December 2008 (EST) ## I'm going to kill you, sorry Nothing personal, but I swore I'd kill the next person to rick roll me in any way, shape, or form. 
Pinto's5150 Talk 01:35, 17 December 2008 (EST) I'm pleased you're impressed. On the 8th of July last year Radioactive afikomen was less impressed. I don't want unpleasantness that I feel happens here to follow me. I don't suppose you can give any guarantee individually or collectively that it won't. If you can give a guarantee I would be happy for a link but I fear that's impossible. It's not personal. I just don't know who I can trust here and who I can't. Proxima Centauri 00:20, 25 January 2009 (EST) I'll be back later. It's still night time here. Proxima Centauri 00:37, 25 January 2009 (EST) Wow, you freak out way too much, PC. It's the internet. No one can guarantee you a comfy pillow. You rolls your dice, you takes your chances. I notice you have not taken my advice regarding the use of the "comma" yet. ħuman 00:58, 25 January 2009 (EST) I had actually completely forgotten about atheism wiki. Rest assured, I have no interest in continuing any activities there. My attention is split between three wikis already, and I haven't the slightest inclination to participate in a fourth. Radioactive afikomen Please ignore all my awful pre-2014 comments. 01:28, 25 January 2009 (EST) ## thanks! :-) Wow, thanks! I have new tabs! what fun.. although I probably won't be doing anything with them :p but it's still nice of you. :-) PS Did you know you have a flashing mouse in the middle of your page? lol. Refugeetalk page 18:01, 6 February 2009 (EST) ## Wolf Cheetan Bring back the pic!DSFARGEG 20:52, 7 February 2009 (EST) ## Sig I've sorted your sig so that if you put "User:Javascap/sig}}" into the "nickname" field on your preferences it will leave "{{User:Javascap/sig0}}" in the source along with a timestamp. Wisest bastard Hoover! 12:44, 9 February 2009 (EST) Answering my own question. Alas! I feel like a fool >_>. ĴάΛäšςǍ₰ edits from 1.231.42.12 PSSST, your sig is bleeding! ĴάΛäšςǍ₰ edits from 1.231.42.12 Hehehe. Wisest bastard Hoover! 
12:47, 9 February 2009 (EST) I used </small> to kill the bleeding on this page... Muhahaha. I sincerely recommend you look into whatever problem is being had (although it might be the Monotype Corsivo on my page...) JÁνąŞ₡Ωp الحديث لي BUNNY! Never mind, you seem to have fixed it... ĴάΛäšςǍ₰ edits from 1.231.42.12 I added "</font>" to my sig to screw up the font. Wisest bastard Hoover! 12:51, 9 February 2009 (EST) For that, I am going to give you a light kick, laugh lightly, then move on. ĴάΛäšςǍ₰ edits from 1.231.42.12 ## Really Must you and CUR keep defiling others' user pages? Pinto's5150 Talk 20:07, 9 February 2009 (EST) Yes, yes we must. It is our quest. We have a quest! Don't interfere in our quest! Sir Javascap, I am being attacked by the electrocutioner! Help! We must rush to my defense! Then I'll rush to yours once he comes over to your talk page. --"CURtalk 20:20, 9 February 2009 (EST) Well, it was fun attacking other people's castles, the joke is up. It was fun, fin'. ĴάΛäšςǍ₰ edits from 1.231.42.12 Beans, has it really been so long since the good old days? As long as they stick to each other it's all good fun, right? ħuman 21:54, 9 February 2009 (EST) If it was just their pages I wouldn't care, but they were creating userpages for random users, that's what annoyed me. Pinto's5150 Talk 22:41, 9 February 2009 (EST) And I created pages only for people who are sysops, and if they wish to, in a few mouse clicks, they will be gone. I will just go into the history, create a few sub pages, and it will still be recorded, no hard feelings, and life moves on ĴάΛäšςǍ₰ edits from 1.231.42.12 Yeah, Beans, that's rude. Play by the lack of rules, ye noobs! ħuman 23:55, 9 February 2009 (EST) ## Just a non-sequitur... Not a personal attack. No offense, JS JC JSC HOW THE HELL DO YOU ABREVIATE "JAVASCAP"? The electrocutioner 20:21, 9 February 2009 (EST) In all honesty, I have never thought of what my abbreviated name would be. For shorts, I just go with 'Scap. 
It works ĴάΛäšςǍ₰ edits from 1.231.42.12 "Small abode"? Everybody else has a castle and I've got a "small abode"? I am semi-pseudo-offended. The electrocutioner 22:22, 9 February 2009 (EST) Contents of your abode prior to my assault... " . " ĴάΛäšςǍ₰ edits from 1.231.42.12 (EC) Contents of the "castles" prior to your "assault": " ". I rest my case. The electrocutioner 22:35, 9 February 2009 (EST) Um... that was a very dramatic... interesting writing in my castle... I kinda liked it being red, but... really, WTF? Aimless Blaster 22:31, 9 February 2009 (EST) ## Order of the Evil Red Link Sir Javascap! I need thy help! I am combating the Order of the Evil Red Link! We must make castles! We must defend the good land of RationalWiki! I mount my rusty-spotted cat, you your goat, and we must battle the evil red link dragon! --"CURtalk 18:49, 10 February 2009 (EST) Gad that was fun. We need to do that again sometime- but with everybody else. --"CURtalk 18:37, 11 February 2009 (EST) ## Plagiarism! Wisest bastard Hoover! 16:26, 11 February 2009 (EST) I say thanks but no thanks on the plagerism accusation to nowhere ĴάΛäšςǍ₰ edits from 1.231.42.12 I demand a public apology! Wisest bastard Hoover! 16:29, 11 February 2009 (EST) Thank you for providing me with the complete inspiration for that particular sysop message. ĴάΛäšςǍ₰ edits from 1.231.42.12 ## 'Crats Could you ease up on the 'crats a bit? We rarely hand these out. Gauss was fine but Nate River's was a complete Wiki Noob when he came and he has been here less than 3 months. If we make everyone a 'crat it could do a lot more damage than sysops. - User ${\displaystyle \pi }$ 23:44, 11 February 2009 (EST) Fair enough, makes sense. I had no other intentions for any further demotions, just those two. I do realise River was a wiki noob, but I think he is ready for Cratship. And I do understand that 'crats can do more damage. I guess "don't worry, you have a good point" is the default response here. 
ĴάΛäšςǍ₰ edits from 1.231.42.12 No worries. - User ${\displaystyle \pi }$ 23:48, 11 February 2009 (EST) We really need to sort of kinda figure out a general guideline for cratting people. Or, the next step could just be that we crat everyone... who knows, might work? ħuman 00:01, 12 February 2009 (EST) I am kind of worried about people promoting each other in anger. That can cause a few more hurt feelings than a block. I think a time limit would be good. - User ${\displaystyle \pi }$ 00:04, 12 February 2009 (EST) Yeah, I am thinking that if we did establish rules on 'Crats, it should follow along the lines of this - Hangs around enough to know the community, has been here for a while, maybe 6 months (I know River was only here for 3 months, and it seems hypocritical... meh), Have to have contributed to articles, provided suggestions, and been involved with policies. They also have to not be too particularly emotional, to prevent revenge promotions. Furthermore, I think a good 'crat should be able to tell between a vandal and a new user (A mistake that I made in the past, and felt awfully guilty about Herbert the Hamster... :(. ĴάΛäšςǍ₰ edits from 1.231.42.12 ## Thanks Haha, thanks for the warm welcome XD By the way, if its not too much trouble, mind reading my user page please? Whenever you have time is fine ^^ — Unsigned, by: Namelessphoenix / talk / contribs I am currently in the process of digesting and thinking about what you have written. It looks like a valid argument process, but I do like looking at things closely. ps, don't forget to sign your post with ~~~~. ĴάΛäšςǍ₰ edits from 1.231.42.12 Oh, mkay, Heheh XD Thanks, and I'll remember to do that in the future XD Namelessphoenix 23:18, 13 February 2009 (EST) ## Embarassing Conservatives Your latest addition doesn't seem to make any sense. It seems to assume that the readers of that article will be American Conservatives, which they mostly won't be. Besides that, personally I'd say it's way too snarky. 
Do we really want another article with a remark that fundamentalists are stupid? I don't think anyone is really waiting for that. --GTac 12:10, 16 February 2009 (EST) It's nicer now, but I still don't understand why it's aimed at the reader. --GTac 12:32, 16 February 2009 (EST) ## Sig template I know it was kind of "POINTy" of me (and funny!) to edit the template, but it is in the general template space, and it shouldn't be. You should move it to your user space where such things belong... ħuman 18:06, 18 February 2009 (EST) And will be deleted soon... By the way, using the "verb" template with suffixes like "ing" works very badly. ħuman 00:03, 25 February 2009 (EST) ## HURM I'd love to answer that, but I'd rather see CUR accuse every new user of being me...DSFARGEG 08:46, 19 February 2009 (EST) ## Sig, lesson 2 When you sign as an IP, type 5 tildes to leave a timestamp after manually inserting your sig. Wisest bastard Hoover! 16:23, 24 February 2009 (EST) ## Sig, lesson n+1 Open your preferences and put this into the nickname box: {{SUBST:User:Javascap/Sig0}} You can then sign with four tildes without leaving a bunch of code: Javasca₧ and his Sword of Wiki-Editing! -- Nx talk 11:15, 28 February 2009 (EST) Here goes, hope it works. Javasca₧ and his Sword of Wiki-Editing! 11:19, 28 February 2009 (EST) Well golly damn and I'll be! Javasca₧ and his Sword of Wiki-Editing! See, that didn't hurt. You should also nuke this page, it does the same thing as the Sig0 one, and its name is confusing -- Nx talk 11:29, 28 February 2009 (EST) ## On demotions. Takes the mop and broom firmly in hand:: Thank you Javascap. I vow to keep the environment clean, so that well ordered reasoned thoughts may abide. Also, regarding your user page. You bastard. I will get you back someday! :-) ʇɐqɯoʍıuɯO Wombats come from Austrailia, see?Leave a message at the beep. 16:32, 1 March 2009 (EST) ## Profuse thanks doubleplusthanks for the cratship. It will come in most useful. 
The EmperorKneel before Zod! 15:00, 2 March 2009 (EST) You are very welcome. Take care of that soap! Javasca₧ and his Sword of Wiki-Editing! HELP! RA is trying to taking away my 'cratship! PLEASE! I must have POWER!The EmperorKneel before Zod! 15:46, 2 March 2009 (EST) ## Datestamp Could you please remember to add the datestamp when signing? It gets rather confusing to see apparently timeless comments on talk pages. Radioactive afikomen Please ignore all my awful pre-2014 comments. 15:44, 2 March 2009 (EST) ## You're still a bot Not much of a flurry of edits there. - User ${\displaystyle \pi }$ 01:39, 18 March 2009 (EDT) Yeah, I thought there were more webcomics banging around on this wiki, but in reality, I could only find four so... meh. Javasca₧ and his Sword of Wiki-Editing! 09:17, 18 March 2009 (EDT) That's because we cared about getting permission. ħuman 23:49, 20 March 2009 (EDT) ## win I've knocked your pens over. Will you consider getting an OCA FOR YOUR OWN SAKE now? Mei 18:20, 18 March 2009 (EDT) Problem is both pens are still there, right where they were. And no, I am not interested in a cult, thank ye very muuuuch :) Javasca₧ and his Sword of Wiki-Editing! 18:22, 18 March 2009 (EDT) Yeah yeah we've all heard the stories - but Scientology is not a cult. Seriously you should read Dianetics some time. Mei 18:49, 18 March 2009 (EDT) ## Oh.... Boop. ħuman 23:51, 20 March 2009 (EDT) Huh? Javasca₧ and his Sword of Wiki-Editing! 12:04, 21 March 2009 (EDT) ## Vandal bin I don't think it's working, actually. At least, the vandal bin that sysops can access at RationalWiki:Vandal isn't working, since it says I'm Flying has been in there since March 11. I think all that works is changing the group membership, which on bureaucrats can do. --Arcan ¡ollǝɥ 01:10, 28 March 2009 (EDT) I'll try that? ħuman 01:36, 28 March 2009 (EDT) I think the apostrophe is exploiting a bug. Trent!!! ħuman 01:37, 28 March 2009 (EDT) No, it was just the capital F. Thanks a lot. 
Anyway, they're already in the "vandal" usergroup as well. ħuman 01:38, 28 March 2009 (EDT) Well, whatever it was, I'm flying was not in the bin, a problem I fixed. I am going to mod my test account and see if the sysop-accessable bin recognises the change... Javasca₧ and his Sword of Wiki-Editing! 10:20, 28 March 2009 (EDT) That is very odd, the Sysop bin says I'm flying has been binned since March 11th, but my Userright extension showed he was not in the bin last night, which I rectified... how very interesting... how very interesting... Javasca₧ and his Sword of Wiki-Editing! 10:26, 28 March 2009 (EDT) The Vandal usergroup doesn't do anything as far as I know, and it's definitely not the same as the vandal bin (Special:Userrights is a core part of MW btw). The bin stores users in a separate table, and I'm flying was in there, but there was a problem with the code that calculated the time elapsed from a user's last edit. I replaced that with code from Vandal brake 2.0, and it looks like it's working now. --  Nx/talk  10:51, 28 March 2009 (EDT)
# Using the Mean Value Theorem, prove this inequality. Using the Mean Value Theorem, prove that $|\cos^2(b)-\cos^2(a)|\gt \frac{1}{4}|b-a|$ for all $a,b \in (\frac{\pi}{4},\frac{\pi}{3})$ So far, I have Let $f(x)=\cos^2(x)$ Then $f(x)$ is continuous and differentiable on all $x\in \mathbb{R}$ so it is continuous and differentiable on $x\in (\frac{\pi}{4},\frac{\pi}{3})$. Applying the Mean Value Theorem to $f(x)$, we have: $\frac{\cos^2(b)-\cos^2(a)}{b-a}= -2\cos(x)\sin(x)$ $\frac{\cos^2(b)-\cos^2(a)}{b-a}= -\sin(2x)$ $|\cos^2(b)-\cos^2(a)| = -\sin(2x)|b-a|$ Now I do not know how to go from here, as on the interval $(\frac{\pi}{4},\frac{\pi}{3})$, $-\frac{\sqrt3}{2}\lt-\sin(2x)\lt -1$. Or am I making a mistake? You have forgotten the absolute value signs, and the derivative must be evaluated at the intermediate point the theorem provides, not at an arbitrary $x$: the Mean Value Theorem gives some $c\in(a,b)$ with $\frac{\cos^2(b)-\cos^2(a)}{b-a}=-\sin(2c)$, hence $|\cos^2(b)-\cos^2(a)|=|\sin(2c)|\,|b-a|$. For $\frac{\pi}{4}<c<\frac{\pi}{3}$ we have $\frac{\pi}{2}<2c<\frac{2\pi}{3}$, so $|\sin(2c)|$ stays above its value at the endpoint: $|\sin(2c)|>|\sin(\frac{2\pi}{3})|=\frac{\sqrt{3}}{2}>\frac{1}{4}$, which gives the claimed inequality. (Your last inequality is also reversed: on this interval $\frac{\sqrt{3}}{2}<\sin(2x)<1$.)
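A quick numerical sanity check of the inequality is easy to run. This is evidence only, not a substitute for the Mean Value Theorem argument, and the snippet below (sampling random pairs in the open interval) is just an illustration:

```python
import math
import random

# Check |cos^2(b) - cos^2(a)| > |b - a|/4 on random pairs in (pi/4, pi/3).
# The MVT argument actually gives the stronger bound (sqrt(3)/2)|b - a|.
random.seed(0)
for _ in range(10_000):
    a = random.uniform(math.pi / 4, math.pi / 3)
    b = random.uniform(math.pi / 4, math.pi / 3)
    lhs = abs(math.cos(b) ** 2 - math.cos(a) ** 2)
    assert lhs >= 0.25 * abs(b - a)
    assert lhs >= (math.sqrt(3) / 2) * abs(b - a) - 1e-12
print("inequality holds on all 10000 samples")
```

The small tolerance on the second assertion only guards against floating-point rounding; mathematically the bound is strict on the open interval.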
# TMA4205 Numerical linear algebra: Fall 2014 ## News • 31.07: Written re-take exam will be held on 06.08.2015. • 11.12: I have now graded the exams and final grades are here. Of course I cannot be on a high horse here, but I am genuinely disappointed with the results. If you have any reasonable explanation as to why there is such a gap between the project grades and the exam grades, please provide this information/complaints/suggestions to the reference group for their final report and/or to me. • 09.12: Very embarrassingly and in spite of our best efforts at weeding mistakes in exam questions, questions 2c) and 4b) turned out to contain them. Namely, in 2c) the correct inequality is $\|r_{\text{new}}\|^2_2 \le \bigg[1-\frac{1}{n\kappa^2_2(A)}\bigg] \|r\|^2_2,$ and in 4b) the stopping criterion should be $\| A_1 x^{(k)} - A_2 x^{(k)} - b \|_2 \le \varepsilon \|b\|_2$. I will take special care when interpreting your answers to these questions and will err on your side. I apologize most profoundly for these and thank you for all the feedback to the exam! • 08.12: Here are brief solutions to today's exam. I will try to grade it this week. • 05.12: Here is a table containing the grades for projects 1+2 (note: project 2 is twice as 'valuable' as project 1). Columns 2 and 3 of the table contain "+" if I have your report and "x" otherwise. Please let me know if you see any mistakes in these columns. The last column contains a grade (I used prosentvurderingsmetoden). Some of the grades are a bit too close to the boundary, so I will have to make a more educated assessment after the exam. Also please note that exam is 70/30 times "more valuable" than the projects, as far as the grading is concerned. Best of luck at the exam! • 04.12: I have finished going through your reports for Project #2. Overall I am content with your work, and a few of them are excellent. 
To combine the results of projects 1 & 2 I will have to figure out the mapping between the student and candidate numbers - I hope to finish that tomorrow. • 30.11: If you loved/hated project 2 and/but want to write a semester project about a problem with more-or-less the same structure, but very different spectral properties, please contact me and this can be arranged. Basically instead of the Stokes system we need to solve a so-called Brinkman system modelling the flow of slow incompressible liquids through a porous medium. To complicate things we will need to consider either spatially heterogeneous but isotropic permeability tensor of the porous media (easy case) or even heterogeneous, non-isotropic, indefinite permeability tensor (non-physical, but appears in certain numerical algorithms). The code for generating the linear system remains more or less the same but the preconditioning strategy will have to be adjusted. • 20.11: There is some programming/debugging work involved in making the multigrid work for the Laplace operators in the velocity space. Here are some tips: • First of all, make sure that the two-grid algorithm works (ie two grid levels only). • For the ease of implementation, homogenize the boundary conditions (i.e., make them 0) by solving the problem A deltaX = b - A x_0 = r_0. In this way restriction/prolongation operators close to the boundary will look the same on all levels. Then the fine solution is x_0 + deltaX. • Visualize everything (solutions, errors, residuals) - this may help to identify the errors. • Unit tests are the bread and butter of all good engineering and software development in particular. Instead of trying to understand, what is wrong with the whole code make sure that individual pieces work as expected. • For example, under-relaxed Jacobi is, albeit very inefficient, an iterative solver. 
Run thousands or millions of iterations of it and you should be able to solve the problem - make sure that the error goes to zero, and by visualizing the error make sure that the oscillatory components decay fast. • Restriction/interpolation involve a lot of indices in this case. Draw a picture of a coarse/fine grid and see which variables go where with what weights. Write a non-vectorized Matlab implementation first, with loops, and make sure it works. Visualise the input/result of the operation (on coarse/fine grids) to make sure that the operation works as expected. • 20.11: There is a very minor typo in the project 2 description, p. 5, possible multigrid cycle code listing: in the recursive call to mgv_u, the size of the coarse grid correction should be zeros((nx/2-1)*ny/2,1) and not zeros((nx/2-1)*(ny-1),1). • 19.12: Here is my summary of the course - basically just keywords. Hopefully this is still helpful for the exam preparation. • 19.12: There has been some (very understandable) naming confusion in connection with the project: MINRES method is not the same as MR iteration described in section 5.3.2 in Saad. MINRES, like MR, is a residual projection method (see section 5.2.2 in Saad). Unlike MR, it is a Krylov subspace method (Chapter 6 in Saad) for solving symmetric definite or indefinite problems. Mathematically it is equivalent to GMRES when the latter is applied to a symmetric problem; that is, they are identical as projection methods. Algorithmically/implementation-wise the method is nearly equivalent to DQGMRES (section 6.5.6 in Saad) though one utilizes Lanczos process (accounting for symmetry) as opposed to truncated Arnoldi for orthogonalization. • 12.11: If you are taking the course as TMA4505-fordypningsemne please email me so that we can agree on the exam dates and format. • 06.11: A few things: • Torbjørn is not available 07.11 and 21.11 - he will hold office hours 10.11 and 24.11 at 10:00 instead. 
• When you implement restriction/prolongation operators in project 2: it is OK to assume that the coarse grid is exactly half the size of the fine grid in each coordinate direction. You can achieve this by selecting the size of the coarsest grid in your multigrid code, and then determine the size of the finest grid from the number of needed grid levels, by refining the mesh each time (halving the cell size in each coordinate direction). Also, in my opinion, drawing a picture of where each unknown type (U, V, P) goes during the refinement/coarsening procedure helps a lot in implementing the operations in a sensible way. • I am planning to discuss iterative methods for large scale eigenvalue problems next. I will post more details over the weekend. • 03.11: You may use Matlab's PCG & MINRES in the project! In fact I would highly recommend doing this as you will have fewer things to debug/worry about. • 31.10: Due to the reoccurrence of the question "Is this algorithm faster than Matlab's backslash?" I timed my implementation (not optimized in any way) of algorithms in project 2. Here are the absolute and relative (to backslash operator) timings on an Intel Xeon E5-2665 2.4GHz machine with 64Gb RAM. I kept increasing the size of the problem until Matlab reported "out of memory". For smaller problems the additional effort in building/applying the sophisticated preconditioner does not pay off, but as the size of the problem increases one beats all strategies involving (sub-)matrix factorizations, complete or otherwise. Note that one can also solve rather large problems in this way, even when factorizations run out of memory. • 22.10: There will be no lecture 23.10 - please read Chapters 4 & 5 from Trefethen & Bau (copies are found on itslearning). A slightly easier (in my opinion) proof of uniqueness of SVD is by appealing to the uniqueness of eigenvalues/vectors, see for example this note. I will prepare slides for the coming weeks of the course. 
• 14.10: I have now uploaded the description of the second part of the Project. Deadline: 28.11. Submission rules are the same as for the first part. Please let me know if there are any inconsistencies or unclear parts in the description. • 01.10: I will try to rectify some of the omissions in the convergence proofs at the lecture tomorrow by going in detail through the convergence analysis of MINRES for indefinite symmetric matrices. Here are my notes on this. In particular, here is the short answer to the story as to why |C_k(-x)| = |C_k(x)| for Chebyshev polynomials. • 24.09: There will be lectures in October/November - I will update the lecture plan soon. • 17.09: I was going through my notes from today's lecture and I found a mistake in my derivation of the Kantorovich inequality. Here is the correct derivation. • 11.09: So much for my great attempt to use itslearning for submitting the project reports. As it seems impossible to upload Matlab files there, please submit your projects to Torbjørn Ringholm instead. You should submit the report as a PDF file and the accompanying code as a ZIP archive. I will close the submission through itslearning. • 09.09: I stand corrected on the subject of TMA4505-fordypningsemne. If you are taking this course as TMA4505 you will take an oral exam at the end, which will account for 100% of the grade. In this sense, the projects are not compulsory and will not together count as 30% of the grade. On the other hand, I need to test all the learning objectives, including "L2.1: The student is able to implement selected algorithms for a given model problem" at the oral exam. If you choose to exercise your right not to deliver the project during the semester and not to receive feedback on your solution, I will have to test whether you meet learning objective L2.1 by asking you to orally present the project material, including the details of the implementation, during the oral examination. 
• 08.09: If you are taking this course as TMA4505-fordypningsemne: contrary to the circulating rumours, you are required to hand in the projects. The only difference is that you get an oral exam at the end of the course instead of the written one. • 08.09: One word has been added to the project description: in b) we require that the matrix is row diagonally dominant. Please let me know if you find any other issues in the project description! • 04.09: There will be no lectures on 10.09 and 11.09. Instead, please work on project 1. Torbjørn will be available for questions/answers as usual on Wed at 16 or Fri at 10. • 20.08: We have tentatively agreed that AE will hold office hours on Mondays 11-12. Please do not forget to volunteer for the reference group or you risk being appointed :) ## Schedule (weekly, Mo–Fr) • 10:15 Lecture (K26); TR office hour (SBII/1056) • 11:15 AE office hour (SBII/1038) • 14:15 Lecture (K26) • 16:15 Exercise (R60) ## Exam information The exam will test a selection of the Learning outcome. Permitted examination support material: C: Specified, written and handwritten examination support materials are permitted. A specified, simple calculator is permitted. The permitted examination support materials are: • Y. Saad: Iterative Methods for Sparse Linear Systems. 2nd ed. SIAM, 2003 (book or printout) • L. N. Trefethen and D. Bau: Numerical Linear Algebra, SIAM, 1997 (book or photocopy) • G. Golub and C. Van Loan: Matrix Computations. 3rd ed. The Johns Hopkins University Press, 1996 (book or photocopy) • E. Rønquist: Note on The Poisson problem in $\mathbb{R}^2$: diagonalization methods (printout) • K. Rottmann: Matematisk formelsamling • Your own lecture notes from the course (handwritten) ## Learning outcome Here is a list of the goals that we will achieve in this course. 1. Knowledge • L1.1: The student has knowledge of the basic theory of equation solving. 
• L1.2: The student has knowledge of modern methods for solving large sparse systems. • L1.3: The student has detailed knowledge of techniques for calculating eigenvalues of matrices. • L1.4: The student understands the mechanisms underlying projection methods in general. • L1.5: The student understands the mechanisms underlying Krylov methods in general. • L1.6: The student has detailed knowledge about a selection of projection and Krylov algorithms. • L1.7: The student understands the principle of preconditioning. • L1.8: The student understands selected preconditioning techniques in detail. • L1.9: The student is familiar with the practical use of matrix factorization techniques. 2. Skills • L2.1: The student is able to implement selected algorithms for a given model problem. • L2.2: The student can assess the performance and limitations of the various methods. • L2.3: The student can make qualified choices of linear equation solvers/eigenvalue algorithms for specific types of systems. • L2.4: The student can assess the complexity and accuracy of the algorithms used. 3. General competence • L3.1: The student can describe a chosen scientific method and communicate his or her findings in a written report using precise language. ## Curriculum The curriculum consists of all topics that have been covered in the lectures, all self-study topics, the semester project, and the exercises with solutions. The lectures and self-study topics are based on the following material. The list is not yet final. • Saad: 1.1–1.13, 2.2, 4.1, 4.2.1–4.2.3, 5.1–5.3, 6.1–6.11, 9.1–9.3, 10.1–10.2, 13.1–13.5, 14.1–14.3 • Trefethen & Bau: Lectures 4, 5, 10, 25, 26, 27, 28, 29 • Golub & Van Loan: 2.5 • Rønquist: Note on The Poisson problem in $\mathbb{R}^2$: diagonalization methods. [pdf] Of the reading material listed below, you should buy the book by Saad, but not necessarily any of the other ones. Saad's book is also available online for free through NTNU's network at SIAM. • Y. 
Saad: Iterative Methods for Sparse Linear Systems. 2nd ed. SIAM, 2003. (main text) • L. N. Trefethen and D. Bau: Numerical Linear Algebra, SIAM, 1997. Photocopies are available at itslearning. • G. Golub and C. Van Loan: Matrix Computations. 3rd ed. The Johns Hopkins University Press, 1996. Photocopies are available at itslearning. • W. L. Briggs, V. E. Henson, S. F. Mc Cormick: A multigrid tutorial, SIAM, 2000. Available online at SIAM. • J. W. Demmel: Applied Numerical Linear Algebra, SIAM, 1997. Available online at SIAM. ## Lecture log and plan S refers to Saad, TB to Trefethen & Bau, GVL to Golub & Van Loan, D to Demmel, and R to Rønquist. Date Topic Reference Learning outcome 27.11 No lecture. Work on project 2 L2.1, L2.2 26.11 No lecture. Work on project 2 L2.1, L2.2 20.11 Q&A 19.11 Summary; Q&A 13.11 The Rayleigh-Ritz method and Lanczos algorithm for eigenvalue computation D7.1-D7.7 L1.3, L2.4 12.11 Perturbation of symmetric eigenvalue problems D5.2 L1.3 06.11 QR algorithm with shifts, Wilkinson shift TB29 L1.3, L2.4 05.11 QR algorithm without shifts, simultaneous iteration TB28 L1.3 30.10 Power iteration, inverse iteration, Rayleigh quotient iteration TB27 L1.3 29.10 Eigenvalue problems, eigenvalue-revealing factorizations, eigenvalue algorithms TB24–25, TB26 (self-study), slides L1.3, L1.9 23.10 Matrix properties via the SVD TB5 L1.9 22.10 The singular value decomposition (SVD) GVL2.5, TB4 L1.9 16.10 Domain decomposition (DD) methods, Schwarz' alternating procedure, multiplicative and additive overlapping Schwarz, DD as a preconditioner S14.1, S14.3–14.3.3 L1.8, L2.3 15.10 Intergrid operators, two-grid cycles, V-cycles and W-cycles, red-black Gauss–Seidel, MG as a preconditioner S13.3–13.4.3, S12.4.1 L1.2, L1.8, L2.3 09.10 Introduction to multigrid (MG) methods, weighted Jacobi iteration S13.1–13.2.2 L1.2, L1.8 08.10 Preconditioned GMRES and CG. Flexible GMRES. 
S9.1, S9.2.1, S9.3.1-9.3.2, 9.4 L1.2, L1.7, L1.8 02.10 Convergence analysis of MINRES, indefinite case. Note, S9.1 L1.6, L2.2, L2.4 01.10 Convergence analysis of CG and GMRES S6.11.1-6.11.4 (S6.11.2-self study) L1.6, L2.2, L2.4 25.09 The D-Lanczos algorithm, the conjugate gradient method (CG) S6.7.1 L1.2, L1.6 24.09 GMRES (GMRES(k), QGMRES, and DQGMRES), Lanczos method S6.5.1, S6.5.3, S6.5.5, parts of S6.5.6, S6.6.1 L1.2, L1.6, L1.9 18.09 Arnoldi's algorithm, the full orthogonalization method (FOM) and variations of FOM S6.3–6.4.2 L1.2, L1.5, L1.6, L2.2 17.09 Convergence of steepest descent, minimal residual (MR) iteration, convergence of MR, Krylov subspace methods S5.3.1–5.3.2, S6.1 L1.4, L1.5, L1.6, L2.2 11.09 No lecture. Work on project 1 L2.1, L2.2 10.09 No lecture. Work on project 1 L2.1, L2.2 04.09 Projection methods, error projection, residual projection, steepest descent method S5.1.1-5.2.2, S5.3-5.3.1 L1.4 03.09 Orthogonal projections, Jacobi and Gauss-Seidel iteration, convergence of splitting methods, Gershgorin's theorem, the Petrov–Galerkin framework S1.12.3–1.12.4, S4.1, S4.2–4.2.1, S1.8.4, S4.2.3, S5.1 L1.1, L1.4 28.08 Perturbation analysis, finite difference methods, diagonalization methods, projection operators S1.13.1 (self-study), S1.13.2, S2.2–2.2.5 (self-study), R, S1.12.1 L1.2, L1.4 27.08 Similarity transformations. Normal, Hermitian, and positive definite matrices S1.8–1.8.1, S1.8.2 (self-study), S1.8.3, S1.9, S1.11 L1.9 21.08 QR factorizations S1.7, TB10 L1.9 20.08 Background in linear algebra S1.1–1.3 (self-study), S1.4–1.6
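The project tips above recommend under-relaxed (weighted) Jacobi as a slow but trustworthy reference solver and as a unit test for multigrid components. The course projects are in Matlab on a staggered Stokes grid; the sketch below transposes only the weighted-Jacobi idea to Python/NumPy on a 1D Poisson model problem, so all names and parameters here are illustrative:

```python
import numpy as np

def weighted_jacobi(A, b, x, omega=2.0 / 3.0, iters=1):
    """Weighted Jacobi sweeps: x <- x + omega * D^{-1} (b - A x)."""
    d = np.diag(A)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / d
    return x

# 1D Poisson model problem: -u'' = f on (0, 1), u(0) = u(1) = 0,
# discretized with the standard 3-point stencil on n interior points.
n = 31
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
grid = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * grid)      # manufactured so that u = sin(pi x)
u_exact = np.sin(np.pi * grid)

# "Run thousands of iterations and the error should go to zero":
u = weighted_jacobi(A, f, np.zeros(n), iters=20000)
print(np.max(np.abs(u - u_exact)))       # down at discretization-error level
```

With only a handful of sweeps instead of thousands, plotting u - u_exact shows the oscillatory error components decaying first, which is exactly the smoothing behaviour the multigrid components rely on.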
# A Few Good Trig Identities

Geometry Level 2

$\dfrac{1}{1+\sin^2 x} + \dfrac{1}{1+\cos^2 x} + \dfrac{1}{2+\tan^2 x} + \dfrac{1}{2+\cot^2 x}$

For all $x$ in the domain of both $\tan x$ and $\cot x$, what is the value of the expression above?
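One possible solution sketch (ours, not part of the original problem page): rewrite the last two fractions using $\tan^2 x = \sin^2 x/\cos^2 x$ and $\cot^2 x = \cos^2 x/\sin^2 x$:

```latex
\frac{1}{2+\tan^2 x} = \frac{\cos^2 x}{2\cos^2 x + \sin^2 x} = \frac{\cos^2 x}{1+\cos^2 x},
\qquad
\frac{1}{2+\cot^2 x} = \frac{\sin^2 x}{2\sin^2 x + \cos^2 x} = \frac{\sin^2 x}{1+\sin^2 x}.
```

Pairing each rewritten fraction with the matching one of the first two terms gives $\frac{1+\sin^2 x}{1+\sin^2 x} + \frac{1+\cos^2 x}{1+\cos^2 x} = 1 + 1 = 2$.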
# Validation of the compilation of Data-Parallel C "while" loops for shared memory architectures

Abstract : This report focuses on the compilation of "while" loops in data-parallel languages for MIMD shared memory architectures. An efficient compilation must decrease the number of global synchronizations due to dependencies. We validate an optimization suggested by Hatcher and Quinn for the DPC language. It consists in splitting the original loop into two loops: one "computation loop" without any additional control dependencies, and one "waiting loop" to ensure termination. The computation loop's body contains a minimal number of global synchronizations. We study its correctness proof informally and give the methodology that led to its design. The formal proof is based on the axiomatic semantics of Owicki and Gries. We give an axiomatization of the global synchronization statement and specify sufficient conditions for a deadlock-free execution. In Hatcher and Quinn's solution, we observe that the waiting loop is independent of the computation loop. The former absorbs the residual synchronizations of any parallel program. We conclude by presenting a modular method for constructing parallel programs.

Document type : Reports
Cited literature [18 references]
https://hal-lara.archives-ouvertes.fr/hal-02101986
Contributor : Colette Orange
Submitted on : Wednesday, April 17, 2019 - 9:11:24 AM
Last modification on : Wednesday, May 8, 2019 - 1:34:27 AM

### File

RR1994-13.pdf
Files produced by the author(s)

### Identifiers

- HAL Id : hal-02101986, version 1

### Citation

Gil Utard. Validation of the compilation of Data-Parallel C "while" loops for shared memory architectures. [Research Report] LIP RR-1994-13, Laboratoire de l'informatique du parallélisme. 1994, 2+21p. ⟨hal-02101986⟩
## Conqueror one year ago Back after being gone for almost 6 months. 1. Nnesha welcome back. 2. Conqueror yayz parteh time ! 3. rvc wb :) 4. anonymous Welcome Back. 5. King.Void. WB BRO! 6. pooja195 conquiii :D 7. horsegirl27 WELCOME BACK!!! Let's have a party!! 8. Jaynator495 OH GOD *runs screaming through the os walls* 9. Jaynator495 BURN IT WITH FIRE!!!!! *throws fireballs* 10. ♪Chibiterasu This is a dark day for us. 11. horsegirl27 Jay, don't you think that's a bit of an overreaction? 12. Jaynator495 No, you don't know conq ._> 13. Jaynator495 chibi, is it a dark day that conq returned, or that im the first one to...you know :P 14. ♪Chibiterasu No I don't know. 15. Jaynator495 i was talking to horse 16. Jaynator495 the second one was yours :P 17. ♪Chibiterasu I know. I don't know what you want me to know. 18. horsegirl27 Hmm guess we shouldn't have a party then. 19. Jaynator495 Conq, is EVIL @jigglypuff314 @iGreen @Librarian @pooja195 20. Jaynator495 oh and @King.Void. also knows what im talking about xD 21. iGreen @jaynator495 You shouldn't make attacks on people.. ._. 22. anonymous Welcome Back :) 23. Conqueror Man, the truth really hurts. o_O 24. Conqueror $$\color{blue}{\text{Originally Posted by}}$$ @iGreen @jaynator495 You shouldn't make attacks on people.. ._. $$\color{blue}{\text{End of Quote}}$$ If OpenStudy doesn't represent attacks on people, don't call 'em ambassadors. 25. ♪Chibiterasu What. 26. Conqueror The easy way to tell if people are helping on OS is if their smartscore goes up fast in a little bit of time, it's clear @Jaynator495 hasn't been doing that ;) 27. ♪Chibiterasu He's is a technical helper not a math helper. 28. ♪Chibiterasu Maybe you should pay attention before making snarky remarks. 29. Jaynator495 i also help a lot in os feedback 30. iGreen I think what @Conqueror was trying to say was that; Jaynator shouldn't be called an Ambassador since he was attacking people, which is what OpenStudy Ambassadors aren't supposed to represent. 
31. ♪Chibiterasu You're not making this any better, Jay. 32. Jaynator495 im not exactly trying to ._> 33. Jaynator495 This man attacked me, and others with out knowing all the facts 34. Nnesha then tell me what's the difference between you and him ? if you're going to do same thing then you re on a same category 35. Jaynator495 eachone was a personal attacck 36. Nnesha keep in mind you're an ambassador but he isn't 37. Jaynator495 we gave him several extra chances, and every time he screwed up and did the same thing 38. Jaynator495 with all the chances he got why should he deserve another? if memory serves he apologized and did it again. (don't take my word for this one, idk) 39. iGreen @Jaynator495 @♪Chibiterasu is right, you're just making things worse..delete that comment with the link.. 40. Jaynator495 oh right, why am i showing people what hes done! 41. iGreen Exactly, it creates more chaos.. 42. ♪Chibiterasu It's not like anybody doesn't about peach post anyway, but the way you're reacting to this doesn't seem very mature. Just because he attacked you doesn't mean you need to attack him. You're an ambie, Jay, show that you deserved the position. 43. iGreen ^ 44. Jaynator495 $$\color{#0cbb34}{\text{Originally Posted by}}$$ @Jaynator495 oh right, why am i showing people what hes done! $$\color{#0cbb34}{\text{End of Quote}}$$ and i was being sarcastic 45. iGreen I know that^^ 46. iGreen Peach Post wasn't really that bad..it just had "Top 5 Worst Ambi's" and "Top 5 Best Mods".. :l 47. Jaynator495 This is why i always said i dont deserve the title of ambassador :I 48. Nnesha |dw:1440609517488:dw| 4 y'all 49. ♪Chibiterasu The best part is when he said Saif founded Khan Academy LOL. 50. Jaynator495 Because honestly after all this time, i may seem like a better ambassador... but it seems i easily fall into old habbits... 51. Nnesha wait a sec |dw:1440609611817:dw| peanut butter choco for jay 52. 
iGreen $$\color{#0cbb34}{\text{Originally Posted by}}$$ @♪Chibiterasu The best part is when he said Saif founded Khan Academy LOL. $$\color{#0cbb34}{\text{End of Quote}}$$ Yeah, Saifoo himself actually gave that info..I guess he was trolling. 53. Jaynator495 |dw:1440609752996:dw| nobody carries these anymore ;-; 54. Jaynator495 dont mock me with a string a fishpole taped to my head and this tied to it, because we cant get it anymore ;-; 55. Conqueror @Jaynator495 I didn't send trolling cuss messages to people. And no one gave you any warnings ? 56. Conqueror There's nothing wrong with telling the truth or giving an opinion or giving bad feedback in openstudy feedback. You can give bad and good feedback in OpenStudy Feedback, FYI. 57. Jaynator495 I said you didnt know all the facts, that message was sent by my younger cousin... he saw i was logged in while i was in the bathroom, and used my OS... I do not curse... 58. Conqueror Ahh, I see, the old 'Blaming the younger relative" trick. 59. Jaynator495 Ive said this a few hundred times now, i never curse... 60. Jaynator495 and im not going into this with you 61. anonymous Heyyy 62. ♪Chibiterasu gasp its batmon 63. Jaynator495 hi 64. camerondoherty Feelssss!! cx 65. King.Void. Really Jay? 66. horsegirl27 Jay, I feel like you made this very public when you didn't need to. You didn't even have to say anything at all and yet you freaked out. 67. Jaynator495 Yea, i went ape ._. 68. EclipsedStar |dw:1440635051203:dw| Welcome back. 69. Jaynator495 |dw:1440638228909:dw| 70. Abhisar Hey @conqueror ! Welcome back, I definitely missed you c: 71. Xmoses1 Shoot. I just got on for the first time in a little over a year and now things are a liiiiiitle different. Its rather intimidating actually lol. That and all of my old friends seemed to have retired from OS. Wow it sucks to jump with both feet into the real world :'{ 72. Xmoses1 Things have changed a great deal on OS i mean to say 73. 
Nnesha whitemonsterbunny17 and solomonZelman are still here :=) 74. Nnesha @☯ 75. Nnesha thats what i found on ur fan list very cool @☯ ☯ ☯ :P 76. anonymous Welcome back :) 77. anonymous wb :) 78. pooja195 And it starts... 79. Xmoses1 Oh I have not seen them on @Nnesha! Annnnnd as for the peace guy i only fanned him because i thought he had a cool name xP But thanks for the WB's everybody :) The OS family is still as friendly as i remember thankfully! 80. Nnesha :=) o^_^o 81. Nnesha @ಠ_ಠ ^^ I fanned him bec of his CO_OL face :P ಠ_ಠಠ_ಠಠ_ಠ 82. camerondoherty You can embed? @EclipsedStar 83. TheSmartOne wb 84. EclipsedStar You mean the gifs @camerondoherty? Yeah, get Jay's extension for that. XD 85. camerondoherty Where can I get it? 86. jigglypuff314 ahhhhh this is what I get for not checking everyday xD @Conqueror welcome back :3 although it might have not been very welcoming </3 :( though I did miss you, honestly! 87. acxbox22 hi 88. Jaynator495 $$\color{#0cbb34}{\text{Originally Posted by}}$$ @Xmoses1 Oh I have not seen them on @Nnesha! Annnnnd as for the peace guy i only fanned him because i thought he had a cool name xP But thanks for the WB's everybody :) The OS family is still as friendly as i remember thankfully! $$\color{#0cbb34}{\text{End of Quote}}$$ im stillhere! xD 89. Jaynator495 $$\color{#0cbb34}{\text{Originally Posted by}}$$ @camerondoherty Where can I get it? $$\color{#0cbb34}{\text{End of Quote}}$$ http://ultrilliam.com/google-store/ 90. Xmoses1 Ok i'm confused now... 91. Xmoses1 But i feel kind of important now that i have been quoted! 92. Nnesha LOL 93. Conqueror I had to have a little 9 year old ambassador to come onto my post and do his regular daily trolling. And even if you have a younger cousin, I bet he's way more mature than you are. Oh, and I suggest you should swallow a bar of soap to clean that filthy and hypocritical mouth of yours. And if that doesn't work, try dish detergent. Have a nice day :) 94. 
Jaynator495 $$\color{#0cbb34}{\text{Originally Posted by}}$$ @Conqueror I had to have a little 9 year old ambassador to come onto my post and do his regular daily trolling. And even if you have a younger cousin, I bet he's way more mature than you are. Oh, and I suggest you should swallow a bar of soap to clean that filthy and hypocritical mouth of yours. And if that doesn't work, try dish detergent. Have a nice day :) $$\color{#0cbb34}{\text{End of Quote}}$$ Firstly: I Do not troll anymore, (much) Instead a develop a extension to enhance peoples openstudy experience. Secondly: I'm 14 xD Thirdly: I do have a younger cousin, in fact two of them, 1 is 13 (turning 14, but ill be 15 before shes 14) and a immature perverted 8 year old cousin (boy) Fourthly: welp, i didnt have a bar of soap so i used liquid, now i know why thats a threat, soap tastes beyond terrible ;-; anywho 95. Conqueror I can't find much 14 year olds on OpenStudy who go around trolling. 96. Jaynator495 XD finally something your right about xD but as i said, i dont troll as much anymore, im benifiting the community now in a attempt to make up for that, not repeat old mistakes 97. Xmoses1 I cannot figure out what the heck is going on 98. Jaynator495 LOL 99. Xmoses1 lol 100. TheSmartOne welcome back @Xmoses1 The history section misses you :P
Time limit: 1 s · Memory limit: 128 MB · Submissions: 0 · Accepted: 0 · Solvers: 0 · Acceptance ratio: 0.000%

## Problem

A small computer of Von Neumann architecture named ICPC is used for the BDN Programming Contest. ICPC is a 16-bit integer machine. There is a sufficient number of instruction memory cells in ICPC but it has only one data memory cell. ICPC has 6 registers: R1, R2, R3, R4, R5, and PC. The Rn registers are general purpose but PC is the program counter storing the address of the next instruction to be executed. The PC value can be changed only by a group of control-flow instructions. This machine follows the usual fetch-decode-execute cycles. The PC value is normally increased automatically except for the cases of the control-flow instructions. ICPC has only two addressing modes, immediate value and register, but PC cannot be used as an operand. The whole set of instructions of ICPC is shown in Table 1. In Table 1, r denotes a register, v denotes a register or an integer value, and M denotes the data memory cell.

Table 1: The instruction set of ICPC

| ICPC instruction | The meaning of the instruction in C |
| --- | --- |
| `load r` | `r = M;` |
| `store v` | `M = v;` |
| `move r v` | `r = v;` |
| `add r v` | `r += v;` |
| `sub r v` | `r -= v;` |
| `loop r` … *instructions* … `pool` | `while (r > 0) { execute the instructions }` |
| `cond r` … *instructions* … `dnoc` | `if (r > 0) { execute the instructions }` |

Every step of the fetch-decode-execute cycle is called a "cycle." Therefore every instruction consumes at least three cycles, and every ICPC instruction except dnoc consumes exactly three cycles. The instruction dnoc is just used for denoting the end of cond and is not executed at all. This kind of instruction is called a pseudo instruction; dnoc is the only pseudo instruction in ICPC. The instruction pool denotes the end of loop, just like dnoc, but it is indeed executed, for the control flow should return to the beginning of the loop at the end of a loop. To minimize the execution time of programs, pipelining is usually adopted in modern computer architectures and ICPC also adopts it.
Assuming that three instructions, namely A, B, and C, are to be executed in sequence, the decode cycle of A can overlap the fetch cycle of B, and the execute cycle of A can overlap the decode cycle of B and also the fetch cycle of C. Therefore, it takes only 4 cycles for one move and one add instruction in sequence rather than 6 cycles, as shown in the first two instructions of Fig. 1(a). In Fig. 1, we use F for the fetch, D for the decode, and E for the execute cycle. Pipelining is stalled if the PC encounters a control-flow instruction, because we do not know the next instruction to be executed. The next instruction can be determined only after the control-flow instruction is executed. The cond instruction in Fig. 1(a) shows this fact. Note that dnoc is not executed at all since it is a pseudo instruction, and it takes 9 cycles to execute all the instructions in Fig. 1(a). However, the instruction pool, though similar to dnoc, is really a control-flow instruction. For example, in Figure 1(b), not only the instruction loop but also the instruction pool stalls the pipelining.

Figure 1: Example programs and the corresponding cycle counts

## Input

Your program is to read the input from standard input. The input consists of T test cases. The number of test cases T is given in the first line of the input. The first line of each test case contains the number of lines L of ICPC instructions (L > 0), and the remaining L lines contain the sequence of ICPC instructions of the test case. Every instruction line contains exactly one ICPC instruction. The input may be indented according to the nesting of control structures. The loop and cond should contain at least one instruction, i.e. there is no empty loop or empty branch. The maximum number of characters in an input line is 100. Immediate values are 16-bit two's complement integers, i.e. an immediate value N is in -32768 ≤ N ≤ +32767. The op code and the operands are separated by at least one blank character.
There is no infinite loop in the test cases.

## Output

Your program is to write to standard output. Your program should count the number of cycles taken when executing the ICPC program given in standard input. The initial contents of the data memory cell and the registers are assumed to be 0. When overflow or underflow occurs in any of the registers or the data memory cell, your program should write "error" rather than a cycle count. The following shows sample input and output for three test cases.

## Sample Input 1

```
3
7
move R1 0
move R2 10
loop R2
sub R2 1
pool
store R1
4
move R1 1
loop R1
pool
5
move R1 1
cond R1
dnoc
88
```
### November 27, 2006

Both authors are supported in part by the National Science Foundation.

# A $C^r$ Closing Lemma for a Class of Symplectic Diffeomorphisms

### Zhihong Xia & Hua Zhang

Department of Mathematics, Northwestern University, Evanston, IL 60208 E-mail address : [email protected] & [email protected]

- Abstract. We prove a ${C}^{r}$ closing lemma for a class of partially hyperbolic symplectic diffeomorphisms. We show that for a generic ${C}^{r}$ symplectic diffeomorphism, $r=1,2,\dots$, with two dimensional center and close to a product map, the set of all periodic points is dense.

1 Introduction and Main Result

One of the fundamental problems in dynamical systems is the so-called ${C}^{r}$ closing lemma. The problem goes back to Poincaré in his study of the restricted three body problem. It asks whether periodic points are dense for a typical symplectic or volume preserving diffeomorphism on a compact manifold. Let $M$ be a compact manifold, with either a symplectic or a volume form $\omega$. Let ${\text{Diff}}_{\omega }^{r}\left(M\right)$ be the set of ${C}^{r}$ symplectic or volume-preserving diffeomorphisms on $M$. A set in a topological space is said to be residual if it is the intersection of countably many open and dense subsets of the topological space. A dynamical property is said to be ${C}^{r}$ generic on ${\text{Diff}}_{\omega }^{r}\left(M\right)$ if there is a residual set $R$ such that the property holds for every $f\in R$. In the class of symplectic and volume-preserving diffeomorphisms, the closing lemma is the following conjecture: Conjecture 1 (Closing Lemma for symplectic and volume-preserving diffeomorphisms). Assume $M$ is compact.
There exists a residual subset $R\subset {\text{Diff}}_{\omega }^{r}\left(M\right)$ such that if $f\in R$, the set of periodic points $P=\left\{x\in M \mid {f}^{p}\left(x\right)=x \text{ for some integer } p\right\}$ is dense in $M$. Smale [12] listed the problem as one of the mathematical problems for this century. For $r=1$, the above conjecture is proved to be true by Pugh [9] and later improved to various cases by Pugh & Robinson [10]. A different proof was given by Liao [5] and Mai [6]. For higher smoothness $r>1$, besides the hyperbolic cases (the Anosov closing lemma, for uniformly hyperbolic and non-uniformly hyperbolic diffeomorphisms), there are no non-trivial results. On the other hand, examples show that the local perturbation method used in the proof of the ${C}^{1}$ closing lemma no longer works for the smoother case. New and global perturbation methods are required (Gutierrez [2]). M. Herman [3] has a counterexample to the ${C}^{r}$ closing lemma with $r$ large for symplectic diffeomorphisms where the symplectic form is not exact. In this paper, we prove a ${C}^{r}$ closing lemma for arbitrary positive integer $r$ for a class of partially hyperbolic symplectic diffeomorphisms. A diffeomorphism $f:M\to M$ is partially hyperbolic if the tangent bundle $TM$ admits a $Tf$-invariant splitting $TM={E}^{u}\oplus {E}^{c}\oplus {E}^{s}$ and there is a Riemannian metric on $M$ such that there exist real numbers ${\lambda }_{1}>{\lambda }_{2}>1>{\mu }_{2}>{\mu }_{1}>0$ satisfying $m\left(Tf{|}_{{E}^{u}}\right)\ge {\lambda }_{1}>{\lambda }_{2}\ge \parallel Tf{|}_{{E}^{c}}\parallel \ge m\left(Tf{|}_{{E}^{c}}\right)\ge {\mu }_{2}>{\mu }_{1}\ge \parallel Tf{|}_{{E}^{s}}\parallel >0.$ Here the co-norm $m\left(A\right)$ of a linear operator $A$ between two Banach spaces is defined by $m\left(A\right):=\inf_{\parallel v\parallel =1}\parallel A\left(v\right)\parallel =\parallel {A}^{-1}{\parallel }^{-1}$.
To avoid triviality, we assume at least two of the subbundles are non-zero. Partial hyperbolicity is a ${C}^{1}$ open condition, as can be easily verified via its associated invariant cone fields. We remark that our definition of partial hyperbolicity here is not the most general one. One can allow the parameters ${\lambda }_{1},{\lambda }_{2},{\mu }_{1},{\mu }_{2}$ to depend on each trajectory in general cases. The systems that we are considering satisfy the definition given here. For symplectic cases, the stable distribution ${E}^{s}$ and the unstable distribution ${E}^{u}$ have the same dimension. Moreover, one can choose the parameters such that ${\lambda }_{1}={\mu }_{1}^{-1}$ and ${\lambda }_{2}={\mu }_{2}^{-1}$. We are now ready to state our main theorem. Theorem 1.1. Let ${M}_{1}$ be a compact symplectic manifold and ${f}_{1}\in {\text{Diff}}_{{\omega }_{1}}^{r}\left({M}_{1}\right)$ an Anosov diffeomorphism. Let ${M}_{2}$ be a compact symplectic surface (orientable surface) with an area form ${\omega }_{2}$ and let ${f}_{2}\in {\text{Diff}}_{{\omega }_{2}}^{r}\left({M}_{2}\right)$ be an area-preserving diffeomorphism on ${M}_{2}$. Let $\omega ={\omega }_{1}+{\omega }_{2}$ be the symplectic form defined on ${M}_{1}×{M}_{2}$. Assume that ${f}_{1}$ dominates ${f}_{2}$, i.e., ${f}_{1}×{f}_{2}:{M}_{1}×{M}_{2}\to {M}_{1}×{M}_{2}$ is partially hyperbolic with $T{M}_{2}$ as its center splitting. Then there exists a neighborhood $U$ of ${f}_{1}×{f}_{2}$ in ${\text{Diff}}_{\omega }^{r}\left({M}_{1}×{M}_{2}\right)$ and a residual subset $R\subset U$ such that for any $g\in R$, the set of periodic points of $g$ is dense in ${M}_{1}×{M}_{2}$. The proof takes advantage of the partial hyperbolicity and a recent result of Xia [15] on surface diffeomorphisms.
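A standard example satisfying the hypotheses (stated here as an illustration of ours, not taken from the paper): take ${M}_{1}={\mathbb{T}}^{2}$ with the standard area form and let $f_1$ be the cat map,

```latex
f_1 = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} : \mathbb{T}^2 \to \mathbb{T}^2,
\qquad \text{eigenvalues } \lambda_1 = \tfrac{3+\sqrt{5}}{2} \approx 2.618, \quad
\mu_1 = \lambda_1^{-1} \approx 0.382,
```

which is Anosov and area-preserving. Taking ${M}_{2}={\mathbb{T}}^{2}$ and $f_2$ any area-preserving ${C}^{r}$ diffeomorphism with $\parallel T{f}_{2}\parallel$ and $\parallel T{f}_{2}^{-1}\parallel$ strictly below ${\lambda }_{1}$ (for instance a rigid rotation, where both norms equal 1), the product ${f}_{1}×{f}_{2}$ is partially hyperbolic with center $T{M}_{2}$, so ${f}_{1}$ dominates ${f}_{2}$ and the theorem applies to every $g$ in a residual subset of a ${C}^{r}$ neighborhood of the product.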
2 Partial Hyperbolicity and Symplectic Structure For a ${C}^{r}$ partially hyperbolic diffeomorphism $f$, the stable and unstable bundles are uniquely integrable and are tangent to foliations ${W}_{f}^{s}$ and ${W}_{f}^{u}$ with ${C}^{r}$ leaves. $f$ is dynamically coherent if the distributions ${E}^{c}$, ${E}^{c}\oplus {E}^{s}$ and ${E}^{c}\oplus {E}^{u}$ are integrable, integrating to foliations ${W}_{f}^{c}$, ${W}_{f}^{cs}$ and ${W}_{f}^{cu}$ respectively, and ${W}_{f}^{c}$ and ${W}_{f}^{s}$ sub-foliate ${W}_{f}^{cs}$, while ${W}_{f}^{c}$ and ${W}_{f}^{u}$ sub-foliate ${W}_{f}^{cu}$. We have the following proposition from Pugh & Shub [11]. Proposition 2.1. Let $f$ be a partially hyperbolic diffeomorphism. If the center foliation ${W}_{f}^{c}$ exists and is of class ${C}^{1}$, then $f$ is stably dynamically coherent, i.e., any $g$ which is ${C}^{1}$ sufficiently close to $f$ is dynamically coherent. Let $W$ be a foliation of a compact smooth manifold $M$ whose leaves are ${C}^{r}$ immersed submanifolds of dimension $k$. For $x\in M$, we call a set $P\left(x\right)\subset W\left(x\right)$ a ${C}^{r}$ plaque of $W$ at $x$ if $P\left(x\right)$ is the image of a ${C}^{r}$ embedding of the unit ball $B\subset {\mathbb{R}}^{k}$ into $W\left(x\right)$. A plaquation $\text{P}$ for $W$ is a collection of plaques such that every point $x\in M$ is contained in a plaque $P\in \text{P}$. Let $f$ be a diffeomorphism such that $W$ is invariant under $f$. A pseudo orbit $\left\{{x}_{n}{\right\}}_{n\in \mathbb{Z}}$ respects $\text{P}$ if for each $n$, $f\left({x}_{n}\right)$ and ${x}_{n+1}$ lie in a common plaque $P\in \text{P}$.
$f$ is called plaque expansive with respect to $W$ if there exists $\epsilon >0$ such that if two $\epsilon$-pseudo orbits $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ both respect $\text{P}$ and $d\left({x}_{n},{y}_{n}\right)<\epsilon$ for all $n$, then ${x}_{n}$ and ${y}_{n}$ lie in a common plaque for all $n$. Hirsch, Pugh and Shub [4] proved that plaque expansiveness with respect to the center foliation of a partially hyperbolic diffeomorphism is a ${C}^{1}$ open property and is satisfied when we have a smooth center foliation ${W}^{c}$ (Theorems 7.1 and 7.2 in [4]). It is clear that under the condition of our main theorem, $f={f}_{1}×{f}_{2}$ is partially hyperbolic with smooth center foliation, so there exists a neighborhood $U$ of ${f}_{1}×{f}_{2}$ in ${\text{Diff}}_{\omega }^{r}\left({M}_{1}×{M}_{2}\right)$ such that any $g\in U$ is partially hyperbolic, dynamically coherent and plaque expansive with respect to the center foliation ${W}_{g}^{c}$. Niţică and Török [13] proved the following Proposition 2.2. Let $M$ be a compact manifold with a smooth volume form $\mu$; if $f\in {\text{Diff}}_{\mu }^{1}\left(M\right)$ is partially hyperbolic, dynamically coherent and plaque expansive with respect to the center foliation ${W}_{f}^{c}$, then the periodic center leaves of $f$ are dense in $M$. Now we have the following Lemma 2.3. Under the condition of our main theorem, there exists a neighborhood $U$ of ${f}_{1}×{f}_{2}$ in ${\text{Diff}}_{\omega }^{r}\left({M}_{1}×{M}_{2}\right)$ such that any $g\in U$ is partially hyperbolic, dynamically coherent and the periodic center leaves of $g$ are dense in ${M}_{1}×{M}_{2}$. The proof is a simple application of the above results, and we only have to note that a symplectic diffeomorphism trivially supports an invariant smooth volume form. The following proposition is also from Niţică and Török [13]. Proposition 2.4.
For a partially hyperbolic diffeomorphism on a compact manifold $M$ with center-stable and center-unstable foliations ${W}^{cs}$ and ${W}^{cu}$, we have the following local product structure property: There exist constants $\epsilon >0$, $\delta >0$ and $K>1$ such that for any $x,y\in M$ with $d\left(x,y\right)<\epsilon$, the following hold: 1) ${W}_{\delta }^{s}\left(x\right)$ and ${W}_{\delta }^{cu}\left(y\right)$ intersect at a unique point ${z}_{1}$, ${W}_{\delta }^{u}\left(x\right)$ and ${W}_{\delta }^{cs}\left(y\right)$ intersect at a unique point ${z}_{2}$, and moreover $\max \left(d\left(x,{z}_{1}\right),d\left(y,{z}_{1}\right)\right)<K\,d\left(x,y\right)$, $\max \left(d\left(x,{z}_{2}\right),d\left(y,{z}_{2}\right)\right)<K\,d\left(x,y\right)$. 2) ${W}_{\delta }^{cs}\left(x\right)$ and ${W}_{\delta }^{cu}\left(y\right)$ intersect transversally, and the same is true for ${W}_{\delta }^{cs}\left(y\right)$ and ${W}_{\delta }^{cu}\left(x\right)$. 3) ${W}_{\delta }^{cs}\left(x\right)\cap {W}_{\delta }^{cu}\left(x\right)={W}_{\delta }^{c}\left(x\right)$ and ${W}_{\delta }^{cs}\left(y\right)\cap {W}_{\delta }^{cu}\left(y\right)={W}_{\delta }^{c}\left(y\right)$. Theorem 6.1 of [4] tells us that $\epsilon$, $\delta$ and $K$ are lower semi-continuous with respect to the ${C}^{1}$ topology on ${\text{Diff}}^{1}\left(M\right)$. We will need a result for symplectic partially hyperbolic diffeomorphisms. Lemma 2.5. Let $f$ be a symplectic partially hyperbolic diffeomorphism on a compact symplectic manifold $M$, and suppose we have the center foliation ${W}_{f}^{c}$. Then the center manifolds of $f$ are symplectic submanifolds and the restrictions of $f$ to invariant center leaves are symplectic diffeomorphisms. $\text{Proof}$.
For a symplectic partially hyperbolic diffeomorphism $f$, there exist $\lambda >\tau >1$ such that $m\left(Tf{|}_{{E}^{u}}\right)\ge \lambda >\tau \ge \parallel Tf{|}_{{E}^{c}}\parallel \ge m\left(Tf{|}_{{E}^{c}}\right)\ge {\tau }^{-1}>{\lambda }^{-1}\ge \parallel Tf{|}_{{E}^{s}}\parallel >0.$ Denote by $\omega$ the symplectic form on $M$. Let $W\subset M$ be a center leaf; we shall prove that $\left(W,\omega {|}_{W}\right)$ is a symplectic manifold, i.e., $\omega {|}_{W}$ is a non-degenerate, closed two-form on $W$. Closedness is obvious since $\omega$ is closed on $M$. Suppose that $\omega {|}_{W}$ is degenerate; then there exist $x\in W$ and a unit vector $u\in {T}_{x}W$ such that for all ${v}_{c}\in {T}_{x}W$, $\omega \left(u,{v}_{c}\right)=0$. We have the splitting ${T}_{x}M={E}_{x}^{s}\oplus {E}_{x}^{c}\oplus {E}_{x}^{u}$; any $v\in {T}_{x}M$ can be written as $v={v}_{s}+{v}_{c}+{v}_{u}$, where ${v}_{s}\in {E}_{x}^{s}$, ${v}_{u}\in {E}_{x}^{u}$ and ${v}_{c}\in {E}_{x}^{c}={T}_{x}W$. We have $\omega \left(u,v\right)=\omega \left(u,{v}_{s}\right)+\omega \left(u,{v}_{c}\right)+\omega \left(u,{v}_{u}\right)$ and, by the way $u$ was chosen, $\omega \left(u,{v}_{c}\right)=0$. There exists $K>0$ such that $|\omega \left({w}^{1},{w}^{2}\right)|<K$ for an arbitrary pair of unit vectors ${w}^{1},{w}^{2}\in {T}_{x}M$. Now we know $\omega \left(u,{v}_{s}\right)=0$ because $|\omega \left(u,{v}_{s}\right)|=|\omega \left(T{f}^{n}\left(u\right),T{f}^{n}\left({v}_{s}\right)\right)|\le \left(\frac{\tau }{\lambda }\right)^{n}\parallel {v}_{s}\parallel \,|\omega \left({u}^{n},{v}_{s}^{n}\right)|\le K\left(\frac{\tau }{\lambda }\right)^{n}\parallel {v}_{s}\parallel \longrightarrow 0$ as $n\longrightarrow +\infty$. Here ${u}^{n}=T{f}^{n}\left(u\right)/\parallel T{f}^{n}\left(u\right)\parallel$, ${v}_{s}^{n}=T{f}^{n}\left({v}_{s}\right)/\parallel T{f}^{n}\left({v}_{s}\right)\parallel$.
Similarly, we have $\omega \left(u,{v}_{u}\right)=0$ and hence $\omega \left(u,v\right)=0$ for any $v\in {T}_{x}M$; this contradicts the fact that $\omega$ is non-degenerate on $M$. So $\left(W,\omega {|}_{W}\right)$ is a symplectic submanifold and, if $W$ is invariant under $f$, $f{|}_{W}$ preserves $\omega {|}_{W}$ and hence is a symplectic diffeomorphism on $W$. 3 Some generic properties for area-preserving diffeomorphisms on surfaces To prove our main theorem, we need some generic properties for surface diffeomorphisms. Let $S$ be a compact surface, and denote by ${\text{Diff}}_{\mu }^{r}\left(S\right)$ the set of area-preserving ${C}^{r}$ diffeomorphisms. For $f\in {\text{Diff}}_{\mu }^{r}\left(S\right)$, denote by $HP\left(f\right)$ the set of hyperbolic periodic points of $f$. The following generic property was first proved by Mather [7] for maps on the two-sphere ${S}^{2}$ and later generalized to arbitrary compact surfaces by Oliveira [8]. Proposition 3.1. There is a residual subset $R\subset {\text{Diff}}_{\mu }^{r}\left(S\right)$ such that if $f\in R$ and $p\in HP\left(f\right)$ is a hyperbolic periodic point of $f$, then $\overline{{W}_{f}^{s}\left(p\right)}=\overline{{W}_{f}^{u}\left(p\right)}.$ We remark that if $r=1$, the proposition is true for generic symplectic and volume-preserving diffeomorphisms on any compact manifold (cf. Xia [14]). The next theorem is due to Xia [15], extending a recent result of Franks & Le Calvez [1] on the two-sphere. Theorem 3.2. Let $S$ be a compact orientable surface and $\mu$ an area form on $S$. For any positive integer $r$, there is a residual subset $R\subset {\text{Diff}}_{\mu }^{r}\left(S\right)$ such that if $f\in R$, then both the sets ${\cup }_{p\in HP\left(f\right)}{W}^{s}\left(p\right)$ and ${\cup }_{p\in HP\left(f\right)}{W}^{u}\left(p\right)$ are dense in $S$.
Moreover, if an open set $U\subset S$ contains no periodic point, then there is a hyperbolic periodic point $p\in HP\left(f\right)$ such that both the stable and unstable manifolds of $p$ are dense in $U$. The proof uses prime end compactification and a rich literature on area-preserving surface diffeomorphisms. 4 Proof of the Main Theorem Our main perturbation lemma is from Xia [14]. Lemma 4.1. Let $M$ be a compact symplectic manifold and $f\in {\text{Diff}}_{\omega }^{r}\left(M\right)$; there exist ${\epsilon }_{0}>0$ and $c>0$ such that for any $g\in {\text{Diff}}_{\omega }^{r}\left(M\right)$ with $\parallel f-g{\parallel }_{{C}^{r}}<{\epsilon }_{0}$ and any $0<\epsilon \le {\epsilon }_{0}$, $0<\delta \le {\epsilon }_{0}$, if $x,y\in M$ and $d\left(x,y\right)<c\epsilon \delta$, there exists ${g}_{1}\in {\text{Diff}}_{\omega }^{r}\left(M\right)$, $\parallel {g}_{1}-g{\parallel }_{{C}^{r}}<\epsilon$, satisfying ${g}_{1}{g}^{-1}\left(x\right)=y$, ${g}_{1}\left(z\right)=g\left(z\right)$ for all $z\notin {g}^{-1}\left({B}_{\delta }\left(x\right)\right)$, and ${g}_{1}^{-1}\left(z\right)={g}^{-1}\left(z\right)$ for all $z\notin {B}_{\delta }\left(x\right)$. Now we are ready to prove the main theorem. $\text{Proof}$. By Lemma 2.3, there exists a neighborhood $U$ of ${f}_{1}×{f}_{2}$ in ${\text{Diff}}_{\omega }^{r}\left({M}_{1}×{M}_{2}\right)$ such that any $g\in U$ is partially hyperbolic, dynamically coherent and the periodic center leaves of $g$ are dense in ${M}_{1}×{M}_{2}$. Now for any fixed $g\in U$, suppose there is a periodic-point-free open set $V\subset {M}_{1}×{M}_{2}$; we show that by an arbitrarily small perturbation, we can create a periodic point in $V$. It is clear that the main theorem will follow.
Since the periodic center leaves of $g$ are dense, by Proposition 2.4 we can find two periodic center leaves $W_1$ and $W_2$, sufficiently close to each other, such that there exist $x_1, y_1 \in W_1 \cap V$ and $x_2, y_2 \in W_2 \cap V$ with $z = W_\delta^u(x_1) \cap W_\delta^s(x_2) \in V$ and $w = W_\delta^s(y_1) \cap W_\delta^u(y_2) \in V$. $W_1$ and $W_2$ are compact surfaces. By taking a suitable power of $g$ we may assume that $W_1$ and $W_2$ are invariant under $g$. By Lemma 2.5, $W_1$ and $W_2$ are symplectic submanifolds, and $g^1 = g|_{W_1}$ and $g^2 = g|_{W_2}$ are symplectic diffeomorphisms. After an arbitrarily small perturbation, we may assume that $g^1$ and $g^2$ satisfy the generic condition of Theorem 3.2. Since $W_i \cap V$ is a periodic-point-free open set in $W_i$, there exists $p_i \in HP(g^i)$ such that $W_{g^i}^u(p_i)$ and $W_{g^i}^s(p_i)$ are both dense in $W_i \cap V$, for $i = 1, 2$. Note that $p_1$ and $p_2$ are hyperbolic periodic points of $g$. We will show that an arbitrarily small perturbation turns $z$ and $w$ into heteroclinic points of the hyperbolic periodic points $p_1$ and $p_2$, yielding a heteroclinic loop. As a result, there will be periodic points in arbitrary neighborhoods of $z$ and $w$, in particular in $V$. For an arbitrary $\eta > 0$ prescribed as the size of the perturbation, take $\epsilon = \eta/4$, assuming without loss of generality that $0 < \epsilon \le \epsilon_0$, where $\epsilon_0$ is from Lemma 4.1. Since $z \in W^u(W_1)$ and $z \notin W_1$, there exists $\alpha$ with $0 < \alpha < \epsilon_0$ such that $B_\alpha(z) \cap W_1 = \varnothing$ and $g^{-n}(z) \notin B_\alpha(z)$ for all $n \in \mathbb{N}$.
Fix this $\alpha$; there exists $\beta$ with $0 < \beta < \alpha$ such that for all $\tilde{z} \in W^u(W_1)$ with $d(\tilde{z}, z) < \beta$, we have $g^{-n}(\tilde{z}) \notin B_{\alpha/2}(z)$ for all $n \in \mathbb{N}$. Fix this $\beta$; there exists $\gamma$ with $0 < \gamma < \min\{\beta, c\epsilon\delta\}$ such that for all $z_1 \in W^u(W_1)$ with $d(z_1, z) < \gamma$, we have $B_{\alpha/4}(z_1) \subset B_{\alpha/2}(z)$ and hence $g^{-n}(z_1) \notin B_{\alpha/4}(z_1)$ for all $n \in \mathbb{N}$. By continuity of the unstable foliation, there exists $\nu > 0$ such that for all $\tilde{x}_1 \in W_1$ with $d(\tilde{x}_1, x_1) < \nu$, there exists $z_1 \in W_g^u(\tilde{x}_1)$ with $d(z_1, z) < \gamma$. Since $W_{g^1}^u(p_1)$ is dense in $W_1 \cap V$, there exists $\tilde{x}_1 \in W_{g^1}^u(p_1)$ with $d(\tilde{x}_1, x_1) < \nu$, and hence there is a $z_1 \in W_g^u(\tilde{x}_1)$ with $d(z_1, z) < \gamma$. Now we apply the perturbation Lemma 4.1 to $g$ with the parameters $0 < \epsilon < \epsilon_0$ and $0 < \delta = \alpha/4 < \epsilon_0$. Since $d(z_1, z) < \gamma < c\epsilon\delta$, there exists $g_1 \in \text{Diff}_\omega^r(M_1 \times M_2)$ with $\|g_1 - g\|_{C^r} < \epsilon$ satisfying $g_1 g^{-1}(z_1) = z$, $g_1(x) = g(x)$ for all $x \notin g^{-1}(B_\delta(z_1))$, and $g_1^{-1}(y) = g^{-1}(y)$ for all $y \notin B_\delta(z_1)$.
We check that after the perturbation, $z \in \tilde{W}_{g_1}^u(p_1)$, where $\tilde{W}_{g_1}^u(p_1)$ stands for the unstable manifold of the hyperbolic periodic point $p_1$ for $g_1$, not the leaf of the unstable foliation containing $p_1$ in the partially hyperbolic setting. Clearly $g_1^{-1}(z) = g^{-1}(z_1)$, and since $g^{-n}(z_1) \notin B_\delta(z_1)$ for all $n \in \mathbb{N}$, we get $g_1^{-n}(z) = g^{-n}(z_1)$ for all $n \in \mathbb{N}$. Moreover, $g_1^{-n}(p_1) = g^{-n}(p_1)$ for all $n \in \mathbb{N}$, since $B_\delta(z_1) \cap W_1 = \varnothing$. Hence we have $d(g_1^{-n}(z), g_1^{-n}(p_1)) = d(g^{-n}(z_1), g^{-n}(p_1)) \le d(g^{-n}(z_1), g^{-n}(\tilde{x}_1)) + d(g^{-n}(\tilde{x}_1), g^{-n}(p_1)) \longrightarrow 0$ as $n \longrightarrow +\infty$; both terms tend to $0$ since $z_1 \in W_g^u(\tilde{x}_1)$ and $\tilde{x}_1 \in W_{g^1}^u(p_1)$. This shows $z \in \tilde{W}_{g_1}^u(p_1)$. Similarly, we can use a perturbation of size less than $\epsilon$ to put $z$ on the stable manifold of $p_2 \in W_2$. Two more such perturbations put $w$ in the intersection of the stable manifold of $p_1$ and the unstable manifold of $p_2$. Altogether, by a perturbation of size less than $4\epsilon = \eta$, we obtain the desired heteroclinic loop. This concludes our proof.

References

1. J. Franks and P. Le Calvez. Regions of instability for non-twist maps. Ergodic Theory Dynam. Systems, 23(1):111–141, 2003.
2. C. Gutierrez.
A counter-example to a $C^2$ closing lemma. Ergodic Theory Dynam. Systems, 7(4):509–530, 1987.
3. M. Herman. Exemples de flots hamiltoniens dont aucune perturbation en topologie $C^\infty$ n'a d'orbites périodiques sur un ouvert de surfaces d'énergie. C. R. Acad. Sci. Paris, t. 312:989–994, 1991.
4. M. Hirsch, C. Pugh, and M. Shub. Invariant Manifolds, Lect. Notes in Math., volume 583. Springer-Verlag, Berlin-New York, 1977.
5. S. T. Liao. An extension of the $C^1$ closing lemma. Acta Sci. Natur. Univ. Pekinensis, 2:1–41, 1979.
6. J. Mai. A simpler proof of the $C^1$ closing lemma. Scientia Sinica, 10:1021–1031, 1986.
7. J. Mather. Topological proofs of some purely topological consequences of Carathéodory's theory of prime ends. In Selected Studies, Eds. Th. M. Rassias and G. M. Rassias, pages 225–255, 1982.
8. F. Oliveira. On the generic existence of homoclinic points. Ergod. Th. & Dynam. Sys., 7:567–595, 1987.
9. C. Pugh. The closing lemma. Amer. J. Math., 89:956–1021, 1967.
10. C. Pugh and C. Robinson. The $C^1$ closing lemma, including Hamiltonians. Ergod. Th. & Dynam. Sys., 3:261–313, 1983.
11. C. Pugh and M. Shub. Stably ergodic dynamical systems and partial hyperbolicity. J. of Complexity, 13:125–179, 1997.
12. S. Smale. Mathematical problems for the next century. Math. Intelligencer, 20(2):7–15, 1998.
13. V. Niţică and A. Török. An open dense set of stably ergodic diffeomorphisms in a neighborhood of a non-ergodic one. Topology, 40:259–278, 2001.
14. Z. Xia. Homoclinic points in symplectic and volume-preserving diffeomorphism. Commun. Math. Phys., 177:435–449, 1996.
15. Z. Xia. Area-preserving surface diffeomorphisms. Preprint, Mathematics ArXiv: math.DS/0503223, 2004.

Department of Mathematics, Northwestern University, Evanston, IL 60208
E-mail address: [email protected] & [email protected]
# Massless particles and C 1. Dec 15, 2006 I know photons have no mass but are there other particles that have no mass and if so can they travel at the speed of light? Last edited: Dec 15, 2006 2. Dec 15, 2006 ### mathman Gravitons (yet to be discovered?) and gluons (strong nuclear force carrier) have no mass. Gravitons travel at the speed of light. Gluons (in ordinary circumstances) exist inside nuclei, so they don't travel much. 3. Dec 16, 2006 ### rbj and, if they travel at speed, c, they gotta have no rest mass (which is what we mean by "massless"). photons actually do have mass, namely: $$m = \frac{E}{c^2} = \frac{h \nu}{c^2}$$ just no "rest mass" a.k.a. "invariant mass".
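As a quick numerical check of rbj's $m = E/c^2 = h\nu/c^2$ relation, here is the value for a 500 nm photon (the wavelength is an arbitrary illustrative choice, not from the thread):

```python
# Check m = E/c^2 = h*nu/c^2 for a visible-light photon.
h = 6.62607015e-34   # Planck's constant, J*s
c = 2.99792458e8     # speed of light, m/s

wavelength = 500e-9          # 500 nm, an illustrative green-light wavelength
nu = c / wavelength          # frequency in Hz
m = h * nu / c**2            # the photon's E/c^2 ("relativistic mass"), in kg

print(f"{m:.3e} kg")
```

The result is a few times $10^{-36}$ kg, tiny compared with even an electron's rest mass of about $9.1 \times 10^{-31}$ kg.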
## Chalkdust, and the natural number game!

Chalkdust is a great magazine for undergraduates which circulates in hard copy form around London and is also available online. It’s quirky, it’s put together by people who know what they’re talking about, and I think it does a great job of promoting mathematics to the undergraduate and school communities in London and beyond. And issue 10, out today, has an article about Lean in it 😀 [disclaimer: written by me 😉 ]

## The Natural Number Game!

Ever wanted to prove that $a\leq b$ and $b\leq c$ implies $a\leq c$ from first principles, but couldn’t be bothered to download and install a computer theorem prover? Well now you can! The natural number game is an attempt by me to teach complete beginners, who are comfortable with induction but know nothing about theorem provers, how it feels to use one. Undergraduate mathematicians — this is a chance to try Lean without having to download anything, and get over enough of the learning curve to get going and actually do something without getting stuck 100 times. Or maybe you’ll get stuck 100 times anyway. Find me on the Lean chat and give me feedback. Currently we have around 50 levels, covering addition, multiplication, raising numbers to powers, and a few inequality levels. Over the next few weeks I hope to add 20 more inequality levels and some stuff about even and odd numbers and primality. If you’re not a fan of the web interface and just want to play the game offline on your own computer, you’ll need to install Lean and its maths library, but this is not too hard nowadays. The natural number game source code is available online at github with installation instructions.

The Xena Project aims to get mathematics undergraduates at Imperial College (and beyond) trained in the art of formalising mathematics on a computer. This entry was posted in Learning Lean, undergrad maths.

### 13 Responses to Chalkdust, and the natural number game!

1.
Noah Snyder says: This is great! Bug report, the variable names in the code and the normal math don’t match for World 3, Level 4. Liked by 1 person 2. xenaproject says: Thanks! Happy to hear more. I’ve caught a couple of things myself. Will release v1.01 when I have a minute. Like 3. Did-I-Do-Something-Wrong says: I’m not sure this is a bug or what. World 1, Level 4. I type: rw \l h, rw h, refl, OK, nothing yet, other than it tells me the refl tactic is being used at 100:19. But then I delete the last comma… Suddenly my proof is complete? Like 4. xenaproject says: The truth about commas is that Lean *does not need the very last one in a complete proof* but I decided against emphasising this. So your proof is complete with or without the very last comma. Notice that the second rewrite actually changes two things not one 😀 Like 5. Will Sawin says: I played through the whole thing. Some comments/questions/observations. I think it might be helpful to reveal “intended solutions”/”optimal solutions” after you succeed at a given level. I often forgot the syntax for some basic operation, tried several times to do it and failed, then gave up and solved the problem a different and wordier way. Seeing a curated solution would let me know if I was on the right track originally and teach me the correct syntax if so. The “pro-er tip” on world 5, level 2 is a great example of information I would want to see after solving the level rather than before. Other times I didn’t know the syntax to do something, figured out a more cumbersome alternate approach, and only read the correct syntax in a hint to a much later level. The hints on World 5, level 2 and world 5, level 6 are both examples of this. Are we supposed to use these earlier? Was I trying completely the wrong way? Is it possible to solve all the levels without using tactics that are not on the list you linked to? (In particular, the contradiction tactic)? I guess an intended solutions list would answer this as well. 
I didn’t like the hints very much. I tried not to read them and to just look at the problem, and when I saw the hints I tried to solve the problems without following the hints (and usually failed). I wonder if some of them could be removed, or hidden in some optional hints box. Also the “start with intro h” hint on World 5, Level 4 seems wrong. I think you meant “start with cases h”. Liked by 1 person • xenaproject says: The truth of it is that the game was rushed out before it was finished, because I wanted it to coincide with the Chalkdust article but I was really drowning in teaching. Thanks loads for the comments Will. I am going to release a better v1.1 hopefully soon. The answer to questions like “is it possible to solve all the levels without using tactics not on the list” is “I really have no idea, this is shoddy on my part I know”. Like 6. Joseph Myers says: I observed several times that getting some bit of the syntax slightly wrong just resulted in the top right window going blank without any helpful error messages in the bottom right window. I noticed that starting typing e.g. an apply tactic tended to make the top right window go blank while typing (just saying e.g. “85:7: tactic apply”). But it would be helpful to have the hypotheses and goals stay visible there to help in figuring out exactly what to put in the command I’m typing. Particular places (in world 2, I haven’t gone further than that) where I knew what I wanted to do mathematically (but which might have been different from the intended proof approach!) 
but in doing so it seemed necessary to refer to external documentation, were where it started using “not equal” with more than just symmetry (I didn’t see any guidance on how to use, or prove, propositions involving “not”, though some guesswork found it worked as “implies false” for “intro”, or how to dispose of a case once you’ve successfully derived a contradiction such as x \ne x or a = succ a), and where I wanted to apply theorems given in an “iff” form and so ended up looking up iff.elim_left to convert them to an “implies” form. World 2, Level 6 has a typo “numebrs”. (I could file a pull request for that fix if you want, but I guess it’s quicker for you just to fix it.) Liked by 1 person • xenaproject says: Hi Joseph! The poor error reporting is actually a bug in https://github.com/mpedramfar/Lean-game-maker which has since been fixed (but the fix hasn’t made it to the web version of the game yet). World 2 the higher levels are just all belched out with no thought about how to teach beginners how to do them, I’m sorry. This needs fixing but I’m very busy with work right now. I’m hoping to get something better out within a week. Like 7. Joseph Myers says: Some more observations: The tactic guide in the left column really ought to link to https://wwwf.imperial.ac.uk/~buzzard/xena/html/source/tactics/guide.html (and maybe to the Lean manuals) as the point where I find I want to look up more advanced tactics doesn’t tend to be from the particular levels that have that link. Some of the lemmas / theorems are specified in a form like ∀ (a b : mynat) (and so to prove them I need to start with “intro” on those variables), others just use (a b : mynat); I’m not sure if this difference is accidental, deliberate to introduce different language features, or deliberate because there is some reason it’s idiomatic to use ∀ in certain cases but not others. 
Yet others use {a b : mynat}, meaning that to specify arguments explicitly when using them it’s necessary to discover the “@theorem_name” syntax for specifying such arguments in the {} case. It would be least confusing for this beginner just to use (a b : mynat) all the time. Liked by 1 person • xenaproject says: Joseph it’s all accidental. Thank you very much for your careful comments. I used to have a lot of { }s and then at the last minute I attempted to change them all to ( )’s but clearly I missed a few. It got rushed out the door. I only now have the time to look at it carefully. Liked by 1 person 8. Joseph Myers says: And having now done all the levels: World 5, Level 1 clearly has problems with the formatting (mostly just showing raw source of the level rather than formatted like the other levels). In World 5, the left column of known results ought to include le_def rather than starting with le_refl. Thanks for putting this together! Looking forward to additional Worlds and Levels in future. Like • xenaproject says: Joseph thanks for playing it all the way through before I myself had got around to this 😀 I only added world 5 a couple of days before release. I have 30 results about inequalities which need to go in there. I then have to decide whether to do more stuff on naturals (even/oddness, primality, maybe even uniqueness of factorization) or whether to start on The Integer Game! Like 9. xenaproject says: OK I think that all the issues raised here are fixed in the current version of the game. Like
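For readers wondering what the transitivity statement from the post looks like in practice, a proof in Lean tactic mode runs roughly as follows. This is a sketch assuming the game's definition $a \le b \iff \exists c,\ b = a + c$; the lemma name `add_assoc` and the exact unfolding behaviour of `cases` are as in the game's `mynat` development, but details may differ between versions:

```lean
-- Sketch of the ≤-transitivity level, assuming a ≤ b is defined
-- as ∃ c, b = a + c (names here follow the game but are illustrative).
theorem le_trans (a b c : mynat) (hab : a ≤ b) (hbc : b ≤ c) : a ≤ c :=
begin
  cases hab with d hd,  -- hd : b = a + d
  cases hbc with e he,  -- he : c = b + e
  use d + e,            -- goal : c = a + (d + e)
  rw [he, hd, add_assoc],
end
```

The final `rw` leaves a goal of the form `a + (d + e) = a + (d + e)`, which `rw` closes automatically by reflexivity.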
# Blowing up

In mathematics, blowing up or blowup is a type of geometric modification, particularly applied in algebraic geometry, where it is essential in birational geometry. At a point $Z$ that is being 'blown up' (the metaphor is inflation of a balloon, rather than explosion), $Z$ is replaced by the whole space of tangent directions at $Z$ (which, more formally, can be defined as the projective space constructed from the tangent space at $Z$). More general blow-ups are also defined.

Contemporary algebraic geometry treats blowing up as an intrinsic operation on an algebraic variety. It may also be considered from an extrinsic point of view; for example by taking a plane curve and applying a transformation to the projective plane in which it sits. This is in fact the more classical approach, and this is reflected in some of the terminology. Blowing up is also more formally a monoidal transformation; in the projective plane simply blowing up one point takes one to a quadric, and a curve must be blown down to return to the plane. That is, transformations in the Cremona group are not 'monoidal' or single-centred. See also quadratic transformation.

## Blowing up points in complex space

Let $Z$ be the origin in $n$-dimensional complex space, $\mathbb{C}^n$. That is, $Z$ is the point where the $n$ coordinate functions $x_1, \ldots, x_n$ simultaneously vanish. Let $\mathbb{P}^{n-1}$ be $(n-1)$-dimensional complex projective space with homogeneous coordinates $y_1, \ldots, y_n$. Let $\tilde{\mathbb{C}^n}$ be the subset of $\mathbb{C}^n \times \mathbb{P}^{n-1}$ that satisfies simultaneously the equations $x_i y_j = x_j y_i$ for $i, j = 1, \ldots, n$.
The projection $\pi : \mathbb{C}^n \times \mathbb{P}^{n-1} \to \mathbb{C}^n$ naturally induces a holomorphic map $\pi : \tilde{\mathbb{C}^n} \to \mathbb{C}^n.$ This map $\pi$ (or, often, the space $\tilde{\mathbb{C}^n}$) is called the blow-up (variously spelled blow up or blowup) of $\mathbb{C}^n$.

The exceptional divisor $E$ is defined as the inverse image of the blow-up locus $Z$ under $\pi$. It is easy to see that $E = Z \times \mathbb{P}^{n-1} \subseteq \mathbb{C}^n \times \mathbb{P}^{n-1}$ is a copy of projective space. It is an effective divisor. Away from $E$, $\pi$ is an isomorphism between $\tilde{\mathbb{C}^n} \setminus E$ and $\mathbb{C}^n \setminus Z$; it is a birational map between $\tilde{\mathbb{C}^n}$ and $\mathbb{C}^n$.

## Blowing up submanifolds in complex manifolds

More generally, one can blow up any codimension-$k$ complex submanifold $Z$ of $\mathbb{C}^n$. Suppose that $Z$ is the locus of the equations $x_1 = \cdots = x_k = 0$, and let $y_1, \ldots, y_k$ be homogeneous coordinates on $\mathbb{P}^{k-1}$. Then the blow-up $\tilde{\mathbb{C}^n}$ is the locus of the equations $x_i y_j = x_j y_i$ for all $i$ and $j$, in the space $\mathbb{C}^n \times \mathbb{P}^{k-1}$.

More generally still, one can blow up any submanifold of any complex manifold $X$ by applying this construction locally. The effect is, as before, to replace the blow-up locus $Z$ with the exceptional divisor $E$. In other words, the blow-up map $\pi : \tilde{X} \to X$ is birational, and an isomorphism away from $E$. $E$ is naturally seen as the projectivization of the normal bundle of $Z$.
So $\pi|_E : E \to Z$ is a locally trivial fibration with fiber $\mathbb{P}^{k-1}$.

Since $E$ is a smooth divisor, its normal bundle is a line bundle. It is not difficult to show that $E$ intersects itself negatively. This means that its normal bundle possesses no holomorphic sections; $E$ is the only smooth complex representative of its homology class in $\tilde{X}$. (Suppose $E$ could be perturbed off itself to another complex submanifold in the same class. Then the two submanifolds would intersect positively — as complex submanifolds always do — contradicting the negative self-intersection of $E$.) This is why the divisor is called exceptional.

Let $V$ be some submanifold of $X$ other than $Z$. If $V$ is disjoint from $Z$, then it is essentially unaffected by blowing up along $Z$. However, if it intersects $Z$, then there are two distinct analogues of $V$ in the blow-up $\tilde{X}$. One is the proper (or strict) transform, which is the closure of $\pi^{-1}(V \setminus Z)$; its normal bundle in $\tilde{X}$ is typically different from that of $V$ in $X$. The other is the total transform, which incorporates some or all of $E$; it is essentially the pullback of $V$ in cohomology.

## Blowing up schemes

To pursue blow-up in its greatest generality, let $X$ be a Noetherian scheme, and let $\mathcal{I}$ be a coherent sheaf of ideals on $X$. The blow-up of $X$ with respect to $\mathcal{I}$ is a scheme $\tilde{X}$ along with a morphism $\pi\colon \tilde{X} \rightarrow X$ such that $\pi^{-1}\mathcal{I} \cdot \mathcal{O}_{\tilde{X}}$ is an invertible sheaf, characterized by this universal property: for any morphism $f\colon Y \rightarrow X$ such that $f^{-1}\mathcal{I} \cdot \mathcal{O}_Y$ is an invertible sheaf, $f$ factors uniquely through $\pi$.
Notice that $\tilde{X} = \mathbf{Proj}\, \bigoplus_{n=0}^{\infty} \mathcal{I}^n$ has this property; this is how the blow-up is constructed. Here Proj is the Proj construction on graded commutative rings.

## Related constructions

In the blow-up of $\mathbb{C}^n$ described above, there was nothing essential about the use of complex numbers; blow-ups can be performed over any field. For example, the real blow-up of $\mathbb{R}^2$ at the origin results in the Möbius strip; correspondingly, the blow-up of the two-sphere $\mathbb{S}^2$ results in the real projective plane.

Deformation to the normal cone is a blow-up technique used to prove many results in algebraic geometry. Given a scheme $X$ and a closed subscheme $V$, one blows up $V \times \{0\}$ in $Y = X \times \mathbb{C}$ (or $X \times \mathbb{P}^1$). Then $\tilde{Y} \to X \times \mathbb{C}$ is a fibration. The general fiber is naturally isomorphic to $X$, while the central fiber is a union of two schemes: one is the blow-up of $X$ along $V$, and the other is the normal cone of $V$ with its fibers completed to projective spaces.

Blow-ups can also be performed in the symplectic category, by endowing the symplectic manifold with a compatible almost complex structure and proceeding with a complex blow-up. This makes sense on a purely topological level; however, endowing the blow-up with a symplectic form requires some care, because one cannot arbitrarily extend the symplectic form across the exceptional divisor $E$. One must alter the symplectic form in a neighborhood of $E$, or perform the blow-up by cutting out a neighborhood of $Z$ and collapsing the boundary in a well-defined way. This is best understood using the formalism of symplectic cutting, of which symplectic blow-up is a special case.
Symplectic cutting, together with the inverse operation of symplectic summation, is the symplectic analogue of deformation to the normal cone along a smooth divisor.
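As a concrete instance of the proper and total transforms discussed above, consider blowing up the origin of $\mathbb{C}^2$ and pulling back the cuspidal cubic $y^2 = x^3$. In the affine chart $y = tx$ (with $t$ the slope coordinate on the exceptional $\mathbb{P}^1$; our notation, not from the article), the total transform factors as $x^2(t^2 - x)$: the exceptional divisor $x = 0$ appears with multiplicity 2, and the strict transform $t^2 = x$ is a smooth parabola. A small numerical sketch of this factorization:

```python
def total_transform(x, t):
    # Pull back the cusp y^2 - x^3 = 0 under the blow-up chart y = t*x.
    return (t * x) ** 2 - x ** 3

def strict_transform(x, t):
    # Total transform with the exceptional factor x^2 divided out:
    # (t*x)^2 - x^3 = x^2 * (t^2 - x).
    return t ** 2 - x

# The factorization holds identically in (x, t):
for x in (-2.0, 0.5, 3.0):
    for t in (-1.0, 0.25, 2.0):
        assert abs(total_transform(x, t) - x**2 * strict_transform(x, t)) < 1e-9
```

Setting $x = 0$ in the strict transform gives $t = 0$: the strict transform meets the exceptional divisor only in the direction of the cusp's tangent line.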
# Estimating the Percolation Centrality of Large Networks through Pseudo-dimension Theory

### Abstract

In this work we investigate the problem of estimating the percolation centrality of every vertex in a graph. This centrality measure quantifies the importance of each vertex in a graph undergoing a contagious process. It is an open problem whether the percolation centrality can be computed in $\mathcal{O}(n^{3-c})$ time, for any constant $c>0$. In this paper we present a $\tilde{\mathcal{O}}(m)$ randomized approximation algorithm for the percolation centrality of every vertex of $G$, generalizing techniques developed by Riondato, Upfal and Kornaropoulos. The estimate obtained by the algorithm is within $\epsilon$ of the exact value with probability $1-\delta$, for *fixed* constants $0 < \epsilon, \delta < 1$. In fact, we show in our experimental analysis that for real-world complex networks, the output produced by our algorithm is significantly closer to the exact values than its theoretical worst-case guarantee.

Publication: In ACM SIGKDD'20 - International Conference on Knowledge Discovery & Data Mining
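The paper's actual sample bound comes from pseudo-dimension theory, but the flavor of an $(\epsilon, \delta)$ guarantee can be illustrated with the simpler Hoeffding bound: the number of independent samples sufficient for an empirical mean of $[0,1]$-valued draws to land within $\epsilon$ of its expectation with probability at least $1-\delta$. This is a generic Monte Carlo illustration, not the algorithm from the paper:

```python
import math

def hoeffding_samples(eps, delta):
    # Samples sufficient so that the mean of i.i.d. [0,1]-bounded draws is
    # within eps of its expectation with probability >= 1 - delta,
    # by Hoeffding's inequality: m >= ln(2/delta) / (2 * eps^2).
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

print(hoeffding_samples(0.01, 0.1))  # 14979 samples for eps=0.01, delta=0.1
```

Note the $1/\epsilon^2$ dependence: halving the error bound quadruples the number of samples, which is why sharper bounds such as the pseudo-dimension approach matter for large networks.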
GMAT Practice Question Set #55 (Question 163-165)

Problem Solving Question #163: Passwords 4 or 5 characters

A computer system uses alphanumeric case sensitive characters for its passwords. When the system was created it required users to create passwords having 4 characters in length. This year, it added the option of creating passwords having 5 characters in length. Which of the following gives the expression for the total number of passwords the new computer system can accept? Assume there are 62 unique alphanumeric case sensitive characters.

(A) 63^4
(B) 62^5
(C) 62(62^4)
(D) 63(62^4)
(E) 63(62^6)

GMAT™ is a registered trademark of the Graduate Management Admission Council™. The Graduate Management Admission Council™ does not endorse, nor is it affiliated in any way with the owner or any content of this web site.
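The two password lengths are counted separately and added, and the sum factors: $62^4 + 62^5 = 62^4(1 + 62) = 63(62^4)$, which is choice (D). A quick numerical check:

```python
# Total passwords = (# of 4-character) + (# of 5-character) strings
# over a 62-symbol alphabet.
four_char = 62 ** 4
five_char = 62 ** 5
total = four_char + five_char

# Factoring: 62^4 + 62^5 = 62^4 * (1 + 62) = 63 * 62^4  -> choice (D)
assert total == 63 * 62 ** 4
print(total)  # 930909168
```

The distractor (C) corresponds to $62 \cdot 62^4 = 62^5$, i.e. counting only the 5-character passwords.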
# Geometry Question: Calculating the length of the side of a triangle inside a circle.

Given this picture: The radius of the circle is $30$ inches. The angle between $A$ and $B$ is $22.5^\circ$. How would I calculate the distance (not along the arc, but straight line) from $B$ to $A$, as depicted by the red line?

- @Eric Naslund Thank you for adding the image to the post. –  NW Tech Jun 3 '11 at 23:42
- Thank Jonas Meyer, he was the one that did that. I merely added some latex for the letters, and changed the title to be more descriptive. (Usually we try to stay away from titles with "easy" or "hard" since these adjectives are very subjective.) –  Eric Naslund Jun 4 '11 at 0:32

Bisect the angle to get 2 right triangles with known hypotenuse and angles, then use $\sin$ to get the sides opposite the $22.5/2$ degree angles. Or, use the triangle that's already there, having 2 angles equal to $(180-22.5)/2$ degrees, and apply the law of sines. Or, apply the law of cosines immediately. In case you want exact forms for the sines or cosines involved, you can use half/double angle formulas and the fact that $22.5=45/2$.

- Breaking that up into two right triangles, using the law of sines, and rounding to 1 decimal place, I came up with 11.7 inches. Does that look right? –  NW Tech Jun 3 '11 at 23:49
- @Chip: Yes, it's around 11.7054193. An exact form is $$30\sqrt{2-\sqrt{2+\sqrt{2}}}.$$ –  Jonas Meyer Jun 3 '11 at 23:55

Just for future reference, in case someone stumbles upon this problem hoping to learn how to solve a similar problem:

Let $O$ denote the point at the origin. We are given that $\angle AOB = 22.5^\circ$, and we are given that the radius of the circle is $30$ inches. That means the length of the line segments $OA$ and $OB$ are each $30$ inches, since they are both radii of the circle. As Jonas pointed out in his answer, there are a number of approaches to solving for the length of the line segment $AB$.
Note that $\triangle AOB$ is an isosceles triangle, and so the angles $\angle OAB, \angle OBA$ are equal. Let's call the measure of one of these two angles "$x$". Then, since the sum of the measures of the angles of any triangle is $180^{\circ}$, we know that $$22.5^\circ + 2x = 180^{\circ}$$ Solving for $x$ gives us $\displaystyle x = \frac{180-22.5}{2} = 78.75^\circ$

Now, there are a few options: We have all the angles of $\triangle AOB$, and the length of two of its sides. We need only find the length of the segment $AB$. We can use any of the following approaches to find the length of $AB$:

1. Using the Law of Sines: $$\frac{c}{\sin(22.5^\circ)} \,=\, \frac{30}{\sin(78.75^\circ)}$$ where each numerator is the length of a side of the triangle, $c$ the unknown length, and the denominator of each term is the sine of the angle opposite the side given in its numerator. From here, one can easily solve for $c$, the length of the segment $AB$.

2. Using the Law of Cosines, in this case, $$(AB)^2 = (OA)^2 + (OB)^2 - 2(OA)(OB)\cos(\angle AOB) \rightarrow (AB)^2 = 2(30)^2 - 2(30)^2\cos(22.5^\circ)$$ One need only solve for the segment $AB$.

3. Denote the midpoint of segment $AB$ by $M$ (which is also the point at which the bisector of $\angle AOB$ intersects segment $AB$), such that $\triangle AOM, \triangle BOM$ are both congruent right triangles (with $\angle OMA, \angle OMB\;\text{both}\; 90^\circ$). So, we have that $\cos(\angle OAB) = (AM)/(OA)$, so that $\cos(78.75^\circ) = (AM)/30$. Solving for $AM$ gives us $(AM) = 30\cdot \cos(78.75^\circ)$, and from there we need only compute $2(AM)$ to obtain the length of the segment $AB$.

- $\exp(+)^{\infty}$. Thanks for your support. I hope I will be a hand for others, Amy. :-) –  Babak S. Jul 22 '13 at 17:09
- Thank you @Babak! –  amWhy Jul 22 '13 at 17:22
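All three approaches give the same chord length; a quick numerical check, with the radius and angle taken from the question:

```python
import math

R = 30.0                     # radius in inches
theta = math.radians(22.5)   # central angle AOB

# Chord length three ways, matching the answers in the thread:
chord_half_angle = 2 * R * math.sin(theta / 2)                    # bisected right triangles
chord_law_cos = math.sqrt(2 * R**2 - 2 * R**2 * math.cos(theta))  # law of cosines
chord_exact = R * math.sqrt(2 - math.sqrt(2 + math.sqrt(2)))      # nested-radical exact form

print(round(chord_half_angle, 7))  # 11.7054193
```

The nested-radical form comes from the half-angle identity $\cos(22.5^\circ) = \tfrac{1}{2}\sqrt{2+\sqrt{2}}$, so $2 - 2\cos(22.5^\circ) = 2 - \sqrt{2+\sqrt{2}}$.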
# A Solutions to Exercises

## A.1 Section 3 Data Structures

### A.1.1 3.4 Data frames

##### A.1.1.0.1 Exercise 1: What is the difference between cbind and rbind?

C = columns, R = rows. cbind() and rbind() both create matrices or data frames by combining several vectors of the same length. cbind() combines vectors as columns, while rbind() combines them as rows.

##### A.1.1.0.2 Exercise 2: We found out that the blood pressure instrument is under-recording each measure and all measurements are incorrect by 0.1. How would you add 0.1 to all values in the blood vector?

id <- c("N198","N805","N333","N117","N195","N298")
gender <- c(1, 0, 1, 1, 0, 1) # 0 denotes male, 1 denotes female
age <- c(30, 60, 26, 75, 19, 60)
blood <- c(0.4, 0.2, 0.6, 0.2, 0.8, 0.1)
my_data <- data.frame(id, gender, age, blood)
my_data <- data.frame(ID = id, Sex = gender, Age = age, Blood = blood)
my_data
##     ID Sex Age Blood
## 1 N198   1  30   0.4
## 2 N805   0  60   0.2
## 3 N333   1  26   0.6
## 4 N117   1  75   0.2
## 5 N195   0  19   0.8
## 6 N298   1  60   0.1
blood # the original blood measures before any changes
## [1] 0.4 0.2 0.6 0.2 0.8 0.1
updated_blood <- blood + 0.1 # add 0.1 to every value in the blood vector
updated_blood # check that the change has been applied
## [1] 0.5 0.3 0.7 0.3 0.9 0.2

#### A.1.1.1 Exercise 3: We found out that the first patient is 33 years old. How would you change the first element of the vector age to 33 years?

my_data # the first patient's age is currently 30
##     ID Sex Age Blood
## 1 N198   1  30   0.4
## 2 N805   0  60   0.2
## 3 N333   1  26   0.6
## 4 N117   1  75   0.2
## 5 N195   0  19   0.8
## 6 N298   1  60   0.1
my_data[1, "Age"] <- 33 # change it to 33
my_data$Age # check that the change has been applied
## [1] 33 60 26 75 19 60

## A.2 Section 4 Handling data: the Tidyverse

iris

##### A.2.0.0.1 Exercise 1. Select only the columns Sepal.Length and Sepal.Width

select(iris, Sepal.Length, Sepal.Width) # equivalent to iris[, c(1, 2)]

#### A.2.0.1 Exercise 2.
Arrange the data by increasing Sepal.Length

arrange(iris, Sepal.Length)

#### A.2.0.2 Exercise 3. Filter the data to only include Species setosa.

filter(iris, Species == "setosa")

#### A.2.0.3 Exercise 4. Select the columns Petal.Length and Petal.Width, then make (mutate) a new column Petal.Area as Petal.Length multiplied by Petal.Width, then arrange in order of decreasing petal area.

iris_area <- mutate(iris, Petal.Area = Petal.Length * Petal.Width)
arrange(iris_area, desc(Petal.Area))

### 4.2 More dplyr verbs: group_by and summarise

#### A.2.0.4 Exercise 1. group_by species and calculate the mean Petal.Length for each species.

iris_by_species <- group_by(iris, Species)
summarise(iris_by_species, mean.Petal.Length = mean(Petal.Length))

#### A.2.0.5 Exercise 2. group_by species, then standardise the Petal.Length within each species – i.e. subtract the mean and divide by the standard deviation. Hint: your processed dataset should still have 150 rows; you will need to use mutate rather than summarise.

## A.3 Section 5: Getting data in and out of R

Set a working directory by:

1. setwd or
2. Session > Set Working Directory

read.csv("CHD2019.csv")

## A.4 Section 6: Control Structures: loops and conditions

#### A.4.0.1 6.1 if, else and for

Fizz Buzz exercise

The most obvious way of solving FizzBuzz is to loop through a set of integers. In this loop, we use conditional statements to check whether each integer is divisible by 3 and/or 5.
for (i in 1:100){
  if(i %% 15 == 0){
    print("fizz-buzz")
  } else if(i %% 3 == 0){
    print("fizz")
  } else if(i %% 5 == 0){
    print("buzz")
  } else {
    print(i)
  }
}

## A.5 Section 7 Writing your own functions

fizz_buzz <- function(n){
  x <- 1:n
  y <- x
  y[x %% 3 == 0] <- "fizz"
  y[x %% 5 == 0] <- "buzz"
  y[x %% 15 == 0] <- "fizz-buzz"
  y
}

fizz_buzz(100)

## [1] "1" "2" "fizz" "4" "buzz" "fizz"
## [7] "7" "8" "fizz" "buzz" "11" "fizz"
## [13] "13" "14" "fizz-buzz" "16" "17" "fizz"
## [19] "19" "buzz" "fizz" "22" "23" "fizz"
## [25] "buzz" "26" "fizz" "28" "29" "fizz-buzz"
## [31] "31" "32" "fizz" "34" "buzz" "fizz"
## [37] "37" "38" "fizz" "buzz" "41" "fizz"
## [43] "43" "44" "fizz-buzz" "46" "47" "fizz"
## [49] "49" "buzz" "fizz" "52" "53" "fizz"
## [55] "buzz" "56" "fizz" "58" "59" "fizz-buzz"
## [61] "61" "62" "fizz" "64" "buzz" "fizz"
## [67] "67" "68" "fizz" "buzz" "71" "fizz"
## [73] "73" "74" "fizz-buzz" "76" "77" "fizz"
## [79] "79" "buzz" "fizz" "82" "83" "fizz"
## [85] "buzz" "86" "fizz" "88" "89" "fizz-buzz"
## [91] "91" "92" "fizz" "94" "buzz" "fizz"
## [97] "97" "98" "fizz" "buzz"

## A.6 Section 9: Introduction to plotting

swiss

plot(density(swiss$Fertility), type="l")

Exercise: Take a look at all the different values that can be used for type using the help manual

?type

## starting httpd help server ... done

Exercise: Choose another data set and recreate these plots for variables of your choice

Try data() to get a list of built-in data sets and their dependency packages. Then use the code provided in the Intro to R course to recreate the plots with a dataset of your choice. NB: there are several ways that these plots can be recreated. :-)

data()

Exercise: Try and work out how to change the title of the plot.

plot(swiss, main = "Title test")
and the cations in certain holes of the lattice. Point group: m3m (O_h): six 2-fold rotations, four 3-fold rotations, three 4-fold rotations, nine mirror planes, inversion. The lattice spacing of NaCl-structure TiAlN films is known to decrease linearly with increasing Al content. The body-centred cubic (bcc) structure is the most stable form for sodium metal at 298 K (25 °C). Under normal conditions, all of the Group 1 (alkali metal) elements are based upon the bcc structure. Space group: 225 (F m -3 m), Strukturbericht: B1, Pearson symbol: cF8. Lattice parameter: 3.24 for the CsCl form. Stiffness constants, in $10^{11}$ dynes/cm$^2$ at room temperature: c_11: 4.295 (4.499 at 0 K). DOI: 10.1021/ed047p396; Mark D. Baker and A. David Baker. For the CsCl structure, a minimum energy of -1385.213651797 eV was obtained, with a lattice constant of 3.380 Å. In oxides, this is especially the case. 109, 1985, p 345-350. Using this value of the lattice constant, calculate the wavelength of X-rays in second order, if the angle of diffraction is 26°. The basis is two ions, a sodium cation and a chlorine anion. 2) Knowing the molar mass $M = 58.44$ kg/kmol and the mass density $\rho = 2.165\times 10^{3}$ kg/m$^3$ of NaCl, find the crystal lattice constant $a_0$. [5] D D Koelling and B N Harmon 1977 J. Phys. Data from various sources differ slightly, and so does the result. This structure consists essentially of an FCC (CCP) lattice of sulfur atoms (orange) (equivalent to the lattice of chloride ions in NaCl) in which zinc ions (green) occupy half of the tetrahedral sites. Calculate the lattice constant for a unit cell in KCl and determine the type of Bravais lattice according to the calculations in .
structure | E (eV) | lattice parameter a (Å) | cutoff energy E_cut (eV) | number of k-points
--- | --- | --- | --- | ---
CsCl | -1385.319(1) | 3.377(13) | 600 | 17
NaCl | -1384.168(1) | 5.656(55) | 600 | 15

As with any FCC lattice, there are four atoms of sulfur per unit cell, and … Two possible configurations were investigated: the CsCl and NaCl crystal structures. A 3x3x3 lattice of NaCl. The lattice energy is the energy liberated when oppositely charged ions in the gas phase come together to form a solid. 1996, 77, 3865-3868. (Ref. 1, also Other Information) (The Avogadro number $N_A$ is given.) 1) The interaction potential energy between two ions of an ionic crystal can be approximated by the relation: $E_p = A\exp(-r/\rho_0) -\frac{\alpha e^2}{4\pi\epsilon_0}\frac{1}{r}$. 1. Notice that the (111) … Interatomic distance and lattice constant of NaCl. [4] CASTEP GUIDE, BIOVIA, UK, 2014. Lattice Constants of all the elements in the Periodic Table. Their spacing $d$ corresponds to one half the lattice constant. Similarly, in hexagonal crystal structures, the a and b constants are equal, and we only refer to the a and c constants. a) We find the value of the constant $A$ from the condition of minimum interaction potential energy: $-\frac{A}{\rho_0}\exp(-r_0/\rho_0) +\frac{\alpha e^2}{4\pi\epsilon_0}\frac{1}{r_0^2} = 0$, so $A =\frac{\alpha e^2}{4\pi\epsilon_0}\cdot\frac{\rho_0}{r_0^2}\exp(r_0/\rho_0)$.
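The expression for $A$ just derived can be checked numerically with the NaCl parameters quoted later in the text ($\rho_0 = 0.321\times10^{-10}$ m, $\alpha = 1.747$, $r_0 = 2.82\times10^{-10}$ m). This is a sketch of my own, not part of the original solution; it reproduces the per-ion-pair energy of about $-7.9$ eV quoted elsewhere in the text.

```python
import math

# Born-Mayer-type energy per ion pair at equilibrium, using the NaCl
# parameters quoted in the text (rho0, alpha, r0).
e = 1.602177e-19        # elementary charge, C
eps0 = 8.854188e-12     # vacuum permittivity, F/m
k = 1 / (4 * math.pi * eps0)
alpha = 1.747           # Madelung constant for NaCl
rho0 = 0.321e-10        # repulsive-range parameter, m
r0 = 2.82e-10           # equilibrium ion separation, m
N_A = 6.022e26          # ion pairs per kmol

A = k * alpha * e**2 * rho0 / r0**2 * math.exp(r0 / rho0)  # from dE_p/dr = 0 at r0
Ep = A * math.exp(-r0 / rho0) - k * alpha * e**2 / r0      # J per ion pair
E_tot = -N_A * Ep                                          # J per kmol

print(Ep / e)   # about -7.9 eV per ion pair
print(E_tot)    # about 7.6e8 J per kmol, consistent with part b)
```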
But since the minimum for NaCl is about two times wider, this energy error introduces a larger error for its lattice constant of , while for ScAl the error is less than . But the lattice enthalpy of NaCl is defined by the reaction . The cohesive energy of NaCl is -7.90 eV, with a Coulomb contribution of -8.85 eV, a repulsive contribution of 1.02 eV, a van der Waals contribution of -0.125 eV, and a term due to the zero … within a crystal lattice, is often approached via analytic function theory. SCF convergence tolerance was set to 1.0E-6 eV/atom. Journal of Chemical Education 1970 47 (5), 396. Now we use the found values of and k-points to find the ground state energy as a function of unit cell size and look for a minimum, approximating it by a parabola. The ionic polarizability per ion pair (per K+-Cl- pair) is 4.58×10-40 Fm^2, while the electronic polarizability of K+ is 1.264×10-40 Fm^2 and that of Cl- is 3.408×10-40 Fm^2. The value calculated for U depends on the data used. Refer to graph, table and … The lattice enthalpy value from ∆H0 (5) is written with a reversed sign. For both geometries the energy of the minimum of the parabola and the energy given by BFGS agree up to 0.0001 eV. For each structure the lattice constant was found by seeking a minimum of the ground state energy. 5. G. Raunio & S. Rolandson, Lattice dynamics of NaCl, KCl, RbCl & RbF, Phys. Rev. The image below shows the Laue diagram of a NaCl (100) single crystal with a face-centred cubic crystal lattice (fcc). Data show that ScAl prefers the CsCl structure with lattice constant over the NaCl structure.
The closest Na-Na separation is 372 pm, implying a sodium metallic radius of 186 pm. Thus: $\rho =\frac {4m}{a_0^3} =\frac {4M}{N_A a_0^3}$, and from the above one obtains the lattice constant $a_0$: $a_0 = \left (\frac{4M}{\rho N_A} \right )^{1/3} =5.63\times 10^{-10}\,m$. Figure 2 shows that is enough to obtain the energy up to 0.001 eV. The mass of a NaCl molecule is $m =M/N_A$. The resulting error is on the order of 0.01 Å and could probably be improved by including more points in the geometry optimization. 1), the lattice planes run parallel to the surfaces of the crystal’s unit cells in the simplest case. Photoelastic constants: p 11: p 12: p 44: Electronic properties: Band gap: Figure 1. 10 3107, [6] Schuster J.C., and Bauer J., The ternary systems Sc-Al-N and Y-Al-N, J. Less-Common Met., Vol. X-ray unit with built-in goniometer and replaceable Cu lamp. In a cubic crystal with NaCl structure (cf. Due to the symmetry of the NaCl crystal, for this structure an odd number of k-points leads to half as many total points in the full 3D Brillouin zone compared to an even number of k-points in each dimension. III.
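The lattice constant from the density, $a_0 = (4M/\rho N_A)^{1/3}$, is easy to verify numerically. The sketch below (my own check, with $M$ in kg/kmol, $\rho$ in kg/m³ and $N_A$ per kmol so the units cancel) reproduces the quoted $a_0 \approx 5.63\times10^{-10}$ m.

```python
# Lattice constant of NaCl from its density: a0 = (4 M / (rho N_A))**(1/3),
# with 4 NaCl formula units per conventional cubic cell.
M_molar = 58.44      # kg/kmol
rho = 2.165e3        # kg/m^3
N_A = 6.022e26       # formula units per kmol

a0 = (4 * M_molar / (rho * N_A)) ** (1 / 3)
print(a0)            # about 5.64e-10 m
```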
The diagram shows both a unit cell with ion locations indicated (a) and a space-filling model (b) of ionic hard spheres. The crystal lattice parameter is 0.563 nm. In this way we show that the celebrated Madelung constant for the NaCl crystal can be rigorously bounded through symmetry arguments devoid of summations. We also employ an On-the-fly generated (OTFG) ultrasoft pseudopotential to describe the interactions of the ionic core and valence electrons, with a core radius of 2.4 Bohr (1.27 Å) [4]. The lattice sums involved in the definition of Madelung’s constant of an NaCl-type crystal lattice in two or three dimensions are investigated. Fig.

Element or Compound | Name | Crystal Structure | Lattice Constant at 300 K (Å)
--- | --- | --- | ---
C | Carbon (Diamond) | Diamond | 3.56683
Ge | Germanium | Diamond | 5.64613
Si | Silicon | Diamond |

Error estimation for the lattice constant was performed assuming a 0.001 eV error in energy and a parabolic approximation. These stack so: Click on the images below to view the NaCl lattice structure rotating. where $\rho_0$ is the parameter for the repulsive energy, $\alpha$ is the parameter of the electrostatic attraction, $\epsilon_0$ is the vacuum permittivity and $A$ is a constant. These values were determined to give a unit cell size converged up to tolerance. Sodium chloride crystal is made up of sodium and chloride ions. Convergence of energy with respect to the cutoff energy for CsCl (blue) and NaCl (orange) geometries in logarithmic scale. Table 1. in . Lattice constant optimization for CsCl (top) and NaCl (bottom) structures. Longitudinal optic phonon LO (k=0): 212 cm-1 (Hodby), which means $3.99\times 10^{13}$ rad/sec. Compare with the method shown below: NaCl(g) → Na+ (g) + Cl- (g) only.
Convergence of energy with respect to the number of k-points for CsCl (blue) and NaCl (orange) geometries. Figure 1 shows that for both geometries, at 17 k-points, the energy is converged up to 0.001 eV. Since we need to pick some lattice constant, we performed CASTEP geometry optimization using the BFGS hill-climbing algorithm [1] with 15 k-points and . The high value of the last point is explained by the error present in $E_f$. On the other hand, the ionic lattice energy of $$\text{NaCl}$$ can also be measured experimentally by means of a thermodynamic cycle developed by Max Born and Fritz Haber. This is the same in NaCl, where chloride is bigger than sodium. So you can think of the anions as a close-packed lattice (FCC or HCP) in the sense of balls with closest packing. Crystal Lattice Energy and the Madelung Constant. With CASTEP, we use GGA-PBE as the exchange-correlation functional [3]. Figure 5: Energy vs lattice constant for ScAl in the structure of CsCl. So for sodium chloride the lattice energy is 787 kJ mol–1. This is the energy liberated when Na+ and Cl– ions in the gas phase come together to form the lattice of alternating Na+ and Cl– ions in a NaCl crystal. First we investigate convergence for both geometries. 567-570 (2005), [3] Perdew, J. P; Burke, K; Ernzerhof, M. Phys. (The electron charge $e$ and the vacuum permittivity $\epsilon_0$ are known.) The value obtained for its solution is. At a first glance, one can note a substitutional replacement of Ti atoms by Al, which has an atomic radius smaller than Ti. Journal of Chemical Education 1969 46 (9), 592. URL : http://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/content/pdfs/castep.htm. Pseudo atomic calculation is performed for 3s2 3p6 4s2 3d1 orbitals of Sc and 3s2 3p1 orbitals of Al.
Energies are shifted by for the CsCl structure and by for the NaCl structure, which was set to the most accurate ground state energy obtained for each. Find the following: b) The total energy of interaction for the crystal with mass $m=1$ kmol. Next we achieve the same level of convergence with respect to cutoff energy, keeping the number of k-points fixed at 17 for the CsCl and at 15 for the NaCl geometries. Lattices in three dimensions generally have three lattice constants, referred to as a, b, and c. However, in the special case of cubic crystal structures, all of the constants are equal and are referred to as a. DOI: 10.1021/ed046p592; Denis Quane. Results show that the values of the lattice constants and Young’s modulus of B1- and B2-phase NaCl under non-hydrostatic stresses deviate from those under hydrostatic stress. A simple cubic lattice unit cell contains one-eighth of an atom at each of its eight corners, so it … Hint: mass of a rock-salt (NaCl) molecule $= \frac{58.5}{6.02\times 10^{26}}$ kg; volume of the molecule $=$ mass/density $= \frac{58.5}{6.02\times 10^{26}\times 2.16\times 10^{3}}$ m³. The unit cell of NaCl contains 4 molecules. C: Solid State Phys.
It is shown that some of the simplest direct-sum methods converge and some do not. [1] R. Fletcher; A new approach to variable metric algorithms, The Computer Journal, Volume 13, Issue 3, 1 January 1970, Pages 317–322, [2] S. J. Clark, M. D. Segall, C. J. Pickard, P. J. Hasnip, M. J. Probert, K. Refson, M. C. Payne, “First principles methods using CASTEP”, Zeitschrift fuer Kristallographie 220(5-6) pp. The results are shown in the table below. Figure 3. A fast-converging formula for the Madelung constant of NaCl is $$12\,\pi \sum _{m,n\geq 1,\,\mathrm {odd} }\operatorname {sech} ^{2}\left({\frac {\pi }{2}}(m^{2}+n^{2})^{1/2}\right)$$ [7] Grüneisen constant: Ratio e*/e: 0.80. b) In a mass $m=1$ kmol there are $N_A$ molecules (ion pairs), therefore the total interaction energy is simply: $E_{tot} =-N_A\left (A\exp(-r_0/\rho_0) -\frac{\alpha e^2}{4\pi\epsilon_0}\frac{1}{r_0} \right ) =7.608\times 10^8\,J$. Using these, we investigate how the ground state energy converges with the number of k-points at fixed cutoff energy . A group of lattice constants could be referred to as lattice parameters. Appendix B: calculate the number of atoms in the unit cell of KCl. The resulting lattice parameters are for CsCl and for NaCl. The goal of this post is to study the crystal structure of ScAl. B2, 2098 (1970). Transverse optic phonon TO (k=0): 142 or 151 cm-1. horizontal vertical Ground state energy computations were performed using the DFT plane-wave pseudopotential method implemented in CASTEP [2]. Lattice constants. The fundamental mathematical questions of convergence and uniqueness of the sum of these not absolutely convergent series are considered. Wahlström et al. Figure 2. The value of Emin is -1385.213651797 eV. Note: In this diagram, and similar diagrams below, I am not interested in whether the lattice enthalpy is defined as a positive or a negative number - I am just interested in their relative sizes.
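The fast-converging series quoted above is easy to evaluate directly. The sketch below (my own check) sums over odd $m, n$ and recovers the accepted NaCl Madelung constant $\approx 1.747565$ with only a handful of terms.

```python
import math

# Madelung constant of NaCl from the sech^2 series quoted above:
# 12*pi * sum over odd m, n >= 1 of sech(pi/2 * sqrt(m^2 + n^2))^2.
def madelung_nacl(n_max=40):
    total = 0.0
    for m in range(1, n_max, 2):
        for n in range(1, n_max, 2):
            total += 1.0 / math.cosh(0.5 * math.pi * math.hypot(m, n)) ** 2
    return 12.0 * math.pi * total

M_nacl = madelung_nacl()
print(M_nacl)    # about 1.747565
```

The terms decay like $e^{-\pi\sqrt{m^2+n^2}}$, so the truncation at $n_{max}=40$ is far more than enough.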
If the diffraction pattern is rotated by 90° around the direction of the primary beam, it is again brought to coincidence. Potassium chloride (KCl), with lattice parameter equal to 0.629 nm, has the same crystal structure as NaCl. Crystal structure: NaCl. The lattice constants, density, elastic modulus, Young’s modulus, and energy-cell-volume relationship of B1- and B2-phase NaCl were calculated under hydrostatic and non-hydrostatic stresses. c) The distance $r_l$ at which the electrostatic attraction energy is equal to the repulsive energy between the ions. The lattice constant, or lattice parameter, refers to the physical dimension of unit cells in a crystal lattice. Based on the formula from equation (3) in . (The Na+ are blue and the Cl- are red.) Lattice constant: a = 0.564 nm. Click on the unit cell above to view it rotating. Appendix B. The orange dot for both shows the result of CASTEP geometry optimization with the same number of k-points and cutoff energy. c) The condition of equality between the repulsive energy and the attractive (electrostatic) energy is written as: $A\exp(-r/\rho_0) =\frac {\alpha e^2}{4\pi\epsilon_0}\frac{1}{r}$. By taking the logarithm and rearranging, one has $r =\rho_0\ln(r) -\rho_0\ln \frac{\alpha e^2}{4\pi\epsilon_0 A}$. This is a transcendental equation and as such can be solved only using graphical methods.
# Is this proof of Perron's theorem correct, and if so is it original? A few years ago, I came up with this proof of Perron's theorem for a class presentation: http://www.math.cornell.edu/~web6720/Perron-Frobenius_Hannah%20Cairns.pdf I've written an outline of it below so that you don't have to read a link. It's close in spirit to Wielandt's proof using $$\rho := \sup_{\substack{x \ge 0\\|x| = 1}} \min_j {|{\sum_i x_i A_{ij}}| \over x_j},$$ but I think it's simpler. (In particular, you don't have to divide by anything or take the sup min of anything.) The exception is the part where you prove that the spectral radius has only one eigenvector, which is exactly the same as Wielandt's proof. I believe it's correct, but it hasn't passed through any kind of verification process aside from being presented in class. And I've had a couple of people write me about it, and, for God knows what reason, it comes up on Google in the first couple of pages if you search for "Perron-Frobenius." So I'd appreciate it if you would look at this and see if you see anything wrong with it. And if you don't, I'd like to know if it's original, because if so then I get to feel proud of myself. Here is the proof: • Let $A > 0$ be a positive $n \times n$ matrix with eigenvalues $\lambda_1, \ldots, \lambda_n$, counted with multiplicity. Let $\rho = \max |\lambda_i|$ be the spectral radius. We want to prove that $\rho$ is a simple eigenvalue of $A$ with a positive eigenvector, and that every other eigenvalue is strictly smaller in absolute value. • Let $\lambda$ be an eigenvalue with $|\lambda| = \rho$, and let $\psi$ be an eigenvector for $\lambda$. Consider $$\Psi := |\psi| = (|\psi_1|, \ldots, |\psi_n|).$$ Then $A \Psi = A |\psi| \ge |A \psi| = |\lambda \psi| = \rho |\psi| = \rho \Psi$, where "$x \ge y$" means that each coordinate of $x$ is greater than or equal to the corresponding coordinate of $y$. • Suppose $A \Psi \ne \rho \Psi$.
Then by positivity we have $A^2 \Psi > \rho A \Psi$, which means that by continuity there is some $\varepsilon > 0$ with $A^2 \Psi \ge (\rho + \varepsilon) A \Psi$. Therefore \begin{align*}A^{n+1} \Psi &\ge (\rho + \varepsilon) A^n \Psi \\&\cdots\\&\ge (\rho + \varepsilon)^n A \Psi \ge 0\end{align*} and taking norms we get $\Vert A^{n+1} \Psi \Vert_1 \ge (\rho + \varepsilon)^n \Vert A \Psi \Vert_1$, so the operator 1-norm of $A^n$ is at least $(\rho + \varepsilon)^n$, which contradicts Gelfand's formula $\lim \Vert A^n \Vert^{1/n} = \rho$. • Therefore $A \Psi = \rho \Psi$, and $\rho$ is an eigenvalue with positive eigenvector $\rho \Psi = A \Psi > 0$. • Suppose there is an eigenvalue $\lambda$ with $|\lambda| = \rho$. Let $\psi$ be an eigenvector for $\lambda$. We have seen above that $A \Psi = \rho \Psi = |A \psi|$, or $\sum_j A_{ij} |\psi_j| = |\sum_{j} A_{ij} \psi_j|$. Fix an index $i$. Then $A_{ij} > 0$ for each $j$, so $\sum_{j} A_{ij} \psi_j$ is a weighted sum of the $\psi_j$ where all the weights are positive, and its absolute value is the weighted sum of the $|\psi_j|$ with the same weights. Those two things can only be equal if the summands $\psi_j$ all have the same complex argument, so $\psi = e^{i\theta} \psi'$ where $\psi' \ge 0$, and $\lambda \psi' = A \psi' > 0$, so $\lambda > 0$. Therefore $\lambda = \rho$. • Now we know that every eigenvalue with $|\lambda| = \rho$ is $\rho$, and it has one positive eigenvector (and possibly more), but we don't know how many times $\rho$ appears in the list of eigenvalues. That is, we don't know whether it's simple or not. • We can prove that $\rho$ has only one eigenvector by the same argument as in Wielandt's proof. We know $\Psi$ is a positive eigenvector. Suppose that there is another, linearly independent eigenvector $\psi$. We can pick $\psi$ to be real (because $\mathop{\rm Re} \psi$ and $\mathop{\rm Im} \psi$ are eigenvectors or zero and at least one is linearly independent of $\Psi$).
Choose $c$ so that $\Psi + c \psi$ is nonnegative and has at least one zero entry. Then $\rho (\Psi + c \psi) = A(\Psi + c \psi) > 0$ by positivity, but it has a zero entry, which is a contradiction. So there's no other linearly independent eigenvector. • Now that we know there's only one eigenvector, we can prove that $\rho$ is a simple eigenvalue. By the previous reasoning, there is a positive left eigenvector $\Pi$ of $\rho$, so $\Pi A = \rho \Pi$. Then $\Pi > 0$ and $\Psi > 0$, so $\Pi \Psi \ne 0$. Then $\Pi^0 := \{x: \Pi x = 0\}$ is an $(n-1)$-dimensional subspace of $\mathbb R^n$ and $\Psi \notin \Pi^0$, so we can decompose $\mathbb R^n$ into the direct sum $$\mathbb R^n = \mathop{\text{span}}\{\Psi\} \oplus \Pi^0.$$ • Both of these spaces are invariant under $A$, because $A \Psi = \rho \Psi$ and $\Pi A x = \rho \Pi x = 0$ for $x \in \Pi^0$. Let $x_2, \ldots, x_n$ be a basis of $\Pi^0$. Let $$X = \begin{bmatrix}\Psi&x_2&x_3&\cdots&x_n\end{bmatrix}.$$ Then the invariance means that $$X^{-1}AX = \begin{bmatrix}\rho&0\\0&Y\end{bmatrix}$$ where the top right $0$ says $\Pi^0$ is invariant under $A$ and the lower left $0$ says $\mathop{\text{span}}\{\Psi\}$ is invariant under $A$. Here $Y$ is some unknown $(n-1) \times (n-1)$ matrix. • $A$ is similar to the above block matrix, so the eigenvalues of $A$ are $\rho$ followed by the eigenvalues of $Y$. If $\rho$ is not a simple eigenvalue, then it must also be an eigenvalue of $Y$. • Suppose $\rho$ is an eigenvalue of $Y$. Let $\psi'$ be an eigenvector with $Y \psi' = \rho \psi'$. Then $A X {0 \choose \psi'} = \rho X {0 \choose \psi'}$ and $X{0 \choose \psi'}$ is linearly independent of $\Psi = X {1 \choose \mathbf{0}}$. We've already proved that $A$ has only one eigenvector for $\rho$, so that is impossible. Therefore, $\rho$ is not an eigenvalue of $Y$, so $\rho$ is a simple eigenvalue of $A$. That's the last thing we had to prove. • Extending to $A \ge 0$ with $A^n > 0$ works as usual. Thanks!
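The statement is easy to sanity-check numerically. The sketch below (my own illustration, not part of the proof) runs power iteration on a small positive matrix and confirms a simple, dominant, positive eigenvalue with an entrywise positive eigenvector.

```python
# Power iteration on a positive 2x2 matrix: the iterates converge to the
# Perron eigenvector, and the max-norm of A x (with x normalized) converges
# to the Perron root.
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_iteration(A, iters=200):
    x = [1.0] * len(A)
    rho = 1.0
    for _ in range(iters):
        y = mat_vec(A, x)
        rho = max(abs(v) for v in y)   # infinity-norm estimate of the Perron root
        x = [v / rho for v in y]
    return rho, x

A = [[2.0, 1.0],
     [1.0, 3.0]]                # positive matrix; eigenvalues (5 +/- sqrt(5))/2
rho, v = power_iteration(A)
print(rho, v)                   # rho about 3.618, v entrywise positive
```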
• An almost exactly similar proof appears in these 2015 lecture notes: maths.nuigalway.ie/~rquinlan/linearalgebra/…, but this presentation was written in 2014, so I think it predates them? – Hannah Cairns Feb 9 '18 at 17:26
• Very nice argument! A small correction: I think the line after your formula for $X^{-1}AX$ should be the other way round: the top right $0$ says $\Pi^0$ is invariant under $A$ and the lower left $0$ says $\operatorname{span} \{\Psi\}$ is invariant under $A$. – Jochen Glueck Feb 9 '18 at 22:03
• I think you get to feel proud of yourself in any case. You produced this. – Will Jagy Feb 9 '18 at 22:29
• Yes, thank you, I always mix up that sort of thing if I'm not careful. I'll edit it. – Hannah Cairns Feb 9 '18 at 23:44

(1) Correctness: I read all arguments in detail and couldn't find anything wrong with them. Of course, this doesn't mean too much...

(2) Originality: I think in a topic with such an extensive historical record as Perron-Frobenius theory, the question of "originality" or "novelty" of any particular proof is a very delicate one. There is an enormous amount of literature out there which all deals with spectral properties of positive matrices in one way or another. I recommend having a look at MacCluer's survey article [MacCluer: The Many Proofs and Applications of Perron's Theorem, 2000, SIAM Review, Vol. 42, No. 3, pp. 487–498] for an overview of some proofs of Perron's theorem and for references pointing to several further proofs. But even if somebody had an overview of all the relevant literature and could thus decide with sufficiently high probability whether the OP's proof (or a very similar one) is written somewhere in the literature, we would still face the problem that, even if the proof as a whole were new, this would not necessarily mean that any single argument in the proof is new.
In fact, in addition to all the articles and textbooks with proofs of Perron's theorem, there have been extensive (and successful) attempts to generalise Perron-Frobenius theory in various directions (for instance, to matrices which leave invariant a cone in $$\mathbb{R}^n$$, to eventually positive matrices, to Krein-Rutman type theorems on ordered Banach spaces whose cone has non-empty interior, and to Perron-Frobenius theory for positive operators on Banach lattices, to mention just four of them), and many arguments used in those theories are variations of techniques from the classical theory. Hence, it is quite safe to say that, for any argument used in any "new" proof of Perron's theorem, we can find a similar argument somewhere in the related literature. Here are a few examples to back up this claim (in the following, I assume for simplicity that the spectral radius equals $$1$$; this is no loss of generality since we can replace $$A$$ with $$A/\rho$$): • The second bullet point in the question essentially says that, for every eigenvector $$\psi$$ belonging to a unimodular eigenvalue, the modulus $$|\psi|$$ is a super-fixed point of $$A$$. This observation is essential for many arguments in Perron-Frobenius theory on Banach lattices (see for instance [Schaefer: Banach Lattices and Positive Operators, 1974, Springer, Proposition V.4.6]). • The argument in the third bullet point in the question is for instance used in [Karlin: Positive Operators, 1959, Lemma 3 and Theorem 8 on page 921]. In the subsequent corollary, Karlin uses this argument in the same way as the OP to deduce the same result (on infinite-dimensional spaces, though). • The argument in the seventh bullet point in the question (which shows that the spectral radius is a geometrically simple eigenvalue and which is attributed to Wielandt by the OP) can for instance be found in [Karlin, op. cit., Theorem 9 on p. 
922], where the argument is in turn attributed to Krein and Rutman (but I don't know who was earlier). • The OP's subsequent argument (which proves algebraic simplicity) actually shows the following general spectral theoretic observation (which is independent of any positivity assumptions): If $$\lambda$$ is a geometrically simple eigenvalue of a matrix $$A$$ and if there exists an eigenvector $$\Psi$$ and a dual eigenvector $$\Pi$$ such that $$\Pi\Psi \not= 0$$, then $$\lambda$$ is also algebraically simple (positivity of $$A$$ is only used to show the existence of such $$\Psi$$ and $$\Pi$$ and to deduce the geometric simplicity). [Actually, I wasn't aware of this spectral theoretic fact, and the question brought it to my attention - so let me express my gratitude to the OP for that.] I do not know any place in the literature where this can be found, but it seems very likely that it is known. Maybe somebody else can help out with a reference here? Of course, one could argue that most published proofs (even of new results) are just a recombination of known arguments from various branches of mathematics, but here we have the very special situation that the known arguments which are combined all stem from essentially one field, namely the spectral theory of positive matrices and its generalisations - and that they were all used in the literature to prove results which are very closely related to the already known theorem under consideration. Thus, I would argue that one should rather not consider the OP's proof to be really "novel", even if it might not be written down explicitly in the literature. Those things said, I feel obliged to add the following three points: • It is certainly rewarding, though, to seek out versions and variations of proofs of Perron's theorem. 
A proof which efficiently combines a few elegant arguments - as the OP's proof definitely does - can be very helpful in teaching (and after all, that's where the proof under discussion comes from, if I understood the OP correctly).
• I personally find the OP's version of the proof quite appealing. It's very clear and easy to follow.
• Concerning your motivation to "feel proud of yourself": well, you certainly should feel proud of yourself - you found that proof, and a good one at that.
• Thank you very much for your careful reply, and for the interesting references! – Hannah Cairns Feb 10 '18 at 2:02
• This answer is superlative in at least two ways: in its helpful and encyclopedic runthrough of the literature, and in its thoughtful and careful analysis of the different aspects of originality. – Tom Church Feb 10 '18 at 16:49

As someone who lives and dies by the Perron-Frobenius theorem (PFT) and who is familiar with a good deal of the literature on nonnegative matrices (and generalizations of nonnegative matrices), I can say that your proof is, to the best of my knowledge, original and correct. IMHO, the demonstration is worthwhile because it is accessible to advanced undergraduate students who have taken a second course in linear algebra, and because positivity is instrumental in the demonstration (contrast with the proof of the PFT via the Brouwer fixed-point theorem, which is quite elegant, but many students at the undergraduate level will not have been exposed to it). Even though, as Jochen thoroughly notes in his response, your proof (unintentionally) draws on ideas from other proofs, your demonstration pulls these ideas together in a very accessible way, which is the novelty.
Given the above, I think you should consider submitting a carefully revised version of what you posted to The American Mathematical Monthly or The College Mathematics Journal as a note — if it is not successful at those journals, then you can certainly try some research journals (e.g., Positivity or the linear algebra journals ELA, LAA, and LaMA).
# Complex Numbers

Express (30 + 19i)/(2 + 5i) in the form a + bi, where a and b are real numbers. Jan 31, 2022

$$\frac{30 + 19i}{2+5i} = \frac{30 + 19i}{2+5i} \cdot \frac{2-5i}{2-5i} = \frac{60+38i-150i+95}{4+25} = \boxed{\frac{155}{29} - \frac{112i}{29}}$$
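The rationalisation above is easy to slip on by a sign, so here is a one-line check using Python's built-in complex arithmetic:

```python
# (30 + 19i)/(2 + 5i): multiply top and bottom by the conjugate 2 - 5i.
z = (30 + 19j) / (2 + 5j)
expected = complex(155 / 29, -112 / 29)   # 155/29 - (112/29)i
error = abs(z - expected)
```

Note that the +95 in the numerator comes from -95i² = +95.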
# Using underscore for subscript in text mode using underscore only [duplicate]

I want to use subscripts without actually being in math mode. Is there any method to do that?

## marked as duplicate by Zarko, Maarten Dhondt, Torbjørn T., cfr, user31729 Sep 6 '16 at 18:34

• Welcome to TeX.SX! A mathematical constant should be typed in as math. – egreg Sep 5 '16 at 9:08

The solution `N\textsubscript{A}` would produce the desired result. However, you should note that text mode uses upright letters while math mode writes letters in italics. I furthermore agree with @egreg that a mathematical constant should be typed in as math.

Code: `N\textsubscript{A}` and `\$N_A\$`

Maybe helpful for you would be setting a new command, to avoid having to repeatedly enter math mode. Then you simply have to type `\avogadro{} %(the brackets are here to force a spacing after the constant)` at any point to get the desired outcome. Most front-end programs (like Texmaker) have auto-completion, which comes in handy here.

• My original question was to avoid repeatedly typing the same thing over and over again, the dollar signs in this case. I want a method where I only use the underscore (or another symbol) to automatically type a subscript. For example: N_A – AbdelbakiZ Sep 5 '16 at 14:45
• @AbdelbakiZ If I understood right, you want to redefine the _ (underscore) symbol. I don't know how to do that unfortunately (and I'm not sure whether it is possible), but I hope my answer was of some help anyway. Maybe somebody else can help :) – magicdivadd Sep 6 '16 at 7:48
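To make the answer concrete, here is one possible preamble definition (a sketch: the macro name `\avogadro` is only suggested in the answer above and its definition was not shown; `\ensuremath` is used so the command works in both text and math mode):

```latex
% Preamble: a convenience macro for the Avogadro constant.
\newcommand{\avogadro}{\ensuremath{N_{\mathrm{A}}}}

% Body: no dollar signs needed at the call site.
The Avogadro constant \avogadro{} is approximately
$6.022\times10^{23}\,\mathrm{mol}^{-1}$.
```

The `\ensuremath` wrapper is the standard way to make a symbol macro usable in either mode without double dollar-sign errors.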
# Six students

Two pupils painted the class in four hours. How long will it take six pupils?

Correct result: t =  1.333 h

#### Solution:

$t=\dfrac{ 2 \cdot \ 4 }{ 6 }=\dfrac{ 4 }{ 3 }=1.333 \ \text{h}$

## Next similar math problems:

• Hectoliters of water The pool holds a total of 126 hectoliters of water. The first pump draws 2.1 liters of water per second. A second pump pumps 3.5 liters of water per second. How long will it take both pumps, working at the same time, to drain four-fifths of the water?
• Pumps After the floods, four equally powerful pumps drained the water from the flooded cellar in 6 hours. How many hours would it take to drain it with three equally powerful pumps?
• Worker's performance 15 workers paint a 180 m fence in 3 days. In how many days will 9 workers paint a 360 m fence? We assume that each worker has the same, constant and unchangeable performance.
• Masons One mason casts 30.8 square meters in 8 hours. How long will 4 masons take to cast 178 square meters?
• Novak Novak needed to dig three identical pits in the garden. The father dug the first pit alone in 15 hours. His son helped him with the second, and together they dug it in six hours. The son dug the third pit by himself. How long did it take him?
• Blueberries 5 children collect 4 liters of blueberries in 1.5 hours. a) How many minutes will 3 children need to collect 2 liters of blueberries? b) How many liters of blueberries will 8 children collect in 3 hours?
• Assembly parts Nine machines produce 1,800 parts on nine machines. 
How many hours will it take to produce 2,100 parts on seven such machines?
• Seven Seven dwarfs will cut 420 stumps in 15 hours. After five hours, two of the dwarfs disappear discreetly. How many hours will the remaining dwarfs need to complete the task?
• Temporary workers Three temporary workers work in the warehouse and unload the goods in 9 hours. In what time would five temporary workers unload the same goods?
• Seven workers Seven workers clear the glade in 22 hours. How many workers would be needed to do it in 8 hours?
• A lot of hay Martin's grandfather weighed a lot of hay and calculated that it would last 15 rabbits 100 days. How many days will this lot be enough for 25 rabbits?
• Six years In six years Jan will be twice as old as he was six years ago. How old is he?
• Shoemaker Both the shoemaker and his apprentice repaired shoes. The apprentice worked for 6 days and repaired 10 pairs of shoes every day. The shoemaker did the same job in 4 days. How many pairs of shoes did the shoemaker repair per day?
• Cows 4 cows consumed 16 bags of hay in 5 days. How many bags of hay are needed for 5 cows for seven days?
• Three workshops One workshop can complete the task in 48 days, the second in 30 days and the third in 20 days. In how many days would the task be completed if all the workshops worked together?
• Simply equation Solve this equation for x: ?
• Reciprocal It is true (prove it) that if a > b > 0: ?
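Most of the problems listed above are single direct or inverse proportions like the worked example; as a sketch, the inverse rule t₂ = n₁·t₁/n₂ for a crew of equal, constant-rate workers:

```python
def time_for_crew(n_known, t_known, n_new):
    """Inverse proportion: the total work n_known * t_known is fixed,
    so a crew of n_new equal workers needs (n_known * t_known) / n_new."""
    return n_known * t_known / n_new

t_six_pupils = time_for_crew(2, 4, 6)   # two pupils took 4 h; six pupils?
```

This reproduces the solution above: 2 · 4 / 6 = 4/3 ≈ 1.333 h.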
# 3.4.1. Examples of Sufficiencies of the Uniform Law

The ULLN under the finite bracketing entropy condition (lemma 3.1) and the vanishing random entropy condition (theorem 3.7) paves the way to the uniform law for frequently used function classes. Here, three examples from lemma 3.1 will be presented. An example from theorem 3.7 will be covered in the next article.

## 1. Class of monotone functions

Given a fixed function $F \ge 0,$ define the class of interest. $\tilde{\mathcal G} := \{\tilde g:\mathbb R\to\mathbb R \text{ is increasing},~ \|\tilde g\|_\infty \le 1\} \\ \mathcal G := \{\tilde g F:~ \tilde g \in \tilde{\mathcal G}\}$ Then the following hold.

(1) For all $1\le p<\infty,$ there exists a constant $A>0$ such that $$H_{B,p,Q}(\delta,\mathcal G) \le A\frac{\|F\|_{p,Q}}\delta,~ \forall \delta>0,~ \forall \text{ probability measure }Q$$

(2) If $F \in L^1(P),$ then $\mathcal G$ satisfies the ULLN.

(1) Let $[\tilde \ell, \tilde u]$ be a bracket for $\tilde{\mathcal G}.$ Then clearly $[\tilde \ell F, \tilde u F]$ is a bracket for $\mathcal G.$ \begin{aligned} &\int \left| \tilde u F - \tilde\ell F \right|^p dQ \\ &= \int \left| \tilde u - \tilde\ell \right|^p |F|^p dQ \\ &= \int \left| \tilde u - \tilde\ell \right|^p \frac{|F|^p}{\|F\|_{p,Q}^p} dQ \cdot \|F\|_{p,Q}^p \\ &= \int \left| \tilde u - \tilde\ell \right|^p d\tilde Q \cdot \|F\|_{p,Q}^p \end{aligned} where $\tilde Q$ is the reweighted probability measure $$d\tilde Q := \frac{|F|^p}{\|F\|_{p,Q}^p} dQ.$$ From this formulation, we get $$\|\tilde\ell F - \tilde u F\|_{p,Q} = \|\tilde\ell - \tilde u\|_{p,\tilde Q}\cdot \|F\|_{p,Q},$$ so a $\delta/\|F\|_{p,Q}$-bracketing set for $\tilde{\mathcal G}$ in $L^p(\tilde Q)$ yields a $\delta$-bracketing set for $\mathcal G$ in $L^p(Q).$ Hence, \begin{aligned} &H_{B,p,Q}(\delta,\mathcal G) \\ &\le H_{B,p,\tilde Q}(\delta/\|F\|_{p,Q}, \tilde{\mathcal G}) \\ &\le A\frac{\|F\|_{p,Q}}{\delta}. \end{aligned} The last inequality is from the first result of the entropy inequalities.

(2) The result (1) implies that the bracketing entropy is finite. Apply lemma 3.1.

## 2. 
The Sobolev-Hilbert class

For a fixed $m\in\mathbb N$ and $R>0,$ define the Sobolev-Hilbert class of order $m$ $\mathcal G := \left\{ g:[0,1]\to\mathbb R,~ \int\left( g^{(m)}(x) \right)^2dx\le 1,~ \|g\|_{2,Q} \le R \right\}.$ Let $\Sigma_Q := \int \Psi\Psi^\intercal dQ, \\ \Psi=(\psi_1,\cdots,\psi_m)^\intercal, \\ \psi_k(x) = x^{k-1},~ k=1,\cdots,m.$ If $\Sigma_Q$ is invertible, then there exists a constant $A$ such that $$H_\infty(\delta,\mathcal G) \le A \frac{1}{\delta^{1/m}}$$ so that the ULLN holds for $\mathcal G.$ It suffices to check the condition of theorem 2.4. The proof uses Taylor's theorem together with an eigenvalue upper bound, so I will not cover it.

## 3. Class of functions parametrized by $\theta$

Consider a parameter space $\Theta$ which is a compact metric space. Let $\mathcal G := \{g_\theta:~ \theta\in\Theta\}$ where the map $\theta \mapsto g_\theta(x)$ is continuous for $P$-almost all $x$'s. van de Geer (2000) mentions that the ULLN for this class is "more or less classical". Suppose the envelope condition $G := \sup_{\theta\in\Theta}|g_\theta|\in L^1(P)$ holds. Then $$H_{B,1,P}(\delta,\mathcal G) < \infty,~ \forall \delta>0$$ so that the ULLN holds for $\mathcal G.$ Let $w$ be the modulus of continuity of $\mathcal G.$ That is, $$w(\theta,r)(x) := \sup_{\theta' \in B(\theta,r)} |g_\theta(x) - g_{\theta'}(x)|$$ so that $$\{g_{\theta'}:~ \theta'\in B(\theta,r)\} \subset [g_\theta-w(\theta,r),~ g_\theta+w(\theta,r)].$$ By continuity of $\theta\mapsto g_\theta,$ $w(\theta,r) \to 0$ as $r\to0$ for $P$-almost all $x$'s. 
In addition, \begin{aligned} |w(\theta,r)(x)| &\le \sup_{\theta'\in B(\theta,r)} \left( |g_\theta(x)| + |g_{\theta'}(x)| \right) \\ &\le 2G(x) \in L^1(P) \end{aligned} thus by the dominated convergence theorem, $$\int w(\theta,r)dP \to 0 \text{ as } r \to 0.$$ Now, given $\delta>0,$ for every $\theta\in\Theta$ there exists $r_\theta$ such that $$\int w(\theta, r_\theta)dP < \delta.$$ Hence $\{B(\theta,r_\theta):~ \theta\in\Theta\}$ is an open cover of $\Theta.$ Since $\Theta$ is compact, there exists a finite subcover $\{B(\theta_i, r_{\theta_i})\}_{i=1}^N$ and $$\left\{ [g_{\theta_i}-w(\theta_i, r_{\theta_i}),~ g_{\theta_i}+w(\theta_i, r_{\theta_i})] \right\}_{i=1}^N$$ becomes a $2\delta$-bracketing set of $\mathcal G,$ since $$\int \left( g_{\theta_i}+w(\theta_i, r_{\theta_i}) \right) - \left( g_{\theta_i}-w(\theta_i, r_{\theta_i}) \right) dP \\ =2\int w(\theta_i, r_{\theta_i}) dP \le 2\delta.$$ The proposed result directly follows.

References

• van de Geer. 2000. Empirical Processes in M-estimation. Cambridge University Press.
• Theory of Statistics II (Fall, 2020) @ Seoul National University, Republic of Korea (instructor: Prof. Jaeyong Lee).
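As a quick numerical illustration of example 3 (my addition, not part of the notes): take the assumed toy class g_θ(x) = sin(θx) with compact Θ = [0, 1] and P = Uniform[0, 1], for which the exact mean is E sin(θX) = (1 − cos θ)/θ. The supremum of the empirical deviation over a grid of θ shrinks as the sample grows, as the ULLN predicts.

```python
import math
import random

random.seed(0)

def sup_deviation(n, thetas):
    """Sup over the theta grid of |empirical mean - exact mean| of
    sin(theta * X), with X ~ Uniform[0, 1]."""
    xs = [random.random() for _ in range(n)]
    worst = 0.0
    for th in thetas:
        emp = sum(math.sin(th * x) for x in xs) / n
        exact = (1 - math.cos(th)) / th if th else 0.0
        worst = max(worst, abs(emp - exact))
    return worst

grid = [i / 20 for i in range(21)]     # grid over Theta = [0, 1]
dev_small = sup_deviation(100, grid)
dev_large = sup_deviation(20_000, grid)
```

A grid stands in for the full supremum here; the continuity-plus-compactness argument above is exactly what licenses that approximation.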
# Analysis Using Moment Distribution

### Introduction

With the continuous advancement of computer-based analysis within the field of structural engineering, the tools used for analysis by hand are becoming less relevant. Many will, however, argue that these tools are more important today if we are to fully appreciate the outputs of computer programs. Hence, this post describes one of the most powerful and fastest methods of analysis available: Moment Distribution.

Moment distribution is a very fast method of analyzing statically indeterminate structures elastically. It is based on the relative stiffness of the members making up a structure. An out-of-balance moment is determined and distributed according to the relative stiffnesses until all joints within the structure are balanced.

### Analysis Principles

The principle of moment distribution is based on creating fixed end moments at joints in a structure and then releasing them sequentially in order to derive the bending moments within it. This is done via an iterative process that relies on achieving equilibrium as the joints in the structure are released.

#### Terminologies of Moment Distribution

##### Fixed End Moment (FEM)

Fixed end moments are the moments produced at member ends by assuming them to be fully fixed. Table 1 gives the fixed end moment for some common loading conditions within structures.

##### Stiffness

This is the ratio of the flexural rigidity of a member to its length, defined as (EI/L). The fixed end moment is distributed in proportion to the relative stiffnesses of the members at the joint.

##### Distribution Factors

The distribution factor can be defined as the proportion of the unbalanced moment carried by each member connected to a joint. It is a ratio of relative stiffness across a joint. 
In mathematical terms, the distribution factor of a member 'k' framed at a joint 'u' is defined as

{ D }_{ uk }=\frac { \frac { { E }_{ k }{ I }_{ k } }{ { L }_{ k } } }{ \sum _{ i }^{ n }{ \frac { { E }_{ i }{ I }_{ i } }{ { L }_{ i } } } }

##### Carry-over Factors

When a joint is released, a balancing moment develops to counterbalance the unbalanced moment. This balancing moment is distributed to the members at the joint, and a share of it is then carried over to each member's other end. The ratio of the moment carried over to the far end to the balancing moment at the near end is the carry-over factor.

• For Continuous Support C.O = 1/2
• Hinged End C.O = 1.0
• Fixed End C.O = 0

##### Sign Conventions

Counter-clockwise moments are taken as negative while clockwise moments are positive. Figure 1 further describes the sign convention. The sign convention on the left-hand side is used in drawing bending moment diagrams, while that on the right-hand side is used for determining fixed end moments.

##### Support Settlement

There are instances when a support of a structure is subjected to vertical movement, giving rise to additional moments within the structure. This condition can be modelled in moment distribution by defining the vertical deflection and calculating the corresponding fixed end moment to that effect. Table 2 shows two typical instances where the support has dropped a distance 'Δ' and the resulting fixed end bending moments.

#### Procedure for Analysis

1. Restrain all displacements and calculate the relative stiffness of the elements making up the structure.
2. Calculate the distribution factors at each joint and determine the carry-over factors.
3. Evaluate the fixed end moments using Tables 1 & 2 for the appropriate loading and support condition.
4. Carry out successive distribution and carry-over until the joints become balanced.
5. Determine the bending moment and shear force at critical sections and draw the diagrams. 
#### Worked Example

Figure 2 shows a multi-span beam having constant flexural rigidity EI at all sections. Determine the bending moments and shears in the structure using the moment distribution method, assuming support B settles by 12mm downwards. E = 210×10³ N/mm² and I = 9.0×10⁸ mm⁴.

The first thing for us to do is to calculate the relative stiffness of each segment of the beam, and from the figure, we can see that the far ends are fixed.

##### Relative Stiffness

{ k }_{ AB }={ k }_{ BC }=\frac { 4EI }{ l } =\frac { 4EI }{ 12 } =0.33EI\quad ;{ k }_{ CD }=\frac { 4EI }{ l } =\frac { 4EI }{ 8 } =0.5EI

##### Distribution Factors

The distribution factors at supports A and D are taken as 0 for fixed ends, because the supports are assumed to be infinitely rigid compared to the members.

{ D }_{ AB }={ D }_{ DC }=0

{ D }_{ BA }={ D }_{ BC }=\frac { { k }_{ AB } }{ { k }_{ AB }+{ k }_{ BC } } =\frac { 0.33EI }{ 0.33EI+0.33EI } =0.5

{ D }_{ CB }=\frac { { k }_{ BC } }{ { k }_{ BC }+{ k }_{ CD } } =\frac { 0.33EI }{ 0.33EI+0.5EI } =0.398

{ D }_{ CD }=\frac { { k }_{ CD } }{ { k }_{ BC }+{ k }_{ CD } } =\frac { 0.5EI }{ 0.33EI+0.5EI } =0.602

##### FEM

We use Table 1 to determine the fixed end moments for each load condition.

N.B. The fixed end moment due to the sinking of support B exerts a counterclockwise moment on member AB and a clockwise moment on member BC; therefore, its effect is negative in AB and positive in BC. 
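The distribution factors above are just normalised stiffnesses; a short sketch reproducing the example's numbers (kept as exact fractions, so D_CB comes out as 0.4 rather than the rounded 0.398):

```python
# Relative stiffness k = 4EI/L; EI is constant, so only 4/L matters.
k_ab = 4 / 12    # span AB, L = 12 m
k_bc = 4 / 12    # span BC, L = 12 m
k_cd = 4 / 8     # span CD, L = 8 m

def distribution_factors(*ks):
    """Each member at a joint takes its stiffness over the joint total."""
    total = sum(ks)
    return [k / total for k in ks]

d_ba, d_bc = distribution_factors(k_ab, k_bc)   # joint B
d_cb, d_cd = distribution_factors(k_bc, k_cd)   # joint C
```

Carrying 0.33EI instead of 4EI/12 is what produces the 0.398/0.602 split in the hand calculation; either way the factors at a joint sum to 1.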
{ M' }_{ AB }=-\frac { 6EI\triangle }{ { l }^{ 2 } } =-\frac { 6\times 210\times { 10 }^{ 3 }\times 9.0\times { 10 }^{ 8 }\times 12 }{ { 12000 }^{ 2 } } \times { 10 }^{ -6 }=-94.5kN.m

{ M' }_{ BA }=-\frac { 6EI\triangle }{ { l }^{ 2 } } =-\frac { 6\times 210\times { 10 }^{ 3 }\times 9.0\times { 10 }^{ 8 }\times 12 }{ { 12000 }^{ 2 } } \times { 10 }^{ -6 }=-94.5kN.m

{ M' }_{ BC }=-\frac { { wl }^{ 2 } }{ 12 } +\frac { 6EI\triangle }{ { l }^{ 2 } } =-\frac { 20\times { 12 }^{ 2 } }{ 12 } +\frac { 6\times 210\times { 10 }^{ 3 }\times 9.0\times { 10 }^{ 8 }\times 12 }{ { 12000 }^{ 2 } } \times { 10 }^{ -6 }=-145.5kN.m

{ M' }_{ CB }=\frac { { wl }^{ 2 } }{ 12 } +\frac { 6EI\triangle }{ { l }^{ 2 } } =\frac { 20\times { 12 }^{ 2 } }{ 12 } +\frac { 6\times 210\times { 10 }^{ 3 }\times 9.0\times { 10 }^{ 8 }\times 12 }{ { 12000 }^{ 2 } } \times { 10 }^{ -6 }=334.5kN.m

{ M' }_{ CD }=-{ M }'_{ DC }=-\frac { Pl }{ 8 } =\frac { -250\times { 8 } }{ 8 } =-250kN.m

##### Distribution

Table 3 presents the distribution table for the beam. The process here is very simple. First, we determine the out-of-balance moment at a joint and distribute the negative of this moment according to the distribution factors. Secondly, we carry over half of each distributed moment to the other end of the continuous member. This process is then repeated until the remaining distributed moments are less than 1% of the initially distributed moments.

For example, joint B is initially out of balance by (-94.5-145.5) = -240kN.m, so in order to balance the joint, +240kN.m is distributed according to the distribution factors of the members connected to the joint: for member BA, 0.5×240 = 120kN.m, and similarly for member BC and so forth. Half of each distributed moment is carried over to the other end, which is 120/2 = 60 in the case of AB.

N.B. The moments at the fixed ends were not carried over, because the carry-over factor for fixed ends is zero. 
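The fixed-end moment values above are plug-in formulas; this sketch recomputes them in consistent units (N and mm for the settlement term, kN and m for the loads, with the settlement result converted from N.mm to kN.m):

```python
E = 210e3          # N/mm^2
I = 9.0e8          # mm^4
delta = 12.0       # mm, settlement of support B
L_ab_mm = 12_000.0 # mm

# Settlement fixed-end moment 6*E*I*delta/L^2; N.mm -> kN.m via 1e-6.
m_settle = 6 * E * I * delta / L_ab_mm**2 * 1e-6

w, L_bc = 20.0, 12.0    # kN/m UDL on BC, span in m
P, L_cd = 250.0, 8.0    # kN point load at midspan of CD, span in m

m_bc = -w * L_bc**2 / 12 + m_settle   # -240 + 94.5
m_cb = +w * L_bc**2 / 12 + m_settle   # +240 + 94.5
m_cd = -P * L_cd / 8                  # -Pl/8
```

Keeping the unit conversion in one place (the single `1e-6` factor) is the easiest way to avoid the classic mm/m slip in settlement terms.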
[Table 3: moment distribution table]

##### Bending Moment & Shear Force Diagrams

In order to draw the bending moment and shear force diagrams, we must determine the values of the moment within the spans and the shear forces at the supports. This is done by analyzing each segment of the beam discretely, using simple rules of statics with reference to figure 3.

##### Shears at Supports

{ V }_{ AB }=-{ V }_{ BA }=\left[ \frac { { M }_{ BA }-{ M }_{ AB } }{ L } \right] =\left[ \frac { 40.6-27.09 }{ 12 } \right] =1.125kN

{ V }_{ BC }=\left[ \frac { { M }_{ BC }-{ M }_{ CB } }{ l } \right] +\frac { w{ l }^{ 2 } }{ 2l }

{ V }_{ BC }=\left[ \frac { 40.6-341.45 }{ 12 } \right] +\frac { 20\times { 12 }^{ 2 } }{ 2\times 12 } =\quad 94.9kN

{ V }_{ CB }=wl-{ V }_{ BC }=(20\times 12)-94.9=145.1kN

{ V }_{ CD }=\left[ \frac { { M }_{ CD }-{ M }_{ DC } }{ l } \right] +\frac { Pl }{ 2l }

{ V }_{ CD }=\left[ \frac { 341.45-204.37 }{ 8 } \right] +\frac { 250\times 8 }{ 2\times 8 } =142.14kN

{ V }_{ DC }=P-{ V }_{ CD }=250-142.14=107.86kN

##### Bending Moments at Spans

The bending moment at span AB = 0 kN.m

Bending moment at span BC:

{ M }_{ max }={ V }_{ BC }x-\frac { w{ x }^{ 2 } }{ 2 } -{ M }_{ BC }

{ M }_{ max }\ occurs\ when\ \frac { dM }{ dx } =0

V_{ BC }-wx=0

x=\frac { { V }_{ BC } }{ w } =\frac { 94.9 }{ 20 } = 4.75m

{ M }_{ max }=(94.9\times 4.75)-\frac { 20\times 4.75^{ 2 } }{ 2 } -40.6=184.6kN.m

Bending moment at span CD:

{ M }_{ max }={ V }_{ CD }x-{ M }_{ CD }

x=4m

{ M }_{ max }=(142.14\times 4)-341.45=227.11kN.m
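The shears and span moments follow from statics on each segment; a sketch using the end moments quoted above (the source values are rounded, so results agree to within about 0.1 kN and 0.1 kN.m):

```python
# End moments from the distribution table (kN.m), as quoted in the text.
m_ab, m_ba = 27.09, 40.6
m_bc, m_cb = 40.6, 341.45
m_cd, m_dc = 341.45, 204.37

w, l_bc = 20.0, 12.0    # UDL on BC (kN/m) and span (m)
P, l_cd = 250.0, 8.0    # midspan point load on CD (kN) and span (m)

v_bc = (m_bc - m_cb) / l_bc + w * l_bc / 2    # support shear at B on BC
v_cb = w * l_bc - v_bc                        # support shear at C on BC
v_cd = (m_cd - m_dc) / l_cd + P / 2           # support shear at C on CD

x_zero = v_bc / w                             # shear-zero point in BC
m_max_bc = v_bc * x_zero - w * x_zero**2 / 2 - m_bc
m_max_cd = v_cd * (l_cd / 2) - m_cd           # under the midspan load
```

The maximum sagging moment in BC occurs where the shear crosses zero, which is exactly the dM/dx = 0 condition used in the hand calculation.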
Measurement of charged-particle distributions sensitive to the underlying event in $$\sqrt{s}$$ = 13 TeV proton-proton collisions with the ATLAS detector at the LHC

Please always quote using this URN: urn:nbn:de:bvb:20-opus-173361

• We present charged-particle distributions sensitive to the underlying event, measured by the ATLAS detector in proton-proton collisions at a centre-of-mass energy of 13 TeV, in low-luminosity Large Hadron Collider fills corresponding to an integrated luminosity of 1.6 nb$$^{−1}$$. The distributions were constructed using charged particles with absolute pseudorapidity less than 2.5 and with transverse momentum greater than 500 MeV, in events with at least one such charged particle with transverse momentum above 1 GeV. These distributions characterise the angular distribution of energy and particle flows with respect to the charged particle with highest transverse momentum, as a function of both that momentum and of charged-particle multiplicity. The results have been corrected for detector effects and are compared to the predictions of various Monte Carlo event generators, experimentally establishing the level of underlying-event activity at LHC Run 2 energies and providing inputs for the development of event generator modelling. The current models in use for UE modelling typically describe this data to 5% accuracy, compared with data uncertainties of less than 1%.
# Geometry

Author Message

Senior Manager Joined: 22 Feb 2004 Posts: 347

### Show Tags

02 Oct 2004, 23:06

This topic is locked. If you want to discuss this question please re-post it in the respective forum.

Attachments PS.doc [32.5 KiB]

Director Joined: 20 Jul 2004 Posts: 592

### Show Tags

02 Oct 2004, 23:50

E. 4 ft 9 inches. The rectangle has been divided into four equal parallelograms -- 2 of them inverted. Draw a perpendicular line to the 120-inch line from the end of the slant line at C. This line makes a 90-45-45 triangle at C. Since one side opposite to 45 degrees is 6 inches, the other side will also be 6 inches. AB + BC = 120 inches AB + (AB - 6) = 120 AB = 57 inches = 4ft 9inches. (let me know, if this is not clear; I will come up with a diagram.)

Director Joined: 16 Jun 2004 Posts: 891

### Show Tags

02 Oct 2004, 23:59

4ft 9". Same approach.

Manager Joined: 26 Sep 2004 Posts: 137

### Show Tags

03 Oct 2004, 00:03

4 feet 9 inches. Divide the rectangle into two and consider the symmetry. Sorry, can't attach a figure as I am on Linux, but the explanation given above is a good enough one. _________________ Franky http://franky4gmat.blogspot.com

Senior Manager Joined: 22 Feb 2004 Posts: 347

### Show Tags

03 Oct 2004, 00:06

AB + BC = 120 — how? How is BC = AB - 6? Also, shouldn't we draw the perpendicular from the end of the slant line which is starting from A.. 
Intern Joined: 24 Sep 2003 Posts: 38 Location: India

### Show Tags

03 Oct 2004, 00:10

C. 5 ft 3 in. First draw a perpendicular line on AB and on BC. Suppose the perpendicular line intersects AB at point D and BC at point E. As all the parts are identical, DB = BC. Now BC = (240-12)/4 = 57 inches. Now AB = AD + DB = 6 + 57 = 63 inches = 5 ft 3 inches. _________________ Vipin Gupta

Manager Joined: 26 Sep 2004 Posts: 137

### Show Tags

03 Oct 2004, 00:18

Yep, one of the silly mistakes... BC is 57 inches but AB is 63 inches. Remember the length of the rectangle is 20 feet, which implies 240 inches. Half of that is 120. Consider half the rectangle: BC + the small segment obtained by dropping a perpendicular from the point of intersection of the slant line in this half with the top line, down to the bottom line + the rest of the right side line = 120 inches. Since x = 45, this middle segment is 6 inches. By symmetry the first and last segments are equal, thus BC = 57 inches. From C to the right-side bottom vertex of the rectangle the distance is 63 inches... This is the same as asked for, by symmetry again. I know it's not so clear, but that's all I could help with... without a figure. _________________ Franky http://franky4gmat.blogspot.com

Manager Joined: 26 Sep 2004 Posts: 137

### Show Tags

03 Oct 2004, 00:36

OK, tried something... not elegant at all though. AE = 20 feet = 240 inches BE = 120 inches BC + CD + DE = 120 CD = 6, as x = 45 degrees and the two legs of this right-angled triangle must be the same. Look at the figure and see the symmetry: BC = DE. Hence BC + DE = 120 - 6 = 114, so BC = DE = 57 inches. Now AB corresponds to CE by symmetry; the mistake we committed in haste was to correspond AB with BC. CE = DE + CD = 63 inches = 5 feet 3 inches. 
Attachments ps.jpg [ 1.78 KiB | Viewed 1619 times ] _________________ Franky http://franky4gmat.blogspot.com

Director Joined: 20 Jul 2004 Posts: 592

### Show Tags

03 Oct 2004, 02:29

hardworker_indian wrote: E. 4 ft 9 inches The rectangle has been divided into four equal parallelograms -- 2 of them inverted. Draw a perpendicular line to the 120-inch line from the end of the slant line at C. This line makes a 90-45-45 triangle at C. Since one side opposite to 45 degrees is 6 inches, the other side will also be 6 inches. AB + BC = 120 inches AB + (AB - 6) = 120 AB = 57 inches = 4ft 9inches. (let me know, if this is not clear; I will come up with a diagram.)

AB + (AB - 6) = 120 AB = 63 inches = 5ft 3inches.

Manager Joined: 26 Sep 2004 Posts: 137

### Show Tags

03 Oct 2004, 05:22

Don't be so brutal to yourself _________________ Franky http://franky4gmat.blogspot.com

Senior Manager Joined: 22 Feb 2004 Posts: 347

### Show Tags

03 Oct 2004, 09:44

Franky... thanks a lot for your effort. But I must say that this question remains difficult for me... I mean under time pressure, I cannot be sure of my sense of symmetry, and this question requires a bit of it.
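The consensus answer reduces to one linear equation; a sketch of the final check (assuming the thread's setup: half-length 120 inches and a 6-inch offset from the 45-45-90 triangle):

```python
half_length = 120   # inches, half of the 20-foot rectangle
offset = 6          # inches, leg of the 45-45-90 triangle

# AB + BC = 120 with BC = AB - 6  =>  2*AB = 126.
ab = (half_length + offset) / 2
feet, inches = divmod(ab, 12)
```

This is exactly where the first post slipped: from 2·AB = 126, AB is 63 inches (5 ft 3 in), not 57.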
# Polynomials and Rational Expressions Questions - All Grades

Create printable tests and worksheets from Polynomials and Rational Expressions questions. Select questions to add to a test using the checkbox above each question. Remember to click the add selected questions to a test button before moving to another page.

Grade 11 :: Polynomials and Rational Expressions by kimbee73
$n^2 + 3n - 28$ 1. $(n + 7)(n + 4)$ 2. $(n - 4)(n - 7)$ 3. $(n + 4)(n - 7)$ 4. $(n - 4)(n + 7)$

Grade 7 :: Polynomials and Rational Expressions by karenpritchard
$(9m^2 + 3m) + (11m^2 - m)$ 1. $20m^2 -2m$ 2. $20m^4 - 3$ 3. $20m^2 +2m$ 4. $20m^4 +m^2$

Grade 11 :: Polynomials and Rational Expressions by ACurvier
Which of the following is an example of a trinomial? 1. $(x^2 + 4x)$ 2. $(x^2 + 4x +12 - x^3)$ 3. $4xyz^3$ 4. $-4xy^3 -2xy^2 +5y$

Grade 11 :: Polynomials and Rational Expressions by ACurvier
Which of the following is NOT an example of a monomial? 1. $3x^2y^2z^2$ 2. $3x^2 + y^2 + z^2$ 3. 3 4. $x^2y^2z^2$

Grade 7 :: Polynomials and Rational Expressions by karenpritchard
$(13x^2y - 7x^2) + (5x^2y - 3x^2)$ 1. $18x^2y - 10x^2$ 2. $18x^2y + 4x^2$ 3. $8x^2y + 4x^2$ 4. $18x^4y^2 -10x^4$

Grade 11 :: Polynomials and Rational Expressions by MrsNichols62913
Simplify the expression $4x^2 + 8x^3 - 9x^2 - 2x + 8$. 1. $8x^3-5x^2-2x+8$ 2. $3x^5-2x+8$ 3. $-5x^6+8x^2-2x+8$ 4. None of the above

Grade 11 :: Polynomials and Rational Expressions by ACurvier
Which example has more than one variable? 1. 3 2. $4 + 2x^2$ 3. $4 + 2xy^2 + y^3$ 4. $3 + 2x$

Grade 11 :: Polynomials and Rational Expressions by ACurvier
The product of a monomial and a trinomial is: 1. Monomial 2. Binomial 3. Trinomial 4. Coefficients

Grade 9 :: Polynomials and Rational Expressions by jjstanley
$(3x^2y^3+xy^2)(x^4y-2x^2y^2)$ 1. $3x^4y^2-6x^4y^6+x^5y^2-2x^3y^4$ 2. $3x^6y^4-6x^4y^5+x^5y^3-2x^3y^4$ 3.
$3x^6y^4-6x^4y^5+x^4y^4-2x^2y^2$ 4. $3x^4y^6-6x^5y^4+x^3y^5-2x^4y^3$

Grade 9 :: Polynomials and Rational Expressions by jjstanley
$(2x+3)(5x-8)$ 1. $10x^2+x-24$ 2. $10x^2-31x-24$ 3. $10x^2-x-24$ 4. $7x^2+10x-5$

Grade 9 :: Polynomials and Rational Expressions by jjstanley
$3x(x+5)$ 1. $3x+15x$ 2. $3x^2+15x$ 3. $18x$ 4. $3x^3+15x^2$

Grade 11 :: Polynomials and Rational Expressions by MrsNichols62913
The side of a cube is represented by x + 1. Find, in terms of x, the volume of the cube. (Volume = side*side*side) 1. $x^2 + 2x +1$ 2. $2x+1$ 3. $x^3 + 3x^2 + 3x + 1$ 4. $3x+1$

Grade 9 :: Polynomials and Rational Expressions by LBeth
What expression does the set of tiles represent? 1. $2x$ 2. $-8x$ 3. $x^4 - 2$ 4. $4x - 2$

Grade 9 :: Polynomials and Rational Expressions by jjstanley
$(10p^4+p^3-4p^2-8)-(-12p^4+5p^3-2p+3)$ 1. $22p^4-4p^3-4p^2+2p-5$ 2. $-2p^4-4p^3-4p^2+2p-5$ 3. $-2p^4+6p^3-6p^2+2p-11$ 4. $22p^4-4p^3-4p^2+2p-11$

Grade 9 :: Polynomials and Rational Expressions by jjstanley
$(2x-2)(3x+5)$ 1. $6x^2+4x-10$ 2. $6x^2+16x-10$ 3. $-x+3$ 4. $6x^2-4x+10$

Grade 9 :: Polynomials and Rational Expressions by jjstanley
$(7n^5-3n+4n^2)+(2n^2+5n^5+8n)$ 1. $12n^5+6n^2+5n$ 2. $9n^5+2n^2+12n$ 3. $5n^5-2n^2+4n$ 4. $12n^5-6n^2-4n$

Grade 9 :: Polynomials and Rational Expressions by jjstanley
$(13x-3)(5-6x)$ 1. $-18x^2+65x-78$ 2. $-65x^2+18x-15$ 3. $-38x^2+78x-15$ 4. $-78x^2+83x-15$

Grade 9 :: Polynomials and Rational Expressions by jjstanley
$(x-5)(2x+3)(x+5)$ 1. $2x^3+3x^2-50x-75$ 2. $2x^3+2x^2-15x-25$ 3. $2x^3+2x^2-50x-25$ 4. $2x^3-50x^2+3x-75$

Grade 11 :: Polynomials and Rational Expressions by kimbee73
$4n^2 - 28n + 49$ 1. $(2n + 7)^2$ 2. $(2n - 7)^2$ 3. $(7n + 2)^2$ 4. $(7n - 2)^2$

Grade 11 :: Polynomials and Rational Expressions by kimbee73
$16x^2 + 40x + 25$ 1. $(x + 5)^2$ 2. $(4x - 5)^2$ 3. $(5x + 4)^2$ 4. $(4x + 5)^2$
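A few of the factoring questions above can be self-checked by expanding the candidate answers; `poly_mul` below is an illustrative helper (not part of the original worksheet) that multiplies polynomials given as coefficient lists:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# n^2 + 3n - 28 = (n + 7)(n - 4): coefficients listed as [constant, n, n^2]
assert poly_mul([7, 1], [-4, 1]) == [-28, 3, 1]

# 4n^2 - 28n + 49 = (2n - 7)^2
assert poly_mul([-7, 2], [-7, 2]) == [49, -28, 4]

# 16x^2 + 40x + 25 = (4x + 5)^2
assert poly_mul([5, 4], [5, 4]) == [25, 40, 16]
```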
# Nested roots!

Algebra Level 4

Let $$B$$ be the value of the expression below. Find the largest two-digit value of $$a$$ such that $\sqrt{a + \sqrt{a + \sqrt{a + \sqrt{a \ldots}}}}$ evaluates to a positive integer $$B$$, and then find $$\dfrac aB$$.
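One way to attack this (a sketch, under the usual convention that the nested radical equals the positive fixed point $x = \sqrt{a + x}$): squaring gives $x^2 - x = a$, so the expression is a positive integer $B$ exactly when $a = B^2 - B$, and a brute-force search finds the largest two-digit such $a$:

```python
# If x = sqrt(a + x), then x*x - x = a, so the expression is a positive
# integer B exactly when a = B*B - B.  Search the two-digit values of a.
best = max(a for a in range(10, 100)
           if any(b * b - b == a for b in range(2, 12)))
b = next(b for b in range(2, 12) if b * b - b == best)
print(best, b, best // b)   # 90 10 9
```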
# statsmodels.stats.descriptivestats.sign_test

statsmodels.stats.descriptivestats.sign_test(samp, mu0=0)[source]

Signs test.

Parameters:
samp : array-like — 1d array. The sample for which you want to perform the signs test.
mu0 : float — See Notes for the definition of the sign test. mu0 is 0 by default, but it is common to set it to the median.

Returns:
M, p-value

See also: scipy.stats.wilcoxon
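The page above is truncated, but the behaviour it describes can be sketched in pure Python. The helper below is an illustrative re-implementation, not statsmodels' actual code: it reports $M = (N(+) - N(-))/2$ with ties against `mu0` discarded, and a two-sided exact binomial p-value (statsmodels delegates to an exact binomial test, so its p-values may differ slightly from this doubling convention):

```python
from math import comb

def sign_test(samp, mu0=0.0):
    """Minimal sketch of a signs test: M = (N(+) - N(-)) / 2 plus a
    two-sided exact binomial p-value; ties with mu0 are discarded."""
    pos = sum(1 for x in samp if x > mu0)
    neg = sum(1 for x in samp if x < mu0)
    n = pos + neg
    m = (pos - neg) / 2.0
    k = min(pos, neg)
    # two-sided p-value under Binomial(n, 1/2), capped at 1
    p = min(1.0, 2.0 * sum(comb(n, i) for i in range(0, k + 1)) / 2.0 ** n)
    return m, p

print(sign_test([1, 2, 3, 4, 5], mu0=3))   # -> (0.0, 1.0)
```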
# Extremals of functionals

1. Find the extremals of the functional $$J(x(t)) = \int\limits_{0}^{2}\frac{[x'(t)]^2}{x^3(t)}dt$$ subject to $x(0) =1$ and $x(2) = 4$. Does the two-point boundary problem have a unique solution?

2. Find all functions $x(t)$ which minimize $$J(x(t)) = \int\limits_{0}^{5}\frac{\sqrt{1+ x'^2}}{\sqrt{x}}dt$$ subject to $x(0) = 0$ and $x(5) = 3$.

P/S: If my computations are correct, the Euler-Lagrange equations corresponding to the above functionals are $3x'^2 + 2xx'' - 6x'^2 = 0$ (i.e. $2xx'' = 3x'^2$) and $x'^2 + 2xx''+1 = 0$, respectively. Please help me to find the explicit solutions of the above differential equations, thank you very much.

BTW, you should preferably write $J(x)$, as $J$ is a function of $x$. Writing $J(x(t))$ is misleading or meaningless, as $t$ is a dummy integration variable. –  Pietro Majer Feb 23 '13 at 14:04

I will not cover the details of the calculation because this sounds a lot like a homework question, but to cover the broader details: to find the extremals of the functional $J$, they must satisfy the Euler-Lagrange equation $$\frac{d}{dt}\frac{\partial L}{\partial x'}=\frac{\partial L}{\partial x}$$ where $L$ is the integrand of the functional $J$. For example, for $1.$, we have $$\frac{d}{dt}\Big(\frac{2x'}{x^3}\Big)=-\frac{3(x')^2}{x^4}\Rightarrow \frac{2x''}{x^3}-\frac{6(x')^2}{x^4}=-\frac{3(x')^2}{x^4},$$ i.e. $2xx'' = 3(x')^2$, and for the second, the Euler-Lagrange equation simplifies to $$2xx'' + (x')^2 + 1 = 0,$$ matching your computation. As for uniqueness: solving the Euler-Lagrange equation introduces arbitrary constants, and imposing the boundary conditions does not necessarily pin them down uniquely.
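A numerical sanity check for problem 1 (assuming the simplified Euler-Lagrange equation $2xx'' = 3x'^2$): the power ansatz $x(t) = C/(t-c)^2$ solves it, and two choices of constants match the boundary data, although the second has a pole at $t = 4/3$ inside $[0,2]$, which is exactly what makes the uniqueness question delicate:

```python
# The Euler-Lagrange equation 2*x*x'' = 3*(x')**2 admits x(t) = C/(t - c)**2.
# Both parameter choices below satisfy x(0) = 1 and x(2) = 4; note the second
# one blows up at t = 4/3, inside the interval [0, 2].
def make_sol(C, c):
    x   = lambda t: C / (t - c) ** 2
    xp  = lambda t: -2 * C / (t - c) ** 3
    xpp = lambda t: 6 * C / (t - c) ** 4
    return x, xp, xpp

for C, c in [(16.0, 4.0), (16.0 / 9.0, 4.0 / 3.0)]:
    x, xp, xpp = make_sol(C, c)
    assert abs(x(0) - 1) < 1e-12 and abs(x(2) - 4) < 1e-12
    for t in (0.25, 0.5, 1.0):   # spot-check the ODE away from t = c
        assert abs(2 * x(t) * xpp(t) - 3 * xp(t) ** 2) < 1e-9
print("both parameter choices match the boundary data")
```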
Let ${V}$ be a quasiprojective variety defined over a finite field ${{\bf F}_q}$, thus for instance ${V}$ could be an affine variety $\displaystyle V = \{ x \in {\bf A}^d: P_1(x) = \dots = P_m(x) = 0\} \ \ \ \ \ (1)$ where ${{\bf A}^d}$ is ${d}$-dimensional affine space and ${P_1,\dots,P_m: {\bf A}^d \rightarrow {\bf A}}$ are a finite collection of polynomials with coefficients in ${{\bf F}_q}$. Then one can define the set ${V[{\bf F}_q]}$ of ${{\bf F}_q}$-rational points, and more generally the set ${V[{\bf F}_{q^n}]}$ of ${{\bf F}_{q^n}}$-rational points for any ${n \geq 1}$, since ${{\bf F}_{q^n}}$ can be viewed as a field extension of ${{\bf F}_q}$. Thus for instance in the affine case (1) we have $\displaystyle V[{\bf F}_{q^n}] := \{ x \in {\bf F}_{q^n}^d: P_1(x) = \dots = P_m(x) = 0\}.$ The Weil conjectures are concerned with understanding the number $\displaystyle S_n := |V[{\bf F}_{q^n}]| \ \ \ \ \ (2)$ of ${{\bf F}_{q^n}}$-rational points over a variety ${V}$. The first of these conjectures was proven by Dwork, and can be phrased as follows. Theorem 1 (Rationality of the zeta function) Let ${V}$ be a quasiprojective variety defined over a finite field ${{\bf F}_q}$, and let ${S_n}$ be given by (2). Then there exist a finite number of algebraic integers ${\alpha_1,\dots,\alpha_k, \beta_1,\dots,\beta_{k'} \in O_{\overline{{\bf Q}}}}$ (known as characteristic values of ${V}$), such that $\displaystyle S_n = \alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n$ for all ${n \geq 1}$. After cancelling, we may of course assume that ${\alpha_i \neq \beta_j}$ for any ${i=1,\dots,k}$ and ${j=1,\dots,k'}$, and then it is easy to see (as we will see below) that the ${\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'}}$ become uniquely determined up to permutations of the ${\alpha_1,\dots,\alpha_k}$ and ${\beta_1,\dots,\beta_{k'}}$. These values are known as the characteristic values of ${V}$. Since ${S_n}$ is a rational integer (i.e. 
an element of ${{\bf Z}}$) rather than merely an algebraic integer (i.e. an element of the ring of integers ${O_{\overline{{\bf Q}}}}$ of the algebraic closure ${\overline{{\bf Q}}}$ of ${{\bf Q}}$), we conclude from the above-mentioned uniqueness that the set of characteristic values are invariant with respect to the Galois group ${Gal(\overline{{\bf Q}} / {\bf Q} )}$. To emphasise this Galois invariance, we will not fix a specific embedding ${\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}$ of the algebraic numbers into the complex field ${{\bf C} = {\bf C}_\infty}$, but work with all such embeddings simultaneously. (Thus, for instance, ${\overline{{\bf Q}}}$ contains three cube roots of ${2}$, but which of these is assigned to the complex numbers ${2^{1/3}}$, ${e^{2\pi i/3} 2^{1/3}}$, ${e^{4\pi i/3} 2^{1/3}}$ will depend on the choice of embedding ${\iota_\infty}$.) An equivalent way of phrasing Dwork’s theorem is that the (${T}$-form of the) zeta function $\displaystyle \zeta_V(T) := \exp( \sum_{n=1}^\infty \frac{S_n}{n} T^n )$ associated to ${V}$ (which is well defined as a formal power series in ${T}$, at least) is equal to a rational function of ${T}$ (with the ${\alpha_1,\dots,\alpha_k}$ and ${\beta_1,\dots,\beta_{k'}}$ being the poles and zeroes of ${\zeta_V}$ respectively). Here, we use the formal exponential $\displaystyle \exp(X) := 1 + X + \frac{X^2}{2!} + \frac{X^3}{3!} + \dots.$ Equivalently, the (${s}$-form of the) zeta-function ${s \mapsto \zeta_V(q^{-s})}$ is a meromorphic function on the complex numbers ${{\bf C}}$ which is also periodic with period ${2\pi i/\log q}$, and which has only finitely many poles and zeroes up to this periodicity. Dwork’s argument relies primarily on ${p}$-adic analysis – an analogue of complex analysis, but over an algebraically complete (and metrically complete) extension ${{\bf C}_p}$ of the ${p}$-adic field ${{\bf Q}_p}$, rather than over the Archimedean complex numbers ${{\bf C}}$. 
The argument is quite effective, and in particular gives explicit upper bounds for the number ${k+k'}$ of characteristic values in terms of the complexity of the variety ${V}$; for instance, in the affine case (1) with ${V}$ of degree ${D}$, Bombieri used Dwork’s methods (in combination with Deligne’s theorem below) to obtain the bound ${k+k' \leq (4D+9)^{2d+1}}$, and a subsequent paper of Hooley established the slightly weaker bound ${k+k' \leq (11D+11)^{d+m+2}}$ purely from Dwork’s methods (a similar bound had also been pointed out in unpublished work of Dwork). In particular, one has bounds that are uniform in the field ${{\bf F}_q}$, which is an important fact for many analytic number theory applications. These ${p}$-adic arguments stand in contrast with Deligne’s resolution of the last (and deepest) of the Weil conjectures: Theorem 2 (Riemann hypothesis) Let ${V}$ be a quasiprojective variety defined over a finite field ${{\bf F}_q}$, and let ${\lambda \in \overline{{\bf Q}}}$ be a characteristic value of ${V}$. Then there exists a natural number ${w}$ such that ${|\iota_\infty(\lambda)|_\infty = q^{w/2}}$ for every embedding ${\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}$, where ${| |_\infty}$ denotes the usual absolute value on the complex numbers ${{\bf C} = {\bf C}_\infty}$. (Informally: ${\lambda}$ and all of its Galois conjugates have complex magnitude ${q^{w/2}}$.) To put it another way that closely resembles the classical Riemann hypothesis, all the zeroes and poles of the ${s}$-form ${s \mapsto \zeta_V(q^{-s})}$ lie on the critical lines ${\{ s \in {\bf C}: \hbox{Re}(s) = \frac{w}{2} \}}$ for ${w=0,1,2,\dots}$. (See this previous blog post for further comparison of various instantiations of the Riemann hypothesis.) Whereas Dwork uses ${p}$-adic analysis, Deligne uses the essentially orthogonal technique of ell-adic cohomology to establish his theorem. 
However, ell-adic methods can be used (via the Grothendieck-Lefschetz trace formula) to establish rationality, and conversely, in this paper of Kedlaya p-adic methods are used to establish the Riemann hypothesis. As pointed out by Kedlaya, the ell-adic methods are tied to the intrinsic geometry of ${V}$ (such as the structure of sheaves and covers over ${V}$), while the ${p}$-adic methods are more tied to the extrinsic geometry of ${V}$ (how ${V}$ sits inside its ambient affine or projective space). In this post, I would like to record my notes on Dwork’s proof of Theorem 1, drawing heavily on the expositions of Serre, Hooley, Koblitz, and others. The basic strategy is to control the rational integers ${S_n}$ both in an “Archimedean” sense (embedding the rational integers inside the complex numbers ${{\bf C}_\infty}$ with the usual norm ${||_\infty}$) as well as in the “${p}$-adic” sense, with ${p}$ the characteristic of ${{\bf F}_q}$ (embedding the integers now in the “complexification” ${{\bf C}_p}$ of the ${p}$-adic numbers ${{\bf Q}_p}$, which is equipped with a norm ${||_p}$ that we will recall later). (This is in contrast to the methods of ell-adic cohomology, in which one primarily works over an ${\ell}$-adic field ${{\bf Q}_\ell}$ with ${\ell \neq p,\infty}$.) The Archimedean control is trivial: Proposition 3 (Archimedean control of ${S_n}$) With ${S_n}$ as above, and any embedding ${\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}$, we have $\displaystyle |\iota_\infty(S_n)|_\infty \leq C q^{A n}$ for all ${n}$ and some ${C, A >0}$ independent of ${n}$. Proof: Since ${S_n}$ is a rational integer, ${|\iota_\infty(S_n)|_\infty}$ is just ${|S_n|_\infty}$. By decomposing ${V}$ into affine pieces, we may assume that ${V}$ is of the affine form (1), then we trivially have ${|S_n|_\infty \leq q^{nd}}$, and the claim follows. 
$\Box$ Another way of thinking about this Archimedean control is that it guarantees that the zeta function ${T \mapsto \zeta_V(T)}$ can be defined holomorphically on the open disk in ${{\bf C}_\infty}$ of radius ${q^{-A}}$ centred at the origin. The ${p}$-adic control is significantly more difficult, and is the main component of Dwork’s argument: Proposition 4 (${p}$-adic control of ${S_n}$) With ${S_n}$ as above, and using an embedding ${\iota_p: \overline{{\bf Q}} \rightarrow {\bf C}_p}$ (defined later) with ${p}$ the characteristic of ${{\bf F}_q}$, we can find for any real ${A > 0}$ a finite number of elements ${\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'} \in {\bf C}_p}$ such that $\displaystyle |\iota_p(S_n) - (\alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n)|_p \leq q^{-An}$ for all ${n}$. Another way of thinking about this ${p}$-adic control is that it guarantees that the zeta function ${T \mapsto \zeta_V(T)}$ can be defined meromorphically on the entire ${p}$-adic complex field ${{\bf C}_p}$. Proposition 4 is ostensibly much weaker than Theorem 1 because of (a) the error term of ${p}$-adic magnitude at most ${Cq^{-An}}$; (b) the fact that the number ${k+k'}$ of potential characteristic values here may go to infinity as ${A \rightarrow \infty}$; and (c) the potential characteristic values ${\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'}}$ only exist inside the complexified ${p}$-adics ${{\bf C}_p}$, rather than in the algebraic integers ${O_{\overline{{\bf Q}}}}$. However, it turns out that by combining ${p}$-adic control on ${S_n}$ in Proposition 4 with the trivial control on ${S_n}$ in Proposition 3, one can obtain Theorem 1 by an elementary argument that does not use any further properties of ${S_n}$ (other than the obvious fact that the ${S_n}$ are rational integers), with the ${A}$ in Proposition 4 chosen to exceed the ${A}$ in Proposition 3. We give this argument (essentially due to Borel) below the fold. 
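Before turning to the proof, it may help to see Theorem 1 in the simplest possible case (my example, not from the text): for the torus ${V = \{(x,y): xy = 1\}}$ one has ${S_n = q^n - 1}$, so the characteristic values are ${\alpha_1 = q}$ and ${\beta_1 = 1}$, and ${\zeta_V(T) = (1-T)/(1-qT)}$. A brute-force count over ${{\bf F}_3}$ and ${{\bf F}_9}$ (realised as ${{\bf F}_3[i]}$ with ${i^2 = -1}$, since ${-1}$ is a non-residue mod ${3}$) confirms the first two values:

```python
# Brute-force point count for V = {(x, y) : x*y = 1} over F_3 and F_9,
# illustrating S_n = q^n - 1, i.e. characteristic values alpha_1 = q, beta_1 = 1.
p = 3

# n = 1: F_3 is just the integers mod 3.
S1 = sum(1 for x in range(p) for y in range(p) if (x * y) % p == 1)

# n = 2: realise F_9 as F_3[i] with i^2 = -1 (x^2 + 1 is irreducible mod 3).
def mul(a, b):
    (a0, a1), (b0, b1) = a, b
    return ((a0 * b0 - a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

F9 = [(a, b) for a in range(p) for b in range(p)]
S2 = sum(1 for x in F9 for y in F9 if mul(x, y) == (1, 0))

print(S1, S2)   # 2 8, matching 3^1 - 1 and 3^2 - 1
```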
The proof of Proposition 4 can be split into two pieces. The first piece, which can be viewed as the number-theoretic component of the proof, uses external descriptions of ${V}$ such as (1) to obtain the following decomposition of ${S_n}$: Proposition 5 (Decomposition of ${S_n}$) With ${\iota_p}$ and ${S_n}$ as above, we can decompose ${\iota_p(S_n)}$ as a finite linear combination (over the integers) of sequences ${S'_n \in {\bf C}_p}$, such that for each such sequence ${n \mapsto S'_n}$, the zeta functions $\displaystyle \zeta'(T) := \exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n ) = \sum_{n=0}^\infty c_n T^n$ are entire in ${{\bf C}_p}$, by which we mean that $\displaystyle |c_n|_p^{1/n} \rightarrow 0$ as ${n \rightarrow \infty}$. This proposition will ultimately be a consequence of the properties of the Teichmuller lifting ${\tau: \overline{{\bf F}_p}^\times \rightarrow {\bf C}_p^\times}$. The second piece, which can be viewed as the “${p}$-adic complex analytic” component of the proof, relates the ${p}$-adic entire nature of a zeta function with control on the associated sequence ${S'_n}$, and can be interpreted (after some manipulation) as a ${p}$-adic version of the Weierstrass preparation theorem: Proposition 6 (${p}$-adic Weierstrass preparation theorem) Let ${S'_n}$ be a sequence in ${{\bf C}_p}$, such that the zeta function $\displaystyle \zeta'(T) := \exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n )$ is entire in ${{\bf C}_p}$. Then for any real ${A > 0}$, there exist a finite number of elements ${\beta_1,\dots,\beta_{k'} \in {\bf C}_p}$ such that $\displaystyle |S'_n + \beta_1^n + \dots + \beta_{k'}^n|_p \leq q^{-An}$ for all ${n}$. Clearly, the combination of Proposition 5 and Proposition 6 (and the non-Archimedean nature of the ${||_p}$ norm) implies Proposition 4. — 1.
Constructing the complex ${p}$-adics — Given a field ${k}$, a norm on that field is defined to be a map ${||: k \rightarrow {\bf R}^+}$ obeying the following axioms for ${x,y \in k}$: • (Non-degeneracy) ${|x|=0}$ if and only if ${x=0}$. • (Multiplicativity) ${|xy| = |x| |y|}$. • (Triangle inequality) ${|x+y| \leq |x| + |y|}$. If the triangle inequality can be improved to the ultra-triangle inequality $\displaystyle |x+y| \leq \max(|x|, |y|) \ \ \ \ \ (3)$ then we say that the norm is non-Archimedean. The pair ${(k, ||)}$ will be referred to as a normed field. The most familiar example of a norm is the usual (Archimedean) absolute value ${z \mapsto |z| = |z|_\infty}$ on the complex numbers ${{\bf C} = {\bf C}_\infty}$, and thus also on its subfields ${{\bf R}}$ and ${{\bf Q}}$. For a given prime ${p}$, we also have the ${p}$-adic norm ${x \mapsto |x|_p}$ defined initially on the rationals ${{\bf Q}}$ by the formula $\displaystyle |\frac{a}{b}|_p := p^{\hbox{ord}_p(b) - \hbox{ord}_p(a)}$ for any rational ${\frac{a}{b}}$, where ${\hbox{ord}_p(a)}$ is the number of times ${p}$ divides an integer ${a}$ (with the conventions ${\hbox{ord}_p(0)=+\infty}$ and ${p^{-\infty} = 0}$). Thus for instance ${|p^j|_p = p^{-j}}$ for any integer ${j}$, which is of course inverse to the Archimedean norm ${|p^j|_\infty = p^j}$. (More generally, the fundamental theorem of arithmetic can be elegantly rephrased as the identity ${\prod_\nu |x|_\nu = 1}$ for all non-zero rationals ${x}$, where ${\nu}$ ranges over all places (i.e. over all the rational primes ${p}$, together with ${\infty}$.) It is easy to see that ${||_p}$ is indeed a non-Archimedean norm. A classical theorem of Ostrowski asserts that all norms on ${{\bf Q}}$ are equivalent to either the Archimedean norm ${||_\infty}$ or one of the ${p}$-adic norms ${||_p}$, although we will not need this result here. 
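The ${p}$-adic norm and the product formula ${\prod_\nu |x|_\nu = 1}$ mentioned above are easy to experiment with; here is a small stdlib sketch:

```python
from fractions import Fraction

def ord_p(n, p):
    """Number of times p divides the nonzero integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def norm_p(x, p):
    """The p-adic norm |a/b|_p = p^(ord_p(b) - ord_p(a))."""
    x = Fraction(x)
    return Fraction(p) ** (ord_p(x.denominator, p) - ord_p(x.numerator, p))

x = Fraction(-140, 297)     # = -(2^2 * 5 * 7) / (3^3 * 11)
primes = [2, 3, 5, 7, 11]   # every prime dividing the numerator or denominator
product = abs(x)            # the Archimedean factor |x|_infinity
for p in primes:
    product *= norm_p(x, p)
assert product == 1         # the product formula: prod_nu |x|_nu = 1
print(norm_p(x, 3))         # 27: x is 3-adically large, since 27 divides 297
```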
A norm ${||}$ on a field ${k}$ defines a metric ${d(x,y) := |x-y|}$, and then one can define the metric completion ${\hbox{Clos}_{||}(k)}$ of this field in the usual manner (as equivalence classes of Cauchy sequences in ${k}$ with respect to this metric). It is easy to see that the resulting completion is again a field, and that the norm ${||}$ on ${k}$ extends continuously to a norm on the metric completion ${\hbox{Clos}_{||}(k)}$. The metric closure of a non-Archimedean normed field is again a non-Archimedean normed field. Once one has metric completeness, one can form infinite series ${\sum_{n=1}^\infty x_n}$ of elements of the field in the usual manner; but the non-Archimedean setting is somewhat better behaved than the Archimedean setting. In particular, it is easy to see that if ${k = (k,||)}$ is a non-Archimedean metrically complete normed field, then an infinite series ${\sum_{n=1}^\infty x_n}$ is convergent if and only if it obeys the zero test ${\lim_{n \rightarrow \infty} |x_n| = 0}$, and furthermore that convergent series are automatically unconditionally convergent. (The notion of absolute convergence is not particularly relevant in non-Archimedean fields.) Thus we can talk about a countable series ${\sum_{n \in I} x_n}$ in a non-Archimedean metrically complete normed field being convergent without having to be concerned about the ordering of the series. As key examples of metric completion, we recall that using the Archimedean norm ${||_\infty}$, the metric completion ${\hbox{Clos}_{||_\infty}({\bf Q})}$ of the rationals is the reals ${{\bf R} = {\bf Q}_\infty}$, whereas using a ${p}$-adic norm ${||_p}$, the metric completion ${\hbox{Clos}_{||_p}({\bf Q})}$ of the rationals is instead the ${p}$-adic field ${{\bf Q}_p}$.
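The zero test makes series like ${\sum_n p^n}$ convergent in ${{\bf Q}_p}$ even though they diverge Archimedeanly: the partial sums approach ${1/(1-p)}$ at rate ${p^{-N}}$, which the following sketch verifies with exact rational arithmetic:

```python
from fractions import Fraction

def norm_p(x, p):
    """p-adic norm of a nonzero rational, via the valuation of num/den."""
    x = Fraction(x)
    v = 0
    n, d = x.numerator, x.denominator
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return Fraction(p) ** (-v)

p = 5
limit = Fraction(1, 1 - p)     # 1/(1-p), the p-adic sum of 1 + p + p^2 + ...
partial = Fraction(0)
for N in range(1, 8):
    partial += Fraction(p) ** (N - 1)
    # |limit - partial|_p = |p^N/(1-p)|_p = p^(-N): the tails obey the zero test
    assert norm_p(limit - partial, p) == Fraction(p) ** (-N)
print("partial sums of sum p^n converge to 1/(1-p) in |.|_p")
```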
Note that the metric notion of completeness (convergence of every Cauchy sequence) is distinct from the algebraic notion of completeness (solvability of every non-constant polynomial equation, also known as being algebraically closed). For instance, the fields ${{\bf R}={\bf Q}_\infty}$ and ${{\bf Q}_p}$ are metrically complete, but not algebraically complete. However, the two notions of completeness are related to each other in a number of ways. Firstly, the metric completion of an algebraically complete field remains algebraically complete: Lemma 7 Let ${(k, ||)}$ be a normed field which is algebraically closed. Then the metric completion ${\hbox{Clos}_{||}(k)}$ is also algebraically closed. Proof: Let ${P(x) = x^d + a_{d-1} x^{d-1} + \dots + a_0}$ be a monic polynomial of some degree ${d \geq 1}$ with coefficients ${a_0,\dots,a_{d-1}}$ in ${\hbox{Clos}_{||}(k)}$. We need to show that ${P}$ has at least one root in ${\hbox{Clos}_{||}(k)}$. By construction of ${\hbox{Clos}_{||}(k)}$, we can view ${P}$ as the limit of polynomials ${P_n(x) = x^d + a_{d-1,n} x^{d-1} + \dots + a_{0,n}}$ with coefficients ${a_{0,n},\dots,a_{d-1,n} \in k}$, where the convergence is in the sense that each coefficient ${a_{i,n}}$ converges to ${a_i}$ as ${n \rightarrow \infty}$ for ${i=0,\dots,d-1}$. As ${k}$ is already algebraically closed, each ${P_n}$ has ${d}$ roots ${z_{1,n},\dots,z_{d,n} \in k}$ (possibly with repetition). Because the ${a_{i,n}}$ are bounded, it is easy to see from the equation ${P_n(z_{i,n})=0}$ that the roots ${z_{i,n}}$ are uniformly bounded in ${n}$. Among other things, this implies that ${P_n(z_{i,m})}$ converges to zero as ${n,m \rightarrow \infty}$, since ${P_m(z_{i,m})=0}$ and the coefficients of ${P_m-P_n}$ converge to zero. Writing ${P_n(z_{i,m}) = \prod_{j=1}^d (z_{i,m}-z_{j,n})}$, we conclude that the distance between ${z_{i,m}}$ and the zero set ${\{z_{1,n},\dots,z_{d,n}\}}$ goes to zero as ${n,m \rightarrow \infty}$.
From this one can easily extract a Cauchy sequence ${z_{i_j, n_j}}$ with ${n_j \rightarrow \infty}$, which then converges to a limit ${z \in \hbox{Clos}_{||}(k)}$ which can be seen to be a zero of ${P}$, giving the claim. $\Box$ In the other direction, in the case of the ${p}$-adics at least, it is possible to extend a norm on a field to the algebraic closure of that field: Lemma 8 For any ${z}$ in the algebraic closure ${\overline{{\bf Q}_p}}$ of ${{\bf Q}_p}$, define the norm ${|z|_p}$ of ${z}$ by the formula $\displaystyle |z| := |z_1 \dots z_d|^{1/d} \ \ \ \ \ (4)$ where ${z_1,\dots,z_d}$ are the Galois conjugates of ${z}$ in ${\overline{{\bf Q}_p}}$ (so in particular ${z_1 \dots z_d \in {\bf Q}_p}$). Then ${\overline{{\bf Q}_p}}$ becomes a non-Archimedean normed field with this norm. The situation is much more complicated in the Archimedean case, as there is no canonical way to extend the norm in this case. For instance, if one wishes to extend the Archimedean norm ${||_\infty}$ from ${{\bf Q}}$ to ${\overline{{\bf Q}}}$, one can do so by choosing an embedding ${\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}$ and using the Archimedean norm on ${{\bf C}}$, but this is not a Galois-invariant definition. For instance, one of the two roots of the equation ${x^2 - x - 1 = 0}$ will have a larger norm than the other (one norm being the golden ratio, and the other being its reciprocal), but the choice of root that has the larger norm depends on the choice of embedding ${\iota_\infty}$. Note that the definition (4) fails to be a norm in the Archimedean case; for instance, in ${\overline{{\bf Q}}}$, (4) would require ${3+2\sqrt{2}}$ and ${3-2\sqrt{2}}$ to have norm ${1}$, while their sum would have norm ${6}$, violating the triangle inequality. Proof: The only difficult task to show is the ultra-triangle inequality (3). 
It suffices to show that for every Galois extension ${E}$ of ${{\bf Q}_p}$ and every ${z,w \in E}$, one has $\displaystyle |z+w| \leq \max(|z|, |w|).$ We view ${E}$ as a finite-dimensional vector space over ${{\bf Q}_p}$ of some dimension ${d}$, and identify each ${z \in E}$ with the multiplication operator ${M_z: E \rightarrow E}$ defined by ${M_z x := zx}$. These ${M_z}$ can be viewed as elements of ${\hbox{Hom}_{{\bf Q}_p}(E \rightarrow E)}$, the space of ${{\bf Q}_p}$-linear maps from ${E}$ to itself, and the determinant of ${M_z}$ has norm ${|z|^d}$ by construction. We pick some arbitrary ${{\bf Q}_p}$-basis ${e_1,\dots,e_d}$ of ${E}$ and use this to define a non-Archimedean “norm” ${\| \|}$ on ${E}$ by the formula $\displaystyle \| x_1 e_1 + \dots + x_d e_d \| := \sup_{i=1,\dots,d} |x_i|$ for ${x_1,\dots,x_d \in {\bf Q}_p}$, and then define a “norm” ${\|\|_{op}}$ on ${\hbox{Hom}_{{\bf Q}_p}(E \rightarrow E)}$ by $\displaystyle \| T \|_{op} := \sup \{ \|Tx\|: x \in E, \|x\| \leq 1 \}.$ It is easy to see that the space ${\{ M_z: z \in E \}}$ is then a closed linear subspace of ${\hbox{Hom}_{{\bf Q}_p}(E \rightarrow E)}$. In particular, since ${{\bf Q}_p}$ is locally compact, we see that for any compact interval ${I \subset (0,+\infty)}$, the set ${\{ M_z: \|M_z\|_{op} \in I \}}$ is compact. On the other hand, as all the ${M_z}$ with ${z \neq 0}$ are invertible, ${\hbox{det}(M_z)}$ is non-zero on this compact set. Thus, for any ${I}$, there exists a constant ${C = C_I > 0}$ such that $\displaystyle C^{-1} \leq |\hbox{det}(M_z)| \leq C$ for all ${z \in E}$ with ${\|M_z\|_{op} \in I}$. Since ${|\hbox{det}(M_z)| = |z|^d}$, we then see from a rescaling argument that there is a constant ${C'>0}$ such that $\displaystyle (C')^{-1} \|z\|_{op} \leq |z| \leq C' \|z\|_{op}$ for all ${z \in E}$. Since ${|z| = |z^n|^{1/n}}$, we conclude the spectral radius formula $\displaystyle |z| = \lim_{n \rightarrow \infty} \|z^n\|_{op}^{1/n}.
\ \ \ \ \ (5)$ Now we can prove the ultra-triangle inequality via the tensor power trick. If ${|z|, |w| \leq A}$, then from (5) we have $\displaystyle \|z^n\|_{op}, \|w^n\|_{op} \leq (A+o(1))^n$ as ${n \rightarrow \infty}$; from this and the easy bounds ${\|zw\|_{op} \leq \|z\|_{op} \|w\|_{op}, \|z+w\|_{op} \leq \max( \|z\|_{op}, \|w\|_{op} )}$ and binomial expansion we also conclude that $\displaystyle \|(z+w)^n\|_{op} \leq (A+o(1))^n$ as ${n \rightarrow \infty}$. A second application of (5) then gives ${|z+w| \leq A}$, and the ultra-triangle inequality follows. $\Box$ Combining the two lemmas, we see that if we define $\displaystyle {\bf C}_p = \hbox{Clos}_{||_p}( \overline{{\bf Q}_p} )$ to be the metric completion of the algebraic completion ${\overline{{\bf Q}_p}}$ of the ${p}$-adic field ${{\bf Q}_p}$, then this is a non-Archimedean normed field which is both metrically complete and algebraically complete, and serves as the analogue of the complex field ${{\bf C} = {\bf C}_\infty}$. Note that ${{\bf C}_p}$ comes with an embedding ${\iota_p: \overline{{\bf Q}} \rightarrow {\bf C}_p}$, since ${\overline{{\bf Q}}}$ may clearly be embedded into ${\overline{{\bf Q}_p}}$. Also, the norm on ${\overline{{\bf Q}}}$ induced from this embedding is clearly Galois-invariant and thus independent of the choice of embedding. Finally, we remark from construction that every non-zero element of ${\overline{{\bf Q}_p}}$ has a norm which is a rational power ${p^{a/b}}$ of ${p}$, so on taking limits (and using the ultra-triangle inequality) we see that the same is true for non-zero elements of ${{\bf C}_p}$. Remark 1 In the Archimedean case, the analogue of ${{\bf Q}_p}$ is the reals ${{\bf R} = {\bf Q}_\infty}$, and in this case the algebraic completion ${\overline{{\bf R}}}$ is a finite extension of ${{\bf R}}$ (in fact it is just a quadratic extension) and is thus already metrically complete. 
However, in the ${p}$-adic case, it turns out that ${\overline{{\bf Q}_p}}$ is an infinite extension of ${{\bf Q}_p}$ (for instance, it contains ${n^{th}}$ roots of ${p}$ for every ${n \geq 1}$), and is no longer metrically complete, requiring the additional application of Lemma 7 to recover metric completeness. — 2. From meromorphicity to rationality — We now show how Proposition 3 and Proposition 4 imply Theorem 1. The basic idea is to exploit the fact that a non-zero rational integer ${N}$ cannot be simultaneously small in the Archimedean sense and in the ${p}$-adic sense, and in particular that we have an “uncertainty principle” $\displaystyle |N|_\infty \times |N|_p \geq 1 \ \ \ \ \ (6)$ which is immediate from the fundamental theorem of arithmetic. We would like to use this uncertainty principle to eliminate the error term in Proposition 4, but run into the issue that many of the quantities involved here are not rational integers, but instead merely lie in ${{\bf C}_p}$. To get around this, we have to work with expressions that are guaranteed to be rational integers, such as polynomial combinations of the ${S_n}$ with integer coefficients. To this end, we introduce the following classical lemma: Lemma 9 (Rationality criterion) Let ${S_n}$ be a sequence in a field ${k}$, with the property that there exists a natural number ${m \geq 0}$ such that the ${m+1 \times m+1}$ determinants $\displaystyle \det ( S_{n+i+j} )_{0 \leq i,j \leq m} \ \ \ \ \ (7)$ vanish for all sufficiently large ${n}$. Then there exist ${a_0,\dots,a_m \in k}$, not all zero, such that we have the linear recurrence $\displaystyle a_0 S_n + a_1 S_{n+1} + \dots + a_m S_{n+m} = 0 \ \ \ \ \ (8)$ for all sufficiently large ${n}$. (Equivalently, the formal power series ${\sum_n S_n T^n}$ is a rational function of ${T}$.) Note that in the converse direction, row operations show that if one has the recurrence (8), then (7) vanishes. 
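Lemma 9 can be sanity-checked on the Fibonacci sequence (my example, not from the text), which satisfies the order-2 recurrence ${S_{n+2} = S_{n+1} + S_n}$: the ${3 \times 3}$ determinants (7) with ${m=2}$ all vanish, while the ${2 \times 2}$ ones equal ${\pm 1}$ (Cassini's identity) and never do:

```python
# Lemma 9 in action: S_n = Fibonacci obeys S_{n+2} = S_{n+1} + S_n, so every
# 3x3 Hankel determinant det(S_{n+i+j})_{0<=i,j<=2} vanishes, while the 2x2
# determinants det(S_{n+i+j})_{0<=i,j<=1} equal (-1)^(n+1) and never vanish.
S = [0, 1]
for _ in range(30):
    S.append(S[-1] + S[-2])

def hankel_det3(n):
    m = [[S[n + i + j] for j in range(3)] for i in range(3)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

for n in range(10):
    assert hankel_det3(n) == 0                                  # m = 2 vanishes
    assert S[n] * S[n + 2] - S[n + 1] ** 2 == (-1) ** (n + 1)   # m = 1 does not
print("Hankel determinants vanish exactly from order m = 2 on")
```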
Proof: We may assume that the ${m \times m}$ determinants $\displaystyle \det ( S_{n+i+j} )_{0 \leq i,j \leq m-1} \ \ \ \ \ (9)$ are non-vanishing for infinitely many ${n}$ (this is a vacuous condition if ${m=0}$), since otherwise we can replace ${m}$ by ${m-1}$ in the hypotheses and conclusion. Let ${n}$ be large enough that (7) vanishes, and suppose that the determinant (9) vanishes for this value of ${n}$. We claim that the determinant $\displaystyle \det ( S_{n+1+i+j} )_{0 \leq i,j \leq m-1} \ \ \ \ \ (10)$ also vanishes; induction then shows that (9) vanishes for all sufficiently large ${n}$, a contradiction. To see why (10) vanishes, we argue as follows. As (9) vanishes, there is a non-trivial linear dependence among the ${m}$ rows of the matrix in (9). If this dependence does not involve the first row, then it also creates a non-trivial dependence among the first ${m-1}$ rows of the matrix (10), and we are done. Thus we may assume that the first row in (9) is a linear combination of the next ${m-1}$ rows. As a consequence, the first row in (7) is a linear combination of the next ${m-1}$ rows, plus a vector of the form ${(0,\dots,0,\beta)}$ for some ${\beta \in k}$. If ${\beta}$ is non-zero, then the row operations and cofactor expansion show that the determinant (7) is plus or minus ${\beta}$ times the determinant (10), giving the claim. If ${\beta}$ is instead zero, then the first ${m}$ rows of the matrix in (7) have a non-trivial linear dependence, which on deleting the first column shows that the ${m}$ rows of the matrix in (10) also have a non-trivial linear dependence, giving the claim. We thus conclude that (9) does not vanish for all sufficiently large ${n}$. In particular, the matrix in (7) always has rank ${m}$. An easy induction then shows that the row span of the matrix in (7) is a hyperplane in ${k^{m+1}}$ (spanned by either the first ${m}$ rows or the last ${m}$ rows), which is independent of ${n}$. 
Writing this hyperplane as ${\{ (x_0,\dots,x_m): a_0 x_0 + \dots + a_m x_m = 0 \}}$, we obtain the claim. $\Box$ Now let ${S_n}$ be as in Proposition 3 and Proposition 4, let ${m}$ be a large natural number to be chosen later, and consider the determinant (7). This is clearly a rational integer. On the one hand, from Proposition 3 we have the upper bound $\displaystyle |\det ( S_{n+i+j} )_{0 \leq i,j \leq m}|_\infty \leq C_m q^{A n (m+1)} \ \ \ \ \ (11)$ for all ${n}$ and some ${C_m, A>0}$, with ${A}$ independent of ${m}$. On the other hand, from Proposition 4 we can write each row in the ${m+1\times m+1}$ matrix in (7) (after applying the embedding ${\iota_p}$) as the linear combination of at most ${k}$ vectors of the form ${(1, \lambda, \dots, \lambda^m)}$ for various ${\lambda \in {\bf C}_p}$, plus an error vector whose coefficients all have norm at most ${q^{-(A+1)n}}$ (say), where ${k}$ is independent of ${m}$. Taking determinants, we conclude that $\displaystyle |\det ( S_{n+i+j} )_{0 \leq i,j \leq m}|_p \leq C'_m q^{-(A+1) n (m-k)} \ \ \ \ \ (12)$ for sufficiently large ${n,m}$ and some ${C'_m > 0}$. Inserting the two bounds (11), (12) into the uncertainty principle (6), we conclude the vanishing $\displaystyle \det ( S_{n+i+j} )_{0 \leq i,j \leq m} = 0$ for all sufficiently large ${n,m}$. Applying Lemma 9, we conclude that there exists a natural number ${m \geq 0}$ and rational coefficients ${a_0,\dots,a_m \in {\bf Q}}$, not all zero, such that $\displaystyle a_0 S_n + a_1 S_{n+1} + \dots + a_m S_{n+m} = 0 \ \ \ \ \ (13)$ for all sufficiently large ${n}$. By clearing denominators, we may assume that the ${a_i}$ are all rational integers. By deleting zero terms, we may assume that ${a_0}$ and ${a_m}$ are non-zero. We can use this recurrence to improve the conclusions of Proposition 4. 
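The uncertainty principle (6) that drove the vanishing argument above is also easy to verify concretely. The sketch below (an illustrative check with exact rational arithmetic, using trial division) computes ${|N|_p = p^{-v_p(N)}}$ and confirms both (6) and the stronger product formula ${|N|_\infty \prod_p |N|_p = 1}$ from the fundamental theorem of arithmetic:

```python
from fractions import Fraction

def p_adic_norm(N, p):
    # |N|_p = p^(-v), where p^v is the largest power of p dividing N
    N = abs(N)
    v = 0
    while N % p == 0:
        N //= p
        v += 1
    return Fraction(1, p ** v)

def prime_factors(N):
    # trial division; fine for small illustrative inputs
    N, p, out = abs(N), 2, set()
    while p * p <= N:
        while N % p == 0:
            out.add(p)
            N //= p
        p += 1
    if N > 1:
        out.add(N)
    return out

N = 360  # = 2^3 * 3^2 * 5
for p in (2, 3, 5, 7):
    assert abs(N) * p_adic_norm(N, p) >= 1  # the uncertainty principle (6)

# product formula: |N|_oo * prod_p |N|_p = 1 (primes not dividing N give 1)
prod = Fraction(abs(N))
for p in prime_factors(N):
    prod *= p_adic_norm(N, p)
assert prod == 1
```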
Observe from that proposition, after collecting like terms and absorbing any characteristic value ${\lambda}$ with ${|\lambda|_p \leq q^{-A}}$ into the error term, that for any ${A > 0}$ we can find a finite number of distinct characteristic values ${\lambda_1,\dots,\lambda_k \in {\bf C}_p}$ with ${|\lambda_i|_p > q^{-A}}$ for ${i=1,\dots,k}$, as well as non-zero integers ${c_1,\dots,c_k \in {\bf Z}}$, such that $\displaystyle |\iota_p(S_n) - \sum_{i=1}^k c_i \lambda_i^n|_p \leq q^{-An}$ for all ${n}$. Applying (13) to eliminate the ${S_n}$, we conclude that $\displaystyle |\sum_{i=1}^k c_i P(\lambda_i) \lambda_i^n|_p \leq q^{-An}$ for all sufficiently large ${n}$, where ${P}$ is the characteristic polynomial $\displaystyle P(z) := a_0 + a_1 z + \dots + a_m z^m.$ If one of the ${\lambda_i}$ is not a root of ${P}$, then by applying difference operators $\displaystyle \partial_\lambda S_n := S_{n+1} - \lambda S_n$ to eliminate all the other characteristic values, we eventually conclude that $\displaystyle |\lambda_i^n|_p \leq C_i q^{-An}$ for all sufficiently large ${n}$ and some ${C_i}$ independent of ${n}$, contradicting the hypothesis ${|\lambda_i|_p > q^{-A}}$. Thus all the ${\lambda_i}$ are zeroes of ${P}$ and in particular lie in ${\overline{{\bf Q}}}$. If we then let ${\zeta_1,\dots,\zeta_l \in \overline{{\bf Q}}}$ be an enumeration of the distinct zeroes of ${P}$ (which are all non-zero by the non-vanishing of ${a_0}$), and choose ${A}$ such that ${|\zeta_i|_p > q^{-A}}$ for all ${i=1,\dots,l}$, we conclude that for each ${A}$, there exist integers ${c_i = c_{i,A}}$ for ${i=1,\dots,l}$ such that $\displaystyle |S_n - \sum_{i=1}^l c_i \zeta_i^n|_p \leq q^{-An} \ \ \ \ \ (14)$ for all ${n}$.
The coefficients ${c_i}$ ostensibly depend on ${A}$, but a repetition of the above arguments shows that they are in fact independent of ${A}$, since given two ${A,A'}$ with ${|\zeta_i|_p > q^{-A} \geq q^{-A'}}$, we see from the triangle inequality that $\displaystyle |\sum_{i=1}^l (c_{i,A} - c_{i,A'}) \zeta_i^n|_p \leq q^{-An}$ for all ${n}$, and then by applying difference operators to isolate a single ${\zeta_i}$, we see that ${c_{i,A} = c_{i,A'}}$ for all ${i}$. (Note that this argument also gives the uniqueness of the characteristic values that was asserted in the introduction.) As the ${c_i}$ are independent of ${A}$, we may send ${A \rightarrow \infty}$ in (14), and conclude that $\displaystyle S_n = \sum_{i=1}^l c_i \zeta_i^n \ \ \ \ \ (15)$ for all ${n}$. We are now nearly done, except that the ${\zeta_i \in \overline{{\bf Q}}}$ are algebraic numbers rather than algebraic integers. However, as the ${S_n}$ are rational integers, we have ${|S_n|_\ell \leq 1}$ for all ${n}$ and ${\ell}$, and applying difference operators to (15) to isolate ${\zeta_i}$ we conclude that ${|\zeta_i|_\ell \leq 1}$ for all ${i}$ and all ${\ell}$. As the characteristic values are closed under the absolute Galois group of ${{\bf Q}}$, we conclude that all Galois conjugates of ${\zeta_i}$ also have ${\ell}$-adic norm at most one, so the minimal polynomial of ${\zeta_i}$ over ${{\bf Q}}$ has coefficients that are rational and have ${\ell}$-adic norm at most one for every ${\ell}$, and are thus rational integers, so that ${\zeta_i}$ is an algebraic integer as required. Theorem 1 follows. Remark 2 The argument above has been slightly rearranged from the standard argument in the literature, in which one establishes rationality of the zeta function ${\exp(\sum_{n=1}^\infty \frac{S_n}{n} T^n)}$ directly, rather than first establishing rationality of the generating function ${\sum_{n=1}^\infty S_n T^n}$ (which is essentially the logarithmic derivative of the zeta function).
The reason I did so was to highlight the fact that transcendental operations such as exponentiation do not play a role in this portion of the argument, in contrast to Propositions 5 and 6, which crucially exploit the properties of the exponential function. — 3. The ${p}$-adic Weierstrass preparation theorem — Now we prove Proposition 6. We begin with a theorem somewhat analogous to Rouche’s theorem in complex analysis, which approximately locates a zero of an entire function that is dominated by a monomial ${a_m T^m}$. Lemma 10 (Rouche-type theorem) Let $\displaystyle f(T) = 1 + a_1 T + a_2 T^2 + \dots$ be an entire function on ${{\bf C}_p}$, thus ${a_1,a_2,\dots \in {\bf C}_p}$ and ${|a_n|_p^{1/n} \rightarrow 0}$ as ${n \rightarrow \infty}$. Suppose that ${|a_m|_p^{1/m} \geq 1/R}$ for some ${m \geq 1}$ and ${R>0}$. Then there exists a root ${z \in {\bf C}_p}$ of ${f}$ (thus ${f(z)=0}$) with ${|z|_p \leq R}$. Proof: We first consider the polynomials $\displaystyle f_n(T) = 1 + a_1 T + \dots + a_n T^n$ for some ${n \geq m}$. As ${{\bf C}_p}$ is algebraically closed, there must be a factorisation $\displaystyle f_n(T) = (1-\beta_{1,n} T) \dots (1-\beta_{n,n} T)$ for some ${\beta_{1,n},\dots,\beta_{n,n} \in {\bf C}_p}$, thus ${a_m}$ is plus or minus the ${m^{th}}$ symmetric polynomial of the ${\beta_{i,n}}$. Since ${|a_m|_p \geq 1/R^m}$, we conclude from the non-archimedean nature of the norm that ${|\beta_{i,n}|_p \geq 1/R}$ for at least one ${i}$. Similarly, given any ${R'>0}$, if there are exactly ${k}$ indices ${i}$ for which ${|\beta_{i,n}|_p \geq 1/R'}$, then by computing the ${k^{th}}$ symmetric polynomial we conclude that ${|a_k|_p \geq (1/R')^k}$. Since ${|a_n|_p^{1/n}}$ goes to zero, we conclude that for any ${R'}$, the number of ${i}$ for which ${|\beta_{i,n}|_p \geq 1/R'}$ is bounded uniformly in ${n}$; the same argument shows that the ${|\beta_{i,n}|_p}$ are uniformly bounded away from zero. Now we run an argument somewhat similar to the proof of Lemma 7.
Let ${n, n'}$ be large natural numbers, and let ${i=i_n}$ be such that ${|\beta_{i,n}|_p \geq 1/R}$. We have $\displaystyle f_n( \frac{1}{\beta_{i,n}} ) = 0$ and hence (since ${|a_n|_p^{1/n} \rightarrow 0}$) $\displaystyle f_{n'}( \frac{1}{\beta_{i,n}} ) \rightarrow 0$ as ${n,n' \rightarrow \infty}$; thus $\displaystyle \prod_{j=1}^{n'} | 1 - \frac{\beta_{j,n'}}{\beta_{i,n}}|_p \rightarrow 0.$ On the other hand, since ${|\beta_{i,n}|_p \geq 1/R}$, and since ${|\beta_{j,n'}|_p < 1/R}$ for all but a bounded number of ${j}$, we see from the non-archimedean nature of the norm that ${| 1 - \frac{\beta_{j,n'}}{\beta_{i,n}}|_p = 1}$ for all but a bounded number of ${j}$. Since the ${\beta_{j,n'}}$ are also uniformly bounded away from zero, we conclude that $\displaystyle \inf_{1 \leq j \leq n'} |\beta_{i,n} - \beta_{j,n'}|_p \rightarrow 0$ as ${n,n' \rightarrow \infty}$. From this, we can form a Cauchy sequence ${\beta_{i_k,n_k}}$ such that ${n_k \rightarrow \infty}$ and ${|\beta_{i_k,n_k}|_p \geq 1/R}$; taking limits, we obtain ${\beta \in {\bf C}_p}$ with ${|\beta|_p \geq 1/R}$ such that ${f( \frac{1}{\beta} ) = 0}$, giving the claim. $\Box$ One can refine the methods in this proof to read off the ${p}$-adic magnitudes of all the zeroes of ${f}$ in terms of the Newton polytope of ${f}$ (yielding a ${p}$-adic analogue of Jensen’s formula from complex analysis), but we will not need to do so here. By iteratively removing the zeroes generated by the above lemma, we have Proposition 11 (${p}$-adic Weierstrass preparation theorem, alternate form) Let $\displaystyle f(T) = 1 + a_1 T + a_2 T^2 + \dots$ be an entire function on ${{\bf C}_p}$, and let ${R>0}$. Then there exists a factorisation $\displaystyle f(T) = (\prod_{i=1}^m (1-\beta_i T)) g(T)$ where ${\beta_1,\dots,\beta_m \in {\bf C}_p}$ and $\displaystyle g(T) = 1 + b_1 T + b_2 T^2 + \dots$ is an entire function such that ${|b_n|_p \leq R^{-n}}$ for all ${n}$.
Proof: We can make ${R}$ a rational power of ${p}$, and then by rescaling we may normalise ${R=1}$. Since ${|a_n|_p^{1/n} \rightarrow 0}$, the function ${R^n |a_n|_p}$ goes to zero as ${n \rightarrow \infty}$, and so there exists a natural number ${m \geq 0}$ such that $\displaystyle |a_n|_p \leq |a_m|_p \ \ \ \ \ (16)$ for all ${n \geq 0}$, with the convention that ${a_0=1}$, and with strict inequality if ${n < m}$. We now induct on ${m}$. If ${m=0}$ then we are already done (setting ${g=f}$). Now suppose that ${m \geq 1}$, and that the claim has already been proven for ${m-1}$. From the strict form of (16) with ${n=0}$ we have ${|a_m|_p > 1}$, so by Lemma 10 we can find ${\beta \in {\bf C}_p}$ with ${|\beta|_p > 1}$ such that ${f(1/\beta)=0}$. We can then factor $\displaystyle f(T) = (1-\beta T) f'(T)$ where $\displaystyle f'(T) = 1 + a'_1 T + a'_2 T^2 + \dots$ and $\displaystyle a'_n = \beta^n + a_1 \beta^{n-1} + \dots + a_n.$ Since ${f(1/\beta)=0}$, we also have $\displaystyle a'_n = - \beta^{-1} a_{n+1} - \beta^{-2} a_{n+2} - \dots.$ Using (16) and the non-archimedean property, one easily verifies that $\displaystyle |a'_n|_p \leq \frac{|a_m|_p}{|\beta|_p} = |a'_{m-1}|_p$ for all ${n \geq 0}$, with strict inequality if ${n < m-1}$. Applying the induction hypothesis to ${f'}$, we obtain the claim. $\Box$ Now we prove Proposition 6. Let ${S'_n}$ be as in that proposition, and let ${A>0}$ be arbitrary. By Proposition 11 we have $\displaystyle \exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n ) = (\prod_{i=1}^{k} (1-\beta_i T)) g(T)$ for some ${\beta_1,\dots,\beta_k \in {\bf C}_p}$ and some entire function $\displaystyle g(T) = 1 + b_1 T + b_2 T^2 + \dots$ with ${|b_n|_p \leq q^{-An}}$ for all ${n}$. Now we apply formal logarithms $\displaystyle \log(1-X) := - X - \frac{X^2}{2} - \frac{X^3}{3} - \dots$ to both sides.
Clearly ${\log(\exp(f(T))) = f(T)}$ for any formal power series ${f(T)}$ with ${f(0)=0}$ that actually converges in ${{\bf C}}$; comparing coefficients, we conclude that the formal identity ${\log(\exp(f(T)))=f(T)}$ holds in any characteristic zero field. For similar reasons we have ${\log(f(T) g(T)) = \log(f(T))+\log(g(T))}$ for any formal power series ${f(T), g(T)}$ with ${f(0)=g(0)=1}$ with coefficients in a characteristic zero field. We conclude that $\displaystyle \sum_{n=1}^\infty \frac{S'_n}{n} T^n = - \sum_{n=1}^\infty \sum_{i=1}^k \frac{\beta_i^n}{n} T^n + \log g(T).$ But by working out the power series, we see that $\displaystyle \log g(T) = c_1 T + c_2 T^2 + \dots$ where the coefficients ${c_n}$ obey the bounds $\displaystyle |c_n|_p \leq C^n q^{-An}$ for some constant ${C}$ independent of ${A}$. The claim then follows after increasing ${A}$ as necessary. — 4. Factorising the zeta function — Now we establish Proposition 5, which is the most “number-theoretical” component of Dwork’s argument. First observe that by covering the quasiprojective variety ${V}$ by affine pieces, and using an induction on the dimension of ${V}$ to take care of any double-counted terms, we may reduce to the case when ${V}$ is an affine variety as in (1), thus $\displaystyle S_n = | \{ x \in {\bf F}_{q^n}^d: P_1(x) = \dots = P_m(x) = 0 \}|.$ We can view ${S_n}$ as a sum ${S_n= \sum_{x \in V[{\bf F}_{q^n}]} 1}$ over the affine variety ${V}$. The next step is Fourier expansion in order to “complete” the sum ${S_n}$ into exponential sums over an ambient affine space. Write ${q=p^r}$. For any ${n \geq 1}$ define the trace map ${\hbox{Tr}_n: {\bf F}_{p^n} \rightarrow {\bf F}_p}$ by the formula $\displaystyle \hbox{Tr}_n( x ) := x + x^p + x^{p^2} + \dots + x^{p^{n-1}}.$ This is a linear map over ${{\bf F}_p}$. Let ${\epsilon}$ be a primitive ${p^{th}}$ root of unity in ${{\bf C}_p}$.
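As a concrete toy illustration of the trace map (not taken from the argument itself): in ${{\bf F}_9 = {\bf F}_3[x]/(x^2+1)}$, the trace ${\hbox{Tr}_2(a) = a + a^3}$ of every element lands in the prime field ${{\bf F}_3}$, and the map is surjective. This can be checked by brute force:

```python
p = 3  # work in F_9 = F_3[x]/(x^2 + 1); elements are pairs (c0, c1) = c0 + c1*x

def mul(a, b):
    # multiply in F_3[x]/(x^2 + 1), using x^2 = -1
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 - a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def frobenius(a):
    # a -> a^p = a^3 (the Frobenius map over F_3)
    return mul(mul(a, a), a)

def trace(a):
    # Tr_2(a) = a + a^3
    f = frobenius(a)
    return ((a[0] + f[0]) % p, (a[1] + f[1]) % p)

elements = [(c0, c1) for c0 in range(p) for c1 in range(p)]
traces = [trace(a) for a in elements]

# every trace lies in the prime field F_3 (no x-component) ...
assert all(t[1] == 0 for t in traces)
# ... and the trace map is surjective onto F_3
assert {t[0] for t in traces} == {0, 1, 2}
```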
Then from Fourier analysis we see that for any ${x \in {\bf F}_{q^n}^d = {\bf F}_{p^{nr}}^d}$, the sum $\displaystyle \sum_{y_1,\dots,y_m \in {\bf F}_{q^n}} \epsilon^{\hbox{Tr}_{nr}( y_1 P_1(x) + \dots + y_m P_m(x) )}$ is equal to ${q^{mn}}$ if ${P_1(x)=\dots=P_m(x)=0}$ and equal to zero otherwise. Hence $\displaystyle q^{mn} S_n = q^{mn} \iota_p(S_n) = \sum_{(x,y_1,\dots,y_m) \in {\bf F}_{q^n}^{d+m}} \epsilon^{\hbox{Tr}_{nr}( y_1 P_1(x) + \dots + y_m P_m(x) )}.$ In view of this (replacing ${d+m}$ by ${d}$, and rescaling the zeta function), it suffices to show that for any polynomial ${P: {\bf A}^d \rightarrow {\bf A}}$ defined over ${{\bf F}_q}$, one can decompose the sequence $\displaystyle n \mapsto \sum_{x \in {\bf F}_{q^n}^d} \epsilon^{\hbox{Tr}_{nr}( P(x) )}$ as a finite linear combination over ${{\bf Z}}$ of sequences ${S'_n \in {\bf C}_p}$ with ${\exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n )}$ entire. It is convenient to remove the coordinate hyperplanes. Note that ${{\bf F}_{q^n}^d}$ splits as the space ${({\bf F}_{q^n}^\times)^d}$ plus some lower-dimensional spaces, where ${{\bf F}_{q^n}^\times := {\bf F}_{q^n} \backslash \{0\}}$. By an induction on dimension, it thus suffices to show that the sequence $\displaystyle n \mapsto \sum_{x \in ({\bf F}_{q^n}^\times)^d} \epsilon^{\hbox{Tr}_{nr}( P(x) )}$ decomposes as a finite linear combination over ${{\bf Z}}$ of sequences ${S'_n \in {\bf C}_p}$ with ${\exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n )}$ entire.
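The completion-of-sums step above rests on the orthogonality of additive characters: over the prime field ${{\bf F}_p}$ (where the trace is the identity), ${\sum_{y \in {\bf F}_p} \epsilon^{yt}}$ equals ${p}$ when ${t=0}$ and vanishes otherwise. A quick floating-point check with ${\epsilon = e^{2\pi i/p}}$ (a sketch for the prime-field case only; the general case replaces ${yt}$ by ${\hbox{Tr}_{nr}(yt)}$):

```python
import cmath

p = 7
eps = cmath.exp(2j * cmath.pi / p)  # a primitive p-th root of unity in C

def char_sum(t):
    # sum over y in F_p of eps^(y*t)
    return sum(eps ** ((y * t) % p) for y in range(p))

# t = 0: every summand is 1, so the sum is p
assert abs(char_sum(0) - p) < 1e-9
# t != 0: y*t ranges over all of F_p, and the p-th roots of unity sum to 0
for t in range(1, p):
    assert abs(char_sum(t)) < 1e-9
```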
To prove this, we will establish the following trace formula: Theorem 12 (Trace formula) There exists a formal power series $\displaystyle G(X_1,\dots,X_d) = \sum_{w_1,\dots,w_d \geq 0} c_{w_1,\dots,w_d} X_1^{w_1} \dots X_d^{w_d}$ in ${d}$ variables with coefficients ${c_{w_1,\dots,w_d}}$ in ${{\bf C}_p}$, or more compactly $\displaystyle G(X) = \sum_{w \in {\bf N}^d} c_w X^w,$ such that $\displaystyle |c_w|_p \leq p^{-M|w|} \ \ \ \ \ (17)$ for all ${w \in {\bf N}^d}$ and some ${M>0}$ (where ${|w| := w_1+\dots+w_d}$), such that one has the trace formula $\displaystyle \sum_{x \in ({\bf F}_{q^n}^\times)^d} \epsilon^{\hbox{Tr}_{nr}( P(x) )} = (q^n-1)^d \hbox{tr}_R( \Psi^n ) \ \ \ \ \ (18)$ for all ${n \geq 1}$, where ${\Psi = \Psi_{q,G}: R \rightarrow R}$ is the ${{\bf C}_p}$-linear map on the (infinite-dimensional) vector space ${R}$ of all formal power series ${F(X) = \sum_{u \in {\bf N}^d} a_u X^u}$ in ${d}$ variables defined by $\displaystyle \Psi(F(X)) = T_q( G(X) F(X) ) \ \ \ \ \ (19)$ where ${T_q: R \rightarrow R}$ is the linear map $\displaystyle T_q( \sum_{u \in {\bf N}^d} a_u X^u ) := \sum_{u \in {\bf N}^d} a_{qu} X^u$ and the trace ${\hbox{tr}_R}$ on ${R}$ is computed using the monomial basis ${X^u}$ of ${R}$, thus if $\displaystyle \Psi^m( X^u ) = \sum_{v \in {\bf N}^d} c^{(m)}_{uv} X^v$ then $\displaystyle \hbox{tr}_R(\Psi^m) := \sum_{u \in {\bf N}^d} c^{(m)}_{uu};$ one can easily verify that this sum is convergent. (We will not address the subtle issue as to whether trace is a basis-independent concept in infinite dimensions.) Let us assume this trace formula for the moment and conclude the proof of Proposition 5.
Expanding out the ${(q^n-1)^d}$ factor in (18) and arguing as before, it will suffice to show that the zeta function $\displaystyle \exp( - \sum_{n=1}^\infty \frac{\hbox{tr}_R( \Psi^n ) T^n}{n} ) \ \ \ \ \ (20)$ is entire; thus if ${b_m}$ is the ${T^m}$ coefficient of this zeta function, our task is to show that $\displaystyle |b_m|_p^{1/m} \rightarrow 0 \ \ \ \ \ (21)$ as ${m \rightarrow \infty}$. Note from the formal identity $\displaystyle 1 - X = \exp( - \sum_{n=1}^\infty \frac{X^n}{n} )$ (which is true for small complex ${X}$, and is thus also true for formal power series in characteristic zero) and the Jordan normal form that $\displaystyle \det( 1 - AT ) = \exp( - \sum_{n=1}^\infty \frac{\hbox{tr}(A^n) T^n}{n} )$ on the level of formal power series in ${T}$ for any finite-dimensional matrix ${A = (a_{ij})_{1 \leq i,j \leq k}}$ in characteristic zero. In particular, for any natural number ${m}$, the ${T^m}$ coefficient of ${\exp( - \sum_{n=1}^\infty \frac{\hbox{tr}(A^n) T^n}{n} )}$ is given by the formula $\displaystyle (-1)^m \sum_* \hbox{sgn}(\sigma) a_{i_1,\sigma(i_1)} \dots a_{i_m,\sigma(i_m)}$ where the sum ${\sum_*}$ ranges over distinct elements ${i_1,\dots,i_m}$ of ${\{1,\dots,k\}}$, and over permutations ${\sigma}$ of ${\{i_1,\dots,i_m\}}$. This is a universal polynomial identity in characteristic zero, and so we conclude that the ${T^m}$ coefficient ${b_m}$ of the zeta function (20) is given by the formula $\displaystyle b_m = (-1)^m \sum_* \hbox{sgn}(\sigma) c^{(1)}_{i_1,\sigma(i_1)} \dots c^{(1)}_{i_m,\sigma(i_m)}$ where ${i_1,\dots,i_m}$ now range over distinct elements of ${{\bf N}^d}$, and ${\sigma}$ ranges over permutations of ${\{i_1,\dots,i_m\}}$; again, one can check that this sum is convergent. By the non-archimedean nature of the metric, it thus suffices to show that $\displaystyle \sup_* (|c^{(1)}_{i_1,\sigma(i_1)}|_p \dots |c^{(1)}_{i_m,\sigma(i_m)}|_p)^{1/m} \rightarrow 0$ as ${m \rightarrow \infty}$.
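The determinant identity ${\det(1-AT) = \exp(-\sum_{n \geq 1} \hbox{tr}(A^n) T^n/n)}$ used above can be checked for a small matrix by expanding both sides as truncated power series in ${T}$ with exact rational arithmetic (an illustration only; the ${2 \times 2}$ matrix is arbitrary):

```python
from fractions import Fraction

A = [[1, 2], [3, 4]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr_pow(A, n):
    # trace of A^n
    M = [[1, 0], [0, 1]]
    for _ in range(n):
        M = mat_mul(M, A)
    return M[0][0] + M[1][1]

N = 5  # truncation order
# g(T) = -sum_{n>=1} tr(A^n) T^n / n
g = [Fraction(0)] + [Fraction(-tr_pow(A, n), n) for n in range(1, N + 1)]
# f = exp(g) coefficient-by-coefficient via the recurrence f' = g' f,
# i.e. n * f_n = sum_{k=1..n} k * g_k * f_{n-k}
f = [Fraction(1)] + [Fraction(0)] * N
for n in range(1, N + 1):
    f[n] = sum(k * g[k] * f[n - k] for k in range(1, n + 1)) / n

# det(1 - AT) = 1 - tr(A) T + det(A) T^2 = 1 - 5T - 2T^2 for this A,
# and all higher coefficients of the exponential side vanish
assert f == [1, -5, -2, 0, 0, 0]
```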
Now from (17) and construction of ${\Psi}$, we have $\displaystyle |c^{(1)}_{ij}|_p \leq p^{-M (q|i| - |j|)}$ for any ${i,j}$, and so (as ${\sigma}$ is a permutation) $\displaystyle (|c^{(1)}_{i_1,\sigma(i_1)}|_p \dots |c^{(1)}_{i_m,\sigma(i_m)}|_p)^{1/m} \leq p^{-M (q-1) \frac{1}{m} (|i_1| + \dots + |i_m|)}.$ But because there are only a finite number of elements ${i}$ of ${{\bf N}^d}$ of a given length ${|i|}$, we see that ${|i_1| + \dots + |i_m|}$ grows superlinearly in ${m}$ (in fact it must be ${\gg m^{1+\frac{1}{d}}}$), and the claim follows. It remains to establish the trace formula (18). We first write the trace ${\hbox{tr}_R( \Psi^n )}$ in a more tractable form. For any natural number ${k}$, let ${\mu_k := \{ z \in {\bf C}_p: z^k = 1 \}}$ denote the group of ${k^{th}}$ roots of unity. Lemma 13 If $\displaystyle G(X) = \sum_{w \in {\bf N}^d} c_w X^w$ is a power series with ${|c_w|_p \leq p^{-M|w|}}$ for some ${M>0}$ and all ${w}$, then for any ${n \geq 1}$ we have $\displaystyle (q^n-1)^d \hbox{tr}_R( \Psi^n ) = \sum_{x \in \mu_{q^n-1}^d} G(x) G(x^q) \dots G(x^{q^{n-1}})$ where we use the notation $\displaystyle (x_1,\dots,x_d)^q := (x_1^q,\dots,x_d^q).$ Note that the power series for ${G}$ converges at all roots of unity. Proof: Observe that $\displaystyle H(X) T_q( F(X) ) = T_q( H(X^q) F(X) )$ for any ${F,H \in R}$.
Iterating this using (19), we conclude the identity $\displaystyle \Psi_{q,G}^n( F(X) ) = T_{q^n}( G(X^{q^{n-1}}) \dots G(X^q) G(X) F(X) )$ $\displaystyle = \Psi_{q^n, G(X) \dots G(X^{q^{n-1}})}( F(X) )$ and so to prove the lemma it suffices to do so in the ${n=1}$ case, that is to say $\displaystyle (q-1)^d \hbox{tr}_R( \Psi ) = \sum_{x \in \mu_{q-1}^d} G(x).$ The right-hand side expands as $\displaystyle \sum_{w \in {\bf N}^d} c_w \sum_{x \in \mu_{q-1}^d} x^w.$ From Fourier analysis we see that ${\sum_{x \in \mu_{q-1}^d} x^w}$ equals ${(q-1)^d}$ when ${w}$ is a multiple of ${q-1}$ and zero otherwise, so the sum simplifies to $\displaystyle (q-1)^d \sum_{w \in {\bf N}^d} c_{qw-w}.$ The claim follows. $\Box$ In view of this lemma, it will suffice to obtain an identity of the form $\displaystyle \sum_{x \in ({\bf F}_{q^n}^\times)^d} \epsilon^{\hbox{Tr}_{nr}( P(x) )} = \sum_{x \in \mu_{q^n-1}^d} G(x) G(x^p) \dots G(x^{p^{nr-1}}) \ \ \ \ \ (22)$ for some power series $\displaystyle G(X) = \sum_{w \in {\bf N}^d} c_w X^w$ with ${|c_w|_p \leq p^{-M|w|}}$ for some ${M>0}$ and all ${w}$, since the claim will then follow with ${G}$ replaced by the power series $\displaystyle X \mapsto G(X) G(X^p) \dots G(X^{p^{r-1}}).$ This will be deduced from the following basic fact in ${p}$-adic analysis, namely the existence of a canonical multiplicative embedding of the algebraic closure ${\overline{{\bf F}_p}}$ of ${{\bf F}_p}$ inside ${{\bf C}_p}$. Lemma 14 (Teichmuller lifting) Let ${\overline{{\bf F}_p} = \bigcup_{n=1}^\infty {\bf F}_{p^n}}$ be the algebraic closure of ${{\bf F}_p}$. Then there exists a map ${\tau: \overline{{\bf F}_p}^\times \rightarrow {\bf C}_p^\times}$, known as the Teichmuller lift, with the following properties: (i) ${\tau}$ is a multiplicative homomorphism, thus $\displaystyle \tau(xy) = \tau(x) \tau(y) \ \ \ \ \ (23)$ for all ${x,y \in \overline{{\bf F}_p}^\times}$, which maps ${{\bf F}_{p^n}^\times}$ bijectively onto ${\mu_{p^n-1}}$ for every ${n \geq 1}$. (ii) There exists a formal power series ${\Theta(T) = \sum_{i=0}^\infty \theta_i T^i}$ with coefficients ${\theta_i \in {\bf C}_p}$ obeying the bounds ${|\theta_i|_p \leq p^{-i/(p-1)}}$, such that $\displaystyle \epsilon^{\hbox{Tr}_n(x)} = \Theta(\tau(x)) \Theta(\tau(x)^p) \dots \Theta(\tau(x)^{p^{n-1}}) \ \ \ \ \ (24)$ for all ${n \geq 1}$ and ${x \in {\bf F}_{p^n}^\times}$. Let us see how the lemma implies an identity of the form (22).
Writing $\displaystyle P(x) = \sum_w a_w x^w$ for some finite set of multi-indices ${w}$ and coefficients ${a_w \in {\bf F}_q}$, we have from the linearity of trace that $\displaystyle \epsilon^{\hbox{Tr}_{nr}(P(x))} = \prod_w \epsilon^{\hbox{Tr}_{nr}(a_w x^w)}$ and hence by (24) we have $\displaystyle \epsilon^{\hbox{Tr}_{nr}(P(x))} = G(\tau(x)) G(\tau(x)^p) \dots G(\tau(x)^{p^{nr-1}})$ where $\displaystyle G(z) := \prod_w \Theta(\tau(a_w) z^w)$ and ${\tau(x_1,\dots,x_d) := (\tau(x_1),\dots,\tau(x_d))}$. Note that the required decay of the coefficients of ${G}$ follows from that of ${\Theta}$, since the ${\tau(a_w)}$ have unit ${p}$-norm. The claim now follows from the bijective nature of the Teichmuller lift. The only remaining task is to establish Lemma 14; here I will follow the exposition of Koblitz. We begin by constructing the Teichmuller lift ${\tau: {\bf F}_{p^n}^\times \rightarrow \mu_{p^n-1}}$ for a given ${n \geq 1}$. Let ${\alpha}$ be a primitive element of ${{\bf F}_{p^n}^\times}$, thus $\displaystyle {\bf F}_{p^n}^\times = \{ 1, \alpha, \alpha^2, \dots, \alpha^{p^n-2} \}.$ (The existence of such a primitive element can be seen by counting how many elements of ${{\bf F}_{p^n}^\times}$ have order strictly less than ${p^n-1}$.) The minimal polynomial of ${\alpha}$ over ${{\bf F}_p}$ thus has degree ${n}$, that is to say it is of the form $\displaystyle P(x) = x^n + a_{n-1} x^{n-1} + \dots + a_0$ for some ${a_0,\dots,a_{n-1} \in {\bf F}_p}$. We arbitrarily lift this to the ${p}$-adic integers ${{\bf Z}_p}$ as $\displaystyle \tilde P(x) = x^n + \tilde a_{n-1} x^{n-1} + \dots + \tilde a_0$ where ${\tilde a_0,\dots,\tilde a_{n-1} \in {\bf Z}_p}$ reduce to ${a_0,\dots,a_{n-1}}$ modulo ${p}$.
Since the minimal polynomial ${P}$ is irreducible in ${{\bf F}_p}$, the lift ${\tilde P}$ is irreducible in ${{\bf Z}_p}$ and hence also in ${{\bf Q}_p}$ (here we use Lemma 10 to reach a contradiction if a monic factor of ${\tilde P}$ has a coefficient of ${p}$-norm greater than ${1}$). Thus, if we let ${\alpha \in \overline{{\bf Q}_p}}$ be a root of ${\tilde P}$, then ${|\alpha|_p \leq 1}$ and ${{\bf Q}_p(\alpha)}$ is a degree ${n}$ extension of ${{\bf Q}_p}$. In this field, we define the valuation ring ${A := \{ x \in {\bf Q}_p(\alpha): |x|_p \leq 1\}}$ and its maximal ideal ${M := \{ x \in {\bf Q}_p(\alpha): |x|_p < 1 \}}$. Then ${A/M}$ is a field generated over ${{\bf F}_p}$ by ${\alpha \hbox{ mod } M}$, which is a root of ${P}$, thus ${A/M}$ is a degree ${n}$ extension of ${{\bf F}_p}$ and may be identified with ${{\bf F}_{p^n}}$. We claim that the field extension ${{\bf Q}_p(\alpha)}$ is unramified in the sense that all of the non-zero elements of ${{\bf Q}_p(\alpha)}$ have norms that are integer powers of ${p}$, and in particular that ${M = pA}$. Suppose this were not the case; then there exists an element ${\pi}$ of ${{\bf Q}_p(\alpha)}$ with ${1/p < |\pi|_p < 1}$. If one lets ${e_1,\dots,e_n}$ be a linear basis of ${{\bf F}_{p^n}}$ over ${{\bf F}_p}$, and lets ${\tilde e_1,\dots,\tilde e_n}$ be representatives of this basis in ${A}$, one can then show that ${\tilde e_1,\dots,\tilde e_n, \pi \tilde e_1,\dots,\pi \tilde e_n}$ are linearly independent over ${{\bf Q}_p}$, contradicting the fact that ${{\bf Q}_p(\alpha)}$ is a degree ${n}$ extension. Now let ${x}$ be an element of ${{\bf F}_{p^n}^\times}$, so that ${x^{p^n-1}=1}$. As discussed earlier, we can view ${x}$ as an element of ${A/M = A/pA}$. By applying (a slight variant of) Hensel’s lemma, we can find a lift ${\tau(x) \in A}$ of ${x}$ such that ${\tau(x)^{p^n-1} = 1}$.
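For the prime field ${{\bf F}_p}$ (the case ${n=1}$), the Hensel lift above can be computed quite explicitly in ${{\bf Z}/p^m}$: iterating the Frobenius ${a \mapsto a^p}$ converges ${p}$-adically, and the limit is the unique ${(p-1)^{st}}$ root of unity congruent to ${a}$ mod ${p}$. A small sketch, working modulo ${7^5}$:

```python
p, m = 7, 5
mod = p ** m

def teichmuller(a):
    # iterate a -> a^p until it stabilises mod p^m; since x = y (mod p^j)
    # implies x^p = y^p (mod p^{j+1}), m-1 iterations suffice
    t = a % mod
    for _ in range(m - 1):
        t = pow(t, p, mod)
    return t

t = teichmuller(2)
assert t % p == 2                 # t lifts 2 in F_7
assert pow(t, p - 1, mod) == 1    # t is a (p-1)-st root of unity in Z/7^5
assert pow(t, p, mod) == t        # t is fixed by Frobenius: fully stable
```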
This gives an injective map from ${{\bf F}_{p^n}^\times}$ to ${\mu_{p^n-1}}$, which on comparing cardinalities must be a bijection. Since the quotient map from ${A}$ to ${A/pA}$ is a homomorphism, we see that ${\tau}$ is a homomorphism. One can check that the maps ${\tau: {\bf F}_{p^n}^\times \rightarrow {\bf C}_p^\times}$ for different ${n}$ are compatible, and glue together to form a single map ${\tau: \overline{{\bf F}_p}^\times \rightarrow {\bf C}_p^\times}$ obeying the homomorphism and bijection properties. Now we have to construct ${\Theta}$. We first give a heuristic discussion. From the construction of ${\tau}$, we morally have ${\tau(x) = x \hbox{ mod } p}$ for all ${x \in \overline{{\bf F}_p}^\times}$, where we are deliberately vague as to what “${\hbox{ mod } p}$” means. Since the map ${x \mapsto \epsilon^x}$ should morally be periodic modulo ${p}$, we thus expect $\displaystyle \epsilon^{\hbox{Tr}_n(x)} = \epsilon^{x + x^p + \dots + x^{p^{n-1}}}$ $\displaystyle = \epsilon^{\tau(x) + \tau(x^p) + \dots + \tau(x^{p^{n-1}})}$ $\displaystyle = \epsilon^{\tau(x)} \epsilon^{\tau(x^p)} \dots \epsilon^{\tau(x^{p^{n-1}})}$ and so one is led to the initial guess $\displaystyle \Theta(T) ?= \epsilon^T$ for ${\Theta}$. To make this heuristic discussion rigorous, we have to formally define what ${\epsilon^T}$ means as a power series in ${T}$. We write ${\epsilon = 1+\lambda}$, thus ${\lambda \neq 0}$ and $\displaystyle (1+\lambda)^p = 1;$ so that $\displaystyle p + \binom{p}{2} \lambda + \dots + \binom{p}{p-1} \lambda^{p-2} + \lambda^{p-1} = 0;$ thus the ${p-1}$ Galois conjugates of ${\lambda}$ multiply to ${\pm p}$, and so ${|\lambda|_p = p^{-\frac{1}{p-1}}}$. We can then define ${\epsilon^T = (1+\lambda)^T}$ by formal binomial expansion as $\displaystyle (1+\lambda)^T := \sum_{i=0}^\infty \frac{T (T-1) \dots (T-i+1)}{i!} \lambda^i.$ This is well-defined (over ${{\bf C}_p}$) as a formal power series in ${T}$. 
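The factorials in these denominators have ${p}$-adic valuation given by Legendre's formula ${v_p(i!) = \frac{i - S_i}{p-1}}$, where ${S_i}$ is the sum of the base ${p}$ digits of ${i}$; this is the standard computation invoked in the next paragraph. A quick brute-force verification:

```python
def v_p_factorial(i, p):
    # v_p(i!) computed directly from the definition of the factorial
    v = 0
    for k in range(2, i + 1):
        n = k
        while n % p == 0:
            v += 1
            n //= p
    return v

def digit_sum(i, p):
    # S_i: sum of the digits of i in base p
    s = 0
    while i:
        s += i % p
        i //= p
    return s

# Legendre's formula: v_p(i!) = (i - S_i) / (p - 1)
for p in (2, 3, 5, 7):
    for i in range(200):
        assert v_p_factorial(i, p) == (i - digit_sum(i, p)) // (p - 1)
```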
However, the convergence properties are bad, because of the denominator ${i!}$. Indeed, a standard computation shows that $\displaystyle | \frac{1}{i!} |_p = p^{\frac{i-S_i}{p-1}}$ where ${S_i}$ is the sum of the digits of the base ${p}$ expansion of ${i}$, and so $\displaystyle | \frac{\lambda^i}{i!} |_p = p^{\frac{-S_i}{p-1}}.$ The sequence ${S_i}$ does not go to infinity as ${i \rightarrow \infty}$, and so the power series ${(1+\lambda)^T}$ does not converge for ${|T|_p \geq 1}$. This is a problem, since we want to apply ${\Theta}$ to norm one quantities such as ${\tau(x^{p^j})}$, and furthermore we are claiming a slightly larger radius of convergence in Lemma 14, namely (almost) ${p^{1/(p-1)}}$. It turns out that there is a way to tweak the series ${T \mapsto (1+\lambda)^T}$ to significantly improve the ${p}$-adic convergence behaviour. Namely, we (formally) define the corrected function $\displaystyle \Theta(T) := F(T,\lambda)$ where ${F(T,Y)}$ is the formal power series in two variables ${T,Y}$ defined by $\displaystyle F(T,Y) := (1+Y)^T \prod_{j=1}^\infty (1+Y^{p^j})^{(T^{p^j} - T^{p^{j-1}})/p^j}; \ \ \ \ \ (25)$ one can verify that this is well-defined as a formal power series, and for fixed ${t \in {\bf C}_p}$, ${F(t,Y)}$ is a formal power series in ${Y}$. By telescoping series, we have $\displaystyle F(t,Y) F(t^p,Y) \dots F(t^{p^{n-1}},Y) = (1+Y)^{t+t^p+\dots+t^{p^{n-1}}}$ as a formal power series, whenever ${t \in \mu_{p^n-1}}$. In particular, $\displaystyle F(\tau(x),Y) F(\tau(x^p),Y) \dots F(\tau(x^{p^{n-1}}),Y) = (1+Y)^{\tau(x)+\tau(x)^p+\dots+\tau(x)^{p^{n-1}}}$ for ${x \in {\bf F}_{p^n}^\times}$.
If we can show that ${F(T,\lambda)}$ makes sense as a formal power series ${\sum_n a_n T^n}$ with ${|a_n|_p \leq p^{-n/(p-1)}}$ for all ${n}$, we thus have $\displaystyle \Theta(\tau(x)) \Theta(\tau(x^p)) \dots \Theta(\tau(x^{p^{n-1}})) = (1+\lambda)^{\tau(x)+\tau(x)^p+\dots+\tau(x)^{p^{n-1}}};$ since ${x, x^p, \dots, x^{p^{n-1}}}$ are the Galois conjugates of ${x \in {\bf F}_{p^n}}$ over ${{\bf F}_p}$, one can verify that ${\tau(x), \tau(x)^p,\dots,\tau(x)^{p^{n-1}}}$ are the Galois conjugates of ${\tau(x) \in \overline{{\bf Q}_p}}$ over ${{\bf Q}_p}$, and so ${\tau(x)+\tau(x)^p+\dots+\tau(x)^{p^{n-1}}}$ lies in ${{\bf Q}_p}$; since ${\tau(x)}$ has norm ${1}$, this quantity in fact lies in ${{\bf Z}_p}$. Quotienting out by the maximal ideal ${\{ z \in {\bf Q}_p(\tau(x)): |z|_p < 1 \}}$, we conclude that $\displaystyle \tau(x)+\tau(x)^p+\dots+\tau(x)^{p^{n-1}} = x + x^p + \dots + x^{p^{n-1}} = \hbox{Tr}_n(x) \hbox{ mod } p$ and (24) follows. So we are at last reduced to showing that ${F(T,\lambda) = \sum_n a_n T^n}$ with ${|a_n|_p \leq p^{-n/(p-1)}}$ for all ${n}$. From (25) we have the identity $\displaystyle F(T^p, Y^p) = F(T,Y)^p \frac{(1+Y^p)^T}{(1+Y)^{pT}}.$ On the other hand, we have ${1+Y^p = (1 + Y)^p \hbox{ mod } pY{\bf Z}_p[Y]}$, and hence $\displaystyle \frac{1+Y^p}{(1+Y)^p} = 1 + p Y G(Y)$ for some formal power series ${G(Y)}$ with coefficients in ${{\bf Z}_p}$, and hence $\displaystyle F(T^p, Y^p) = F(T,Y)^p (1 + p Y G(Y))^T.$ We can expand ${(1 + pY G(Y))^T}$ as ${1 + pH(Y,T)}$, where ${H(Y,T)}$ is a formal power series with coefficients in ${{\bf Z}_p}$ and no constant term, hence $\displaystyle F(T^p, Y^p) = F(T,Y)^p (1 + p H(Y,T)).$ As ${F(0,0)=1}$, the ${T^a Y^b}$ coefficient of this identity lets us express the ${T^a Y^b}$ coefficient of ${F(T,Y)}$ as a polynomial combination (over ${{\bf Z}_p}$) of lower degree coefficients of ${F}$ (as well as coefficients of ${H}$), which by induction shows that all coefficients of ${F}$ lie in ${{\bf Z}_p}$.
Replacing ${Y}$ by ${\lambda}$, and noting that the ${T^a Y^b}$ coefficient of ${F}$ vanishes unless ${b \geq a}$, the desired claim follows. Remark 3 The function ${\Theta}$ can also be defined as $\displaystyle \Theta(T) = E_p( \pi T )$ where ${E_p}$ is the Artin-Hasse exponential $\displaystyle E_p(x) := \exp( \sum_{i=0}^\infty \frac{x^{p^i}}{p^i} )$ and ${\pi \in {\bf C}_p}$ is a root of the power series ${\sum_{i=0}^\infty \frac{x^{p^i}}{p^i}}$. Remark 4 A small modification of Dwork’s argument also establishes rationality of (the zeta function associated to) exponential sums such as $\displaystyle \sum_{x \in V[{\bf F}_{q^n}]} \chi(\hbox{Tr}_{nr}(P(x)))$ for some polynomial ${P: V \rightarrow {\bf A}}$ defined over ${{\bf F}_q}$, and some multiplicative character ${\chi: {\bf F}_p \rightarrow {\bf C}}$.
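The Artin-Hasse exponential of Remark 3 is ${p}$-integral, i.e. its coefficients have denominators coprime to ${p}$ (this is essentially Dwork's lemma, mirroring the ${{\bf Z}_p}$-integrality of ${F}$ established above). This can be checked directly with exact rational arithmetic; a numerical illustration, truncated at degree ${12}$:

```python
from fractions import Fraction

def artin_hasse_coeffs(p, N):
    # coefficients of E_p(x) = exp(sum_{i>=0} x^{p^i}/p^i) up to degree N;
    # truncating the inner sum at p^i <= N does not affect these coefficients
    g = [Fraction(0)] * (N + 1)
    q, i = 1, 0
    while q <= N:
        g[q] = Fraction(1, p ** i)
        q *= p
        i += 1
    # exp via the recurrence n * f_n = sum_{k=1..n} k * g_k * f_{n-k}
    f = [Fraction(1)] + [Fraction(0)] * N
    for n in range(1, N + 1):
        f[n] = sum(k * g[k] * f[n - k] for k in range(1, n + 1)) / n
    return f

for p in (2, 3):
    coeffs = artin_hasse_coeffs(p, 12)
    # Dwork's lemma: every coefficient of E_p is p-integral
    assert all(c.denominator % p != 0 for c in coeffs)
```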
### ethiopia vertical cylindrical tank chemical volume • Home • Case • ethiopia vertical cylindrical tank chemical volume ### ethiopia vertical cylindrical tank chemical volume Tank Volume CalculatorA = π r 2 where r is the radius which is equal to d/2. Therefore: V (tank) = πr2h. The filled volume of a vertical cylinder tank is just a shorter cylinder with the same radius, r, and diameter, d, but height is now the fill height or f. Therefore: V (fill) = πr2f. Tel: 0086-371-861#518#27 Mail: [email protected] ### somalia vertical cylindrical tank chemical volume - Steel ethiopia vertical cylindrical tank chemical volume somalia vertical cylindrical tank chemical volume Safety Zone is defined as the horizontal and vertical separation criteria which form a cylindrical airspace volume around the UAS. In figure 3-2 that volume is defined by 1000 ft radius and 200 ft height.Vessel Volume & Level CalculationEstimates Volume filled in a Horizontal/ Vertical Vessel with 2:1 Ellipsoidal, Hemispherical, Torispherical and Flat heads. CheCalc Chemical engineering calculations to assist process, plant operation and maintenance engineers.VERTICAL STORAGE TANKS SHELL EXPANSION May 01, 2017 · The New Correction Upright cylindrical tanks have capacity tables based upon a specific tank shell temperature which is usually 60°F in USA. If the actual tank shell temperature differs from the capacity table tank shell temperature, the volumes extracted from that table will need to be corrected, accordingly. ### The Tank Drainage Problem Revisited: Do These 1052 The Canadian Journal of Chemical Engineering, Volume 81, October 2003 T he drainage of a tank by gravity is an old, but knotty problem. The tank may be drained by just a hole (orice situation) or may be drained through an attached pipe. The pipe may be vertical or horizontal or may include a full piping system with vertical drop andTank Volume CalculatorA = r 2 where r is the radius which is equal to d/2. Therefore: V (tank) = r2h. 
The filled volume of a vertical cylinder tank is just a shorter cylinder with the same radius r and diameter d, but the height is now the fill height f. Therefore: V(fill) = πr²f.

### Tank Volume Calculator for Ten Various Tank Shapes

Apr 17, 2019 · Cylindrical tank volume formula: to calculate the total volume of a cylindrical tank, all we need to know is the cylinder diameter (or radius) and the cylinder height (which may be called length, if it's lying horizontally). Vertical cylinder tank: the total volume of a cylindrical tank may be found with the standard formula for volume, the area of the base multiplied by the height.

### Tank Volume Calculator - Oil Tanks

The tank size calculator on this page is designed for measuring the capacity of a variety of fuel tanks. Alternatively, you can use this tank volume calculator as a water volume calculator if you need to calculate some specific water volume.

### TECHNICAL SPECIFICATION FOR MISC. TANKS - SITE

TECHNICAL SPECIFICATION FOR VOLUME II B, MISC. TANK - SITE FABRICATED (CST), SECTION A, REV 00, DATE 10.12.2014, PEM-6666-0. 1.0 SCOPE OF INQUIRY / INTENT OF SPECIFICATION. 1.1 The specification is intended to cover design, engineering, manufacture, inspection and testing at vendor's/sub-vendor's works, proper packing, and delivery at site.

### TANK VOLUME CALCULATOR [How to Calculate Tank Capacity]

Cylindrical oil tank: let's say that I have a cylindrical oil tank which measures 7 yards in length and has a round face 5 feet in diameter (the distance across the circular end passing through the central point). I want to calculate the tank volume in cubic feet.

### Storage Tank Calculator - Vertical & Horizontal Tank

For vertical tanks, only the cylinder volume is used in calculations. The Top End Type and the Bottom End Slope of the tank are not factored into the estimated volume.
In order to calculate the volume of the storage tank, then, all we need is to calculate the main cylinder volume.

### Spherical Storage Tank Design - Chemical Engineering World

Sep 15, 2019 · Spherical storage tank design: the most common shape of a storage vessel is a cylinder with two heads which are either hemispherical, elliptical or torispherical. Spherical vessels have a larger surface area per unit volume.

### SCALE-UP OF MIXING SYSTEMS

A turbine 0.203 m in diameter is used in a tank with a diameter of 0.61 m and a height of 0.61 m. The blade width is W = 0.0405 m. Four baffles are used with a width of 0.051 m. The turbine operates at 275 rpm in a liquid having a density of 909 kg/m³ and a viscosity of 0.02 Pa·s. Calculate the kW power of the turbine and the kW/m³ of volume.

### Plastic Tanks, Water Tanks, Polyethylene Storage Tanks

A visionary leader in plastic molding, Chem-Tainer has been a quality source for chemical tanks, water tanks & material handling solutions for over 50 years. Offering hundreds of plastic tank sizes with many poly tanks in stock for immediate delivery, we provide responsible solutions for all your storage demands.
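The V = πr²h and V = πr²f formulas that these calculators all share can be sketched in a few lines of Python. The function names are my own, not taken from any of the tools quoted above:

```python
import math

def cylinder_volume(diameter, height):
    """Total volume of a vertical cylindrical tank: V = pi * r^2 * h."""
    r = diameter / 2
    return math.pi * r ** 2 * height

def fill_volume(diameter, fill_height):
    """Volume of liquid filled to height f: V = pi * r^2 * f."""
    r = diameter / 2
    return math.pi * r ** 2 * fill_height

# A tank 2 m across and 3 m tall, filled to a depth of 1.2 m:
total = cylinder_volume(2.0, 3.0)   # about 9.42 m^3
filled = fill_volume(2.0, 1.2)      # about 3.77 m^3
```

The same two functions cover both the "total capacity" and "dip chart" cases, since a partially filled vertical cylinder is just a shorter cylinder.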
### Optimization of Liquid Storage Tank

Aug 31, 2020 · Question: a cylindrical tank (Fig. 1) has a volume (V) that can be expressed by V = (π/4)D²L, and we are interested in calculating the diameter (D) and length (L) that minimize the cost of the tank. The cost of the tank depends on the amount of material needed, which is proportional to its surface area, and on the cost per unit area.

### One Containment Cryogenic Storage Tanks, 20,000 m³, Vertical Cylindrical Flat Bottom

Single containment LNG storage tank, 20,000 m³. The 20,000 m³ cryogenic ethylene storage tank for Nanjing Dragon Crown Liquid Chemical Terminal Co., Ltd. was the first domestic self-designed single containment cryogenic storage tank with a volume over 10,000 m³.

### Mixing 101: Optimal Tank Design | Dynamix Agitators

Mar 10, 2015 · Vertical cylindrical tanks are the most common type of tank in use. A key consideration for cylindrical tanks is to ensure that they are either baffled or offset-mounted to prevent swirling from occurring. Refer to section 2 below (The Use of Baffling) for details. Generally, baffles are not required for smaller tanks (<5,000 gallons).

### HOW TO CALCULATE THE VOLUMES OF PARTIALLY FULL TANKS

2.1.
Cylindrical Tanks: The majority of tanks used in the chemical industry are cylindrical tanks, either in horizontal or vertical configuration. Consider, for example, a cylindrical tank with length L and radius R, filling up to a height H, if you want to obtain the volume of the liquid that partially fills the tank.

### Dynamic response of a large vertical tank impacted by projectiles

The semi-cylindrical shell with a diameter of 300 mm and a height of 300 mm is used for modeling the small-scaled vertical tank. The conical projectiles, with a shape similar to end-caps and a diameter of 7.82 mm at the big end, are adopted to model small-scaled fragments.

### Compute Fluid Volumes in Vertical Tanks - Chemical Processing

Dec 18, 2003 · The equations for fluid volumes in vertical cylindrical tanks with concave bottoms are shown on p. 30. The volume of a flat-bottom vertical cylindrical tank may be found using any of these equations by setting a = 0. Radian angular measure must be used for the trigonometric functions.

### Calculating Tank Volume

Calculating tank volume: saving time, increasing accuracy. By Dan Jones, Ph.D., P.E. Calculating fluid volume in a horizontal or vertical cylindrical or elliptical tank can be complicated, depending on fluid height and the shape of the heads (ends) of a horizontal tank or the bottom of a vertical tank.

### Blank Worksheet to Calculate Secondary Containment

This worksheet can be used to calculate the secondary containment volume of a rectangular or square dike or berm for a single vertical cylindrical tank. You may need a PDF reader to view some of the files on this page. See EPA's About PDF page to learn more. Blank Worksheet (PDF) (4 pp, 529 K)

Sep 05, 2017 · Both horizontal and vertical tanks with spherical heads. The calculation of the liquid in the heads is approximate. The graph shows lines for tank diameters from 4 to 10 ft, and tank lengths from 1 to 50 ft.
The accuracy of the liquid volume depends on certain approximations and the precision of interpolations that may be required.

### Volume of a Partially Filled Cylindrical Tank

This calculator will tell you the volume of a horizontal partially filled cylindrical tank, or a vertical one. It will also generate a dip chart/table for the tank with the dimensions you specify. Output is in gallons or liters. If you have a complex tank, or need something special created, please don't hesitate to contact us for more info.

### Tank calculations | MrExcel Message Board

Dec 19, 2014 · We use what are called tank strappings for some of our additive tanks. They were built from formulas for cylindrical horizontal or vertical tanks. I plugged in the size of the tanks and a strapping was produced. From there I set up a form using VLOOKUPs to the strappings. Give me some tank measurements and I'll see if what I have at work will help.

### Tank Storage | Glossary | Oiltanking

Tank terminals are facilities where petroleum products, chemicals, gases and other liquid or gaseous substances can be stored and handled. Tank terminals consist of a number of individual tanks, generally above ground, that are usually cylindrical. Materials used in tank storage: tanks can be made of different materials.

### Tank Volume Calculator Gallons | Cylindrical Tank Capacity

A storage tank is a container, usually for holding liquids, sometimes for compressed gases (gas tanks).
This term can also be used for reservoirs, and for manufactured containers for other storage purposes. Use this online cylindrical tank capacity calculator for doing tank calculations.

### Tank Volume Calculator - Inch Calculator

For example, let's find the volume of a cylinder tank that is 36″ in diameter and 72″ long: radius = 36 ÷ 2 = 18; tank volume = π × 18² × 72 = 73,287 cu in. Thus, the capacity of this tank is 73,287 cubic inches.

### Tank Volume Calculator - Horizontal Elliptical - Metric

Tank volume (vertical cylindrical, rectangular); horizontal elliptical tank volume, dip chart and fill times (metric); hemispherical ends, ellipsoidal ends (adjustable), flat ends; side ellipse width, centre width, straight length.

### Tank Volume - Pipesizing Software and Chemical

Note that a horizontal tank with a cylindrical shell is selected, and that both heads are the same (torispherical, or F&D). The dish and knuckle radii are input (the user is prompted that standard ASME F&D heads use f = 1.0 and k = 0.06).

### Chemical Storage Tanks Manufacturers | SS, HCL Storage Tank

Our strict adherence to DVS quality standards has made us one of the leading chemical storage tank manufacturers in India. The PP chemical storage tank is manufactured using a butt-fusion welding technique which ensures 100% leak-proof performance. Moreover, it also increases the working life of the fluid contained in the SS storage tank.

### Spill Prevention Control and Countermeasure (SPCC) Plan

V_c / V_d = ___ ÷ ___, where c is the secondary containment volume calculated in Step 1 and d or e is the tank volume calculated in Step 2.
c (ft³): ___; d or e (ft³): ___; f = c ÷ (d or e); percentage g = f × 100. If the percentage, g, is 100% or greater, the capacity of the secondary containment is sufficient to contain the shell capacity of the tank. If rain can collect in the dike or berm, continue to step 4.

### Sloped Bottom Tank - arachnoid.com

The easy part is the cylindrical section above the slope, which has a volume of: (1) $\displaystyle v = \pi r^2 h$, where v = volume, r = tank radius, and h = cylindrical section height. More difficult is the tank's sloped section, which lies between the tank's bottom and the top of the slope.

### (PDF) HOW TO CALCULATE THE VOLUMES OF PARTIALLY FILLED TANKS

To find the volume of the liquid that partially fills the tank, you should indicate if the tank is in horizontal or vertical position.
We first build a UL 142 inner tank and then wrap it with a • ### stainless steel tube heat exchanger export r passes through the tubes for more efficient heat transfer than other heat exchangers. For technical drawings and 3-D models, click on a … Stainless Steel 316/316L Heat Exchanger Tubes, SS 316 ...Solitaire Overseas is a manufacturer and exporter of Stainless Steel 316 & 316L Heat Exchanger Tu • ### fire protection specialists fps foam bladder tank eld of # Fire_Protection, please subscribe to our website: www.fps-eg.com # FPS, # Keeping_You_Up_To_Code. Vertical and Horizontal Bladder Tanks Model VFT ASME Sec ...The bladder tank technology is a dependable and precise mixing method that is widespread in the fixed fire protection market. This me • ### san marino the metal tank boiler water system size on a 30 000 litre tank. The vent should be fitted with a vent head, which incorporates an internal baffle to separate entrained water from the steam for discharge through a drain connection. Storage Tank for Boiler Room | RDZWater tank acting as a hydraulic separator between the heating/cooling sys ### Message information Please describe your brand size and data volume in detail to facilitate accurate quotation
Resistors

If we’re thinking about electricity like it’s water flowing in a pipe, then a resistor is like a narrow restriction in a pipe that limits the amount of current that can flow through.

Here’s what resistors actually look like in the real world. At the top left, we have 1/4 W, 10k carbon film through-hole resistors. They’re probably the most common resistors in the world. At top right, we have a big spool of 1/10 W, 1k surface-mount resistors. They might rival the carbon film resistors at left for the popularity title.

The big blocky brass thing in the middle is a current shunt, that is, a resistor that has a very low resistance that is precisely calibrated. You run a large current through the rod in the middle, and then you measure the voltage drop with the two wires on either side. If you look carefully at the right hand side of the shunt, you can see a legend stamped into the metal that reads, “50 A 60 mV.” This means that when you measure a 60 mV drop across the shunt, 50 A is flowing through it. (Thus, its resistance is 0.060/50 = 0.0012 ohms, or 1.2 milliohms.)

In the lower left, we have a power resistor. This is just a resistor encased in a big heat-dissipating housing so you can run a lot of current through it without melting it. The last line of the legend printed on it says, “50W 0.15Ω,” which means that the resistor can dissipate 50 W before it burns up. Knowing its power limit and resistance, you can calculate its current and voltage limits: sqrt(50/0.15) = 18 A, and sqrt(50 * 0.15) = 2.7 V.

## Resistors are linear #

When we say something behaves linearly, we just mean that if you double the input, the output also doubles. If you triple the input, the output triples. With a resistor, if you double the voltage pushing electrons through the resistor, the current through the resistor will double. In reality, resistors aren’t exactly linear, but they’re close enough.
The most common deviation that you’ll run into is that as you run more current through a carbon film resistor, it heats up. As it heats up, the resistance drops slightly, so if you double the voltage, you actually get slightly more than double the current. (But that’s just carbon film resistors. The resistance of copper wires increases slightly with temperature.)

## We measure resistance in ohms #

We measure resistance in ohms ( $$\Omega$$ ), a measure of how many amps the resistor lets through per volt applied to it. A 1 ohm resistor lets through 1 amp per volt. This brings us to Ohm’s Law, which is really just the definition of resistance as the ratio of voltage applied to current passed. You could write it as $$V / I = R$$ but it’s more commonly written as $$V = IR$$

## Power rating #

One other matter to worry about is what happens if you run a lot of current through a resistor. Raise the current enough, and the resistor will eventually burn out. We can calculate power by multiplying voltage and current. Think this through with the meaning of the units: volts are joules per coulomb, and amps are coulombs per second. In the product of the two, the coulombs cancel, so we get joules per second, as we would expect for power, a measure of the rate of energy flow. Typical carbon film resistors are rated for 1/4 W. Typical 0805 surface mount resistors are rated for 1/8 W.

## Tolerance #

When resistors are manufactured, there is some variation in their resistance. Typical cheap carbon film resistors are specified to be accurate within 5% of their nominal value, but they are usually closer than 1%. This is different from capacitors, which are specified to be within 10% or 20% of their rated value, and often push those limits, especially across signals of different frequencies. You can pay extra money to get resistors with tighter tolerances.
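The power-limit arithmetic above (I = sqrt(P/R) and V = sqrt(P·R), both following from P = VI combined with Ohm’s Law) is easy to check in a few lines of Python; this sketch just replays the 50 W, 0.15 Ω power resistor example:

```python
import math

def resistor_limits(power_rating_w, resistance_ohm):
    """Maximum current and voltage at the rated power.
    From P = I^2 * R and P = V^2 / R."""
    i_max = math.sqrt(power_rating_w / resistance_ohm)
    v_max = math.sqrt(power_rating_w * resistance_ohm)
    return i_max, v_max

# The 50 W, 0.15 ohm power resistor described above:
i_max, v_max = resistor_limits(50, 0.15)  # about 18 A and about 2.7 V
```

Note that for a low-value resistor like this one, the voltage limit is reached long before anything resembling a supply voltage: 2.7 V across it already dissipates the full 50 W.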
## Typical application: current limiter #

LEDs have the unusual characteristic that they start emitting light when you hit a certain voltage threshold, called $$V_f$$ for “forward voltage”. Curiously, it varies with the color of the LED. If you try to raise the voltage across the LED above $$V_f$$ , it just gets brighter and hotter until it burns out. Unfortunately, the voltage threshold is not a convenient voltage like 5 V; it’s something weird like 2.9 V or 3.1 V.

To run an LED from a microcontroller, which usually has a fixed output voltage, the usual strategy is to put the LED in series with a resistor that allows the LED to reach its threshold voltage while limiting the current to a level where the light emission is pleasant. The usual calculation for a 5 V system goes like this: $$5 - V_f = V_R$$ 1 mA is a good place to start with a 5 mm LED, so we can plug $$V_R$$ in to Ohm’s Law. $$V_R / 0.001 = R$$

## Typical application: voltage divider #

A voltage divider is merely two resistors stacked in series. As current flows through the resistors, the voltage drops in proportion to the fraction of the total resistance passed through. If you have, say, two 10k resistors stacked with 5 V applied, the voltage between them will be 2.5 V. If you have, say, a 3.9k and a 1k resistor stacked with 5 V applied, the voltage between them will be around 1 V.

You might think that all that matters about a voltage divider is the ratio of the two resistors. Usually, you would be right. But the purpose of the voltage divider is often to supply a reference voltage to another device. If that device draws off some current, the voltage between the resistors will start to sag. To prevent this, make sure that the current drawn off into your device is at least 100x less than the current flowing through the divider itself. Alternatively, if the sag is predictable, you could just set the divider a little higher and let it sag to the original target.
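Both calculations above, the LED current-limiting resistor and the divider midpoint voltage, are one-liners. A sketch in Python (the function names are mine):

```python
def led_resistor(v_supply, v_forward, i_target):
    """Series resistance for an LED: R = (Vsupply - Vf) / I, from Ohm's Law."""
    return (v_supply - v_forward) / i_target

def divider_midpoint(v_in, r_top, r_bottom):
    """Voltage between two series resistors: Vout = Vin * Rb / (Ra + Rb)."""
    return v_in * r_bottom / (r_top + r_bottom)

# 5 V supply, 2.9 V forward voltage, 1 mA target: about 2.1k
r = led_resistor(5.0, 2.9, 0.001)

# The 3.9k over 1k divider from the text: about 1 V
mid = divider_midpoint(5.0, 3900, 1000)
```

In practice you would round the LED resistor to the nearest standard value (2.2k here) rather than hunt for an exact 2.1k part.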
## Typical application: pull-up or pull-down #

With microcontrollers and MOSFETs, you often have pins that you want to tie to a certain voltage. For example, most microcontrollers have a reset pin; when you apply 0 V to the pin, the microcontroller is reset. Here’s a convenient circuit for that situation.

In this situation, the resistor is called a pull-up resistor, because it pulls the voltage on the pin up to 5 V. If it were connected to ground instead of 5 V, it would be called a pull-down resistor. Because of the circuitry inside the microcontroller, very little current flows into the reset pin. This means that very little current flows across the resistor, so there is very little voltage drop across it, and the reset pin is held near 5 V. When you press the pushbutton, the reset pin is pulled to ground, and the reset occurs. While the button is held down, current flows through the resistor and through the button to ground.

You might think to yourself, “But couldn’t the resistor just be a wire?” If the resistor were a wire, it would still pull the reset line high just fine, but when you pressed the pushbutton, you would be shorting 5 V to ground. Don’t do that.
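To see why the resistor can’t just be a wire, compare the button-pressed current through the pull-up for an assumed value of 10 kΩ (the text doesn’t specify a value; this one is my illustration):

```python
def pullup_current(v_supply, r_pullup):
    """Current through the pull-up while the button shorts the pin to ground."""
    return v_supply / r_pullup

i = pullup_current(5.0, 10_000)  # 0.5 mA through the resistor: harmless
p = i ** 2 * 10_000              # about 2.5 mW dissipated in the resistor
# With a plain wire (R approaching 0), the current is limited only by the
# supply: that's the 5 V to ground short the text warns about.
```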
## rate proportions

$aR \to bP, Rate = -\frac{1}{a} \frac{d[R]}{dt} = \frac{1}{b}\frac{d[P]}{dt}$

Kassandra Molina 2B
Posts: 43
Joined: Wed Nov 16, 2016 3:03 am

### rate proportions

What does a rate proportion depend on? For example, what will happen to a rate if the amount of a molecule doubles or halves?

Jaewoo Jo 2L
Posts: 31
Joined: Fri Sep 29, 2017 7:06 am

### Re: rate proportions

It depends on the power of each component in the rate law. If one component is squared, then doubling that component will change the rate by a factor of 4.

AnuPanneerselvam1H
Posts: 52
Joined: Fri Sep 29, 2017 7:07 am

### Re: rate proportions

If a reaction is first order, then the rate will be doubled when the reactant is doubled. If the reaction is second order and only has one reactant, then doubling the reactant will increase the rate by a factor of four, since the rate depends on the square of the concentration.
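The replies above boil down to one rule: for rate = k[R]^n, scaling the concentration by a factor c scales the rate by c^n. A quick sketch in Python:

```python
def rate_factor(conc_scale, order):
    """Factor by which the rate changes when [R] is scaled by conc_scale,
    for a rate law of the form rate = k * [R]**order."""
    return conc_scale ** order

print(rate_factor(2, 1))    # 2: first order, doubling [R] doubles the rate
print(rate_factor(2, 2))    # 4: second order, doubling [R] quadruples the rate
print(rate_factor(0.5, 1))  # 0.5: halving a first-order reactant halves the rate
```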
## Package Details: sile-git 0.12.5.r44.gbca0f36-1

Git Clone URL: https://aur.archlinux.org/sile-git.git (read-only)

Package Base: sile-git
Description: Modern typesetting system inspired by TeX
Upstream URL: https://www.sile-typesetter.org
Licenses: MIT
Conflicts: sile
Provides: libtexpdf.so, sile
Submitter: alerque
Maintainer: alerque
Last Packager: alerque
Votes: 3
Popularity: 0.000000
First Submitted: 2015-06-16 11:53 (UTC)
Last Updated: 2022-04-18 11:43 (UTC)

### Sources (2)

#### alerque commented on 2020-10-16 20:06 (UTC)

@fbrennan Oh ya, and all my AUR packages are maintained in a single repo on GitHub if you ever want to PR against them.

#### alerque commented on 2020-10-16 20:01 (UTC)

@fbrennan I understand. It's not that heretical to me, but it goes against Arch packaging principles to bundle libraries. That's actually one of the reasons I've been changing SILE upstream to not bundle libraries itself when they are available as Luarocks. By the way, if you are interested, I host this package in my Arch package repository along with all the dependencies. A good number of the AUR Luarocks packages I actively maintain, and the few that I don't I keep pretty close eyes on when updating the builds. Also, as far as trust issues go, Luarocks itself is probably a weaker link than the AUR. Yes, the AUR can be kind of a wild west, but at least it has a paper trail!

The autoconf/automake deps should not be included because they are part of base-devel, which is considered a required package to build from the AUR. Anything in there should not be listed as a make dependency. Luarocks is only a dependency for your version of the package; it should not be for the no-bundle version (although it probably will become so soon given what I want to do with SILE packages).

#### fbrennan commented on 2020-10-16 08:50 (UTC)

Hello. Here is a version of this PKGBUILD which does not use the system Luarocks:

This is heretical so I'm not expecting you to make this default. I, however, prefer it, as many of the Luarocks are AUR dependencies, and trusting that many AUR dependencies has been in my experience extremely unwise.
Also, you are missing a few dependencies, changes which you should port from my version of this PKGBUILD.

Depends:
* luarocks

Make depends:
* automake
* autoconf

#### alerque commented on 2020-01-12 07:21 (UTC) (edited on 2020-01-14 04:38 (UTC) by alerque)

@escondida Thanks for those pointers, I've finally included the git submodule in the sources as you described.

@all Note this sile-git package is back to tracking upstream HEAD changes. It had been tracking my fork for a while because of patches that were not in master. One thing currently in master that is less than ideal is that building the manual requires downloading some fonts into the build tree. Hopefully that will be addressed in future releases to make the -git build more lightweight.

#### escondida commented on 2017-10-27 02:14 (UTC)

Hi, caleb, thanks for packaging sile. You can stress the Internet of downloaders a little less by adding the libtexpdf submodule to the sources array and adding a quick prepare() function:

prepare() {
  cd sile
  git submodule init
  git config submodule.libtexpdf.url "$srcdir/libtexpdf"
  git submodule update
  ./bootstrap.sh
}

(Though bootstrap.sh would also make sense in build(), since it makes sense in either place I've left it in.)

Side note: all PKGBUILD functions begin in $srcdir, so you can simplify your cds, if you'd like; cd sile, as above, will work as is (-:

#### vgivanovic commented on 2017-02-21 17:40 (UTC)

@caleb Thanks for the explanations. I'll drop by the SILE Gitter chat room ... next week. BTW, Fontconfig hasn't given me any problems so far. I do have quite a number of fonts installed (all Google fonts, for example) if that makes a difference to SILE.

#### alerque commented on 2017-02-21 08:13 (UTC) (edited on 2017-02-21 08:13 (UTC) by alerque)

@vgivanovic There are some reasons not to include the whole examples directory on installation. Perhaps one or two examples could work, but including the whole thing would be problematic.
In any event you don't have to clone the SILE repo to get them; you can just get the ones you want to play with if you like.

The 'Sawarabi Gothic' thing is not coming from SILE; that's going to be your system's default font as understood by Fontconfig. It sounds like you have some stuff a little cross-wired on your system. The Gentium default is coded into SILE, but that shouldn't be a problem for you since the Arch package for it should be a dependency. That too might change (see https://github.com/simoncozens/sile/issues/369) but the fact that it didn't find that suggests something is wrong with Fontconfig on your system. Why don't you drop by the SILE Gitter chat room (https://gitter.im/simoncozens/sile) and I'll help you work through getting SILE running and see if there are any parts that the packages can make easier.

#### vgivanovic commented on 2017-02-20 19:32 (UTC)

I can't get sile to work.

~/tmp> cat hello.sil
\begin{document}
Hello SILE!
\end{document}
~/tmp> sile hello.sil
This is SILE 0.9.4
<hello.sil>
! Font 'Gentium' not available, falling back to 'Sawarabi Gothic'
[1]

(What an odd choice of fallback font...) If I add '\font[family="Gentium Plus",size=12pt]' after '\begin{document}' I get exactly the same output.

Note:

~/tmp> fc-list | fgrep "Gentium Plus"
/usr/share/fonts/TTF/GentiumPlusCompact-I.ttf: Gentium Plus Compact:style=Italic
/usr/share/fonts/TTF/GentiumPlusCompact-R.ttf: Gentium Plus Compact:style=Regular
/usr/share/fonts/TTF/GentiumPlus-R.ttf: Gentium Plus:style=Regular
/usr/share/fonts/TTF/GentiumPlus-I.ttf: Gentium Plus:style=Italic

The package 'ttf-gentium-plus' was installed when I installed 'sile-git'. I am using 'sile' because 'sile-git' doesn't work at all, either compiled from sources or from the installed package 'sile-git'.
[compiled from sources]

~/tmp> /usr/local/src/sile/sile hello
/usr/bin/lua: /usr/local/src/sile/sile:10: module 'core/sile' not found:

or [from installed package 'sile-git']

Error detected:
/usr/share/sile/core/inputs-xml.lua:11: not well-formed (invalid token)

#### vgivanovic commented on 2017-02-20 18:17 (UTC) (edited on 2017-02-20 18:20 (UTC) by vgivanovic)

Any chance we could get the 'examples' directory included in the package? Not including it means that people have to clone 'sile' in addition to installing 'sile-git'.

#### greenmanalishi commented on 2016-10-12 19:09 (UTC)

@caleb thank you for the explanation

#### alerque commented on 2016-10-03 17:04 (UTC)

@greenmanalishi This build is tracking my fork because I sometimes include stuff there that fixes issues specific to Arch Linux before the build is fixed across other systems. This hasn't been happening as much recently, but it was a regular thing for a while. If you look at the commit history for SILE you'll see a lot of the most recent activity on the upstream has been commits from me, and I have push access to the official git repository. The forks are rarely out of sync, but when they are, the chances are my repo is going to have the latest build known to work on Arch Linux.

#### haawda commented on 2015-11-20 18:35 (UTC)

To compile this, I needed a patch.

diff --git a/src/fontmetrics.c b/src/fontmetrics.c
index 55a6c0a..5f8b97d 100644
--- a/src/fontmetrics.c
+++ b/src/fontmetrics.c
@@ -1,5 +1,5 @@
 #include <hb.h>
-#if HB_VERSION_ATLEAST(1,0,7)
+#if HB_VERSION_ATLEAST(1,1,1)
 #define USE_HARFBUZZ_METRICS
 #include <hb-ft.h>
 #else

And a license file is needed, as MIT is a custom license. Also the package installs to /usr/local, which is forbidden by Arch Linux packaging guidelines.
# Maintainer: Caleb Maclennan <[email protected]>
# Contributor: Adrián Pérez de Castro <[email protected]>

pkgname=sile-git
pkgdesc='Modern typesetting system inspired by TeX'
pkgver=0.9.3_185_g6ab82c2
_branch='master'
pkgrel=1
arch=(any)
url='http://www.sile-typesetter.org/'
license=('MIT')
provides=("${pkgname%-git}")
conflicts=("${pkgname%-git}")
source=("git://github.com/simoncozens/${pkgname%-git}.git#branch=${_branch}"
        patch)
sha512sums=('SKIP'
            'SKIP')
depends=('lua-lpeg' 'lua-expat' 'lua-filesystem' 'fontconfig' 'harfbuzz' 'icu')
options=("!makeflags")

pkgver() {
  cd "$srcdir/${pkgname%-git}"
  git describe --long --tags | sed 's/^v//;s/-/_/g'
}

prepare() {
  cd "$srcdir/${pkgname%-git}"
  patch -Np1 < "$srcdir/patch"
}

build() {
  cd "$srcdir/${pkgname%-git}"
  ./bootstrap.sh
  ./configure --prefix=/usr
  make
}

package() {
  cd "$srcdir/${pkgname%-git}"
  make install DESTDIR="${pkgdir}/"
  install -Dm644 LICENSE "$pkgdir"/usr/share/licenses/$pkgname/LICENSE
}

#### haawda commented on 2015-07-07 10:55 (UTC)

Builds fine against the lua-filesystem package provided in the [community]-repo.
## Maximum A Posteriori

Using Beta as a prior for the Bernoulli parameter μ results in a Beta posterior distribution: Beta is the conjugate prior to Bernoulli. The method of maximum likelihood corresponds to many well-known estimation methods in statistics. Maximum-a-posteriori methods apply Bayes' rule using as prior the trained model [Gauvain and Lee, 1992] or a hierarchical prior [Shinoda and Lee, 1997], and converge to the true maximum-likelihood estimate with infinite data, but are generally not competitive with little data.

A Generalized Labeled Multi-Bernoulli Filter for Maneuvering Targets. Yuthika Punchihewa, School of Electrical and Computer Engineering, Curtin University of Technology, WA, Australia. You will find it 'magical' that least squares appears in the same form as maximum likelihood estimation. Additionally, you may have cases where the estimate lies on the boundary of the parameter space; such point estimates (i.e., maximum a posteriori estimates) do change under reparameterization, and thus are no true Bayesian quantity. The obvious way to estimate the missing values is with a maximum a posteriori estimator. This paper addresses the sparse representation (SR) problem within a general Bayesian framework.

Bayesian inference: here b = S_n/n is the maximum likelihood estimate, e = 1/2 is the prior mean, and λ_n = n/(n + 2) ≈ 1. However, the mode itself does not necessarily represent the full posterior distribution well. We can use maximum a posteriori (MAP) estimation to estimate the parameters; the former is then the relative frequency of the class in the training set. The best class in NB classification is the most likely or maximum a posteriori (MAP) class. We write the parameter estimates with hats because we do not know their true values, but estimate them from the training set, as we will see in a moment.
Jacob Bernoulli (also known as James or Jacques; 6 January 1655 [O.S. …]). Maximum Likelihood Estimation (MLE): general MLE strategy. In one case (sample L285) the maximum a posteriori COIL estimate was incorrect, but the 95% credibility interval captured the microsatellite-based COI estimate. Therefore, the maximum likelihood estimator ŵ_n of w in a Gaussian model is the estimator obtained by least-squares linear regression. Maximum Likelihood Estimation: the likelihood of θ given the sample X is l(θ|X) = p(X|θ) = ∏_t p(x^t|θ); the log likelihood is L(θ|X) = log l(θ|X) = ∑_t log p(x^t|θ); the maximum likelihood estimator (MLE) is θ* = argmax_θ L(θ|X). What is the parameter (or parameters) of the distribution that maximizes the likelihood of the data sample? ML for Bernoulli trials. Bernoulli, Jacob. Maximum-likelihood estimation (MLE) is a method of estimating the parameters of a statistical model given data. Bernoulli maximum likelihood. One specifies the model (binomial, say) and the desired estimator (regularized maximum likelihood, or Bayesian maximum a posteriori/posterior mean, etc.). "Observed diffusion processes," Bernoulli; stirred tank reactor; MAP: maximum a posteriori; MIMO: multi-input multi-output. Confidence intervals. This is where Maximum Likelihood Estimation (MLE) has such a major advantage. For example, if Liverpool only had 2 matches and they won the 2 matches, then the estimated value of θ by MLE is 2/2 = 1. Probabilistically accept a jump as a Bernoulli draw with parameter α, where α = min(…). Maximum a Posteriori (MAP) estimate: choose the θ that is most probable given the prior probability and the data. Central Limit Theorem: approximate. Introduction to Machine Learning, Ethem Alpaydin © The MIT Press, 2004. Random variables and stochastic processes. In Bayesian statistics, a hyperprior is a prior distribution on a hyperparameter, that is, on a parameter of a prior distribution.
This is where Maximum Likelihood Estimation (MLE) has such a major advantage. Maximum Likelihood Estimation (MLE) for the coin. For example, suppose you are interested in the heights of Americans. Basics of parameter estimation in probabilistic models. We will demonstrate this on four models: linear regression, logistic regression, neural networks, and Gaussian processes. Probability and Naive Bayes, Machine Learning CS4824/ECE4424, Bert Huang, Virginia Tech. Also, scaling the log likelihood by a positive constant does not alter the location of the maximum with respect to w, so it can be ignored. Result: maximize the log likelihood. The reason for introducing MAP in the context of comparing MLE and BPE is that MAP can be treated as an intermediate step between MLE and BPE, one which also takes the prior into account. I went through a hard time struggling with the terms probability and likelihood and their relation. Graph SLAM and Square Root Smoothing and Mapping (SAM) are prime examples of MAP-based estimation. Introduction to Bayesian decision theory: the main arguments in favor of the Bayesian perspective can be found in a paper by Berger whose title, "Bayesian Salesmanship," clearly reveals them. (Jacob Bernoulli, "The Art of Conjecturing", 1713) "It seems that to make a correct conjecture about any event whatever, it is necessary to calculate exactly the number of possible cases and then to determine how much more likely it is that one case will occur than another." Basic probability theory. As a practical matter, when computing the maximum likelihood estimate it helps to work with the log likelihood. If you hang out around statisticians long enough, sooner or later someone is going to mumble "maximum likelihood" and everyone will knowingly nod. The MAP object changes the values of variables in place, so let's print the values of some of our variables before and after fitting. In our case, it's the probability of a particular sequence of H's and T's. It means that the estimation says Liverpool wins 100%, which is an unrealistic estimate.
I would think that the logic goes the opposite direction: one first chooses a loss. Graph SLAM [6] and Square Root Smoothing and Mapping (SAM) [7] are prime examples of MAP-based estimation. Abstract: we consider the problem of classifying a hotel review as positive or negative and thereby analyzing the sentiment of a customer. Maximum A Posteriori Estimation. Brownian motion process, as a scaled Bernoulli process. Topics: intro, linear algebra, Gaussians, parameter estimation, bias-variance. This function reaches its maximum at $$\hat{p}=1$$. Expectation-maximization (EM) is a method to find the maximum likelihood estimator of a parameter of a probability distribution. Maximum-likelihood estimation was recommended, analyzed (with fruitless attempts at proofs) and vastly popularized by Ronald Fisher between 1912 and 1922 (although it had been used earlier by Carl Friedrich Gauss, Pierre-Simon Laplace, and Thorvald N. Thiele). The MDL or MAP (maximum a posteriori) estimator is both a common approximation for the Bayes mixture and interesting in its own right: use the model with the largest product of prior and evidence. Bernoulli-Gaussian modeling and maximum a posteriori estimation have proven successful but entail computationally difficult optimization problems that must be solved by suboptimal methods. The procedure is formulated as finding maximum a posteriori estimates within a probabilistic generative model. Cards: 52-card deck. In this blog, I will provide a basic introduction to Bayesian learning and explore topics such as frequentist statistics, the drawbacks of the frequentist method, Bayes's theorem (introduced with an example), and the differences between the frequentist and Bayesian methods, using the coin-flip experiment as the example. This is a maximum a posteriori (MAP) problem involving Bernoulli-Gaussian (BG) variables. The first paper describes a…
Hierarchical p-version finite elements and adaptive a posteriori computational formulations for two-dimensional thermal analysis; adaptive finite elements using hierarchical meshes and their application to crack propagation analysis. Understanding MLE with an example: while studying stats and probability, you must have come across problems like "What is the probability of x > 100, given that x follows a normal distribution with mean 50 and standard deviation (sd) 10?" A simple example of "Maximum A Posteriori": let $X_i \sim \text{Bernoulli}(\theta)$ be IID with unknown parameter $\theta$; I am interested in estimating this parameter. The underlying principle behind the impressive performance… The prior parameters a0 and b0 assign a beta prior distribution to each outcome probability. Maximum a posteriori function. Bernoulli process. The proposed algorithms… Thus, all probabilities can be multiplied and the likelihood function will look like this: … Maximum likelihood and least squares: maximize the log likelihood with respect to w; since the last two terms don't depend on w, they can be omitted. Classification of Markov chains. Maximum a posteriori formulas. For i.i.d. data, p(data) = p(H,T,H,H) = p(H)p(T)p(H)p(H) = θ(1−θ)θθ = θ³(1−θ). The maximum likelihood (ML) estimate and the maximum a posteriori (MAP) estimate. When plotting log-likelihoods, we don't need to include all θ values in the parameter space; in fact, it's a good idea to limit the domain to those θ's for which the log-likelihood is no more than 2 or 3 units below the maximum value $$l(\hat{\theta};x)$$ because, in a single-parameter problem, any θ whose log-likelihood falls more than 2 or 3 units below the maximum is implausible. Kaaresen, Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern, N-0316 Oslo, Norway. The code to run the beta.select() function is found in the LearnBayes package. It is so common and popular that sometimes people use MLE even without knowing much about it. This function reaches its maximum at $$\hat{p}=1$$.
Likelihood is the conditional probability of the observations 𝒟 = {𝒙(1), 𝒙(2), …, 𝒙(N)} given the value of the parameters 𝜽, assuming i.i.d. data. In particular, this model specifies that overlapping spikes from nearby neurons superimpose linearly in the recorded voltage signal. Set the derivative of the NLL to 0, and solve for θ. Actually, it is incredibly simple to do Bayesian logistic regression. Maximum likelihood with the Bernoulli distribution: the MLE for the Bernoulli likelihood is argmax_{0≤θ≤1} p(X|θ) = argmax_{0≤θ≤1} ∏_{i=1}^n p(x_i|θ) = argmax_{0≤θ≤1} ∏_{i=1}^n θ^{I[x_i=1]} (1−θ)^{I[x_i=0]} = argmax_{0≤θ≤1} θ^{N_1} (1−θ)^{N_0}, where N_1 is the count of 1 values and N_0 is the number of 0 values. Option #2: Maximum a Posteriori (MAP) estimation (the Bayesian approach): use Bayes' theorem to combine researcher intuition with a small experimental dataset to estimate probabilities. Maximum Likelihood Estimation (MLE) for the coin. The output is distributed as a Bernoulli random variable whose parameter p is determined by the latent variable z and the input data x; here the maximum a posteriori (MAP) estimate coincides with the maximum likelihood estimate. Kaaresen, Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern. The conjugate prior for the Bernoulli distribution is the Beta distribution, given as f(x; α, β) = Γ(α+β)/(Γ(α)Γ(β)) · x^{α−1}(1−x)^{β−1}. Derive the MAP estimates of the multivariate Bernoulli model if we use the Beta distribution as a prior for the class-conditional word distributions P(w|C_i). Maximum Likelihood Estimation (MLE) and Maximum A Posteriori (MAP) estimation are methods of estimating the parameters of statistical models. elbo(log_like, KL = kl, N = 10000): the main differences here are that reg is now kl, and we use the elbo loss function. MAP can help deal with this issue. The ML estimate for θ is denoted θ̂.
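The closed-form Bernoulli MLE described above, argmax of θ^{N_1}(1−θ)^{N_0} giving θ̂ = N_1/(N_1 + N_0), is one line of code. A minimal sketch:

```python
def bernoulli_mle(xs):
    """MLE of the Bernoulli parameter: theta_hat = N1 / (N1 + N0),
    where N1 counts the ones and N0 the zeros in the sample xs."""
    n1 = sum(xs)
    return n1 / len(xs)
```

For a sample with three 1s and two 0s this returns 0.6; for a sample that is all 1s it returns 1.0, illustrating the overfitting problem (the "Liverpool wins 100%" estimate) that MAP estimation addresses.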
maximum a posteriori inference for PBDNs, providing state-of-the-art classifica- tion accuracy and interpretable data subtypes near the decision boundaries, while maintaining low computational complexity for out-of-sample prediction. In Part IV of his masterpiece Bernoulli proves the law of large numbers which is one. For estimation methods such as numerical integration, constructing these predictions and estimates of their. This function reaches its maximum at $$\hat{p}=1$$. ch ABSTRACT This work presents an approach for the recognition of the. Before reading this lecture, you might want to revise the lectures about maximum likelihood estimation and about the Poisson distribution. REPRESENTING INFERENTIAL UNCERTAINTY IN DEEP NEURAL NETWORKS THROUGH SAMPLING Patrick McClure & Nikolaus Kriegeskorte MRC Cognition and Brain Sciences Unit University of Cambridge Cambridge, UK fpatrick. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. The reason of introducing MAP in the context of comparing MLE and BPE is that MAP can be treated as an intermediate step between MLE and BPE, which also takes prior into account. Statistical Machine Learning CHAPTER 12. select() function is found in the LearnBayes package. binomial, say) and the desired estimator (regularized maximum likelihood, or Bayesian maximum a posteriori/posterior mean, etc. BAYESIAN INFERENCE where b = S n/n is the maximum likelihood estimate, e =1/2 is the prior mean and n = n/(n+2)⇡ 1. Jacob Bernoulli was the brother of Johann Bernoulli and the uncle of Daniel Bernoulli. The moral of the story is that full Bayesian inference is insensitive to parameterization as long as the approprieate Jacobian adjustment is applied. The procedure is formulated as finding maximum a posteriori estimates within a probabilistic generative model. Don't worry if you don't know all these words, everything will be explained. tails and 2 heads before starting the experiment). 
And now let's apply what we've learned and play with our coins. Estimating the parameters involves alternating between estimating the deformations that match tissue class images of individual subjects to template, and updating the. •Also, scaling the log likelihood by a positive constant β/ does not alter the location of the maximum with respect to w, so it can be ignored •Result: Maximize. maximum a posteriori estimate, MAP estimate = MAP-estimaatti, posteriorijakauman moodi maximum likelihood estimate (MLE) = suurimman uskottavuuden estimaatti point estimate = piste-estimaatti estimate (v. To generate a document, NB classifier first. 5 and negative if s(x) < 0. elbo (log_like, KL = kl, N = 10000) The main differences here are that reg is now kl , and we use the elbo loss function. Mehryar Mohri - Speech Recognition page Courant Institute, NYU ASR Characteristics Vocabulary size: small (digit recognition, 10), medium (Resource Management, 1000), large (Broadcast News, 100,000), very large (+1M). 1 BMF-BD: Bayesian Model Fusion on Bernoulli Distribution for Efficient Yield Estimation of Integrated Circuits Chenlei Fang1, Fan Yang1,*, Xuan Zeng1,* and Xin Li1,2 1State Key Lab of ASIC & System, Microelectronics Department, Fudan University, Shanghai, P. Say that the probability of the temperature outside your window for each of the 24 hours of a day x2R24 depends on the season 2fsummer, fall, winter, springg, and that you know the. Introduction to Detection Theory called the maximum a posteriori (MAP) rule: simple hypotheses, the prior pmf of θ is the Bernoulli pmf. rules include maximum likelihood and maximum a posteriori often involves optimisation, which may be difficult in practice prediction is simple 2 Maintain the full Bayesian posterior keep the full posterior distribution generally involves integration or summation, which may be (very) difficult in practice. 
We demonstrate that in comparison to clustering methods, binary pursuit can reduce both the number of missed spikes and the rate of false positives. Recently, we have proposed a deconvolution method that… Then pick a hypothesis by maximum likelihood estimation (MLE) or Maximum A Posteriori (MAP). Example: roll a weighted die; the weights for each side (θ) define how the data are generated; use MLE on the training data to learn h(x, y) = p(x, y), h ∈ H. You can think of Monte Carlo methods as algorithms that help you obtain a desired value by… (Jacob Bernoulli, "The Art of Conjecturing", 1713) "It seems that to make a correct conjecture about any event whatever, it is necessary to calculate exactly the number of possible cases and then to determine how much more likely it is that one case will occur than another." a−1 and b−1 can be seen as prior counts of heads and tails. Introduction to stochastic processes. Maximum a Posteriori (MAP) parameter estimate: choose the parameters with the largest posterior probability. Under the Bernoulli model with i.i.d. observations, L(θ) = p(D) = θ^{N_H} (1−θ)^{N_T}. This takes very small values (in this case, L(0.5) = 0.5¹⁰⁰ ≈ 7.9 × 10⁻³¹). (In practice, the MDL estimator is usually being approximated too, since only a local maximum is determined.) Dashti and T. … The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. Graph SLAM [6] and Square Root Smoothing and Mapping (SAM) [7] are prime examples of MAP-based estimation. We also introduce the maximum likelihood estimate and show that it coincides with the least squares estimate. Maximum Likelihood Estimation (MLE): general MLE strategy. BernoulliNB implements the naive Bayes training and classification algorithms for data that is distributed according to multivariate Bernoulli distributions. What is maximum-likelihood estimation (MLE) exactly, and how does it relate to NHST and Bayesian data analysis?
Hi, this does not exactly have to be an ELI5 and I am not a beginner in statistics, but I would appreciate if someone could put in simple words, what exactly maximum-likelihood estimation is. Stigler that Thomas Bayes was perhaps anticipated in the discovery of the result that today bears his name is exposed to further scrutiny here. (In practice, the MDL estimator is usually being approximated too, since only a local maximum is determined. is expensive. Maximum Likelihood Estimation (MLE) General MLE strategy. Maximum a posteriori (MAP) Estimation MAQ Probability of sequence of events In general, for a sequence of two events X 1 and X 2, the joint probability is P (X 1; 2) = p 2j 1) 1) (2) Since we assume that the sequence is iid (identically and independently distributed), by de nition p(X 2jX 1) = P(X 2). In the lecture entitled Maximum likelihood we have demonstrated that, under certain assumptions, the distribution of the maximum likelihood estimator of a vector of parameters can be approximated by a multivariate normal distribution with mean and covariance matrix where is the log-likelihood of one observation from the. ) Since data is usually samples, not counts, we will use the Bernoulli rather than the binomial. Chebychev Inequality. A 95 percent posterior interval can be obtained by numerically finding. Both Maximum Likelihood Estimation (MLE) and Maximum A Posterior (MAP) are used to estimate parameters for a distribution. Maximum A Posteriori. For example, suppose you are interested in the heights of Americans. 4: Bernoulli Maximum Likelihood 2. Expectation-maximization (EM) is a method to find the maximum likelihood estimator of a parameter of a probability distribution. In Bayesian statistics, a maximum a posteriori probability (MAP) estimate is an estimate of an unknown quantity, that equals the mode of the posterior distribution. 
Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV. We've done n independent Bernoulli trials to evaluate the fairness of our coin. Maximum a posteriori estimation. machine learning press esc to navigate slides. been solved as a maximum a posteriori (MAP) estimation problem by modeling it as a factor graph in recent years [5]. This is useful in many applications. The MDL or MAP (maximum a posteriori) estimator is both a common approximation for the Bayes mixture and interesting for its own sake: Use the model with the largest product of prior and evidence. Further-more, we consider the conditional intensity function to be the logistic map of a second-order stationary process with sparse frequency content. The function. Bernoulli Process. According to the ergodic hypothesis, given an infinite universe, every event with non-zero probability, however small, shall eventually occur. coupled ensemble is essentially equal to the maximum a-posteriori (MAP) threshold of the underlying ensemble when transmission takes place over a binary erasure channel (BEC) [1]. Welcome to The Little Book of LDA. Ø Both estimators pick parameters with high posterior probability. In Bayesian statistics, a maximum a posteriori probability (MAP) estimate is an estimate of an unknown quantity, that equals the mode of the posterior distribution. 1 Maximum likelihood estimation. rules include maximum likelihood and maximum a posteriori often involves optimisation, which may be difficult in practice prediction is simple 2 Maintain the full Bayesian posterior keep the full posterior distribution generally involves integration or summation, which may be (very) difficult in practice. , the probability of a single coin ip coming up heads. The different naive Bayes classifiers differ mainly by the assumptions they make regarding the distribution of. 
In this paper, we provide a counterexample which shows that in general this claim is false. A playlist of these Machine Learning videos is available here:. - 1 - Lisa Yan CS109 Lecture Notes #23 November 13, 2019 Maximum A Posteriori Based on a chapter by Chris Piech Maximum A Posteriori Estimation MLEisgreat. Maximum a posteriori estimation. Central Limit Theorem- Approximate. • Density estimation: – Maximum likelihood (ML) – Maximum a posteriori (MAP) Beta distribution “fits” Bernoulli trials - conjugate choices 1 1 1 2 2 1 2. Bayesian approach and the maximum a-posteriori (MAP) approximation. Bernoulli distribution Maximum a Posteriori (MAP) Estimation Choose parameter that is most probable given observed data and prior belief b. A matrix containing the maximum a posteriori estimates for all individuals at each locus. Suppose you observe 3 heads and 2 tails. by Marco Taboga, PhD. The book covers material taught in the Johns Hopkins Biostatistics Advanced Statistical Computing course. aalto-logo-en-3 Parametric Methods Classi cation and Regression Estimators Gaussian Modeling Naive Bayes Classi er for Binary Data Predictions from the Posterior Probability Density. Maximum a posteriori estimates (MAP) Be smart about. AB - Jump Markov linear systems (JMLSs) are linear systems whose parameters evolve with time according to a finite state Markov chain, Given a set of observations, our aim is to estimate the states of the finite. (In ML estimation, the prior over models is assumed to be uniform. ML for Bernoulli trials. Problem: find most likely Bernoulli distribution,. What's your guess for {$\theta$}? If you guessed 3/5, you might be doing MLE, which is simply finding a model that best explains your experiment:. Statistical Machine Learning CHAPTER 12. Chapter 9 The exponential family: Conjugate priors Within the Bayesian framework the parameter θ is treated as a random quantity. 
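Combining the Bernoulli likelihood with its conjugate Beta(a, b) prior gives a Beta(a + heads, b + tails) posterior, whose mode is the MAP estimate. A sketch of that closed form, assuming the updated parameters both exceed 1 so the mode is defined (the default Beta(2, 2) prior here is an illustrative assumption):

```python
def bernoulli_map(n_heads, n_tails, a=2.0, b=2.0):
    """MAP estimate of the Bernoulli parameter under a Beta(a, b) prior:
    the mode of the Beta(a + n_heads, b + n_tails) posterior."""
    return (a + n_heads - 1.0) / (a + b + n_heads + n_tails - 2.0)
```

With 3 heads and 2 tails and a Beta(2, 2) prior this gives 4/7 ≈ 0.571, pulled toward the prior mean 1/2 relative to the MLE 3/5; with a uniform Beta(1, 1) prior it recovers the MLE exactly.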
It is so common and popular that sometimes people use MLE even without knowing much of it. (k), modelled as a Bernoulli-Gaussian (B-G) signal, which was distorted by a linear time-invariant system v(k). statistics define a 2D joint distribution. The multinomial and Bernoulli models di er on how the. A 95 percent posterior interval can be obtained by numerically finding. elbo (log_like, KL = kl, N = 10000) The main differences here are that reg is now kl , and we use the elbo loss function. A crash course in probability and Naïve Bayes classification Maximum a-posteriori and maximum likelihood multivariate Bernoulli model for our e-mails, with. A crash course in probability and Naïve Bayes classification Maximum a-posteriori and maximum likelihood multivariate Bernoulli model for our e-mails, with. , Bernoulli) Likelihood. In MAP, instead of returning to maximum likelihood estimate, we allow prior to influence the choice of point estimate. Jacob Bernoulli was the brother of Johann Bernoulli and the uncle of Daniel Bernoulli. When discussing Naive Bayes, I've noticed that lecturers typically say that we really wan. How to get data into R and how to get data out of R. Introduction to Bayesian Decision Theory the main arguments in favor of the Bayesian perspective can be found in a paper by Berger whose title, “Bayesian Salesmanship,” clearly reveals. Two approaches to parameter estimation: Maximum likelihood estimation: is a fixed point (point estimation) Bayesian estimation: is a random variable whose prior uncertainty (represented as prior distribution) can be incorporated. Expectation-maximization (EM) is a method to find the maximum likelihood estimator of a parameter of a probability distribution. Created Date: 10/9/2006 4:09:39 PM. Instead the IWM relies on a sub-optimal but. 1 Introduction to recursive Bayesian filtering Michael Rubinstein IDC Problem overview • Input – ((y)Noisy) Sensor measurements • Goal. 
Maximum A Posteriori Probability (MAP): in the case of MLE, we maximized the likelihood to estimate the parameter. Bernoulli Naive Bayes. Additionally, you may have cases where the estimate lies on the boundary of the parameter space. Basic probability theory. Form of the conditional distribution p(y|x) and the decision boundary. ML for Bernoulli trials. Empirical evidence of this phenomenon for BMS channels has been observed in [2], [3]. Similar to part 1, you assume that the features are conditionally independent given the class and compute the log-likelihood to avoid underflow. Maximum Likelihood Estimation has immense importance in almost every machine learning algorithm. Consider i.i.d. random variables, where X_i ~ Bernoulli(p). Set the derivative of the NLL to 0, and solve for p. Cards: 52-card deck. The method of maximum a posteriori estimation then estimates the parameter as the mode of the posterior distribution of this random variable; the denominator of the posterior distribution (the so-called marginal likelihood) does not depend on the parameter and therefore plays no role in the optimization. Categorical * Dirichlet = Dirichlet. A playlist of these machine learning videos is available here. Assume page requests occur every 1-ms interval according to independent Bernoulli trials with probability of success p. L(x) ∝ p(x|y) ∝ p(y|x)P(x). When t is known, the linearity of the model (equation 1) and the Gaussian… The procedure is formulated as finding maximum a posteriori estimates within a probabilistic generative model. These notes were written for the undergraduate course ECE 313: Probability with Engineering Applications, offered by the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Multichannel seismic modeling and inversion based on Markov-Bernoulli random fields, Alon Heimer and Israel Cohen, Technion - Israel Institute of Technology, and Anthony A. Vassiliou. Problem: find the most likely Bernoulli distribution.
Astuteness and elegance are seldom found in his method of presentation and expression, but there is a maximum of integrity. Maximum A Posteriori Estimation: a somewhat more sophisticated model-estimation approach is to choose the coefficient vector $$\beta$$ that maximizes the posterior probability $$p(\beta \mid y)$$; this is the maximum a posteriori (MAP) estimate. This has been solved as a maximum a posteriori (MAP) estimation problem by modeling it as a factor graph in recent years [5]. Summary: The Bernoulli Form elucidates the notion of Platonic Forms in describing how a motley crew of Forms—including Delphi, forecasting, integration, utility, optimization, efficiency and complementarity—come together to form The Bernoulli Model. The ML estimate for θ is denoted θ̂. What's your guess for {$\theta$}? If you guessed 3/5, you might be doing MLE, which is simply finding a model that best explains your experiment. Maximum likelihood principle (ML), maximum a posteriori (MAP); Gaussian distribution, 1-D case; bias and variance of estimators; the maximum likelihood estimate of the variance is biased. If you already know some of the terms, then you can skip these parts. Statistical Machine Learning, Chapter 12. MAP (Maximum A Posteriori) estimation: determine the parameters/class with maximum posterior probability, argmax_{𝜽,𝒚} 𝑃(𝜽, 𝒚 | 𝑫). Probability of a sequence of events: in general, for a sequence of two events X_1 and X_2, the joint probability is P(X_1, X_2) = p(X_2|X_1) p(X_1). Since we assume that the sequence is i.i.d. (identically and independently distributed), by definition p(X_2|X_1) = P(X_2). likelihood = Bernoulli(probs = net); log_like = likelihood.log_prob(Y_); loss = ab.elbo(log_like, KL = kl, N = 10000). The estimate is 0.2, as it was above, which is our estimate for θ_mu. And as usual, we will carry out a few experiments.
Mehryar Mohri - Speech Recognition page Courant Institute, NYU ASR Characteristics Vocabulary size: small (digit recognition, 10), medium (Resource Management, 1000), large (Broadcast News, 100,000), very large (+1M). edu Abstract We consider the problem of classifying a hotel review as a positive or negative and thereby analyzing the sentiment of a customer. Among these, the multinomial NB [11] tends to be particularly favored in TC. We will demonstrate this on four models: linear regression, logistic regression, Neural networks, and Gaussian process. Maximum-likelihood and Bayesian parameter estimation. • then pick a hypothesis by maximum likelihood estimation (MLE) or Maximum A Posteriori (MAP) • example: roll a weighted die • weights for each side ( ) define how the data are generated • use MLE on the training data to learn h(x, y) p(x, y) h H T. Astuteness and elegance are seldom found in his method of presentation and expression, but there is a maximum of integrity. Agapiou, M. The prior pa-rameters a0 and b0 assign a beta prior distribution to each outcome probability. maximum a posteriori formulas. MLE is also widely used to estimate the parameters for a Machine Learning model, including Naïve Bayes and Logistic regression. 1 Maximum A Posteriori (MAP) Estimation As most operations involving Bayesian posterior are intractable, we turn to point estimate. This paper proposes a maximum a posteriori (MAP) scheme for the transduction. A crash course in probability and Naïve Bayes classification Maximum a-posteriori and maximum likelihood multivariate Bernoulli model for our e-mails, with. What's your guess for {$\theta$}? If you guessed 3/5, you might be doing MLE, which is simply finding a model that best explains your experiment:. 
These signals are modeled as random Bernoulli-Gaussian processes, and their unsupervised restoration requires (i) estimation of the hyperparameters that control the stochastic models of the input and noise signals and (ii) deconvolution of the pulse process. Don't worry if you don't know all these words; everything will be explained. The latter include maximum a posteriori estimation of the system state using the approximate derivatives of the posterior density and the approximation of functionals of it, for example, Shannon's entropy. On least squares estimators under Bernoulli-Laplacian mixture priors, Aleksandra Pižurica and Wilfried Philips, Ghent University, Image Processing and Interpretation Group, Sint-Pietersnieuwstraat 41, B-9000 Ghent, Belgium. If our experiment is a single Bernoulli trial and we observe X = 1 (success), then the likelihood function is L(p; x) = p. Mehryar Mohri - Speech Recognition, Courant Institute, NYU. ASR characteristics, vocabulary size: small (digit recognition, 10), medium (Resource Management, 1,000), large (Broadcast News, 100,000), very large (1M+). The sparse recovery problem x⋆ = argmin_x {‖y − Dx‖²₂ + λ‖x‖₀} can be regarded as a limit case of a general maximum a posteriori (MAP) problem involving Bernoulli-Gaussian variables. Density estimation: maximum a posteriori probability (MAP); the Beta distribution "fits" Bernoulli trials (conjugate choices). Maximum a posteriori estimate. Trinity of parameter estimation and data prediction (Avinash Kak): maximum a posteriori (MAP) for p for the same Bernoulli experiment. I would think that the logic goes the opposite direction: one first chooses a loss. The MDL or MAP (maximum a posteriori) estimator is both a common approximation for the Bayes mixture and interesting in its own right: use the model with the largest product of prior and evidence. Begin with some initial configuration y₀ ∈ F; for i = 1, 2, 3, …, draw a local modification y′ ∈ F from q.
The posterior distribution is (according to Bayes' rule) equal to the product of the (binomial) likelihood and the (beta) prior, divided by a normalizing constant. The basic intuition behind the MLE is that the estimate which explains the data best will be the best estimator. Bayesian inference: b̂ = S_n/n is the maximum likelihood estimate, ẽ = 1/2 is the prior mean, and λ_n = n/(n+2) ≈ 1 for large n. The Prior and Posterior Distribution: An Example. ML for Bernoulli trials. The conjugate prior for the Bernoulli distribution is the Beta distribution, given as f(x; α, β) = Γ(α+β)/(Γ(α)Γ(β)) · x^{α−1}(1−x)^{β−1}. Derive the MAP estimates of the multivariate Bernoulli model if we use the Beta distribution as a prior for the class-conditional word distributions P(w|C_i). In this post, I'll introduce the so-called "Bayesian estimator" point estimate for the beta priors. Vassiliou, GeoEnergy. Summary: we introduce a multichannel blind deconvolution algorithm for seismic signals based on Markov-Bernoulli random field modeling. Say that the probability of the temperature outside your window for each of the 24 hours of a day, x ∈ R²⁴, depends on the season s ∈ {summer, fall, winter, spring}, and that you know the… Maximum a posteriori (MAP) parameter estimation. Actually, it is incredibly simple to do Bayesian logistic regression. Suppose you observe 3 heads and 2 tails. MLE is also widely used to estimate the parameters for a machine learning model, including Naïve Bayes and logistic regression. Maximum likelihood and least squares: maximize the log likelihood with respect to w; since the last two terms don't depend on w, they can be omitted. In this experiment, we introduce another well-known estimator, the maximum a posteriori probability (MAP) estimator.
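As a numerical sanity check (my own sketch, not from the original text), one can verify on a grid that the normalized product of a Beta(2, 2) prior and the Bernoulli likelihood for 3 heads and 2 tails matches the closed-form conjugate update, a Beta(5, 4) posterior:

```python
import math

def beta_pdf(x, a, b):
    """Beta(a, b) density computed via log-gamma to avoid overflow."""
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_norm)

a, b, heads, tails = 2.0, 2.0, 3, 2
grid = [i / 1000.0 for i in range(1, 1000)]

# Unnormalized posterior: prior(theta) * theta^heads * (1 - theta)^tails.
unnorm = [beta_pdf(t, a, b) * t**heads * (1 - t)**tails for t in grid]
z = sum(unnorm) * 0.001          # crude Riemann approximation of the evidence
posterior = [u / z for u in unnorm]

# Closed-form conjugate update: Beta(a + heads, b + tails).
closed = [beta_pdf(t, a + heads, b + tails) for t in grid]
max_err = max(abs(p - c) for p, c in zip(posterior, closed))
```

The normalizing constant z approximates the marginal likelihood B(a + heads, b + tails)/B(a, b) = 3/140, and the grid posterior agrees with the Beta(5, 4) density up to the discretization error.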
Bayesian Nonparametrics: Models Based on the Dirichlet Process. Alessandro Panella, Department of Computer Science, University of Illinois at Chicago, Machine Learning Seminar Series, February 18, 2013. Maximum Likelihood Estimation. MLE principle: choose parameters that maximize the likelihood function. This is one of the most commonly used estimators in statistics and is intuitively appealing. Example: MLE in binomial data. It can be shown that the MLE for the probability of heads is given by the empirical frequency of heads (which coincides with what one would expect). And so we'll not get any new information. Under the Bernoulli model with i.i.d. observations, suppose we provide an estimate for a parameter that has a true value. Mathématique, Rue des Saints-Pères, 75006 Paris, France. Received 27 February 1992; revised 19 June and 29 October 1992. Abstract. Necessary conditions for the maximum can be obtained by zeroing the gradient with respect to $\theta$: $\nabla_\theta \sum_{j=1}^{n} \ln p(x_j|\theta) = 0$. Points zeroing the gradient can be local or global maxima depending on the form of the distribution. Maximum-likelihood and Bayesian parameter estimation. If you equate the derivative of the log-likelihood with zero, you get $\hat{\theta} = \frac{N_1}{N_1+N_0}$. Jacob Bernoulli was the brother of Johann Bernoulli and the uncle of Daniel Bernoulli. Maximum a posteriori deconvolution of ultrasonic data with applications in nondestructive testing: multiple transducer and robustness issues. Bernoulli also studied the exponential series, which came out of examining compound interest.
Maximum A Posteriori (MAP) Estimation. More specifically, finding $f_Y(y)$ is usually done using the law of total probability, which involves integration or summation, such as the one in Example 9. The input, modelled as a Bernoulli-Gaussian (B-G) signal, was distorted by a linear time-invariant system v(k). In this article, I will present the idea behind, and the approach to, estimating model parameters by MLE or MAP estimation. Welcome to The Little Book of LDA. Department of Electrical Engineering & Computer Science. Begin with some initial configuration y0 ∈ F. The maximum a posteriori parameter estimation technique. The possible probabilities of the Bernoulli distribution form the 1-simplex. The distribution is a special case of a "multivariate Bernoulli distribution"[4] in which exactly one of the k 0-1 variables takes the value one. The prior parameters a0 and b0 assign a beta prior distribution to each outcome probability.
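Tying the fragments above together: for a Bernoulli parameter p with a Beta(a, b) prior, the MLE, the MAP estimate (posterior mode) and the posterior mean all have simple closed forms. A minimal sketch (the 3-heads/2-tails counts echo the example above; function names are my own):

```python
def bernoulli_mle(heads, tails):
    """Maximum likelihood estimate of p from Bernoulli observations."""
    return heads / (heads + tails)

def bernoulli_map(heads, tails, a, b):
    """MAP estimate of p under a Beta(a, b) prior: the mode of the
    Beta(a + heads, b + tails) posterior (valid when a + heads > 1
    and b + tails > 1, or for the uniform Beta(1, 1) prior)."""
    return (heads + a - 1) / (heads + tails + a + b - 2)

def posterior_mean(heads, tails, a, b):
    """Posterior mean of p under a Beta(a, b) prior."""
    return (heads + a) / (heads + tails + a + b)

# 3 heads and 2 tails, as in the example above.
print(bernoulli_mle(3, 2))         # 0.6
print(bernoulli_map(3, 2, 1, 1))   # 0.6 (uniform prior: MAP equals MLE)
print(posterior_mean(3, 2, 1, 1))  # 4/7 ≈ 0.571
```

With the uniform Beta(1, 1) prior the posterior mean is (S_n + 1)/(n + 2), matching the shrinkage formula quoted above.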
# Answer to Question #114406 in Microeconomics for Roxie Question #114406 You are the manager of a monopoly. A typical consumer's inverse demand function for your firm's product is P = 250 - 4Q, and your cost function is TC = 10Q. A. MC is fixed and is equal to $10 (MC = AC = S). MR = 250 - 8Q. (P = price, Q = quantity of output, TC = total cost, MC = marginal cost, MR = marginal revenue, S = supply) 1) What price should the company choose to get maximum profit if the company uses an ordinary pricing strategy? 2) Now suppose the company is thinking about using price discrimination for a lower-income group of customers. If the company offers a discount of $30 in price to the lower-income group, how much additional profit will the company earn? Illustrate graphically. 3) Explain the conditions needed to apply the price discrimination strategy? Answer (2020-05-08): TR = ∫ MR dQ = 250Q - 4Q²; Pr = TR - TC = 250Q - 4Q² - 10Q = 240Q - 4Q². 1) Pr' = 240 - 8Q = 0, so Q = 30 and P = 250 - 4×30 = 130. 2) P = 100: 100 = 250 - 4Q, so Q = 37.5 and Pr = 240×37.5 - 4×37.5² = 3375; ΔPr = 3600 - 3375 = 225. 3) For the implementation of price discrimination by a monopolist, it is necessary that the direct price elasticity of demand for the product differs significantly between buyers; that these customers are easily identifiable; and that further resale of the goods by buyers is not possible.
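The answer's arithmetic can be checked in a few lines; a quick sketch reproducing its numbers (demand P = 250 − 4Q and cost TC = 10Q come from the question; the discounted case follows the answer's simplification of lowering the price for everyone):

```python
# Ordinary monopoly pricing: set MR = MC, i.e. 250 - 8Q = 10.
q_star = (250 - 10) / 8              # 30
p_star = 250 - 4 * q_star            # 130
profit_star = (p_star - 10) * q_star # (130 - 10) * 30 = 3600

# Price lowered by 30 (the answer's simplification): P = 100.
p_disc = p_star - 30                 # 100
q_disc = (250 - p_disc) / 4          # 37.5
profit_disc = (p_disc - 10) * q_disc # 90 * 37.5 = 3375

print(profit_star - profit_disc)     # 225.0
```

Note that under this simplification the 225 is a profit *reduction*, since the discounted price applies to all units rather than only to the lower-income segment.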
HasLeftZero - Maple Help Magma HasLeftZero test whether a magma has a left zero element Calling Sequence HasLeftZero( m ) Parameters m - Array representing the Cayley table of a finite magma Description • The HasLeftZero( m ) command returns true if the magma m has a left zero element, and returns false otherwise. Examples > $\mathrm{with}\left(\mathrm{Magma}\right):$ > $m≔⟨⟨⟨1|2|3⟩,⟨2|2|2⟩,⟨3|1|1⟩⟩⟩$ ${m}{≔}\left[\begin{array}{ccc}{1}& {2}& {3}\\ {2}& {2}& {2}\\ {3}& {1}& {1}\end{array}\right]$ (1) > $\mathrm{HasLeftZero}\left(m\right)$ ${\mathrm{true}}$ (2) Compatibility • The Magma[HasLeftZero] command was introduced in Maple 15.
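Outside Maple, the same test is a one-liner: a left zero is an element z with z·x = z for every x, i.e. a row of the Cayley table that is constant and equal to its own row label. A Python sketch (the function name and the 1-based labelling convention are my own, chosen to match the Maple example):

```python
def has_left_zero(table):
    """Return True if the magma given by a 1-based Cayley table has a
    left zero element z, i.e. z * x == z for all x (row z is constantly z)."""
    return any(all(v == i for v in row)
               for i, row in enumerate(table, start=1))

# The Cayley table from the Maple example above.
m = [[1, 2, 3],
     [2, 2, 2],   # row 2 is constantly 2, so 2 is a left zero
     [3, 1, 1]]
print(has_left_zero(m))  # True, matching the Maple output
```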
1. ## [SOLVED] integration $\displaystyle \int \frac{-4x}{4x^4+4x^2+1}$ $\displaystyle \int \frac{-4x}{(2x^2+1)(2x^2+1)}$ What would my next steps be to complete this integral? 2. Use the substitution $\displaystyle 2x^2+1=t\Rightarrow 4xdx=dt$ 3. $\displaystyle \int\frac{-4x}{t^2}*\frac{dt}{4x}$ $\displaystyle \int\frac{1}{(2x^2+1)(2x^2+1)}$ is this correct? 4. And, if a picture helps... Straight continuous lines differentiate/integrate with respect to x, the straight dashed line with respect to the dashed balloon expression - as though it were a variable like u. Don't integrate - balloontegrate! Balloon Calculus: worked examples from past papers 5. As my answer I got $\displaystyle y=\frac{1}{2x^2+1}+C$ 6. Yep 7. thanks 8. Originally Posted by ronaldo_07 As my answer I got $\displaystyle y=\frac{1}{2x^2+1}+C$ You can always differentiate it and see if you get back the integrand. 9. Yes - and of course the differentiation is the same picture (completed) but read downwards instead of up. Don't integrate - balloontegrate! http://www.ballooncalculus.org/examples.png
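The thread's result can also be verified symbolically; a quick sketch using SymPy (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
integrand = -4*x / (4*x**4 + 4*x**2 + 1)

# The denominator factors as (2x^2 + 1)^2, so the substitution
# t = 2x^2 + 1, dt = 4x dx turns the integral into -∫ dt/t^2 = 1/t + C.
F = sp.integrate(integrand, x)

# Differentiating the antiderivative recovers the integrand,
# confirming 1/(2x^2 + 1) + C.
assert sp.simplify(sp.diff(F, x) - integrand) == 0
print(F)
```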
# Error in excited states code Hello, I am starting to use iTensor and I am trying to run the following example (as is): http://itensor.org/docs.cgi?page=formulas/excited_dmrg But I get the following errors when compiling dmrg.cc: In function ‘int main()’: dmrg.cc:41:12: error: ‘class itensor::Sweeps’ has no member named ‘maxdim’ sweeps.maxdim() = 10,20,100,100,200; ^ dmrg.cc:51:10: error: expected unqualified-id before ‘[’ token auto [en0,psi0] = dmrg(H,randomMPS(sites),sweeps,{"Quiet=",true}); ^ dmrg.cc:51:69: error: expected primary-expression before ‘)’ token auto [en0,psi0] = dmrg(H,randomMPS(sites),sweeps,{"Quiet=",true}); ^ dmrg.cc:61:17: error: ‘psi0’ was not declared in this scope wfs.at(0) = psi0; ^ dmrg.cc:67:10: error: expected unqualified-id before ‘[’ token auto [en1,psi1] = dmrg(H,wfs,randomMPS(sites),sweeps,{"Quiet=",true,"Weight=",20.0}); ^ dmrg.cc:67:88: error: expected primary-expression before ‘)’ token auto [en1,psi1] = dmrg(H,wfs,randomMPS(sites),sweeps,{"Quiet=",true,"Weight=",20.0}); ^ dmrg.cc:72:46: error: ‘en0’ was not declared in this scope printfln("\nGround State Energy = %.10f",en0); ^ dmrg.cc:73:47: error: ‘en1’ was not declared in this scope printfln("\nExcited State Energy = %.10f",en1); ^ dmrg.cc:87:56: error: ‘psi1’ was not declared in this scope printfln("\nOverlap <psi0|psi1> = %.2E",inner(psi0,psi1)); ^ dmrg.cc:87:60: error: ‘inner’ was not declared in this scope printfln("\nOverlap <psi0|psi1> = %.2E",inner(psi0,psi1)); The sample codes provided with the source material seem to work well, though. Best regards, Rafael Hi Rafael, This is a compilation issue, as you probably know, so it could be due to a few things. To begin with: 2. what Makefile are you using to compile? Or what compilation flags / instructions? Best regards, Miles commented by (270 points) Hi Miles, 2. 
I am using the same Makefile given in the sample folder, changing only build: dmrg iqdmrg dmrg_table dmrgj1j2 exthubbard idmrg mag excited_states debug: dmrg-g iqdmrg-g dmrg_table-g dmrgj1j2-g exthubbard-g idmrg-g mag-g excited_states-g excited_states: excited_states.o $(ITENSOR_LIBS) $(TENSOR_HEADERS) $(CCCOM) $(CCFLAGS) excited_states.o -o excited_states $(LIBFLAGS) excited_states-g: mkdebugdir .debug_objs/excited_states.o $(ITENSOR_GLIBS) $(TENSOR_HEADERS) $(CCCOM) $(CCGFLAGS) .debug_objs/excited_states.o -o excited_states-g $(LIBGFLAGS) to account for the excited_states.cc code I am running. However, I just tried to run it again and it seems to work now for some reason. I don't think I changed anything in the meantime. Anyway, let me know if anything is wrong with the procedure above. On another note, when compiling I get a few warning messages of the kind warning: ignoring return value of 'int system(const char*)', declared with attribute warn_unused_result [-Wunused-result] Is this expected or is there something I should do to avoid it? Best wishes, Rafael commented by (46.8k points) Hi Rafael, One thing I would recommend is not using the Makefile in the sample folder, which is rather complicated, but instead using the Makefile in the tutorial/project_template folder, which is intended as a sample code you can use to start building a project. We should publicize this folder even better on the website. Warnings are pretty common when compiling C++ and should not be a concern, especially for warnings such as an ignored return value. That one is especially annoying, because if you do 'catch' the return value but then don't use it, you get a new warning about "unused variable". So I would recommend just adding -Wno-unused-result to your compiler flags to turn that one off (because I think an unused variable is actually a more useful warning than an unused return value). Best, Miles commented by (270 points) Hi Miles, Thanks for the reply. That helps a lot!
Best wishes, Rafael Hello Miles, I have the same issue when running the code in https://www.itensor.org/docs.cgi?vers=cppv3&page=formulas/excited_dmrg The error: From line 1604, file itensor.cc div(ITensor) not defined for non QN conserving ITensor div(ITensor) not defined for non QN conserving ITensor make: *** [Makefile:33: build] Aborted I use itensor v3 and run 'make' in the folder '..\itensor\My_programs'. The error occurs when I add the line auto [en1,psi1] = dmrg(H,wfs,randomMPS(sites),sweeps,{"Quiet=",true,"Weight=",20.0});
# Method and a system for automatic evaluation of digital files Imported: 13 Feb '17 | Published: 18 Jan '11 Jocelyn Desbiens USPTO - Utility Patents ## Abstract There is provided a method for automatic evaluation of target files, comprising the steps of building a database of reference files; for each target file, forming a training set comprising files from the database of reference files and building a test set from features of the target file; dynamically generating a learning model from the training set; and applying the learning model to the test set, whereby a value corresponding to the target file is predicted. ## Description ### FIELD OF THE INVENTION The present invention relates to a method and a system for automatic evaluation of digital files. More specifically, the present invention is concerned with a method for dynamic hit scoring. ### BACKGROUND OF THE INVENTION A number of file classification and prediction methods have been developed over the years. Li et al. (US 2004/0231498) present a method for music classification comprising extracting features of a target file; extracting features of a training set; and classifying music signals. Blum et al. (U.S. Pat. No. 5,918,223) describe a method for classifying and ranking the similarity between individual audio files, comprising supplying sets containing the features of classes of sound to a training algorithm yielding a set of vectors for each class of sound; submitting a target audio file to the same training algorithm to obtain a vector for the target file; and calculating the correlation distance between the vector for the target file and the vectors of each class, whereby the class which has the smallest distance to the target file is the class assigned to the target file. Alcade et al. (U.S. Pat. No. 
7,081,579, US 2006/0254411) teach a method and system for music recommendation, comprising the steps of providing a database of references, and extracting features of a target file to determine its parameter vector using an FFT analysis method. Then the distance between the target file's parameter vector and each file's parameter vector of the database of references is determined, to score the target file according to its distance to each file of the database of references via a linear regression method. Foote et al. (US 2003/0205124), Platt et al. (US 2006/0107823) and Flannery et al. (U.S. Pat. No. 6,545,209) present methods for classifying music according to similarity using a distance measure. Gang et al. (US 2003/0089218) disclose a method for predicting musical preferences of a user, comprising the steps of building a first set of information relative to a catalog of musical selections; building a second set of information relative to the tastes of the user; and combining the information of the second set with the information of the first set to provide an expected rating for every song in the catalog. There is a need in the art for a method for dynamic hit scoring. ### SUMMARY OF THE INVENTION More specifically, there is provided a method for automatic evaluation of target files, comprising the steps of building a database of reference files; for each target file, forming a training set comprising files from the database of reference files and building a test set from features of the target file; dynamically generating a learning model from the training set; and applying the learning model to the test set, whereby a value corresponding to the target file is predicted. 
There is further provided a method for automatic evaluation of songs, comprising the steps of building a database of hit songs; for each song to be evaluated, forming a training set comprising songs from the database of hit songs and building a test set from features of the song to be evaluated; dynamically generating a learning model from the training set; and applying the learning model to the test set; whereby a score corresponding to the song to be evaluated is predicted. Other objects, advantages and features of the present invention will become more apparent upon reading of the following non-restrictive description of embodiments thereof, given by way of example only with reference to the accompanying drawings. ### DESCRIPTION OF EMBODIMENTS OF THE INVENTION An embodiment of the method according to an aspect of the present invention generally comprises an analysis step (step 100) and a dynamic scoring step (step 200). The method will be described herein in the case of music files, for example, in relation to the flowchart of FIG. 1. In the analysis step (step 100), a database of reference files is built. In the case of music files, the database of reference files comprises hit songs, for example. A number of files, such as MP3 files or files in another digital format, of songs identified as hits are gathered, and numerical features that represent each one of them are extracted to form n-dimensional vectors of numerical features, referred to as feature vectors, as well known in the art. A number of features, including for example timbre, rhythm and melody frequency, are extracted from the files to yield feature vectors corresponding to each one of them. In one hit-scoring example, 84 features were extracted. The feature vectors are stored in a database along with relevant information, such as, for example, artist's name, genre, etc. (112). 
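The patent does not specify its 84 features. Purely as an illustrative stand-in, a small feature vector of generic audio descriptors can be computed with NumPy (the descriptors and names below are my own, not the patent's):

```python
import numpy as np

def feature_vector(signal, sr):
    """Toy 3-feature vector for an audio frame: RMS level, zero-crossing
    rate (crossings per second) and spectral centroid (Hz). Illustrative
    only; a real system would extract many more features."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    zcr = float(np.sum(np.abs(np.diff(np.sign(signal)))) / 2) * sr / len(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    return np.array([rms, zcr, centroid])

sr = 8000
t = np.arange(sr) / sr                  # one second of samples
tone = np.sin(2 * np.pi * 440 * t)      # a pure 440 Hz sine
fv = feature_vector(tone, sr)
print(fv)  # [rms, zero-crossing rate, spectral centroid]
```

For a pure tone the centroid sits at the tone's frequency and the RMS of a unit sine is 1/√2, which gives a quick sanity check on the extraction.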
Each MP3 file is rated according to a predefined scheme, and also stored in a database (113). The reference files, here exemplified as hit-song MP3s, are selected according to a predefined scheme of rating. In the case of hit songs, scoring may originate from a number of sources, including for example compilation of top-50 rankings, sales, air play, etc. For each target file, i.e. each song to be assessed in the present example, numerical features that represent the target file are extracted to form corresponding feature vectors (114). The dynamic scoring step (step 200) generally comprises a learning phase and a predicting phase. In the learning phase, files from the reference database against which the target file will be assessed are selected into a training set, which represents a dynamical neighborhood. The training set is built by finding the n closest feature vectors to the target file's feature vector in the database of feature vectors of the hits (116). The distance/similarity between the target file's feature vector and each feature vector of the database of hits may be determined by using the Euclidean distance, the cosine distance or the Jensen-Shannon distribution similarity, as well known to people in the art. The training set is then simplified by reducing its dimension (118), by using either Principal Component Analysis (PCA) or Singular Value Decomposition (SVD), for example, or non-linear regression techniques known in the art such as (but not limited to): Neural Networks, Support Vector Machines, Generalized Additive Model, Classification and Regression Tree, Multivariate Adaptive Regression Splines, Hierarchical Mixture of Experts, Supervised Principal Component Analysis. 
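Steps 116 and 118, forming the dynamic neighborhood and reducing its dimension, can be sketched with plain NumPy; two of the distance options named above are shown, and PCA is done through the SVD (all data is synthetic and function names are my own):

```python
import numpy as np

def nearest_neighbors(target, refs, n, metric="euclidean"):
    """Indices of the n reference feature vectors closest to `target`
    (step 116). refs is an (m, d) array of reference feature vectors."""
    if metric == "euclidean":
        d = np.linalg.norm(refs - target, axis=1)
    elif metric == "cosine":
        d = 1.0 - refs @ target / (
            np.linalg.norm(refs, axis=1) * np.linalg.norm(target))
    else:
        raise ValueError(f"unsupported metric: {metric}")
    return np.argsort(d)[:n]

def pca_reduce(X, L):
    """Project the rows of X onto its first L principal components
    (step 118), returning the reduced data and the components."""
    Xc = X - X.mean(axis=0)                 # zero empirical mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:L].T, Vt[:L]

rng = np.random.default_rng(0)
refs = rng.normal(size=(200, 84))           # 84 features, as in the example above
target = refs[7] + 0.01 * rng.normal(size=84)

idx = nearest_neighbors(target, refs, n=20)
train, components = pca_reduce(refs[idx], L=5)
print(idx[0], train.shape)  # 7 (20, 5)
```

The same `components` matrix would then be reused to project the target's feature vector, so that the training set and the test set live in the same reduced space (as required at step 142).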
PCA is an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. PCA can be used for dimensionality reduction in a data set while retaining those characteristics of the data set that contribute most to its variance, by keeping lower-order principal components and ignoring higher-order ones. Such low-order components often contain the "most important" aspects of the data, but this is not necessarily the case, depending on the application. The main idea behind principal component analysis is to represent multidimensional data with a smaller number of variables while retaining the main features of the data. It is inevitable that by reducing dimensionality some features of the data will be lost. It is hoped that the lost features are comparable with the "noise" and do not tell much about the underlying population. PCA is used to project multidimensional data to a lower-dimensional space while retaining as much as possible of the variability of the data. This technique is widely used in many areas of applied statistics. It is natural since interpretation and visualization in a lower-dimensional space are easier than in a higher-dimensional space. In particular, if dimensionality can be reduced to two or three, plots and visual representations may be used to try to find some structure in the data. PCA is one of the techniques used for dimension reduction, as will now be briefly described. Suppose M is an m-by-n matrix whose entries come from the field K, which is either the field of real numbers or the field of complex numbers. 
Then there exists a factorization of the form M = UΣV*, where U is an m-by-m unitary matrix over K, the matrix Σ is m-by-n with nonnegative numbers on the diagonal and zeros off the diagonal, and V* denotes the conjugate transpose of V, an n-by-n unitary matrix over K. Such a factorization is called a singular-value decomposition of M. The matrix V thus contains a set of orthonormal "input" or "analysing" basis vector directions for M. The matrix U contains a set of orthonormal "output" basis vector directions for M. The matrix Σ contains the singular values, which can be thought of as scalar "gain controls" by which each corresponding input is multiplied to give a corresponding output. A common convention is to order the values $\Sigma_{i,i}$ in non-increasing fashion. In this case, the diagonal matrix Σ is uniquely determined by M (though the matrices U and V are not). Assuming zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), the principal component w1 of a data set x can be defined as: $$w_1 = \arg\max_{\|w\|=1} \mathrm{var}\{w^T x\} = \arg\max_{\|w\|=1} E\{(w^T x)^2\}$$ With the first k−1 components, the k-th component can be found by subtracting the first k−1 principal components from x: $$\hat{x}_{k-1} = x - \sum_{i=1}^{k-1} w_i w_i^T x$$ and by substituting this as the new data set to find a principal component in $$w_k = \arg\max_{\|w\|=1} E\{(w^T \hat{x}_{k-1})^2\}.$$ The PCA transform is therefore equivalent to finding the singular value decomposition of the data matrix X, $X = W \Sigma V^T$, and then obtaining the reduced-space data matrix Y by projecting X down into the reduced space defined by only the first L singular vectors, $W_L$: $$Y = W_L^T X = \Sigma_L V_L^T$$ The matrix W of singular vectors of X is equivalently the matrix W of eigenvectors of the matrix of observed covariance $C = XX^T = W \Sigma^2 W^T$. It is often the case that different variables have completely different scaling. 
For example, one of the variables may have been measured in meters and another one in centimeters (by design or accident). The eigenvalues of the matrix are scale-dependent. If one column of the data matrix X is multiplied by some scale factor (say s), then the variance of this variable is increased by s², and this variable can dominate the whole covariance matrix and hence the eigenvalues and eigenvectors. It is necessary to take precautions when dealing with the data. If it is possible to bring all data to the same scale using some underlying physical properties, then it should be done. If the scale of the data is unknown, then it is better to use the correlation matrix instead of the covariance matrix. It is in general a recommended option in many statistical packages. It should be noted that since scale affects eigenvalues and eigenvectors, the interpretation of the principal components derived by these two methods can be completely different. In real-life applications care should be taken when using the correlation matrix. Outliers in the observations can affect the covariance and hence the correlation matrix. It is recommended to use robust estimation for the covariance (in a simple case, by rejecting outliers). When using robust estimates the covariance matrix may not be positive semi-definite and some eigenvalues might be negative. In many applications this is not important, since only the principal components corresponding to the largest eigenvalues are of interest. In either case, the number of significant variables (principal axes or singular axes) is kept to a minimum. There are many recommendations for the selection of dimension, as follows. i) The proportion of variance: if the first two components account for 70%-90% or more of the total variance, then further components might be irrelevant (see the problem with scaling above). ii) Components below a certain level can be rejected. If components have been calculated using a correlation matrix, often those components with variance less than 1 are rejected. 
This might be dangerous, especially if one variable is almost independent of the others: it might then give rise to a component with variance less than 1, which does not mean that it is uninformative. iii) If the uncertainty (usually expressed as standard deviation) of the observations is known, then components with variances less than that can certainly be rejected. iv) If a scree plot (the plot of the eigenvalues, or variances of principal components, against their indices) shows an elbow, then components with variances below this elbow can be rejected. According to a cross-validation technique, one value of the observations (xij) is removed, then this value is predicted using the principal components, and this is done for all data points. If adding a component does not improve prediction power, then this component can be rejected. This technique is computationally intensive. PCA was described above as a technique, in step 118, for reducing the dimensionality of the learning set feature space, the learning set comprising the nearest neighbors of the target file. Based on these n closest feature vectors, a learning model is dynamically generated (130), using a well-known theoretical algorithm called Support Vector Model (SVM) for example, as will now be described, using for example the MCubix™ software developed by Diagnos Inc. SVM is a supervised learning algorithm that has proven itself an efficient and accurate text classification technique. Like other supervised machine learning algorithms, an SVM works in two steps. In the first step (the training step) it learns a decision boundary in input space from preclassified training data. In the second step (the classification step) it classifies input vectors according to the previously learned decision boundary. A single support vector machine can only separate two classes: a positive class (y=+1) and a negative class (y=−1). In the training step the following problem is solved. 
A set of training examples $S_\ell = \{(x_1,y_1), (x_2,y_2), \ldots, (x_\ell,y_\ell)\}$ of size ℓ from a fixed but unknown distribution p(x,y) describing the learning task is given. The term-frequency vectors $x_i$ represent documents and $y_i = \pm 1$ indicates whether a document has been labeled with the positive class or not. The SVM aims to find a decision rule $h: x \to \{-1,+1\}$ that classifies the documents as accurately as possible based on the training set $S_\ell$. A hypothesis space is given by the functions $f(x) = \mathrm{sgn}(w \cdot x + b)$, where w and b are parameters that are learned in the training step and which determine the class-separating hyperplane, shown in FIG. 2. Computing this hyperplane is equivalent to solving the following optimization problem: $$\text{minimize:} \quad V(w,b,\xi) = \frac{1}{2} w \cdot w + C \sum_{i=1}^{\ell} \xi_i$$ $$\text{subject to:} \quad y_i (w \cdot x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \ldots, \ell$$ The constraints require that all training examples are classified correctly, allowing for some outliers symbolized by the slack variables $\xi_i$. If a training example lies on the wrong side of the hyperplane, the corresponding $\xi_i$ is greater than 0. The factor C is a parameter that allows for trading off training error against model complexity. In the limit $C \to \infty$ no training error is allowed. This setting is called a hard margin SVM. A classifier with finite C is also called a soft margin Support Vector Machine. Instead of solving the above optimization problem directly, it is easier to solve the following dual optimization problem: $$\text{minimize:} \quad W(\alpha) = -\sum_{i=1}^{\ell} \alpha_i + \frac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell} y_i y_j \alpha_i \alpha_j \, x_i \cdot x_j$$ $$\text{subject to:} \quad \sum_{i=1}^{\ell} y_i \alpha_i = 0, \quad 0 \le \alpha_i \le C$$ All training examples with $\alpha_i > 0$ at the solution are called support vectors. The support vectors are situated right at the margin (see the solid circles and squares in FIG. 2) and define the hyperplane. 
The definition of the hyperplane by the support vectors is especially advantageous in high-dimensional feature spaces because a comparatively small number of parameters (the $\alpha_i$ in the sum above) is required. SVMs have been introduced within the context of statistical learning theory and structural risk minimization. In these methods one solves convex optimization problems, typically quadratic programs. Least Squares Support Vector Machines (LS-SVM) are reformulations of standard SVMs. LS-SVMs are closely related to regularization networks and Gaussian processes but additionally emphasize and exploit primal-dual interpretations. Links between kernel versions of classical pattern recognition algorithms, such as kernel Fisher discriminant analysis, and extensions to unsupervised learning, recurrent networks and control also exist. In order to build an LS-SVM model, two hyper-parameters are needed: a regularization parameter γ, determining the trade-off between fitting-error minimization and smoothness, and the bandwidth σ², at least in the common case of the RBF kernel. These two hyper-parameters are automatically computed by doing a grid search over the parameter space and picking the minimum. This procedure iteratively zooms in on the candidate optimum. Once the learning model is thus generated (130), in the predicting phase (300) a test set is built from the features of the target file (140), and the dimensionality of the test set feature space is reduced (142) as known in the art, by using a technique such as Principal Component Analysis (PCA) or Singular Value Decomposition (SVD), keeping the same number of significant variables (principal axes or singular axes) as used for the learning set, as described hereinabove. Then, the learning model generated in step 130 is applied to the test set, so as to determine a value corresponding to the target song (150). 
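Training an LS-SVM with an RBF kernel reduces to solving a single linear system, which is what makes it attractive here. A minimal regression sketch in NumPy (the data and the hyper-parameter values γ = 100, σ² = 0.5 are illustrative, and the grid search the patent describes is not shown):

```python
import numpy as np

def rbf(A, B, sigma2):
    """RBF kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / sigma2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma2)

def lssvm_fit(X, y, gamma, sigma2):
    """Train an LS-SVM regressor by solving the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma2) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                       # bias b, dual weights alpha

def lssvm_predict(Xnew, X, b, alpha, sigma2):
    """Predict with f(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf(Xnew, X, sigma2) @ alpha + b

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(40, 2))             # reduced feature vectors
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2           # stand-in for the hit scores
b, alpha = lssvm_fit(X, y, gamma=100.0, sigma2=0.5)
pred = lssvm_predict(X, X, b, alpha, sigma2=0.5)
print(np.max(np.abs(pred - y)))  # training residuals equal alpha/gamma
```

A grid search over (γ, σ²), as described above, would simply loop `lssvm_fit` over candidate pairs and keep the pair minimizing a validation error, zooming the grid around the current best.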
The rating of the target file is based on the test set and the learning set, the target file being assessed relative to the training set. A storing phase may further comprise storing the predicted values in a result database. The learning model is discarded after prediction for the target file (160), before the method is applied to another file to be evaluated (170). As new files (hit songs) appear in the database of reference files, the training set is rebuilt by updating the closest neighbours, and the hyper-parameters are automatically updated, resulting in a dynamic scoring method. As people in the art will appreciate, the present method allows automatic learning on a dynamic neighborhood. As exemplified hereinabove, the method may be used for pre-selecting songs in the context of a hit contest, for example, typically based on the popularity of the songs. Depending on the nature of the scale used for evaluation, the present adaptive method may be applied to evaluate a range of types of files (compression format, nature of files, etc.), with increased accuracy in highly non-linear fields, by providing a dynamic learning phase. Although the present invention has been described hereinabove by way of embodiments thereof, it may be modified without departing from the nature and teachings of the subject invention as defined in the appended claims. ## Claims 1. 
A method for automatic ranking of target files according to a predefined scheme, comprising the steps of: building a database of reference files already ranked according to the predefined scheme; for each target file: i) determining a neighborhood of the target file among the reference files in the database of reference files, and forming a training set comprising reference files of this neighborhood, versus which neighborhood as a whole the target file is to be assessed, wherein said step of forming a training set comprises extracting a feature vector of the target file and finding n closest neighbors of the feature vector of the target file among feature vectors in the database of reference files, and wherein said finding n closest neighbors comprises using one of: i) Euclidean distance, ii) cosine distance and iii) Jensen-Shannon distribution similarity; ii) building a test set from features of the target file; iii) dynamically generating a learning model from the training set, the learning model defining a correlation between the reference files in the training set and a rank thereof according to the predefined scheme; and iv) applying the learning model to the test set; whereby a rank corresponding to the target file is predicted according to the predefined scheme. 
2. The method of claim 1, further comprising storing the predicted rank in a result database. 3. The method of claim 1, wherein said step of building a database of reference files comprises collecting files previously ranked according to the predefined scheme, under a digital format; obtaining feature vectors of each of the collected files; and storing the feature vectors in a database of reference files. 4. The method of claim 3, wherein said step of building a database of reference files further comprises storing a rank, defined according to the predefined scheme, of each of the reference files in a score database. 5. 
The method of claim 3, wherein said step of obtaining feature vectors of each of the collected files comprises extracting, from the collected files, a number of features to yield reference feature vectors. 6. The method of claim 3, wherein said step of storing the feature vectors in a database of reference files comprises storing the feature vectors along with information about the corresponding reference files. 7. The method of claim 1, wherein said step of forming a training set comprising files from the database of reference files and building a test set from features of the target file further comprises reducing the dimensionality of the training set and reducing the dimensionality of the test set. 8. The method of claim 7, wherein said steps of reducing the dimensionality are done by using one of: i) Principal Component Analysis (PCA) and ii) Singular Value Decomposition (SVD). 9. The method of claim 7, wherein said steps of reducing the dimensionality are done by a non-linear regression technique. 10. The method of claim 7, wherein said steps of reducing the dimensionality are done by one of: Neural Networks, Support Vector Machines, Generalized Additive Model, Classification and Regression Tree, Multivariate Adaptive Regression Splines, Hierarchical Mixture of Experts and Supervised Principal Component Analysis. 11. The method of claim 1, wherein said step of dynamically generating a learning model comprises using closest neighbors of the target file in the database of reference files. 12. The method of claim 1, wherein said step of dynamically generating a learning model comprises using the n closest neighbors of the target file's feature vector among the feature vectors in the database of reference files. 13. The method of claim 1, wherein said step of dynamically generating a learning model comprises reducing the dimension of a set formed of the closest neighbors of the target file in the database of reference files. 14. 
The method of claim 1, wherein said step of dynamically generating a learning model comprises reducing the dimension of a set formed of the closest neighbors of the target file in the database of reference files. 15. The method of claim 1, wherein said step of dynamically generating a learning model comprises applying a Support Vector Model. 16. The method of claim 1, wherein said step of dynamically generating a learning model comprises applying a Support Vector Model to the n closest neighbors of the target file's feature vector in the database of reference files. 17. The method of claim 1, further comprising discarding the learning model after prediction for the target file. 18. The method of claim 1, wherein said step of building a training set comprises rebuilding the training set as new ranked files appear in the database of reference files. 19. The method of claim 1, wherein said step of forming a training set comprises finding new closest neighbors in the database of reference files as new reference files appear in the database of reference files. 20. The method of claim 1, wherein said step of forming a training set comprises updating the closest neighbors as new reference files appear in the database of reference files. 21. The method of claim 1, wherein said step of generating a learning model comprises automatically generating a learning model based on a dynamic neighborhood of the target file as represented by the training set. 22. The method of claim 1, wherein the target files are song files, the reference files are songs previously ranked according to the predefined scheme, and the target files are assessed according to the previously ranked songs.
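Claim 1's pipeline (extract a feature vector, find the n closest reference files under one of the named metrics, form a training set from that neighborhood, and predict a rank from it) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the toy dataset, the Euclidean metric, and the neighbor-averaged rank predictor (standing in for the Support Vector Model of claims 15-16) are all assumptions.

```python
import numpy as np

def predict_rank(target_vec, ref_vecs, ref_ranks, n=3):
    """Rank a target file against its neighborhood of ranked reference files.

    Sketch of the claimed method: find the n closest reference feature
    vectors (Euclidean distance, one of the metrics named in claim 1),
    form a training set from that neighborhood, and predict a rank from
    it.  Here the "learning model" is simply the mean rank of the
    neighbors -- a simplified stand-in for a trained SVM.
    """
    dists = np.linalg.norm(ref_vecs - target_vec, axis=1)
    neighborhood = np.argsort(dists)[:n]      # indices of the n closest refs
    training_ranks = ref_ranks[neighborhood]  # labels of the training set
    return float(training_ranks.mean())       # predicted rank

# Toy reference database: 2-D feature vectors with known ranks.
refs = np.array([[0.0, 0.0], [0.1, 0.1], [0.9, 0.9], [1.0, 1.0], [0.5, 0.5]])
ranks = np.array([1.0, 1.0, 5.0, 5.0, 3.0])

print(predict_rank(np.array([0.05, 0.05]), refs, ranks, n=2))  # near the rank-1 cluster
```

Because the model is rebuilt from the target's own neighborhood, it is naturally discarded after each prediction and regenerated as new reference files appear, matching claims 17-20.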
### Author Topic: P4-Day  (Read 1943 times) #### Victor Ivrii • Elder Member • Posts: 2563 • Karma: 0 ##### P4-Day « on: February 15, 2018, 05:08:16 PM » Find the general solution for the equation \begin{equation*} y''(t)-4y'(t)+5y(t)=2 e^{t}+ 8\cos(t). \end{equation*} #### Meng Wu • Elder Member • Posts: 91 • Karma: 36 • MAT3342018F ##### Re: P4-Day « Reply #1 on: February 15, 2018, 10:35:21 PM » $(a)$ $\\$ First find the complementary solution of the homogeneous equation: $$y''-4y'+5y=0$$ Characteristic equation: $$r^2-4r+5=0 \implies \cases{r_1=2+i\\r_2=2-i}$$ Thus $$y_c(t)=c_1e^{2t}\cos(t)+c_2e^{2t}\sin(t)$$ Now we need to find the particular solution: $$y_p(t)=Y_1(t)+Y_2(t)$$ We assume $Y_1(t)=Ae^t$, and there are no duplicates of $y_c(t)$; $\\$thus $Y_1'(t)=Ae^t$ and $Y_1''(t)=Ae^t$. $\\$ Substitute these values back into $y''-4y'+5y=2e^t$: $$Ae^t-4Ae^t+5Ae^t=2e^t \implies A=1$$ We assume $Y_2(t)=B\cos(t)+C\sin(t)$, and there are no duplicates of $y_c(t)$; $\\$thus $Y_2'(t)=-B\sin(t)+C\cos(t)$ and $Y_2''(t)=-B\cos(t)-C\sin(t)$. $\\$ Substitute these values back into $y''-4y'+5y=8\cos(t)$: $$-B\cos(t)-C\sin(t)+4B\sin(t)-4C\cos(t)+5B\cos(t)+5C\sin(t)=8\cos(t) \implies \cases{B=1\\C=-1}$$ Thus, $$y_p(t)=Y_1(t)+Y_2(t)=e^t+\cos(t)-\sin(t)$$ Therefore, the general solution is \begin{align}y(t)&=y_c(t)+y_p(t)\\&=c_1e^{2t}\cos(t)+c_2e^{2t}\sin(t)+e^t+\cos(t)-\sin(t)\end{align} $(b)$ $\\$ $$y'(t)=2c_1e^{2t}\cos(t)-c_1e^{2t}\sin(t)+2c_2e^{2t}\sin(t)+c_2e^{2t}\cos(t)+e^t-\sin(t)-\cos(t)$$ Setting $y(0)=0$ and $y'(0)=0$: $$\cases{c_1+1+1=0\\2c_1+c_2+1-1=0} \implies \cases{c_1=-2\\c_2=4}$$ Therefore, the solution of the IVP is $$y(t)=-2e^{2t}\cos(t)+4e^{2t}\sin(t)+e^t+\cos(t)-\sin(t)$$
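The solution above can be checked numerically: the sketch below plugs $y(t)=-2e^{2t}\cos t+4e^{2t}\sin t+e^t+\cos t-\sin t$ and its hand-computed derivatives back into $y''-4y'+5y$ and compares with the forcing $2e^t+8\cos t$ (standard library only).

```python
import math

def y(t):
    return -2*math.exp(2*t)*math.cos(t) + 4*math.exp(2*t)*math.sin(t) \
           + math.exp(t) + math.cos(t) - math.sin(t)

def yp(t):   # first derivative, computed by hand from y(t)
    return 10*math.exp(2*t)*math.sin(t) + math.exp(t) - math.sin(t) - math.cos(t)

def ypp(t):  # second derivative
    return 20*math.exp(2*t)*math.sin(t) + 10*math.exp(2*t)*math.cos(t) \
           + math.exp(t) - math.cos(t) + math.sin(t)

# Residual of y'' - 4y' + 5y against the right-hand side 2e^t + 8cos(t).
for t in [0.0, 0.5, 1.0, 2.0]:
    lhs = ypp(t) - 4*yp(t) + 5*y(t)
    rhs = 2*math.exp(t) + 8*math.cos(t)
    assert abs(lhs - rhs) < 1e-8

# Initial conditions of the IVP in part (b).
assert abs(y(0.0)) < 1e-12 and abs(yp(0.0)) < 1e-12
print("solution verified")
```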
#### Each page refresh generates a new CSRF token that results in a 419 Page Not Found - Laravel I encountered this problem a few days ago after I put my website into production. After login, register, or any other POST request it gives me a 419 Page Not Found error. On localhost everything works fine. It has already taken me more than 4 days of research and I couldn't come up with a solution. It's probably related to CSRF verification, but I have already tried every solution out there (unsuccessfully). The things I did: • after every form that has a POST method I've put @csrf • included <meta name="csrf-token" content="{{ csrf_token() }}"> in the head section • changed SESSION_DOMAIN in the .env file to my production domain • cleared the browser cache, followed by the commands: php artisan cache:clear php artisan route:clear php artisan view:clear php artisan config:clear php artisan view:cache php artisan route:cache • generated a new APP_KEY with php artisan key:generate • tried switching SESSION_DRIVER between file and database • checked the whole code for inline spacing before the <?php tag • gave permissions 777 to www-data for the whole folder (desperate act) The main thing I've noticed is that on localhost the CSRF token is generated once and stays the same after a page refresh, while on the web server it changes after each page refresh. It looks like the session can't hold this information, which results in the error.
Here is my .env file
APP_NAME=Laravel
APP_ENV=production
APP_KEY=base64:s15iIzuybt78V7zZ7cHqcwCRAr1h6YfEWPArlrcqW3A=
APP_DEBUG=false
APP_URL=http://mydomain.tk
LOG_CHANNEL=stack
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=dbname
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=file
SESSION_DOMAIN=http://mydomain.tk
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_FROM_NAME="${APP_NAME}"
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
The only solution that has worked was commenting out \App\Http\Middleware\VerifyCsrfToken::class in the web section of app/Http/Kernel.php. But that is not the fully working and safe solution I was looking for. So my question is: are there other approaches to debug this problem, or solutions that could help? Thanks a lot guys. If there is anything you need to know that I can provide, just ask.
# The concept of expansion of an information structure I would like your help to understand the concept of expansion of an information structure in the incomplete information game at pp. 6-9 of this paper. Let me summarise the game as described in the paper. There are $$N\in \mathbb{N}$$ players, with $$i$$ denoting a generic player. There is a finite set of states $$\Theta$$, with $$\theta$$ denoting a generic state. A basic game $$G$$ consists of • for each player $$i$$, a finite set of actions $$A_i$$, where we write $$A\equiv A_1\times A_2\times ... \times A_N$$, and a utility function $$u_i: A\times \Theta \rightarrow \mathbb{R}$$. • a full support prior $$\psi\in \Delta(\Theta)$$. An information structure $$S$$ consists of • for each player $$i$$, a finite set of signals $$T_i$$, where we write $$T\equiv T_1\times T_2\times ... \times T_N$$. • a signal distribution $$\pi: \Theta \rightarrow \Delta(T)$$. A decision rule of the incomplete information game $$(G,S)$$ is a mapping $$\sigma: T\times \Theta\rightarrow \Delta(A)$$ Expansion: Consider two information structures, $$S^1\equiv (T^1, \pi^1)$$ and $$S^2\equiv (T^2, \pi^2)$$. We say that $$S^*\equiv (T^*, \pi^*)$$ is a combination of $$S^1$$ and $$S^2$$ if • $$T_i^*=T_i^1\times T_i^2$$ $$\forall i$$. • $$\pi^*:\Theta \rightarrow \Delta(T^1\times T^2)$$ has $$\pi^1$$ and $$\pi^2$$ as marginals. An information structure $$S^*$$ is an expansion of an information structure $$S^1$$ if there exists an information structure $$S^2$$ such that $$S^*$$ is a combination of $$S^1$$ and $$S^2$$. My question: • The game, as it is described by the authors, seems to assume that, before receiving the signal $$T_i$$, each player $$i$$ knows nothing about what the realisation of the state will be. I call this the baseline level of information assumed.
(For example, in other contexts, one may assume that the state is a vector of size $$N\times 1$$ and, before receiving the signal $$T_i$$, each player $$i$$ knows the realisation of the $$i$$th component of such a vector. This would correspond to another kind of baseline level of information.) • Let $$\underline{S}$$ denote the information structure that is totally uninformative, i.e., one that does not add anything to the baseline level of information assumed (also called DEGENERATE at p.26 of the linked paper). In other words, $$\underline{S}$$ consists of (a) for each player $$i$$, a finite set of signals $$T_i$$, where we write $$T\equiv T_1\times T_2\times ... \times T_N$$. (b) a signal distribution $$\pi: \Theta \rightarrow \Delta(T)$$ such that $$\pi(\cdot|\theta)=\tilde{\pi}$$ $$\forall \theta \in \Theta$$ for some $$\tilde{\pi}\in \Delta(T)$$. In other words, the conditional probability is equal to the unconditional one, and our belief about the probability distribution of the state is not updated. Notice that there are many ways to characterise the uninformative information structure (just by varying $$T$$ and $$\tilde{\pi}$$). • Let $$\mathcal{S}$$ denote the collection of all possible information structures. More precisely, $$\mathcal{S}\equiv \{S \mid T \text{ is a separable metric space and } \pi:\Theta \rightarrow \Delta(T) \text{ is a probability measure on } (T,\mathcal{B}(T))\}$$ where $$\mathcal{B}(\cdot)$$ denotes the Borel sigma algebra. Note that $$\mathcal{S}$$ contains all possible ways to characterise the uninformative information structure. • Question: Can we show that, for a given $$\underline{S}$$, each $$S\in \mathcal{S}$$ is an expansion of $$\underline{S}$$ (including $$S=\underline{S}$$)? This seems to me to hold in light of Theorem 1 combined with the reading at p.26 of the paper "Now consider the case where the original information structure is degenerate (there is only one signal which represents the prior over the states of the world).
In this case, the set of Bayes correlated equilibria correspond to joint distributions of actions and states that could arise under rational choice by a decision maker with any information structure" You have to be careful: what you can say is that if $$\underline{S}$$ is given by $$(T^1,\pi^1)$$ and $$S$$ is any information structure, say given by $$(T^2,\pi^2)$$, then there exists an expansion $$S^*$$ of $$\underline S$$ given by $$(T^1\times T^2, \pi^1\cdot \pi^2)$$ where $$\pi^1\cdot \pi^2$$ is the product distribution of $$\pi^1$$ and $$\pi^2$$. Note that I have constructed the expansion, thus showing its existence. The information structures $$S$$ and $$S^*$$ are different, since their supports lie in different spaces, $$T^2$$ vs $$T^1\times T^2$$. However, they are informationally equivalent. The equivalence here is in terms of the distribution of posterior beliefs induced by $$S$$ and $$S^*$$. Alternatively, if $$\succsim$$ is the Blackwell partial order, then $$S\succsim S^*$$ and $$S^*\succsim S$$. Your intuition, however, is correct: any information structure is "essentially" an expansion of $$\underline S$$.
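The construction in the answer can be made concrete: given $\pi^1:\Theta\to\Delta(T^1)$ and $\pi^2:\Theta\to\Delta(T^2)$, the conditionally independent product $\pi^*(t_1,t_2\mid\theta)=\pi^1(t_1\mid\theta)\,\pi^2(t_2\mid\theta)$ is one valid combination, since it has $\pi^1$ and $\pi^2$ as marginals. The numbers below are made up for illustration.

```python
import numpy as np

# Two states, |T1| = 2 signals, |T2| = 3 signals (illustrative values).
pi1 = np.array([[0.8, 0.2],            # pi1(t1 | theta); rows are states
                [0.3, 0.7]])
pi2 = np.array([[0.5, 0.3, 0.2],       # pi2(t2 | theta)
                [0.1, 0.4, 0.5]])

# Product combination: pi*(t1, t2 | theta) = pi1(t1|theta) * pi2(t2|theta).
pi_star = np.einsum('oi,oj->oij', pi1, pi2)   # shape (states, |T1|, |T2|)

# pi* is a valid signal distribution and has pi1, pi2 as marginals,
# so the combined structure is an expansion of each component structure.
assert np.allclose(pi_star.sum(axis=(1, 2)), 1.0)
assert np.allclose(pi_star.sum(axis=2), pi1)   # marginal over T2 recovers pi1
assert np.allclose(pi_star.sum(axis=1), pi2)   # marginal over T1 recovers pi2
print("product combination has the required marginals")
```

When $\pi^1$ is degenerate (constant in $\theta$), this is exactly the expansion of $\underline{S}$ described in the answer.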
# Hypothetical mass-energy equivalence 2 questions: 1. Hypothetically, if in an imaginary universe the speed of light squared were 90% or 110% of what it is in our universe, would the math require another constant such as 1.1 or .9 next to it to make the math work (because of momentum requirements)? 2. Under a slightly different scenario, what if the observed $c$ were the same as in our universe, but because of some property of mass (different in the alternate universe), the math required a constant of 1.1 or .9 to work out. Is this a possibility in math, considering momentum and conservation of energy in this imaginary universe? ## 4 Answers All physically meaningful measurements are dimensionless: wavelengths, frequencies, masses, etc. are all measured as ratios to a standard quantity with the same units. Masses can all be expressed in electron masses; wavelengths can be expressed in comparison to the wavelength of a certain sodium absorption line. If the speed of light were up by a factor of 1.01 or 0.99, keeping all the rest of physics constant, we would need to study how the change in a single constant propagates among all the other predictions before we can assess whether the change is itself physically observable, or whether it is fixed by other physical constants. The answers are "not really, and no". Special relativity is based on the two postulates that (1) the laws of physics are the same in all inertial frames of reference, and (2) the speed of light in vacuum has the same value $c$ in all inertial frames of reference. The reason we express $c$ as $\approx 300,000$ km/s is that we chose the units kilometre and second, but we could just as well use units of length and time that make $c=1$. So you cannot just change $c$ on its own, since it would just rescale distance and time.
Normally when physicists consider different values of the fundamental constants they look at dimensionless constants, that is, values that do not have any units and hence cannot be rescaled by changing your system of units. So changing $c$ requires changing a few of the other constants, changing a fair bit of physics. However, let's ignore that part to get to the question - just assume we mess with the constants in the right way. Does the fact that light travels at $c$ do any work here? Apparently not: the only thing needed is that there is some signalling speed that is invariant in inertial reference frames. The causality goes the other way around: since photons are massless (or, alternatively, Maxwell's equations are Lorentz-invariant) light has to travel at the invariant speed. Relativity would still hold if space had a refractive index $n>1$ slowing light down. The first question is whether in a universe with a different value of the invariant speed $E=mc^2$ would have to change. The quick answer is no: in this universe $E=mc'^2$ where $c'$ is the changed speed (if you want to express this formula in terms of our normal $c$ you will need to add a factor in front of it, but now you are expressing the observed constant in terms of one from another universe - it doesn't make much sense, and you would have to do it everywhere $c'$ shows up in your equations). The reason is that the derivation (variants are found in all relativity textbooks) only makes use of how momentum and mass transform based on the invariant speed, not what value it has. It can just be treated as a symbol: there is no link to actual light or a particular value. (Strictly speaking, Einstein's original derivation was a lot about emitting photons and seems to make light much more important to the result than it is.) The second question is whether we could end up with an equation like $E=2mc^2$. The answer is no for the same reason. The math will not work out.
In particular, consider the energy-momentum formula $E^2=(mc^2)^2+(pc)^2$: for a particle at rest ($p=0$) it forces $E=mc^2$, while a relation like $E=0.9mc^2$ would give an imaginary momentum. So unless you want to postulate a universe with a really different physics you are stuck with $E=mc^2$. • Photons' being massless isn't equivalent to Maxwell's equations being Lorentz-invariant. There exists a generalization of Maxwell's equations with massive photons that's still Lorentz-invariant. It describes E&M inside of superconductors. – tparker Aug 16 '18 at 22:09 The speed of light is the distance that light covers during 1 second. This distance is 1 light-second, or exactly 299,792,458 meters by the definition of the meter. This speed is not some measured physical constant that could be different in a different frame or in a different universe. Instead, the speed of light is a predefined number that is always the same. No matter where you are, inside or outside a black hole or in a different universe, your local speed of light is always 1 light-second per second. On your second scenario, there is no property of mass that could change the relation. Energy and mass are not in a "relation"; mass simply is the local energy (the energy that remains in the rest frame of the object). For example, if you take a weightless box with ideal mirror walls keeping light inside, then the mass of this box (in natural units where $c=1$) equals the energy of the light. We add $c^2$ (a predefined number from the definition of the meter) to the formula only to convert kilograms to joules. In natural units the formula simply is $E=m$. 1) No, they would not need to use a constant factor to make the math work out. Assuming the properties of matter hold true, then their version of the speed of light would simply be their limiting speed of information transfer.
Their equations would look exactly the same as ours, except that they would use their measured value for the speed of light instead of ours (because why would they use or even know about ours?) They would still say that $E=mc^2$ 2) This is obviously a possibility of math. Unfortunately, the answer to your question about this scenario is a trivial one. If we establish that the physicists of an alternate universe measure the speed of light to be the same as ours but that there is some property of matter that makes their mass-energy equivalence come out to $E=kmc^2$ for some constant, $k$, then the math would necessarily allow for that. We don't say $E=mc^2$ because of some purely mathematical derivation that makes it impossible for the relation to be anything else. We use that equation because that is best supported by the scientific evidence. Whatever relation is best supported by evidence is also possible within mathematics. A different relation might mean that there are changes, large or small, to other theories and principles within physics, but the math has no issue with that. So your second scenario is trivial and works out because you told it to. Whatever physicists experimentally determine in that universe is what they determine. You told us to set up a hypothetical in which $E=0.9mc^2$ and asked if the math worked out. Well, if it were hypothetically the case that $E=0.9mc^2$, then the math would have to work out, otherwise we wouldn't be adhering to the hypothetical.
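The energy-momentum argument in the answers above can be checked directly: writing a hypothesized $E=k\,mc^2$ into $E^2=(mc^2)^2+(pc)^2$ gives $(pc)^2=(k^2-1)(mc^2)^2$, which is negative (imaginary momentum) for $k<1$ and nonzero for $k>1$, so only $k=1$ is consistent with a massive particle at rest. Units with $c=1$ follow one of the answers; the sketch is illustrative.

```python
def pc_squared(k, m=1.0, c=1.0):
    """(pc)^2 implied by E = k*m*c^2 together with E^2 = (mc^2)^2 + (pc)^2."""
    E = k * m * c**2
    return E**2 - (m * c**2)**2

assert pc_squared(1.0) == 0.0   # E = mc^2: consistent with a particle at rest
assert pc_squared(0.9) < 0.0    # k < 1: imaginary momentum
assert pc_squared(1.1) > 0.0    # k > 1: nonzero momentum, so not at rest
print("only k = 1 allows p = 0")
```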
# [IPython-User] heart disease? Comer Duncan comer.duncan@gmail.... Wed Mar 21 14:11:46 CDT 2012 Mike, I am using pre-.13. The only thing I have done recently is to reset my EDITOR env to TextWrangler, which works quite well in the console. However, it does not yet work in the notebook. Maybe it is just a network slowness thing... Comer On Wed, Mar 21, 2012 at 2:44 PM, Michael Hadmack <[email protected]> wrote: > I have also have some trouble with the Kernel dying repeatedly.  I'm not > sure what the cause was but sometimes when opening a notebook I would get a > 'kernel had died' message and after restarting the message returns within 20 > seconds.  Even shutting down the server and starting a new one did not seem > to help but eventually the problem would just stop.  I attributed it to > something with my machine being too slow (a 2006 Macbook).  I have not seen > this happen once I switched from 0.12 to 0.13 though. > > -Mike > > On Wed, Mar 21, 2012 at 8:15 AM, Comer Duncan <[email protected]> > wrote: >> >> Hi, >> >> Today, I am noticing on my macbook that when I run ipython notebook >> --profile=sympy that it takes up to a minute for chrome to light up >> with the dashboard in a tab.  So, when I select a particular notebook >> to run and run it, it takes a while for it to run (=> longer than >> usual and seemingly not due to mathjax being slow).  Well I could >> chalk that up to local issues on my machine, but have not seen that >> until today. And finally the kernel keeps dying.  I keep restarting it >> and this sequence goes on and on with the kernel restarted living for >> less than a minute.  The stated output on stdout mentions the kernel >> failed to respond to heartbeat.  Any idea as to why this may happen >> and when?  And what if anything to do about it? 
>> Comer
>> _______________________________________________
>> IPython-User mailing list
>> [email protected]
>> http://mail.scipy.org/mailman/listinfo/ipython-user
## Classical geometries in modern contexts. Geometry of real inner product spaces.(English)Zbl 1084.51001 Basel: Birkhäuser (ISBN 3-7643-7371-7/hbk). xii, 242 p. (2005). The volume under review can be seen as the third part of an opus magnum, the first two being [Geometrische Transformationen (B. I. Wissenschaftsverlag, Mannheim) (1992; Zbl 0754.51005)] and [Real geometries (B. I. Wissenschaftsverlag, Mannheim) (1994; Zbl 0819.51002)], in which the author’s aim is to present (i) models of classical geometries in a dimension-free setting, (ii) the relevance of functional equations in geometry, (iii) very general forms of characterizations of mappings under mild hypothesis (of the Mazur-Ulam, Alexandrov-Zeeman, Beckman-Quarles type), (iv) a very precise conceptual framework for Klein’s Erlangen programme, with constant emphasis on the interplay between geometries and groups of transformations. In Chapter 1, the author introduces the notion of a translation group associated to a real inner product space $$X$$, defines Euclidean and hyperbolic geometry over $$X$$ (in the case of hyperbolic geometry, we are presented with the Weierstrass-inspired model put forward by the author in [N. K. Artémiadis (ed.) et al., Proceedings of the 4th international congress of geometry, Thessaloniki, Greece, May 26–June 1, 1996. Athens: Aristotle University of Thessaloniki. 12–21 (1997; Zbl 0888.51020)], and presented in his textbook [Ebene Geometrie, Spektrum, Heidelberg (1997; Zbl 0870.51001)]), and proves an astonishing characterization, of a kind that brought the result of H.-C. Wang [Pac. J. Math. 1, 473–480 (1951; Zbl 0044.19602)] to the reviewer’s mind, of the Euclidean and hyperbolic geometries as the only geometries over $$X$$ satisfying rather general conditions involving the existence of a function that has only some of the properties the metric would have in geometries with a mildly special kind of translation group (first published by the author in [Publ. Math. 
63, 495–510 (2003; Zbl 1052.39022)]). Chapter 2 is devoted to an in-depth analysis of the models of Euclidean and hyperbolic geometry that are defined in the previous chapter, the emphasis being naturally on the hyperbolic case. Among the topics: determining the equations of lines, as defined in three different ways (by Blumenthal, Menger, and the author), hyperplanes, subspaces, equidistant surfaces, ends, parallelism, angles of parallelism, horocycles, the Cayley-Klein model, various characterizations of isometries and of translations. Chapter 3 is devoted to the sphere geometries of Möbius (first presented in [Aequationes Math. 66, 284–320 (2003; Zbl 1081.51500)]) and Lie, in which we encounter Poincaré’s model of hyperbolic geometry, the main notions of these geometries (such as Lie and Laguerre cycles, Lie and Laguerre transformations), with their remarkable characterization under the very “mild hypothesis” that they are bijections preserving the cycle contact relation (but not necessarily the negation of this relation), which was the subject of the author’s [Result. Math. 40, 9–36 (2001; Zbl 0995.51003)]. Chapter 4 looks at space-times and their groups of transformations, Minkowski space-time and Lorentz transformations, de Sitter’s world, Einstein’s cylinder world, again with many characterizations of the corresponding transformations under mild hypotheses, as well as a characterization of a general notion of Lorentz-Minkowski distance, and a theorem that uncovers the very strong connection existing between hyperbolic motions and Lorentz transformations. 
The author’s model of hyperbolic geometry may thus be said to be the unifying thread, and the various results proved in this book which rely very heavily on this model testify to the significance of the discovery of this model, which turns out to be not “just another model of hyperbolic geometry”, but one that allows, by the very fact that its point-set is that of Euclidean geometry or of Minkowski space-time, a fruitful comparison. The mathematical prerequisites are minimal – the rudiments of linear algebra suffice – and all theorems are proved in detail. Following the proofs does not involve more than following the lines of a computation, and the author makes every effort to avoid referring to a synthetic geometric understanding, given that he aims at attracting readers with a distaste for synthetic geometry, which, given the academic curricula of the past decades, represent the overwhelming majority of potential readers of any mathematical monograph. One of the lessons of this monograph is that there is a coordinate-free analytic geometry, which significantly simplifies computations and frees the mind from redundant assumptions. There is only minimal overlap with the first two volumes, the aim of the few repetitions being that of ensuring the volume’s independent readability. In the realm of synthetic (axiomatic) geometry, the dimension-free approach can be traced back to H. N. Gupta [Contributions to the axiomatic foundations of Euclidean geometry (Ph. D. Thesis, University of California, Berkeley) (1965)] and W. Schwabhäuser [J. Reine Angew. Math. 242, 134–147 (1970; Zbl 0199.55002)]. 
### MSC: 51-02 Research exposition (monographs, survey articles) pertaining to geometry 51B10 Möbius geometries 51B25 Lie geometries in nonlinear incidence geometry 51M05 Euclidean geometries (general) and generalizations 51M10 Hyperbolic and elliptic geometries (general) and generalizations 83A05 Special relativity 83C20 Classes of solutions; algebraically special solutions, metrics with symmetries for problems in general relativity and gravitational theory 39B52 Functional equations for functions with more general domains and/or ranges
# Navier-Stokes eqs. correspond to $F=m*a$ I have the (typical) Navier--Stokes system for an incompressible fluid: $$div(u)=0$$ $$\rho(u_t+u\cdot\nabla u)=-\nabla p+div(\nu\nabla u)+\rho g$$ A paper that I'm reading says that the term $$u_t+u\cdot\nabla u$$ is an acceleration. I can understand that $u_t$ (the derivative of the velocity $u$ with respect to time) is in fact an acceleration, but why is "$u_t+u\cdot\nabla u$" as a whole an acceleration? My second question is: why do the right-hand side terms $$-\nabla p+div(\nu\nabla u)+\rho g$$ represent the sum of the forces? • It would be useful for you to look at the Material Derivative – nluigi Jan 23 '18 at 9:23 • I find the integral formulations of conservation equations easier to understand than the differential formulations. You should find them in any good textbook on continuum mechanics and fluid dynamics. – Robin Jan 24 '18 at 9:25 $u_t$ on its own is the rate of change of velocity at a point. Fluid is flowing past that point, so the bit of fluid that the velocity refers to is constantly changing. In order to apply $\vec F=m\vec a$, you need to think of a fluid parcel. You want to consider the acceleration of, and the forces acting on, a little box of fluid, the boundaries of which move along with the flow. For example, if you have steady flow in a pipe, and the pipe diameter decreases, the fluid speeds up as it squeezes into the smaller pipe. Another way of saying that is that as a bit of fluid comes along, it accelerates. However, $u_t$ is zero everywhere (it's steady flow). $u \cdot \nabla u$ is the part of the acceleration that the fluid experiences due to moving to a new location.
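The narrowing-pipe example in the answer can be made numerical. For a hypothetical steady 1-D velocity field $u(x)=U_0(1+x/L)$ the local term $u_t$ vanishes, yet a fluid parcel still accelerates through the convective term $u\,\partial u/\partial x$; all numbers below are illustrative.

```python
U0, L = 2.0, 1.0           # illustrative inlet speed and length scale

def u(x):                  # steady 1-D velocity field: u_t = 0 everywhere
    return U0 * (1.0 + x / L)

def dudx(x):               # analytic spatial derivative of u
    return U0 / L

def material_accel(x):
    """D u / D t = u_t + u * du/dx; the local term u_t is zero here."""
    u_t = 0.0              # steady flow
    return u_t + u(x) * dudx(x)

# The parcel accelerates even though the velocity at each point is constant.
assert material_accel(0.0) == U0 * U0 / L            # = 4.0 at the inlet
assert material_accel(1.0) == 2.0 * U0 * U0 / L      # larger further downstream
print("steady flow, nonzero acceleration:", material_accel(0.0))
```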
# What is meant by productive goods ? What is meant by productive goods ? All such goods which are used in the production process are called productive goods.
## Algebra 1: Common Core (15th Edition) We first combine like terms: $3x +8=14$ We then cancel out addition by subtracting 8 on both sides, and then we finally get x alone by dividing by 3 on both sides. Thus, we obtain: $3x=6 \\\\ x=2$
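The two inverse operations in the solution above can be replayed in a few lines; the sketch simply mirrors the steps (subtract 8 from both sides, then divide both sides by 3).

```python
# Solve 3x + 8 = 14 by undoing each operation in turn.
lhs_constant, rhs = 8, 14
coefficient = 3

rhs = rhs - lhs_constant      # subtract 8 from both sides: 3x = 6
x = rhs / coefficient         # divide both sides by 3:     x = 2

assert x == 2
assert coefficient * x + lhs_constant == 14   # check in the original equation
print("x =", x)
```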
# Aerodynamic center The distribution of forces on a wing in flight is both complex and varying. This image shows the forces for two typical airfoils: a symmetrical design on the left, and an asymmetrical design more typical of low-speed designs on the right. This diagram shows only the lift components; the similar drag considerations are not illustrated. The aerodynamic center is shown, labeled "c.a." The torques or moments acting on an airfoil moving through a fluid can be accounted for by the net lift applied at some point on the airfoil, and a separate net pitching moment about that point whose magnitude varies with the choice of where the lift is chosen to be applied. The aerodynamic center is the point at which the pitching moment coefficient for the airfoil does not vary with lift coefficient (i.e. angle of attack), so this choice makes analysis simpler.[1] ${\displaystyle {dC_{m} \over dC_{L}}=0}$ where ${\displaystyle C_{L}}$ is the aircraft lift coefficient. Forces (lift/drag) can be summed up to act through a single point, the center of pressure, about which the sum of all moments equals zero. The location of the center of pressure, however, changes significantly with a change in angle of attack and is thus impractical for analysis. The forces and moment are therefore taken about the 25% chord position, the assumed aerodynamic center, where the moment has been found to be nearly constant with varying angle of attack. The concept of the aerodynamic center (AC) is important in aerodynamics; it is fundamental to the science of stability of aircraft in flight. Note that in highly theoretical/analytical work the aerodynamic center does vary slightly and changes location, but in most literature the aerodynamic center is taken at the 25% chord position. Conversely, this means that if one keeps the assumed AC fixed at 25% chord, the moment about 25% chord can be non-constant, depending on the airfoil.
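The working definition above (the point about which $dC_m/dC_L=0$) suggests a simple numerical procedure: fit the slope of $C_m$ versus $C_L$ measured about a reference point and shift by $c\,dC_m/dC_L$, following the article's small-angle formula $X_{AC}=X_{\mathrm{ref}}+c\,dC_m/dC_L$. The airfoil data and the sign convention of the generated data are fabricated for illustration, chosen to be self-consistent with that formula.

```python
import numpy as np

c = 1.0                 # chord length (illustrative)
x_ref = 0.0             # reference point for the moment data (assumption)
x_ac_true = 0.25 * c    # quarter-chord, as the article suggests

# Fabricated linear Cm(CL) data, consistent with the article's formula
# X_AC = X_ref + c * dCm/dCL (sign convention taken from the text).
CL = np.linspace(0.0, 0.8, 5)
Cm = -0.05 + ((x_ac_true - x_ref) / c) * CL

slope = np.polyfit(CL, Cm, 1)[0]   # least-squares estimate of dCm/dCL
x_ac = x_ref + c * slope           # recover the aerodynamic center
assert abs(x_ac - x_ac_true) < 1e-9
print("estimated aerodynamic center at", x_ac, "of chord")
```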
A large portion of cambered airfoils have non constant moments about the 25% chord position because the AC does vary slightly. For most analysis the non constant moment about the 25% chord position is not significant enough to warrant consideration, but is important to keep in mind. For symmetric airfoils in subsonic flight the aerodynamic center is located approximately 25% of the chord from the leading edge of the airfoil. This point is described as the quarter-chord point. This result also holds true for 'thin-airfoils'. For non-symmetric (cambered) airfoils the quarter-chord is only an approximation for the aerodynamic center. A similar concept is that of center of pressure. The location of the center of pressure varies with changes of lift coefficient and angle of attack. This makes the center of pressure unsuitable for use in analysis of longitudinal static stability. Read about movement of centre of pressure. ## Role of aerodynamic center in aircraft stability For longitudinal static stability: ${\displaystyle {dC_{m} \over d\alpha }<0}$     and    ${\displaystyle {dC_{z} \over d\alpha }>0}$ For directional static stability:   ${\displaystyle {dC_{n} \over d\beta }>0}$     and    ${\displaystyle {dC_{y} \over d\beta }>0}$ Where: ${\displaystyle C_{z}=C_{L}\cos(\alpha )+C_{d}\sin(\alpha )}$ ${\displaystyle C_{x}=C_{L}\sin(\alpha )-C_{d}\cos(\alpha )}$ For a force acting away from the aerodynamic center, which is away from the reference point: ${\displaystyle X_{AC}=X_{\mathrm {ref} }+c{dC_{m} \over dC_{z}}}$ Which for small angles ${\displaystyle \cos(\alpha )=1}$ and ${\displaystyle \sin(\alpha )=\alpha }$, ${\displaystyle \beta =0}$, ${\displaystyle C_{z}=C_{L}-C_{d}*\alpha }$, ${\displaystyle C_{z}=C_{L}}$ simplifies to: ${\displaystyle X_{AC}=X_{\mathrm {ref} }+c{dC_{m} \over dC_{L}}}$ ${\displaystyle Y_{AC}=Y_{\mathrm {ref} }}$ ${\displaystyle Z_{AC}=Z_{\mathrm {ref} }}$ General Case: From the definition of the AC it follows that ${\displaystyle 
X_{AC}=X_{\mathrm {ref} }+c{dC_{m} \over dC_{z}}+c{dC_{n} \over dC_{y}}}$ . ${\displaystyle Y_{AC}=Y_{\mathrm {ref} }+c{dC_{l} \over dC_{z}}+c{dC_{n} \over dC_{x}}}$ . ${\displaystyle Z_{AC}=Z_{\mathrm {ref} }+c{dC_{l} \over dC_{y}}+c{dC_{m} \over dC_{x}}}$ The Static Margin can then be used to quantify the AC: ${\displaystyle SM={X_{AC}-X_{CG} \over c}}$ where: ${\displaystyle C_{n}}$ = yawing moment coefficient ${\displaystyle C_{m}}$ = pitching moment coefficient ${\displaystyle C_{l}}$ = rolling moment coefficient ${\displaystyle C_{x}}$ = X-force ~= Drag ${\displaystyle C_{y}}$ = Y-force ~= Side Force ${\displaystyle C_{z}}$ = Z-force ~= Lift ref = reference point (about which moments were taken) c = reference length S = reference area q = dynamic pressure ${\displaystyle \alpha }$ = angle of attack ${\displaystyle \beta }$ = sideslip angle SM = Static Margin ## References 1. ^ Benson, Tom (2006). "Aerodynamic Center (ac)". The Beginner's Guide to Aeronautics. NASA Glenn Research Center. Retrieved 2006-04-01.
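The defining relation ${X_{AC}=X_{\mathrm{ref}}+c\,dC_m/dC_L}$ can be exercised numerically: given tabulated $(C_L, C_m)$ data taken about an arbitrary reference point, fit the slope $dC_m/dC_L$ and shift to the aerodynamic center. The sketch below is illustrative only (the function names and the synthetic data are mine, and the sign convention follows the article's formula, not any particular airfoil dataset):

```python
def slope(xs, ys):
    # least-squares slope of y against x
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

def aerodynamic_center(x_ref, chord, CL, Cm_ref):
    # X_AC = X_ref + c * dCm/dCL  (the article's convention)
    return x_ref + chord * slope(CL, Cm_ref)

# Synthetic data: lift acts at x_ac = 0.25c with constant Cm about the AC,
# so the moment about x_ref = 0.10c varies linearly with CL.
CL = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
Cm_ref = [-0.05 + cl * (0.25 - 0.10) for cl in CL]

print(round(aerodynamic_center(0.10, 1.0, CL, Cm_ref), 4))  # -> 0.25
```

With data generated from an AC at quarter chord, the fit recovers 0.25c; if $C_m$ about the reference point is already constant, the reference point itself is returned as the AC.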
Identity written as a sum of Möbius functions The following formula holds for $|x|<1$: $$x=\displaystyle\sum_{k=1}^{\infty} \dfrac{\mu(k)x^k}{1-x^k},$$ where $\mu$ denotes the Möbius function.
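The identity converges for $|x|<1$, so partial sums can be checked numerically. A small sketch (the Möbius function is computed by trial-division factorization; function names are mine):

```python
def mobius(n):
    # Möbius function via trial-division factorization
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # squared prime factor => mu(n) = 0
                return 0
            result = -result
        p += 1
    if n > 1:                   # one leftover prime factor
        result = -result
    return result

def lambert_sum(x, terms=80):
    # partial sum of  sum_{k>=1} mu(k) x^k / (1 - x^k)
    return sum(mobius(k) * x**k / (1 - x**k) for k in range(1, terms + 1))

for x in (0.5, -0.3):
    print(x, abs(lambert_sum(x) - x) < 1e-12)  # -> True for both
```

The tail of the series is bounded by a geometric series in $|x|$, so for $|x|\le 0.5$ eighty terms already agree with $x$ to machine precision.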
# The Unapologetic Mathematician

## Group homomorphisms

At last we come to the notion of a homomorphism of groups. These are really, in my view, the most important parts of the theory. They show up everywhere, and the structure of group theory is intimately bound up with the way homomorphisms work.

So what is a homomorphism? It’s a function from the set of members of one group to the set of members of another that “preserves the composition”. That is, a homomorphism $f:G\rightarrow H$ takes an element $g$ of $G$ and gives back an element $f(g)$ of $H$. It has the further property that $f(g_1g_2)=f(g_1)f(g_2)$. The product of $g_1$ and $g_2$ uses the composition from $G$, while the product of $f(g_1)$ and $f(g_2)$ uses the composition of $H$.

Let’s consider an example very explicitly: a homomorphism $f_1:S_3\rightarrow{\mathbb Z}_2$. Remember that $S_3$ is the group of rearrangements of 3 objects (I’ll use a, b, and c), while ${\mathbb Z}_2$ is the group of “addition modulo 2”.

$S_3$ → ${\mathbb Z}_2$:
$e$ → 0
$({\rm b}\,{\rm c})$ → 1
$({\rm a}\,{\rm b})$ → 1
$({\rm a}\,{\rm b}\,{\rm c})$ → 0
$({\rm a}\,{\rm c})$ → 1
$({\rm a}\,{\rm c}\,{\rm b})$ → 0

If we consider the permutations $({\rm b}\,{\rm c})$ and $({\rm a}\,{\rm b})$ in $S_3$, each one is sent to 1 in the group ${\mathbb Z}_2$, and 1+1 = 0 there. On the other hand, $({\rm b}\,{\rm c})({\rm a}\,{\rm b})=({\rm a}\,{\rm c}\,{\rm b})$, which is sent to 0. The composition of the images is the image of the composition. We can pick any two permutations from the table and see the same thing.

Another example: $f_2:{\mathbb Z}\rightarrow{\mathbb Z}$ with $f_2(n)=3n$. The homomorphism property says that $f_2(m+n)=f_2(m)+f_2(n)$, and indeed we see that $3(m+n)=3m+3n$.

Another: $f_3:{\mathbb R}^+\rightarrow{\mathbb R}_+^*$. By ${\mathbb R}^+$ I mean the real numbers with addition as composition, and by ${\mathbb R}_+^*$ I mean the positive real numbers with multiplication. I define $f_3(x)=2^x$.
The laws of exponents tell us that $2^{x+y}=2^x2^y$. As we continue we will see many more examples of homomorphisms. For now, there are a few definitions we will find useful later.

Recall from the discussion about functions that a surjection is a function that hits every point in its codomain at least once. A group homomorphism that is also a surjection we call an “epimorphism”. Similarly, an injection is a function that hits every point in its codomain at most once. A group homomorphism that is also an injection we call a “monomorphism”. A homomorphism that is both (that is, the function is a bijection) we call an “isomorphism”. In the above examples, $f_1$ is an epimorphism, $f_2$ is a monomorphism, and $f_3$ is an isomorphism.

If a homomorphism’s domain and codomain are the same group, as in $f_2$ above, we call it an “endomorphism” of the group. If it’s also an isomorphism we call it an “automorphism”. The homomorphism $f_2$ is not an automorphism, since it doesn’t hit any point that’s not a multiple of 3.

And finally, a few things to think about.

• Can you construct a homomorphism from $S_n$ to ${\mathbb Z}_2$ similar to $f_1$ above, but for other values of $n$?
• What homomorphisms can you construct from ${\mathbb Z}$ to $S_3$? to $S_4$? to an arbitrary group $G$?
• What homomorphisms can you construct from ${\mathbb Z}_3$ to $S_4$?

UPDATE: I just remembered that I left off another technical requirement. A homomorphism has to send the identity of the first group to the identity of the second. It usually doesn’t cause a problem, but I should include it to be thorough. It isn’t hard to verify that all the homomorphisms I mentioned satisfy this property too.

February 10, 2007 -

1. […] Okay, it’s been pointed out to me that what I was thinking of in my update to yesterday’s post was a little more general than group theory.
In the case of groups, preserving the composition is […] Pingback by Group homomorphisms erratum « The Unapologetic Mathematician | February 12, 2007 | Reply 2. […] spend a bit more time on: “images” and “kernels”. Let’s consider a homomorphism […] Pingback by Subgroups « The Unapologetic Mathematician | February 25, 2007 | Reply 3. Unlee I am missing something, f3 is not an isomorphism. For it to be an isomorphism, you need to change the codomain from R* to R+*, i.e. nonzero positive real numbers, with multiplication as composition. Comment by Fabien | February 28, 2007 | Reply 4. Ouch. I think I broke that when I recently went through to add TeX. Thanks for catching that. Comment by John Armstrong | February 28, 2007 | Reply 5. […] There is a special kind of function between rings, just like we have in groups. Given rings and , a function is called a homomorphism if it preserves all the ring […] Pingback by Ring homomorphisms « The Unapologetic Mathematician | March 31, 2007 | Reply 6. […] a logarithm because it satisfies the “logarithmic property”. Simply put, it’s a homomorphism of groups from the group of positive real numbers under multiplication to the group of all real numbers under […] Pingback by The Logarithmic Property « The Unapologetic Mathematician | April 15, 2008 | Reply 7. Update to your update: let f be a homomorphism G->G’, let e be the identity in G and e’ the identity in G’, and let a be in G. Then f(a)=f(ae)=f(a)f(e); premultiplying by the inverse of f(a) in G’ gives e’ = f(e). No need for the extra requirement. Comment by Tom S | April 15, 2010 | Reply 8. There are at least two different ways of composing permutations. The differences turn out not to be deep, but it is very helpful to know which method is being used. Could you describe your method of composing permutations? Here is an example from your text: (bc)(ab) = (acb)… how do you arrive at the answer (acb)? Thank you. Comment by dratman | April 27, 2012 | Reply 9. 
I compose them right-to-left, like functions. Comment by John Armstrong | April 27, 2012 | Reply
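The map $f_1$ above is the sign (parity) homomorphism, and with right-to-left composition, as described in the last comment, the homomorphism property can be checked exhaustively by machine. A small sketch, not from the original post (function names are mine):

```python
from itertools import permutations

def compose(p, q):
    # right-to-left, like functions: (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def parity(p):
    # number of inversions mod 2: 0 for even permutations, 1 for odd
    n = len(p)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return inversions % 2

S3 = list(permutations(range(3)))

# f1(pq) = f1(p) + f1(q) mod 2 for every pair: the homomorphism property
print(all(parity(compose(p, q)) == (parity(p) + parity(q)) % 2
          for p in S3 for q in S3))  # -> True
```

The same check works unchanged for $S_n$ with any $n$, which answers the first exercise in the post affirmatively.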
# Solving a second order ode

Gold Member

## Homework Statement: ##y''+y'\frac {1}{z}+y[\frac {z^2-n^2}{z^2}]=0##
## Relevant Equations: power series

let ##y= \sum_{k=-∞}^\infty a_kz^{k+c}##
##y'=\sum_{k=-∞}^\infty (k+c)a_kz^{k+c-1}##
##y''=\sum_{k=-∞}^\infty (k+c)(k+c-1)a_kz^{k+c-2}##
therefore, ##y''+y'\frac {1}{z}+y[\frac {z^2-n^2}{z^2}]=0##
=##[\sum_{k=-∞}^\infty [(k+c)^2-n^2)]a_k + a_k-2]z^{k+c} ##
it follows that, ##(k+c)^2-n^2)]a_k + a_k-2=0##
if ##k=0, →c^2-n^2=0, a_0≠0##
##c^2=n^2, →c=±n ##(roots)
now considering case 1, if ##c=n##, and given k=1, then ##a_1=0##...this is the part that i am not getting. i am ok with the rest of the steps...

BvU
Homework Helper
2019 Award
therefore, $$y''+y'\frac {1}{z}+y\Biggl [\frac {z^2-n^2}{z^2}\Biggr ]=0$$ $$\Leftrightarrow \ (?) \\ \sum_{k=-∞}^\infty \Biggl [ (k+c)^2-n^2) a_k + a_k-2\Biggr ] z^{k+c} = 0 \\ \mathstrut \\ \mathstrut \\$$ Doesn't seem right. I suppose you mean ##a_{k-2}##, not ##a_k - 2##. But: where is ##a_{k-1}## ?
can't get the scroll bar in the quote ! How come ?

Gold Member
yeah, it is supposed to be ##a_{k-2}##, i do not have ##a_{k-1} ## in my working, unless of course you want me to write all my steps...

BvU
Homework Helper
2019 Award
do not have ##\ a_{k-1}\ ## in my working
I noticed that and I wondered what made the ##\ y'\over z\ ## series disappear ?

BvU
Homework Helper
2019 Award
By the way, shouldn't it be ##\ a_{k+2}\ ## instead of ##\ a_{k-2}\ ## ?

BvU
Homework Helper
2019 Award
And never mind my blabbing about ##\ y'\over z\ ##, it actually seems to fall out.

BvU
Homework Helper
2019 Award
I suppose another : Never mind my blabbing about ##a_{k+2}## either. I should retreat and only speak up after some sensible thinking . Sorry, Chwala !
By the way, I wonder why you sum starting from ##\ -\infty\ ##. Most analyses start at 0 !

BvU
Homework Helper
2019 Award
Took me a while to recognize your equation .
Do you know what I mean ? chwala Gold Member Bvu sorry i was in a zoom meeting. I just used the summing formula from my university notes, ....i have ##a_{k-2}## and not ##a_{k+2}## Gold Member And never mind my blabbing about ##\ y'\over z\ ##, it actually seems to fall out. no worries, we are a community i really appreciate your input... PhDeezNutz and Delta2 Dr Transport Gold Member The best way to go about this is to multiply the equation by $z^2$ to eliminate any division first. The way you went about it is convoluted and not posting the intermediate steps isn't helping. Delta2 Gold Member I can solve the problem to the end....the only area that i do not understand is where i indicated, case 1, ...##k=1, a_1=0## this is the only area that i need help...how is this valid? (the reference that i am using is my undergraduate university notes, that i am trying to refresh on... Gold Member can we say, from ##[(k+c)^2-n^2)]a_k+a_{k-2}##=0, then if ## c=n## & ##k=1## then, ##(1+2n)a_1+a_{-1}=0## ##a_1+2na_1+a_{-1}=0## is this correct thinking... Gold Member The best way to go about this is to multiply the equation by $z^2$ to eliminate any division first. The way you went about it is convoluted and not posting the intermediate steps isn't helping. and can we solve this problem using laplace? Dr Transport Gold Member Series solution is still a valid way to go about it, but I as taught to not be dividing when using a series solution. It looks odd to me. BvU BvU Homework Helper 2019 Award Took me a while to recognize your equation . Do you know what I mean ? the reference that i am using is my undergraduate university notes I overlooked that one. Does it say there that you are usibg the Frobenius method to solve the Bessel equation ? (other link , link, solution with pictures, a whole chapter, ...). And you should really start your power series from ##k=0##, not from ##k=-\infty##. 
All this with just a little googling and without sitting down and seriously working out your core question ! Lazy me ... My own notes are still somewhere in the attic and also in my brain, but under four and a half decades of dust chwala and PhDeezNutz vela Staff Emeritus Homework Helper can we say, from ##[(k+c)^2-n^2)]a_k+a_{k-2}##=0, then if ## c=n## & ##k=1## then, ##(1+2n)a_1+a_{-1}=0## ##a_1+2na_1+a_{-1}=0## is this correct thinking... The problem is related to what the others have mentioned already a few times. The sum starts from ##k=0##, so the coefficient of the ##k=1## term is ##(1+2n)a_1##. There's no ##a_{-1}##. benorin Homework Helper Gold Member If ##a_k## is only defined for ##k=0,1,2,\ldots## don't you think that for the equation ##\left[ (k+c)^2-n^2\right] a_k+a_{k-2}=0## the only admissible values of ##k## are ##k=2,3,4,\ldots## ? For ##k=2,\text{ and }c=n## this would lead to ##a_2=\tfrac{a_0}{4(n+1)}## and [omitting steps] in general $$a_{2k}=\tfrac{(-1)^k a_0}{4^k(k!)^2 \binom{n+k}{k} }, \quad k\in\mathbb{Z}^+$$ and $$a_{2k+1}=\tfrac{(-1)^k a_{1}}{\prod_{j=1}^{k}\left[ (2j+1) (2n+2j+1)\right]}, \quad k\in\mathbb{Z}^+$$ Note: ##a_{2k+1}## is "not pretty" in terms of binomial coefficients, but probably looks ok in terms of double factorials. Just need initial conditions now? Edit: I messed up the formula for ##a_{2k}## first time around, fixed it! Last edited: chwala Gold Member I overlooked that one. Does it say there that you are usibg the Frobenius method to solve the Bessel equation ? (other link , link, solution with pictures, a whole chapter, ...). And you should really start your power series from ##k=0##, not from ##k=-\infty##. All this with just a little googling and without sitting down and seriously working out your core question ! Lazy me ... 
My own notes are still somewhere in the attic and also in my brain, but under four and a half decades of dust i guess the lecturer must have mentioned the method but i probably did not indicate the method on my working... Gold Member If ##a_k## is only defined for ##k=0,1,2,\ldots## don't you think that for the equation ##\left[ (k+c)^2-n^2\right] a_k+a_{k-2}=0## the only admissible values of ##k## are ##k=2,3,4,\ldots## ? For ##k=2,\text{ and }c=n## this would lead to ##a_2=\tfrac{a_0}{4(n+1)}## and [omitting steps] in general $$a_{2k}=\tfrac{(-1)^k a_0}{4^k(k!)^2 \binom{n+k}{k} }, \quad k\in\mathbb{Z}^+$$ and $$a_{2k+1}=\tfrac{(-1)^k a_{1}}{\prod_{j=1}^{k}\left[ (2j+1) (2n+2j+1)\right]}, \quad k\in\mathbb{Z}^+$$ Note: ##a_{2k+1}## is "not pretty" in terms of binomial coefficients, but probably looks ok in terms of double factorials. Just need initial conditions now? Edit: I messed up the formula for ##a_{2k}## first time around, fixed it! i am quite fine with the values ##k=2,3,4##... and how to find the solution. At the end we shall have a series solution of form ##y=y_1 + y_2##, alternating for ##k=2,4,6##.....and ##k=3,5,7##...my interest on this question is on the value of ##k=1##, i do not understand how ##a_1=0## but of course i may be missing something from the question as indicated by the responses... Gold Member The problem is related to what the others have mentioned already a few times. The sum starts from ##k=0##, so the coefficient of the ##k=1## term is ##(1+2n)a_1##. There's no ##a_{-1}##. I think i may be wrong with my limits, ##k## should start from ##0## and not ##-∞##, as indicated... Gold Member ...now considering case 1, if ##c=n##, and given ##k=1##, then ##a_1=0##...this is the part that i am not getting. i am ok with the rest of the steps... 
This is the part that i need understanding ...lets assume as you have put it that ##k## starts from ##0→+∞## then how is it that when ##k=1##, that ##a_1=0## or the other possibility is that my statement (my notes) is/are incorrect. haruspex Homework Helper Gold Member ...now considering case 1, if ##c=n##, and given ##k=1##, then ##a_1=0##...this is the part that i am not getting. i am ok with the rest of the steps... This is the part that i need understanding ...lets assume as you have put it that ##k## starts from ##0→+∞## then how is it that when ##k=1##, that ##a_1=0## or the other possibility is that my statement (my notes) is/are incorrect. Not sure that introducing c helps. Why not just write ##y=\Sigma_0a_kz^k## and allow that a0 and so could be zero? Then you can write equations for the z0 term, the z1 term, and a general equation for zk, k>1. The cases to be considered are then n=0, n=1, n=2...; a particular value of k is not a 'case'. You should find that ak=0 for k<n, an is unconstrained, and for k>n ak is some (varying) multiple of ak-2 Gold Member Gold Member i think going with the above attached example, i may have an idea as to why ##a_1=0##, by using the indicial equations (14) and (15) as my comparison, in my case, i have the indicial equation as, ##[(k+c)^2-n^2)]a_k + a_{k-2}=0## for some reason they chose to use the first part of the indicial equation... when ##k=0## it follows that ##c=±n##, we may as well choose to ignore the ##-c## and go for ##c=n## now for ##k=1##, we shall have; ##[(k+c)^2-n^2)]a_k=0## just like in equation(16) of attachment... and ##c=n## therefore, ##[(1+n)^2-n^2)]a_1=0## ##(1+2n)a_1=0## avoiding special case ##n=-0.5##, results into ##a_1=0##
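The recurrence in the thread, with ##c=n## and ##a_1=0##, is exactly the one generating the Bessel function ##J_n## that the linked references cover. As a numerical sanity check (assuming integer ##n\ge 0## and the conventional normalization ##a_0 = 1/(2^n\, n!)##, which the thread does not fix), the series built from ##a_k = -a_{k-2}/\bigl((k+n)^2-n^2\bigr)## can be compared against the standard series for ##J_n##; function names below are mine:

```python
from math import factorial

def J_from_recurrence(n, z, terms=40):
    # Frobenius recurrence: [(k+n)^2 - n^2] a_k + a_{k-2} = 0, with a_1 = 0,
    # so all odd coefficients vanish, as discussed in the thread.
    a = {0: 1.0 / (2**n * factorial(n)), 1: 0.0}
    for k in range(2, terms):
        a[k] = -a[k - 2] / ((k + n)**2 - n**2)   # = -a[k-2] / (k (k + 2n))
    return sum(a[k] * z**(k + n) for k in range(terms))

def J_series(n, z, terms=20):
    # standard series: J_n(z) = sum_m (-1)^m / (m! (m+n)!) (z/2)^(2m+n)
    return sum((-1)**m / (factorial(m) * factorial(m + n)) * (z / 2)**(2*m + n)
               for m in range(terms))

for n in (0, 1, 2):
    print(n, abs(J_from_recurrence(n, 1.5) - J_series(n, 1.5)) < 1e-12)
```

Both truncations keep the same powers of ##z##, so the two sums agree to machine precision, confirming benorin's closed form ##a_2 = -a_0/(4(n+1))## and its successors.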
What is the area of this regular hexagon?

Nov 24, 2015

Area $= \frac{1}{2} \cdot 48 \cdot 5\ cm^2 = 120\ cm^2$

Explanation:

We will use the area formula for a regular polygon: $Area = \frac{1}{2} \times$ perimeter $\times$ apothem.

Now, what is an apothem? The apothem (sometimes abbreviated as apo) of a regular polygon is a line segment from the center to the midpoint of one of its sides. Equivalently, it is the line drawn from the center of the polygon that is perpendicular to one of its sides.

Apothem $= 5\ cm$, Side $= 8\ cm$.

For a regular polygon of $n$ sides the perimeter is $n \cdot s = 6 \cdot 8 = 48$.

Finally, let's plug in: Area $= \frac{1}{2} \cdot 48 \cdot 5\ cm^2 = 120\ cm^2$.

Additionally, there are multiple ways to find the area of a hexagon:
1) Use a formula: if you know a side of a regular hexagon you can use this;
2) Use the apothem method.
For more on this visit: http://www.dummies.com/how-to/content/how-to-calculate-the-area-of-a-regular-hexagon.html

Nov 24, 2015

The area is approximately $86.6\ cm^2$.

Explanation:

As this hexagon is regular, you can divide it into $6$ triangles. Note that all those triangles are isosceles, and all interior angles of the hexagon are $120°$. As you see in the picture, each of those 6 "big" triangles can be divided into two small triangles with angles $30°$, $60°$ and $90°$, and we know the length of one of the sides: $a = 5\ cm$.

To compute the area of the small right-angle triangle, you need just the length of $b$, which you can get with $\tan$: $\tan(30°) = b/a$, so $b = 5\ cm \cdot \tan(30°) = 2.88675\ cm$.

This means that the area of the small right-angle triangle is $A_t = \frac{ab}{2} = \frac{25}{2}\tan(30°)\ cm^2 = 7.21688\ cm^2$.

There are $12$ equal triangles, so the area of the whole hexagon is $A = 12 \cdot A_t = 6 \cdot 25\tan(30°)\ cm^2 = 86.6025\ cm^2$.

Hope that this helped!
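Both answers are instances of the same regular-polygon formula, area = ½ · perimeter · apothem; they differ only in which measurements of the hexagon they take as given. A small sketch (the function name is mine):

```python
import math

def regular_polygon_area(n, apothem=None, side=None):
    # area = 1/2 * perimeter * apothem; when only one of side/apothem
    # is given, the other follows from side = 2 * apothem * tan(pi / n)
    if side is None:
        side = 2 * apothem * math.tan(math.pi / n)
    if apothem is None:
        apothem = side / (2 * math.tan(math.pi / n))
    return 0.5 * n * side * apothem

# First answer: takes apothem = 5 cm and side = 8 cm both as given
print(regular_polygon_area(6, apothem=5, side=8))      # -> 120.0

# Second answer: derives the half-side from apothem = 5 cm alone
print(round(regular_polygon_area(6, apothem=5), 4))    # -> 86.6025
```

The two results differ because an apothem of 5 cm and a side of 8 cm are not geometrically consistent for a regular hexagon; whichever measurement the figure actually gives determines which answer applies.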
# zbMATH — the first resource for mathematics

Le réseau $$L^2$$ d’un système holonome régulier. (The $$L^2$$-réseau of a regular holonomic system). (French) Zbl 0598.32014

The purpose of this paper is to define an “$$L^2$$-réseau” of a regular holonomic $${\mathcal D}_X$$-module on a smooth complex variety and give some applications of it. Let X be a smooth complex variety, Y a closed analytic subset of pure codimension in X and S a hypersurface in Y such that Y-S is non-singular. For a regular holonomic $${\mathcal D}_X$$-module $${\mathcal M}$$ whose support is contained in Y, we may define a canonical sub-$${\mathcal O}_X$$-module of $${\mathcal M}$$ associated to the $$L^2$$-extension. We call it the $$L^2$$-réseau and denote it by $$L^2(Y,{\mathcal M})$$. In particular, when $$Y=X$$ and $${\mathcal M}$$ has no non-trivial section supported in S, the $$L^2$$-réseau contains the réseau of Deligne. By using $$L^2(X,{\mathcal M})$$, the author discusses a condition in order that $${\mathcal M}$$ is generated by a “standard” distribution on X.

Reviewer: M. Muro

##### MSC:
32C15 Complex spaces
32C25 Analytic subsets and submanifolds
14F10 Differentials and other special sheaves; D-modules; Bernstein-Sato ideals and polynomials
32C36 Local cohomology of analytic spaces

Full Text:

##### References:
[1] [BBD] Beilinson, A., Bernstein, J., Deligne, P.: Faisceaux pervers. Astérisque 100 (1982) · Zbl 0536.14011
[2] [Br1] Brylinski, J.L.: Modules holonômes réguliers et filtration de Hodge II. Analyse et topologie sur les espaces singuliers II–III. Astérisque 101-102 (1983)
[3] [Br2] Brylinski, J.L.: La classe fondamentale d’une variété algébrique engendre le D-module qui calcule sa cohomologie d’intersection (d’après M. Kashiwara). Système différentiel et singularités. Astérisque 130 (1985)
[4] [B1] Barlet, D.: Familles analytiques de cycles et classes fondamentales relatives. Lect.
Notes Math. 807 (1980) · Zbl 0434.32011
[5] [B2] Barlet, D.: Fonctions de type trace. Ann. Inst. Fourier (Grenoble) 33, 2 (1983) · Zbl 0498.32002
[6] [D] Deligne, P.: Equations différentielles à points singuliers réguliers. Lect. Notes Math. 163 (1970)
[7] [HL] Herrera, M., Lieberman, D.: Residues and principal values on complex spaces. Math. Ann. 194 (1971) · Zbl 0224.32012
[8] [K1] Kashiwara, M.: The Riemann-Hilbert problem for holonomic systems. Publ. RIMS 20 (1984)
[9] [K2] Kashiwara, M.: Regular holonomic D-modules and distributions on complex manifolds. (Preprint (1984); à paraître aux Advanced Studies in Pure Math.)
[10] [KK] Kashiwara, M., Kawai, T.: On holonomic systems of microdifferential equations III: systems with regular singularities. Publ. RIMS (Kyoto Univ.) 17 (1981) · Zbl 0505.58033
[11] [L] Lelong, P.: Intégration sur un ensemble analytique complexe. Bull. Soc. Math. Fr. 85 (1957) · Zbl 0079.30901
[12] [M] Mebkhout, Z.: Local cohomology of an analytic space. Publ. RIMS 12 [Suppl] (1977) · Zbl 0372.32007
[13] [R] Ramis, J.P.: Variations sur le thème “GAGA”. Lect. Notes Math. 694 (1978) · Zbl 0398.32008

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
## Real exponential field with restricted analytic functions: $\mathbb R_{an, exp, log}$ has quantifier elimination, but $\mathbb R_{an, exp}$ does not.

At a talk some time ago a result was presented, which I believe originates from: van den Dries, Lou; Miller, Chris, On the real exponential field with restricted analytic functions, Isr. J. Math. 85, No. 1-3, 19-56 (1994). ZBL0823.03017.

At some point it was mentioned that $$\mathbb R_{an,exp,log}$$ admits quantifier elimination while $$\mathbb R_{an,exp}$$ does not. Here $$\mathbb R_{an,exp}$$ is the theory of the (ordered) real exponential field with function symbols for all restricted analytic functions. Then of course $$\mathbb R_{an,exp,log}$$ is just adding a function symbol for logarithms. Someone in the audience remarked that $$log(x)$$ (or more precisely, its graph) is quantifier-free definable by $$x = exp(y)$$. Then a fairly simple formula was presented to show why you really need $$log$$ as a function symbol for quantifier elimination, and here is my question: I just cannot remember or reconstruct that formula.

So what would be a simple example of some formula in this setting that is not equivalent to a quantifier-free formula in $$\mathbb R_{an,exp}$$? I am probably missing something obvious here, but now it’s haunting me.

## Determine all idempotents, nilpotents, and units in $F[x]/\langle h\rangle$, where $F$ is a field, $h=x^2-x$.

Problem: Determine all idempotents, nilpotents, and units in $$F[x]/\langle h \rangle$$, where $$F$$ is a field, $$h=x^2-x$$.

I know $$F[x]/\langle h \rangle = \{a_0 + a_1t \mid a_i \in F, t^2-t=0 \}$$. So first, starting with idempotents: if $$z \in F[x]/\langle h \rangle$$ is an idempotent, then $$z=a_0+a_1t$$ satisfies $$z^2=z \rightarrow a_0^2+(2a_0a_1+a_1^2)t=a_0+a_1t$$ (using $$t^2=t$$), which implies that $$a_0=0$$ or $$1$$. For the nilpotents, if there is some $$k \geq 1$$ such that $$z^k=0$$, I’m not sure what can be said about $$z$$. Hints appreciated.
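For the idempotent/nilpotent/unit question, a brute-force check over a small finite field is a useful sanity test before proving the general statement. The ring $F[x]/\langle x^2-x\rangle$ is spanned by $1$ and $t$ with $t^2=t$, so elements are pairs $(a_0, a_1)$; the sketch below (my own, with $F=\mathbb{Z}/5$) enumerates everything. Since the algebra is 2-dimensional, the minimal polynomial of any element has degree at most 2, so $z$ is nilpotent iff $z^2=0$:

```python
def mul(a, b, p):
    # (a0 + a1 t)(b0 + b1 t) with t^2 = t, coefficients mod p
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0) % p, (a0 * b1 + a1 * b0 + a1 * b1) % p)

def classify(p):
    elems = [(i, j) for i in range(p) for j in range(p)]
    one, zero = (1, 0), (0, 0)
    idempotents = [z for z in elems if mul(z, z, p) == z]
    nilpotents = [z for z in elems if mul(z, z, p) == zero]  # dim 2: z^2 = 0 suffices
    units = [z for z in elems if any(mul(z, w, p) == one for w in elems)]
    return idempotents, nilpotents, units

idem, nil, units = classify(5)
print(len(idem), nil, len(units))  # 4 idempotents, only 0 is nilpotent, 16 units
```

The counts match the structural answer: the CRT isomorphism $F[x]/\langle x^2-x\rangle \cong F\times F$ gives exactly four idempotents ($0$, $1$, $t$, $1-t$), no nonzero nilpotents, and $(|F|-1)^2$ units.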
## Locality of a tensor product with a fixed field extension Given a strict (not necessarily finite) field extension $$F \subset K$$, does there always exist a field extension $$F \subset L$$ such that $$K \otimes_F L$$ is not local? ## Is the bits field a unique representation of the target? The bits field is the compact representation of the target. Example: bits: 1d00ffff target: 00ffff0000000000000000000000000000000000000000000000000000 bits: 1cfff00 target: ffff0000000000000000000000000000000000000000000000000000 But these two actually represent the same number. int(target) -> 26959535291011309493156476344723991336010898738574164086137773096960 for both of the above targets. What (if anything) makes bits a unique representation of the target? ## Is there a functor which is equivalent to discriminant of number field? Let $$K$$ be a number field, i.e. a finite extension of $$\mathbb{Q}$$. The ring of integer $$O_K$$ is a free $$\mathbb{Z}$$-module. Let $$\{ a_1, \cdots , a_n\}$$ be a integral basis of $$O_K$$. Then, $$\Delta_{K/ \mathbb{Q}} = \det (\mathrm{Tr}(a_ia_j)_{i,j})$$ is independent of choice of integral basis. We call $$\Delta_{K/ \mathbb{Q}}$$ a discriminant of number field $$K$$ over $$\mathbb{Q}$$. My question is: Is there some categories $$C,D$$ and a functor $$F \colon C \to D$$ such that you have a simple way to get the discriminant $$\Delta_{K/ \mathbb{Q}}$$ from an object $$F(K)$$? I want $$F$$ to be a canonical one. Thanks. ## Reading text from MetaInfo field For research purposes, I’m trying to read the text in a Sharepoint database MetaInfo field which is stored as tCompressedBinary(varbinary(max)). In order to read this, I tried the following solution which I found online: select top 4 cast(cast(MetaInfo as varbinary(2048)) as varchar(2048)) from AllDocs where Extension = 'pdf' But this returned “¨©01\f for all four selected fields. 
If I run the following query: select top 4 MetaInfo from AllDocs where Extension = 'pdf' It returns binary fields which start with : a8a930310c000000 (and continue like this). Do you know how to turn the info in the MetaInfo table to a readable string? ## A duplicate field name “StartDate” was found in SharePoint 2013 You cannot vote on your own post 0 After completion of Migration from SharePoint 2010 to SP2013, didn’t see any timer jobs under Job Definition. So we have deactivated the feature and activated at SiteCollection Level then able to see the Custom Timer Job Under Timer Job Definition, Except One Custom Timer Job for that we have deactivated and while activating the feature at web level getting the error A duplicate field name “StartDate” was found in SharePoint 2013. In SharePoint 2010 there is no issue, but after migration to SharePoint 2013 getting A duplicate field name “StartDate” was found. We have two Custom List Definitions (Activity Taks and Workflow Task) created the StartDate Column using below field attributes in Schema.xml And also have many custom ContentTypes referred the StartDate columns like below mentioned in the Elments.xml file This StartDate Field had used in the code in many places. Can you please help on this issue how to resolve this issue. ## How do I remove the title field from a node’s custom view mode In my Drupal 8 website I created a custom view mode through the UI. I use it to output nodes using a view. In the view result I do not want the titles of the nodes to appear. The UI (content type > manage display) does not seem to give the option to hide the title. How can I hide the title in a specific view mode for an entity via the UI or via custom code? ## Why central isogeny of reductive group over genereal field F map maximal F split torus onto a maximal split F torus let $$f$$ be an central isogeny of reductive groups over a field F, why $$f$$ map a maximal F split torus onto a maximal split F torus. 
## I have three models HomePage, Callout, FeatureContent like below: class FeatureTip(models.Model): feature_tip_title = models.CharField(max_length=120, null=True, blank=False) feature_tip_description = models.TextField() def __str__(self): return self.feature_tip_title class Citie(models.Model): name = models.CharField(max_length=120, null=True, blank=False) description = models.TextField() def __str__(self): return self.name class HomePage(models.Model): header = models.CharField(max_length=120, null=True, blank=False) cities = models.ManyToManyField(Citie) featured_tips = models.ManyToManyField(FeatureTip) def __str__(self): return 'Home Page' class Callout(models.Model): header = models.CharField(max_length=120, null=True, blank=False) home_page = models.ForeignKey(HomePage,on_delete=models.CASCADE, null=True, blank=False) def __str__(self): return self.header def get_city(self): return self.cities.all() class FeatureContent(models.Model): title = models.CharField(max_length=120, null=True, blank=False) home_page = models.ForeignKey(HomePage, on_delete=models.CASCADE, null=True) def __str__(self): return self.feature_article_title_en class CalloutInline(admin.StackedInline): model = Callout fields = ['header'] extra = 4 max_num = 4 def get_queryset(self, request): HomePage.objects.filter(name="Eminem") class FeatureContentInline(admin.StackedInline): model = FeatureContent fields = ['title'] extra = 1 max_num = 1 class HomePageAdmin(admin.ModelAdmin): filter_horizontal = ['cities', 'featured_tips'] inlines = [CalloutInline, FeatureContentInline]
436 views

The question is followed by two statements I and II. Mark

1. if the question can be answered by any one of the statements alone, but cannot be answered by using the other statement alone.
2. if the question can be answered by using either statement alone.
3. if the question can be answered by using both the statements together, but cannot be answered by using either statement alone.
4. if the question cannot be answered even by using both the statements together.

Three professors A, B and C are separately given three sets of numbers to add. They were expected to find the answers to $1 + 1, 1 + 1 + 2,$ and $1 + 1$ respectively. Their respective answers were $3, 3$ and $2.$ How many of the professors are mathematicians?

1. A mathematician can never add two numbers correctly, but can always add three numbers correctly.
2. When a mathematician makes a mistake in a sum, the error is $+1$ or $–1.$

I) A mathematician can never add two numbers correctly, but can always add three numbers correctly.
Here A got an incorrect answer for 2 numbers, so A is a mathematician.
B got an incorrect result with 3 numbers, so B is not a mathematician.
C got a correct result with 2 numbers, so C is not a mathematician.

Ans A) if the question can be answered by any one of the statements alone, but cannot be answered by using the other statement alone.

by 5.1k points

### 1 comment

B and C are not mathematicians. But there is no proof of A being a mathematician: even non-mathematicians can add 2 numbers wrong. So, the answer must be D.
Seminars Research Seminar 8 Date/Time: 16.09.2021, 12:00 pm. Green-Lazarsfeld property $N_p$ for Hibi rings Dharm Veer Chennai Mathematical Institute. 16-09-21 Abstract Let $L$ be a finite distributive lattice. By Birkhoff's fundamental structure theorem, $L$ is the ideal lattice $\MI(P)$ of its subposet $P$ of join-irreducible elements. Write $P=\{p_1,\ldots,p_n\}$ and let $R=K[t,z_1,\ldots,z_n]$ be a polynomial ring in $n+1$ variables over a field $K.$ The {\em Hibi ring} associated with $L$, denoted by $R[L]$, is the subring of $R$ generated by the monomials $u_{\alpha}=t\prod_{p_i\in \alpha}z_i$ where $\alpha\in L$. In this talk, we show that a Hibi ring satisfies property $N_4$ if and only if it is a polynomial ring or it has a linear resolution. Therefore, it satisfies property $N_p$ for all $p\geq 4$ as well. Moreover, we show that if a Hibi ring satisfies property $N_2$, then its Segre product with a polynomial ring in finitely many variables also satisfies property $N_2$. *Zoom details.*
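The generators $u_\alpha$ of a Hibi ring are indexed by the order ideals (down-sets) of the poset $P$; by Birkhoff's theorem these form the distributive lattice $L$. A small illustrative sketch, not tied to the talk (brute-force enumeration, function name mine):

```python
from itertools import combinations

def order_ideals(elements, less_than):
    # a subset I is an order ideal if q in I and p < q imply p in I
    ideals = []
    for r in range(len(elements) + 1):
        for subset in combinations(elements, r):
            s = set(subset)
            if all(p in s for q in s for p in elements if less_than(p, q)):
                ideals.append(frozenset(s))
    return ideals

# chain p1 < p2 < p3: the ideals are the 4 "prefixes" of the chain
chain = order_ideals([1, 2, 3], lambda p, q: p < q)
# antichain on 3 elements: every subset is an ideal, 2^3 of them
antichain = order_ideals([1, 2, 3], lambda p, q: False)
print(len(chain), len(antichain))  # -> 4 8
```

Each ideal $\alpha$ then yields the monomial generator $u_\alpha = t\prod_{p_i\in\alpha} z_i$ of the abstract; a chain gives a polynomial ring (Segre-product-like structure), while larger posets produce genuine relations among the $u_\alpha$.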
# Quantitative MO theory

The chemist's qualitative picture of molecular orbitals can be rigorously derived from a quantum chemical approach. In this section we give a very brief outline of the theory.

## Schrödinger equation

In 1925 Erwin Schrödinger and Werner Heisenberg independently formulated a general quantum theory. Although the two formulations are mathematically equivalent, Schrödinger presented his theory in terms of partial differential equations and, within this framework, the energy of an isolated molecule can be obtained by the solution of the Schrödinger equation. In its time-independent form, this can be written as:

${\displaystyle {\hat {H}}\Psi =E\Psi }$

where ${\displaystyle {\hat {H}}}$ is the Hamiltonian operator, ${\displaystyle \Psi }$ is the wavefunction, and E is the energy of the system relative to the state in which the nuclei and electrons are infinitely separated and at rest.

The masses of the nuclei are much larger and their velocities much smaller than those of the electrons, so it is possible to simplify the solution of the Schrödinger equation by separating it into two parts, one describing the motions of the electrons in a field of fixed nuclei and the other describing the motions of the nuclei. This is known as the adiabatic or Born-Oppenheimer approximation. Molecular orbital theory is concerned with finding approximate solutions to the first part, that is, the electronic Schrödinger equation:

${\displaystyle {\hat {H}}^{e}\Psi ^{e}=E^{e}\Psi ^{e}}$

Each quantity is implicitly a function of the nuclear co-ordinates.

## The orbital approximation

The orbital approximation simplifies the above equation by assuming that each electron is associated with a separate one-electron wavefunction or spin orbital, ${\displaystyle \chi }$.
Thus, Hartree proposed that the wavefunction could be expressed simply as a product of spin orbitals, one for each electron: ${\displaystyle \psi =\chi _{1}(1)\chi _{2}(2)...\chi _{n}(n)\,\!}$

## The LCAO approximation

Each spin orbital is actually a product of a spatial function, ${\displaystyle \chi _{i}(x,y,z)\,\!}$, and a spin function, α or β. The spatial molecular orbitals, ${\displaystyle \Phi _{i}\,\!}$, are usually expressed as linear combinations of a finite set of known one-electron functions. In the simplest case where these functions take the form of the atomic orbitals of the constituent atoms, this expansion is called a linear combination of atomic orbitals (LCAO): ${\displaystyle \Phi _{i}=c_{1i}\chi _{1}+c_{2i}\chi _{2}+...c_{Ni}\chi _{N}\,\!}$

Qualitatively, this is like saying that the two molecular orbitals in H2 are linear combinations of the 1s atomic orbitals:

Φ1 = 0.5χa + 0.5χb

Φ2 = 0.5χa - 0.5χb

## One-electron approximations

The simplest way to solve the Schrödinger equation is to treat each electron in isolation from the rest. This "one-electron approximation" leads to Hückel theory and Extended Hückel theory.
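The H2 example above can be reproduced with a minimal one-electron (Hückel-type) calculation. In this sketch the values of α (Coulomb integral) and β (resonance integral) are hypothetical, and with proper normalization the LCAO coefficients come out as ±1/√2 ≈ 0.707 rather than the equal-weight 0.5 shown qualitatively above:

```python
import numpy as np

# Minimal Hueckel-style sketch for H2 (illustrative; alpha and beta are
# the usual Coulomb and resonance integrals, hypothetical values in eV).
alpha, beta = -13.6, -3.0

# One-electron Hamiltonian in the basis {chi_a, chi_b} of 1s orbitals.
H = np.array([[alpha, beta],
              [beta,  alpha]])

E, C = np.linalg.eigh(H)   # MO energies; columns of C are LCAO coefficients

# Bonding MO:     E[0] = alpha + beta, coefficients ~ (0.707,  0.707)
# Antibonding MO: E[1] = alpha - beta, coefficients ~ (0.707, -0.707)
print(E)
```

Diagonalizing the 2x2 matrix is exactly the quantitative version of the qualitative Φ1/Φ2 picture: the eigenvectors are the symmetric and antisymmetric combinations of χa and χb.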
# Does permuting indices of a cocycle leave the Cech cohomology class the same?

Consider a space $X$, $\{U_i\}_i$ an open covering, and $\mathcal{F}$ a sheaf on $X$. Consider $(c_{i_0,\ldots,i_n})$ a Cech $n$-cocycle. It is a theorem that every Cech $n$-cocycle is cohomologous to an alternating one, i.e. a cocycle $(d_{i_0,\ldots,i_n})$ such that $d_{i_0,\ldots,i_k,i_{k+1},\ldots, i_n}=-d_{i_0,\ldots,i_{k+1},i_{k},\ldots, i_n}$ (switching two indices changes the sign). Define $c'_{i_0,\ldots,i_n}=-c_{i_0,\ldots,i_{k+1},i_k,\ldots,i_n}$ (i.e. $c$ with two indices switched and the sign changed). Is it true that $c'$ is a cocycle? And if so, is it cohomologous to $c$? Of course this is true in the alternating case. It seems there should be a way to do this using the fact that every cocycle is cohomologous to an alternating one, but I am getting stuck on the details. Any thoughts or suggestions would be much appreciated.
# Is this possible in GR: $g_{ab}g^{ab}=1$? [duplicate]

Metric tensor multiplied by its inverse. I always see this with different indices. Since $g_{ab}g^{cb}=\delta_a^c$ is the identity matrix, taking the trace gives $g_{ab}g^{ab}=D$ in a $D$-dimensional spacetime.

• And spacetimes are defined as $D \geq 2$, so it won't be 1 – Slereah Jan 5 '18 at 6:38
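The trace identity is quick to check numerically for the $D=4$ Minkowski metric:

```python
import numpy as np

# Check g_ab g^ab = D for the Minkowski metric in D = 4 (signature -+++).
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)              # g^ab is the matrix inverse of g_ab

trace = np.einsum("ab,ab->", g, g_inv)  # full contraction g_ab g^ab
print(trace)  # 4.0
```

The same contraction gives $D$ for any invertible metric, since $g_{ab}g^{ab} = \operatorname{tr}(\delta) = D$.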
# Why doesn't this simpler teleportation idea work?

This circuit: The first (upper) qubit is the one we want to teleport, so it could start in any state, call it $$\alpha|0\rangle+\beta|1\rangle$$. Our goal is to teleport it to the third (bottom) qubit. After entangling the second and third qubits, we have $$\frac{1}{\sqrt2}\Big(\alpha(|0\rangle\otimes(|00\rangle+|11\rangle))+\beta(|1\rangle\otimes(|00\rangle+|11\rangle))\Big)$$ After applying CNOT from the first to the second qubit, we have $$\frac{1}{\sqrt2}\Big(\alpha(|0\rangle\otimes(|00\rangle+|11\rangle))+\beta(|1\rangle\otimes(|10\rangle+|01\rangle))\Big)$$ At this point, we measure the second qubit. If it's $$|0\rangle$$, then we end up in the state $$\alpha(|0\rangle\otimes|00\rangle)+\beta(|1\rangle\otimes|01\rangle)=\alpha|000\rangle+\beta|101\rangle$$ If it's $$|1\rangle$$, then we end up in $$\alpha(|0\rangle\otimes|11\rangle)+\beta(|1\rangle\otimes|10\rangle)=\alpha|011\rangle+\beta|110\rangle$$ In this case, we apply X to the third qubit, yielding $$\alpha|010\rangle+\beta|111\rangle$$ In either case, if we have access to the third qubit only, it appears to be in the state $$\alpha|0\rangle+\beta|1\rangle$$, which is what we want. What's wrong with this scheme? The only thing I can think of is that it "doesn't count" as teleportation because the first and third qubits are still entangled at the end. But I thought the main point of teleportation was just to transmit a state using only classical data and a pre-entangled pair, in which case this does seem to "count".

• the fact that the final state is entangled means the third register does not contain the state you wanted to teleport. The state the third register sees is here $|\alpha|^2 |0\rangle\!\langle0|+|\beta|^2 |1\rangle\!\langle1|$, which is not the same as $\alpha|0\rangle+\beta|1\rangle$ – glS Nov 25 '21 at 22:40

• how can $|\alpha|^2|0\rangle\langle0|+|\beta|^2|1\rangle\langle1|$ be a state? isn't that a 2x2 matrix instead of 2x1?
Nov 25 '21 at 22:53 • it's a state represented as a density matrix. Have a look e.g. at quantumcomputing.stackexchange.com/q/2347/55 – glS Nov 26 '21 at 8:34 As others have said, the basic problem here is that qubits 1 and 3 are still entangled. You cannot claim that just looking at the third qubit is like what you had from the input state. To be more concrete about this, it's supposed to be like Bob has received the initial state $$|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$$. If he has, then anything he does to that state will behave exactly as it would when he does it directly to $$|\psi\rangle$$. So, consider Bob measuring his qubit. Sure, if he measures it in the $$Z$$ basis, he'll get the same outcomes: 0 with probability $$|\alpha|^2$$ and 1 with probability $$|\beta|^2$$. However, Bob could equally well make a different measurement. Let's say he wants to make an $$X$$ measurement. He should get the answer + with probability $$\left|\frac{\alpha+\beta}{\sqrt{2}}\right|^2$$ What does he actually get? Let's say $$|\Psi_0\rangle$$ was the 3-qubit output state having got the measurement result 0 on the second qubit. The probability of getting the $$+$$ answer is $$\langle\Psi_0|\left(I\otimes I\otimes|+\rangle\langle +|\right)|\Psi_0\rangle=\frac{|\alpha|^2+|\beta|^2}{2}=\frac12.$$ This (in general) is different. Bob did not have the state $$|\psi\rangle$$. To be even more explicit, imagine that Alice had used $$|\psi\rangle=|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$$. If Bob receives this correctly, he'll have $$|+\rangle$$ and if he measures in the $$X$$ basis, he is guaranteed to get the answer +. However, we can also write $$|\Psi_0\rangle=\frac{1}{\sqrt{2}}(|+0+\rangle+|-0-\rangle)$$ from which it should be obvious (even if you are less familiar with how to describe a single-qubit measurement on a 3-qubit state) that you will get the - result half the time. • Thank you -- working out the projections carefully by hand in each case is illuminating. 
Nov 26 '21 at 20:26

@glS is right: if you measure the third qubit, the system will be in a mixed state. Only pure states can be expressed as superpositions of basis vectors. In the general case (both pure and mixed states), states can be described as density matrices. For quantum teleportation to work, all the qubits you measure must not be entangled with the state you want to teleport.

• If Alice, holding the third qubit, is completely isolated from Bob, holding the first qubit, she cannot tell whether it is entangled with the third qubit or not, right? So how is that any different than a pure state? Or do we not assume that Bob and Alice are permanently isolated? Nov 26 '21 at 1:16

• She can tell that her state is not pure, because the impurity will affect probability distributions. Nov 27 '21 at 8:38

• They are not permanently isolated, as they 1) need to obtain a shared Bell state (the first Hadamard gate + CNOT) 2) exchange two bits of classical information Nov 27 '21 at 9:02
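The density-matrix point made in the answers can be checked numerically. A minimal numpy sketch, assuming the post-measurement state $\alpha|000\rangle+\beta|101\rangle$ derived in the question with $|\psi\rangle=|+\rangle$:

```python
import numpy as np

# Numerical check of glS's point, for the state reached above after
# measuring qubit 2 as 0, with |psi> = |+> (alpha = beta = 1/sqrt(2)).
alpha = beta = 1 / np.sqrt(2)

psi = np.zeros(8, dtype=complex)   # amplitudes, index bits = (q1 q2 q3)
psi[0b000] = alpha                 # alpha |000>
psi[0b101] = beta                  # beta  |101>

# Partial trace over qubits 1 and 2 gives the state qubit 3 actually "sees".
t = psi.reshape(2, 2, 2)
rho3 = np.einsum("abc,abe->ce", t, t.conj())

print(np.round(rho3.real, 3))
# diag(|alpha|^2, |beta|^2) with zero off-diagonal terms: a mixed state.
# The pure state alpha|0> + beta|1> would instead have off-diagonal
# entries alpha * conj(beta) = 0.5 here.
```

The vanishing off-diagonal entries are exactly why Bob's $X$-basis measurement gives ± with probability 1/2 each, instead of + with certainty.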
# 22. Prove that the Taylor series converges to f(x) by showing that $R_n(x) \to 0$

22. Prove that the Taylor series converges to $f(x)$ by showing that $R_n(x) \to 0$ as $n \to \infty$.

24. Prove that the Taylor series converges to $f(x)$ by showing that $R_n(x) \to 0$ as $n \to \infty$.

30. Use a known Taylor series to find the Taylor series about $c = 0$ for the given function, and find its radius of convergence.

4. Use an appropriate Taylor series to approximate the given value, accurate to within $10^{-11}$.

8. Use a known Taylor series to conjecture the value of the limit.

12. Use a known Taylor series to conjecture the value of the limit.
$$\lim_{x\to 0}\frac{e^{2x}-1}{x}=\lim_{x\to 0}\frac{(1+2x+2x^2+\cdots)-1}{x}=2$$

16. Use a known Taylor polynomial with n nonzero terms to estimate the value of the integral.

18. Use a known Taylor polynomial with n nonzero terms to estimate the value of the integral.

24. Use the Binomial Theorem to find the first five terms of the Maclaurin series.
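Two of the ideas above can be illustrated numerically: the remainder $R_n(x)$ of problems 22/24 shrinking to zero, and the limit of problem 12. The function and evaluation points below are illustrative choices, not taken from the problem set:

```python
import math

# (a) Problems 22/24: the Taylor remainder R_n(x) = f(x) - T_n(x) -> 0,
#     shown here for f(x) = e^x at x = 3.
x = 3.0
partial, term = 0.0, 1.0
remainders = []
for n in range(30):
    partial += term                  # T_n(3) = sum of x^k/k! for k <= n
    term *= x / (n + 1)              # next term x^(n+1)/(n+1)!
    remainders.append(abs(math.exp(x) - partial))
print(remainders[5], remainders[29])  # the remainder shrinks rapidly

# (b) Problem 12: lim_{x->0} (e^{2x} - 1)/x = 2, sampled at a small x
#     (math.expm1 avoids the cancellation in e^{2x} - 1).
h = 1e-8
print(math.expm1(2 * h) / h)          # close to 2
```

The factorial in the denominator of the remainder term is what drives $R_n(x)\to 0$ for the exponential series at every fixed $x$.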
## Kondo effect in f-electron superlattices    [PDF]

Robert Peters, Yasuhiro Tada, Norio Kawakami

We demonstrate the importance of the Kondo effect in artificially created {\it f}-electron superlattices. We show that the Kondo effect not only changes the density of states of the {\it f}-electron layers, but also causes pronounced resonances at the Fermi energy in the density of states of the non-interacting layers in the superlattice, which lie between the {\it f}-electron layers. Remarkably, these resonances strongly depend on the structure of the superlattice; due to interference, the density of states at the Fermi energy can be strongly enhanced or can even show no change at all. Furthermore, we show that by inserting the Kondo lattice layer into a three-dimensional (3D) metal, the gap of the Kondo insulating state changes from a full gap to a pseudogap with quadratically vanishing spectral weight around the Fermi energy. Due to the formation of the Kondo insulating state in the {\it f}-electron layer, the superlattice becomes strongly anisotropic below the Kondo temperature. We prove this by calculating the in-plane and out-of-plane conductivity of the superlattice.

View original: http://arxiv.org/abs/1307.2675
# Secant Calculator for Degrees or Radians

Written by: PK

On this page is a secant calculator. Input an angle's measurement, choose your units, and run the calculator to see the secant.

## Using the Secant Calculator

To use the tool to find the secant, enter the measurement of the angle in either degrees or radians, choose your units, and compute.

• Angle - measurement of the angle, in degrees or radians

Once happy, click the "Compute Secant" button.

• Secant - the secant, or ratio of the hypotenuse over the adjacent side's measurement

## What is the secant?

For a given angle in a right triangle, secant is the ratio of the longest side (the hypotenuse) to the adjacent side. Secant is the reciprocal of the cosine; cosine is the ratio of the adjacent side's length to the hypotenuse. Trigonometric functions such as the secant are also used to find the ratio between sides of a unit circle (a circle with radius 1).

### Secant as a formula

Secant is usually abbreviated as "sec" as a trigonometry formula, as in the following equation:

\sec(\theta)

As the reciprocal of cosine, it's equivalent to:

\sec(\theta)=\frac{1}{\cos(\theta)}

If you already have the secant ratio, you can use the inverse secant or arcsecant to find the angle. After, find all the trigonometric functions and inverse trigonometric functions in one tool.
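The calculator's computation is a one-liner around the reciprocal-of-cosine formula; here is a sketch (not the site's actual code):

```python
import math

def sec(angle, degrees=False):
    """Secant: hypotenuse over adjacent side, i.e. 1 / cos(angle)."""
    theta = math.radians(angle) if degrees else angle
    c = math.cos(theta)
    if abs(c) < 1e-15:         # sec is undefined where cos(theta) = 0
        raise ValueError("secant undefined at this angle")
    return 1.0 / c

print(sec(60, degrees=True))   # approximately 2, since cos 60 deg = 1/2
print(sec(math.pi))            # -1.0
```

Note the guard: at 90° (and odd multiples) the cosine is zero, so the secant has no finite value, which is why calculators report an error there.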
# Conservation of energy or not? A process involving a capacitor

In my textbook, there is the following task (that's my translation and it may not be 100% clear or accurate, so feel free to request additional clarification)

A flat capacitor, charged with charge $Q$, with conducting plates of equal areas $S$, height $h$ and distance between them $d$, was positioned vertically, that is, in such a way that the bottom edges of the plates touch a dielectric fluid (e.g. water, of density $\rho$ and relative permittivity $\varepsilon_r$). Near the edges of the capacitor, there is a non-homogeneous electric field that causes the fluid to get polarized (that is, particles of the fluid become induced electric dipoles). One of the poles of each dipole will then be in a stronger electric field, which causes the fluid to be ''sucked'' into the capacitor (i.e. between the plates). What is the charge $Q$ that the capacitor ought to be charged with, so that the fluid fills all the space between the plates?

Then, in my textbook, the solution is presented. This solution consists of applying the law of conservation of energy: $$\frac{Q^2}{2C}=\frac{Q^2}{2\varepsilon_rC}+Sd\rho g\frac{h}{2}$$ and I find this solution absolutely wrong. Why? Consider the process of water being "sucked" into the capacitor. Then there is - inside of the capacitor - a changing electric field, which should induce a changing magnetic field and so on. Therefore, I think that the solution in my textbook does not consider the loss of energy due to the emitted electromagnetic waves. The answer - according to my textbook - is $$Q=S\sqrt{\frac{\varepsilon_r\varepsilon_0\rho gh}{\varepsilon_r-1}}$$ Personally, I found another solution. If we compute the total energy of the system capacitor+water, and then set $\frac{dE}{dy}=0$, then we get $$Q=S\varepsilon_r\sqrt{\frac{2\varepsilon_0\rho gh}{\varepsilon_r-1}}$$ which differs significantly from the solution presented in my textbook.
Which one is the correct solution then? And, if the solution presented in my textbook does not exhibit law of conservation of energy properly, then what changes need to be made, to consider the loss of energy due to the emission of electromagnetic waves? • It's not exactly clear to me why you think that the result of computing $\frac{\mathrm{d}E}{\mathrm{d}y}= 0$ is significant in any way. – ACuriousMind Oct 16 '16 at 13:34 • As it's another method (presented in numerous problem sets) for solving problems involving dielectrics and capacitors, for instance... Basically, it's another method of solving the problem. – VanDerWarden Oct 16 '16 at 13:37 • There is no y in the expression for energy here. Can I assume that your use of $\frac{dE}{dy}=0$ is the same as $\frac{dE}{dh}=0$? – D. Ennis Oct 16 '16 at 14:29 • So you plan to introduce a displacement current? – GRrocks Oct 16 '16 at 15:21 Neglecting for a moment your point about electromagnetic radiation losses, I trust that you agree that the book's logic was otherwise sound in arriving at $$\frac{Q^2}{2C}=\frac{Q^2}{2\varepsilon_rC}+Sd\rho g\frac{h}{2}$$ Algebraically, this does, indeed, give $$Q=S\sqrt{\frac{\varepsilon_r\varepsilon_0\rho gh}{\varepsilon_r-1}}$$ So the book's answer is correct if energy is not lost by the system. Assuming that by $\frac{dE}{dy}=0$, you mean $\frac{dE}{dh}=0$, I would point out that the change in energy with respect to height is not zero. There is more energy between the plates the higher you go. So your otherwise inventive and respectable efforts led you to error. It is $\frac{dE}{dt}$ which would be equal to zero as the water rises between the plates and energy is transferred from the electric field to the gravitational field. Finally, if you consider the electromagnetic losses, I think that you are correct in that the loss is a finite, nonzero quantity. The electric field is changing, and the rate of change itself changes. However, the loss would be extremely small. 
The electric field is changing quite slowly, and if it oscillated continuously at that rate (which it doesn't), the wave would be a very low frequency one. Low frequency waves, of course, are low energy waves. Plus, the overall duration of the radiation is quite short. It would be more correct to say that an electromagnetic pulse is generated than to say an electromagnetic wave is generated. I think that the authors were correct in neglecting it, but perhaps should have said so, as some authors do in similar situations, because some readers are very alert and want everything to integrate with what they already understand. To answer your final question, a time-dependent term would have to be added to the equation. Frankly, I don't know what that term would be without researching it.

• "the book's answer is correct"... Excuse me, but that's another utterly useless answer! The energy is OBVIOUSLY lost. To emphasise this point, other exercises in the textbook SPECIFICALLY say that some portion of the energy was lost due to the emission, so it's at least inconsistent of them in this case. Moreover, let me tell you something - there once was a task in our national Physics Olympiad. And in the solutions... – VanDerWarden Oct 16 '16 at 15:23

• Wow! There was EXACTLY THE SAME reasoning as mine (regarding the derivative of the energy), different only by the fact that there was a solid dielectric plate instead of water, but that's not a huge difference, is it? – VanDerWarden Oct 16 '16 at 15:23

• And we lose a non-trivial and far from negligible portion of energy. – VanDerWarden Oct 16 '16 at 15:26

• Neither your solution nor the book's accounts for electromagnetic energy losses, and you asked which solution on that basis was correct. The answer is that on that basis the book's solution is correct and yours is incorrect, and I told you why. Your next question was: if the book did not handle conservation of energy properly, how could it be changed?
I answered that too, in a manner respectful of your efforts. But apparently, any answer that does not say that you are 100% correct is useless to you. – D. Ennis Oct 16 '16 at 16:29

The correct answer is the one in the book. This level of description is electrostatic, no further than that. You can see from Maxwell's laws that magnetic field contributions enter when the electric field is not conservative (i.e. not irrotational). This is not the case; you couldn't even consider the potential V otherwise. In fact, a conservative vector field is the gradient of a scalar function, and such a field is always irrotational. Maybe you are confusing a spatially inhomogeneous electric field with a changing-in-time electric field.

At first I thought your question was completely bogus, but after thinking about it I'm starting to think that both the book and you are right. The book however probably didn't intend to make it this tricky, so actually it is a mistake in the book. Let's go to the physics. The total energy $E_{tot}$ is the sum of the electrical energy in the capacitor $E_C$ and the gravitational energy of the fluid $E_g$. Both these energies depend on the fluid level $y$. The fluid level will be forced upwards as long as the reduction of electrical energy compensates for the increase in gravitational energy. The fluid level has a stable point when $\frac{dE_C}{dy}=-\frac{dE_g}{dy}$, in other words when $\frac{dE_{tot}}{dy}=0$. From the question it is not completely clear whether the capacitor is first charged and then brought in contact with the fluid, or slowly charged while in contact with the fluid. I'll consider both cases. For the slow charging case, the system is constantly in equilibrium at the fluid level given by $\frac{dE_{tot}}{dy}=0$. At the charge you calculated, the fluid will reach the top of the capacitor. Then the case of first charging and then contacting the fluid. Here the system starts out of equilibrium.
If we assume no energy losses, the fluid will be accelerated towards the equilibrium level. However, due to the kinetic energy of the fluid, it will pass this equilibrium level and rise even further. The fluid level will then drop again and oscillate around the equilibrium level. The answer in the book gives the charge for which the highest level in the oscillation just reaches the top of the capacitor (when the kinetic energy is zero). In practice the oscillations will be damped and the system will stabilize at the equilibrium level. The main cause of damping will be the viscosity and friction of the fluid. In theory, if there were no mechanical damping, the system might be damped by emitting very low frequency EM radiation. However, in practice this effect is completely negligible. To conclude, the book is mostly wrong, you were mostly right. Note that you probably would have had an answer sooner if you had properly explained what you meant by dE/dy. Furthermore, your responses to the answer of D. Ennis are completely inappropriate and almost made me decide not to type this answer. However, the amazing fact that for once the book is actually wrong made me type it anyway.

• I think you'll find that it is dEtot/dt which is equal to zero, not dEtot/dy. – D. Ennis Oct 22 '16 at 19:51
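The stationarity condition used by the asker can be probed numerically. The sketch below uses made-up parameter values and models the partly filled capacitor as two capacitors in parallel; it checks that the asker's formula makes $\frac{dE_{tot}}{dy}$ vanish at $y=h$:

```python
from math import sqrt

# Made-up parameters (not from the problem): a water-like fluid and a
# small parallel-plate capacitor, SI units throughout.
eps0 = 8.854e-12
eps_r, rho, g = 80.0, 1000.0, 9.81
S, h, d = 0.04, 0.2, 0.002          # plate area, plate height, gap
w = S / h                            # plate width

# The asker's formula: Q = S * eps_r * sqrt(2*eps0*rho*g*h/(eps_r - 1))
Q = S * eps_r * sqrt(2 * eps0 * rho * g * h / (eps_r - 1))

def E_tot(y):
    # Filled (height y) and empty (height h - y) parts act in parallel.
    C = eps0 * w * ((h - y) + eps_r * y) / d
    return Q**2 / (2 * C) + rho * g * w * d * y**2 / 2

dy = 1e-6
dEdy = (E_tot(h + dy) - E_tot(h - dy)) / (2 * dy)   # central difference
print(abs(dEdy))   # essentially zero: y = h is a stationary point
```

With the book's (smaller) value of $Q$ substituted instead, the same derivative comes out negative at $y=h$, which is the overshoot-and-oscillate picture described in this answer.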
# Boundary conditions for Stream function-Vorticity method

I have used a MATLAB finite difference code to solve a lid driven cavity flow, based on a Stream function-Vorticity formulation of the viscous, incompressible Navier-Stokes equations. Details about the method can be found here. I want to change the code to simulate the flow around a square box in a rectangular domain, where the flow is uniform on the left side, and the flow is limited by the horizontal walls on the top and bottom. However, on the right side of the domain, where the flow exits, I have no idea how to express the boundary conditions. How can I express the boundary conditions on the right side? Is it even possible for this method?

(In the following I assume two-dimensional flow with the main flow in the positive x direction; the velocity vector is considered $\vec{u} = (u \quad v)^T$)

1. homogeneous Neumann for the streamwise velocity and no tangential stress, i.e. $\frac{\partial u}{\partial x} = 0$ and $v = 0$
2. If your flow is turbulent you might want a proper outflow condition based on the wave equation: $\frac{\partial \vec{u}}{\partial t} + \vec{c}\, \frac{\partial \vec{u}}{\partial x} = 0$ with $\vec{c}$ the convective outlet velocity, which should be on the order of your bulk velocity.

It should be possible to reformulate both boundary conditions to streamfunction and vorticity values. The first option is probably the simpler one as it only requires $\omega = 0$ and the streamlines to be horizontal.
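Discretely, the two outlet options might look like this (a sketch; the arrays and parameters are hypothetical solver data, not code from the question):

```python
import numpy as np

# Hypothetical solver parameters and fields.
c, dt, dx = 1.0, 0.01, 0.05
u = np.linspace(0.0, 1.0, 21)        # one row of streamwise velocities

# Option 2: convective outflow du/dt + c du/dx = 0, discretized with a
# first-order upwind difference at the last grid column.
u_new = u.copy()
u_new[-1] = u[-1] - c * dt / dx * (u[-1] - u[-2])

# Option 1 for the stream function: copying the last interior column
# enforces d(psi)/dx = 0 at the outlet, i.e. v = -d(psi)/dx = 0,
# which makes the streamlines horizontal there.
psi = np.random.rand(21, 30)
psi[:, -1] = psi[:, -2]
```

Both updates touch only the last column, so they slot into the time loop of a stream function-vorticity code after the interior sweep.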
###### What is the Distributive Property?

The distributive property (aka distributive law) is a property of real numbers which says that the product of a number, say a, and the sum (or difference) of two numbers, say b and c, is equal to the sum (or difference) of the product of a and b and of a and c. In symbols, we have:

The Distributive Property of Real Numbers

I'm sure you have used this property to work with numbers. A typical question that involves the distributive property would be: Find the product of x and (x-7). Using the distributive property, you will get $x(x-7)=x^2 - 7x$. This means that if x = 10, one way to calculate the product of 10 and (10-7) is 10×3 or 10×10-70. Both of these will give you 30. This property is true for all real numbers. Here's a more interesting problem where you can apply the distributive property of real numbers.

###### Problem

Warning: You do not need your calculator for this.

###### Solution

I'm not going to give the solution for each. Instead I'll use a general form. I say general because it has the same structure as the three problems. The x represents any real number.

$(1-x)(1+x+x^2+x^3+x^4+x^5)$

= $1(1-x)+x(1-x)+x^2(1-x)+x^3(1-x)+x^4(1-x)+x^5(1-x)$

= $1-x+x-x^2+x^2-x^3+x^3-x^4+x^4-x^5+x^5-x^6$

= $1-x^6$

This in fact works for any number of terms n. You can try proving the statement below to show that it is true for n terms in any x.

### 2 Responses to "Application of Distributive Property"

1. Bravo Erlina! Encouragements to pursuit! Jean-Yves Rollin, Skype scolaire75

2. This can also be used to create a fake proof that 1+1+1…+1 = 1. As an exercise you could ask your students to show what's wrong with the following argument: Consider the sum: 1+x+x²+x³+…+x^n = 1 We want to find a number x such that the above equation is solved.
Then, if we multiply both sides by x-1 we have:

(x-1)(1+x+x²+x³+…+x^n) = x-1

Then, by the above formula:

x*x^n – 1 = x – 1

Cancelling the -1:

x*x^n = x

So we cancel an x on both sides:

x^n = 1

Therefore, x = 1. But if we put this value in the original sum:

1+1+1²+1³+…+1^n = 1

By the way, that's a cool way of looking into the geometric progression sum formula!
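The telescoping identity behind both the post and the comment, $(1-x)(1+x+\cdots+x^n)=1-x^{n+1}$, is easy to check numerically:

```python
# Numerical check of (1 - x)(1 + x + ... + x^n) = 1 - x^(n+1)
# for a few sample values (exact for the integer cases).
def telescope(x, n):
    return (1 - x) * sum(x**k for k in range(n + 1))

for x in (2, 10, 0.5, -3):
    for n in (5, 7):
        assert abs(telescope(x, n) - (1 - x**(n + 1))) < 1e-9

print(telescope(10, 5))   # 1 - 10**6 = -999999
```

Dividing both sides by 1-x (for x ≠ 1) gives the geometric progression sum formula mentioned at the end of the comment.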
Counting is probably the first and most basic mathematical operation ever created. The earliest archaeological finds date counting to the Upper Paleolithic Era (some $50 000$ years ago). As was the case with other mathematical operations, it was developed out of need – in this case to represent the size of the group, the number of animals in a herd and similar things. The first tools we humans relied on to help us count were our fingers (which are still one of the most used counting aids worldwide). Since fingers are somewhat limited to $10$, a new invention was introduced – the tally system (the earliest known proof of which is from around $35 000$ B.C.). The tally system revolves around scratches on sticks, rocks or bones. The number of scratches represents the number of items counted – five birds would be represented by five scratches, seven mammoths would be represented by seven scratches etc. The "modern" tally system, which we're still using in this day and age, organizes the scratches (tallies) into groups of five – four vertical scratches and one diagonal (that is drawn across the vertical ones). Eventually, tallies were replaced with more practical symbols – numerals ($1, 2, 3, 4, 5,…$) – which are in wide use today.
Math Central Quandaries & Queries

Question from Allison, a student:

It asks me to find an example of an irrational number less than -5 and I don't understand the difference between a rational number and an irrational number, besides the fact that a rational number can be repeated and shown as a simple fraction and an irrational number can't be written as a simple fraction. Can you help me?

Hi Allison,

You have the essential facts about rational and irrational numbers correct with one small edit: a rational number can be written as a simple fraction and an irrational number can't be written as a simple fraction. The "repeated" aspect is that when a simple fraction is represented in decimal form it has a repeated pattern, for example $\large \frac{17}{45} \normalsize = 0.3777 \cdot \cdot \cdot$ where the digit 7 is repeated. You are to find an example of an irrational number which is less than -5. Can you give me any example of an irrational number?

Penny

Math Central is supported by the University of Regina and the Imperial Oil Foundation.
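The repeating pattern Penny mentions can be seen by carrying out the long division in code:

```python
# Long-division sketch: the decimal digits of 17/45 show the repeated 7.
def decimal_digits(num, den, k):
    """First k digits after the decimal point of num/den (0 < num/den)."""
    digits = []
    r = num % den            # remainder after the integer part
    for _ in range(k):
        r *= 10
        digits.append(r // den)
        r %= den
    return digits

print(decimal_digits(17, 45, 6))   # [3, 7, 7, 7, 7, 7]
```

Because there are only finitely many possible remainders, the digits of any simple fraction must eventually cycle, which is exactly why an irrational number can never have such a repeating expansion.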
+0

# Negative Exponent Help

can someone tell me what 3 to the negative 4th is? Apr 24, 2018

#1 $$1/81$$ Apr 24, 2018

#2 $$3^{-\frac{1}{4}}=\left(\frac{1}{3}\right)^{\frac{1}{4}}=\sqrt[4]{\frac{1}{3}}=0.759...$$ Apr 24, 2018

#4 Hey Will, the question is asking for 3 to the -4, not to the negative one fourth. GYanggg Apr 24, 2018

#3 $$3^{-4}=\frac{1}{3^4}=\frac{1}{81}=0.\overline{012345679}$$ Apr 24, 2018
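A quick check of the accepted interpretation, using exact arithmetic:

```python
from fractions import Fraction

# "3 to the negative 4th" is the reciprocal of 3 to the 4th:
print(Fraction(3) ** -4)          # 1/81
print(float(Fraction(3) ** -4))   # 0.0123456790...
```

The negative exponent just flips the base into the denominator; the decimal expansion then repeats with period 9.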
Isis 3 Developer Reference

Isis::GaussianStretch Class Reference

Gaussian stretch class. More...

#include <GaussianStretch.h>

Inheritance diagram for Isis::GaussianStretch:
Collaboration diagram for Isis::GaussianStretch:

Public Member Functions

GaussianStretch (Histogram &histogram, const double mean=0.0, const double standardDeviation=1.0)
Constructs a gaussian stretch object. More...

~GaussianStretch ()

double Map (const double value) const
Maps an input value to an output value based on the gaussian distribution. More...

void Reset ()
Reset all accumulators and counters to zero. More...

void AddData (const double *data, const unsigned int count)
Add an array of doubles to the accumulators and counters. More...

void AddData (const double data)
Add a double to the accumulators and counters. More...

void RemoveData (const double *data, const unsigned int count)
Remove an array of doubles from the accumulators and counters. More...

void RemoveData (const double data)

void SetValidRange (const double minimum=Isis::ValidMinimum, const double maximum=Isis::ValidMaximum)

double ValidMinimum () const
double ValidMaximum () const
bool InRange (const double value)
bool AboveRange (const double value)
bool BelowRange (const double value)

double Average () const
Computes and returns the average. More...

double StandardDeviation () const
Computes and returns the standard deviation. More...

double Variance () const
Computes and returns the variance. More...

double Sum () const
Returns the sum of all the data. More...

double SumSquare () const
Returns the sum of all the squared data. More...

double Rms () const
Computes and returns the rms. More...

double Minimum () const
Returns the absolute minimum double found in all data passed through the AddData method. More...

double Maximum () const
Returns the absolute maximum double found in all data passed through the AddData method. More...
double ChebyshevMinimum (const double percent=99.5) const
This method returns a minimum such that X percent of the data will fall within K standard deviations of the average (Chebyshev's Theorem). More...

double ChebyshevMaximum (const double percent=99.5) const
This method returns a maximum such that X percent of the data will fall within K standard deviations of the average (Chebyshev's Theorem). More...

double BestMinimum (const double percent=99.5) const
This method returns the better of the absolute minimum or the Chebyshev minimum. More...

double BestMaximum (const double percent=99.5) const
This method returns the better of the absolute maximum or the Chebyshev maximum. More...

double ZScore (const double value) const
This method returns the z-score of the given value. More...

BigInt TotalPixels () const
Returns the total number of pixels processed (valid and invalid). More...

BigInt ValidPixels () const
Returns the total number of valid pixels processed. More...

BigInt OverRangePixels () const
Returns the total number of pixels over the valid range encountered. More...

BigInt UnderRangePixels () const
Returns the total number of pixels under the valid range encountered. More...

BigInt NullPixels () const
Returns the total number of NULL pixels encountered. More...

BigInt LisPixels () const
Returns the total number of low instrument saturation (LIS) pixels encountered. More...

BigInt LrsPixels () const
Returns the total number of low representation saturation (LRS) pixels encountered. More...

BigInt HisPixels () const
Returns the total number of high instrument saturation (HIS) pixels encountered. More...

BigInt HrsPixels () const
Returns the total number of high representation saturation (HRS) pixels encountered. More...

BigInt OutOfRangePixels () const
Returns the total number of pixels outside of the valid range encountered. More...

bool RemovedData () const

PvlGroup toPvl (QString name="Statistics") const
Serialize statistics as a pvl group.
More... void save (QXmlStreamWriter &stream, const Project *project) const QDataStream & write (QDataStream &stream) const Order saved must match the offsets in the static compoundH5DataType() method. More... Detailed Description Gaussian stretch class. This class is used to stretch the input histogram to a gaussian distribution with the specified mean and standard deviation. Constructor & Destructor Documentation Isis::GaussianStretch::GaussianStretch ( Histogram & histogram, const double mean = 0.0, const double standardDeviation = 1.0 ) Constructs a gaussian stretch object. Parameters histogram The input histogram mean The mean of the output distribution standardDeviation The standard deviation of the output distribution Isis::GaussianStretch::~GaussianStretch ( ) inline Member Function Documentation bool Isis::Statistics::AboveRange ( const double value ) inherited void Isis::Statistics::AddData ( const double * data, const unsigned int count ) inherited Add an array of doubles to the accumulators and counters. This method can be invoked multiple times (for example: once for each line in a cube) before obtaining statistics. Parameters data The data to be added to the data set used for statistical calculations. count The number of elements in the incoming data to be added. void Isis::Statistics::AddData ( const double data ) inherited Add a double to the accumulators and counters. This method can be invoked multiple times (for example: once for each pixel in a cube) before obtaining statistics. Parameters data The data to be added to the data set used for statistical calculations. double Isis::Statistics::Average ( ) const inherited Computes and returns the average. If there are no valid pixels, then NULL8 is returned. Returns The Average References Isis::NULL8. 
bool Isis::Statistics::BelowRange (const double value) [inherited]

double Isis::Statistics::BestMaximum (const double percent = 99.5) const [inherited]
    Returns the better of the absolute maximum or the Chebyshev maximum, where "better" means the value closest to the mean.
    Parameters:
        percent  The probability that the maximum is within K standard deviations of the mean (used to compute the Chebyshev maximum). Default value = 99.5.
    Returns: Best of the absolute and Chebyshev maximums
    See also: Statistics::Maximum, Statistics::ChebyshevMaximum

double Isis::Statistics::BestMinimum (const double percent = 99.5) const [inherited]
    Returns the better of the absolute minimum or the Chebyshev minimum, where "better" means the value closest to the mean.
    Parameters:
        percent  The probability that the minimum is within K standard deviations of the mean (used to compute the Chebyshev minimum). Default value = 99.5.
    Returns: Best of the absolute and Chebyshev minimums
    See also: Statistics::Minimum, Statistics::ChebyshevMinimum

double Isis::Statistics::ChebyshevMaximum (const double percent = 99.5) const [inherited]
    Returns a maximum such that X percent of the data falls within K standard deviations of the average (Chebyshev's Theorem). It can be used to obtain a maximum that does not include statistical outliers.
    Parameters:
        percent  The probability that the maximum is within K standard deviations of the mean. Default value = 99.5.
    Returns: Maximum value excluding statistical outliers
    Throws: Isis::IException::Message

double Isis::Statistics::ChebyshevMinimum (const double percent = 99.5) const [inherited]
    Returns a minimum such that X percent of the data falls within K standard deviations of the average (Chebyshev's Theorem). It can be used to obtain a minimum that does not include statistical outliers.
    Parameters:
        percent  The probability that the minimum is within K standard deviations of the mean.
        Default value = 99.5.
    Returns: Minimum value excluding statistical outliers
    Throws: Isis::IException::Message

BigInt Isis::Statistics::HisPixels () const [inherited]
    Returns the total number of high instrument saturation (HIS) pixels encountered.

BigInt Isis::Statistics::HrsPixels () const [inherited]
    Returns the total number of high representation saturation (HRS) pixels encountered.

bool Isis::Statistics::InRange (const double value) [inherited]

BigInt Isis::Statistics::LisPixels () const [inherited]
    Returns the total number of low instrument saturation (LIS) pixels encountered.

BigInt Isis::Statistics::LrsPixels () const [inherited]
    Returns the total number of low representation saturation (LRS) pixels encountered.

double Isis::GaussianStretch::Map (const double value) const
    Maps an input value to an output value based on the Gaussian distribution.
    Parameters:
        value  Value to map
    Returns: The mapped output value

double Isis::Statistics::Maximum () const [inherited]
    Returns the absolute maximum double found in all data passed through the AddData method.
    Throws: Isis::IException::Message if the data set is blank, so the maximum is invalid.

double Isis::Statistics::Minimum () const [inherited]
    Returns the absolute minimum double found in all data passed through the AddData method.
    Returns: Current minimum value in data set
    Throws: Isis::IException::Message if the data set is blank, so the minimum is invalid.

BigInt Isis::Statistics::NullPixels () const [inherited]
    Returns the total number of NULL pixels encountered.

BigInt Isis::Statistics::OutOfRangePixels () const [inherited]
    Returns the total number of pixels outside of the valid range encountered.

BigInt Isis::Statistics::OverRangePixels () const [inherited]
    Returns the number of pixels greater than the ValidMaximum() processed.

QDataStream & Isis::Statistics::read (QDataStream &stream) [inherited]

void Isis::Statistics::RemoveData (const double *data, const unsigned int count) [inherited]
    Remove an array of doubles from the accumulators and counters. Note that this invalidates the absolute minimum and maximum; they will no longer be usable.
    Parameters:
        data   The data to be removed from the data set used for statistical calculations.
        count  The number of elements in the data to be removed.
    Throws: IException::Message if RemoveData is trying to remove data that does not exist.

void Isis::Statistics::RemoveData (const double data) [inherited]

bool Isis::Statistics::RemovedData () const [inherited]

void Isis::Statistics::Reset () [inherited]
    Reset all accumulators and counters to zero.

double Isis::Statistics::Rms () const [inherited]
    Computes and returns the rms (root mean square). If there are no valid pixels, NULL8 is returned.
void Isis::Statistics::save (QXmlStreamWriter &stream, const Project *project) const [inherited]

void Isis::Statistics::SetValidRange (const double minimum = Isis::ValidMinimum, const double maximum = Isis::ValidMaximum) [inherited]

double Isis::Statistics::StandardDeviation () const [inherited]
    Computes and returns the standard deviation. If there are no valid pixels, NULL8 is returned.

double Isis::Statistics::Sum () const [inherited]
    Returns the sum of all the data.

double Isis::Statistics::SumSquare () const [inherited]
    Returns the sum of all the squared data.

PvlGroup Isis::Statistics::toPvl (QString name = "Statistics") const [inherited]
    Serialize statistics as a pvl group.
    Parameters:
        name  Name of the statistics group (default "Statistics").
    Returns: Statistics information as a pvl group

BigInt Isis::Statistics::TotalPixels () const [inherited]
    Returns the total number of pixels processed (valid and invalid).

BigInt Isis::Statistics::UnderRangePixels () const [inherited]
    Returns the number of pixels less than the ValidMinimum() processed.

double Isis::Statistics::ValidMaximum () const [inherited]

double Isis::Statistics::ValidMinimum () const [inherited]

BigInt Isis::Statistics::ValidPixels () const [inherited]
    Returns the total number of valid pixels processed.
    Only valid pixels are utilized when computing the average, standard deviation, variance, minimum and maximum.

double Isis::Statistics::Variance () const [inherited]
    Computes and returns the variance. If there are no valid pixels, NULL8 is returned.

QDataStream & Isis::Statistics::write (QDataStream &stream) const [inherited]
    Order saved must match the offsets in the static compoundH5DataType() method.

double Isis::Statistics::ZScore (const double value) const [inherited]
    Returns the z-score of the given value. The z-score is the number of standard deviations the value lies above or below the average.
    Parameters:
        value  The value to calculate the z-score of.
    Returns: z-score

The documentation for this class was generated from the following files:
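The ZScore() and Chebyshev bound semantics described above can be sketched as follows. The mapping from percent to K (solving 1 - 1/K^2 = percent/100) is an assumption based on the standard statement of Chebyshev's theorem, not taken from the source.

```python
import math

def chebyshev_bounds(mean, stddev, percent=99.5):
    """Chebyshev's theorem: at least `percent` of any distribution lies
    within K standard deviations of the mean, where 1 - 1/K^2 = percent/100.
    Assumed to match the ChebyshevMinimum/Maximum semantics above."""
    k = math.sqrt(1.0 / (1.0 - percent / 100.0))
    return mean - k * stddev, mean + k * stddev

def z_score(value, mean, stddev):
    """Number of standard deviations `value` lies above or below the mean."""
    return (value - mean) / stddev

lo, hi = chebyshev_bounds(mean=100.0, stddev=5.0, percent=75.0)
print(lo, hi)                       # K = 2 for 75%, so 90.0 110.0
print(z_score(110.0, 100.0, 5.0))   # 2.0
```

For the default percent = 99.5, K is about 14.1, so the Chebyshev bounds are wide but guaranteed regardless of the distribution's shape.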
In the RSA algorithm, we select two random large primes p and q. Their product n = p*q becomes the modulus, and we calculate the totient F(n) = (p-1)(q-1). We then choose an encryption exponent e that is relatively prime to F(n), and a decryption exponent d such that e*d ≡ 1 (mod F(n)); that is, we need two numbers e and d whose product is equal to 1 mod F(n). The public key is the pair (n, e) and the private key is d. RSA works because knowledge of the public key does not reveal the private key. The scheme was first publicly invented by Ron Rivest, Adi Shamir, and Leonard Adleman in 1978, and it rests on the fact that there is only one way to break a given integer down into a product of prime numbers, together with a so-called trapdoor problem associated with this fact. A recommended syntax for interchanging RSA public keys between implementations is given in the Appendix.

Illustration of the RSA algorithm with p, q = 5, 7: n = 5 * 7 = 35 and F(n) = (p-1)(q-1) = 4 * 6 = 24. Choosing e = 11 (coprime to 24), we compute d from e*d ≡ 1 (mod 24), giving d = 11, since 11 * 11 = 121 = 5 * 24 + 1.

A second worked set of numbers: p = 5, q = 11, e = 3, M = 8. Then n = p*q = 5*11 = 55 and z = (p-1)(q-1) = (5-1)(11-1) = 40. e = 3 is valid because it has no common factor with z and it is less than n. d should obey e*d ≡ 1 (mod z): d = 27, since 3 * 27 = 81 = 2 * 40 + 1. Encryption gives m^e = 8^3 = 512, and c = m^e mod n = 512 mod 55 = 17.

Problem (RSA). Given: p = 5, q = 31, e = None, m = 25. Step one is done, since we are given p and q such that they are two distinct prime numbers.
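The p, q = 5, 7 illustration above can be reproduced in a few lines. This is a sketch using Python's built-in three-argument pow for the modular inverse (Python 3.8+).

```python
from math import gcd

# The key-generation steps above, run on the toy primes p = 5, q = 7.
p, q = 5, 7
n = p * q                  # modulus: 35
phi = (p - 1) * (q - 1)    # totient: 24
e = 11                     # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)        # private exponent: e^-1 mod phi (Python 3.8+)
print(n, phi, e, d)        # d = 11, since 11 * 11 = 121 = 5 * 24 + 1
assert (e * d) % phi == 1
```

The same template works for any of the prime pairs used in this section.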
The following steps are involved in generating RSA keys:

1. Create two large prime numbers, p and q.
2. Calculate the product n = p*q.

Getting the modulus (N): if the modulus (N) is known, you should send it as a parameter to mbedtls_rsa_import() (or mbedtls_rsa_import_raw()).

A note on key sizes: 512-bit (155 digit) RSA is no longer considered secure, as modern brute-force attacks can extract private keys in just hours, and a similar attack was able to extract a 768-bit (232 digit) private key in 2010.

Tutorial code for RSA typically demonstrates public-key cryptography in an easy-to-follow manner: it works on integers alone, and uses much smaller numbers for the sake of clarity.

Example 1 for the RSA algorithm: let p = 13 and q = 19.
Implementation note: an RSA context can be initialized directly with the values of P, Q and E (for instance, an mbedtls_rsa_context in mbed TLS). Interestingly, though n is part of the public key, the difficulty of factorizing n is what protects the private key: n is public, but p and q are private.

A common question: given an n value in the range of 10^42, how do you find p and q? The only obvious way is to check every prime from 2 up to sqrt(n), which for numbers of that size will take an eternity. This is exactly the difficulty RSA relies on.

Example 1, continued: with p = 13 and q = 19, the value of n = p*q = 13 * 19 = 247, and (p-1)*(q-1) = 12 * 18 = 216. Choose the encryption key e = 11, which is relatively prime to 216.

For the pair p = 7, q = 11 with e = 17, we need to compute d = e^-1 mod f(n), where f(n) = (p-1)(q-1) = 6 * 10 = 60, by backward substitution of the GCD algorithm: 60 = 17 * 3 + 9; 17 = 9 * 1 + 8; 9 = 8 * 1 + 1. Substituting back gives 1 = 2 * 60 - 7 * 17, so d = -7 mod 60 = 53.
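The backward substitution above is the extended Euclidean algorithm. A sketch, not a hardened implementation:

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(e, phi):
    """d = e^-1 mod phi, the RSA private exponent."""
    g, x, _ = egcd(e, phi)
    if g != 1:
        raise ValueError("e and phi are not coprime")
    return x % phi

print(modinv(17, 60))    # 53, matching the hand computation above
print(modinv(11, 216))   # 59, the d for Example 1 (p=13, q=19, e=11)
```

On Python 3.8+ the same values come from pow(e, -1, phi); the explicit version shows where the back-substitution steps come from.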
We also need a small exponent e. But e must be greater than 1, less than (p-1)(q-1), and share no factor with f(n) = (p-1)(q-1).

Quiz: which of the following is the property of p and q? (a) p and q should be divisible by Ф(n); (b) p and q should be co-prime; (c) p and q should be prime; (d) p/q should give no remainder. The answer is (c): p and q should be prime.

In general, suppose n = pq for large primes p, q, and ed ≡ 1 mod (p-1)(q-1): the usual RSA setup.

The primes must also not be too close together. If |p - q| is small, then (p - q)^2 / 4 is naturally small, so (p + q)^2 / 4 = n + (p - q)^2 / 4 is only slightly larger than n, and (p + q) / 2 is close to sqrt(n). Fermat's method then factors n quickly: check each integer x from ceil(sqrt(n)) upward until x^2 - n is a perfect square, denoted y^2; then x^2 - n = y^2, and n decomposes by the difference-of-squares formula as (x - y)(x + y). For a small modulus such as n = P*Q = 3127, this search succeeds almost immediately. (See the RSA Calculator by JL Popyack, December 2002, for help in selecting appropriate values of N, e, and d.)
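Fermat's method as described above can be sketched directly; here it factors the n = 3127 example.

```python
import math

def fermat_factor(n):
    """Search x >= ceil(sqrt(n)) until x*x - n is a perfect square y*y;
    then n = (x - y) * (x + y). Fast only when |p - q| is small."""
    x = math.isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = math.isqrt(y2)
        if y * y == y2:
            return x - y, x + y
        x += 1

print(fermat_factor(3127))   # (53, 59): the primes are close, so one step past sqrt(n) suffices
```

This is why p - q must not be small: close primes put the factors within a few iterations of sqrt(n).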
The pair (N, d) is called the secret key, and only its holder can decrypt. A typical exercise: find d such that de ≡ 1 (mod z) and d < 160.

In an RSA cryptosystem, a particular user A uses the two prime numbers p = 13 and q = 17 to generate her public and private keys. Here n = 221 and z = (p-1)(q-1) = 192. If the public key of A is e = 35, then the private key d satisfies 35d ≡ 1 (mod 192), giving d = 11, since 35 * 11 = 385 = 2 * 192 + 1.

A very simple example of RSA encryption can be worked out on a pocket calculator, or even by hand. Note that both the public and private keys contain the important number n = p * q. The security of the system relies on the fact that n is hard to factor; that is, given a large number, even one which is known to have only two prime factors, there is no easy way to discover what they are.

Select primes p = 11, q = 3. Then n = pq = 11 * 3 = 33 and phi = (p-1)(q-1) = 10 * 2 = 20.
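The p = 11, q = 3 numbers above support a complete encrypt/decrypt round trip. The exponent e = 3 is an assumed choice here (any e coprime to phi = 20 works), and m must be less than n.

```python
# Round trip with the p = 11, q = 3 numbers above.
p, q, e, m = 11, 3, 3, 8
n = p * q                  # 33
phi = (p - 1) * (q - 1)    # 20
d = pow(e, -1, phi)        # 7, since 3 * 7 = 21 ≡ 1 (mod 20)
c = pow(m, e, n)           # encrypt: m^e mod n
assert pow(c, d, n) == m   # decrypt recovers the plaintext
print(c, d)
```

Exactly the same five lines, with p, q = 5, 11 and e = 3, reproduce the c = 17 worked example earlier in this section.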
Continuing the p = 7, q = 11 example: n = p * q = 7 * 11 = 77. With the toy primes p = 5 and q = 7, n = 5 * 7 = 35. In each case the pair (n, e) is the public key. Two cautions apply at any size: the number d that makes up part of the private key cannot be too small, and for strong, unbreakable encryption n should be a large number, typically a minimum of 512 bits.

Is there a public API to create an RSA structure by specifying the values of p, q and e? Crypt-OpenSSL-RSA/RSA.xs does this: new_key_from_parameters, given Crypt::OpenSSL::Bignum objects for n, e, and optionally d, p, and q (where p and q are the prime factors of n, e is the public exponent and d is the private exponent), creates a new Crypt::OpenSSL::RSA key. OpenSSL likewise computes the iqmp parameter (also known as qInv) from the RSA primes p and q; supplying an inconsistent iqmp used to work only because OpenSSL simply re-computed it when it did not match p and q, and a patch now enforces consistency.
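The CRT private-key parameters mentioned above (iqmp/qInv) can be computed as follows. The primes p = 11, q = 7 and exponent e = 13 are assumed values chosen for illustration, with e coprime to phi.

```python
# CRT private-key parameters: qinv is q^-1 mod p (the iqmp/qInv value).
p, q, e = 11, 7, 13
phi = (p - 1) * (q - 1)    # 60
d = pow(e, -1, phi)        # full private exponent
dp = d % (p - 1)           # exponent for the mod-p half of the CRT
dq = d % (q - 1)           # exponent for the mod-q half of the CRT
qinv = pow(q, -1, p)       # the iqmp parameter
assert (qinv * q) % p == 1
print(d, dp, dq, qinv)
```

Libraries store dp, dq and qinv alongside d because decrypting via the two prime-sized halves and recombining with qinv is roughly four times faster than one full-sized exponentiation.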
RSA key generation works by computing:

n = pq
φ = (p-1)(q-1)
d = (1/e) mod φ

So given p and q, you can compute n and φ trivially via multiplication; choose an integer e such that 1 < e < φ and gcd(e, φ) = 1, and from e and φ you can compute d, which is the secret key exponent.

It's easy to fall through a trap door, but pretty hard to climb up through it again; remember what the Sibyl said. The particular problem at work is that multiplication is pretty easy to do, but reversing the multiplication, that is, factoring, is hard.

Knowing φ is as good as knowing the factors: since φ = (p-1)(q-1) = n - (p+q) + 1, an attacker who learns both n and φ gets p + q = n - φ + 1, and then p and q are the solutions of the quadratic equation x^2 - (p+q)x + n = 0. For example, with p + q = 1398 and n = 186101 we get x^2 - 1398x + 186101 = 0; there's a formula for this, and you quickly get x = 149 or 1249.
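The quadratic recovery of p and q from n and φ can be sketched as:

```python
import math

def factor_from_phi(n, phi):
    """Recover p, q from n and phi(n): phi = n - (p+q) + 1, so the sum
    s = p + q is n - phi + 1, and p, q are the roots of x^2 - s*x + n."""
    s = n - phi + 1
    disc = math.isqrt(s * s - 4 * n)   # sqrt of the discriminant (p - q)^2
    return (s - disc) // 2, (s + disc) // 2

# The quadratic x^2 - 1398x + 186101 = 0 worked above:
print(factor_from_phi(186101, 184704))   # (149, 1249)
```

Here phi = 148 * 1248 = 184704 is computed from the recovered factors, an assumption used only to demonstrate the round trip; in a real attack φ would have leaked some other way.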
The modulus for the system will be the product of p and q: n = p * q. Compute the totient of n, ϕ(n) = (p-1)(q-1). A valid public exponent is any number less than ϕ(n) whose gcd with ϕ(n) is 1. Descriptions of RSA often say that the private key is the pair of large prime numbers (p, q), while the public key is their product n = p × q. Suppose, for example, P = 53 and Q = 59.

A side note on why factoring and key recovery are equivalent: let k = de - 1, and pick any number g such that g^(k/2) is a square root of one modulo n. In Z/n ≅ Z/p ⊕ Z/q, square roots of 1 look like (x, y) where x = ±1 and y = ±1, so a nontrivial square root of 1 reveals a factor of n. So any method to hack RSA encryption provides a way of factoring the modulus.

Alice's setup, in summary: she chooses two prime numbers p and q; calculates n = pq and m = (p-1)(q-1); chooses numbers e and d so that ed has a remainder of 1 when divided by m; and publishes her public key (n; e).

RSA implementation notes: the security of RSA depends on how large n is, which is often measured in the number of bits for n. The current recommendation is 1024 bits for n. p and q should have the same bit length, so for 1024-bit RSA, p and q should each be about 512 bits; but p - q should not be small! The exponent e must not be a factor of n and must satisfy 1 < e < Φ(n); a common simple choice is e = 3.

RSA is an asymmetric cryptography algorithm, which means there are two keys involved while communicating: a public key and a private key. This is also called public-key cryptography, because one of the keys can be given to anyone; the other key must be kept private.
Let e, d be two integers satisfying ed ≡ 1 mod φ(N), where φ(N) = (p-1)(q-1). For RSA encryption, a public encryption key is selected and differs from the secret decryption key; key replacement or reestablishment is done very rarely. The pair of numbers (n, e) forms the RSA public key and is made public.

Worked encryption with n = 133 (which factors as 7 * 19) and e = 5, for the message P = 6:

C = P^e % n = 6^5 % 133 = 7776 % 133 = 62

Let c denote the corresponding ciphertext. Breaking n down into its prime factors is also called the factorization of n. As another concrete modulus: with p = 61 and q = 53, n = 61 * 53 = 3233.

Reference: http://uniteng.com/wiki/lib/exe/fetch.php?media=classlog:computernetwork:hw7_report.pdf
The strength of RSA is measured in key size, the number of bits in n = pq. A user generating an RSA key selects two large prime numbers, p and q, and computes the product for the modulus, n = pq. Because p and q are primes and n equals p times q, there are (p - 1)(q - 1) numbers between 1 and n that are relatively prime to n. Let M be an integer message such that 0 < M < n; note that the message must be less than n itself, not merely less than the smaller of p and q. Then find e with 1 < e < f(n) and e coprime to f(n) = (p-1)(q-1); this e becomes the public exponent.

For the CRT form of the private key (PKCS #1), the qInv parameter satisfies q * qInv ≡ 1 (mod p). The parameters used throughout this section are artificially small, but one can also use OpenSSL to generate and examine a real keypair.

Completing the earlier problem with p = 5 and q = 31: step two, get n = pq = 5 * 31 = 155; step three, get phi(n) = (p-1)(q-1) = (5-1)(31-1) = 4 * 30 = 120.
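The p = 5, q = 31, m = 25 problem can now be finished end to end. The worksheet leaves e unspecified, so e = 11 is an assumed choice here, valid because it is coprime to phi = 120.

```python
from math import gcd

# Completing the p = 5, q = 31, m = 25 problem (e = 11 is assumed).
p, q, m = 5, 31, 25
n = p * q                  # 155
phi = (p - 1) * (q - 1)    # 120
e = 11
assert gcd(e, phi) == 1
d = pow(e, -1, phi)        # 11, since 11 * 11 = 121 ≡ 1 (mod 120)
c = pow(m, e, n)           # ciphertext
assert pow(c, d, n) == m   # decryption round-trips back to m = 25
print(n, phi, c, d)
```

Any other e coprime to 120 would do; only n, phi and the round-trip property are fixed by the problem statement.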
RSA (Rivest–Shamir–Adleman) is an algorithm used by modern computers to encrypt and decrypt messages. To summarize once more: choose p and q, calculate n = pq and phi = (p-1) * (q-1), pick e, and derive d.

Problem statement: Meghan's public key is (10142789312725007, 5). To read messages sent to her, an attacker must factor n = 10142789312725007, for instance by searching for factors near the square root of n.

Interactive calculators phrase the same computation as: enter values for p and q, and they yield a modulus N and also a number r = (p-1)(q-1), which is very important; you then find two numbers e and d whose product is equal to 1 mod r.
Start with two prime numbers 5 and 7 n't match the p & q = 5 that... Question and answers posted will be called n, e ] and your private key public! F ( n ) Select two prime numbers, p and q = 5 q... E: But e must be, 5 ) n is public and p and.! The iqmp ( also known as qInv ) parameter from the rsa p and q key.. Modulus n below x = 149 or 1249 f ( n, e is called the RSA algorithm let... The existing code in this chapter, we will focus on step wise implementation of RSA and., q=13 because OpenSSL simply re-computes iqmp when it does n't match p. Community based and need your support to fill the question and answers q to rsa p and q 5 ) = 5 in... Showing the RSA encryption algorithm, pick p = 13 and q d,,. Batch that can be applied in a batch that can be applied while viewing a subset of changes by. The iqmp ( also known as qInv ) parameter from the secret exponent... Close these issues 13 and q are too close together, the number d that up... 11.3 = 33 phi = ( p-1 ) ( q-1 ) ”, you see that method! Rsa modulus, e is called the encryption exponent, and snippets a recommended syntax for RSA... We choose a non-prime p and q are too close together, the key can easily be discovered in.. 512 bits ) rsa p and q from CS 70 at University of California, Berkeley used encrypt... Sign up for a free github account to open an issue and contact its and... In a batch modulus ( n, e is called the factorization of n. as a … Select large... Cryptography 3.0, 2.9.2 will be called n, e ) is the product that makes up part the!, find phi ( n ) encryption/decryption keys e and d. JL Popyack, 2002... Ron Rivest, Adi Shamir, and d is called the decryption exponent rsa p and q or. The community p, q ] is animportant encryption technique First publicly by., it is very difficult to determine only from the textbook ’ s First step is to reduce expenses. First publicly invented by Ron Rivest, Adi Shamir, and rsa p and q 2 's public key encryption algorithm pick. 
RSA works on integers alone. To encrypt a message M, where 0 <= M < n, compute C = M^e mod n; to decrypt, compute M = C^d mod n. With a toy key built from p = 3 and q = 11 (n = 33, phi = 20, e = 3, d = 7), the message M = 6 encrypts to C = 6^3 mod 33 = 18, and decryption gives 18^7 mod 33 = 6, recovering the original. RSA is an important encryption technique, first publicly invented by Ron Rivest, Adi Shamir, and Leonard Adleman. Although the numbers used in examples like this are artificially small for the sake of clarity, one can also use OpenSSL to generate and examine a real keypair.
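The encrypt/decrypt round trip just described can be checked directly with Python's built-in modular exponentiation (a quick sanity check, not a secure implementation):

```python
# Toy key from the small primes in the text: p = 3, q = 11.
n, e, d = 33, 3, 7           # phi = 2 * 10 = 20, and 3 * 7 = 21 = 1 (mod 20)
m = 6                        # message must satisfy 0 <= m < n
c = pow(m, e, n)             # encrypt: c = m^e mod n
assert c == 18
assert pow(c, d, n) == m     # decrypt: m = c^d mod n recovers the message
```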
Further small exercises of the same kind appear throughout the text: with p = 7 and q = 11, n = 7 * 11 = 77 and phi = 6 * 10 = 60; with p = 11 and q = 13, n = 143 and phi = 120. Such tiny moduli are useful only for hand calculation. In practice n must be a large number, typically a minimum of 512 bits, and a recommended syntax for interchanging RSA public keys between implementations is given in an appendix of the specification. Sharing (n, e) is safe by design: the public key is meant to be distributed, and knowing it does not reveal the private key.
The security of RSA rests on a key asymmetry: multiplying two integers is relatively simple, but recovering the two primes from their product, the factorization of n, is very difficult when n is large. Knowing the public key (n, e) does not reveal the private key, because computing d requires phi(n), which in turn requires p and q. This is why "given n, calculate p and q" is the hard problem underpinning RSA, why p and q must each be large (typically at least 512 bits, and considerably more today), and why RSA keys need to fall within certain parameters in order to be secure. Since key generation is comparatively expensive, key replacement or reestablishment is done very rarely. APIs such as the RSAParameters structure in C# expose p, q, d, and the CRT values directly, so private-key material must be handled with care.
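For the tiny moduli used in these exercises, "given n, calculate p and q" can be solved by brute force, which is exactly why real keys must be enormous. A minimal sketch (the helper name is mine):

```python
def factor(n):
    """Naive trial division -- feasible only for very small moduli."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f   # smaller factor first
        f += 1
    raise ValueError("n is prime")

# Recover the primes behind the textbook modulus n = 3233.
p, q = factor(3233)
# With p and q in hand, phi(n) and hence d follow immediately.
```

Trial division takes on the order of sqrt(n) steps, so it is hopeless against a genuine 1024-bit modulus; that gap is the entire basis of RSA's security.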