# Lonely \item error in bibliography .bbl file

I want to create a .bbl file with bibitems so that I can use it with the bibentry package. Below is a single example of how I'm doing this:

    \begin{thebibliography}{1}
    \bibitem{ABC}
    Author Name
    {\em Title}
    2016.
    \end{thebibliography}

In my testFile.tex file I include the testFile.bbl file as follows:

    \documentclass{article}
    \usepackage{bibentry}
    \begin{document}
    \input{testFile.bbl}
    \bibliographystyle{alpha}
    \nobibliography{testFile}
    My citation: \cite{ABC}
    \end{document}

However, when I run my .tex file I obtain the following error:

    Lonely \item--perhaps a missing list environment.

I checked the solution in "latex error lonely item--perhaps a missing list environment in the bibliography", but unfortunately it does not work. I'm using TeXShop on a Mac, but I also tried to compile via the terminal and the error was the same. Can anyone suggest what I might be doing wrong?

• I am a bit confused by the appearance of \bibliographystyle and \nobibliography in your code. Why are they in there? Jun 23, 2016 at 12:36
• @Johannes_B \nobibliography is there because, even though I want to cite, I don't want the list of references at the end of the file. \bibliographystyle defines the style; shouldn't it be there? – Ziva Jun 23, 2016 at 12:47
• The \nobibliography{testFile} is not needed; apart from that, your example works fine. Are you sure that you are really running exactly what you are showing? And what is the name of your main document? Can you show the log file? Jun 23, 2016 at 12:50
• The name of my main document is the same as the bibliography file (testFile.tex). – Ziva Jun 23, 2016 at 12:52
• I ran into the same problem by switching from an old working latex source to xelatex. Are you by any chance running xelatex? Mar 11, 2019 at 9:50

Adding \usepackage{bibentry} fixed the "Lonely \item--perhaps a missing list environment" error in the auto-generated .bbl file when using the \nobibliography{testFile} option.
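For reference, here is a minimal sketch of the two files as the question intends them, with the environment name written without a space; the exact cause on the asker's machine was never confirmed, so treat this as a sanity-check template rather than a definitive fix:

    % testFile.bbl -- note \end{thebibliography} has no space in it
    \begin{thebibliography}{1}

    \bibitem{ABC}
    Author Name.
    {\em Title}.
    2016.

    \end{thebibliography}

    % main document; the comments above suggest naming it differently
    % from testFile.tex so that \nobibliography{testFile} does not
    % point at the document itself
    \documentclass{article}
    \usepackage{bibentry}
    \begin{document}
    \input{testFile.bbl}
    \bibliographystyle{alpha}
    \nobibliography{testFile}
    My citation: \cite{ABC}
    \end{document}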
# On Sample-Based Testers

## Abstract

The standard definition of property testing endows the tester with the ability to make arbitrary queries to "elements" of the tested object. In contrast, sample-based testers only obtain independently distributed elements (a.k.a. labeled samples) of the tested object. While sample-based testers were defined by Goldreich, Goldwasser, and Ron (*JACM* 1998), most research in property testing has focused on query-based testers. In this work, we advance the study of sample-based property testers by providing several general positive results and by revealing relations between variants of this testing model. In particular:

• We show that certain types of query-based testers yield sample-based testers of sublinear sample complexity. For example, this holds for a natural class of proximity-oblivious testers.
• We study the relation between distribution-free sample-based testers and one-sided error sample-based testers w.r.t. the uniform distribution.

While most of this work ignores the time complexity of testing, one part of it does focus on this aspect. The main result in this part is a sublinear-time sample-based tester for $k$-Colorability, for any $k \geq 2$.
Lee and Lemieux (2009, p. 31) suggest that the researcher present graphs when doing a regression discontinuity design (RDD) analysis. They suggest the following procedure:

"...for some bandwidth $h$, and for some number of bins $K_0$ and $K_1$ to the left and right of the cutoff value, respectively, the idea is to construct bins $(b_k, b_{k+1}]$, for $k = 1, \dots, K = K_0 + K_1$, where $b_k = c - (K_0 - k + 1) \cdot h$."

Here $c$ is the cutoff point (threshold value of the assignment variable) and $h$ is the bandwidth (window width).

"...then compare the mean outcomes just to the left and right of the cutoff point..."

"...in all cases, we also show the fitted values from a quartic regression model estimated separately on each side of the cutoff point..." (p. 34 of the same paper)

My question is how to program that procedure in Stata or R for plotting the outcome variable against the assignment variable (with confidence intervals) for the sharp RDD. A sample example in Stata is mentioned here and here (replace rd with rd_obs), and a sample example in R is here. However, I think neither of these implements step 1; note that both have the raw data along with the fitted lines in the plots.

Sample graph without confidence intervals [Lee and Lemieux, 2009]

Thank you in advance.

- In response to your flag, a good way to revive your question is to edit it and offer a bounty: this will bump your question and get more people interested in it. If you feel this question might be better served on Stack Overflow, let us know and we can migrate it for you. – chl Feb 9 '13 at 20:06
- I would like this to be migrated to Stack Overflow. – Metrics Feb 12 '13 at 12:10
- Unfortunately, this question is too old to be migrated to Stack Overflow. I believe it belongs on Cross Validated, but if you want to ask on Stack Overflow (putting emphasis on the programming aspect and providing a minimal reproducible example), let me know and I will close it here. – chl Feb 14 '13 at 10:29
- Thanks chl for the bounty. – Metrics Feb 15 '13 at 2:54
- You should use cmogram. It does everything you need. – Yan Song Apr 7 '13 at 19:49

Is this much different from doing two local polynomials of degree 2, one for below the threshold and one for above, with smoothing at $K_i$ points? Here's an example with Stata:

    use votex // the election-spending data that comes with rd
    tw (scatter lne d, mcolor(gs10) msize(tiny)) (lpolyci lne d if d<0, bw(0.05) deg(2) n(100) fcolor(none)) (lpolyci lne d if d>=0, bw(0.05) deg(2) n(100) fcolor(none)), xline(0) legend(off)

Alternatively, you can just save the lpoly smoothed values and standard errors as variables instead of using twoway. Below, $x$ is the bin, $s$ is the smoothed mean, $se$ is the standard error, and $ul$ and $ll$ are the upper and lower limits of the 95% confidence interval for the smoothed outcome.

    lpoly lne d if d<0, bw(0.05) deg(2) n(100) gen(x0 s0) ci se(se0)
    lpoly lne d if d>=0, bw(0.05) deg(2) n(100) gen(x1 s1) ci se(se1)

    /* Get the 95% CIs */
    forvalues v = 0/1 {
        gen ul`v' = s`v' + 1.95*se`v'
        gen ll`v' = s`v' - 1.95*se`v'
    }

    tw (line ul0 ll0 s0 x0, lcolor(blue blue blue) lpattern(dash dash solid)) (line ul1 ll1 s1 x1, lcolor(red red red) lpattern(dash dash solid)), legend(off)

As you can see, the lines in the first plot are the same as in the second.

- @Dimitry: +1 for the solution. However, I would like to have the mean value for each bin (please run the Stata example above) rather than the scatter plot showing raw values. CI is great. – Metrics Feb 15 '13 at 2:54
- I am not quite sure what you mean.
I added code showing how you get the smoothed means in each bin by hand. If that's not what you are looking for, please explain what you have in mind in more detail. As far as I can tell, these graphs usually show the raw data and the smoothed means. – Dimitriy V. Masterov Feb 15 '13 at 21:34

- To quote Lee and Lemieux (2009, p. 31): "A standard way of graphing the data is to divide the assignment variable (d here) into a number of bins, making sure there are two separate bins on each side of the cutoff point (to avoid having treated and untreated observations mixed together in the same bin). Then, the average value of the outcome variable can be computed for each bin and graphed against the mid-points of the bins". So, if there are 50 bins, then we will have only 25 data points on the left and right, and not all the raw data (e.g., Graph 6(b) of the reference; updated in question). – Metrics Feb 15 '13 at 22:39
- Now it's clear! I agree on the kernel. But are you certain it's not degree 0? That would correspond to equally-weighted mean smoothing. – Dimitriy V. Masterov Feb 15 '13 at 22:51
- I believe that corresponds to lpoly with a regular kernel and a degree 0 polynomial. – Dimitriy V. Masterov Feb 19 '13 at 4:35

Here's a canned algorithm. Calonico, Cattaneo, and Titiunik recently proposed a procedure for robust bandwidth selection. They implemented their theoretical work for both Stata and R, and it also comes with a plot command. Here's an example in R:

    # install.packages("rdrobust")
    library(rdrobust)
    set.seed(26950) # from random.org
    x <- runif(1000, -1, 1)
    y <- 5 + 3*x + 2*(x >= 0) + rnorm(1000)
    rdplot(y, x)

That will give you this graph:
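To implement step 1 itself (binned means with confidence intervals) rather than local polynomial smoothing, here is a minimal sketch in base R with simulated data; the variable names, bin width, and cutoff at zero are illustrative, not from the original post:

    # Sketch of the Lee-Lemieux binned-means plot (hypothetical data).
    set.seed(1)
    d <- runif(1000, -1, 1)                      # assignment variable, cutoff at 0
    y <- 5 + 3 * d + 2 * (d >= 0) + rnorm(1000)  # outcome

    h    <- 0.1                                  # bandwidth / bin width
    bins <- seq(-1, 1, by = h)                   # bin edges aligned so none straddles 0
    mid  <- bins[-length(bins)] + h / 2          # bin mid-points
    idx  <- cut(d, breaks = bins, include.lowest = TRUE)

    m  <- tapply(y, idx, mean)                   # mean outcome per bin
    se <- tapply(y, idx, sd) / sqrt(table(idx))  # standard error per bin

    plot(mid, m, pch = 19,
         xlab = "assignment variable", ylab = "mean outcome")
    arrows(mid, m - 1.96 * se, mid, m + 1.96 * se,
           angle = 90, code = 3, length = 0.03)  # 95% CI bars
    abline(v = 0, lty = 2)                       # cutoff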
## Search Result

### Search Conditions

All years, journal 'PTP', author 'Y.* Kurihara': 14 total.

### Search Results

14 articles were found.

1. Progress of Theoretical Physics Vol. 48 No. 5 (1972) pp. 1758-1759 : (2)
   On a $T$-matrix Theory for Many-Particle System
   Yoshihiro Kuroda and Yasunari Kurihara

2. Progress of Theoretical Physics Vol. 51 No. 4 (1974) pp. 959-972 : (2)
   Theory of Quantum Crystals
   Yasunari Kurihara, Yoshihiro Kuroda and Norikazu Ishimura

3. Progress of Theoretical Physics Vol. 51 No. 6 (1974) pp. 1987-1989 : (2)
   Renormalization Theory of the Peierls Transition in the One-Dimensional Fröhlich Hamiltonian
   Yasunari Kurihara and Yoshikazu Suzumura

4. Progress of Theoretical Physics Vol. 53 No. 5 (1975) pp. 1233-1242 : (2)
   Self-Consistent Theory of Peierls Transition in One-Dimensional Electron-Phonon Systems
   Yoshikazu Suzumura and Yasunari Kurihara

5. Progress of Theoretical Physics Vol. 67 No. 5 (1982) pp. 1483-1494 : (4)
   Particle-Nucleus Scattering Theory with Realistic Two-Body Interactions
   Yukio Kurihara, Yoshinori Akaishi and Hajime Tanaka

6. Progress of Theoretical Physics Vol. 71 No. 3 (1984) pp. 561-568 : (4)
   Central Repulsion of $\Lambda$-$\alpha$ Interaction with Hard-Core $\Lambda$-$N$ Potential
   Yukio Kurihara, Yoshinori Akaishi and Hajime Tanaka

7. Progress of Theoretical Physics Vol. 73 No. 6 (1985) pp. 1455-1470 : (4)
   Nuclear Collision Theory with Many-Body Correlations. I
   Yukio Kurihara

8. Progress of Theoretical Physics Vol. 73 No. 6 (1985) pp. 1471-1484 : (4)
   Nuclear Collision Theory with Many-Body Correlations. II
   Yukio Kurihara

9. Progress of Theoretical Physics Vol. 75 No. 5 (1986) pp. 1196-1203 : (4)
   The Dynamical Origin of Nuclear Mass Number Dependence in EMC-Effect
   INS 232, 233 Collaboration, Yukio Kurihara, Schin Daté, Atsushi Nakamura, Hiroshi Sato, Hiroyuki Sumiyoshi and Koh Yoshinada

10. Progress of Theoretical Physics Vol. 88 No. 1 (1992) pp. 103-110 : (5)
    Vector Boson Pair Productions with a Hard Photon Emission
    Keisuke Fujii, Junpei Fujimoto, Junichi Kanzaki, Yoshimasa Kurihara, Akiya Miyamoto and Toshifumi Tsukamoto

11. Progress of Theoretical Physics Vol. 95 No. 2 (1996) pp. 375-388 : (5)
    Improved QEDPS for Radiative Corrections in $e^{+}e^{-}$ Annihilation
    Tomo Munehisa, Junpei Fujimoto, Yoshimasa Kurihara and Yoshimitsu Shimizu

12. Progress of Theoretical Physics Vol. 96 No. 6 (1996) pp. 1223-1235 : (5)
    Hard Photon Distributions in $e^{+}e^{-}$ Annihilation Process by QEDPS
    Yoshimasa Kurihara, Junpei Fujimoto, Tomo Munehisa and Yoshimitsu Shimizu

13. Progress of Theoretical Physics Vol. 103 No. 3 (2000) pp. 587-612 : (5)
    A QED Shower Including the Next-to-Leading Order Logarithmic Correction in $\boldsymbol{e^+e^-}$ Annihilation
    Tomo Munehisa, Junpei Fujimoto, Yoshimasa Kurihara and Yoshimitsu Shimizu

14. Progress of Theoretical Physics Vol. 103 No. 6 (2000) pp. 1199-1211 : (5)
    QED Radiative Corrections to Non-Annihilation Processes Using the Structure Function and the Parton Shower
    Yoshimasa Kurihara, Junpei Fujimoto, Kiyoshi Kato, Tomo Munehisa, Yoshimitsu Shimizu and Keijiro Tobimatsu
# 3.1.0¶ ## ImageDraw arc, chord and pieslice can now use floats¶ There is no longer a need to ensure that the start and end arguments for arc, chord and pieslice are integers. Note that these numbers are not simply rounded internally, but are actually utilised in the drawing process. ## Consistent multiline text spacing¶ When using the ImageDraw multiline methods, the spacing between lines was inconsistent, based on the combination on ascenders and descenders. This has now been fixed, so that lines are offset by their baselines, not the absolute height of each line. There is also now a default spacing of 4px between lines. ## Exif, Jpeg and Tiff Metadata¶ There were major changes in the TIFF ImageFileDirectory support in Pillow 3.0 that led to a number of regressions. Some of them have been fixed in Pillow 3.1, and some of them have been extended to have different behavior. ### TiffImagePlugin.IFDRational¶ Pillow 3.0 changed rational metadata to use a float. In Pillow 3.1, this has changed to allow the expression of 0/0 as a valid piece of rational metadata to reflect usage in the wild. Rational metadata is now encapsulated in an IFDRational instance. This class extends the Rational class to allow a denominator of 0. It compares as a float or a number, but does allow access to the raw numerator and denominator values through attributes. When used in a ImageFileDirectory_v1, a 2 item tuple is returned of the numerator and denominator, as was done previously. This class should be used when adding a rational value to an ImageFileDirectory for saving to image metadata. ### JpegImagePlugin._getexif¶ In Pillow 3.0, the dictionary returned from the private, experimental, but generally widely used _getexif function changed to reflect the ImageFileDirectory_v2 format, without a fallback to the previous format. In Pillow 3.1, _getexif now returns a dictionary compatible with Pillow 2.9 and earlier, built with ImageFileDirectory_v1 instances. Additionally, any single item tuples have been unwrapped and return a bare element. The format returned by Pillow 3.0 has been abandoned. A more fully featured interface for EXIF is anticipated in a future release.
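A small sketch of how the new rational type behaves, based on the description above; the constructor arguments shown are the numerator and denominator:

    from PIL.TiffImagePlugin import IFDRational

    r = IFDRational(3, 4)
    print(float(r))       # 0.75 -- compares and converts like a float
    print(r.numerator)    # 3 -- raw numerator preserved via attribute
    print(r.denominator)  # 4 -- raw denominator preserved via attribute

    zero = IFDRational(0, 0)  # 0/0 is now representable, as found in the wild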
# Manipulation of molecules on surfaces with the scanning tunnelling microscope

Kaya, Dogan (2016). Manipulation of molecules on surfaces with the scanning tunnelling microscope. University of Birmingham. Ph.D.

Full text not available from this repository.

## Abstract

An experimental study of buckminsterfullerene on the Au(111) surface and of chlorobenzene and oxygen molecules on the Si(111)-7x7 surface has been conducted with variable-temperature and room-temperature (RT) scanning tunnelling microscopes, respectively. First, the formation of hybrid clusters, (C$$_{60}$$)$$_n$$-(Au)$$_m$$, from 110 K to RT was studied at different C$$_{60}$$ coverages. The properties of the hybrid clusters, such as rotation, transformation and diffusion, were observed at RT. Mechanical manipulation of C$$_{60}$$ molecules in the hybrid clusters was performed in order to explore the production of a single type of hybrid cluster. Cascade manipulation was achieved by downsizing (C$$_{60}$$)$$_{14}$$-(Au)$$_{63}$$ clusters to (C$$_{60}$$)$$_{7}$$-(Au)$$_{19}$$ clusters at RT. Manipulation of the hybrid clusters was also performed at 110 K in addition to RT. A comparative study of the non-local manipulation of chlorobenzene and oxygen molecules on the Si(111)-7x7 surface was performed with the RT STM via electrons injected from the STM tip. A suppression region (~40 Å) was found for both molecules, which appears quite universal in these STM experiments. A local desorption threshold of +1.4 V was found for the chlorobenzene molecule. Local manipulation of bright and dark oxygen sites induced six different transformations of the molecular sites.

Type of Work: Thesis (Doctorates > Ph.D.)
Award Type: Doctorates > Ph.D.
Supervisor(s): Palmer, Richard E.; Guo, Quanmin
College/Faculty: Colleges (2008 onwards) > College of Engineering & Physical Sciences
School or Department: School of Physics and Astronomy
Funders: Other
Other Funders: Ministry of National Education, Turkey
Subjects: Q Science > QC Physics
URI: http://etheses.bham.ac.uk/id/eprint/6797
# Friction question

## Recommended Posts

Hi, I'm making a small demo, something like 1942, the top-down plane game. I'm not making the physics completely realistic, but I'm still using the standard equations just to show that I can implement them. So I'm using the following variables, treating them as 2D vectors:

    Displacement     = s
    Initial Velocity = u
    Velocity         = v
    Acceleration     = a
    Time             = t
    Force            = F
    Mass             = m

Say the up key is pressed: the program will apply a force of x in the up direction of the screen to the plane, then use F = ma to get the acceleration, then plug this into s = ut + 1/2at^2 to get the displacement of the plane in the time interval of the frame. The problem lies in how I should apply air resistance or friction to make the plane slow down and stop gradually. As it is, it just keeps going until I press down to put a different force on the plane, or it hits the edge of the screen. All help is greatly appreciated.

Cheers,
Neil

##### Share on other sites

You apply a force in the reverse direction, and then use the net force to get your a. However, I'm not sure it's a good idea to use your second formula; it will behave oddly when the acceleration isn't constant. Friction between surfaces is the normal force times the friction coefficient. Air resistance is a bit more complicated; I don't remember the formula, but you'd still subtract this force from the F you use right now to get acceleration.

##### Share on other sites

Air resistance at high speeds is proportional to velocity² (i.e. the faster you go, the greater the force). You could calculate a resistive force F_R = Av², where A is some constant. However, an easier way would just be to directly modify the velocity each frame, i.e.:

    update(dt) {
      a = F/m
      v = 0.99*v + a*dt
      s = v*dt
    }

##### Share on other sites

So if I added a statement at the force-calculation step, say:

    if (v > 0) {
      Force_Resistance = A*(v^2);
      Force_Resistance *= -1; // to make the force oppose the velocity
    }

then sum up the total forces to get a net force, work out the acceleration from there, and plug it into the velocity/displacement equations.

##### Share on other sites

Yep, that should work. If it doesn't work too well, just fall back on the other method I mentioned.

Thanks, much appreciated.

You're welcome.
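Putting the advice together, here is a minimal sketch in C++ (the struct, names, and constants are illustrative, not from the thread) of a per-frame update with quadratic drag that opposes the full 2D velocity vector, which also avoids the sign problem of a scalar v > 0 test:

    #include <cmath>

    struct Vec2 { float x, y; };

    // Per-frame update with quadratic air resistance (illustrative constants).
    void update(Vec2& pos, Vec2& vel, Vec2 thrust, float mass, float dt) {
        const float A = 0.05f;  // drag coefficient
        float speed = std::sqrt(vel.x * vel.x + vel.y * vel.y);

        Vec2 force = thrust;
        if (speed > 0.0f) {
            // Drag is proportional to v^2 and points opposite the velocity:
            // F_drag = -A * |v|^2 * (v / |v|) = -A * |v| * v
            force.x -= A * speed * vel.x;
            force.y -= A * speed * vel.y;
        }

        // F = m*a, then integrate velocity and position (semi-implicit Euler).
        vel.x += (force.x / mass) * dt;
        vel.y += (force.y / mass) * dt;
        pos.x += vel.x * dt;
        pos.y += vel.y * dt;
    }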
### Top 6 Arxiv Papers Today in Classical Analysis and ODEs

##### #1. A $q$-analogue of the matrix sixth Painlevé system
###### Hiroshi Kawakami
We derive a $q$-analogue of the matrix sixth Painlev\'e system via a connection-preserving deformation of a certain Fuchsian linear $q$-difference system. In specifying the linear $q$-difference system, we utilize the correspondence between linear differential systems and linear $q$-difference systems from the viewpoint of the spectral type. The system of non-linear $q$-difference equations thus obtained can also be regarded as a non-abelian analogue of Jimbo-Sakai's $q$-$P_{\mathrm{VI}}$.
more | pdf | html
###### Tweets
MathPHYPapers: A $q$-analogue of the matrix sixth Painlev\'e system. https://t.co/2FPSVbAyN2
mathCAbot: Hiroshi Kawakami : A $q$-analogue of the matrix sixth Painlevé system https://t.co/wKUB7V6KIi https://t.co/GimMcoaVqk
ysykimura: RT @MathPHYPapers: A $q$-analogue of the matrix sixth Painlev\'e system. https://t.co/2FPSVbAyN2
###### Other stats
Sample Sizes: None. Authors: 1. Total Words: 7847. Unique Words: 1706.

##### #2. A proof of the Veselov Conjecture for segments
###### Antonio J. Duran
In this note, we prove Veselov's conjecture on the zeros of Wronskians whose entries are Hermite polynomials, when the degrees of the polynomials are consecutive positive integers.
more | pdf | html
###### Other stats
Sample Sizes: None. Authors: 1. Total Words: 0. Unique Words: 0.

##### #3. On certain generalizations of one function and related problems
###### Symon Serbenyuk
The present article is devoted to the generalized Salem functions, the generalized shift operator, and certain related problems. A description of further investigations by the author of this article is given. These investigations (in terms of various representations of real numbers) include the generalized Salem functions and generalizations of the Gauss-Kuzmin problem.
more | pdf | html
###### Tweets
mathCAbot: Symon Serbenyuk : On certain generalizations of one function and related problems https://t.co/5vnFmJUMWe https://t.co/DUykGz2soV
###### Other stats
Sample Sizes: [2, 2]. Authors: 1. Total Words: 5649. Unique Words: 1280.

##### #4. A simplified proof of the existence of flat Littlewood polynomials
###### Tamás Erdélyi
Polynomials with coefficients in $\{-1,1\}$ are called Littlewood polynomials. Using special properties of the Rudin-Shapiro polynomials and classical results in approximation theory such as Jackson's Theorem, de la Vall\'ee Poussin sums, Bernstein's inequality, Riesz's Lemma, divided differences, etc., we give a significantly simplified proof of a recent breakthrough result by Balister, Bollob\'as, Morris, Sahasrabudhe, and Tiba stating that there exist absolute constants $\eta_2 > \eta_1 > 0$ and a sequence $(P_n)$ of Littlewood polynomials $P_n$ of degree $n$ such that $$\eta_1 \sqrt{n} \leq |P_n(z)| \leq \eta_2 \sqrt{n}\,, \qquad z \in \mathbb{C}\,, \, \, |z| = 1\,,$$ confirming a conjecture of Littlewood from 1966.
more | pdf | html
###### Tweets
mathCAbot: Tamás Erdélyi : A simplified proof of the existence of flat Littlewood polynomials https://t.co/i1hXlg9cZb https://t.co/INXMfiQuM5
###### Other stats
Sample Sizes: None. Authors: 1. Total Words: 4702. Unique Words: 1223.

##### #5. Hyers-Ulam stability for differential equations and partial differential equations via Gronwall Lemma
###### Daniela Marian, Sorina Anamaria Ciplea, Nicolaie Lungu, Themistocles M. Rassias
In this paper we will study Hyers-Ulam stability for Bernoulli differential equations, Riccati differential equations, and quasilinear partial differential equations of first order, using the Gronwall Lemma, following a method given by Rus.
more | pdf | html
###### Tweets
mathCAbot: Daniela Marian, Sorina Anamaria Ciplea, Nicolaie Lungu, Themistocles M. Rassias : Hyers-Ulam stability for differential equations and partial differential equations via Gronwall Lemma https://t.co/lha5Dz6Eil https://t.co/1XVbBoq0Vc
###### Other stats
Sample Sizes: None. Authors: 4. Total Words: 0. Unique Words: 0.

##### #6. Stability for nonautonomous linear differential systems with infinite delay
###### Teresa Faria
We study the stability of general $n$-dimensional nonautonomous linear differential equations with infinite delays. Delay-independent criteria, as well as criteria depending on the size of some finite delays, are established. In the first situation, the effect of the delays is dominated by non-delayed diagonal negative feedback terms, and sufficient conditions for both the asymptotic and the exponential asymptotic stability of the system are given. In the second case, the stability depends on the size of some bounded diagonal delays and coefficients, although terms with unbounded delay may co-exist. Our results encompass DDEs with discrete and distributed delays, and enhance some recent achievements in the literature.
more | pdf | html
###### Other stats
Sample Sizes: None. Authors: 1. Total Words: 14789. Unique Words: 2372.
Hi Scott, I had a question regarding the sequential numbering Apex example. I am looking to automatically restart the sequence every month, which is not a problem using your example (i.e. changing the year to month in the expression). However, I would also like to add a condition for year, so that the sequence restarts for each month of each year (i.e. my problem is, for example, that Feb 2011's last sequence number is 5, and then in Feb 2012 this becomes 6, where I would like it to be 1). I am wondering what the syntax would be (a sketch follows at the end of this section). Thanks in advance, Lawn.

Normally, the term infinite sequence refers to a sequence that is infinite in one direction, and finite in the other—the sequence has a first element, but no final element. Such a sequence is called a singly infinite sequence or a one-sided infinite sequence when disambiguation is necessary. In contrast, a sequence that is infinite in both directions—i.e. one that has neither a first nor a final element—is called a bi-infinite sequence, two-way infinite sequence, or doubly infinite sequence. A function from the set Z of all integers into a set, such as the sequence of all even integers (…, −4, −2, 0, 2, 4, 6, 8, …), is bi-infinite. This sequence can be denoted $(2n)_{n=-\infty}^{\infty}$.

You may be used to viewing multiple webpages in Firefox/Chrome/IE and switching between them by clicking the corresponding tabs. Office Tab supports similar behaviour: it lets you browse multiple Excel workbooks or Word documents in one Excel or Word window, and switch between them easily by clicking their tabs.

No matter how light the file might be, if you need 1000 units, that means you send a 1000-page file to the printer if you use Data Merge. That is not as efficient as sending one page and 1000 numbers to insert in the print stream, but to do that you need a plugin, which is what the page Bob directed you to is leading up to. I've never used the program Harbs mentioned, but he's a pretty smart guy, and if he says it works well, I'd take a look.

A subsequence of a given sequence is a sequence formed from the given sequence by deleting some of the elements without disturbing the relative positions of the remaining elements. For instance, the sequence of positive even integers (2, 4, 6, …) is a subsequence of the positive integers (1, 2, 3, …). The positions of some elements change when other elements are deleted; however, the relative positions are preserved.

Developing document control and image numbering systems: the document and image numbering or coding system for your application gives you control over the documents. If you are going to image documents, the document database record number and the image record or file number may be the same number. A document and image numbering or coding system can be very simple or sophisticated, depending upon your desires and the needs of a particular case.

If you want to save all that work you just did, click the Save button. As you exit Word, the Building Blocks (the feature Quick Parts and AutoText are grouped under) are saved in your Normal template. If you're really up for a challenge, you could start a whole new discovery template with its own set of Building Blocks like the ones above, then distribute it to your work group so they can get the benefit of your new-found expertise.
If you use the Form Wizard, controls will be named with the field name the control is bound to, but that name can be changed. This trips up a lot of people, because my code samples use a naming convention that is not what is automatically generated. So you just need to make sure you use the correct name for the object. The name is shown in the Name property on the Other tab (not the Caption property). To determine what field in your table the control is bound to, check the ControlSource property. It should be bound to the PONum field.

For whatever reason, AllExperts did not let me post a direct reply to your response re: "Ok, what is the ControlSource of the Fixture Number control? It should be: =cboZone & "-" & Format(Me.FNumber,"000")" and adding "Me.Refresh" to my code (within the last 10 minutes). It just had the "rate this reply". I added the Me.Refresh and corrected the location of the =cboZone code, and it works correctly now.

InDesign allows you to add a page number marker to a master page within the document. The master page functions as a template for every page it's applied to, so the consecutive page numbers appear on every page. InDesign updates the page numbers automatically as you insert, delete and move pages. To add a page number marker to a master page, create a text box on the master page by going to the Type menu and choosing "Insert Special Character," "Markers" and then "Current Page Number."
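Returning to the opening question about restarting the sequence for each month of each year: a hedged Access VBA sketch of the usual DMax pattern. The table and field names (tblOrders, OrderDate, Sequence) are hypothetical placeholders, to be adapted to the actual design:

    ' Next sequence number, restarting every month of every year.
    ' Criteria filter on both Year() and Month() of the date field.
    Dim nextSeq As Long
    nextSeq = Nz(DMax("Sequence", "tblOrders", _
        "Year(OrderDate) = " & Year(Me.OrderDate) & _
        " And Month(OrderDate) = " & Month(Me.OrderDate)), 0) + 1
    Me.Sequence = nextSeq

The Nz wrapper handles the first record of a new month, where DMax returns Null and the sequence should start again at 1.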
This post is supplementary material for an assignment. The assignment is part of the Augmented Machine Learning unit for a Specialised Diploma in Data Science for Business. The aim of this project is to classify whether patients with Community Acquired Pneumonia (CAP) became better after seeing a doctor or became worse despite seeing a doctor. Previously, EDA (part 1, part 2) and feature engineering were done in R. The dataset was then imported into DataRobot for the first round of basic modelling (the aim of the assignment was to use DataRobot). Next, R was used for advanced feature selection in the second round of DataRobot's modelling. This post will outline the results after both rounds of modelling.

# All models

    library(datarobot)
    library(tidyverse)
    theme_set(theme_light())

    ConnectToDataRobot(endpoint = "https://app.datarobot.com/api/v2",
                       token = "API token key")
    ## [1] "Authentication token saved"

    # Saving the relevant project ID
    projects_inDR <- data.frame(name = ListProjects()$projectName,
                                ID = ListProjects()$projectId)
    project_ID <- projects_inDR %>% filter(name == "Classify CAP") %>% pull(ID)

After 2 runs of modelling (basic and with advanced feature selection), a total of 31 models were created.

    # Saving the models
    List_Model3 <- ListModels(project_ID) %>% as.data.frame(simple = F)

    List_Model3 %>%
      select(expandedModel, featurelistName, `Weighted AUC.crossValidation`) %>%
      arrange(-`Weighted AUC.crossValidation`) %>%
      DT::datatable(rownames = F, options = list(searchHighlight = TRUE, paging = T))

## TPR and TNR

Three metrics were used to compare the models, namely AUC, sensitivity/true positive rate (TPR), and specificity/true negative rate (TNR). AUC was the primary metric. TPR and TNR had to be extracted separately. DataRobot determines the TPR and TNR based on the maximized F1 score (though the TPR and TNR can be adjusted based on other F1 scores). There is an indirect way and a direct way to extract the TPR and TNR.

### 1. Indirect

The TPR and TNR can be indirectly calculated by using the values in the confusion matrix.

    # extract all modelId
    model_id <- List_Model3 %>%
      filter(!is.na(`Weighted AUC.crossValidation`)) %>%
      pull(modelId)

However, the API was unable to obtain the confusion matrix.

    (GetModel(project_ID, model_id[1]) %>% GetConfusionChart(source = DataPartition$VALIDATION))
    (GetModel(project_ID, model_id[1]) %>% GetConfusionChart(source = DataPartition$CROSSVALIDATION))
    (GetModel(project_ID, model_id[9]) %>% GetConfusionChart(source = DataPartition$VALIDATION))
    (GetModel(project_ID, model_id[9]) %>% GetConfusionChart(source = DataPartition$CROSSVALIDATION))
    ## [1] "Error: (404) No confusion chart for validation"
    ## [1] "Error: (404) No confusion chart for validation"
    ## [1] "Error: (404) No confusion chart for validation"
    ## [1] "Error: (404) No confusion chart for validation"

### 2. Direct way

The TPR and TNR are printed directly on the Selection Summary under the ROC option in the DataRobot GUI. The same can be accessed via the GetRocCurve function in the API.

    # loop to get all TNR and TPR from the ROC Curve option
    # https://stackoverflow.com/questions/29402528/append-data-frames-together-in-a-for-loop
    TPR_TNR <- data.frame()
    for (i in model_id) {
      selection_summary <- GetModel(project_ID, i) %>%
        GetRocCurve(source = DataPartition$CROSSVALIDATION)
      temp <- selection_summary$rocPoints %>%
        filter(f1Score == max(f1Score)) %>%
        select(truePositiveRate, trueNegativeRate) %>%
        mutate(modelId = i)
      TPR_TNR <- bind_rows(TPR_TNR, temp)
    }

    ## # A tibble: 6 x 3
    ##   truePositiveRate trueNegativeRate modelId
    ##              <dbl>            <dbl> <chr>
    ## 1            0.724            0.903 5f564d6cdf91a0559ffbfb8c
    ## 2            0.621            0.931 5f54de5d269d39249ea468a3
    ## 3            0.770            0.876 5f54de5d269d39249ea468a2
    ## 4            0.649            0.936 5f54de5d269d39249ea468a4
    ## 5            0.787            0.886 5f54de5d269d39249ea468a7
    ## 6            0.736            0.906 5f564d880a859d0d17d859c8

### Updating the main dataframe

TPR_TNR was joined with the main dataframe, List_Model3, so that all metrics are in a common dataframe.

    List_Model3 <- List_Model3 %>%
      left_join(TPR_TNR, by = "modelId") %>%
      # select columns of interest
      select(modelType, expandedModel, modelId, blueprintId, featurelistName,
             featurelistId, `Weighted AUC.crossValidation`,
             TPR = truePositiveRate, TNR = trueNegativeRate) %>%
      mutate(`Weighted AUC.crossValidation` = as.double(`Weighted AUC.crossValidation`))
    ## Warning in mask$eval_all_mutate(dots[[i]]): NAs introduced by coercion

# Results

## Performance of models

The AUC was high and similar for all models. For this study, it was more important to identify as many as possible of the patients who became worse despite seeing a doctor (i.e. true positives). Identifying these patients with poor outcomes would allow better intervention to be provided, increasing their chances of a better clinical evolution. Thus, models with high TPR were preferred. Although there is a trade-off between TPR and TNR, selecting models with higher TPR in this study would not compromise TNR drastically, as TNR was high and similar for all models. TNR was better than TPR for all models due to the imbalanced dataset: the models saw more negative classes during training and were thus better at identifying negative cases during validation/testing.

    List_Model3 %>%
      pivot_longer(cols = c(`Weighted AUC.crossValidation`, TPR, TNR),
                   names_to = "metric") %>%
      ggplot(aes(metric, value, fill = metric)) +
      geom_boxplot(alpha = .5)
    ## Warning: Removed 3 rows containing non-finite values (stat_boxplot).

## Performance of advanced feature selection

Between models generated using advanced feature selection, the AUC was high for both DR Reduced Features and the top 32 mean-feature-impact variables. There was less variability for DR Reduced Features. DR Reduced Features had a higher average TPR but larger variation compared to the top 32 mean-feature-impact variables. A model using DR Reduced Features had the highest TPR.

    List_Model3 %>%
      pivot_longer(cols = c(`Weighted AUC.crossValidation`, TPR, TNR),
                   names_to = "metric") %>%
      ggplot(aes(metric, value, fill = featurelistName)) +
      geom_boxplot(alpha = .5)
    ## Warning: Removed 3 rows containing non-finite values (stat_boxplot).

## Selecting a model for deployment

Out of the 31 models, a handful were selected to be compared against the holdout set to determine the most appropriate model for deployment. The models had to have a high AUC and a high TPR (as shared above, TPR is prioritised over TNR).
    deploy_all <- List_Model3 %>%
      mutate(AUC_rank = min_rank(desc(`Weighted AUC.crossValidation`)),
             TPR_rank = min_rank(desc(TPR)),  # min_rank assigns the same rank to ties
             TNR_rank = min_rank(desc(TNR))) %>%
      select(modelType, featurelistName, `Weighted AUC.crossValidation`, AUC_rank,
             TPR, TPR_rank, TNR, TNR_rank, modelId, blueprintId) %>%
      mutate(across(.cols = c(`Weighted AUC.crossValidation`, TPR, TNR),
                    .fns = ~round(.x, 5)))

    deploy_all %>% select(-c(modelId, blueprintId))
    ## # A tibble: 31 x 8
    ##    modelType featurelistName `Weighted AUC.c~ AUC_rank   TPR TPR_rank   TNR
    ##    <chr>     <chr>                      <dbl>    <int> <dbl>    <int> <dbl>
    ##  1 eXtreme ~ DR Reduced Fea~           NA           NA NA          NA NA
    ##  2 Keras Sl~ DR Reduced Fea~            0.909       19  0.724      11  0.903
    ##  3 RuleFit ~ L1                         0.905       21  0.621      24  0.931
    ##  4 Keras Sl~ L1                         0.908       20  0.770       4  0.876
    ##  5 eXtreme ~ L1                         0.927        4  0.649      21  0.936
    ##  6 RandomFo~ L1                         0.919       12  0.787       3  0.886
    ##  7 Generali~ mean ft impact             0.923        8  0.736       6  0.906
    ##  8 Generali~ L1                         0.920       11  0.724      11  0.903
    ##  9 AVG Blen~ Multiple featu~            0.929        2  0.638      22  0.942
    ## 10 RandomFo~ mean ft impact             0.914       15  0.695      15  0.913
    ## # ... with 21 more rows, and 1 more variable: TNR_rank <int>

The 5 models were:

1. The model with the highest AUC, which also had the highest TPR
2. The model with the 2nd highest AUC
3. The model with the 2nd highest TPR
4. A model with a relatively balanced, high AUC and TPR (7th for AUC and 4th for TPR)
5. Another model with a relatively balanced, high AUC and TPR (5th for AUC and 6th for TPR)

    (deploy_all %>%
       select(-c(modelId, blueprintId)) %>%
       filter(AUC_rank %in% c(1,2,7,5) | TPR_rank == 2)) %>%
      DT::datatable(rownames = F, options = list(searchHighlight = TRUE, paging = T))

    # model ID for the 5 candidates
    deploy_modelsId <- deploy_all %>%
      filter(AUC_rank %in% c(1,2,7,5) | TPR_rank == 2) %>%
      pull(modelId)

The holdout dataset was unlocked, and the performance of these 5 models on the holdout set was compared.

    # extract model and holdout AUC
    deploy_models <- ListModels(project_ID) %>% as.data.frame(simple = F)
    deploy_models <- deploy_models %>%
      select(modelType, expandedModel, featurelistName, `Weighted AUC.holdout`,
             modelId, blueprintId) %>%
      filter(modelId %in% deploy_modelsId)

    # extract holdout TPR and TNR
    TPR_TNR_holdout <- data.frame()
    for (i in deploy_modelsId) {
      selection_summary <- GetModel(project_ID, i) %>%
        GetRocCurve(source = DataPartition$HOLDOUT)
      temp <- selection_summary$rocPoints %>%
        filter(f1Score == max(f1Score)) %>%
        select(truePositiveRate, trueNegativeRate) %>%
        mutate(modelId = i)
      TPR_TNR_holdout <- bind_rows(TPR_TNR_holdout, temp)
    }

    # join all metrics
    deploy_results <- left_join(deploy_models, TPR_TNR_holdout, by = "modelId") %>%
      distinct(modelId, .keep_all = T) %>%
      rename(TPR = truePositiveRate, TNR = trueNegativeRate)

Based on the holdout metrics, Light Gradient Boosted Trees with early stopping (Light GBT) was selected as the model for deployment. Although Light GBT did not have the highest holdout AUC, its AUC of 0.94033 was still close to the maximum AUC of 0.95695. The merits of Light GBT were its high TPR and TNR, with the smallest difference between the two. Moreover, it had the highest TPR, which is valued for this classification problem.
    # plot
    (deploy_results %>%
       unite("model", c(modelType, featurelistName), sep = ":") %>%
       pivot_longer(cols = c(`Weighted AUC.holdout`, TPR, TNR),
                    names_to = "metric") %>%
       ggplot(aes(model, value, colour = metric)) +
       geom_point(alpha = .7) +
       coord_flip() +
       labs(x = "", title = "Metrics of Potential Models for Deployment",
            subtitle = "tested on holdout set") +
       scale_x_discrete(labels = scales::wrap_format(55)))
    # https://stackoverflow.com/questions/21878974/auto-wrapping-of-labels-via-labeller-label-wrap-in-ggplot2

### LightGBT

Description of the deployed Light GBT:

    # modelId of Light GBT
    LightGBT_Id <- deploy_results %>%
      filter(str_detect(modelType, "Light")) %>%
      pull(modelId)

    # blueprint
    LightGBT_blueprint <- GetModelBlueprintDocumentation(project_ID, LightGBT_Id) %>%
      data.frame()

    (glue::glue("Description of the deployed model:: \n{LightGBT_blueprint$description}"))
    ## Description of the deployed model::
    ## The Classifier uses the LightGBM implementation of Gradient Boosted Trees.
    ## LightGBM is a gradient boosting framework designed to be distributed and efficient with the following advantages:
    ## Gradient Boosting Machines (or Gradient Boosted Trees, depending on who you ask to explain the acronym 'GBM') are a cutting-edge algorithm for fitting extremely accurate predictive models. GBMs have won a number of recent predictive modeling competitions and are considered by many Data Scientists to be the most versatile and useful predictive modeling algorithm. GBMs require very little preprocessing, elegantly handle missing data, strike a good balance between bias and variance, and are typically able to find complicated interaction terms, which makes them a useful "swiss army knife" of predictive models.
    ## GBMs are a generalization of Freund and Schapire's adaboost algorithm (1995) to handle arbitrary loss functions. They are very similar in concept to random forests, in that they fit individual decision trees to random re-samples of the input data, where each tree sees a bootstrap sample of the rows of the dataset and N arbitrarily chosen columns, where N is a configurable parameter of the model. GBMs differ from random forests in a single major aspect: rather than fitting the trees in parallel, the GBM fits each successive tree to the residual errors from all the previous trees combined. This is advantageous, as the model focuses each iteration on the examples that are most difficult to predict (and therefore most useful to get correct).
    ## Due to their iterative nature, GBMs are almost guaranteed to overfit the training data, given enough iterations. The two critical parameters of the algorithm, therefore, are the learning rate (how fast the model fits the data) and the number of trees the model is allowed to fit. It is critical to cross-validate one of these two parameters.
    ## Early stopping is used to determine the best number of trees where overfitting begins. In this manner GBMs are usually capable of squeezing every last bit of information out of the training set and producing the model with the highest possible accuracy.
Hyper-parameters which were tuned:

### Tuned values

    # list within a list
    para <- GetModel(project_ID, LightGBT_Id) %>% GetTuningParameters()

    # option 1 to convert to a dataframe
    para_df <- summary(para)

    # option 2 to convert to a dataframe
    #paraName <- list()
    #value <- list()
    #for (i in 1:18) {
    #  paraName_temp <- para[["tuningParameters"]][[i]][["parameterName"]] %>% data.frame()
    #  paraName[[i]] <- paraName_temp
    #  value_temp <- para[["tuningParameters"]][[i]][["currentValue"]] %>% data.frame()
    #  value[[i]] <- value_temp
    #}
    #paraName <- bind_rows(paraName)
    #value_df <- data.frame()
    #for (i in 1:length(value)) {
    #  temp <- value[[i]] %>% data.frame()
    #  value_df <- rbind(temp, value_df)
    #}
    #bind_cols(paraName, value_df)

### Parameter descriptions

    para_describe <- LightGBT_blueprint %>%
      # add a numeric ID to parameters without one (e.g. parameters.type -> parameters.type.0)
      rename_with(.fn = ~paste0(.x, ".0"), .cols = matches("parameter(.*)[^0-9]$")) %>%
      # select only the parameter variables
      select(starts_with("parameter")) %>%
      # parameters.type.0 -> type.0
      rename_with(.fn = ~str_replace(.x, "parameters.", "")) %>%
      # type.0 -> type_parameter0 ## need [] to encapsulate the full stop
      rename_with(.fn = ~str_replace(.x, "[.]", "_parameter")) %>%
      # https://tidyr.tidyverse.org/articles/pivot.html#multiple-observations-per-row-1
      pivot_longer(cols = everything(),
                   names_to = c(".value", "parameter"),
                   names_sep = "_") %>%
      # parameter10 -> 10 ## separate at the alphabetic part, convert to integer
      separate(parameter, into = c(NA, "para"), sep = "[a-z]+", convert = T) %>%
      # remove parentheses and abbreviation from the name
      mutate(name = str_remove(name, "\\(.*\\)") %>% str_trim())

There is an extra tuned parameter in para_df, namely the type of classification. This extra parameter was ignored.

    anti_join(para_df, para_describe, by = "name")
    ## # A tibble: 1 x 4
    ##   name      current default constraint
    ##   <chr>     <chr>   <chr>   <chr>
    ## 1 objective binary  binary  select from: binary, multiclass, multiclassova

Description of the hyper-parameters and the tuned values:

    left_join(para_describe, para_df, by = "name") %>%
      select(parameter = name, description, limits = constraint,
             default_value = default, tuned_values = current) %>%
      DT::datatable(rownames = F, options = list(searchHighlight = TRUE, paging = T))

## Reviewing the deployed model

Light GBT was deployed and used to make predictions on unseen data. An unseen dataset was reserved earlier to evaluate the deployed model's performance in the real world. The AUC was 0.926 for the unseen data, which is close to the AUC for the holdout set (0.94), suggesting that Light GBT is neither overfitting nor underfitting the unseen data. The frequent interlacing of the Predicted and Actual lines in the lift chart also shows that Light GBT is neither underestimating nor overestimating.

The sensitivity of 0.759 was not as high as the sensitivity of 0.861 obtained when predicting the holdout set. The smaller sensitivity is likely due to the low absolute number of positive cases in the unseen data. Inspecting purely by numbers, identifying 22/29 positive cases appears rather impressive; however, when the relative proportion is calculated, it appears less remarkable.

The powerful predictive properties of machine learning can determine which patients with CAP will deteriorate despite receiving medical treatment. According to the cumulative lift chart, when Light GBT was used to predict the unseen data, the top 20% of predictions, ranked by the model's probability of worse CAP outcomes, contained almost 4 times more cases than no modelling.
A CAP watch list targeting 20% of the patients based on Light GBT can expect to contain almost 4 times more patients with poorer outcomes than a random sample of patients selected without a model. Doctors can pay closer attention to the patients on this watch list, as they are almost 4 times more likely to have a worse clinical evolution. According to the cumulative gain chart, such a watch list can expect to capture almost 80% of the patients with worse outcomes using the top 20% of predictions from Light GBT.

# Thoughts on DataRobot

Automated machine learning (AutoML) platforms like DataRobot reduce the amount of manual input in a data science project and democratize data science for less data-literate clinicians. Nonetheless, there are pitfalls to AutoML, as manual review is still required to enhance the models and to validate that the variables driving the predictions make clinical sense. As seen in this series, an augmented machine learning approach that integrates domain knowledge and the context of the data when using AutoML platforms is more appropriate, especially in healthcare, where human life is at stake.

For functions which are available both in the DataRobot GUI and via the R API, the ease of using the mouse makes the GUI more convenient. However, without documented code, the project is harder to reproduce, and the option of using either the GUI or the API may create confusion within the team about which steps are done where. Additionally, some functions are not available via the API: while there is an API function to predict on the unseen dataset, there is no function to report the performance on the unseen dataset, which is why the cumulative lift and gain charts above are screenshots.

# Future work

1. Deploying the model. However, deployment is not an option with the student's version of DataRobot.
2. Visualizing DataRobot's results in Tableau. However, this requires the deployment option.
3. Running customized models in DataRobot. However, this requires the model registry, which is not an option with the student's version of DataRobot.
4. Modelling the entire project in either R's tidymodels or Python's sklearn.
# Exercise 5 (LO 3) Stock Dividends and Splits

Prepare appropriate general journal entries and memo notations, as needed, for each of the following transactions of Rockmaney Co.:

Feb. 3    Declared a 15% stock dividend to common shareholders. The market value of the common stock is $14 per share. The par value is $10. There are 80,000 shares of common stock outstanding.

Feb. 24   Issued 12,000 shares of common stock in settlement of the stock dividend declared on February 3.

July 1    Declared a 2-for-1 stock split.

GENERAL JOURNAL (blank form with columns for Date, Description, Post. Ref., Debit and Credit; lines 1-11)
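For reference, a hedged sketch of the arithmetic and one conventional way to record these transactions, assuming the 15% dividend is treated as a small stock dividend capitalized at market value; account titles are illustrative and should be matched to the chart of accounts in use:

    Feb. 3   Retained Earnings (12,000 shares x $14)       168,000
                 Stock Dividends Distributable (x $10 par)         120,000
                 Paid-In Capital in Excess of Par                   48,000
             (15% x 80,000 = 12,000 shares declared)

    Feb. 24  Stock Dividends Distributable                 120,000
                 Common Stock                                      120,000

    July 1   Memo notation only: 2-for-1 stock split declared; shares
             outstanding double from 92,000 to 184,000 and the par value
             is halved from $10 to $5. No journal entry is required.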
## Cryptology ePrint Archive: Report 2018/896

Proofs of Ignorance and Applications to 2-Message Witness Hiding

Apoorvaa Deshpande and Yael Kalai

Abstract: We consider the following paradoxical question: can one prove lack of knowledge? We define the notion of 'proofs of ignorance', construct such proofs, and use these proofs to construct a 2-message witness hiding protocol for all of NP.

More specifically, we define a proof of ignorance (PoI) with respect to any language L in NP and distribution D over instances in L. Loosely speaking, such a proof system allows a prover to generate an instance x according to D along with a proof that she does not know a witness corresponding to x. We construct a PoI protocol for any random self-reducible NP language L that is hard on average. Our PoI protocol is non-interactive, assuming the existence of a common reference string.

We use such a PoI protocol to construct a 2-message witness hiding protocol for NP with adaptive soundness. Constructing a 2-message WH protocol for all of NP has been a long-standing open problem. We construct our witness hiding protocol using the following ingredients (where T is any super-polynomial function in the security parameter):

1. a T-secure PoI protocol,
2. T-secure non-interactive witness indistinguishable (NIWI) proofs,
3. T-secure rerandomizable encryption with strong KDM security with bounded auxiliary input,

where the first two ingredients can be constructed based on the $T$-security of DLIN.

At the heart of our witness-hiding proof is a new non-black-box technique. As opposed to previous works, we do not apply an efficiently computable function to the code of the cheating verifier; rather, we resort to a form of case analysis and show that the prover's message can be simulated in both cases, without knowing in which case we reside.

Category / Keywords: cryptographic protocols / Witness Hiding, Witness Indistinguishability, Zero Knowledge
Date: received 23 Sep 2018, last revised 2 Mar 2019
Contact author: apoorvaa_deshpande at brown edu
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2018/896

[ Cryptology ePrint archive ]
# 2018 AMC 8 Problems/Problem 17

## Problem 17

Bella begins to walk from her house toward her friend Ella's house. At the same time, Ella begins to ride her bicycle toward Bella's house. They each maintain a constant speed, and Ella rides 5 times as fast as Bella walks. The distance between their houses is $2$ miles, which is $10,560$ feet, and Bella covers $2 \tfrac{1}{2}$ feet with each step. How many steps will Bella take by the time she meets Ella?

$\textbf{(A) }704\qquad\textbf{(B) }845\qquad\textbf{(C) }1056\qquad\textbf{(D) }1760\qquad \textbf{(E) }3520$

## Solution 1

Since Ella rides 5 times as fast as Bella, Ella covers $\frac{25}{2} = 12\tfrac{1}{2}$ feet in the time it takes Bella to take one $2\tfrac{1}{2}$-foot step. Together, they move $15$ feet toward each other for each step Bella takes. Dividing $10560$ by $15$ gives the number of steps Bella takes: $\boxed{\textbf{(A) }704}$.

## Solution 2

Since Ella rides 5 times faster than Bella, the ratio of their speeds is 5:1. This means that Bella travels $\frac{1}{6}$ of the way, and $\frac{1}{6}$ of 10560 feet is 1760 feet. Bella walks 2.5 feet per step, and 1760 divided by 2.5 is $\boxed{\textbf{(A) }704}$.

## Solution 3

Measure time in Bella's steps, so that Bella's speed is $2.5$ feet per step and Ella's speed is $12.5$ feet per step. Let these speeds be the slopes of two equations, one for Bella and one for Ella, where $s$ is the number of steps Bella has taken and $d$ is the distance from Bella's house. Since we are finding where they meet, we solve for the intersection of the two equations:
$$d = -12.5s + 10560$$
$$d = 2.5s$$
Subtracting one equation from the other gives:
$$-15s + 10560 = 0$$
Solving that equation grants us:
$$s = 704$$
Since we are finding the number of steps they take until they meet each other, we can stop there. The solution is $\boxed{\textbf{(A) }704}$.
# jQuery: Novice to Ninja - P9

No matter what kind of ninja you are—a cooking ninja, a corporate lawyer ninja, or an actual ninja ninja—virtuosity lies in first mastering the basic tools of the trade. Once conquered, it's then up to the full-fledged ninja to apply that knowledge in creative and inventive ways.

## Images and Slideshows

...situation where your code fails to do what you expect it to. Figuring out exactly what's going on at any given moment in your code can be frustrating. Sometimes you need to know if a certain function is being called, or what the value of a variable is at a specific point in time. Traditionally, this sort of debugging is often achieved with the trusty old alert method. For example, if you need to know what value the code has stored in the top variable, you type alert(top);. But this interrupts the flow of the program—and forces you to close the alert before continuing. And if the code you're interested in is in the middle of a loop, you might wind up having to close a lot of alerts.

Thankfully, web development tools are constantly advancing, and if you use the excellent Firebug plugin for Firefox (introduced back in Chapter 2), you can take advantage of the built-in debugging options. One of Firebug's most handy features is the console, where instead of alerting the value of variables, you can use the command console.log:

    chapter_04/01_lightbox/script.js (excerpt)
    console.log(top,left);

Just open the Console tab of Firebug (you may need to enable it first), and you'll see the values displayed. No more annoying alert windows! You can specify as many variables or expressions as you would like in a single statement by separating them with commas. The outputs generated by different types of log statements are depicted in Figure 4.2: two simple string outputs, a multivariable output consisting of two numbers, and a jQuery selection.

Figure 4.2. The Firebug console

If the variable is a JavaScript object, you can even click on it in the console to examine its contents. If it is a DOM node or jQuery object, clicking on it will highlight it on the page and jump to it in the Firebug DOM tree. This will save your sanity when you're stuck on those obnoxious bugs! Just remember to remove any console.log lines from your code when you release it.

## ColorBox: A Lightbox Plugin

Our custom lightbox is a fine solution for our modest needs, but you'll have to admit that it's fairly limited as far as features go. Sometimes you'll need more. The principal contender for "more" for quite some time has been Cody Lindley's ThickBox. ThickBox has certainly fought the big fights, but like all true champions, you have to know when it's time to step out of the ring and hang up the gloves. ThickBox is still a powerful plugin and suits many developers despite the fact that it's no longer maintained. It did what it did, and did it well. It's precisely that level of quality that has set the bar high for a new generation of lightbox plugins. Let's take a look at one of the big challengers: ColorBox.
ColorBox is the brainchild of Jack Moore, and with an array of public methods and event hooks—and a staggering 37 options to choose from—it's likely that even seasoned users won't touch on everything it has to offer. Given ColorBox's focus on standards-based XHTML, reliance on CSS for styling, and wide support of content options, it's easy to see that the "lightweight" tag line on its web page refers only to its tiny 9KB footprint—and not to its huge feature set!

Grab ColorBox from the download area of the web site (http://colorpowered.com/colorbox/) and examine its contents. There's a directory called ColorBox that contains both the minified and uncompressed versions of the plugin code. As usual, you should use the minified version unless you're keen to understand the inner workings of ColorBox. Also included in the download are a number of example directories; the examples all use the same markup and JavaScript code, but show how the lightbox can be styled to look completely different. The best way to start out is to have a look at the examples and choose the CSS file (and corresponding images) that you like best, and then build on that for your implementation.

We've copied over the CSS and image files from one of the example directories, and included both that CSS file and the minified plugin file in our HTML:

    chapter_04/02_colorbox_plugin/index.html (excerpt)

ColorBox can work on a single image as we did in the previous section, but it excels at displaying slideshow-style galleries—letting the user move between the images, as illustrated in Figure 4.3. To take advantage of this we need to group the images we want to show, and ColorBox expects us to do this with the rel attribute of our links.

Figure 4.3. A styled gallery using the ColorBox plugin

In the markup, we've included rel="celeb" on all of the images we want to group together. Now we can use the jQuery attribute selector to find those images: a[rel="celeb"]. Calling the colorbox method on the selection gives us a fantastic-looking lightbox:

    chapter_04/02_colorbox_plugin/script.js (excerpt)
    $(document).ready(function() {
      $('a[rel="celeb"]').colorbox();
    });

It looks and works brilliantly by default, but there are stacks and stacks of options to play around with. In the following example we give it a fading transition, rather than the default elastic resize (the speed option, as you might have guessed, specifies the duration of the fade). To suit the StarTrackr! style, we'll also customize the wording of the lightbox text. This is just the tip of the iceberg, though—poke around on the ColorBox site to explore all the other options and events available for customizing the lightbox:

    chapter_04/02_colorbox_plugin/script.js (excerpt)
    $('a[rel=celeb]').colorbox({
      transition: 'fade',
      speed: 500,
      current: "{current} of {total} celebrity photos"
    });

What's great about ColorBox is that it's highly unobtrusive and customizable: you can alter behavior settings, add callbacks, and use event hooks without modifying your markup or the plugin's source files. ColorBox preloads any required images—and can even start preloading your gallery images—so it always appears snappy on your pages. And last, but by no means least, ColorBox is released under the permissive MIT License (http://creativecommons.org/licenses/MIT/)—so you can use it in your commercial projects as you see fit.
## Cropping Images with Jcrop

While we're looking at mature and excellent plugins and lightbox effects, we'd be remiss if we skipped over the Jcrop plugin [3] for defining regions of an image. The plugin adds a lightbox-style overlay on an image and lets the user drag a rectangle to select a required area of an image. This functionality is common on many large web sites, where it allows users to crop an uploaded image for their profile picture.

If you know a little about image manipulation on the Web, you're likely to know that image manipulation of this sort usually takes place on the server side. Right? Yes, that's correct—the Jcrop plugin doesn't actually crop images; it provides an intuitive interface for defining the bounding edges where the user would like to crop an image. The results returned from the plugin can then be fed to the server to perform the actual image manipulation. You can see an image being cropped with Jcrop in Figure 4.4.

Figure 4.4. The Jcrop plugin in action

The typical workflow for using the Jcrop plugin would be to display an image to the user that needs to be cropped (either a stored image or a freshly uploaded one), and overlay the Jcrop interface. When the user has made their selection the coordinates are posted to the server, where the resulting image is created and saved for display or download.

To apply the Jcrop interaction, you first need to download it and extract the files. Contained in the download bundle is the Jcrop JavaScript file, a small CSS file, a clever animated GIF (that's responsible for the "moving lines" effect when you select a region), and some demo pages that highlight all of Jcrop's features.

You'll need to include the CSS (at the top of the page) and JavaScript (at the bottom of the page) files. The Jcrop.gif image should be in the same directory as your CSS file:

chapter_04/03_jcrop/index.html (excerpt)

(The link and script tags of this listing were lost in extraction.)

Once everything is in place, you just need to add an image that you'd like to make selectable to the page. We've given the image an ID so that it's nice and easy to select with jQuery. If you want the user to signal that they're happy with their selection, you can add a clickable button too:

chapter_04/03_jcrop/index.html (excerpt)

(This listing was also lost in extraction; from the code that follows, it evidently contained an img element with the ID mofat, plus a button inside a container with the ID crop.)

In its simplest form, you just have to apply the jQuery plugin to the image. When you reload the page, the image will be augmented with draggable handles and an overlay:

$('#mofat').Jcrop();

The plugin exposes a couple of useful events that you can use to keep an eye on what the user is selecting. It also has a handful of default options for customizing how the selector works. You can restrict the aspect ratio of the crop area and the minimum and maximum selection sizes, as well as the color and opacity of the background overlay:

var jcrop = $('#mofat').Jcrop({
  setSelect: [10,10,300,350],
  minSize: [50,50],
  onChange: function(coords) {
    // use the coordinates
  },
  onSelect: function(coords) {
    // use the coordinates
  }
});

Here we've included some default properties. setSelect allows us to define a default cropping area; we need to pass it an array of coordinates, in the format [x1, y1, x2, y2]. The minSize option is an array containing the selection's minimum width and height. We've also illustrated how you'd capture the onChange and onSelect events.
The onChange event will fire many times as the user is dragging the handles or the selection around the image. The onSelect event, on the other hand, will only fire when a selection has been defined; that is, when the user has stopped dragging. The handlers for the events receive a coordinates object that contains the x, y, x2, y2, w, and h properties. So, in your handler code, you'd write coords.w to obtain the current selection's width.

By far the most common use for the Jcrop plugin is to define points to send to the server after the user is done selecting. The events that the plugin fires are of no use to us for this purpose, as we have no way of knowing if the user is really finished selecting—that's why we added a button! We want to know where the selection is when the user clicks the button.

In order to do this, we'll need to modify our original code a little. When you call Jcrop on a jQuery object as we did above, the jQuery object is returned, ready to be chained into more jQuery methods. However, this gives us no access to the selection coordinates. In order to grab these, we'll need to call Jcrop differently, directly from $. When called in this way, it will return a special Jcrop object, which has properties and methods for accessing the selected coordinates (as well as modifying the selection programmatically). We need to pass it both a selector for the image to crop, and the set of options:

chapter_04/03_jcrop/script.js (excerpt)

var jcrop = $.Jcrop('#mofat', {
  setSelect: [10,10,300,350],
  minSize: [50,50]
});
$('#crop :button').click(function() {
  var selection = jcrop.tellSelect();
  alert('selected size: ' + selection.w + 'x' + selection.h);
})

We're using the tellSelect method to obtain the current selection; this has the same properties as the event coordinates, so we can use them to send to the server and chop up our picture! In the absence of a server, we've chosen to simply alert them, to let you know what's going on. Jcrop has a vast array of available options and methods, so it's strongly recommended that you inspect the demos included in the plugin download to see what's available.

## Slideshows

Every night, customers of the StarTrackr! site use the location information they purchase to hunt down and photograph the world's social elite. Many of the photos are posted back to the web site, and the client wants to feature some of them on the home page. We're increasingly comfortable with jQuery, so we've told our client we'd mock up a few different slideshow ideas for him. First we'll look at some ways of cross-fading images; that is, fading an image out while another is fading in. Then we'll look at a few scrolling galleries, and finally a more sophisticated flip-book style gallery. Along the way we'll pick up a bunch of new jQuery tricks!

Cross-fading Slideshows

If you work in television, you'll know that unless you're George Lucas, the only transition effect they'll let you use is the cross-fade (aka the dissolve). The reason for this is that slides, starbursts, and swirl transitions nearly always look tacky. This also applies outside the realm of television; just think back to the last PowerPoint presentation you saw. There are different techniques for cross-fading images on the Web—all with pros and cons, mostly boiling down to simplicity versus functionality. We'll cover some of the main methods used to cross-fade items, so that you have a selection to choose from when necessary.
Rollover Fader The first cross-fader we’ll have a look at is a rudimentary rollover fader; it’s much like the hover effects we’ve already looked at, except this time we’ll perform a gradual fade between the two states. First, we need to tackle the problem of where Licensed to [email protected] and how to store the hover image. This solution works by putting both of our images into a span (or whatever container you like). The hover image is positioned on top of the first image and hidden until the user mouses over it; then the hidden image fades in. To start, we set up our rollover container: chapter_04/04_rollover_fade/index.html (excerpt) To hide the hover image, we employ the usual position and display properties: chapter_04/04_rollover_fade/style.css (excerpt) #fader { position: relative; } #fader .to { display: none; position: absolute; left: 0; } 10. 106 jQuery: Novice to Ninja We now have something juicy to attach our hover event handler to. Knowing that we have two images trapped inside the container, we can access them with the :eq filter: image 0 is our visible image, and image 1 is our hover image. There’s More than One Way to Select a Cat We’ve used this method primarily to highlight the :eq selector attribute. There are several other ways we could’ve accessed the two images inside the container: by using the :first and :last filters, the corresponding .eq, .last, or .first actions, the child (>) selector, or simply a class name. There are usually multiple ways to accomplish tasks with jQuery, and the choice often boils down to personal preference. Licensed to [email protected] Here’s the code we’ll use to perform the rollover: chapter_04/04_rollover_fade/script.js (excerpt) $('#fader').hover(function() {$(this).find('img:eq(1)').stop(true,true).fadeIn(); }, function() { $(this).find('img:eq(1)').fadeOut(); }) There’s nothing new to use here—except that we’re using the advanced version of the stop command (which we first saw in the section called “Animated Navigation” in Chapter 3). We’re specifying true for both clearQueue and gotoEnd, so our fade animation will immediately stop any other queued animations and jump straight to where it was headed (in this case, it will jump straight to the fully faded-out state, so we can fade it back in). This prevents animations from backing up if you mouse over and out quickly. You’d probably be thinking of using this effect for navigation buttons—which is a good idea! Another consideration, though, is adding the hover image as the link’s :hover state background image in CSS too. That way, your rollover will function as a traditional hover button for those without JavaScript. JavaScript Timers The Web is an event-driven environment. Elements mostly just sit on the page waiting patiently for a user to come along and click or scroll, select or submit. When 11. Images and Slideshows 107 they do, our code can spring to life and carry out our wishes. But there are times when we want to avoid waiting for the user to act, and want to perform a task with a regular frequency. This will be the case with the next few slideshows we’re going to build: we want to rotate automatically through a series of images, displaying a new one every few seconds. Unlike many other areas of the library, there’s been no need for jQuery to expand on JavaScript’s timer functionality; the basic JavaScript methods are simple, flexible, and work across browsers. There are two of them: setTimeout and setInterval. 
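Backtracking for a moment, before the timer functions are described in detail: here is the promised reconstruction of the rollover-fader markup that was lost above. The container ID fader and the class to are inferred from the CSS excerpt; the image file names are assumptions.

```html
<!-- Reconstructed sketch of chapter_04/04_rollover_fade/index.html -->
<span id="fader">
  <img src="images/button.png" alt="button" />
  <img class="to" src="images/button_hover.png" alt="button (hover state)" />
</span>
```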
The timer functions both work the same way: they wait a certain amount of time before executing the code we give them. The syntax used to call them is also much the same (the placeholders were stripped in extraction; each call takes a function and a delay in milliseconds):

setTimeout(<function>, <delay>);
setInterval(<function>, <delay>);

The key difference is that setTimeout will wait the specified period of time, run the code we give it, and then stop. setInterval, on the other hand, will wait, run the code—then wait again and run the code again—repeating forever (or until we tell it to stop). If the code we pass it updates a visible property of an element, and the delay we assign it is relatively small, we can achieve the illusion of animation. In fact, behind the scenes, this is what jQuery is doing when we use any of its animation functions!

Timers can be tricky to use correctly, largely because they cause problems of scope, which we'll discuss in more detail in the section called "Scope" in Chapter 6. However, we want to master them, as timers are the key to freeing our pages from the tyranny of user-initiated events!

Setting up a Timer

Here's a small example to demonstrate timers at work: we'll simply move a green box smoothly across the screen. Of course, we could rely on jQuery's animate method to do this for us, but we want to learn what is really going on in the JavaScript. This will involve positioning a square div and continuously updating its left CSS property. Let's set up our boxes:

chapter_04/05_timers/index.html (excerpt)

(The div elements were lost in extraction; only the two "Go!" button labels survive. From the script below, the boxes carried the IDs green and red.)

The boxes are sitting still; to animate them we're going to need a timer. We'll use the setInterval timer, because we want our code to be executed repeatedly:

chapter_04/05_timers/script.js (excerpt)

var greenLeft = parseInt($('#green').css('left'));
setInterval(function() {
  $('#green').css('left', ++greenLeft);
}, 200);

Our div moves slowly across the screen: every 200 milliseconds, we're pushing it a pixel to the right. Changing the size of the delay affects the speed of the animation. Be careful, though: if you set the delay very low (say, less than 50 milliseconds) and you're doing a lot of DOM manipulation with each loop, the average user's browser will quickly grind to a halt. This is because their computer is not given enough time to do everything you asked it to before you come around asking it again. If you think your code might be at risk, it's best to test it on a variety of machines to ensure the performance is acceptable.

It's also possible to replicate setInterval's functionality using the setTimeout function, by structuring our code a bit differently:

chapter_04/05_timers/index.html (excerpt)

var redLeft = parseInt($('#red').css('left'));
function moveRed() {
  setTimeout(moveRed, 200);
  $('#red').css('left', ++redLeft);
}
moveRed();

Here we have a function called moveRed, inside of which we have a setTimeout timer that calls … moveRed! As setTimeout only runs once, it will only call moveRed once. But because moveRed contains the timer call, it will call itself again and again—achieving the same result as setInterval.

Stopping Timers

Usually it's undesirable (or unnecessary) for our timers to run forever. Thankfully, timers that you start running can be forced to stop by calling the appropriate JavaScript command, clearInterval or clearTimeout (again the placeholders were stripped; both take the timer's ID):

clearInterval(<timer ID>);
clearTimeout(<timer ID>);

To call either of these functions you need to pass in the timer's ID. How do we know what the ID is?
The ID is an integer number that's assigned to the timer when you create it. If you know you might want to stop a timer in the future, you must store that number in a variable:

var animationTimer = setInterval(animate, 100);

The timer can now be stopped at any time with the following code:

clearInterval(animationTimer);

And that's all there is to know about setTimeout and setInterval! Don't worry if they still seem a little fuzzy: we'll be using them as required through the rest of the book, and seeing them in context will help you become more accustomed to them.

Fading Slideshow

Cross-fading between two images is fairly straightforward: it's always fading one image in as the other fades out. If we extend the idea to a whole bunch of images for, say, a rotating image gallery, the task becomes a little more difficult. Now we'll need to calculate which picture to show next, and make sure we wrap around after we've shown the last image.

A common trick you'll see with jQuery image galleries is to fake the cross-fade by hiding all of the images except for the current image. When it comes time to swap, you simply hide the current image, and fade in the next one. Because there's no true overlap occurring with the images, this doesn't really qualify as a cross-fade; however, it's a simple solution that might be all you need, so we'll look at it first. The next example we'll look at will be a true cross-fader.

Our basic slideshow will consist of a bunch of images inside a div. We'll designate one of the images as the visible image by assigning it the class show:

chapter_04/06_slideshow_fade/index.html (excerpt)

(The markup was lost in extraction; from the script below, it evidently contained a div with the ID photos wrapping the gallery images, one of which carried class="show".)

We'll hide all the images by default. The show class has a double purpose for this slideshow: it enables us to target it in CSS to display it, and—equally importantly—it gives us a handle to the current image. There's no need to keep track of a variable, such as var currentImage = 1, because the class name itself is functioning as that variable. Now we need to start running a JavaScript timer so we can loop around our images. We'll write a function that calls itself every three seconds:

chapter_04/06_slideshow_fade/script.js (excerpt)

$(document).ready(function() {
  slideShow();
});
function slideShow() {
  var current = $('#photos .show');
  var next = current.next().length ? current.next() :
    current.parent().children(':first');
  current.hide().removeClass('show');
  next.fadeIn().addClass('show');
  setTimeout(slideShow, 3000);
}

We know the current image has the class show, so we use that to select it. To find the next image we want to show, we use a bit of conditional logic. If the next sibling exists, we select it. If it doesn't exist, we select the first image, so the slideshow wraps around.

Ternary Operator

You might be a little confused by the syntax we've used to assign a value to the next variable. In JavaScript (and in many other programming languages), this is called the ternary operator. It's a shortcut for setting a variable conditionally. The syntax a ? b : c means that if a is true, return b; otherwise, return c. You can use this in variable assignment, as we did above, to assign different values to the same variable depending on some condition. Of course, a longer if / else statement can always do the same job, but the ternary operator is much more succinct, so it's well worth learning.

So, to resume, the line:

var next = current.next().length ?
  current.next() : current.parent().children(':first');

can be translated into English as follows: if the current element has a sibling after it in the same container (if the next method returns a non-empty array), we'll use that. On the other hand, if next returns an empty array (so length is 0, which is false in computer terms), we'll navigate up to the current element's parent (the #photos div) and select its first child (which will be the first photo in the slideshow).

Finally, we hide the current image and fade in the next one. We also swap the show class from the old photo onto the new one, and set a timeout for the slideShow method to call itself again after three seconds have passed.

True Cross-fading

Our last solution looks nice—but it's just a fade, rather than a true cross-fade. We want to be able to truly cross-fade: as the current picture is fading out, the next picture is fading in. There is more than one way to skin a jQuery effect, and the approach we'll take for our implementation is as follows:
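The excerpt is truncated at this point (the implementation itself continues in the next part of the book). As a stopgap, here is a minimal sketch of what a true cross-fade in the same style could look like. This is an illustration, not the book's actual listing, and it assumes the same #photos markup with the images stacked on top of one another via CSS:

```javascript
// Sketch only: fade the current photo out while the next fades in.
// For a real overlap the images must be stacked (position: absolute).
function crossFade() {
  var current = $('#photos .show');
  var next = current.next().length ?
    current.next() : current.parent().children(':first');
  current.fadeOut(1000).removeClass('show'); // both animations run
  next.fadeIn(1000).addClass('show');        // at the same time
  setTimeout(crossFade, 3000);
}
```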
# Thread: One and Two Sided Inverses

1. ## One and Two Sided Inverses

I posted a thread not too long ago about one and two sided identities, but I'm having a little confusion with inverses. The question is:

"Prove: Let $\displaystyle <A,O>$ be a system with identity $\displaystyle e$ in which $\displaystyle O$ is associative. If $\displaystyle b$ is a left-inverse for $\displaystyle a \in A$ and $\displaystyle c$ a right-inverse for $\displaystyle a$, then $\displaystyle b = c$. As corollaries, show that (a) a two-sided inverse is unique, and (b) if $\displaystyle O$ is commutative, then $\displaystyle <A,O>$ has at most one left inverse."

I'm sure there is a super simple solution but it isn't immediately apparent to me, thanks in advance.

2. Originally Posted by jameselmore91
I posted a thread not too long ago about one and two sided identities but I'm having a little confusion with inverses. The question is: "Prove: Let $\displaystyle <A,O>$ [A is a set of objects, O is a binary operation on them?] be a system with identity $\displaystyle e$ in which $\displaystyle O$ is associative. If $\displaystyle b$ is a left-inverse for $\displaystyle a \in A$ and c a right-inverse for a, then b = c. As corollaries, show that (a) a two-sided inverse is unique, and (b) if $\displaystyle O$ is commutative, then $\displaystyle <A,O>$ has at most one left inverse." I'm sure there is a super simple solution but it isn't immediately apparent to me, thanks in advance.

By the definition of "left inverse", $\displaystyle ba = e$, the identity. By the definition of "right inverse", $\displaystyle ac = e$. Multiply both sides of $\displaystyle ac = e$, on the left, by b and see what happens. (Using, of course, the fact that O is associative, so b(ac) = (ba)c.) The last two parts, then, are simple.
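Spelling out that hint (a worked version of the suggested multiplication, not part of the original reply): multiplying $ac = e$ on the left by $b$ and using associativity gives

$$b \;=\; b\,e \;=\; b\,(a\,c) \;=\; (b\,a)\,c \;=\; e\,c \;=\; c.$$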
# A Whole New Dimension of Optimization

So far in this series about optimization (1, 2) I've dealt only with univariate objectives. It's time to face the multivariate setting. Here I'm about to explore two very different approaches that build upon all the previous work. The first will "lift" any univariate method and use it to construct an algorithm for multivariate objectives. The second will generalize the only non-bracketing method encountered thus far, namely, Newton's method.

# 1. Coordinatewise Optimization

#### 1.1. Sufficient Conditions

It is a dark and stormy night. You're finding yourself all alone in a desolate alley, facing a multivariate objective. All you got is your wallet, your wit and some univariate optimization algorithms you've read about in an obscure blog. Looks like you have only one course of action. But it's insane, desperate and far-fetched. It can never work, can it?

You're in luck! It can, and probably will. As naive as it sounds, optimizing a multivariate objective by operating on each coordinate separately actually often works very well. Moreover, some state-of-the-art machine learning algorithms are based on exactly this concept.

But it does not always work. If it's going to work, we need a coordinatewise extremum to be a local extremum. So given an objective $f:R^N\rightarrow R$, and a point $\hat{x}\in R^N$ such that $f(\hat{x}+\delta e_i)\gt f(\hat{x})$ for all small-enough $\delta$ and all $i=1,...,N$ (where $e_i=(0,...,1,...,0)\in R^N$ is the standard basis vector) - when does it follow that $\hat{x}$ is a local minimizer of $f$? Here are 2 simple cases, a positive one and a negative one:

• If $f$ is differentiable, then coordinatewise optimization indeed leads to a candidate extremum (that's easy: $\nabla f(\hat{x})=(\frac{\partial f(\hat{x})}{\partial x_1},...,\frac{\partial f(\hat{x})}{\partial x_N})=0$).
• But if $f$ is not differentiable, then coordinatewise optimization doesn't necessarily lead to a candidate extremum, even if it's convex (the illustrating figure did not survive extraction).

A more interesting case comes from mixing the two: consider an objective $f(\vec{x})=g(\vec{x})+\sum_{i=1}^kh_i(x_i)$ where $g$ and the $h_i$s are assumed convex, but only $g$ is surely differentiable. Note that each term $h_i$ depends only on the $i$-th coordinate. Such functions are called "separable", and such objectives are actually quite common. For example, they can be used to formulate plenty of machine-learning algorithms, where $g(x)$ is a differentiable loss-function composed with a parameterization of the hypothesis space, and the $h_i$ functions are regularization terms which are often convex yet non-smooth (as in the case where $\ell_1$ regularization is involved, so $h_i(x)=|x|$ for some $i$s). Similarly, such objectives also occur naturally in the context of compressed sensing.

And for god is just and merciful, separable objectives can be optimized coordinatewise. Well, maybe it's not so much about god as it is about the fact that for any $x\in R^k$ we have -

$$\begin{equation*} \begin{split} f(x)-f(\hat{x})=g(x)-g(\hat{x})+&\sum_{i=1}^k{(h_i(x_i)-h_i(\hat{x}_i))} \ge\nabla g(\hat{x})(x-\hat{x})+\sum_{i=1}^k{(h_i(x_i)-h_i(\hat{x}_i))} \\ & \ge\sum_{i=1}^k{(\underbrace{\nabla_ig(\hat{x})(x_i-\hat{x}_i)+h_i(x_i)-h_i(\hat{x}_i)}_{\ge 0})}\ge 0 \end{split} \end{equation*}$$

where the first inequality follows from the (sub)gradient inequality, and the last inequalities hold since we assumed that $\hat{x}$ is a coordinatewise-minimizer.
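A standard concrete instance of this separable form (my example, not from the original post) is the $\ell_1$-regularized least-squares ("lasso") objective:

$$f(x)=\underbrace{\tfrac{1}{2}\|Ax-b\|_2^2}_{g(x):\ \text{smooth, convex}}+\sum_{i=1}^{k}\underbrace{\lambda|x_i|}_{h_i(x_i):\ \text{convex, non-smooth}}$$

which has exactly the required shape, so by the argument above it can safely be minimized one coordinate at a time.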
#### 1.2. Algorithmic Schemes

The outline of a coordinatewise-optimization algorithm practically writes itself:

• 1. Maintain a "current best solution" $\hat{x}$.
• 2. Repeat until you had enough:
  • 2.1. Loop over the coordinates $i=1,...,k$:
    • 2.1.1. Optimize $f$ as a univariate function of $x_i$ (while holding the other coordinates of $\hat{x}$ fixed).
    • 2.1.2. Update the $i$-th coordinate of $\hat{x}$.

This algorithm is guaranteed to converge (when coordinatewise-optimization is applicable, as discussed in the previous section), and in practice, it usually converges quickly (though as far as I know, the theoretical reasons for its convergence rate are not yet fully understood). Still, two issues demand attention: Sweep Patterns and Grouping.

By sweep-patterns I mean the order in which the algorithm goes through the coordinates in each iteration. But first, it should be noted that the fact that the coordinates are optimized sequentially and not in parallel is crucial. In most cases, the following variation will not converge:

• 1. Maintain a "current best solution" $\hat{x}$.
• 2. Repeat until you had enough:
  • 2.1. For each coordinate $i=1,...,k$, work in parallel:
    • 2.1.1. Optimize $f$ as a univariate function of $x_i$.
  • 2.2. Update all the coordinates of $\hat{x}$.

In some very special cases it does converge, but even then it tends to converge much slower. For example (though in the very related context of root finding instead of optimization), the Gauss-Seidel algorithm for solving a system of linear equations has the first sequential form while the Jacobi algorithm has the second parallel form. In this special case each algorithm converges if and only if the other one does - but the Gauss-Seidel algorithm is twice as fast.

For the sequential (and typical) variation, the order in which the coordinates are iterated is often not important, and going over them in a fixed arbitrary order is just fine. But there's a lot of variation: sometimes the convergence rate may be improved by a randomization of the order; sometimes it's possible to fixate the values of some coordinates and skip their optimization in future iterations; and sometimes the algorithm can access additional information that may hint which coordinates will likely lead to a faster convergence and should be optimized first. Here's the sequential scheme in pseudo code (the listing was an image that did not survive extraction; a sketch in Python appears at the end of this section):

Finally, the point of "grouping" is that a function $f(x_1,...,x_n):R^n\rightarrow R$ can be treated as a function $f(X_1,...,X_N):R^{n_1}\times R^{n_2}\times...\times R^{n_N}\rightarrow R$ where $\sum_{i=1,...,N}{n_i}=n$, and the scheme above can work by optimizing one group $X_i$ at a time (using some multivariate optimization algorithm for each block). Actually, the degrees of freedom are even greater, since nonadjacent coordinates can be grouped together. Many times, this can be used to convert problems that are unsuitable for coordinatewise-optimization into problems that are solvable coordinatewise, and lead to significant improvements. For example, it can make a nondifferentiable convex objective into a separable one. Possibly the most notable example of such a scheme is the SMO algorithm, which was one of the earliest efficient optimization methods for SVMs. It minimizes 2 coordinates at a time (though it works in a constrained setting). Nowadays there are better algorithms for learning SVMs, but the state-of-the-art (to my knowledge) is still based on coordinatewise-optimization.
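As promised, here is a sketch of the sequential scheme in Python. The golden-section helper and all names are mine; the post's original listing was an image that did not survive extraction.

```python
import numpy as np

def golden_section(f, a, b, tol=1e-8):
    """Minimize a univariate function f on [a, b] by golden-section search."""
    invphi = (5 ** 0.5 - 1) / 2  # 1/phi, approximately 0.618
    while b - a > tol:
        c = b - invphi * (b - a)  # lower probe
        d = a + invphi * (b - a)  # upper probe
        if f(c) < f(d):
            b = d  # minimum lies in [a, d]
        else:
            a = c  # minimum lies in [c, b]
    return (a + b) / 2

def coordinatewise_minimize(f, x0, radius=10.0, sweeps=50):
    """Step 2 of the scheme: sweep the coordinates in a fixed order,
    minimizing f along each one while holding the others fixed."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            def along_i(t, i=i):
                xt = x.copy()
                xt[i] = t
                return f(xt)
            x[i] = golden_section(along_i, x[i] - radius, x[i] + radius)
    return x

# A smooth convex test objective, where coordinatewise optimization applies:
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2 + 0.5 * x[0] * x[1]
print(coordinatewise_minimize(f, [0.0, 0.0]))  # lands near the true minimizer
```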
And a concluding note about terminology: many sources (including Wikipedia) refer to the algorithmic scheme described here by the name Coordinate Descent. But there's another algorithmic scheme, to be presented in the near future, that also has this name - and I think more deservedly (spoiler: it's a line-search whose search directions are parallel to the axes). So I prefer to reserve the name "Coordinate Descent" for that algorithm, and call the one presented here "Coordinatewise Optimization".

# 2. Multidimensional Newton's Method

#### 2.1. Multidimensional Optimization

Alright, so sometimes it's possible to utilize univariate methods in a multivariate setting without any generalization - simply by applying them coordinate-wise. This can work really great at times, but it doesn't always work. And even when it does, it sometimes doesn't work well.

An alternative approach would be to generalize a univariate method so it would work on "all coordinates at once". A hint as for how it can be done was already given in the previous post where Newton's method for finding roots was introduced, and it was mentioned that it can be used (at least theoretically) for finding roots of multivariate functions.

A quick reminder: Newton's method is the iterative algorithm $x_{n+1} \leftarrow x_n + \Delta_n$ where $\Delta_n$ is the solution to the linear system $J(x_n)\Delta_n = -F(x_n)$ for the function of interest $F:R^n\rightarrow R^m$. Ideally, $J(x_n)$ should be computed analytically or algorithmically, but - unlike in the univariate case - it is acceptable to compute it numerically. The following implementation assumes, for simplicity, $m=n$ (the listing was an image that did not survive extraction; a Python sketch appears at the end of this section):

By courtesy of Fermat's theorem, Newton's method leads to an optimization algorithm which can be efficient for finding candidate extremum points. Naturally, in this context $m=1$ and the objective has the form $f:R^n\rightarrow R$. Again, due to Taylor, $f(x+h) = f(x) + h^T\nabla f(x) + \frac{1}{2}h^TH(x)h + O(\|h\|^3)$. If $h$ leads to an extremum, the optimality condition asserts that $\nabla f(x+h)=0$. Thus differentiation of both sides with respect to $h$ leads to the conclusion that $h$ is the solution of $0\approx\nabla f(x)+H(x)h$ (the equation $H(x)h = -\nabla f(x)$ - or sometimes $\nabla f(x+h) = \nabla f(x) + H(x)h$ - is known as the Secant Equation, and is central to many optimization algorithms).

The iterative scheme of Newton's method is prototypic for many optimization algorithms; each step, the algorithm solves the secant equation $0=\nabla f(x_n)+H(x_n)h$ to obtain a step $h$, which it then takes: $x_{n+1}\leftarrow x_n+h$. When - as expected - $f(x_{n+1})\lt f(x_n)$, the step $h$ is called a "descent step" and its direction is called a descent direction.

This is all fine and dandy, unless you actually want to use this method in practice. That's when constraints regarding time-complexity and memory usage are going to render naive implementations of Newton's method useless. The future posts in this series will deal with actual implementation details of Newton-related optimization algorithms, but for starters let's consider the implementation of Newton's algorithm for multivariate root-finding. This will allow me to introduce more easily some core ideas that are going to be used over and over again later.
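The lost Newton-Raphson listing can be paraphrased as follows. This is my sketch (all names mine), assuming $m=n$ and using a forward-difference Jacobian, which the text explicitly allows:

```python
import numpy as np

def numerical_jacobian(F, x, eps=1e-7):
    """Forward-difference estimate of the Jacobian of F at x."""
    x = np.asarray(x, dtype=float)
    Fx = np.asarray(F(x))
    J = np.empty((len(Fx), len(x)))
    for j in range(len(x)):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (np.asarray(F(x + step)) - Fx) / eps
    return J

def newton_root(F, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson for F(x) = 0 with F: R^n -> R^n (so m = n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = np.asarray(F(x))
        if np.linalg.norm(Fx) < tol:
            break
        # Solve J(x_n) Delta_n = -F(x_n) and take the full step.
        delta = np.linalg.solve(numerical_jacobian(F, x), -Fx)
        x = x + delta
    return x

# The post's running example: F(x, y) = [x^2 y, 5x + sin y].
F = lambda v: np.array([v[0] ** 2 * v[1], 5 * v[0] + np.sin(v[1])])
print(newton_root(F, [1.0, 2.0]))  # full steps may fail far from a root;
                                   # that is what "Adaptive Steps" fixes
```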
The main themes are the refinement of the concept of "descent steps" (for which the following section on "Adaptive Steps" serves as an introduction), and approximations for the Jacobian and the Hessian (which will be introduced in the next section, on "Broyden's Method"). Remember, even though I'm constantly thinking about optimization, here I'm discussing roots. So instead of the extrema of $f:R^n\rightarrow R$, the following will deal with the roots of $F:R^n\rightarrow R^m$.

#### 2.2. Adaptive Steps

(This section heading was missing from the extracted text; it is restored here since the surrounding prose refers to it by name.)

Given a newton-step $h$, it is not always advised to accept it and set $x_{n+1} \leftarrow x_n + h$. In particular, when $x_n$ is far away from the root, the method may fail completely. However, a newton-step for $F$ is guaranteed to be a descent direction with respect to $f:=\frac{1}{2}F\cdot F$, thus there exists $0<\lambda\le1$ for which $f(x_n+\lambda h) < f(x_n)$ (yet again a demonstration of the folk wisdom "optimization is easier than finding roots"). Since the minimization of $f$ is a necessary condition for roots of $F$, we use this as a "regularization procedure", and each step becomes $x_{n+1} = x_n + \lambda h$ with $\lambda$ for which $f(x_n+\lambda h)$ has decreased sufficiently. Furthermore, we require that $f$ be decreased relatively fast compared to the step-length $\|\lambda h\|$ (specifically, $f(x_{n+1}) < f(x_n) + \alpha\nabla f\cdot (x_{n+1}-x_n)=f(x_n)+\alpha\nabla f\cdot\lambda h$), and that the step-length itself won't be too small (e.g. by imposing a cutoff on $\lambda$). Following the above greatly improves the global behaviour (far away from the roots) of Newton-Raphson.

It remains to decide how to find an appropriate $\lambda$. The strategy is to define $g(\lambda):=f(x_n + \lambda h)$, and at each step model it quadratically or cubically based on the known values of $g$ from previous steps, and choose as the next $\lambda$ a value that minimizes $g$'s model (trying to minimize $g$ directly is extremely wasteful in terms of function evaluations). This is also the core idea behind a major family of optimization algorithms, called line-search algorithms. In detail:

1. Start with $\lambda_0=1$ (a full newton step). Calculate $g(1)=f(x_{n+1})$ and test if $\lambda_0$ is acceptable.
2. If it is unacceptable, model $g(\lambda)$ as a quadratic based on $g(0), g'(0), g(1)$ and take its minimizer $\lambda_1 = -\frac{g'(0)}{2(g(1)-g(0)-g'(0))}$. Calculate $g(\lambda_1)=f(x_{n+1})$ and test if $\lambda_1$ is acceptable.
3. If it is unacceptable, model $g(\lambda)$ as a cubic based on $g(0), g'(0), g(\lambda_{k-1}), g(\lambda_{k-2})$ and take its minimizer $\lambda_{k} = \frac{-b+\sqrt{b^2-3ag'(0)}}{3a}$, where $(a, b)$ are the coefficients of $g$'s model $g(\lambda) = a\lambda^3 + b\lambda^2 + g'(0)\lambda + g(0)$, so:
4. $$a = \frac{1}{\lambda_{k-2}-\lambda_{k-1}} \left(\frac{A_2}{\lambda_{k-2}^2}-\frac{A_1}{\lambda_{k-1}^2}\right)$$ $$b = \frac{1}{\lambda_{k-2}-\lambda_{k-1}} \left(\frac{A_1\lambda_{k-2}}{\lambda_{k-1}^2}-\frac{A_2\lambda_{k-1}}{\lambda_{k-2}^2}\right)$$ with $A_i=g(\lambda_{k-i})-g'(0)\lambda_{k-i}-g(0)$.
5. Repeat step 3 if necessary, and always enforce $0.1\lambda_{k-1} < \lambda_k < 0.5\lambda_{k-1}$.

#### 2.3. Broyden's Method

Central to the Newton-Raphson algorithm is the equation $J(x_n)\Delta_n = -F(x_n)$ (or the secant equation for optimization). For large problems, the computation of the Jacobian $J(x_n)$ can be expensive. Broyden's method is simply a modification of Newton-Raphson that maintains a cheap approximation of the Jacobian.
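Before turning to Broyden's method, here is a sketch in Python of the safeguarded backtracking just described. The constants (alpha, the 0.1/0.5 cutoffs) follow the text; everything else (the names, the clamping of the discriminant) is my paraphrase, not the post's own code:

```python
def backtrack(g, g0, gprime0, alpha=1e-4):
    """Shrink lambda until g(lam) < g(0) + alpha * g'(0) * lam.
    g0 = g(0), gprime0 = g'(0) (negative along a descent direction)."""
    lam, g_lam = 1.0, g(1.0)          # step 1: try the full newton step
    lam_prev, g_prev = None, None
    while g_lam >= g0 + alpha * gprime0 * lam:
        if lam_prev is None:          # step 2: quadratic model
            lam_new = -gprime0 / (2.0 * (g_lam - g0 - gprime0))
        else:                         # steps 3-4: cubic model (assumes a != 0)
            A1 = g_lam - gprime0 * lam - g0
            A2 = g_prev - gprime0 * lam_prev - g0
            denom = lam - lam_prev
            a = (A1 / lam ** 2 - A2 / lam_prev ** 2) / denom
            b = (-lam_prev * A1 / lam ** 2 + lam * A2 / lam_prev ** 2) / denom
            disc = max(b * b - 3.0 * a * gprime0, 0.0)  # clamped for safety
            lam_new = (-b + disc ** 0.5) / (3.0 * a)
        # step 5: enforce the cutoffs relative to the previous lambda
        lam_new = min(max(lam_new, 0.1 * lam), 0.5 * lam)
        lam_prev, g_prev = lam, g_lam
        lam = lam_new
        g_lam = g(lam)
    return lam
```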
This Jacobian-approximation idea (here presented in the context of root-finding) is central to the useful BFGS optimization algorithm that will be discussed later.

From the definition of the differential, we know that $J$ is a linear map that approximately satisfies $J\Delta x = \Delta F$. So at the $i$-th step, Broyden's method approximates $J$ by $J_i$ that solves the equation $J_i(x_{i-1}-x_i) = F_{i-1}-F_i$. Since generally this equation does not determine $J_i$ uniquely, Broyden's method uses $J_{i-1}$ as a prior, and takes as $J_i$ the solution that is closest to $J_{i-1}$ (in the sense of the Frobenius norm). The displayed formula was lost in extraction; it is the standard Broyden update,

$$J_i = J_{i-1} + \frac{(\Delta F_i - J_{i-1}\Delta x_i)\,\Delta x_i^T}{\Delta x_i^T \Delta x_i}, \qquad \Delta x_i = x_i - x_{i-1},\quad \Delta F_i = F_i - F_{i-1}.$$

One possible follow-up strategy is to compute a newton step by solving $J(x_n)\Delta_n = -F(x_n)$. Doing this directly has a time complexity of something like $O(N^3)$ (where $N$ is the number of variables). By "something like" I sloppily mean that solving a system of linear equations in $N$ unknowns has the same (arithmetic) time-complexity as matrix multiplication, and specifically, practical applications use matrix factorization algorithms which are $O(N^3)$. But instead of using, say, LU decomposition to find $\Delta_n$ (which takes $\frac{2}{3}N^3$), the Sherman-Morrison inversion formula can be used to maintain the inverse directly. The formula shown at this point was also lost in extraction; in its standard form (reconstructed, not the post's own rendering) it reads

$$J_i^{-1} = J_{i-1}^{-1} + \frac{(\Delta x_i - J_{i-1}^{-1}\Delta F_i)\,\Delta x_i^T J_{i-1}^{-1}}{\Delta x_i^T J_{i-1}^{-1} \Delta F_i}$$

and the result is an $O(N^2)$ algorithm for approximately finding $\Delta_n$.

On the other hand, in order to incorporate adaptive steps (as described above), the approximation of $J_i$ is required (recall: $\nabla(\frac{1}{2}F\cdot F) \approx J^T\cdot F$), while the above method produces an approximation of $J_i^{-1}$. So instead, it's more common to forget all about Sherman-Morrison, and stick to the original iterative approximation of $J_i$. That's OK, since it's still possible to exploit the fact that in each iteration a rank-1 update is done, and keep the $O(N^2)$ time-complexity, instead of the naive $O(N^3)$. The secret is to solve $J(x_n)\Delta_n = -F(x_n)$ via a QR factorization instead of the (usually preferable) LU factorization. That does the trick, since the QR factorization can be updated iteratively in $O(N^2)$: if $A$ is an $N\times N$ matrix, and $\hat{A}=A+s\otimes t$ (where $s$ and $t$ are in $R^N$), then $A=QR\Rightarrow\hat{A}=Q(R+u\otimes t)$ with $u=Q^Ts$. From here it takes $2(N-1)$ Jacobi rotations to obtain a QR factorization of $\hat{A}$.

For example, consider $f(x,y)=[x^2y, 5x+\sin y]$, so $J(x,y)=\begin{bmatrix} 2xy & x^2 \\ 5 & \cos{y} \end{bmatrix}$. (The code listings here were images that did not survive extraction; only their outputs remain.) Broyden's approximation is pretty good:

Out:
Actual Jacobian at x=[9.614, 2.785]:
[[ 53.54  92.42 ]
 [  5.    -0.937]]
Approximated Jacobian at x=[9.614, 2.785]:
[[ 53.54  92.42 ]
 [  5.    -0.937]]

And it can be used to find roots as follows:

Out:
Initial point: [ 0.818 0.428]
Found root: [ 6.974e-04 8.168e+01]
Objective = [ 3.973e-05 3.425e-08]
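The code behind those outputs was lost with the images; here is a sketch in the same spirit (my code and names, not the post's). It re-solves the linear system from scratch each step, so it is O(N^3) per iteration rather than the O(N^2) that the QR-update trick achieves:

```python
import numpy as np

def broyden_root(F, x0, tol=1e-10, max_iter=200, eps=1e-7):
    """Root-finding with Broyden's rank-one Jacobian update,
    seeded with a forward-difference Jacobian at the initial point."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    J = np.empty((len(Fx), len(x)))
    for j in range(len(x)):                 # initial finite-difference Jacobian
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (F(x + step) - Fx) / eps
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(J, -Fx)        # full newton-like step
        x_new = x + dx
        F_new = F(x_new)
        dF = F_new - Fx
        # Broyden update: J += ((dF - J dx) dx^T) / (dx^T dx)
        J += np.outer(dF - J @ dx, dx) / (dx @ dx)
        x, Fx = x_new, F_new
    return x, J

# The post's example: F(x, y) = [x^2 y, 5x + sin y].
F = lambda v: np.array([v[0] ** 2 * v[1], 5 * v[0] + np.sin(v[1])])
root, J_approx = broyden_root(F, [0.8, 0.4])
print(root, F(root))  # should land near a root, as in the output above
```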
# Kerning between $\uptau$ and subscript?

I'm trying to get a $\uptau$ with a right-hook at the bottom:

\documentclass{article}
\usepackage{upgreek}
\begin{document}
$\tau_X$

$\uptau_X$
\end{document}

Unfortunately, there doesn't seem to be any kerning between the $\uptau$ and a following subscript:

Wondering if there is a simple solution, or do users simply tolerate this.

• $\tau_{\!X}$? – GuM Jun 8 '17 at 22:50
• Thanks, Gustavo. That works great. Even $\tau\!_X$ looks good. – user36800 Jun 9 '17 at 2:59
• Gustavo, if you can post that as the answer, I'll mark it as the answer. Thanks. – user36800 Jun 9 '17 at 14:48

It may perhaps be surprising, but TeX makes no provision at "machine-level" for inserting a negative kern before a subscript if the letter being subscripted has a shape that would make it recommendable: such a kern must be inserted manually, for example as

\tau_{\!X}

where the \! command inserts a negative thin space. More generally, you could say something like

\tau_{\mkern -4mu X}

and experiment until you find the correct value to place after \mkern.

Note, however, the difference between the two constructions

\tau_{\!X}

and

\tau\!_{X}

They differ in two ways: first, a "thin space" corresponds to two different absolute amounts in "text size" (latter alternative) and in "script size" (former one); second, in the first case the subscript is actually appended to the character "tau", but in the second case it becomes the subscript of an atom whose nucleus is empty. Although the output that the two idioms yield may look almost the same at first sight, a closer glance will reveal it is not quite so. Here is a minimal compilable example; if you uncomment the two \showlists commands, some tracing information will be displayed on your terminal that proves what we have just said.

% My standard header for TeX.SX answers:
\documentclass[a4paper]{article} % To avoid confusion, let us explicitly
                                 % declare the paper format.
\usepackage[T1]{fontenc} % Not always necessary, but recommended.
% End of standard header. What follows pertains to the problem at hand.
\usepackage{upgreek}
\showboxdepth = 10
\tracingonline = 1
\begin{document}
Note that
$$\tau_{\!X},\uptau_{\!X} \ne \tau\!_{X},\uptau\!_{X} % \showlists$$. % \showlists
Alternative:
$$\tau_{\mkern -4mu X}, \uptau_{\mkern -3.5mu X}$$.
\end{document}

And here is a magnified view of the significant portion of the output:

# Edit

The statement that "TeX makes no provision… for inserting a negative kern before a subscript" could be questioned, because, for example, $P_{x}^{2}$ and $Q_{x}^{2}$ yield different results, so let me clarify exactly what I mean here. As The TeXbook says, "a subscript will be 'tucked in' slightly when it follows certain letters" (bottom of p. 129); the point is that the amount of this "tucking in" depends only on the symbol being subscripted, and not on the subscript, whereas the amount of kerning between two characters is a function of both of them. Indeed, given an atom with a non-empty subscript and whose nucleus is simply a character, the horizontal placement of the subscript is governed by the italic correction of that character; in a font intended for typesetting math, the italic correction of characters is actually used only for this purpose.

• Thanks, Gustavo! I suspected that the placement of \! mattered. I like the cozier spacing option, at least for these two characters. – user36800 Jun 9 '17 at 21:21
hal-00716593, version 2

## Greedy-Like Algorithms for the Cosparse Analysis Model

Raja Giryes (1), Sangnam Nam (2), Michael Elad (1), Rémi Gribonval (2), Mike E. Davies (3)

Abstract: The cosparse analysis model has been introduced recently as an interesting alternative to the standard sparse synthesis approach. A prominent question brought up by this new construction is the analysis pursuit problem -- the need to find a signal belonging to this model, given a set of corrupted measurements of it. Several pursuit methods have already been proposed based on $\ell_1$ relaxation and a greedy approach. In this work we pursue this question further, and propose a new family of pursuit algorithms for the cosparse analysis model, mimicking the greedy-like methods -- compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), iterative hard thresholding (IHT) and hard thresholding pursuit (HTP). Assuming the availability of a near optimal projection scheme that finds the nearest cosparse subspace to any vector, we provide performance guarantees for these algorithms. Our theoretical study relies on a restricted isometry property adapted to the context of the cosparse analysis model. We explore empirically the performance of these algorithms by adopting a plain thresholding projection, demonstrating their good performance.

• Domain: Computer Science / Signal and Image Processing; Engineering Sciences / Signal and Image Processing; Mathematics / Functional Analysis
• Keywords: Sparse representations – Compressed sensing – Synthesis – Analysis – CoSaMP – Subspace pursuit – Iterative hard thresholding – Hard thresholding pursuit
• Comment: partially funded by the ERC – PLEASE project – ERC-2011-StG-277906
• Available versions: v1 (2012-07-10), v2 (2013-01-18)
# IR2110 Simulation not working in LTSPICE

I am trying to implement a half bridge topology as shown in the figure below:

I have replaced the IRF450 with an STGW40H60DLFB IGBT. I have also replaced the 11DF4 with a UF4007. My LTSPICE schematic looks like:

My VDS and VGS on the high side and low side are:

When I run my LTSPICE simulation with a pulse with 100K frequency, I get:

I can't spot an error in my LTSPICE, please let me know if you have any idea what it might be.

MODEL used for UF4007:

.MODEL UF4007 D N=3.97671 IS=3.28772u RS=0.149734 EG=1.11 XTI=3 CJO=2.92655E-011 VJ=0.851862 M=0.334552 FC=0.5 TT=1.84973E-007 BV=1000 IBV=0.2 Iave=1 Vpk=1000 type=silicon

• Can you post the contents of the IR2110 subcircuit? It requires registration. You could also post the contents of the .asc file, it would help. Use code delimiters for the block of text. – a concerned citizen Oct 22 '20 at 8:45
• No need for that, anymore. The problem is with your IGBT symbols, see the answer. It's safe to delete your comments. – a concerned citizen Oct 22 '20 at 9:11
• I built the circuit using a FET and got the desired results, I will do further research on the circuit using the STGW40H60DLFB and mark your answer as correct. Thanks for your help. – Anuj Simkhada Oct 22 '20 at 21:52

It looks like you used an autogenerated symbol for your IGBT, but you connected it wrong, because the order of the pins is D-G-S. But, instead of using an autogenerated symbol you can make things easier for you if you choose the [Misc]/nigbt symbol, which you can use directly as an IGBT. All you have to do is rename NIGBT to STGW40H60DLFB-V2, and you're done. You still have to add the correct prefix (right-click on the symbol, change Z to X); I thought that would be implied.

Anyway, the IGBT subcircuit is full of behavioural expressions and it's not very convergence-friendly. I don't know if it will help with your particular schematic, but I poked around and managed to get it working in a simple test by making these changes inside the STGW40H60DLFB-V2.lib file:

• on line 70 change r_escusione 1z a1 500 to c_escusione 1z a1 10p Rpar=500
• on line 73 change r_conv1 1y a1 10 to c_conv1 1y a1 10p Rpar=10
• on line 114 there's a Grg1 ...; add this line, crg1 g2 g 10p rpar=1g, right below it
• on line 168 add a ; at the beginning of the line, in front of E2 ..., then add these two lines below:
g2 50 40 g d1k 1k
r2 50 40 1m
• on line 204 add a ; at the beginning of the line, in front of E22 ..., and add these two lines below:
G22 502 402 ss d1k 1k
r22 502 402 1m

You can help in your schematic by setting Rser=10...100m for the voltage sources (V3, V4, and V1; V2 can be ignored), adding Rser=1...10m to the capacitors (all four), and adding Rpar=10...100k to the inductor. Also try changing V2, A1 and their connections like this:

• delete A1 and all the connections to the input pins HIN and LIN. V2 should be just sitting there in the schematic with no connections.
• add [Digital]/buf (not buf1) and connect its input to V2 and its outputs to the HIN and LIN pins. There should be a new A1 in the schematic.
• add vhigh=6 tau=10n tripdt=30n to the new A1.

These changes could help, too:

• add Vp=0.3 to the .model UF4007 card
• add this model for the 1N4148:
.model 1N4148 D(Is=2.52n Rs=.568 N=1.752 Cjo=4p M=.4 tt=20n Iave=200m Vpk=75 Vp=0.3 mfg=OnSemi type=silicon)

Try running your schematic with these changes. If you're religious, praying might help.
The suggestion above was correct. The problem was with my IGBTs.

I tried to use the STGW40H60DLFB-V2 by choosing the [Misc]/nigbt symbol as suggested above, but I couldn't get it to work. So, I used a FET model which can withstand high voltage and the circuit worked. The simulation served its purpose. The correct simulation looks like:

The results from the simulation look like:

V(n008) is the input PWM signal to the HIN pin.
V(n0011) is the inverted input PWM signal to the LIN pin.
V(n003) is the Drain-Source Voltage of the HIGH side.
V(n003,n004) is the Gate-Source Voltage of the HIGH side.
V(n004) is the Drain-Source Voltage of the LOW side.
V(n0010) is the Gate-Source Voltage of the LOW side.

In order to simulate this circuit properly without getting errors, the settings in Tools\ControlPanel\SPICE have to be changed as:

• I've updated my answer, see if it helps. – a concerned citizen Oct 23 '20 at 7:39
# Patterns with tenths and hundredths (add/sub)

Lesson

Let's bring together everything we've seen so far with patterns - whether working with decimals in the tenths place, decimals in the hundredths place, or fractions. Each time, we've seen that we need to:

1. Find the pattern - which will either be given, or which will need to be found by looking at the list of numbers.
2. Continue the pattern - using the pattern that you found.

Watch the video below to see some examples!

Now that you've seen how to work with patterns using decimals and fractions, have a go at some questions!

Worked Examples

Example 1

Complete the pattern: $\frac{8}{10}$, $\frac{7}{10}$, $\frac{6}{10}$, ___, ___, ___

What is the pattern?
A. The numbers are decreasing by $10$
B. The numbers are decreasing by $0.1$
C. The numbers are decreasing by $0.01$
D. The numbers are decreasing by $1$

Example 2

Complete the pattern: $0.3$, $1.0$, $1.7$, ___, ___, ___

What is the pattern?
A. The numbers are increasing by $\frac{7}{100}$
B. The numbers are increasing by $\frac{7}{10}$
C. The numbers are increasing by $70$
D. The numbers are increasing by $7$

Example 3

Complete the pattern: $0.02$, $0.03$, $0.04$, ___, ___, ___

What is the pattern?
A. The numbers are increasing by $\frac{1}{10}$
B. The numbers are increasing by $\frac{1}{100}$
C. The numbers are increasing by $1$
D. The numbers are increasing by $10$
# What is the meaning of the limit of the Fibonacci sequence?

For the general Fibonacci sequence with $F_1=F_2=1$, it is known that the limit of $\frac{F_{n+1}}{F_n}$ exists. I am wondering what this limit implies and why it is important to compare these two sequences. Thanks

• It is not in the nature of limits to imply, and there are no two sequences being compared (unless you mean the Fibonacci sequence and the same sequence shifted one place). – Marc van Leeuwen Nov 24 '17 at 12:39

One reason to be interested in $\frac{F_{n+1}}{F_n}$ and its limit is that it is a rational number which gives a good approximation to the golden ratio $\phi$ (which is irrational). In fact, in a way, this is the best rational approximation to $\phi$. Understanding why leads to the fascinating topic of continued fractions. The continued fraction expansion of $\phi$ is $$\phi = 1 + \frac{1}{1 + \frac{1}{1 + \frac{1}{1+\ldots}}}.$$ If you truncate this fraction after $n-1$ divisions you get the $n$-th convergent. The first convergent of $\phi$ is $1$, the second one is $1 + \frac{1}{1} = 2$, the third is $1 + \frac{1}{1 + \frac{1}{1}} = \frac{3}{2}$, the fourth is $1 + \frac{1}{1 + \frac{1}{1+\frac{1}{1}}} = \frac{5}{3}$. Hopefully you see the pattern: using induction you can show that the $n$-th convergent is $\frac{F_{n+1}}{F_n}$.

The neat thing is that a basic theorem about continued fractions shows that if $\frac{p}{q}$ is the $n$-th convergent of the continued fraction of a number $x$, then any other rational number $\frac{r}{s}$ with $s \le q$ is a worse approximation to $x$. I.e. for all $r$ and all $s \le q$, we have $$\left|x - \frac{p}{q}\right| \le \left|x - \frac{r}{s}\right|.$$ So, not only is $\frac{F_{n+1}}{F_n}$ a rational sequence that converges to $\phi$, but in fact it converges as fast as possible, in a certain precise sense. Additionally, as noted in a comment, because all terms in the continued fraction expansion of $\phi$ are $1$, $\phi$ is the hardest irrational to approximate. This is the essence of Hurwitz's inequality.

• Interesting to me is that the ratio of successive terms is alternately more and less than $\phi$, rather than approaching $\phi$ from one direction or the other. In a series which has alternating sign on each term (such as the Maclaurin series for cosine) it would be expected. – Weather Vane Nov 24 '17 at 7:39
• You should also mention that because of all those 1s in the continued fraction, in a sense $\phi$ is the most irrational number, because its continued fraction has the slowest possible convergence. – PM 2Ring Nov 24 '17 at 12:13
• @PM2Ring This is a good point: $\frac{F_{n+1}}{F_n}$ is the best rational approximation to the hardest irrational to approximate. – Sasho Nikolov Nov 24 '17 at 19:16

The existence of the limit reflects the fact that the Fibonacci sequence is essentially a geometric sequence (it is actually a linear combination of two geometric sequences, but one of them dominates the other). See Wikipedia.

• Yep, so in fact, we could use this knowledge to say that the Fibonacci numbers are growing in $\Theta(\phi^n)$. That way, if there is an algorithm where a Fibonacci-like pattern shows up, we know that it is an exponential algorithm. Or if there is a combinatorial problem whose answer involves Fibonacci numbers, we could give some asymptotics for it.
– Agnishom Chattopadhyay Nov 24 '17 at 16:11

The recurrence $F_{i+2}=F_i+F_{i+1}$ gives rise to a simple recurrence relation for the ratio of successive terms $\frac{F_{i+1}}{F_i}\in\Bbb R\cup\{\infty\}$, namely $\frac{F_{i+2}}{F_{i+1}}=\frac{F_i}{F_{i+1}}+\frac{F_{i+1}}{F_{i+1}}=(\frac{F_{i+1}}{F_i})^{-1}+1$, which is of order $1$. The dynamics of the map $x\mapsto x^{-1}+1$ on $\Bbb R\cup\{\infty\}$ is quite simple: the map has two fixed points $\frac{1+\sqrt5}2$ and $\frac{1-\sqrt5}2$, of which the first is attractive, and the second is repulsive. For any sequence satisfying the recursion and with initial ratio not exactly equal to the second fixed point, the ratio of successive terms will converge to the first fixed point as $n\to\infty$; this is in particular the case for the Fibonacci sequence.

Let $f_n$ be the original Fibonacci sequence ($f_0=0, f_1=1$). Then any sequence $F$ defined as $$F_1=a\\F_2=b\\F_{n+2}=F_{n+1}+F_n$$ can be written as $$F_n=\frac{(3a-b)f_n-(a-b)L_n}2$$ where $L_n$ is the $n$-th Lucas number.

Now, knowing that the Fibonacci sequence is a recurrence equation, it can be solved like this: $$x^2=x+1\implies x_{1,2}=\frac{1\pm\sqrt5}2$$ Now, we can use these solutions to construct the solution for our recurrent equation (see the "how to solve recurrence equations" article on Wikipedia): $$f_n=\frac{x_1^n-x_2^n}{x_1-x_2}$$ Knowing it, you will see that $F$ is always an exponential sequence and therefore the limit of its two consecutive elements always exists.

• Your final formula is wrong: the denominator should be $x_1-x_2=\sqrt5$, not $x_1+x_2=1$. But in any case this has nothing to do with what comes before, the "therefore" is completely incomprehensible. – Marc van Leeuwen Nov 24 '17 at 16:50
• @MarcvanLeeuwen. Thank you very much for pointing that out. Plus sign instead of minus sign was just a typo. I also updated the above sentence and explained that it is a part of the algorithm for solving recurrent equations. Hope it is better now. If you have any other suggestions, just let me know and I'll update the post accordingly. Thanks! – user499203 Nov 24 '17 at 17:34
• @MarcvanLeeuwen. BTW, I would suggest you to reconsider your vote, as the question is now updated and the typo is fixed. – user499203 Nov 24 '17 at 18:20
• @ThePirateBay voting is anonymous, you cannot associate votes with comments unless the commenter explicitly says so. – Weather Vane Nov 24 '17 at 19:49
• @ThePirateBay Since you insist, I shall reconsider the rewritten answer. It has three parts. First a general solution $F_n$ of the recurrence is written as a linear combination of two particular solutions $f_n$ and $L_n$ (not surprising for a linear relation of order $2$). Then a fresh quadratic equation in a fresh quantity $x$ is solved, correctly. Finally the general form of $f_n$ is expressed in terms of the solutions to that equation, but with no explanation of a relation to the equation, and referring to Wikipedia for an explanation. The parts are not logically related at all. – Marc van Leeuwen Nov 25 '17 at 7:38

Denote the limit with $L$. Then: $$\lim_{n\to\infty} \frac{F_{n+1}}{F_n}=L \Rightarrow \lim_{n\to\infty} \frac{1}{\frac{F_{n+2}-F_{n+1}}{F_{n+1}}}=L \Rightarrow \frac{1}{L-1}=L \Rightarrow L=\frac{1+\sqrt{5}}{2}=\phi.$$

• Hmmm, and this addresses the question because? – Did Nov 24 '17 at 14:33
• @Did, my point is to show the limit approaches not a random number, but the golden ratio, i.e. their relation. Like the relation of Fibonacci numbers with binomial coefficients.
– farruhota Nov 25 '17 at 3:43
• ...Which is known to the OP, it seems, and not related to what they are actually asking. – Did Nov 25 '17 at 8:26
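For completeness, the closed form given in the answer above yields the limit directly (a worked version of that last step, not part of any original reply): since $x_1=\frac{1+\sqrt5}{2}$ and $|x_2|=\left|\frac{1-\sqrt5}{2}\right|<1$, the term $x_2^n\to0$, so

$$\frac{f_{n+1}}{f_n}=\frac{x_1^{n+1}-x_2^{n+1}}{x_1^{n}-x_2^{n}}\;\xrightarrow[n\to\infty]{}\;x_1=\frac{1+\sqrt5}{2}.$$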
# Parse tree

A parse tree or parsing tree[1] or derivation tree or (concrete) syntax tree is an ordered, rooted tree that represents the syntactic structure of a string according to some context-free grammar. The term parse tree itself is used primarily in computational linguistics; in theoretical syntax the term syntax tree is more common.

Parse trees are distinct from the abstract syntax trees used in computer programming, in that their structure and elements more concretely reflect the syntax of the input language. They are also distinct from (although based on similar principles to) the sentence diagrams (such as Reed-Kellogg diagrams) sometimes used for grammar teaching in schools. Parse trees are usually constructed based on either the constituency relation of constituency grammars (phrase structure grammars) or the dependency relation of dependency grammars. Parse trees may be generated for sentences in natural languages (see natural language processing), as well as during processing of computer languages, such as programming languages.

A related concept is that of phrase marker or P-marker, as used in transformational generative grammar. A phrase marker is a linguistic expression marked as to its phrase structure. This may be presented in the form of a tree, or as a bracketed expression. Phrase markers are generated by applying phrase structure rules, and themselves are subject to further transformational rules.

## Constituency-based parse trees

The constituency-based parse trees of constituency grammars (= phrase structure grammars) distinguish between terminal and non-terminal nodes. The interior nodes are labeled by non-terminal categories of the grammar, while the leaf nodes are labeled by terminal categories. The image (not reproduced here) represents a constituency-based parse tree; it shows the syntactic structure of the English sentence John hit the ball.

The parse tree is the entire structure, starting from S and ending in each of the leaf nodes (John, hit, the, ball). The following abbreviations are used in the tree:

• S for sentence, the top-level structure in this example
• NP for noun phrase. The first (leftmost) NP, a single noun "John", serves as the subject of the sentence. The second one is the object of the sentence.
• VP for verb phrase, V for verb, D for determiner, and N for noun (this part of the list was truncated in the source; the abbreviations are inferred from their use below).

Each node in the tree is either a root node, a branch node, or a leaf node.[2] A root node is a node that doesn't have any branches on top of it. Within a sentence, there is only ever one root node. A branch node is a mother node that connects to two or more daughter nodes. A leaf node, however, is a terminal node that does not dominate other nodes in the tree. S is the root node, NP and VP are branch nodes, and John (N), hit (V), the (D), and ball (N) are all leaf nodes. The leaves are the lexical tokens of the sentence.[3]

A node can also be referred to as a parent node or a child node. A parent node is one that has at least one other node linked by a branch under it. In the example, S is a parent of both N and VP. A child node is one that has at least one node directly above it to which it is linked by a branch of the tree. From the example, hit is a child node of V. The terms mother and daughter are also sometimes used for this relationship.

## Dependency-based parse trees

The dependency-based parse trees of dependency grammars[4] see all nodes as terminal, which means they do not acknowledge the distinction between terminal and non-terminal categories. They are simpler on average than constituency-based parse trees because they contain fewer nodes.
## Dependency-based parse trees

The dependency-based parse trees of dependency grammars[4] see all nodes as terminal, which means they do not acknowledge the distinction between terminal and non-terminal categories. They are simpler on average than constituency-based parse trees because they contain fewer nodes. The dependency-based parse tree for the example sentence above is as follows:

This parse tree lacks the phrasal categories (S, VP, and NP) seen in the constituency-based counterpart above. Like the constituency-based tree, however, it acknowledges constituent structure: any complete sub-tree of the tree is a constituent. Thus this dependency-based parse tree acknowledges the subject noun John and the object noun phrase the ball as constituents, just as the constituency-based parse tree does.

The constituency vs. dependency distinction is far-reaching. Whether the additional syntactic structure associated with constituency-based parse trees is necessary or beneficial is a matter of debate.

## Phrase markers

Phrase markers, or P-markers, were introduced in early transformational generative grammar, as developed by Noam Chomsky and others. A phrase marker representing the deep structure of a sentence is generated by applying phrase structure rules; this may then undergo further transformations. Phrase markers may be presented in the form of trees (as in the section on constituency-based parse trees above), but are often given instead in the form of bracketed expressions, which occupy less space. For example, a bracketed expression corresponding to the constituency-based tree given above may be something like:

$[_S\ [_{NP}\ John]\ [_{VP}\ [_V\ hit]\ [_{NP}\ the\ [_N\ ball]]]]$

As with trees, the precise construction of such expressions and the amount of detail shown can depend on the theory being applied and on the points that the author wishes to illustrate.

## Notes

1. ^ See Chiswell and Hodges 2007: 34.
2. ^ See Carnie (2013:118ff.) for an introduction to the basic concepts of syntax trees (e.g. root node, terminal node, non-terminal node, etc.).
3. ^ See Aho et al. 2007.
4. ^ See for example Ágel et al. 2003/2006.

## References

• Ágel, V., Ludwig Eichinger, Hans-Werner Eroms, Peter Hellwig, Hans Heringer, and Hennig Lobin (eds.) 2003/6. Dependency and valency: An international handbook of contemporary research. Berlin: Walter de Gruyter.
• Aho, Alfred et al. 2007. Compilers: Principles, techniques, & tools. Boston: Pearson/Addison Wesley.
• Carnie, A. 2013. Syntax: A generative introduction, 3rd edition. Malden, MA: Wiley-Blackwell.
• Chiswell, Ian and Wilfrid Hodges 2007. Mathematical logic. Oxford: Oxford University Press.
# $4350 is invested into three parts at simple interest

$4350 is invested into three parts at simple interest so that the amounts received after 1, 2 and 3 years respectively in each part are equal. Find the amount invested for 3 years, if the rate of interest is 10% p.a.

(1) $1320  (2) $1430  (3) $1540  (4) $1560  (5) $1650

Solution: let $x$, $y$, $z$ be the amounts invested for 3, 2 and 1 years respectively. The matured amounts must be equal:

$x + x \cdot 3 \cdot 0.1 = y + y \cdot 2 \cdot 0.1 = z + z \cdot 1 \cdot 0.1 \implies 1.3x = 1.2y = 1.1z$

so $x : y : z = \tfrac{1}{1.3} : \tfrac{1}{1.2} : \tfrac{1}{1.1} = 132 : 143 : 156$. With $x + y + z = 4350$ this gives $x = 4350 \cdot \tfrac{132}{431} \approx 1332.25$, which matches none of the answer choices exactly. With a total of $4310, the same ratio gives exactly $x = 1320$, $y = 1430$, $z = 1560$ (all of which appear among the choices), so the stated total is presumably a typo for $4310 and the intended answer is (1) $1320.
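A quick numeric check of this working in Python (a sketch; the ratio 132 : 143 : 156 comes from the solution above):

```python
total = 4350                       # total invested, from the problem statement
parts = [132, 143, 156]            # x : y : z derived from 1.3x = 1.2y = 1.1z
unit = total / sum(parts)          # sum(parts) = 431

x, y, z = (p * unit for p in parts)
print(round(x, 2), round(y, 2), round(z, 2))   # 1332.25 1443.27 1574.48
# All three parts mature to the same amount at 10% simple interest:
print(round(x * 1.3, 2), round(y * 1.2, 2), round(z * 1.1, 2))  # ~1731.93 each
```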
# Conditional probability with permutations

Hello, this problem looks very simple and I conjecture it's true, but I have a hard time proving it. It'd be very useful for my work (I'm doing a PhD) and I'll be glad to cite you in a future article if you help me.

Let $P$ be a random permutation of $\mathbb{Z}/N\mathbb{Z}$, conditioned to satisfy $q$ equations: $P(a_i)=b_i$, $i\leq q$. Let $k_0, k_1$ be random and $x_1, x_2, y_1, y_2$ fixed numbers with $x_1\neq x_2$, $y_1\neq y_2$.

Prove that:

$$\Pr[P(x_2+k_0)=y_2+k_1 \mid P(x_1+k_0)=y_1+k_1] \geq \left(1-\frac{q}{N}\right) \frac{1}{N-1}$$

Thank you!
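While waiting for a proof, a Monte Carlo sanity check may be useful. This is a sketch; the values of $N$, $q$, and the fixed points below are illustrative choices of mine, not from the question:

```python
import random

N, q, trials = 11, 2, 200_000
a, b = [0, 1], [3, 4]              # the q constraint equations P(a_i) = b_i
x1, x2, y1, y2 = 2, 5, 6, 7        # fixed, with x1 != x2 and y1 != y2
free_in  = [v for v in range(N) if v not in a]
free_out = [v for v in range(N) if v not in b]

hits = cond = 0
for _ in range(trials):
    out = free_out[:]
    random.shuffle(out)
    perm = dict(zip(a, b))          # random permutation respecting the constraints
    perm.update(zip(free_in, out))
    k0, k1 = random.randrange(N), random.randrange(N)
    if perm[(x1 + k0) % N] == (y1 + k1) % N:
        cond += 1
        if perm[(x2 + k0) % N] == (y2 + k1) % N:
            hits += 1

print("estimated conditional probability:", hits / max(cond, 1))
print("conjectured lower bound:          ", (1 - q / N) / (N - 1))
```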
# NRC Deals With Application Surge, Proliferation Threat

By Michael Lucibella

In a recent speech, the Chair of the Nuclear Regulatory Commission emphasized the importance of maximizing the safety and security of nuclear power plants. Currently, the Commission is inundated with the most construction applications in three decades.

Speaking at the Brookings Institution in Washington, NRC Chair Gregory Jaczko said that one of the biggest challenges facing the agency is the large number of applications for new power plants. When he first took the post he expected to see one or two new applications; however, the agency is currently considering 13 applications, down from a peak of 18. “It’s a significant change for the agency to have this many applications in front of them,” Jaczko said. He added that he felt the agency was properly prepared to address the swell of applications and had increased the size of its staff accordingly.

There has been a big push on the part of the Obama administration to encourage the development of nuclear power as a viable alternative to fossil fuels. It increased loan guarantees for new nuclear power plants from the $18.5 billion authorized in 2005 to $54 billion. Despite this push, only one plant has completed the licensing process and begun construction. Jaczko said he was more concerned with having an effective approval process than with the number of plants built. “I think our focus is: If there are plants, they are safe. How many there are is up to the utilities,” Jaczko said.

He emphasized that the 1979 accident at Three Mile Island in Pennsylvania was a turning point for the nuclear power industry. Since then, with the exception of the one currently under construction, no new nuclear power plants have been built. As a result of the accident, the agency worked to establish better management to implement regulations at plants. “I think fundamentally those improvements led to an improvement in safety,” Jaczko said. “The issue of a safety culture is an important issue for the agency.”

Though no major accidents at facilities have occurred since Three Mile Island, Jaczko cautioned against institutional overconfidence at power plants. “We need to be wary of the view that just because it hasn’t happened in the past, it can’t happen in the future,” Jaczko said. “The core of that is instilling the right safety culture in every facility.”

Another concern over nuclear safety is the potential misuse of the technology for the proliferation of nuclear weapons. The start of construction by General Electric of a plant in North Carolina to enrich uranium using a new process called SILEX has attracted such concerns. Separation of Isotopes by Laser Excitation uses lasers to purify nuclear fuel by ionizing the atoms of the U-235 isotope; a charged plate then collects the charged uranium atoms. It is thought that this method would require less energy to enrich nuclear fuel than the existing method using centrifuges. Experts have raised concerns that a SILEX facility could be easily concealed from surveillance satellites by an unfriendly nation and used to create fuel for nuclear weapons.
In March, Francis Slakey, a professor at Georgetown University and APS’s Associate Director of Public Affairs, co-authored a letter in Nature calling for the NRC to conduct a proliferation risk assessment for any domestic company looking to license the technology. Jaczko said that the NRC was still considering the matter. “At this point the commission really hasn’t made a decision about this,” Jaczko said. He added that he thought that the current system in place was working well. “The question is whether you really can control the information and the material,” Jaczko said. “I believe our approach to these two questions is adequate.”

He added that it was the suppliers, not the reactors, that were the biggest source of concern about proliferation. The suppliers of nuclear fuel are more decentralized and, as a result, are becoming more of a focus for the NRC as it works to overhaul oversight of the entire fuel cycle. “It’s more of a challenge on the enrichment side,” Jaczko said.
Carol Jacoby is an active mathematics researcher. She recently completed a book for De Gruyter, entitled Abelian Groups: Classifications and Structures, with Prof. Peter Loth of Sacred Heart University. This book is targeted at mathematics graduate students and researchers.

Dr. Jacoby enjoys sharing her love of mathematics with a broad audience. She is the host of the podcast The Art of Mathematics, appearing weekly wherever you get your podcasts. Conversations, puzzles, book reviews, conjectures solved and unsolved, mathematicians and beautiful mathematics. No math background required. Join the conversation. Leave a comment. Suggest a puzzle, a topic or a person to interview. Leave a voice message at https://anchor.fm/the-art-of-mathematics or email her at [email protected].

Following are Carol Jacoby’s recent publications in peer-reviewed journals. (Note that LaTeX is used to encode the symbols in the titles.)

• C. Jacoby, Undefinability of local Warfield groups in $L_{\infty\omega}$, in Groups and Model Theory: A Conference in Honor of Ruediger Goebel’s 70th Birthday, Contemp. Math., Vol. 576, Amer. Math. Soc., Providence, RI (2012), pp. 151-162.
• C. Jacoby and P. Loth, Abelian groups with partial decomposition bases in $L_{\infty\omega}^\delta$, Part II, in Groups and Model Theory: A Conference in Honor of Ruediger Goebel’s 70th Birthday, Contemp. Math., Vol. 576, Amer. Math. Soc., Providence, RI (2012), pp. 177-185.
• C. Jacoby and P. Loth, $\mathbb{Z}_p$-modules with partial decomposition bases in $L_{\infty\omega}^\delta$, Houston J. Math. 40 (2014), no. 4, pp. 1007-1019.
• C. Jacoby and P. Loth, Partial decomposition bases and Warfield modules, Comm. Algebra 42 (2014), 4333-4349.
• C. Jacoby and P. Loth, Partial decomposition bases and global Warfield groups, Comm. Algebra 44 (2016), 3262-3277.
• C. Jacoby and P. Loth, The classification of $\mathbb{Z}_p$-modules with partial decomposition bases in $L_{\infty \omega}$, Arch. Math. Logic 55 (2016), 939-954.
• C. Jacoby and P. Loth, The classification of infinite abelian groups with partial decomposition bases in $L_{\infty \omega}$, Rocky Mountain J. Math. 47 (2017), 463-477.
• C. Jacoby, K. Leistner, P. Loth and L. Struengmann, Abelian groups with partial decomposition bases in $L_{\infty\omega}^\delta$, Part I, in Groups and Model Theory: A Conference in Honor of Ruediger Goebel’s 70th Birthday, Contemp. Math., Vol. 576, Amer. Math. Soc., Providence, RI (2012), pp. 163-175.
# How do you solve \frac { 1} { 2} x + 5\leq 8? Mar 30, 2018 $\frac{1}{2} x + 5 \le 8$ $\Rightarrow x \le 6$ #### Explanation: The algebraic rules for solving inequalities are almost exactly the same as for solving equations. $\frac{1}{2} x + 5 \le 8$ $\Rightarrow \frac{1}{2} x + \cancel{5} - \cancel{\textcolor{red}{5}} \le 8 - \textcolor{red}{5}$ $\Rightarrow \frac{1}{2} x \le 3$ $\Rightarrow \cancel{\textcolor{red}{2}} \cdot \frac{1}{\cancel{2}} x \le \textcolor{red}{2} \cdot 3$ $\Rightarrow x \le 6$
# Normally embedded subgroups of solvable $T$-groups

Definition: A subgroup $H$ of a group $G$ is said to be normally embedded in $G$ if each Sylow $p$-subgroup of $H$ is a Sylow $p$-subgroup of some normal subgroup $N$ of $G$.

Definition: A group $G$ is a $T$-group if normality is transitive in $G$, i.e. all subnormal subgroups of $G$ are normal in $G$.

The following is a classical result of Gaschütz on the structure of finite solvable $T$-groups.

Theorem: Let $G$ be a finite solvable $T$-group. Then (a) $G$ contains an abelian normal subgroup $L$, and (b) every subgroup of $G/L$ is normal in $G/L$.

The following is adapted from an exercise in Finite Soluble Groups by Doerk and Hawkes.

A finite group $G$ is a solvable $T$-group $\iff$ every subgroup is normally embedded in $G$.

I have managed to show the sufficiency condition. For the necessary condition, fix a subgroup $H$ of $G$. I need to show that $H$ is normally embedded in $G$. Let $P$ be a Sylow $p$-subgroup of $H$ for some prime $p$. By the theorem, we know $G$ has a normal abelian subgroup $L$. Suppose that $p$ does not divide $|L|$. Then it is clear that $P$ is a Sylow $p$-subgroup of $PL$. Moreover, $PL/L$ is normal in $G/L$, and so $PL$ is normal in $G$. In this case, $H$ is normally embedded in $G$. I'm not sure how to proceed in the case where $p$ divides $|L|$.

Answer: Let $L = R \times Q$, where $R$ is a $p'$-group and $Q$ is a $p$-group. Now $PQ$ is a $p$-group, so $P$ is subnormal in $PQ$, and hence $PR$ is subnormal in $PL$. But $PL \unlhd G$, so $PR$ is subnormal and hence normal in $G$, with $P \in {\rm Syl}_p(PR)$.
# VC cannot open include file: No such file or directory

Building a C/C++ (Win32) application with Visual Studio 2015, the compiler reports:

fatal error C1083: Cannot open include file: 'corecrt.h': No such file or directory

The application includes a standard header, and the error is raised from CRTDEFS.H (in "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\include"), which itself includes corecrt.h; a comment there notes that "The lack of #pragma once is deliberate." The CORECRT.H file is located in "C:\Program Files (x86)\Windows Kits\10\Include\10.0.10069.0\ucrt".

QUESTION: Shouldn't CRTDEFS.H have used '#include "corecrt.h"'?

Answer: With Visual Studio 2015 the C runtime was split out as the Universal CRT. See "Introducing the Universal CRT" (http://blogs.msdn.com/b/vcblog/archive/2015/03/03/introducing-the-universal-crt.aspx): "The headers, sources, and libraries are now distributed as part of a separate Universal CRT SDK." Your project is missing that directory (C:\Program Files (x86)\Windows Kits\10\Include\10.0.10069.0\ucrt) from its include path, so add it. On a correctly configured machine the ucrt folder is already in the INCLUDE environment variable; if the Windows Kits\10 folder is empty, install the Windows 10 SDK. See also the C Run-Time Library Reference: https://msdn.microsoft.com/en-us/library/59ey50w6%28v=vs.140%29.aspx. (If you drop the .h from <string.h> and use <string>, you will get the C++ string class header instead.)

• Try adding $(VCInstallDir)vc\include and $(VCInstallDir)vc\atlmfc\include yourself as workarounds. – Simple Jun 12 '14 at 14:49
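If it is unclear which SDK version is actually installed, a small diagnostic script can locate corecrt.h. This is a sketch of mine, not part of any build; the root path below is the default install location and may differ:

```python
import os

# Default install location of the Windows 10 SDK headers; adjust if the
# SDK was installed elsewhere.
root = r"C:\Program Files (x86)\Windows Kits\10\Include"

for dirpath, _dirs, files in os.walk(root):
    if "corecrt.h" in files:
        print(dirpath)   # e.g. ...\Include\10.0.10069.0\ucrt
```

Whatever directory this prints is the one to add to the project's include path.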
## Heisenberg

$\Delta p \Delta x\geq \frac{h}{4\pi }$

Joanne Kang 3I: What is the main point of Heisenberg's equation?

Jasmine Summers 4G: The idea that there are limits to how certain you can be about a particle's position and velocity at the same time. The more accurate your position measurement is, the less sure you become about velocity, and vice versa.

Haley Chun 4H: I agree with the person above. There are uncertainties in the position and momentum of a particle, since it can take multiple paths (the actual one cannot be determined accurately).

rohun2H: It is a measure of uncertainty, since the momentum and position of a particle cannot both be determined exactly at the same time.

Kehlin Hayes 4C: It is difficult to find the exact position of particles, particularly electrons, so the equation quantifies the uncertainty in that particle's momentum and position.
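To make the bound concrete, here is a small numeric illustration: the electron mass and Planck's constant are standard values, while the velocity uncertainty is an assumed example value, not from the thread:

```python
import math

h = 6.626e-34        # Planck's constant, J*s
m_e = 9.109e-31      # electron mass, kg
dv = 1.0e3           # assumed uncertainty in velocity, m/s

dp = m_e * dv                      # uncertainty in momentum
dx_min = h / (4 * math.pi * dp)    # minimum uncertainty in position
print(f"{dx_min:.2e} m")           # about 5.8e-08 m, tens of nanometres
```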
dy/dx = e^x^2, find coordinates of any turning points - QuestionCove

OpenStudy (anonymous): dy/dx = e^(x^2). Find the coordinates of any turning points.

OpenStudy (debbieg): Turning points would mean that the graph of the function goes from increasing to decreasing or vice versa, which means that there would be a local min or max, which means that the 1st derivative would = 0. So... for what value of x would $\large e^{x^2}=0$?

OpenStudy (anonymous): And then how do I get the point(s)?

OpenStudy (debbieg): Well, if/when you find values of x that solve that equation, you would need the anti-derivative. But don't worry about that just yet. What do you think about the solutions to the equation $\large e^{x^2}=0$? Hmmmm?

OpenStudy (isaiah.feynman): That equation doesn't seem to have real solutions.

OpenStudy (debbieg): That's correct. :) Because x^2 is always non-negative, and when you take a positive number (e) to a real power, you won't ever get 0.

OpenStudy (anonymous): The answer is supposed to be 18.

OpenStudy (debbieg): How so? That isn't the coordinates of a point. Are you sure that you're reading the problem correctly? I mean, you said "find coordinates of any turning points"... so how is "18" the coordinates of a turning point?

OpenStudy (isaiah.feynman): [whiteboard drawing] @yaysocks
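For what it's worth, a one-line symbolic check (a sketch using SymPy, my choice of tool) confirms that $e^{x^2}=0$ has no real solutions, so the curve has no turning points:

```python
from sympy import S, exp, solveset, symbols

x = symbols("x", real=True)
print(solveset(exp(x**2), x, domain=S.Reals))   # EmptySet: no turning points
```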
# ed25519 ssh public key is always 80 characters long?

I am creating some SSH keys using ed25519, something like:

$ ssh-keygen -t ed25519
$ ssh-keygen -o -a 10 -t ed25519
$ ssh-keygen -o -a 100 -t ed25519
$ ssh-keygen -o -a 1000 -t ed25519

But I notice that the output of the public key is always the same size (80 characters). Is there an option/parameter, like when creating RSA keys, that may influence the length of the key, or by design will it always be 80 characters?

Answer: First note that only the last 43 characters of your sample public keys are variable. As this is Base64 encoding, they can at most encode $43\cdot 6=258$ bits of information, which is enough to fit the 255-bit $y$-coordinate and 1 bit for the sign of the $x$-coordinate (this is called point compression).

Not only are the curve and field fixed by the scheme, but OpenSSH (sensibly) uses a fixed-size encoding for them. Your public key is a point on the curve, so it can only have a maximal length of 256 bits (80 characters in SSH encoding); compressed keys will never be longer than 256 bits, and would not usually be much shorter, assuming keys are randomly generated, as they should be for security anyway. The bottom line is that ed25519 private keys (seeds) are always 32 bytes and you can't change it; this is a fundamental property of the algorithm. Finally, note that a well-designed 255-bit elliptic curve is estimated to be as secure as 3072-bit RSA, so any need for longer keys may, no offense, be more psychological than practical.

Background: in cryptography, Curve25519 is an elliptic curve offering 128 bits of security (256-bit key size), designed for use with the elliptic-curve Diffie-Hellman (ECDH) key-agreement scheme. It is one of the fastest ECC curves and is not covered by any known patents. An ed25519 key starts out as a 32-byte seed; this seed is hashed with SHA-512 to produce 64 bytes (a couple of bits are flipped too). The first 32 bytes are used to generate the public key (which is also 32 bytes), and the last 32 bytes are used in the generation of the signature.
SSH public-key authentication uses asymmetric cryptographic algorithms to generate two key files – one "private" and the other "public". The contents of the public key file should be added to ~/.ssh/authorized_keys on all machines where the user wishes to log in using public-key authentication. Keep in mind that older SSH clients and servers may not support Ed25519 keys.

Go's ed25519 package documents the same sizes. However, unlike RFC 8032's formulation, this package's private key representation includes a public key suffix, to make multiple signing operations with the same key more efficient; the package refers to the RFC 8032 private key as the "seed". The relevant constants and types from the package source:

```go
package ed25519

const (
	// PublicKeySize is the size, in bytes, of public keys as used in this package.
	PublicKeySize = 32
	// PrivateKeySize is the size, in bytes, of private keys as used in this package.
	PrivateKeySize = 64
	// SignatureSize is the size, in bytes, of signatures generated and verified by this package.
	SignatureSize = 64
	// SeedSize is the size, in bytes, of private key seeds.
	// These are the private key representations used by RFC 8032.
	SeedSize = 32
)

// PublicKey is the type of Ed25519 public keys.
type PublicKey []byte
```

There are several different implementations of the Ed25519 signature system, and they each use slightly different key formats. For Rust there is ed25519-dalek, a fast and efficient implementation of Ed25519 key generation, signing, and verification (90,985 downloads per month; used in 500 crates, 109 directly; 97KB, 848 lines).
An Ed25519 public key is the compressed encoding of an $(x, y)$ point on the Ed25519 Edwards curve, obtained by multiplying the basepoint by a secret scalar derived from the private key; it is encoded according to section 7 of RFC 8410. Some software may store keys in formats not conformant with RFC 8410 (e.g. by storing the private key and public key together), so if you have loaded a key into something else, that might explain where a 64-byte "private key" is coming from.

From section 5.1.5 of RFC 8032: the private key is 32 octets (256 bits, corresponding to $b$) of cryptographically secure random data; see RFC 4086 for a discussion about randomness. In particular, an Ed25519 private key is hashed, and then one half of the digest is used as the secret scalar, while the other half is used, together with a message $M$ of arbitrary size, in deriving the nonce.

RFC 8410 specifies algorithm identifiers and ASN.1 encoding formats for elliptic-curve constructs using the curve25519 and curve448 curves. The key-agreement algorithms covered are X25519 and X448; the signature algorithms covered are Ed25519 and Ed448.

Use, in order of preference: Ed25519 (for which the key size never changes); ECDSA with secp256r1 (for which the key size never changes); RSA with 2048-bit keys.

For comparison: a common RSA 2048-bit public key provides a security level of 112 bits, while ECDSA requires only 224-bit public keys to provide the same 112-bit security level. An Ed25519 key is only 256 bits in size, yet its cryptographic strength is comparable to a 4096-bit RSA key, and the public key is compact: about 68 characters in an OpenSSH public-key line, compared to 544 characters for RSA 3072.
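As a quick cross-check of these sizes, here is a minimal sketch using the pyca/cryptography package (the library choice is mine, not the page's):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sk = Ed25519PrivateKey.generate()

seed = sk.private_bytes(                      # the RFC 8032 "seed"
    serialization.Encoding.Raw,
    serialization.PrivateFormat.Raw,
    serialization.NoEncryption(),
)
pub = sk.public_key().public_bytes(           # compressed curve point
    serialization.Encoding.Raw,
    serialization.PublicFormat.Raw,
)
sig = sk.sign(b"attack at dawn")

print(len(seed), len(pub), len(sig))          # 32 32 64
```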
Performance: Ed25519 is designed for fast single-signature verification; the software takes only 273364 cycles to verify a signature on Intel's widely deployed Nehalem/Westmere lines of CPUs. (This performance measurement is for short messages; for very long messages, verification time is dominated by hashing time.) Generating the key is also almost as fast as the signing process, and the reference implementation is public-domain software.

In practice: generate a key pair with ssh-keygen -t ed25519, and optionally run ssh-add -K ~/.ssh/id_ed25519 to add it to the agent and store the passphrase in the keychain. Note that Ed25519 keys are much smaller in size than the 2048-bit RSA public keys that would normally be used, for example, for DKIM.
Work backwards. Simplify the game by choosing a smaller target and working out a winning strategy. Investigate complements of $6$. Play the game against the computer and think about the computer's strategy.
# Some simple distributed algorithms for sparse networks

@article{Panconesi2001SomeSD,
  title={Some simple distributed algorithms for sparse networks},
  author={A. Panconesi and R. Rizzi},
  journal={Distributed Computing},
  year={2001},
  volume={14},
  pages={97-100}
}

Published 2001 · Computer Science, Engineering · Distributed Computing

Summary. We give simple, deterministic, distributed algorithms for computing maximal matchings, maximal independent sets and colourings. We show that edge colourings with at most $2\Delta-1$ colours, and maximal matchings, can be computed within ${\cal O}(\log^* n + \Delta)$ deterministic rounds, where $\Delta$ is the maximum degree of the network. We also show how to find maximal independent sets and $(\Delta+1)$-vertex colourings within ${\cal O}(\log^* n + \Delta^2)$ deterministic rounds. All…
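For intuition about the object being computed, here is a sequential greedy sketch of a maximal matching. This only illustrates the definition; it is not the paper's distributed ${\cal O}(\log^* n + \Delta)$-round algorithm:

```python
def greedy_maximal_matching(edges):
    """Scan edges once, keeping any edge whose endpoints are both unmatched."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On a 4-cycle the result is maximal: every dropped edge touches a matched vertex.
print(greedy_maximal_matching([(1, 2), (2, 3), (3, 4), (4, 1)]))  # [(1, 2), (3, 4)]
```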
# Can the overuse of geothermal energy become an environmental concern?

At what power output would we be using so much geothermal energy that we cool the core enough to endanger the Earth's magnetic field and have to stop using it? Is this a conceivable concern for a future energy crisis?

-

No, this seems highly unlikely. According to online sources:

Average human power consumption in 2008 was estimated at 15 TW.
Total annual heat loss from the Earth due to the surface heat flux is estimated at 44.2 TW.
Estimates of the electricity-generating potential of geothermal energy around the world are consistently less than 2 TW.

So although the average power consumption is a surprisingly large fraction of the total geothermal heat flux, the part that might possibly be extracted for power is quite small. There are much more concerning environmental impacts of geothermal power development than freezing the core and destroying the Earth's magnetic field. These include increased rates of release of carbon and sulfur into the atmosphere, and land use issues.

-

However unlikely, commercially invalid, or otherwise infeasible with today's technology, there is a number of watts that, if removed from the earth's core, will solidify it. +1 for other 'more concerning environmental impacts'. – Mazura Aug 1 '14 at 0:54

@Mazura, a key point though is you cannot draw an arbitrary wattage from the core with geothermal energy. You can only get some fraction of the heat flux that is already conducting up to the surface anyway. So you aren't really affecting the core cooling rate. – ZSG Nov 19 '14 at 5:56

Mark, not sure what you mean by increased rates of carbon and sulphur released, can you say more? – a different ben Nov 25 '14 at 22:45

Regarding land use issues, footprints per kW of current geothermal power plants are amongst the smallest of all electricity-producing industries. – a different ben Nov 25 '14 at 22:47

@a different ben - Because C & S gas species are in volcanic/geothermal emissions, there are measurable emissions of these from a geothermal power plant. Should this be regulated in the same way as a carbon emission from a coal-fired plant? It is measured at a 'stack'. On the other hand it may be 'natural' in the sense that it was going to be emitted from a fumarole anyway. My understanding is that this is treated by regulatory agencies as the same. Perhaps this is a confusion, but I believe this has in fact had an impact on the licensing of geothermal power plants in places such as Hawaii. – Mark Rovetta Nov 27 '14 at 2:00

This is a bit of a what-if kind of answer, with some rough estimates. Just considering the cooling aspect - let's pretend that Earth is solid for a moment. The time-dependent heat equation tells us how long it will take for a change in temperature to travel a certain distance by conduction (the 'characteristic timescale'). The distance to the outer core, where the magnetic field is generated, is about 3000 km. Assume that the geothermal installation is at the surface, and that it loses its heat instantaneously. Then the characteristic timescale is given by

$$\tau = \frac{l^2}{\kappa}$$

Taking $l = 3 \times 10^{6}$ m as the distance to travel, and $\kappa = 1 \times 10^{-6}$ m$^2$/s as the mean thermal diffusivity of the rocks, it would take $2.85 \times 10^{11}$ years for the change in temperature due to extracting geothermal heat at Earth's surface to propagate to the outer core. The sun would have transitioned to a red giant by then.
The problem being of course that it's not all solid. Convection in the mantle would obliterate the temperature signal. I don't know how you'd go about calculating that. More practically, a deep geothermal project of the EGS flavour would be managed as a mining project. The reservoir would be used up within 20 years or so, then either abandoned or mothballed (until perhaps it reheated?). The maximum temperature drawdown for economic feasibility might be of the order of 50°C or less, depending on a boatload of factors. -
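A quick numerical check of that characteristic-timescale estimate (a minimal sketch; the values are the ones quoted in the answer above):

    # Back-of-the-envelope check of tau = l^2 / kappa from the answer.
    l = 3.0e6          # m, depth to the outer core
    kappa = 1.0e-6     # m^2/s, mean thermal diffusivity of rock
    tau_seconds = l**2 / kappa
    seconds_per_year = 60 * 60 * 24 * 365.25
    print(tau_seconds / seconds_per_year)  # ~2.85e11 years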
# Disjoint union of Baire spaces which is a Baire space

Suppose we have a family $A_\alpha$ of disjoint Baire spaces. Suppose additionally that each $A_\alpha$ is disjoint from the closure of the union of the other sets. Show that $\bigcup_{\alpha} A_\alpha$ is a Baire space.

I assume we can prove this by transfinite induction. Suppose the property holds for all $\beta < \alpha$; show it holds for $\alpha$. If $\alpha$ is a limit ordinal: we want to show that for every sequence of open dense subsets of $\bigcup_{\beta<\alpha} A_\beta$, the intersection is dense. Each of these open dense subsets of the union is the union of open dense subsets of each of the pieces, since they are all Baire spaces and, by the induction hypothesis, the union up to $\beta$ is a Baire space. Suppose the intersection were not dense. Then there exists an open set $U$ that the intersection of these open dense sets misses. There is some ordinal $\beta$ such that $U \subseteq A_\beta$, but this is a contradiction, since we assumed that the union up to $\beta$ was a Baire space. Now how do I handle the successor case? It seems harder.

-

I'm fairly certain it's simpler than that and doesn't require transfinite induction or anything nearly as hard. In this situation the $A_\alpha$ are open sets (each is the complement of the closure of the union of the others). So suppose $U_1, U_2, \dots$ are open dense subsets of the union. Then each intersection $U_i \cap A_\alpha$ is open and dense in $A_\alpha$. The countable collection has an intersection which is dense in each $A_\alpha$ by the Baire property. This implies that the countable intersection, call it $F$, is dense in the union $\bigcup_\alpha A_\alpha$. (Any point in the union belongs to one of the $A_\alpha$, and there is a point of $F \cap A_\alpha$ nearby, hence a point of $F$ nearby.)
Succeed with maths – Part 1

# 4 Commutative properties of multiplication and division

When you add two numbers together, the order does not matter – this is what it means to say that addition is commutative; so 2 + 4 is the same as 4 + 2. But what about multiplication and division? Is 3 × 2 the same as 2 × 3? Is 4 ÷ 2 the same as 2 ÷ 4?

To help with this we'll use a diagram. On the left it shows three rows of two dots (3 × 2), and on the right two rows of three dots (2 × 3).

Figure 7: Order of multiplication

The number of dots in both arrangements is the same, 6, and hence you can see that 3 × 2 = 2 × 3. However, you can't say the same for division, where order does matter. For example, if you divide £4.00 between two people, each person gets £2.00. If instead you need to divide £2.00 among four people, each person only gets £0.50. Division is not commutative.

Now that you've looked at the fundamentals of multiplication and division, you'll get a chance to apply them to a more everyday problem. The next activity uses both multiplication and division to solve a problem.

## Activity 5 How much paper?

Timing: Allow approximately 10 minutes

Try doing these on paper and then check your answers using a calculator. A college bookshop buys pads of legal paper in bulk to sell to students in the law department at a cheap rate.

a) Each pack of paper contains 20 pads. If the shop wants 1500 pads for the term, how many packs should be ordered?

You need to work out how many times 20 goes into 1500, so you need to divide. Number of packs = 1500 ÷ 20 = 75. Therefore, the bookshop needs to order 75 packs.

b) Each pack costs £25.00, but if the college orders over 50 packs it receives a discount of £2.50 on each pack. How much will the total cost be?

The shop will receive a discount of £2.50 per pack, since it will be buying more than 50 packs. Discount = £2.50 × 75 = £187.50. Cost without discount = 75 × £25 = £1875. Cost with discount = £1875 − £187.50 = £1687.50. The bill for the pads should be £1687.50.

c) How much will the shop need to sell each pad for to cover these costs?

You know that there are 1500 pads and the total cost is £1687.50, so you need to share this cost across all the pads, meaning you need to divide. Cost per pad = £1687.50 ÷ 1500 = £1.125 = £1.13 (to the nearest penny). Hence the shop will need to sell the pads for £1.13 to cover its costs (although it will be making 0.5p profit on each pad!).

The next activity is a slightly more complex problem, or puzzle, than you've encountered so far this week. You can use your calculator to help solve it, and remember to use the hints if you need to.

## Activity 6 The Great Malvern Priory

Timing: Allow approximately 10 minutes

The Great Malvern Priory in England is a church dating back more than 900 years. In 2011, there was a notice posted at the entry to the priory, reading: 'This Priory Church costs £3 every five minutes.' Visitors are encouraged to leave a donation of £2.50.

(a) To maintain the priory costs £3 every five minutes. Use a calculator to find out how much it costs in one year (365 days, as this is not a leap year).

### Comment

How many times do you have 'five minutes' in one hour? Use this to find the cost per hour. Now, how many hours are in a day, and how many days are in a year?
(a) There are 12 periods of five minutes in each hour and 24 hours in each day. Therefore, the cost for a year of 365 days will be £3 × 12 × 24 × 365 = £315 360. The cost of running the priory for a year is £315 360.

(b) In 2013, £1 was equivalent to about 1.55 US dollars. How much will it cost an American visitor (in US dollars) when they donate £2.50 to the Priory?

### Comment

(b) Since 2.5 × $1.55 = $3.875, it will cost the American visitor about $3.88 (rounded to the nearest cent).

(c) Calculate how much the priory would cost to run for a year in US dollars, using the same exchange rate as before (£1 costs $1.55).

### Comment

If you find this calculation difficult, then think of a simpler version first. £1 is equivalent to 1.55 US dollars, so £2 would give twice as many dollars – multiply by 2. So, how many dollars would you get for £315 360?

(c) $1.55 × 315 360 = $488 808, so the cost in US dollars is $488 808.

Now that you've covered the four basic operations (addition, subtraction, multiplication and division), you'll turn your attention to repeated multiplication of the same number: exponents, or powers.
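If you happen to have access to Python, the activity answers can be checked with a few lines of code (entirely optional; the numbers are the ones from the activities):

    # Verifying the Activity 5 and 6 arithmetic.
    print(1500 // 20)                     # 75 packs
    print(1687.50 / 1500)                 # 1.125 pounds per pad
    cost_per_year = 3 * 12 * 24 * 365     # 3 pounds every 5 min = 12 per hour
    print(cost_per_year)                  # 315360
    rate = 1.55                           # dollars per pound (2013 rate above)
    print(round(2.50 * rate, 2))          # 3.88 dollars for the donation
    print(round(cost_per_year * rate, 2)) # 488808.0 dollars per year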
# How to plot the normal distribution?

According to 'An Introduction to Probability Theory and Its Applications', Vol. 1 by Feller, the number of inversions in a random permutation satisfies a CLT for large $n$, with a specified mean and variance. Practically, though, I am interested in how to plot the figure of the normal distribution (what do I calculate for it?). I understand that the figure may depend on how large the numbers are. Any explanations to clarify the topic are very welcome. Thank you in advance.

• Do you want to plot the PDF $\frac{1}{\sigma\sqrt{2\pi}}\exp-\frac{(x-\mu)^2}{2\sigma^2}$ or the CDF $\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]$? Either way, the case $\mu=0,\ \sigma=1$ is worth using for definiteness. – J.G. Dec 20 '18 at 19:52
• The PDF one, which depends on the number of elements – Mikhail Gaichenkov Dec 20 '18 at 20:06

Here is how to plot the density function of $N(0,1)$:

$$f(x) = \frac{e^{-\frac{x^2}{2}}}{\sqrt{2 \pi}} .$$

In Mathematica, a one-liner:

    Plot[PDF[NormalDistribution[], x], {x, -4, 4}]

In Python, slightly more verbose:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-4, 4, 101)
    y = np.exp(-x*x/2) / np.sqrt(2*np.pi)
    plt.plot(x, y)
    plt.show()

• Thank you, Federico! How can I get N(0,1) if the mean and variance depend on the number of elements? – Mikhail Gaichenkov Dec 20 '18 at 19:56
• Translate and rescale: if $Z\sim N(\mu,\sigma)$, then $(Z-\mu)/\sigma\sim N(0,1)$. – Federico Dec 20 '18 at 19:58
• Could you add the figures here for mean $n(n-1)/4$, variance $(2n^3+3n^2-5n)/72$ at $n=10, 100, 1000$? – Mikhail Gaichenkov Dec 20 '18 at 20:04
• They don't look any different from the one I plotted. They are just translated and stretched. Only the numbers on the axes change – Federico Dec 20 '18 at 20:08
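A minimal sketch (mine, not from the thread) of the figures the last comments ask about, plugging the quoted inversion-count mean and variance into the translate-and-rescale recipe:

    # Plot the normal PDF with mean n(n-1)/4 and variance (2n^3+3n^2-5n)/72,
    # for n = 10, 100, 1000, as requested in the comments above.
    import numpy as np
    import matplotlib.pyplot as plt

    for n in (10, 100, 1000):
        mu = n * (n - 1) / 4
        sigma = np.sqrt((2 * n**3 + 3 * n**2 - 5 * n) / 72)
        x = np.linspace(mu - 4 * sigma, mu + 4 * sigma, 201)
        y = np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        plt.plot(x, y, label=f"n = {n}")
    plt.legend()
    plt.show()

As the answerer notes, each curve is the same bell shape, just translated and stretched; only the axis numbers change.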
# How to compute an iterated product in Sage

I am wondering how to define a function f(n, k, i) that will take inputs n, k, i and give me the following product:

$$\prod_{\ell = 1}^i {n + k - \ell \choose k}$$

So for $i = 2$, this would look like ${n + k - 1 \choose k}\cdot {n + k - 2 \choose k}$. Would I use a for loop to iteratively multiply in terms until all $i$ factors were accounted for?
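A for loop works, but a generator fed to a product does too. A minimal sketch in plain Python, where math.comb and math.prod stand in for Sage's own binomial and prod:

    # f(n, k, i) = product over l = 1..i of C(n + k - l, k)
    from math import comb, prod

    def f(n, k, i):
        return prod(comb(n + k - l, k) for l in range(1, i + 1))

    print(f(3, 2, 2))  # C(4, 2) * C(3, 2) = 6 * 3 = 18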
# General theorems of closure

## by Szolem Mandelbrojt

Written in English. Subjects: Functions. Other titles: Theorems of closure. Series: Rice Institute pamphlet, special issue, Nov. 1951; Monograph in mathematics. 71 p. Open Library OL16591412M.

1. FIXED POINT THEOREMS. Fixed point theorems concern maps $f$ of a set $X$ into itself that, under certain conditions, admit a fixed point, that is, a point $x \in X$ such that $f(x) = x$. The knowledge of the existence of fixed points has relevant applications in many branches of analysis.

This book is intended to give a serious and reasonably complete introduction to algebraic geometry, not just for (future) experts in the field. The exposition serves a narrow set of goals and necessarily takes a particular point of view on the subject. It has now been four decades since David Mumford wrote that algebraic geometry...

Classical General Equilibrium Theory (McKenzie): although general equilibrium theory originated in the late nineteenth century, modern elaboration and development of the theory began only in the 1930s and 1940s. This book focuses on the version of the theory developed in the second half of the twentieth century, referred to by Lionel McKenzie as the classical general equilibrium theory.

Classical theorems show that, under such an hypothesis, the behaviour of $M(r,f)$ is determined...

General Theory of Law and State: a reprint of the first edition of this classic work by Kelsen. The General Theory of Employment, Interest and Money, by John Maynard Keynes, is also catalogued here.

Among the best available reference introductions to general topology, this volume is appropriate for advanced undergraduate and beginning graduate students. Its treatment encompasses two broad areas of topology: "continuous topology," represented by sections on convergence, compactness, metrization and complete metric spaces, uniform spaces, and function spaces; and "geometric topology."

This is a list of theorems, by Wikipedia page. Most of the results below come from pure mathematics, but some are from theoretical physics, economics, and other applied fields.

In algebra, which is a broad division of mathematics, abstract algebra (occasionally called modern algebra) is the study of algebraic structures. Algebraic structures include groups, rings, fields, modules, vector spaces, and lattices. The term abstract algebra was coined in the early 20th century to distinguish this area of study from the other parts of algebra.

Two general remarks should be made at this point. The first one bears upon the difference between problems and theorems, a difference which can obviously be seen in the existence of the two labels Schliessungsprobleme and Schliessungssätze, and which shaped my explanations in the preceding sections. It echoes the distinction inherited from Greek Antiquity: problems primarily link to... (Author: François Lê.)

A collection of lecture notes aimed at graduate students, the first four chapters of Ratner's Theorems on Unipotent Flows can be read independently. The first chapter, intended for a fairly general audience, provides an introduction with examples that illustrate the theorems, some of their applications, and the main ideas involved in the proof.

Mathematics – Introduction to Topology, Winter. What is this? This is a collection of topology notes compiled by topology students at the University of Michigan in the Winter semester. Introductory topics of point-set and algebraic topology are covered in a series of five chapters.

Nowadays, studying general topology really resembles studying a language more than studying mathematics: one needs to learn a lot of new words, while proofs of most theorems are extremely simple. On the other hand, the theorems are numerous.

This book uses a powerful new technique, tight closure, to provide insight into many different problems that were previously not recognized as related. The authors develop the notion of weakly Cohen-Macaulay rings or modules and prove some very general acyclicity theorems. These theorems are applied to the new theory of phantom homology, which uses tight closure techniques.

The theorems of Berkeley mathematician Marina Ratner have guided key advances in the understanding of dynamical systems. Unipotent flows are well-behaved dynamical systems, and Ratner has shown that the closure of every orbit for such a flow is of a simple algebraic or geometric form.
In Ratner's Theorems on Unipotent Flows, Dave Witte Morris provides both an elementary introduction to these theorems and an account of the main ideas of their proofs.

General Topology by Shivaji University. This note covers the following topics: topological spaces, bases and subspaces, special subsets, different ways of defining topologies, continuous functions, compact spaces, first axiom spaces, second axiom spaces, Lindelöf spaces, separable spaces, T0 spaces, T1 spaces, T2 spaces, regular spaces and T3 spaces, normal spaces and T4 spaces.

Basic Theorems Regarding the Closure of Sets in a Topological Space. We will now look at some basic theorems regarding the closure of sets in a topological space. Theorem 1: Let $(X, \tau)$ be a topological space. Note that in general $\bar{A} \cap \bar{B} \not\subseteq \overline{A \cap B}$.

General Topology and Its Relations to Modern Analysis and Algebra II comprises papers presented at the Second Symposium on General Topology and its Relations to Modern Analysis and Algebra, held in Prague in September. The book contains expositions and lectures that discuss various subject matters in the field of general topology.

When I learned the subject, I found three books to be immensely useful. Royden's Real Analysis is a good general book and has nice problems. Bartle's Elements of Integration does the abstract theory of integration cleanly and concisely.

Foundations of General Topology presents the value of careful presentations of proofs and shows the power of abstraction. This book provides a careful treatment of general topology, organized into 11 chapters.

General Gelfand-Naimark theorem: if $A$ is a Banach algebra with involution such that $\|x^* x\| = \|x\|\,\|x^*\|$ for all $x \in A$, then $A$ is isometrically $*$-isomorphic to a closed (with respect to the norm topology) $*$-subalgebra of $B(H)$, the bounded operators on some Hilbert space $H$. The theorem that we shall prove here is the following version of the commutative case.

General Bezout-type theorems (Pinaki Mondal). Abstract: In this sequel to [9] we develop Bezout-type theorems for semidegrees (including an explicit formula for iterated semidegrees) and an inequality for subdegrees.

Basic Point-Set Topology: this means that $f(x)$ is not in $O$. On the other hand, $x_0$ was in $f^{-1}(O)$, so $f(x_0)$ is in $O$. Since $O$ was assumed to be open, there is an interval $(c,d)$ about $f(x_0)$ that is contained in $O$. The points $f(x)$ that are not in $O$ are therefore not in $(c,d)$, so they remain at least a fixed positive distance from $f(x_0)$.

Chapter 1 is the main part of the book. It is intended for a fairly general audience, and provides an elementary introduction to the subject by presenting examples that illustrate the theorem, some of its applications, and the main ideas involved in the proof. It should be largely accessible to second-year graduate students.

Contents include: General separability; Relative algebraic closure; Exercises; Noetherian rings; Principal ideals; Normalization theorems; Complete rings; Jacobian ideals; Serre's conditions; Affine and Z-algebras; Absolute integral closure; Finite Lying-Over and height.
The closure of the complement, $X - A$, is all the points that can be approximated from outside $A$. The points that can be approximated from within $A$ and from within $X - A$ are called the boundary of $A$: $\mathrm{bd}\,A = \bar{A} \cap \overline{X - A}$. There are many theorems relating these "anatomical features" (interior, closure, limit points, boundary) of a set.

Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review (Granino A. Korn, Theresa M. Korn). A reliable source of definitions, theorems, and formulas, this authoritative handbook provides convenient access to information from every area of mathematics.
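To make the definitions concrete, here is a small illustration (my own, not from any of the books above) computing closure, interior and boundary directly from the definitions in a finite topological space:

    # Closure = smallest closed superset; interior = largest open subset;
    # boundary = cl(A) intersect cl(X - A), as in the text above.
    X = {1, 2, 3, 4}
    opens = [set(), {1}, {1, 2}, {1, 2, 3}, X]   # a topology on X
    closeds = [X - U for U in opens]             # complements of the open sets

    def closure(A):
        return set.intersection(*[C for C in closeds if A <= C])

    def interior(A):
        return set.union(*[U for U in opens if U <= A])

    A = {2, 3}
    bd = closure(A) & closure(X - A)
    print(closure(A), interior(A), bd)           # {2, 3, 4} set() {2, 3, 4}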
# 7 Mixed Operations

These functions perform conversions between the Stan containers matrix, vector, row vector, and arrays.

matrix to_matrix(matrix m) Return the matrix m itself.

matrix to_matrix(vector v) Convert the column vector v to a size(v) by 1 matrix.

matrix to_matrix(row_vector v) Convert the row vector v to a 1 by size(v) matrix.

matrix to_matrix(matrix m, int m, int n) Convert a matrix m to a matrix with m rows and n columns filled in column-major order.

matrix to_matrix(vector v, int m, int n) Convert a vector v to a matrix with m rows and n columns filled in column-major order.

matrix to_matrix(row_vector v, int m, int n) Convert a row vector v to a matrix with m rows and n columns filled in column-major order.

matrix to_matrix(matrix m, int m, int n, int col_major) Convert a matrix m to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).

matrix to_matrix(vector v, int m, int n, int col_major) Convert a vector v to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).

matrix to_matrix(row_vector v, int m, int n, int col_major) Convert a row vector v to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).

matrix to_matrix(real[] a, int m, int n) Convert a one-dimensional array a to a matrix with m rows and n columns filled in column-major order.

matrix to_matrix(int[] a, int m, int n) Convert a one-dimensional array a to a matrix with m rows and n columns filled in column-major order.

matrix to_matrix(real[] a, int m, int n, int col_major) Convert a one-dimensional array a to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).

matrix to_matrix(int[] a, int m, int n, int col_major) Convert a one-dimensional array a to a matrix with m rows and n columns filled in row-major order if col_major equals 0 (otherwise, they get filled in column-major order).

matrix to_matrix(real[,] a) Convert the two dimensional array a to a matrix with the same dimensions and indexing order.

matrix to_matrix(int[,] a) Convert the two dimensional array a to a matrix with the same dimensions and indexing order. If any of the dimensions of a are zero, the result will be a $$0 \times 0$$ matrix.

vector to_vector(matrix m) Convert the matrix m to a column vector in column-major order.

vector to_vector(vector v) Return the column vector v itself.

vector to_vector(row_vector v) Convert the row vector v to a column vector.

vector to_vector(real[] a) Convert the one-dimensional array a to a column vector.

vector to_vector(int[] a) Convert the one-dimensional integer array a to a column vector.

row_vector to_row_vector(matrix m) Convert the matrix m to a row vector in column-major order.

row_vector to_row_vector(vector v) Convert the column vector v to a row vector.

row_vector to_row_vector(row_vector v) Return the row vector v itself.

row_vector to_row_vector(real[] a) Convert the one-dimensional array a to a row vector.

row_vector to_row_vector(int[] a) Convert the one-dimensional array a to a row vector.

real[,] to_array_2d(matrix m) Convert the matrix m to a two dimensional array with the same dimensions and indexing order.

real[] to_array_1d(vector v) Convert the column vector v to a one-dimensional array.
real[] to_array_1d(row_vector v) Convert the row vector v to a one-dimensional array. real[] to_array_1d(matrix m) Convert the matrix m to a one-dimensional array in column-major order. real[] to_array_1d(real[...] a) Convert the array a (of any dimension up to 10) to a one-dimensional array in row-major order. int[] to_array_1d(int[...] a) Convert the array a (of any dimension up to 10) to a one-dimensional array in row-major order.
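As a cross-check outside Stan, NumPy's reshape reproduces the same column-major ("Fortran") fill order that to_matrix uses; the analogy below is mine, not part of the Stan manual:

    # Column-major fill (like Stan's to_matrix) vs. row-major fill.
    import numpy as np

    v = np.array([1, 2, 3, 4, 5, 6])
    print(np.reshape(v, (2, 3), order="F"))  # [[1 3 5], [2 4 6]]
    print(np.reshape(v, (2, 3), order="C"))  # [[1 2 3], [4 5 6]]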
# EWMA Covariance Matrix in Python

This motivated Zangari (1994) to propose a modification of UWMA called exponentially weighted moving average (EWMA) estimation. The pandas EW functions support two variants of exponential weights. One paper proposes a general multivariate exponentially weighted moving average chart in which the smoothing matrix is full, instead of one having only diagonal elements; such charts are considered for monitoring the variance-covariance matrix. Motor failure in multi-leaf collimators (MLC) is a common reason for unscheduled accelerator maintenance, disrupting the workflow of a radiotherapy treatment centre. Covariance is a measure of the degree to which two variables are linearly associated, and a covariance matrix (also known as the auto-covariance matrix, dispersion matrix, variance matrix, or variance-covariance matrix) can be estimated from sample vectors; $S$ is the sample covariance matrix. The estimated covariance rate between two variables $A$ and $B$ on day $n-1$ can be calculated as $\mathrm{cov}_n = \rho_{A,B}\,\sigma_A \sigma_B$. More concisely, we can define the whole correlation matrix by $\Gamma_t := D_t^{-1} \Sigma_t D_t^{-1}$ (the correlation matrix is the covariance matrix normalized with individual standard deviations; it has ones on its diagonal). By default, method = "unbiased": the covariance matrix is divided by one minus the sum of squares of the weights, so if the weights are the default ($1/n$) the conventional unbiased estimate of the covariance matrix with divisor $n - 1$ is obtained. The standard deviation is the square root of the variance, and NumPy's np.var can compute the variance directly. The approach is suitable for the simulation of very large portfolios, and with an updated mu and Sigma one can recalculate the efficient frontier. Python provides toolkits for machine learning and analysis, such as scikit-learn, numpy, scipy and pandas, with related data visualization using matplotlib, and comes in a number of varieties suitable for econometrics, statistics and numerical analysis. We adopted the Python DISPY distributed computation platform for computation assignment.
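As a quick illustration of the normalization $\Gamma_t = D_t^{-1}\Sigma_t D_t^{-1}$ above, a minimal NumPy sketch (the covariance values here are made up):

    # Turn a covariance matrix into the correlation matrix with unit diagonal.
    import numpy as np

    sigma = np.array([[0.04, 0.006],
                      [0.006, 0.09]])        # illustrative covariance matrix
    d = np.sqrt(np.diag(sigma))              # individual standard deviations
    gamma = sigma / np.outer(d, d)           # ones on the diagonal
    print(gamma)                             # [[1.  0.1], [0.1 1. ]]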
RiskMetrics uses a variation of these averaging techniques [1]. Exponentially weighted moving average (EWMA) is a popular IIR filter. An Individual moving range (I-MR) chart is used when data is continuous and not collected in subgroups; used together with the location MEWMA, a variability chart provides a way to satisfy Shewhart's dictum that proper process control monitor both mean and variability. One such control chart is called a Phase II $\chi^2$ control chart: with in-control mean vector $\mu_0$ and known covariance matrix $\Sigma_0$, it has an upper control limit of $L_u = \chi^2_{p,\,1-\alpha}$ (for example, the 99th percentile of a chi-square distribution with 20 degrees of freedom). In the case of individual observations, the covariance matrix is estimated according to Holmes and Mergen (1993). It is highly unlikely, a chance of about 1 in 370, that a data point $\overline{x}$, calculated from a subgroup of $n$ raw $x$-values, will lie outside three-sigma bounds. The EWMA feedback controller (with a fixed discount factor) is a popular run-by-run (RbR) control scheme which primarily uses data from past process runs to adjust the next run.

For portfolios, the returned data frame is the covariance matrix of the columns of the DataFrame. We will also require volatility for the Sharpe ratio, Sortino ratio, etc. It can be shown that the exponentially weighted moving average is a special case of the incremental normalized weighted mean formula, from which a formula for the exponentially weighted moving standard deviation also follows. A single covariance matrix is insufficient to describe the fine codependence structure among risk factors, as non-linear dependencies or tail correlations are not captured. Problem #2 is that you want to represent some sort of correlation structure among the assets. The smoothing parameter lambda must be greater than 0 and less than 1, and the recursion is driven by the squared return of the stock or index:

$$\sigma_n^2 = \lambda\,\sigma_{n-1}^2 + (1-\lambda)\,u_{n-1}^2,$$

where $\sigma_n^2$ is the variance estimate for day $n$, $\lambda$ the degree of weighting, and $u_{n-1}$ yesterday's return. An EWMA weighting factor can be adjusted in beginning periods to account for imbalance in relative weightings (viewing EWMA as a moving average). Marginal VaR is defined as the additional risk that a new position adds; the covariance matrix can also be estimated with a factor model. Recall that under a factor model the variance of the portfolio is expressed as

$$\mathrm{var}(R_p) = \sum_{k,l} X_k^p F_{kl} X_l^p + \sum_n w_n^2\,\mathrm{var}(u_n), \qquad (15)$$

where $F$ is the factor covariance matrix (FCM) of the returns of the factors, and $\mathrm{var}(u_n)$ is the variance of the specific returns. A covariance matrix is always positive semidefinite, but changes can occur in either the location or the variability of the correlated multivariate quality characteristics, calling for charts that track both. First, I calculate the asset return covariance matrix over a 250-week window (250 weeks is approximately 5 years); the way it works is that we compute a variance-covariance matrix for each asset pair, and then the returns are adjusted based on the correlation matrix. Ledoit and Wolf: Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. Here is my best attempt at grouping my numerous blog posts on systematic trading; I have written two books on systematic trading (Systematic Trading, 2015).

In his blog, Rick Wicklin introduced a Cov() function in SAS/IML to create the sample covariance matrix for a given matrix. This is the complete Python code to derive the population covariance matrix using the numpy package (the original snippet was cut off; the last two lines are the natural completion):

    import numpy as np

    A = [45, 37, 42, 35, 39]
    B = [38, 31, 26, 28, 33]
    C = [10, 15, 17, 21, 12]

    data = np.array([A, B, C])
    covMatrix = np.cov(data, bias=True)  # bias=True: population covariance (divide by n)
    print(covMatrix)

This differs from the behaviour in S-PLUS, which corresponds to method = "ML" and does not divide by $n-1$. NumPy can likewise calculate corrcoef for large multispectral images.
But when I calculate the eigenvalues (with np.linalg.eig) I sometimes see negative eigenvalues, even though a covariance matrix is supposed to be symmetric and positive semidefinite. Exponentially Weighted Covariance Matrix in Python: I need code in MATLAB or Python, callable from a macro in Excel, to calculate an EWMA covariance matrix for up to 250 variables. I have weekly return data in ascending order. In pandas, the std() function is used to find the standard deviation of a series, and if you're on pandas ≥ 0.20 you'll get a MultiIndex DataFrame from windowed covariance calls, because Panel is deprecated. Worked example: there are several ways to extend the EWMA model to generate predictions. One of the simplest is something like this: compute the EWMA of the time series and use the last point as an intercept.

[2] Standard errors assume that the covariance matrix of the errors is correctly specified. Covariance is a measure of the relationship between the variability of two variables; covariance is scale dependent because it is not standardized. Sometimes EWMA has a higher reversion rate than GARCH(1,1) and sometimes a lower one; in fact, EWMA is the particular case of GARCH(1,1) in which the reversion rate is zero. The difference between the EWMA and SMA methods within the VCV approach lies in the calculation of the underlying volatility of returns. Python and R both offer exponentially weighted averages (EWMA) and ARIMA autoregressive moving average models for predicting time series; the arch package, for instance, added a rescale parameter to arch_model that allows the estimator to rescale data if that helps parameter estimation. Calculations for GARCH(1,1) and EWMA can be done on separate sheets of the same Excel file. Under the variance-covariance approach, the scalar EWMA recursion is

$$\sigma_n^2 = \lambda\,\sigma_{n-1}^2 + (1-\lambda)\,U_{n-1}^2,$$

and an analyst using the EWMA model with a given $\lambda$ updates correlation and covariance rates in the same recursive fashion. A test of covariance-matrix forecasting methods. Journal of Empirical Finance, 10:603-621.
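A minimal sketch of the scalar recursion just quoted (the choice of seed is an assumption; practitioners initialize it in several ways):

    # EWMA variance recursion: var_n = lam * var_{n-1} + (1 - lam) * u_{n-1}^2
    def ewma_variance(returns, lam=0.94):
        var = returns[0] ** 2            # seed with the first squared return
        for u in returns[1:]:
            var = lam * var + (1.0 - lam) * u * u
        return var

    print(ewma_variance([0.01, -0.02, 0.015]))  # 0.00012442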
Its weighting scheme replaces the quandary of how much data to use with a similar quandary as to how aggressive a decay factor $\lambda$ to use. The basic object is a timestamp. Let us define $\sigma_t$ as the volatility of a market variable on day $t$ as estimated at the end of day $t-1$. Exponentially weighted moving average estimation is widely used, but it does not attempt to model market conditional heteroskedasticity any more than UWMA does. The exponentially-weighted moving average (EWMA) model calculates covariances by placing more emphasis on recent observations via a decay factor $\lambda$. More specifically, we say that $r_t - \mu \sim \mathrm{EWMA}(\lambda)$ if

$$\Sigma_{t+1} = (1-\lambda)\,(r_t-\mu)(r_t-\mu)' + \lambda\,\Sigma_t,$$

where V-Lab uses $\lambda = 0.94$, the parameter suggested by RiskMetrics for daily returns, and $\mu$ is the sample average of the returns. The EWMA model is a special case of the IGARCH(1,1) model where volatility innovations have infinite persistence, and RiskMetrics is actually a special case of the GARCH approach. During some periods a particular volatility or correlation may be unusually high or low; the distinctive feature of these models is that they recognize that volatilities and correlations are not constant. When adjust=True (the default), the pandas EW functions are calculated using weights $w_i = (1 - \alpha)^i$, and pandas' cov() can be used to compute covariance between series (excluding missing values); date indexes such as pd.date_range('1/29/2000', periods=6, freq='D') make it easy to index simulated return series. One Stack Overflow question defines an ewma_cov_pd(rets, alpha=...) helper built from pandas objects and takes the iloc[-1] entry as the current estimate, noting: "I like the flexibility of using Pandas objects and functions but when the set of assets grows the function becomes very slow." The EWMA chart (Exponentially Weighted Moving Average) is a variable-data control chart that blends the current data point with an average of the previous data points. bob said: Congratulations, you have just identified problem #1 with MC VaR. A simulation using the standard Monte Carlo approach is not capable of predicting scenarios during times of crisis if the covariance matrix was estimated over a calm period.
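The matrix recursion above translates almost one-for-one into NumPy. A sketch, where the sample-covariance seed and the $\lambda = 0.94$ default are the assumptions:

    # Sigma_{t+1} = (1 - lam) (r_t - mu)(r_t - mu)' + lam Sigma_t
    import numpy as np

    def ewma_cov(returns, lam=0.94):
        r = np.asarray(returns, dtype=float)
        mu = r.mean(axis=0)                  # sample average of the returns
        sigma = np.cov(r, rowvar=False)      # seed with the sample covariance
        for row in r:
            d = row - mu
            sigma = (1.0 - lam) * np.outer(d, d) + lam * sigma
        return sigma

    rets = np.random.default_rng(0).normal(0, 0.01, size=(250, 3))
    print(ewma_cov(rets))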
The Exponentially Weighted Moving Average (EWMA) was used to estimate the current variance in a setting where it might have a changing over time. Hands-On Data Analysis with Pandas: A Python data science handbook for data collection, wrangling, analysis, and visualization [2 ed. WAX-ML is a research-oriented Python library providing tools to design powerful machine learning algorithms and feedback loops working on streaming data. 6: Histogram of price increments of DAX and Dow Jones stock indices between. EWMA is a particular case of GARCH (1,1) where the reversion rate is zero. Use the exponential weighted covariance matrix from risk_models and exponential weighted historical returns function from expected_returns to calculate Sigma . The rugarch package contains a set of functions to work with the standardized conditional distributions implemented. WAX-ML makes JAX-based programs easy to use for end-users working with. When there are active constraints, that is, , the variance-covariance matrix is given by where and. Note that the cumulative statistics is also a windowed with n = k. EWMA covariance matrix using pandas. About Filter Kalman Sklearn Python. • Fotran90 to Python • SQLite with Python • EWMA smoothing length Indeed, a covariance matrix is supposed to be symmetric and positive-definite. mgo (1) m k ( n) = 1 n ∑ i = k − n + 1 k x i = 1 n S k ( n) Below for k = n we use the notation X k ( k) = X k. python - covariance isn't positive definite - Stack Overflow. Attended by more than 6,000 people, meeting activities include oral presentations, panel sessions, poster presentations, continuing education courses, an exhibit hall (with state-of-the-art statistical products and opportunities), career placement services, society and section business. PCA starts with computing the covariance matrix Whitening We have used PCA to reduce the dimension of the data. fj3 Sklavounos, Edoh, and Plytas applied EWMA and CUSUM control charts for Root to Local (R2L) intrusion and ${\rm{\Sigma }}$ was the covariance matrix with diagonal as 1 and data transformation, and communications among Fog nodes. A positive value for the covariance indicates the variables have a linear relationship. • Wrote Python code to forecast covariance matrix based on the in-sample data with both MA and EWMA method and implement optimization algorithm on in-sample data to construct the ETF using no. President Kissell Research Group and Adjunct Faculty Member Gabelli School of Business, Fordham University Manhasset, NY, United States. As the persistence parameter under EWMA is lowered, which of the following would be true: A. jgb The resulting fitted equation from Minitab for this model is: Progeny = 0. cases of individual observations the covariance matrix is estimated according to Holmes and Mer- gen(1993). It is a matrix in which i-j position defines the correlation between the i th and j th parameter of the given data-set. Titus 2 is a Portable Format for Analytics (PFA) implementation for Python 3. (i) the exponentially weighted moving average (EWMA) model; for an N × N variance-covariance matrix Ω to be internally consistent is. covariance (str, optional) – The method used to estimate the covariance matrix: The default is ‘hist’. n_components: int: Number of states in the model. fm In other words we should use weighted least squares with weights equal to 1 / S D 2. 1pa com (python/data-science news) Python Musings #7: Simulating FSAs in lieu of real postal code data. 
SAS topics include data management, manipulation, cleaning, macros, and matrix computations. (This is a change from versions prior to 0. More details on these plans will be discussed in later editions of the RiskMetrics Monitor. The difference between variance, covariance, and correlation is: Variance is a measure of variability from the mean. Python中的ARIMA模型、SARIMA模型和SARIMAX模型对时间序列预测. As a part of a statistical analysis engine, I need to figure out a way to identify the presence or absence of trends and seasonality patterns in a given set of time series data. Exponentially Weighted Moving Average Change Detection. date_range ('1/29/2000', periods=6, freq='D') ts2 = Series (randn (6), index=rng) ts2. fit_transform(data) Though a simple Google search for python ZCA Whitening gives an answer LW is the Ledoit and Wolf method, ROB is the robust method from the MASS package and EWMA an. The most straightforward method is to choose some historical data for your n assets, generate the covariance matrix on the excess returns (perhaps by using. Mar 17, 2020 Expected portfolio volatility= SQRT (WT * (Covariance Matrix) * W). (1991) as well as Shamma and Shamma (1992) proposed the double EWMA (DEWMA) scheme which is the extended version of Roberts (1959)’s EWMA scheme where the smoothing parameter is applied twice to further improve the sensitivity of the EWMA scheme towards very small shifts. Tracking the tracker: Time Series Analysis in Python from First Principles. To account for this, an exponentially weighted moving average (EWMA) is taken for each asset. rand (2, 2) print data cov = calcCov (data) eigvals, eigvec = np. Optional: To show the process mean and sigma. A columnar udf object is defined by ts. Python Pandas - Descriptive Statistics. Clustering based on similarity . Sensor data quality plays a vital role in Internet of Things (IoT) applications as they are rendered useless if the data quality is bad. This window shifts forward for each new data point. Here is an example of Matrix-based calculation of portfolio mean and variance: When $$w$$ is the column-matrix of portfolio weights, $$\mu$$ the column-matrix of expected returns, and $$\Sigma$$ the return covariance matrix We talk a lot about the importance of diversification, asset allocation, and portfolio construction at Listen Money Matters. 20, you'll get a MultiIndex DataFrame because Panel is deprecated. 1 Idempotent and Nilpotent Matrices. The exponential covariance matrix: gives more weight to recent data. The setting of the lines and characters is demonstrated in the example programs below. o7b Covariance matrices: The inter-class covariance matrix (equal to the unbiased covariance matrix for the means of the various classes), the intra-class covariance matrix for each of the classes (unbiased), the total intra-class covariance matrix, which is a weighted sum of the preceding ones, and the total covariance matrix calculated for all. 53 qq 3 So EWMA (1) = 40 EWMA for time 2 is as follows EWMA (2) = 0. Long-run Covariance Estimation; Python 3. Abstract Accurate calculation of the Average Run Length (ARL) for exponentially weighted moving average (EWMA) charts might be a tedious task. Python gaussian_kde - 已找到30个示例。这些是从开源项目中提取的最受好评的scipystatskde. The covariance is normalized by N-ddof. exponentially weighted covariance. String describing the type of covariance parameters used by the model. n_features: int: Dimensionality of the Gaussian emissions. The result is shown in Figure 1. 
For risk modelling, the EWMA covariance model assumes a specific parametric form for the conditional covariance. More specifically, we say that $r_t - \mu \sim \text{EWMA}(\lambda)$ if

$$\Sigma_{t+1} = (1-\lambda)\,(r_t-\mu)(r_t-\mu)^{\top} + \lambda\,\Sigma_t .$$

V-Lab uses $\lambda = 0.94$, the parameter suggested by RiskMetrics for daily returns, and $\mu$ is the sample average of the returns. The univariate special case is the familiar EWMA variance recursion $\sigma_n^2 = \lambda\,\sigma_{n-1}^2 + (1-\lambda)\,u_{n-1}^2$. The EWMA is widely used in finance, the main applications being technical analysis and volatility modelling.

Expected portfolio volatility is $\sqrt{w^{\top}\Sigma\,w}$, where $w$ is the column vector of portfolio weights and $\Sigma$ the return covariance matrix; the covariance of two portfolio returns, each denoted by its own set of weights $w_a$, $w_b$, can likewise be found using matrix algebra. The most straightforward way to obtain $\Sigma$ is to choose some historical data for your n assets and generate the covariance matrix of the excess returns. In purely risk-based portfolio construction, instead of using both risk and return information as in the Markowitz portfolio selection, the portfolio is constructed using only measures of risk. A covariance matrix can be converted to a correlation matrix by dividing elementwise by the outer product of the volatility vector (correlation = covariance / np.outer(v, v)); going further, more elaborate constructions convert to a correlation matrix W and then "twist" it so that the correlations of the individual models sit in the diagonal blocks, and random-matrix tools such as numerical integration of the Marchenko-Pastur distribution are used to denoise large estimates.

The variance-covariance VaR method makes a number of assumptions, and different methodologies can be tested to obtain the variance-covariance matrix, in particular the historical moving-average model, the EWMA model, and the DCC-GARCH(1,1) model. In one such exercise the GARCH variance-covariance matrix lacked robustness, so the matrix obtained through the EWMA was chosen as the model input. For a systematic comparison, see Valeriy Zakamulin's 2015 paper "A Test of Covariance-Matrix Forecasting Methods"; the estimation and forecasting of the covariance matrix of asset returns is central to many areas of finance, such as asset allocation and risk management. Kevin Sheppard's MFE Toolbox for MATLAB and his arch package for Python implement EWMA and GARCH (Sheppard also used to host an implementation of the RiskMetrics 2006 EWMA covariance matrix, though it is no longer easy to find, and a RiskMetrics 2006 EWMA for Python exists), and Jon Danielsson's "Financial Risk Forecasting" covers EWMA and GARCH in R and MATLAB, and now Python too.

On the pandas side, cov(min_periods=None, ddof=1) computes the pairwise covariance of columns, excluding NA/null values, and a rolling computation yields one covariance matrix per period (e.g. covs[3] is the covariance matrix as of period 4, which could be indexed by a DatetimeIndex). Pandas objects and functions are flexible, but when the set of assets grows such a function becomes very slow; one practical report: "after exhausting my options, I ended up converting a MATLAB matrix calculation to Python code, and it does the vol-with-decay calculation perfectly in matrix form."
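The recursion above is also easy to vectorize by hand. A rough NumPy sketch follows; seeding the recursion with the sample covariance of an initial window is my choice here, not something prescribed by the text.

import numpy as np

def ewma_cov(returns, lam=0.94, seed_window=50):
    # Sigma_{t+1} = (1 - lam) * r_t r_t' + lam * Sigma_t, on demeaned returns.
    r = returns - returns.mean(axis=0)   # mu: sample average, as in RiskMetrics
    T, N = r.shape
    sigmas = np.empty((T, N, N))
    sigma = np.cov(r[:seed_window].T)    # seed: sample covariance (arbitrary)
    for t in range(T):
        sigma = (1 - lam) * np.outer(r[t], r[t]) + lam * sigma
        sigmas[t] = sigma
    return sigmas

rng = np.random.default_rng(0)
rets = rng.normal(0, 0.01, size=(500, 3))
covs = ewma_cov(rets)
v = np.sqrt(np.diag(covs[-1]))           # latest volatilities
print(covs[-1] / np.outer(v, v))         # covariance -> correlation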
At the library level, the arch package organizes each volatility model around an abstract compute_variance(parameters, resids, sigma2, backcast, var_bounds) method, which fills sigma2 with the conditional variance computed from a vector of mean-zero residuals; documentation built from the main branch is hosted on the author's GitHub pages. Volatility estimates are also required downstream, e.g. for the Sharpe ratio and the Sortino ratio. Typical lecture notes on GARCH(1,1) cover: introduction, stationarity, a central limit theorem, parameter estimation, tests, variants of the GARCH(1,1) model, and GARCH(1,1) in continuous time. In mixture-model and HMM libraries, a covariance_type string describes the covariance parameterization and must be one of 'spherical', 'tied', 'diag', or 'full'.

A worked EWMA example usually starts small: take 5 data points and a smoothing parameter a = 30% (0.3), and first calculate the smoothed values s1, s2, s3, s4, where c = 4, as shown in range F4:F7 of a spreadsheet. In control-chart practice, the reason for $c_n = \pm 3$ limits is that the total area between that lower and upper bound spans about 99.7% of a normal distribution; a multivariate chart for a process with in-control mean vector $\mu_0$ and variance-covariance matrix $\Sigma_0$ has an upper control limit of $L_u = \chi^2_{p,1-\alpha}$, MEWMA-type charts are designed for detecting small changes in any direction, and for charts based on individual observations the covariance matrix is estimated according to Holmes and Mergen (1993).

Finally, a common practical question: how do you compute the covariance C of n measurements of p quantities, where each individual quantity measurement is given its own weight? The key is to notice that the answer depends on what the weights mean (repeat counts versus reliability weights), generalizing the usual formula to normalized weights.
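For that weighted-covariance question, numpy.cov already accepts per-observation weights, so a sketch is short. The data here are simulated; fweights are integer repeat counts, while aweights (used below) are relative reliability weights.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))         # n = 100 measurements of p = 3 quantities
w = rng.uniform(0.1, 1.0, size=100)   # one reliability weight per measurement

C = np.cov(X, rowvar=False, aweights=w)
print(C.shape)  # (3, 3): symmetric and positive semi-definite up to rounding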
# Time comparison

I have a question on Einstein's theory of special relativity. They say that if two baby brothers are born on the same day, but one of them is put on a spaceship that travels at near the speed of light to another star, then after ten years of Earth's time he returns to Earth, but his own brother who stayed on Earth is 10 years older than him. My question is: the brother who stayed on Earth grows for ten years, so wouldn't the brother in the spaceship also grow for ten years, due to his metabolism?

##### Share on other sites

I would have thought the metabolism would have slowed down as well, due to the time dilation (from the point of view of the earthbound bro).

##### Share on other sites

It's because the time for the brother on the spaceship travelling near light speed is faster than the time for the brother on Earth. I would have thought that if we travel near light speed, we see other people as if they're stopped (because we travel very fast); at the contrary, we can't see the people who travel near light speed (because we are very slow). If the brother on the spaceship travels near light speed for 10 years, the brother on Earth is still in one second.

--

##### Share on other sites

Time for the brother in the spaceship moves more slowly.

##### Share on other sites

> It's because the time for the brother on the spaceship travelling near light speed is faster than the time for the brother on Earth.

Sorry, that above is wrong; the right statement is: "Time for the brother in the spaceship moves more slowly." Because time for the brother in the spaceship moves more slowly, it is as if time has stopped.

##### Share on other sites

In relativity everything is relative. The brother on Earth, Bob, sees his twin Bill aging more slowly because Bill is going near the speed of light. But Bill in the spaceship sees Bob on Earth going away from him at near the speed of light, so he sees him aging more slowly. That's the big paradox of relativity, and I don't know how it is resolved by the physicists.

##### Share on other sites

Of course the "big paradox" is resolved! Einstein presented it as a joke, as a koan, I've heard. Length contraction and time dilation are not the whole story of special relativity! There is also clock desynchronization to consider, and that element is essential to resolving any seeming paradox.

##### Share on other sites

Here's what I don't understand. I'm wearing a watch, and my watch is set to the exact same time as a big clock; even the second hands are in sync. Now I move away from the big clock at light speed. The big clock will appear to have stopped, as I'm travelling at the same rate as the photons that bounced off the clock face with the time data in them. But I look down at my wrist watch and my watch is working normally. I travel at this rate for 20 mins and return back to the big clock. Surely both would be back in perfect sync again, telling the same time, because both clock and watch were operating at the same rate, and the fact that it appeared to stop when moving was only an illusion created by seeing the same photons with the clock-face information in them for 20 mins.

##### Share on other sites

> That's the big paradox of relativity, and I don't know how it is resolved by the physicists.

That's not a paradox at all. The symmetry is broken because one has to accelerate to get back to the other.
##### Share on other sites

> ...because both clock and watch were operating at the same rate, and the fact that it appeared to stop when moving was only an illusion

You've got over 7300 posts here and you still think that time dilation is a mere illusion??? No, the time dilation effect occurs whether the movement is toward, away, or transverse to the observer. There goes your illusion, mister! Check out my website.

##### Share on other sites

I'm uncertain where you're coming from here. If you consider my 7300+ posts a qualification of my understanding of physics, then YOU are mistaken, mister. I tend to deal with chem and electronics as "my" area.

Having gotten that cleared up, my point (probably badly worded) is closer to this: somewhere in the galaxy, from a stationary standpoint, you can listen to I Love Lucy from the original TV broadcasts in the 50s. If you travel BACK towards the source of the signal (Earth), the signal will be compressed and seem faster until such time as you've "caught up" with what's being broadcast NOW. OK, inversely, if you travel AWAY and WITH the radio signal at the same speed, it will seem to "freeze"; the frequency will slow down to a stop. My point is that if you took the entire broadcast material with you and pressed "Play", to you it would all run at the same rate as transmitted on Earth. When you arrive back, all should still be in sync.

##### Share on other sites

My sincere apologies. As for the remainder of your reply, I'd have to say you are confusing the Doppler effect with Einstein's relativity, and they should rightly be dealt with separately. Also, that's twice now that you've described a scenario of someone moving relatively at lightspeed, which you should well know is an impossible feat. Better to choose a different relative speed.

##### Share on other sites

> My sincere apologies. As for the remainder of your reply, I'd have to say you are confusing the Doppler effect with Einstein's relativity, and they should rightly be dealt with separately.

Fair enough, on both counts. The Doppler effect I can understand and visualise; why/how does this not apply to light? I was certain the "red shift" was not only interesting and a known phenomenon, but also used to detect the relative speed of an object (celestial mostly). I'm wanting to know WHY, if you're "riding waves", time has to be affected in any way, other than in appearance alone. I.e., listening to "I Love Lucy" on a planet 50 light years away.

##### Share on other sites

> The Doppler effect I can understand and visualise; why/how does this not apply to light? I was certain the "red shift" was not only interesting and a known phenomenon, but also used to detect the relative speed of an object (celestial mostly).

Yes, the red shift is real and the Doppler effect applies to light, but when you see a very, very distant galaxy's radiation red-shifted, it is a compound effect -- some of it due to the Doppler effect and some of it due to relativistic time dilation.

> I'm wanting to know WHY, if you're "riding waves"...

Sorry, but I don't do illicit drugs.

> I.e., listening to "I Love Lucy" on a planet 50 light years away.

I fail to see what distance has to do with our current discussion. Check out my website.

##### Share on other sites

Distance has everything to do with it, assuming radio travels at c (and it does, without nitpicking).
Then listening to that would put you in a time frame of 50 years ago, just like the light from the Sun is about 8 mins old when WE see it on Earth. The closer we get to the Sun, the less TIME elapses between a solar event and our observation of it. The fact that it takes 8 mins to reach us is only a limitation of what we can see, because light is of a finite speed. An event that happens on the Sun NOW happens NOW no matter where you are; it just takes the light a little time to show you this, so that you can see it. But NOW is NOW regardless.

##### Share on other sites

> ...but NOW is NOW regardless

Ah, alas, that's what poor Sir Isaac thought. Tsk, tsk, tsk. Never wrestle with a pig, my friend.

##### Share on other sites

I'm unfamiliar with Newton's work other than the apple tree and some basics (equal and opposite reaction and stuff) and a few bits to do with rockets. The time lag between an event's cause and the event's observation is only due to the finite speed of light or sound etc... It doesn't mean the event DIDN'T happen then; it only means it took a while for the sound or light to reach us. Otherwise how can we say it happened 8 mins ago?

##### Share on other sites

You call yourself "The Resourceful One", so find some good websites on relativity and learn what it's all about. I personally favor explainrelativity.com, but that's just me... just me.

##### Share on other sites

Personal digs against me about trivia reflect nothing more than an inability to explain or converse in a manner suitable for all; in other words, a limitation on YOUR part. I admit my knowledge of physics (quantum, relativistic, etc.) is limited. Common courtesy is not, though! And FYI, I have seen MANY such sites; NONE as of yet has presented an understandable explanation of WHY "NOW" isn't a constant. It would appear that you cannot either!

##### Share on other sites

I don't know what "personal digs" you're referring to, and SURE, I can explain why "NOW" is not a constant. But I don't think this forum is the logical venue in which to begin a 3000-word dissertation, which is about what would be required to indoctrinate a total novice, which you readily admit that you are. THAT and ONLY that is why I instead -- politely -- referred you to other resources. [Harrumph!]

##### Share on other sites

> Here's what I don't understand. I'm wearing a watch, and my watch is set to the exact same time as a big clock... [the watch-and-big-clock scenario quoted in full]

I'm not sure how much this will help, but here we go... Visualise two mirrors facing each other, travelling through space along parallel lines. There's a single photon of light bouncing between those mirrors. When the mirrors are stationary, the photon just bounces up and down. When the mirrors start moving, the photon doesn't just bounce up and down; it has to travel forwards with the mirrors.
So as you trace the path of the photon, viewing it from the side, it's making an up/down "zig-zag" line. So the photon's having to travel that extra distance compared to the mirrors. Think of yourself walking along a quiet road; as you move along, you bounce yourself off the kerbs to the other kerb (maybe you've been drinking, again). It's going to take you longer to walk to the end of the road than it would going constantly straight forwards.

I'm not 100% clear on this next bit myself, but... for moving objects, travelling faster means they have to travel that extra diagonal distance too. I don't know the quantum mechanics of it, but I would assume that as electrons orbit the nucleus of the atom, they have to travel that extra distance like the photon does.

##### Share on other sites

MadScientist, I was totally with you in understanding until towards the end. Other than that, it's a good way to explain things like this to me. I'm far from stupid, but I prefer mental images to formulas and stuff. Most excellent effort!

Now then, Lightsword: it provides nothing of use as towards an answer of what "NOW" is, only an interpretation of it. "Because an event doesn't affect you at this moment, it hasn't happened" is basically what it's trying to say (or convince you of). I don't buy it, mate, I really don't, not by a long shot! If the Sun blows up NOW, it blows up NOW, end of chat. The fact we get the effects 8 mins later doesn't and shouldn't detract from the fact that it blew up when it did.

You hold a melon in your hand; I'm 1 km away with a rifle and a supersonic round. I shoot it; it explodes in your hands; a second or two later you hear the CRACK! from the gun. WHEN did I pull the trigger?

##### Share on other sites

Before I say anything else, I'll say this: never, ever do thought experiments involving anything but light (or the other massless particles) travelling at lightspeed. Many things take place (or not, as the case may be) at lightspeed that don't happen just under it. Special relativity doesn't replace the slowing down of the receiving of signals; it acts in addition to it. Time dilation is a result of the speed of light being constant for all observers.

##### Share on other sites

> You hold a melon in your hand; I'm 1 km away with a rifle and a supersonic round. I shoot it; it explodes in your hands; a second or two later you hear the CRACK! from the gun. WHEN did I pull the trigger?

It depends on who is doing the observing. About the only thing observers in moving frames of reference will agree on is that the bullet struck the melon after the trigger was pulled, since those are causally related.

##### Share on other sites

But surely there's no "frame of reference". I agree that there's no way it could have happened BEFORE I pulled the trigger, as that would be absurd, so sure, saying it occurred a short while after would be correct cause/effect. But if we use sound as the "frame of ref", then it's totally out of sync: a melon exploding makes a "THUD SPLASH" sound, and then a sharp "CRACK" sound a second later would be the observation. Why? Because sound travels slower than the projectile (you know all this anyway). The point being that sound travels at a finite speed and the reference frame can cock up results. Light travels at a finite speed also! Why is there any reason to believe that light is any different, and that the reference frame can't equally cock up results? Especially when the trigger was pulled at a given time.
An effect can't precede its cause in ANY case I'm aware of (I might be wrong, but I can't think of even a tiny case where this may happen; I'm no physicist). And the cause occurs NOW; whether we can detect/see/hear/feel it etc... is irrelevant, surely?
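For the record, the light-clock picture MadScientist gave earlier in the thread can be made quantitative. A minimal sketch, where $L$ is the mirror separation, $v$ the mirrors' speed, and $c$ the speed of light:

$$t_0 = \frac{L}{c} \qquad \text{(one crossing, in the mirrors' rest frame)}$$

$$(c\,t)^2 = L^2 + (v\,t)^2 \;\Rightarrow\; t = \frac{L/c}{\sqrt{1-v^2/c^2}} = \frac{t_0}{\sqrt{1-v^2/c^2}} \qquad \text{(the same crossing, seen from outside)}$$

So the moving clock ticks slower by the factor $\gamma = 1/\sqrt{1-v^2/c^2}$. This factor has nothing to do with the travel time of light signals to an observer's eye; it is what remains after the Doppler and signal-delay effects discussed above are stripped away.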
# 8.6: Assignment: Maximizing Utility of Pizzas and Twinkies

Suppose that as a consumer you have $34 per month to spend for munchies, either on pizzas, which cost $6 each, or on twinkies, which cost $4 each. Suppose further that your preferences are given by the following total utility table.

| Count | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TU for Pizza | 60 | 108 | 138 | 156 | 162 | 166 | 166 |
| TU for Twinkies | 44 | 76 | 100 | 120 | 136 | 148 | 152 |

First, graph the budget constraint with pizzas on the horizontal axis and twinkies on the vertical axis. What are the intercepts, and what is the slope (the opportunity cost)? Express the budget constraint as an algebraic equation for a line.

Next, should you purchase a twinkie first or a pizza first to get the "biggest bang for the buck"? How can you tell? (Hint: use the utility maximizing rule.) What should you purchase?

Next, use the utility maximizing rule to identify the consumer equilibrium, that is, what combination of twinkies and pizzas will maximize your total utility. (Hint: What should you purchase second, third, etc. until you exhaust your budget?)

Confirm that the consumer equilibrium generates the highest combined total utility of any affordable combination of goods. E.g., compute the total utility of some other affordable combinations of twinkies & pizzas and compare with the consumer equilibrium.

## Rubric

| Criteria | Not Evident | Developing | Proficient | Distinguished | Weight |
| --- | --- | --- | --- | --- | --- |
| Accurately graph the budget constraint, including intercepts and slope | | | | | 5 |
| Express the budget constraint as an algebraic equation for a line | | | | | 2 |
| Identify which product to purchase first and correctly explain why | | | | | 3 |
| Calculate the consumer equilibrium using the utility maximizing rule | | | | | 4 |
| Explain the process used to confirm that the consumer equilibrium generated the highest combined total utility of any affordable combination of goods | | | | | 4 |
| Articulation of response (citations, grammar, spelling, syntax, or organization that negatively impact readability and articulation of main ideas) | | | | | 2 |
| **Total** | | | | | __/20 |
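The utility-maximizing rule lends itself to a short script for checking answers. This is a rough Python sketch of the greedy "biggest bang for the buck" procedure the hints describe, using the table above; it is an illustration, not part of the assignment.

budget = 34
prices = {"pizza": 6, "twinkie": 4}
tu = {
    "pizza":   [0, 60, 108, 138, 156, 162, 166, 166],
    "twinkie": [0, 44, 76, 100, 120, 136, 148, 152],
}
counts = {"pizza": 0, "twinkie": 0}

def mu_per_dollar(good):
    # Marginal utility of the next unit, divided by its price.
    n = counts[good]
    if n + 1 >= len(tu[good]) or prices[good] > budget:
        return float("-inf")  # out of data, or can't afford another unit
    return (tu[good][n + 1] - tu[good][n]) / prices[good]

while max(mu_per_dollar(g) for g in prices) > float("-inf"):
    best = max(prices, key=mu_per_dollar)  # highest bang per buck next
    counts[best] += 1
    budget -= prices[best]

print(counts, "total utility:", sum(tu[g][counts[g]] for g in counts))
# Consumer equilibrium: 3 pizzas and 4 twinkies, spending all $34 (TU = 258).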
# Identically distributed vs P(X > Y) = P(Y > X) I've two related propositions which seem correct intuitively, but I struggle to prove them properly. ### Question 1 Prove or disprove: If $$X$$ and $$Y$$ are independent and have identical marginal distributions, then $$\mathbb{P} (Y > X) = \mathbb{P} (X > Y) = 1/2$$ Due to independence, the joint PDF of $$X$$ and $$Y$$ is the product of their marginal PDF: \begin{align} \mathbb{P} (Y > X) &= \int_{-\infty}^\infty \int_x^\infty p(x) \, p(y) \, dy \, dx \\ \mathbb{P} (X > Y) &= \int_{-\infty}^\infty \int_y^\infty p(x) \, p(y) \, dx \, dy = \int_{-\infty}^\infty \int_x^\infty p(y) \, p(x) \, dy \, dx \end{align} The last step is based on the fact that the integral won't change if we simply rename the integration parameters $$x$$ and $$y$$ consistently. So we have shown that $$\mathbb{P} (Y > X) = \mathbb{P} (X > Y)$$ Side note: Even if $$X$$ and $$Y$$ are dependent, this result still holds so long as their joint PDF is exchangeable i.e. $$p(x, y) = p(y, x)$$ Let $$u = y - x$$ so that $$\mathbb{P} (Y > X) = \int_{-\infty}^\infty \int_0^\infty p(x) \, p(u + x) \, du \, dx$$ I thought of applying Fubini's theorem but it doesn't help to show that it's equal to 1/2, so maybe it's not 1/2? Alternatively, consider that $$\mathbb{P} (Y > X) + \mathbb{P} (X > Y) + \mathbb{P} (Y = X) = 1$$ If we assume that $$\mathbb{P} (Y = X) = 0$$ then we can conclude that $$\mathbb{P} (Y > X) = 1/2$$. But is this assumption justified? ### Question 2 Prove or disprove: If $$X$$ and $$Y$$ are independent and $$\mathbb{P} (Y > X) = \mathbb{P} (X > Y)$$, then they have identical marginal distributions. If this statement is true, then is it still true if $$X$$ and $$Y$$ are dependent? @Xi'an provided a counter-example. Suppose that $$\begin{bmatrix} X \\ Y \end{bmatrix} \sim \mathcal{N} \left( \begin{bmatrix} \mu \\ \mu \end{bmatrix}, \begin{bmatrix} \sigma_1^2 & c \\ c & \sigma_2^2 \end{bmatrix} \right)$$ Then $$X-Y$$ and $$Y-X$$ have the same distribution: $$\mathcal{N} \left(0, \sigma_1^2 + \sigma_2^2 - 2c \right)$$ and hence $$\mathbb{P} (Y - X > 0) = \mathbb{P} (X - Y > 0)$$ However the marginal distributions of $$X \sim \mathcal{N} \left(\mu, \sigma_1^2\right)$$ and $$Y \sim \mathcal{N} \left(\mu, \sigma_2^2\right)$$ may be different. This result holds regardless of whether $$X$$ and $$Y$$ are independent. • One interesting technical point is that if the probability of $X=c$ is $0$ for every value of $c,$ that's not enough to entail that probabilities are given by integrating a density function. The standard counterexample is the Cantor distribution. But more to the point$\,\ldots\qquad$ – Michael Hardy Jan 6 '19 at 20:23 • $\ldots\,$is that I probably wouldn't solve this problem by considering such integrals anyway. – Michael Hardy Jan 6 '19 at 20:23 • What if X and Y are Bernoulli? Isn’t that a counterexample to P(X=Y)=0? – The Laconic Jan 7 '19 at 3:03 • When using Fubini's theorem and a density $p(\cdot)$ against the Lebesgue measure, $\mathbb{P} (Y = X) = 0$, necessarily. – Xi'an Jan 7 '19 at 9:22 This answer is written under the assumption that $$\mathbb{P}(Y=X)=0$$ which was part of the original wording of the question. Question 1: A sufficient condition for$$\mathbb{P}(Xis that $$X$$ and $$Y$$ are exchangeable, that is, that $$(X,Y)$$ and $$(Y,X)$$ have the same joint distribution. And obviously $$\mathbb{P}(Xsince they sum up to one. (In the alternative case that $$\mathbb{P}(Y=X)>0$$ this is obviously no longer true.) 
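Before proving Question 1, a quick Monte Carlo makes it plausible. A sketch, with NumPy assumed; for i.i.d. continuous $$X$$ and $$Y$$, both probabilities should come out near 1/2 and ties should essentially never occur.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)

print("P(X > Y) ~", np.mean(x > y))   # ~0.5
print("P(Y > X) ~", np.mean(y > x))   # ~0.5
print("P(X = Y) ~", np.mean(x == y))  # ~0.0 for continuous distributions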
Question 2: Take a bivariate normal vector $$(X,Y)$$ with mean $$(\mu,\mu)$$. Then $$X-Y$$ and $$Y-X$$ are identically distributed, no matter what the correlation between $$X$$ and $$Y$$, and no matter what the variances of $$X$$ and $$Y$$ are, and therefore (1) holds. The conjecture is thus false. I will show that the distribution of the pair $$(X,Y)$$ is the same as the distribution of the pair $$(Y,X).$$ That two random variables $$X,Y$$ are independent means that for every pair of measurable sets $$A,B$$ the events $$[X\in A], [Y\in B]$$ are independent. In particular for any two numbers $$x,y$$ the events $$[X\le x], [Y\le y]$$ are independent, so $$F_{X,Y}(x,y) = F_X(x)\cdot F_Y(y).$$ And the distribution of the pair $$(X,Y)$$ is completely determined by the joint c.d.f. Since it is given that $$F_X=F_Y,$$ we can write $$F_{X,Y}(x,y) = F_X(x)\cdot F_X(y).$$ This is symmetric as a function of $$x$$ and $$y,$$ i.e. it remains the same if $$x$$ and $$y$$ are interchanged. But interchanging $$x$$ and $$y$$ in $$F_{X,Y}(x,y)$$ is the same as interchanging $$X$$ and $$Y,$$ since $$F_{X,Y}(x,y) = \Pr(X\le x\ \&\ Y\le y).$$ Therefore (the main point): The distribution of the pair $$(X,Y)$$ is the same as the distribution of the pair $$(Y,X).$$ • I don't think "interchanging x and y in $F_{X,Y} (x,y)$ is the same as interchanging X and Y" because P(X ≤ x, Y ≤ y) ≠ P(X ≤ y, Y ≤ x) in general. Even if X and Y are independent, P(X ≤ x) P(Y ≤ y) ≠ P(X ≤ y) P(Y ≤ x) unless they are also identically distributed. – farmer Jan 7 '19 at 21:39 • @farmer : Start with $\Pr(X\le x\ \&\ Y\le y)$ and interchange $x$ and $y,$ and you get $\Pr(X\le y\ \&\ Y\le x).$ But if you start with the same thing and interchange $X$ and $Y,$ then you get $\Pr(Y\le x\ \&\ X\le y).$ The claim, then, is that $\Pr(X\le y\ \&\ Y\le x)$ is the same as $\Pr(Y\le x\ \&\ X\le y). \qquad$ – Michael Hardy Jan 8 '19 at 5:28
# Do you know its property?

Algebra Level 5

If the fundamental period of a continuous non-zero function $$f(x)$$ satisfying $\large f(x+1)+f(x-1)=\sqrt{\pi}\,f(x)$ is $$a_1a_2a_3a_4a_5a_6a_7.b_1b_2b_3b_4b_5b_6b_7$$, find the value of $$\displaystyle \sum_{i=1}^7 (a_i+b_i)$$.

Assumptions:

• $$0 \leq a_1,a_2,a_3,a_4,a_5,a_6,a_7,b_1,b_2,b_3,b_4,b_5,b_6,b_7 \leq 9$$
• $$a_1,a_2,a_3,a_4,a_5,a_6,a_7,b_1,b_2,b_3,b_4,b_5,b_6,b_7 \in \mathbb{Z}$$
IBDP Maths AI: Topic SL 5.4: Tangents and normals: IB style Questions SL Paper 1 Question The figure below shows the graphs of functions $$f_1 (x) = x$$ and $$f_2 (x) = 5 – x^2$$. (i) Differentiate $$f_1 (x)$$ with respect to x. (ii) Differentiate $$f_2 (x)$$ with respect to x.[3] a. Calculate the value of x for which the gradient of the two graphs is the same.[2] b. Draw the tangent to the curved graph for this value of x on the figure, showing clearly the property in part (b).[1] c. Markscheme (i) $$f_1 ‘ (x) = 1$$     (A1) (ii) $$f_2 ‘ (x) = – 2x$$     (A1)(A1) (A1) for correct differentiation of each term.     (C3)[3 marks] a. $$1 = – 2x$$     (M1) $$x = – \frac{1}{2}$$     (A1)(ft)     (C2)[2 marks] b. (A1) is for the tangent drawn at $$x = \frac{1}{2}$$ and reasonably parallel to the line $$f_1$$ as shown. (A1)     (C1)[1 mark] c. Question Consider the function $$f(x) = 2{x^3} – 5{x^2} + 3x + 1$$. Find $$f'(x)$$.[3] a. Write down the value of $$f'(2)$$.[1] b. Find the equation of the tangent to the curve of $$y = f(x)$$ at the point $$(2{\text{, }}3)$$.[2] c. Markscheme $$f'(x) = 6{x^2} – 10x + 3$$     (A1)(A1)(A1)     (C3) Notes: Award (A1) for each correct term and no extra terms. Award (A1)(A1)(A0) if each term correct and extra term seen. Award(A1)(A0)(A0) if two terms correct and extra term seen. Award (A0) otherwise.[3 marks] a. $$f'(2) = 7$$     (A1)(ft)     (C1)[1 mark] b. $$y = 7x – 11$$ or equivalent     (A1)(ft)(A1)(ft)     (C2) Note: Award (A1)(ft) on their (b) for $$7x$$ (must have $$x$$), (A1)(ft) for $$– 11$$. Accept $$y – 3 = 7(x – 2)$$ .[2 marks] c. Question Consider the function $$f(x) = \frac{1}{2}{x^3} – 2{x^2} + 3$$. Find $$f'(x)$$.[2] a. Find $$f”(x)$$.[2] b. Find the equation of the tangent to the curve of $$f$$ at the point $$(1{\text{, }}1.5)$$.[2] c. Markscheme $$\frac{{3{x^2}}}{2} – 4x$$     (A1)(A1)     (C2) Note: Award (A1) for each correct term and no extra terms; award (A1)(A0) for both terms correct and extra terms; (A0) otherwise.[2 marks] a. $$3x – 4$$     (A1)(ft)(A1)(ft)     (C2) Note: accept $$3{x^1} – {4^0}$$[2 marks] b. $$y = – 2.5x + 4$$ or equivalent     (A1)(ft)(A1)     (C2) Note: Award (A1)(ft) on their (a) for $$– 2.5x$$ (must have $$x$$), (A1) for $$4$$ or equivalent correct answer only. Accept $$y – 1.5 = – 2.5(x – 1)$$[2 marks] c. Question The function $$f(x)$$ is such that $$f'(x) < 0$$ for $$1 < x < 4$$. At the point $${\text{P}}(4{\text{, }}2)$$ on the graph of $$f(x)$$ the gradient is zero. Write down the equation of the tangent to the graph of $$f(x)$$ at $${\text{P}}$$.[2] a. State whether $$f(4)$$ is greater than, equal to or less than $$f(2)$$.[2] b. Given that $$f(x)$$ is increasing for $$4 \leqslant x < 7$$, what can you say about the point $${\text{P}}$$?[2] c. Markscheme $$y = 2$$.     (A1)(A1)     (C2) Note: Award (A1) for $$y = \ldots$$, (A1) for $$2$$. Accept $$f(x) = 2$$ and $$y = 0x + 2$$ a. Less (than).     (A2)     (C2)[2 marks] b. Local minimum (accept minimum, smallest or equivalent)     (A2)     (C2) Note: Award (A1) for stationary or turning point mentioned. No mark is awarded for $${\text{gradient}} = 0$$ as this is given in the question. c. Question The straight line, L, has equation $$2y – 27x – 9 = 0$$. a. Sarah wishes to draw the tangent to $$f (x) = x^4$$ parallel to L. Write down $$f ′(x)$$.[1] b. Find the x coordinate of the point at which the tangent must be drawn.[2] c, i. Write down the value of $$f (x)$$ at this point.[1] c, ii. Markscheme y = 13.5x + 4.5     (M1) Note: Award (M1) for 13.5x seen. 
gradient = 13.5     (A1)     (C2)[2 marks] a. 4x3     (A1)     (C1)[1 mark] b. 4x3 = 13.5     (M1) Note: Award (M1) for equating their answers to (a) and (b). x = 1.5     (A1)(ft)[2 marks] c, i. $$\frac{{81}}{{16}}$$   (5.0625, 5.06)     (A1)(ft)     (C3) Note: Award (A1)(ft) for substitution of their (c)(i) into x4 with working seen.[1 mark] c, ii. Question Consider $$f:x \mapsto {x^2} – 4$$. Find $$f ′(x)$$.[1] a. Let L be the line with equation y = 3x + 2. Write down the gradient of a line parallel to L.[1] b. Let L be the line with equation y = 3x + 2. Let P be a point on the curve of f. At P, the tangent to the curve is parallel to L. Find the coordinates of P.[4] c. Markscheme $$2x$$     (A1)     (C1)[1 mark] a. 3     (A1)     (C1)[1 mark] b. $$2x = 3$$     (M1) Note: (M1) for equating their (a) to their (b). $$x =1.5$$     (A1)(ft) $$y = (1.5)^2 – 4$$     (M1) Note: (M1) for substituting their x in f (x). (1.5, −1.75)   (accept x = 1.5, y = −1.75)     (A1)(ft)     (C4) Note: Missing coordinate brackets receive (A0) if this is the first time it occurs.[4 marks] c. Question Let $$f (x) = 2x^2 + x – 6$$ Find $$f'(x)$$.[3] a. Find the value of $$f'( – 3)$$.[1] b. Find the value of $$x$$ for which $$f'(x) = 0$$.[2] c. Markscheme $$f'(x) = 4x + 1$$     (A1)(A1)(A1)     (C3) Note: Award (A1) for each term differentiated correctly. Award at most (A1)(A1)(A0) if any extra terms seen.[3 marks] a. $$f'( – 3) = – 11$$     (A1)(ft)     (C1)[1 mark] b. $$4x + 1 = 0$$     (M1) $$x = – \frac{{1}}{{4}}$$     (A1)(ft)     (C2)[2 marks] c. Question The table given below describes the behaviour of f ′(x), the derivative function of f (x), in the domain −4 < x < 2. State whether f (0) is greater than, less than or equal to f (−2). Give a reason for your answer.[2] a. The point P(−2, 3) lies on the graph of f (x). Write down the equation of the tangent to the graph of f (x) at the point P.[2] b. The point P(−2, 3) lies on the graph of f (x). From the information given about f ′(x), state whether the point (−2, 3) is a maximum, a minimum or neither. Give a reason for your answer.[2] c. Markscheme greater than     (A1) Gradient between x = −2 and x = 0 is positive.     (R1) OR The function is increased between these points or equivalent.     (R1)     (C2) Note: Accept a sketch. Do not award (A1)(R0).[2 marks] a. y = 3     (A1)(A1)     (C2) Note: Award (A1) for y = a constant, (A1) for 3.[2 marks] b. minimum     (A1) Gradient is negative to the left and positive to the right or equivalent.     (R1)     (C2) Note: Accept a sketch. Do not award (A1)(R0).[2 marks] c. Question The figure shows the graphs of the functions $$f(x) = \frac{1}{4}{x^2} – 2$$ and $$g(x) = x$$ . Differentiate $$f(x)$$ with respect to $$x$$ .[1] a. Differentiate $$g(x)$$ with respect to $$x$$ .[1] b. Calculate the value of $$x$$ for which the gradients of the two graphs are the same.[2] c. Draw the tangent to the parabola at the point with the value of $$x$$ found in part (c).[2] d. Markscheme $$\frac{1}{2}x{\text{ }}\left( {\frac{2}{4}x} \right)$$     (A1)     (C1) Note: Accept an equivalent, unsimplified expression (i.e. $$2 \times \frac{1}{4}x$$).[1 mark] a. $$1$$     (A1)     (C1)[1 mark] b. $$\frac{1}{2}x = 1$$     (M1) $$x = 2$$     (A1)(ft)     (C2) Notes: Award (M1)(A0) for coordinate pair $$(2{\text{, }} – 1)$$ seen with or without working. Follow through from their answers to parts (a) and (b).[2 marks] c. 
tangent drawn to the parabola at the $$x$$-coordinate found in part (c)     (A1)(ft) candidate’s attempted tangent drawn parallel to the graph of $$g(x)$$     (A1)(ft)     (C2)[2 marks] d. Question The equation of a curve is given as $$y = 2x^{2} – 5x + 4$$. Find $$\frac{{{\text{d}}y}}{{{\text{d}}x}}$$.[2] a. The equation of the line L is $$6x + 2y = -1$$. Find the x-coordinate of the point on the curve $$y = 2x^2 – 5x + 4$$ where the tangent is parallel to L.[4] b. Markscheme $$\frac{{{\text{d}}y}}{{{\text{d}}x}} = 4x – 5$$     (A1)(A1)     (C2) Notes: Award (A1) for each correct term. Award (A1)(A0) if any other terms are given.[2 marks] a. $$y = – 3x – \frac{1}{2}$$     (M1) Note: Award (M1) for rearrangement of equation gradient of line is –3     (A1) $$4x – 5 = -3$$     (M1) Notes: Award (M1) for equating their gradient to their derivative from part (a). If $$4x – 5 = -3$$ is seen with no working award (M1)(A1)(M1). $$x = \frac{1}{2}$$     (A1)(ft)     (C4) Note: Follow through from their part (a). If answer is given as (0.5, 2) with no working award the final (A1) only.[4 marks] b. Question A curve is described by the function $$f (x) = 3x – \frac{2}{{x^2}}$$, $$x \ne 0$$. Find $$f ‘ (x)$$.[3] a. The gradient of the curve at point A is 35. Find the x-coordinate of point A.[3] b. Markscheme $$f'(x) = 3 + \frac{4}{{{x^3}}}$$     (A1)(A1)(A1)     (C3) Notes: Award (A1) for 3, (A1) for + 4 and (A1) for $$\frac{1}{{{x^3}}}$$  or $$x^{-3}$$. Award at most (A1)(A1)(A0) if additional terms are seen. a. $$3 + \frac{4}{{{x^3}}} = 35$$     (M1) Note: Award (M1) for equating their derivative to 35 only if the derivative is not a constant. $${x^3} = \frac{1}{8}$$     (A1)(ft) $$\frac{1}{2}(0.5)$$     (A1)(ft)     (C3) b. Question Let $$f(x) = {x^4}$$. Write down $$f'(x)$$.[1] a. Point $${\text{P}}(2,6)$$ lies on the graph of $$f$$. Find the gradient of the tangent to the graph of $$y = f(x)$$ at $${\text{P}}$$.[2] b. Point $${\text{P}}(2,16)$$ lies on the graph of $$f$$. Find the equation of the normal to the graph at $${\text{P}}$$. Give your answer in the form $$ax + by + d = 0$$, where $$a$$, $$b$$ and $$d$$ are integers.[3] c. Markscheme $$\left( {f'(x) = } \right)$$   $$4{x^3}$$     (A1)     (C1)[1 mark] a. $$4 \times {2^3}$$     (M1) Note: Award (M1) for substituting 2 into their derivative. $$= 32$$     (A1)(ft)     (C2) Note: Follow through from their part (a).[2 marks] b. $$y – 16 = – \frac{1}{{32}}(x – 2)$$   or   $$y = – \frac{1}{{32}}x + \frac{{257}}{{16}}$$     (M1)(M1) Note: Award (M1) for their gradient of the normal seen, (M1) for point substituted into equation of a straight line in only $$x$$ and $$y$$ (with any constant ‘$$c$$’ eliminated). $$x + 32y – 514 = 0$$ or any integer multiple     (A1)(ft)     (C3) Note: Follow through from their part (b).[3 marks] c. Question Consider the graph of the function $$f(x) = {x^3} + 2{x^2} – 5$$. Label the local maximum as $${\text{A}}$$ on the graph.[1] a. Label the local minimum as B on the graph.[1] b. Write down the interval where $$f'(x) < 0$$.[1] c. Draw the tangent to the curve at $$x = 1$$ on the graph.[1] d. Write down the equation of the tangent at $$x = 1$$.[2] e. Markscheme correct label on graph     (A1)     (C1)[1 mark] a. correct label on graph     (A1)     (C1)[1 mark] b. $$– 1.33 < x < 0$$   $$\left( { – \frac{4}{3} < x < 0} \right)$$     (A1)     (C1)[1 mark] c. tangent drawn at $$x = 1$$ on graph     (A1)     (C1)[1 mark] d. $$y = 7x – 9$$     (A1)(A1)     (C2) Notes: Award (A1) for $$7$$, (A1) for $$-9$$. 
If answer not given as an equation award at most (A1)(A0).[2 marks] e. Question Consider the curve $$y = {x^2} + \frac{a}{x} – 1,{\text{ }}x \ne 0$$. Find $$\frac{{{\text{d}}y}}{{{\text{d}}x}}$$.[3] a. The gradient of the tangent to the curve is $$– 14$$ when $$x = 1$$. Find the value of $$a$$.[3] b. Markscheme $$2x – \frac{a}{{{x^2}}}$$     (A1)(A1)(A1)     (C3) Notes: Award (A1) for $$2x$$, (A1) for $$– a$$ and (A1) for $${x^{ – 2}}$$. Award at most (A1)(A1)(A0) if extra terms are present. a. $$2(1) – \frac{a}{{{1^2}}} = – 14$$     (M1)(M1) Note: Award (M1) for substituting $$1$$ into their gradient function, (M1) for equating their gradient function to $$– 14$$. Award (M0)(M0)(A0) if the original function is used instead of the gradient function. $$a = 16$$     (A1)(ft)     (C3) b. Question The equation of line $${L_1}$$ is $$y = 2.5x + k$$. Point $${\text{A}}$$ $$\,(3,\, – 2)$$ lies on $${L_1}$$. Find the value of $$k$$.[2] a. The line $${L_2}$$ is perpendicular to $${L_1}$$ and intersects $${L_1}$$ at point $${\text{A}}$$. Write down the gradient of $${L_2}$$.[1] b. Find the equation of $${L_2}$$. Give your answer in the form $$y = mx + c$$ .[2] c. Write your answer to part (c) in the form $$ax + by + d = 0$$  where $$a$$, $$b$$ and $$d \in \mathbb{Z}$$.[1] d. Markscheme $$– 2 = 2.5\, \times 3 + k$$       (M1) Note: Award (M1) for correct substitution of $$(3,\, – 2)$$ into equation of $${L_1}$$. $$(k = ) – 9.5$$       (A1) (C2) a. $$– 0.4\,\left( { – \frac{2}{5}} \right)$$       (A1)  (C1) b. $$y – ( – 2) = – 0.4\,(x – 3)$$       (M1) OR $$– 2 = – 0.4\,(3) + c$$       (M1) Note: Award (M1) for their gradient and given point substituted into equation of a straight line. Follow through from part (b). $$y = – 0.4x – 0.8$$       $$\left( {y = – \frac{2}{5}x – \frac{4}{5}} \right)$$       (A1)(ft)    (C2) c. $$2x + 5y + 4 = 0$$ (or any integer multiple)      (A1)(ft) (C1) Note: Follow through from part (c). d. Question Consider the function $$f(x) = a{x^2} + c$$. Find $$f'(x)$$[1] a. Point $${\text{A}}( – 2,\,5)$$  lies on the graph of $$y = f(x)$$ . The gradient of the tangent to this graph at $${\text{A}}$$ is $$– 6$$ . Find the value of $$a$$ .[3] b. Find the value of $$c$$ .[2] c. Markscheme $$2ax$$      (A1)   (C1) Note: Award (A1) for $$2ax$$.  Award (A0) if other terms are seen. a. $$2a( – 2) = – 6$$       (M1)(M1) Note: Award (M1) for correct substitution of $$x = – 2$$  in their gradient function, (M1) for equating their gradient function to $$– 6$$ . Follow through from part (a). $$(a = )1.5\,\,\,\left( {\frac{3}{2}} \right)$$       (A1)(ft) (C3) b. $${\text{their }}1.5 \times {( – 2)^2} + c = 5$$         (M1) Note: Award (M1) for correct substitution of their $$a$$ and point $${\text{A}}$$. Follow through from part (b). $$(c = ) – 1$$         (A1)(ft) (C2) c. Question The equation of the straight line $${L_1}$$ is $$y = 2x – 3.$$ Write down the $$y$$-intercept of $${L_1}$$ .[1] a. Write down the gradient of $${L_1}$$ .[1] b. The line $${L_2}$$ is parallel to $${L_1}$$ and passes through the point $$(0,\,\,3)$$ . Write down the equation of $${L_2}$$ .[1] c. The line $${L_3}$$ is perpendicular to $${L_1}$$ and passes through the point $$( – 2,\,\,6).$$ Write down the gradient of $${L_3}.$$[1] d. Find the equation of $${L_3}$$ . Give your answer in the form $$ax + by + d = 0$$ , where $$a$$ , $$b$$ and $$d$$ are integers.[2] e. Markscheme $$(0,\,\, – 3)$$       (A1)     (C1) Note: Accept $$– 3$$ or $$y = – 3.$$ a. $$2$$       (A1)     (C1) b. 
$$y = 2x + 3$$        (A1)(ft)     (C1) Note: Award (A1)(ft) for correct equation. Follow through from part (b) Award (A0) for $${L_2} = 2x + 3$$. c. $$– \frac{1}{2}$$             (A1)(ft)     (C1) Note: Follow through from part (b). d. $$6 = – \frac{1}{2}( – 2) + c$$        (M1) $$c = 5$$  (may be implied) OR $$y – 6 = – \frac{1}{2}(x + 2)$$        (M1) Note: Award (M1) for correct substitution of their gradient in part (d) and the point $$( – 2,\,\,6)$$. Follow through from part (d). $$x + 2y – 10 = 0$$  (or any integer multiple)        (A1)(ft)     (C2) Note: Follow through from (d). The answer must be in the form $$ax + by + d = 0$$ for the (A1)(ft) to be awarded. Accept any integer multiple. e. Question Consider the function $$f(x) = {x^3} – 3{x^2} + 2x + 2$$ . Part of the graph of $$f$$ is shown below. Find $$f'(x)$$ .[3] a. There are two points at which the gradient of the graph of $$f$$ is $$11$$. Find the $$x$$-coordinates of these points.[3] b. Markscheme $$(f'(x) = )\,\,3{x^2} – 6x + 2$$        (A1)(A1)(A1)     (C3) Note: Award (A1) for $$3{x^2}$$, (A1) for $$– 6x$$ and (A1) for $$+ 2$$. Award at most (A1)(A1)(A0) if there are extra terms present. a. $$11 = 3{x^2} – 6x + 2$$        (M1) Note: Award (M1) for equating their answer from part (a) to $$11$$, this may be implied from $$0 = 3{x^2} – 6x – 9$$ . $$(x = )\,\, – 1\,\,,\,\,\,\,(x = )\,\,3$$        (A1)(ft)(A1)(ft)     (C3) Note: Follow through from part (a). If final answer is given as coordinates, award at most (M1)(A0)(A1)(ft) for $$( – 1,\,\, – 4)$$ and $$(3,\,\,8)$$ . b. Question The equation of a curve is $$y = \frac{1}{2}{x^4} – \frac{3}{2}{x^2} + 7$$. The gradient of the tangent to the curve at a point P is $$– 10$$. Find $$\frac{{{\text{d}}y}}{{{\text{d}}x}}$$.[2] a. Find the coordinates of P.[4] b. Markscheme $$2{x^3} – 3x$$     (A1)(A1)     (C2) Note:     Award (A1) for $$2{x^3}$$, award (A1) for $$– 3x$$. Award at most (A1)(A0) if there are any extra terms.[2 marks] a. $$2{x^3} – 3x = – 10$$    (M1) Note:     Award (M1) for equating their answer to part (a) to $$– 10$$. $$x = – 2$$    (A1)(ft) Note:     Follow through from part (a). Award (M0)(A0) for $$– 2$$ seen without working. $$y = \frac{1}{2}{( – 2)^4} – \frac{3}{2}{( – 2)^2} + 7$$    (M1) Note:     Award (M1) substituting their $$– 2$$ into the original function. $$y = 9$$    (A1)(ft)     (C4) Note:     Accept $$( – 2,{\text{ }}9)$$.[4 marks] b. Question The equation of line $${L_1}$$ is $$y = – \frac{2}{3}x – 2$$. Point P lies on $${L_1}$$ and has $$x$$-coordinate $$– 6$$. The line $${L_2}$$ is perpendicular to $${L_1}$$ and intersects $${L_1}$$ when $$x = – 6$$. Write down the gradient of $${L_1}$$.[1] a. Find the $$y$$-coordinate of P.[2] b. Determine the equation of $${L_2}$$. Give your answer in the form $$ax + by + d = 0$$, where $$a$$, $$b$$ and $$d$$ are integers.[3] c. Markscheme $$– \frac{2}{3}$$     (A1)     (C1)[1 mark] a. $$y = – \frac{2}{3}( – 6) – 2$$     (M1) Note:     Award (M1) for correctly substituting $$– 6$$ into the formula for $${L_1}$$. $$(y = ){\text{ }}2$$     (A1)     (C2) Note:     Award (A0)(A1) for $$( – 6,{\text{ }}2)$$ with or without working.[2 marks] b. gradient of $${L_2}$$ is $$\frac{3}{2}$$     (A1)(ft) Note:     Follow through from part (a). $$2 = \frac{3}{2}( – 6) + c$$$$\,\,\,$$OR$$\,\,\,$$$$y – 2 = \frac{3}{2}\left( {x – ( – 6)} \right)$$     (M1) Note:     Award (M1) for substituting their part (b), their gradient and $$– 6$$ into equation of a straight line. 
$$3x – 2y + 22 = 0$$     (A1)(ft)     (C3) Note:     Follow through from parts (a) and (b). Accept any integer multiple. Award (A1)(M1)(A0) for $$y = \frac{3}{2}x + 11$$.[3 marks] c. Question The diagram shows part of the graph of a function $$y = f(x)$$. The graph passes through point $${\text{A}}(1,{\text{ }}3)$$. The tangent to the graph of $$y = f(x)$$ at A has equation $$y = – 2x + 5$$. Let $$N$$ be the normal to the graph of $$y = f(x)$$ at A. Write down the value of $$f(1)$$.[1] a. Find the equation of $$N$$. Give your answer in the form $$ax + by + d = 0$$ where $$a$$, $$b$$, $$d \in \mathbb{Z}$$.[3] b. Draw the line $$N$$ on the diagram above.[2] c. Markscheme 3     (A1)     (C1) Notes:     Accept $$y = 3$$[1 mark] a. $$3 = 0.5(1) + c$$$$\,\,\,$$OR$$\,\,\,$$$$y – 3 = 0.5(x – 1)$$     (A1)(A1) Note:     Award (A1) for correct gradient, (A1) for correct substitution of $${\text{A}}(1,{\text{ }}3)$$ in the equation of line. $$x – 2y + 5 = 0$$ or any integer multiple     (A1)(ft)     (C3) Note:     Award (A1)(ft) for their equation correctly rearranged in the indicated form. The candidate’s answer must be an equation for this mark.[3 marks] b. (M1)(A1)(ft)     (C2) Note:     Award M1) for a straight line, with positive gradient, passing through $$(1,{\text{ }}3)$$, (A1)(ft) for line (or extension of their line) passing approximately through 2.5 or their intercept with the $$y$$-axis.[2 marks] c. Question The coordinates of point A are $$(6,{\text{ }} – 7)$$ and the coordinates of point B are $$( – 6,{\text{ }}2)$$. Point M is the midpoint of AB. $${L_1}$$ is the line through A and B. The line $${L_2}$$ is perpendicular to $${L_1}$$ and passes through M. Find the coordinates of M.[2] a. Find the gradient of $${L_1}$$.[2] b. Write down the gradient of $${L_2}$$.[1] c.i. Write down, in the form $$y = mx + c$$, the equation of $${L_2}$$.[1] c.ii. Markscheme $$(0,{\text{ }}2.5)$$$$\,\,\,$$OR$$\,\,\,$$$$\left( {0,{\text{ }} – \frac{5}{2}} \right)$$     (A1)(A1)     (C2) Note:     Award (A1) for 0 and (A1) for –2.5 written as a coordinate pair. Award at most (A1)(A0) if brackets are missing. Accept “$$x = 0$$ and $$y = – 2.5$$”.[2 marks] a. $$\frac{{2 – ( – 7)}}{{ – 6 – 6}}$$     (M1) Note:     Award (M1) for correct substitution into gradient formula. $$= – \frac{3}{4}{\text{ }}( – 0.75)$$     (A1)     (C2)[2 marks] b. $$\frac{4}{3}{\text{ }}(1.33333 \ldots )$$     (A1)(ft)     (C1) Note:     Award (A0) for $$\frac{1}{{0.75}}$$. Follow through from part (b).[1 mark] c.i. $$y = \frac{4}{3}x – \frac{5}{2}{\text{ }}(y = 1.33 \ldots x – 2.5)$$     (A1)(ft)     (C1) Note:     Follow through from parts (c)(i) and (a). Award (A0) if final answer is not written in the form $$y = mx + c$$.[1 mark] c.ii. Question A function $$f$$ is given by $$f(x) = 4{x^3} + \frac{3}{{{x^2}}} – 3,{\text{ }}x \ne 0$$. Write down the derivative of $$f$$.[3] a. Find the point on the graph of $$f$$ at which the gradient of the tangent is equal to 6.[3] b. Markscheme $$12{x^2} – \frac{6}{{{x^3}}}$$ or equivalent     (A1)(A1)(A1)     (C3) Note:     Award (A1) for $$12{x^2}$$, (A1) for $$– 6$$ and (A1) for $$\frac{1}{{{x^3}}}$$ or $${x^{ – 3}}$$. Award at most (A1)(A1)(A0) if additional terms seen.[3 marks] a. $$12{x^2} – \frac{6}{{{x^3}}} = 6$$     (M1) Note:     Award (M1) for equating their derivative to 6. 
$$(1,{\text{ }}4)$$$$\,\,\,$$OR$$\,\,\,$$$$x = 1,{\text{ }}y = 4$$     (A1)(ft)(A1)(ft)     (C3) Note:     A frequent wrong answer seen in scripts is $$(1,{\text{ }}6)$$ for this answer with correct working award (M1)(A0)(A1) and if there is no working award (C1).[3 marks] b. Question The point A has coordinates (4 , −8) and the point B has coordinates (−2 , 4). The point D has coordinates (−3 , 1). Write down the coordinates of C, the midpoint of line segment AB.[2] a. Find the gradient of the line DC.[2] b. Find the equation of the line DC. Write your answer in the form ax + by + d = 0 where a , b and d are integers.[2] c. Markscheme (1, −2)    (A1)(A1) (C2) Note: Award (A1) for 1 and (A1) for −2, seen as a coordinate pair. Accept x = 1, y = −2. Award (A1)(A0) if x and y coordinates are reversed.[2 marks] a. $$\frac{{1 – \left( { – 2} \right)}}{{ – 3 – 1}}$$    (M1) Note: Award (M1) for correct substitution, of their part (a), into gradient formula. $$= – \frac{3}{4}\,\,\,\left( { – 0.75} \right)$$     (A1)(ft)  (C2) Note: Follow through from part (a).[2 marks] b. $$y – 1 = – \frac{3}{4}\left( {x + 3} \right)$$  OR  $$y + 2 = – \frac{3}{4}\left( {x – 1} \right)$$  OR  $$y = – \frac{3}{4}x – \frac{5}{4}$$      (M1) Note: Award (M1) for correct substitution of their part (b) and a given point. OR $$1 = – \frac{3}{4} \times – 3 + c$$  OR  $$– 2 = – \frac{3}{4} \times 1 + c$$     (M1) Note: Award (M1) for correct substitution of their part (b) and a given point. $$3x + 4y + 5 = 0$$  (accept any integer multiple, including negative multiples)    (A1)(ft) (C2) Note: Follow through from parts (a) and (b). Where the gradient in part (b) is found to be $$\frac{5}{0}$$, award at most (M1)(A0) for either $$x = – 3$$ or $$x + 3 = 0$$.[2 marks] c. Question Consider the function $$f\left( x \right) = \frac{{{x^4}}}{4}$$. Find f’(x)[1] a. Find the gradient of the graph of f at $$x = – \frac{1}{2}$$.[2] b. Find the x-coordinate of the point at which the normal to the graph of f has gradient $${ – \frac{1}{8}}$$.[3] c. Markscheme x3     (A1) (C1) Note: Award (A0) for $$\frac{{4{x^3}}}{4}$$ and not simplified to x3.[1 mark] a. $${\left( { – \frac{1}{2}} \right)^3}$$     (M1) Note: Award (M1) for correct substitution of $${ – \frac{1}{2}}$$ into their derivative. $${ – \frac{1}{8}}$$  (−0.125)     (A1)(ft) (C2) Note: Follow through from their part (a).[2 marks] b. x3 = 8     (A1)(M1) Note: Award (A1) for 8 seen maybe seen as part of an equation y = 8x + c(M1) for equating their derivative to 8. (x =) 2     (A1) (C3) Note: Do not accept (2, 4).[3 marks] c. Question Consider the graph of the function $$y = f(x)$$ defined below. Write down all the labelled points on the curve that are local maximum points;[1] a. where the function attains its least value;[1] b. where the function attains its greatest value;[1] c. where the gradient of the tangent to the curve is positive;[1] d. where $$f(x) > 0$$ and $$f'(x) < 0$$ .[2] e. B, F     (C1) a. H     (C1) b. F     (C1) c. A, E     (C1) d. C     (C2) e. Question Consider the curve $$y = {x^2}$$ . Write down $$\frac{{{\text{d}}y}}{{{\text{d}}x}}$$.[1] a. The point $${\text{P}}(3{\text{, }}9)$$ lies on the curve $$y = {x^2}$$ . Find the gradient of the tangent to the curve at P .[2] b. The point $${\text{P}}(3{\text{, }}9)$$ lies on the curve $$y = {x^2}$$ . Find the equation of the normal to the curve at P . Give your answer in the form $$y = mx + c$$ .[3] c. Markscheme $$2x$$     (A1)     (C1) a. $$2 \times 3$$     (M1) $$= 6$$     (A1)     (C2) b. 
$$m({\text{perp}}) = – \frac{1}{6}$$     (A1)(ft) Equation $$(y – 9) = – \frac{1}{6}(x – 3)$$     (M1) Note: Award (M1) for correct substitution in any formula for equation of a line. $$y = – \frac{1}{6}x + 9\frac{1}{2}$$     (A1)(ft)     (C3) Note: Follow through from correct substitution of their gradient of the normal. Note: There are no extra marks awarded for rearranging the equation to the form $$y = mx + c$$ . c. Question A sketch of the function $$f(x) = 5{x^3} – 3{x^5} + 1$$ is shown for $$– 1.5 \leqslant x \leqslant 1.5$$ and $$– 6 \leqslant y \leqslant 6$$ . Write down $$f'(x)$$ .[2] a. Find the equation of the tangent to the graph of $$y = f(x)$$ at $$(1{\text{, }}3)$$ .[2] b. Write down the coordinates of the second point where this tangent intersects the graph of $$y = f(x)$$ .[2] c. Markscheme $$f'(x) = 15{x^2} – 15{x^4}$$     (A1)(A1)     (C2) Note: Award a maximum of (A1)(A0) if extra terms seen. a. $$f'(1) = 0$$     (M1) Note: Award (M1) for $$f'(x) = 0$$ . $$y = 3$$     (A1)(ft)     (C2) $$( – 1.38{\text{, }}3)$$ $$( – 1.38481 \ldots {\text{, }}3)$$     (A1)(ft)(A1)(ft)     (C2) Note: Accept $$x = – 1.38$$, $$y = 3$$ ($$x = – 1.38481 \ldots$$ , $$y = 3$$) .
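As a quick cross-check of markscheme answers like the tangent/normal question for $$y = x^2$$ at $${\text{P}}(3,{\text{ }}9)$$ above, here is a small SymPy sketch; the library choice is mine, not part of the papers.

import sympy as sp

x = sp.symbols('x')
f = x**2
fprime = sp.diff(f, x)        # dy/dx = 2x
m_tan = fprime.subs(x, 3)     # gradient of the tangent at P: 6
m_norm = -1 / m_tan           # gradient of the normal: -1/6
normal = sp.expand(m_norm * (x - 3) + 9)  # y = -x/6 + 19/2
print(fprime, m_tan, normal)  # matches the markscheme: y = -(1/6)x + 9 1/2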
# Problem: A reaction has a rate constant of 1.13×10⁻² /s at 400 K and 0.694 /s at 450 K.

Part A: Determine the activation barrier for the reaction.

Part B: What is the value of the rate constant at 425 K?

###### FREE Expert Solution

For the first part of the problem, we're being asked to determine the activation energy (Ea) of the reaction. We're given the rate constants at two different temperatures. This means we need to use the two-point form of the Arrhenius equation:

$$\ln\frac{k_2}{k_1} = -\frac{E_a}{R}\left[\frac{1}{T_2} - \frac{1}{T_1}\right]$$

where:

k1 = rate constant at T1
k2 = rate constant at T2
Ea = activation energy (in J/mol)
R = gas constant (8.314 J/mol·K)
T1 and T2 = temperatures (in K).
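A quick numerical check of both parts, as a Python sketch; the arithmetic follows directly from the two-point formula above.

import math

R = 8.314          # J/(mol*K)
k1, T1 = 1.13e-2, 400.0
k2, T2 = 0.694, 450.0

# Part A: activation energy from ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1)
Ea = -R * math.log(k2 / k1) / (1 / T2 - 1 / T1)
print(f"Ea = {Ea:.3g} J/mol")          # ~1.23e5 J/mol, i.e. ~123 kJ/mol

# Part B: rate constant at 425 K, using k1 and the Ea just found
T3 = 425.0
k3 = k1 * math.exp(-(Ea / R) * (1 / T3 - 1 / T1))
print(f"k(425 K) = {k3:.3g} /s")       # ~0.100 /s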
# Thread: Linear equation

1. ## Linear equation

Julie makes a base pay of $205 per week for going to work. She also receives 5% of all sales she makes in a week. Based on this information answer the following. Write a linear equation that describes this function. Sketch a graph to show the function. Label the axes.

2. P = total pay. s = sales made in the week.

$P = 205 + 0.05\times s$
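A one-line executable version of the model (my own sketch; the name pay is an assumption):

pay :: Double -> Double
pay s = 205 + 0.05 * s  -- $205 base plus 5% commission on sales s

main :: IO ()
main = mapM_ (print . pay) [0, 1000, 2000]  -- 205.0, 255.0, 305.0
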
# Direct from Dell Euler problem 9.2 . There exists exactly one Pythagorean triplet for which a + b + c = 1000. Find the product abc. g p = [ [a, b, c] | m <- [2 .. limit], n <- [1 .. (m - 1)], let a = m ^ 2 - n ^ 2, let b = 2 * m * n, let c = m ^ 2 + n ^ 2, a + b + c == p ] where limit = floor . sqrt . fromIntegral $p — based on Haskell official . Euclid’s formula is a fundamental formula for generating Pythagorean triples given an arbitrary pair of integers $m$ and $n$ with $m > n > 0$. The formula states that the integers $\displaystyle{ a=m^{2}-n^{2},\ \,b=2mn,\ \,c=m^{2}+n^{2}}$ form a Pythagorean triple. The triple generated by Euclid’s formula is primitive if and only if $m$ and $n$ are coprime and one of them is even. When both $m$ and $n$ are odd, then $a$, $b$, and $c$ will be even, and the triple will not be primitive; however, dividing $a$, $b$, and $c$ by 2 will yield a primitive triple when $m$ and $n$ are coprime. Every primitive triple arises (after the exchange of $a$ and $b$, if $a$ is even) from a unique pair of coprime numbers $m$, $n$, one of which is even. — Wikipedia on Pythagorean triple — Me@2022-12-10 09:57:27 PM . . 2022.12.11 Sunday (c) All rights reserved by ACHK # Importance, 2.2 Euler problem 8.2 . . import Data.Char max13 lst = max13n lst 0 where max13n lst n | (length lst) < 13 = n | n > take13 = max13n (tail lst) n | otherwise = max13n (tail lst) take13 where take13 = product (take 13 lst) str <- readFile "n.txt" max13 (map (fromIntegral . digitToInt) . concat . lines$ str) . — Me@2022-11-19 12:04:41 PM . . # Euler problem 6.2 1960s . — ShutUpImStillTalking . f = (sum [1..100])^2 - sum (map (^2) [1..100]) . — colorized by palette fm — Me@2022-11-03 05:55:51 PM . . # The Sixth Sense, 2.2 Euler problem 5.2 | Folding an infinite list, 2 . . f = foldr1 lcm [1..20] . . Most problems on Project Euler can be solved in three ways: • with brute-force • with an algorithm that solves a more general problem • with a smart solution that requires pencil and paper at most If you’re interested in a nice solution rather than fixing your code, try concentrating on the last approach … — edited Oct 8, 2016 at 8:57 — huwr — answered Dec 27, 2011 at 14:33 — Philip — Stack Overflow . . # Euler problem 4.2 . Find the largest palindrome made from the product of two 3-digit numbers. g = [(y, z, y*z) | y<-[100..999], z<-[y..999], f==y*z] where f = maximum [x | y<-[100..999], z<-[y..999], let x=y*z, let s=show x, s==reverse s] . — Me@2022-10-10 10:09:53 PM . . # SimCity 2013 Euler problem 3.4 . . primes = 2 : filter (null . tail . primeFactors) [3, 5 ..] primeFactors n = factor n primes where factor n (p : ps) | p * p > n = [n] | n mod p == 0 = p : factor (n div p) (p : ps) | otherwise = factor n ps f = last (primeFactors 600851475143) . — [email protected] 11:44:20 AM . . Euler problem 3.3 . The goal of this blog post is to install an advanced Haskell mode, called LSP mode, for Emacs. . 1. Open the bash terminal, use the following commands to install the three packages: sudo apt-get install elpa-haskell-mode sudo apt-get install elpa-yasnippet sudo apt-get install elpa-which-key . 2. Read and follow the exact steps of my post titled “Haskell mode“. . . — [email protected] 12:49:29 PM . . The goal of this blog post is to install an advanced Haskell mode, called LSP mode, for Emacs. 0. In this tutorial, you will need to go to the official website of NixOS and that of MELPA (Milkypostman’s Emacs Lisp Package Archive). Make sure that both websites are the real official ones. 
Any instructions from an imposter website can get your machine infected with malware. 1. Assuming your computer OS is Ubuntu 20.04 or above, go to the NixOS official website. Follow the instructions to install the Nix package manager (not the NixOS) onto your OS. Choose the “single-user installation” method. 2. On the NixOS official website, click the magnifying glass at the top right corner to reach the package search engine. 3. Search “haskell language server” and then copy its installation command. nix-env -iA nixpkgs.haskell-language-server 4. Run the command in the bash terminal to install the Haskell Language Server. . 5. Search “stack” on the package search engine. 6. Run its installation command nix-env -iA nixpkgs.stack to install the Haskell Tool Stack. 7. Search “ghc” on the package search engine. 8. Run its installation command nix-env -iA nixpkgs.ghc to install the Glasgow Haskell Compiler. . This step is needed for triggering the OS to recognize the Nix package manager setup. . 10. Go to MELPA package manager’s official website. Follow the instructions to install “Melpa”, not “Melpa Stable”. 11. Open the Emacs editor. Click "Options" and then "Manage Emacs Packages". Install the following packages. For each of them, make sure that you have chosen the source archive as “melpa“. Versions from other sources would not work. company Modular text completion framework flycheck On-the-fly syntax checking lsp-mode LSP mode lsp-ui UI modules for lsp-mode 12. Open Emacs’ initialization file, which has the filename .emacs Its location should be ~/.emacs 13. Add the following code to the file. ;;;;;;;;;;;;;;;;;;;;;;;;;; (require 'company) (require 'flycheck) (require 'lsp-ui) ;;;;;;;;;;;;;;;;;;;;;;;;;; (require 'lsp) (save-place-mode 1) ;;;;;;;;;;;;;;;;;;;;;;;;;; (interactive) (windmove-up)) (global-set-key (kbd "C-n") ;;;;;;;;;;;;;;;;;;;;;;;;;; 14. Close the Emacs program. . 15. Create a dummy Haskell source code file named “test.hs”. 16. Use Emacs to open it. 17. You should see this message: 18. Select one of the first 3 answers. Then you can start to do the Haskell source code editing. 19. To compile your code, hold the Ctrl key and press n. Ctrl+n — Me@2022-08-18 05:22:02 PM . . # Functional programming jargon in plain English mjburgess 11 days ago | next [–] These definitions don’t really give you the idea, rather often just code examples.. “The ideas”, in my view: Monoid = units that can be joined together Functor = context for running a single-input function Applicative = context for multi-input functions Monad = context for sequence-dependent operations Lifting = converting from one context to another Sum type = something is either A or B or C… Product type = a record = something is both A and B and C Partial application = defaulting an argument to a function Currying = passing some arguments later = rephrasing a function to return a functions of n-1 arguments when given 1, st. the final function will compute the desired result EDIT: Context = compiler information that changes how the program will be interpreted (, executed, compiled,…) Eg., context = run in the future, run across a list, redirect the i/o, … — Functional programming jargon in plain English — Hacker News . Currying and partial function application are often conflated. 
One of the significant differences between the two is that a call to a partially applied function returns the result right away, not another function down the currying chain; this distinction can be illustrated clearly for functions whose arity is greater than two. . Partial application can be seen as evaluating a curried function at a fixed point, e.g. given $\displaystyle{f\colon (X\times Y\times Z)\to N}$ and $\displaystyle{a\in X}$ then $\displaystyle{{\text{curry}}({\text{partial}}(f)_{a})(y)(z)={\text{curry}}(f)(a)(y)(z)}$ or simply $\displaystyle{{\text{partial}}(f)_{a}={\text{curry}}_{1}(f)(a)}$ where $\displaystyle{{\text{curry}}_{1}}$ curries $\displaystyle{f}$‘s first parameter. — Wikipedia on Currying . . 2022.07.16 Saturday ACHK # Exercise 6.2 f :: a -> bf' :: a -> m aunit :: a -> m a f' * g' = (bind f') . (bind g') bind f xs = concat (map f xs) bind unit xs = concat (map unit xs) unit x = [x] bind unit xs = concat (map unit xs)= concat (map unit [x1, x2, ...])= concat [unit x1, unit x2, ...]= concat [[x1], [x2], ...]= [x1, x2, ...]= xs f' = lift f lift f = unit . f unit (or return) can directly act on an ordinary value only, but not on a monadic value. To act on a monadic value, you need to bind it. How come we do not need to lift return? f :: a -> b liftM :: Monad m => (a -> b) -> m a -> m b return :: a -> m a (liftM f) :: m a -> m b (>>=) :: Monad m => m a -> (a -> m b) -> m b lifeM cannot be applied to return at all. unit (or return) is neither a pure function nor a monadic function. Instead, it is an half-monadic function, meaning that while its input is an ordinary value, its output is a monadic value. (bind return xs) -> ys (bind return) applies to xs. return applies to x. liftM is merely fmap implemented with (>>=) and return — Me@2016-01-26 03:05:50 PM # Exercise 6a (Corrected version) Show that f' * unit = unit * f' = bind f' —————————— f :: a -> bf' :: a -> m aunit :: a -> m a lift f = unit . ff' = lift f The lift function in this tutorial is not the same as the liftM in Haskell. So you should use lift (but not liftM) with bind. — Me@2015-10-13 11:59:53 AM (f' * g') xs = ((bind f') . (bind g')) xs bind f' xs = concat (map f' xs) unit x = [x] bind unit xs = concat (map unit xs)= concat (map unit [x1, x2, ...])= concat [unit x1, unit x2, ...]= concat [[x1], [x2], ...]= [x1, x2, ...]= xs (f' * unit) (x:xs)= bind f' (bind unit (x:xs))= bind f' (concat (map unit (x:xs)))= bind f' (concat (map unit [x1, x2, ...]))= bind f' (concat [[x1], [x2], ...])= bind f' [x1, x2, ...]= concat (map f' [x1, x2, ...])= concat [f' x1, f' x2, ...]= concat [(unit . f) x1, (unit . f) x2, ...]= concat [(unit (f x1)), (unit (f x2)), ...]= concat [[f x1], [f x2], ...]= [f x1, f x2, ...] (unit * f') (x:xs)= ((bind unit) . (bind f')) (x:xs)= bind unit (bind f' (x:xs))= bind unit (concat (map f' (x:xs)))= bind unit (concat (map f' [x1, x2, ...]))= bind unit (concat [f' x1, f' x2, ...])= bind unit (concat [(unit . f)  x1, (unit . f) x2, ...])= bind unit (concat [(unit (f x1)), (unit (f x2)), ...])= bind unit (concat [[f x1], [f x2], ...])= bind unit [f x1, f x2, ...]= concat (map unit [f x1, f x2, ...])= concat [[f x1], [f x2], ...]= [f x1, f x2, ...] — Me@2015-10-15 07:19:18 AM If we use the identity bind unit xs = xs, the proof will be much shorter. (f' * unit) (x:xs)= ((bind f') . (bind unit)) (x:xs)= bind f' (bind unit (x:xs))= bind f' (x:xs) (unit * f') (x:xs)= ((bind unit) . 
(bind f')) (x:xs)= bind unit (bind f' (x:xs))= bind f' (x:xs) — Me@2015-10-15 11:45:44 AM # Exercise 6a Show that f * unit = unit * f —————————— (f * g) (x, xs)= ((bind f) . (bind g)) (x, xs) bind f x = concat (map f x) (f * unit) (x:xs)= bind f (bind unit (x:xs))= bind f (concat (map unit (x:xs)))= bind f (concat (map unit [x1, x2, x3, ...]))= bind f (concat ([[x1], [x2], [x3], ...]))= bind f [x1, x2, x3, ...]= concat (map f [x1, x2, x3, ...])= concat [f x1, f x2, f x3, ...]= [f x1, f x2, f x3, ...] (unit * f) (x:xs)= ((bind unit) . (bind f)) (x:xs)= bind unit (bind f (x:xs))= bind unit (concat (map f (x:xs)))= bind unit (concat (map f [x1, x2, ...]))= bind unit (concat [f x1, f x2, ...])= bind unit [f x1, f x2, ...]= concat (map unit [f x1, f x2, ...])= concat [[f x1], [f x2], ...]= [f x1, f x2, ...] — [email protected] 09:00 PM # Exercise 3.2 Show that lift f * lift g = lift (f.g) —————————— The meaning of f' * g' should be (bind f') . (bind g') instead. f' * g' = (bind f') . (bind g')lift f = unit . ff' = lift f (lift f * lift g) (x, xs)= (bind (lift f)) . (bind (lift g)) (x, xs)= bind (lift f) (bind (lift g) (x, xs))= bind (lift f) (gx, xs++gs)  where    (gx, gs) = (lift g) x = bind (lift f) (gx, xs++gs)  where    (gx, gs) = (g x, "") = bind (lift f) (g x, xs) = (fx, xs++fs)  where    (fx, fs) = (lift f) gx  = (fx, xs++fs)  where    (fx, fs) = (f gx, "") = (fx, xs)  where    (fx, fs) = (f (g x), "") = (f (g x), xs) bind f' (gx,gs) = (fx, gs++fs)                  where                    (fx,fs) = f' gx bind (lift (f.g)) (x, xs)= (hx, xs++hs)  where    (hx, hs) = (lift (f.g)) x = (hx, xs++hs)  where    (hx, hs) = ((f.g) x, "") = ((f (g x)), xs) — [email protected] 11:04 PM # Exercise Three Show that lift f * lift g = lift (f.g) —————————— f' * g' = bind f' . g'lift f = unit . ff' = lift f (lift f * lift g) (x, xs)= bind (lift f . lift g) (x, xs)= (hx, xs++hs)  where     (hx, hs) = lh x     lh x = (f' . g') x    f' = lift f    g' = lift g This line does not work, since f' cannot be applied to (g' x), for the data types are not compatible: f' :: Float -> (Float, String) g' :: Float -> (Float, String) (g' x) :: (Float, String) The meaning of f' * g' should be bind f' . (bind g') instead. — Me@2015-09-27 10:24:54 PM # flatMap() skybrian 70 days ago If functional languages had called them the Mappable, Applicable, and FlatMappable interfaces, and used map(), apply(), and flatMap() instead of operators, it would have avoided a lot of confusion. — Hacker News bind ~ flatMap — Me@2015-07-22 06:30:25 PM 2015.09.23 Wednesday ACHK Monads in Haskell can be thought of as composable computation descriptions. The essence of monad is thus separation of composition timeline from the composed computation’s execution timeline, as well as the ability of computation to implicitly carry extra data, as pertaining to the computation itself, in addition to its one (hence the name) output, that it will produce when run (or queried, or called upon). This lends monads to supplementing pure calculations with features like I/O, common environment or state, etc. 2015.09.21 Monday ACHK # Folding an infinite list One big difference is that right folds work on infinite lists, whereas left ones don’t! To put it plainly, if you take an infinite list at some point and you fold it up from the right, you’ll eventually reach the beginning of the list. However, if you take an infinite list at a point and you try to fold it up from the left, you’ll never reach an end! Learn You a Haskell for Great Good! 
Note that the key difference between a left and a right fold is not the order in which the list is traversed, which is always from left to right, but rather how the resulting function applications are nested. • With foldr, they are nested on “the inside” foldr f y (x:xs) = f x (foldr f y xs) Here, the first iteration will result in the outermost application of f. Thus, f has the opportunity to be lazy so that the second argument is either not always evaluated, or it can produce some part of a data structure without forcing its second argument. • With foldl, they are nested on “the outside” foldl f y (x:xs) = foldl f (f x y) xs Here, we can’t evaluate anything until we have reached the outermost application of f, which we will never reach in the case of an infinite list, regardless of whether f is strict or not. — edited Oct 24 ’11 at 12:21, answered Sep 13 ’11 at 5:17, hammar — Stack Overflow 2015.08.23 Sunday by ACHK
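To make the asymmetry concrete, a minimal runnable sketch (my own example, not from the quoted answer):

-- foldr can short-circuit on an infinite list when the combining
-- function is lazy in its second argument: (||) stops at the first True.
main :: IO ()
main = do
  print (foldr (\x acc -> x > 10 || acc) False [1 ..])   -- True, after 11 steps
  -- foldl (\acc x -> acc || x > 10) False [1 ..]         -- never returns
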
## Connected Domination Number and a New Invariant in Graphs with Independence Number Three

Keywords: dominating set, Hadwiger number, clique number, independence number.

### Abstract

Adding a connected dominating set of vertices to a graph $G$ increases its Hadwiger number $h(G)$. Based on this obvious property, in [2] we introduced a new invariant $\eta(G)$ for which $\eta(G)\leq h(G)$. We continue to study its properties. For a graph $G$ with independence number three, without induced chordless cycles $C_7$, and with $n(G)$ vertices, $\eta(G)\geq n(G)/4$.

Department of Mathematics, CUNY Borough of Manhattan Community College, 199 Chambers St, New York, NY 10007, USA
# 1. Identify the terms and their coefficients for each of the following expressions: (i) $$5xyz^2 – 3zy$$ (ii) $$1 + x + x^2$$ (iii) $$4x^2y^2 – 4x^2y^2z^2 + z^2$$ (iv) $$3 – pq + qr -rp$$ (v) $$\frac { x }{ 2 } +\frac { y }{ 2 } -xy$$ (vi) $$0.3a – 0.6ab + 0.5b$$

(i) We have the expression $$5xyz^2 – 3zy$$; the terms are $$5xyz^2$$ and $$-3zy$$. Coefficient of $$xyz^2$$ in the term $$5xyz^2$$ is 5. Coefficient of $$zy$$ in the term $$– 3zy$$ is $$– 3$$.

(ii) We have the expression $$1 + x + x^2$$; the terms are 1, x and $$x^2$$. Coefficient of the term 1 is 1. Coefficient of x in the term x is 1. Coefficient of $$x^2$$ in the term $$x^2$$ is 1.

(iii) We have the expression $$4x^2y^2 – 4x^2y^2z^2 + z^2$$; the terms are $$4x^2y^2, – 4x^2y^2z^2$$ and $$z^2$$. Coefficient of $$x^2y^2$$ in the term $$4x^2y^2$$ is 4. Coefficient of $$x^2y^2z^2$$ in the term $$– 4x^2y^2z^2$$ is – 4. Coefficient of $$z^2$$ in the term $$z^2$$ is 1.

(iv) We have the expression $$3 – pq + qr – rp$$; the terms are 3, – pq, qr and – rp. Coefficient of the term 3 is 3. Coefficient of pq in the term – pq is -1. Coefficient of qr in the term qr is 1. Coefficient of rp in the term – rp is -1.

(v) We have the expression $$\frac { x }{ 2 } +\frac { y }{ 2 } -xy$$; the terms are $$\frac { x }{ 2 } ,\frac { y }{ 2 }$$ and $$– xy$$. Coefficient of x in the term $$\frac { x }{ 2 }$$ is $$\frac { 1 }{ 2 }$$. Coefficient of y in the term $$\frac { y }{ 2 }$$ is $$\frac { 1 }{ 2 }$$. Coefficient of xy in the term $$– xy$$ is -1.

(vi) In the expression $$0.3a – 0.6ab + 0.5b$$, the terms are $$0.3a, – 0.6ab$$ and $$0.5b$$. Coefficient of a in the term 0.3a is 0.3. Coefficient of ab in the term – 0.6ab is – 0.6. Coefficient of b in the term 0.5b is 0.5.
# How to calculate the MLE for a sample with different parameters I have to calculate the MLE of the independent random variables $$X_1\sim N(\mu_1,1),X_2\sim N(\mu_2,1),X_3\sim N(\mu_1+\mu_2,2)$$, where $$N$$ is the normal distribution, how do I do this? So far I learned to calculate the MLE for one dimensional parameters, and same-distributed random variables. For example, if a have a random sample $$\{X_i\}_{i=1}^n\overset{iid}{\sim}N(\mu,\sigma_0^2)$$, where $$\sigma_0^2$$ is a known parameter, then, the MLE is $$\hat{\mu}_n=\bar{X}_n=\frac{1}{n}\sum_{i=1}^nx_i$$ • Because the random variables are independent, you can find the values of $\mu_1$ and $\mu_2$ that maximize the product of the probability density functions or to make the calculations simpler maximize the sum of the logs of the probability density functions. In other words, "maximize the likelihood". – JimB Mar 26 at 3:43 • First step is to write down the likelihood function which is just the pdf of $(X_1,X_2,X_3)$. Then proceed as usual. – StubbornAtom Mar 26 at 6:45
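Following the comments above, a sketch of the working (my own derivation; worth double-checking). By independence, the log-likelihood is, up to a constant,

$$\ell(\mu_1,\mu_2) = -\tfrac{1}{2}(x_1-\mu_1)^2 - \tfrac{1}{2}(x_2-\mu_2)^2 - \tfrac{1}{4}(x_3-\mu_1-\mu_2)^2.$$

Setting $\partial\ell/\partial\mu_1 = (x_1-\mu_1) + \tfrac{1}{2}(x_3-\mu_1-\mu_2) = 0$ and $\partial\ell/\partial\mu_2 = (x_2-\mu_2) + \tfrac{1}{2}(x_3-\mu_1-\mu_2) = 0$ and solving the two linear equations gives

$$\hat{\mu}_1 = \frac{3x_1 - x_2 + x_3}{4}, \qquad \hat{\mu}_2 = \frac{-x_1 + 3x_2 + x_3}{4}.$$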
# What is the sum of all of the products of $3$ of the digits $1, \dots, 9$? Consider the numbers $1, 2, 3, \dots, 9$. Take the product of any three of them. What is the sum of all such products? In other words, calculate $1 \cdot 2 \cdot3 + 1 \cdot 2 \cdot 4 + 1 \cdot 2 \cdot 5 + \dots + 7 \cdot 8 \cdot9$. If we consider products of four numbers or more, what's the answer? - $$\left[ (1 + 2 + \ldots + 9) ^3 - 3 \times ( 1^2 + 2^2 + \ldots + 9^2) \times (1 +2 + \ldots + 9 ) + 2 \times (1^3 + 2^3 + \ldots + 9^3) \right] \div 6$$ As an explanation, you can see that if $a\neq b, b\neq c, c\neq a$, then the product $abc$ will appear 6 times in the first term, 0 times in the second, 0 times in the third, hence it appears $6/6 =1$ time. Terms of the form $aab$ will appear thrice in the first term, thrice in the second (which is subtracted), 0 times in the third, hence appear $(3 - 3)/6 = 0$ times. Terms of the form $aaa$ will appear once in the first term, thrice in the second, twice in the third, for a total of $(1 - 3 + 2)/6 = 0$ times.
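A brute-force confirmation of the closed form (my own snippet; for products of four or more digits the same comprehension idea, or Newton's identities for elementary symmetric polynomials, generalizes):

-- Sum of products over all 3-element subsets of {1,...,9}.
e3 :: Int
e3 = sum [a * b * c | a <- [1 .. 9], b <- [a + 1 .. 9], c <- [b + 1 .. 9]]

main :: IO ()
main = print e3  -- 9450, matching (45^3 - 3*285*45 + 2*2025) / 6
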
# Recognizing sequences sortable by transpositions? While reading the post, Probability of generating a desired permutation by random swaps, by Aaronson, and to continue my program I started in this post, How hard is reconstructing a permutation from its differences sequence?, I got interested in this problem of sorting: Input: a sequence $A$ of $2N$ positive integers (may contain repeated integers). Question: Is it possible to sort sequence $A$ using $N$ transpositions? Each transposition swaps two non-adjacent elements $a_i$ and $a_j$ in $A$ (two elements are not adjacent if $|i−j|>1$). This means that the two elements can not be adjacent in $A$. I guess the problem should be $NP$-complete. Is this problem $NP$-complete? Is there an obvious Karp reduction from the $NP$-hard problem in Aaronson's post? This was posted on TCS SE and had a bounty but without an answer. P.S. This problem has a nice geometric interpretation: It is equivalent to deciding the existence of a path of length at most N between two points on a special 2N-Permutahedron. Special permutahedron means that two nodes are connected by an edge if and only if the corresponding permutations are separated by one non-adjacent transposition. • I don't understand what you mean by swapping non-adjacent entries. Usually we consider either all transpositions, or adjacent transpositions. Paths on the permutohedron definitely correspond to sequences of adjacent transpositions. Also, there is a very simple polynomial time algorithm to decide if you can get from one permutation to another using a given number of adjacent transpositions (because of the definition of "length" of an element of the symmetric group in terms of inversions) – Sam Hopkins Apr 2 '18 at 23:45 • @SamHopkins It means swapping integers in two non-adjacent positions inside the array. Special permutahedron means that two nodes are connected by an edge if and only if the corresponding permutations are separated by one non-adjacent transposition. – Mohammad Al-Turkistany Apr 3 '18 at 3:40 • So the picture is not supposed to depict this “special permutohedron”? E.g., there is an edge between 1234 and 1324 in the picture, but that’s an adjacent swap, right? – Sam Hopkins Apr 3 '18 at 11:44 • @SamHopkins Yes, that's right. – Mohammad Al-Turkistany Apr 3 '18 at 12:14 • What is the Coxeter structure if you take all non-adjacent transpositions as generators and only the $(g_i g_j)^{m_{ij}}$ type relations. This doesn't quotient by nearly enough to get to $S_N$, but it is computationally nice. – AHusain Sep 2 '18 at 6:49
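For experimenting with tiny instances of the problem above, a brute-force feasibility checker (my own sketch; exponential time, and it decides "sortable with at most k such swaps"):

import Data.List (sort)

-- Can xs be sorted using at most k swaps of non-adjacent positions?
sortable :: Int -> [Int] -> Bool
sortable k xs
  | xs == sort xs = True
  | k == 0        = False
  | otherwise     = any (sortable (k - 1)) (nonAdjacentSwaps xs)

-- All sequences reachable by one swap of positions i and j with |i - j| > 1.
nonAdjacentSwaps :: [Int] -> [[Int]]
nonAdjacentSwaps xs = [swapAt i j xs | i <- [0 .. n - 1], j <- [i + 2 .. n - 1]]
  where n = length xs

swapAt :: Int -> Int -> [Int] -> [Int]
swapAt i j xs =
  [ if t == i then xs !! j else if t == j then xs !! i else x
  | (t, x) <- zip [0 ..] xs ]

For the decision problem as stated, one would call sortable n a on a sequence a of length 2n.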
# Effect of Graphite Content and Shape on Copper-Based Sintered Friction Materials

Choe, Byeong-Ho; Lee, Jong-Hyeong; Song, Gyeong-Tae
• Published : 1996.01.01

#### Abstract

The influence of the content (8–18 wt.%) and shape (flake or irregular) of graphite, used as the lubricant component of copper-based sintered materials, on frictional and mechanical properties was studied. The density, hardness and bending strength of friction materials with flake graphite were lower than those with irregular graphite and decreased rapidly as the graphite content increased up to 18 wt.%. In the friction test, the wear rate was about $2.0{-}2.5{\times}10^{-7}\textrm{cm}^3$/kgf·m and the coefficient of friction was 0.30–0.37, independent of graphite content and shape. As the temperature of the friction material increased, the wear rate decreased rapidly because oxides such as $Cu_2O$ and $SnO_2$ formed on the surface of the friction material.

#### Keywords

Cu-Based Sintered Friction Materials; Graphite; Coefficient of Friction; Wear Rate
# Constructing a Bounded Closed set 1. Sep 24, 2009 ### snipez90 1. The problem statement, all variables and given/known data i) Construct a bounded closed subset of R (reals) with exactly three limit points ii) Construct a bounded closed set E contained in R for which E' (set of limit points of E) is a countable set. 2. Relevant equations Definition of limit point used: Let A be a subset of metric space X. Then b is a limit point of A if every neighborhood of b contains a point of A different from b. 3. The attempt at a solution All right, so this seems pretty easy if you do it the lame way like I did. For i), you could just take the set containing 0 and 1/n for all natural numbers n, which obviously has 0 as its only limit point. Then take two more sets of the same kind, say 1 together with 1 + 1/n, and 2009 together with 2009 - 1/n, and form the union. Clearly we have boundedness. Closedness follows because a finite union of closed sets, each containing its limit points, is closed. It seems like we can extend the idea in i) to ii) as well (correct me if I'm wrong). However, is there a nicer way to construct these two sets? 2. Sep 24, 2009 ### Dick I think your 'lame' way is actually pretty nice.
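For (ii), one explicit construction in the same spirit (my own suggestion, worth checking against the definition above):

$$E = \{0\} \cup \left\{ \frac{1}{n} : n \in \mathbb{N} \right\} \cup \left\{ \frac{1}{n} + \frac{1}{m} : n, m \in \mathbb{N} \right\} \subset [0, 2].$$

Any limit of points of the form $1/n + 1/m$ is either some $1/n$ (when one index stays bounded) or $0$ (when both grow), so $E' = \{0\} \cup \{1/n : n \in \mathbb{N}\}$, which is countable and contained in $E$; hence $E$ is closed and bounded.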
MathSciNet bibliographic data MR534402 32A35 (30D55 46J15) Riesenberg, Nathaniel R. An extreme point in $H^{\infty}(U^{2})$. Proc. Amer. Math. Soc. 76 (1979), no. 1, 129–130.
Efficient Methods for Non-stationary Online Learning

Peng Zhao · Yan-Feng Xie · Lijun Zhang · Zhi-Hua Zhou

Keywords: [ adaptive regret ] [ online ensemble ] [ projection complexity ] [ non-stationary online learning ] [ Dynamic regret ]

Abstract: Non-stationary online learning has drawn much attention in recent years. In particular, \emph{dynamic regret} and \emph{adaptive regret} are proposed as two principled performance measures for online convex optimization in non-stationary environments. To optimize them, a two-layer online ensemble is usually deployed due to the inherent uncertainty of the non-stationarity, in which a group of base-learners are maintained and a meta-algorithm is employed to track the best one on the fly. However, the two-layer structure raises the concern about the computational complexity--those methods typically maintain $O(\log T)$ base-learners simultaneously for a $T$-round online game and thus perform multiple projections onto the feasible domain per round, which becomes the computational bottleneck when the domain is complicated. In this paper, we present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $O(\log T)$ to $1$. Moreover, our obtained algorithms require only one gradient query and one function evaluation at each round. Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods. Empirical studies verify our theoretical findings.
Journal article

### Selmer varieties for curves with CM Jacobians

Abstract: We study the Selmer variety associated to a canonical quotient of the $\mathbb{Q}_p$-pro-unipotent fundamental group of a smooth projective curve of genus at least two defined over $\mathbb{Q}$ whose Jacobian decomposes into a product of abelian varieties with complex multiplication. Elementary multi-variable Iwasawa theory is used to prove dimension bounds, which, in turn, lead to a new proof of Diophantine finiteness over $\mathbb{Q}$ for such curves.

Publication status: Published
Peer review status: Peer reviewed
Version: Accepted Manuscript
DOI: 10.1215/0023608X-2010-015

### Authors

Institution: University of Oxford, Mathematical Institute (MPLS)

Publisher: Duke University Press
Journal: Kyoto Journal of Mathematics
Volume: 50, Issue: 4, Pages: 827-852
Publication date: 2008-01-01
EISSN: 2154-3321, ISSN: 2156-2261
# what is the cyclic cover trick? What do people mean by the "cyclic cover trick"? I have found this expression a couple of times with no complete explanation, both talking about curves and surfaces... - The "cyclic cover trick" can refer to more than one thing. One example is as follows. Let $L$ be an invertible sheaf on a smooth projective scheme $X$ such that some power $L^{\otimes d}$ has a global section $s$ whose zero scheme $D$ is a smooth Cartier divisor (all of these smoothness conditions are not strictly necessary). Let $\nu:Y\to X$ be the associated degree $d$ cyclic cover, branched over $D$, whose restriction over $X\setminus D$ is the $\mu_d$-torsor corresponding to $L$. More precisely, $\nu$ is affine and $\nu_*\mathcal{O}_Y$ is the $\mathbb{Z}/d\mathbb{Z}$-graded $\mathcal{O}_X$-algebra $$\nu_*\mathcal{O}_Y = \mathcal{O}_X \oplus L^\vee \oplus \dots \oplus (L^\vee)^{\otimes (d-1)}.$$ Of course we have to say what the multiplication rule on this algebra is. But there is a unique multiplication rule that is $\mathbb{Z}/d\mathbb{Z}$-graded and compatible with the multiplication rule $s:(L^\vee)^{\otimes d} \to \mathcal{O}_X$. Now, for every $p$, $\nu_*\Omega^p_Y$ has a natural $\mathbb{Z}/d\mathbb{Z}$-grading, and the graded pieces turn out to be expressible in terms of tensor products of $\Omega^r_X$ with powers $(L^\vee)^{\otimes s}$. Now assume that $X$ is a $\mathbb{C}$-scheme. Then the Hodge theorem gives surjectivity of the various projections $H^r(Y^{\text{an}};\mathbb{C}) \to H^q(Y,\Omega^p_Y)$. Both groups have a $\mu_d$-action that is equivalent to a $\mathbb{Z}/d\mathbb{Z}$-grading. In particular, vanishing of certain graded pieces of $H^r(Y^{\text{an}};\mathbb{C})$ implies vanishing of certain cohomology groups $H^q(X,\Omega^p_X\otimes (L^\vee)^{\otimes s})$. In this way, one can prove the Kodaira-Akizuki-Nakano vanishing theorem.
# How to make a command that applies to a paragraph without curly braces? I have to make a global change in many enumerate lists, so I need to define a command that I can apply without braces, in such a way that it only applies until the end of the paragraph (or line), for example \item \changecolor text text text text text text text text text text text text text text text text text text text. % default color othertextothertextothertextothertextothertextothertextothertextot Is it possible to do that? • What is the change that you need to make? Are you wanting to change enumerate, and then have it change back? Could you post a minimal example that gives us a before and after (using braces if necessary so that we can see what you want to happen)? – Teepeemm Nov 18 '16 at 14:59 • The enumerate context is incidental. The command \changecolor should apply to any paragraph, in such a way that when it detects that the paragraph has ended, it stops (similar to closing the brace) – wmora2 Nov 18 '16 at 15:49 • it's possible but fragile and can easily break other code in the document, is there any reason not to simply use \item \textcolor{red}{text ...text} ? – David Carlisle Nov 18 '16 at 16:20 • I need to apply this procedure to the exercise lists of an extended book. The natural solution would be to put curly braces at the beginning and end of hundreds of paragraphs. But that's what I want to avoid – wmora2 Nov 18 '16 at 16:30 • But the solution you're looking for will still need you to put hundreds of \changecolors at the beginning of paragraphs. Is there some command or environment that is already present that you can simply modify? The "right way to do it" would be to put some macro or environment around those exercises, even if it takes a few hours of work. That would save you hours of headaches later, when a fragile command starts breaking other stuff. – Teepeemm Nov 18 '16 at 16:48 \def\changecolor{\begingroup\def\par{\endgroup\par}\color{red}} This opens a group and locally redefines \par, so the \par generated at the end of the paragraph closes the group and restores the previous colour.
# How to compute the norm of a complex number under square root? How to compute the norm of a complex number under square root? Does the square of norm equal the norm of square: $\|\sqrt z\|^2 = \|\sqrt {z^2}\|$? Let $z = re^{i\theta}$, then $$\|\sqrt z\|^2 =\|\sqrt {re^{i\theta}}\|^2 = \|\sqrt r \sqrt {e^{i\theta}}\|^2 =\|\sqrt r {e^{1/2i\theta}}\|^2 = \|r {e^{i\theta}}\|.$$ And $$\|\sqrt {z^2}\|=\|\sqrt {(re^{i\theta})^2}\| = \|\sqrt {r^2e^{2i\theta}}\|= \|{re^{i\theta}}\|.$$ I hope this is correct? Thank you. • $\sqrt z$ is ambiguous notation. – Pedro Tamaroff Jan 27 '14 at 10:27 • er...? I meant the most common one, like $\sqrt{1 + 2i}$ @PedroTamaroff – 1LiterTears Jan 27 '14 at 10:28 • Your calculations make no sense: the norm of a complex number is always a nonnegative real value. – heropup Jan 27 '14 at 10:29 • Oh sorry @heropup, I meant to keep the norm. Thanks! – 1LiterTears Jan 27 '14 at 10:30 • @1LiterTears There is no such thing as "the most common" squareroot of a complex number! – Pedro Tamaroff Jan 27 '14 at 10:40 We claim that, for any $w \in \mathbb{R}$ and $z \in \mathbb{C}$, $$\|z^w\| = \|z\|^w.$$ Proof: if $\|z\| = 0$, then $z = 0$ and the desired condition is trivially satisfied. So suppose $\|z\| > 0$. Then $$\|z^w\| = \|r^w e^{iw\theta}\| = \|r^w\|\|e^{iw\theta}\| = \|r^w\| = \|r\|^w,$$ since $r > 0$. Also, $$\|z\|^w = \|r e^{i\theta}\|^w = \|r\|^w \|e^{i\theta}\|^w = \|r\|^w 1^w = \|r\|^w.$$ So they are equivalent, even if $z^w$ is multivalued. • The definition of $z^\omega$ must be watched upon. This is where you lose generality. – AlexR Jan 27 '14 at 10:48 • It doesn't matter: the magnitude of $z^w$ for any real $w$ is the same regardless of the branch chosen. $z^w = \exp(w \log z) = \exp(w \log |z| + i w \arg(z) + 2\pi i w k)$, and the magnitude is clearly $|z|^w$. – heropup Jan 27 '14 at 11:28 • Still you can't define $z^\omega$ without choice of a branch. – AlexR Jan 27 '14 at 11:30 • Of course you can. $z^w$ is a set that corresponds to a relation on $\mathbb C^2$. Nowhere is it required that such relations are single-valued, and certainly not for the purposes of this identity. You don't need to force a choice of branch, and especially not to uniquely define the norm, which as I have pointed out, is unique regardless of branch. – heropup Jan 27 '14 at 11:38 • You should add that to your original answer (that $z^\omega$ need not necessarily be a holomorphic function on $\mathbb C$). – AlexR Jan 27 '14 at 11:47 This is not correct. $|e^{i\theta}|=1 \quad \forall \theta\in\mathbb R$. Thus $$\Vert \sqrt z \Vert^2 = \Vert \sqrt r e^{i\frac\theta2} \Vert^2 = \Vert \sqrt r \Vert^2 = (\sqrt r)^2 = r$$ Where $r\ge 0$ by convention. Note that $\sqrt z$ needs clarification. The usual definition excludes a line from $0$ to $\infty$, normally either $(-\infty, 0)$ or $(0, i\infty)$ then $$\sqrt z := \sqrt{|z|} e^{\frac12 i \arg z}$$ With regards to the edit: Still you have the problem of defining the square root in the complex plane. If you chose a suitable definition, which limits you to a certain domain, equality will hold whenever $z$ and $z^2$ are in the domain of your square root. • Hi Alex, thanks a lot. May I first ask - how can I deal with the simple case $\sqrt{1+i}$? 
– 1LiterTears Jan 27 '14 at 10:37 • $(1+i)^2 = 2i$, so taking the square root to be $$\sqrt{\cdot} : \mathbb C \setminus (-\infty, 0) \to \mathbb C$$ Allows to write $|(1+i)^2| = 2 = {\sqrt 2}^2 = |1+i|^2$ – AlexR Jan 27 '14 at 10:40 • Sorry for unclear - I am actually concerned about the case $\|\sqrt{1+i}\|$..? – 1LiterTears Jan 27 '14 at 10:43 • In this case, the same definition of the square root will help you find $$\Vert \sqrt{1+i} \Vert = \sqrt[4]2$$ – AlexR Jan 27 '14 at 10:45 • A small error: $\arg (1+i) = \frac\pi2 \neq \arg \frac\pi2$ – AlexR Jan 27 '14 at 10:49
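A quick numerical check with GHC's Data.Complex (my own snippet; sqrt on Complex Double uses the principal branch, consistent with the definition quoted above):

import Data.Complex

main :: IO ()
main = do
  let z = 1 :+ 1 :: Complex Double
  print (magnitude (sqrt z) ^ 2)  -- 1.41421... (= sqrt 2 = |1 + i|)
  print (magnitude (sqrt z))      -- 1.18920... (= 2 ** 0.25, AlexR's fourth root of 2)
  print (magnitude z)             -- 1.41421...
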
# Could it be possible to build a 4-vector in special relativity whose spatial component is the electric field E? Hi everyone and sorry for my English. I would like to know if I can build a legitimate 4-vector as $$E^\alpha=(E^0,\mathbf{E})$$. I'd like you to check if my way is correct. 1- We already know that $$\mathbf{E}$$ transforms under a Lorentz boost as: \begin{aligned} \mathbf{E}'&=\gamma\left(\mathbf{E}+\vec{\beta}\times\mathbf{B} \right)-\dfrac{\gamma^2}{\gamma+1}\vec{\beta}\left(\vec{\beta}\cdot\mathbf{E}\right)\\[5mm] &\text{So:}\\[5mm] E'_\parallel&=E_\parallel\\ \mathbf{E}_\perp'&=\gamma\left(\mathbf{E}_\perp+\vec{\beta}\times\mathbf{B} \right) \end{aligned} 2- While the spatial component of any 4-vector must obey the following rule: \begin{aligned} E_\parallel'&=\gamma(E_{\parallel}-\beta E^0)\\ \mathbf{E}_\perp'&=\mathbf{E}_\perp \end{aligned} So both expressions must be equal: \left\{ \begin{aligned} \gamma E_\parallel-\gamma\beta E^0&=E_\parallel\\ \gamma\mathbf{E}_\perp+\gamma\vec{\beta}\times\mathbf{B}&=\mathbf{E}_\perp \end{aligned} \right. From the first one we can conclude that the time component of the 4-vector must be $$E^0=\dfrac{\gamma-1}{\gamma\beta}E_\parallel$$ or $$E^0=\dfrac{\gamma-1}{\gamma\beta^2}\vec{\beta}\cdot\mathbf{E}$$ But what can we conclude from the second one? Is it therefore possible to build that 4-vector $$E^\alpha$$? Thank you very much I thought G. Smith's answer was fine in terms of explaining the physics involved, but the OP says: Thank you G.Smith. I know about the electromagnetic tensor and its properties, but I was wondering if there is a formal proof about the impossibility of building that $$E^\alpha$$ tensor, based on the allowed transformations, as I've tried. Your deduction is of the form X => Y, where X seems to be the proposition that one can make a four-vector of the form $$(E^0,\textbf{E})$$. What is not totally clear to me about your X is what other data you think should be allowed to be encoded in $$E^0$$, but anyway I think it's possible to give a nonexistence proof without needing to clarify that point. You've proved some equations involving $$E^0$$ which vanish when the electric field is zero. Therefore when the field is zero, your 4-vector vanishes. But a Lorentz transformation on a zero vector always gives a zero vector, so you've proved that if an electric field is zero in one frame of reference, it's zero in all other frames. This is false, so we have a proof by contradiction that X is false. No, it is not possible to make a four-vector from the electric field. But from the electric field and the magnetic field together you can make a four-tensor, $$F_{\mu\nu}$$. https://en.m.wikipedia.org/wiki/Electromagnetic_tensor This is because electric and magnetic fields transform into each other under Lorentz transformations. The transformed electric field is a linear combination of the untransformed electric field and the untransformed magnetic field. And similarly for the transformed magnetic field. The lesson is that electric and magnetic fields are just two aspects of one unified thing, the electromagnetic field. • Here is a nice article about the unification of $\vec{E}$ and $\vec{B}$. arxiv.org/abs/1111.7126 – N. Steinle Jan 13 at 18:50 • Thank you G.Smith. I know about the electromagnetic tensor and its properties, but I was wondering if there is a formal proof about the impossibility of building that $E^\alpha$ tensor, based on the allowed transformations, as I've tried.
What's wrong with my deduction? Thanks! – Dani Jan 13 at 18:51 • You don't have a deduction to criticize! You failed to find a four-vector that satisfied the equations you wrote down. Since your second equation involves the magnetic field, how could you possibly expect to satisfy it, when your first equation requires $E^0$ to be a combination of the components of the electric field? This seems like proof to me that what you want is impossible. And, of course, if what you were trying to do were possible, it would have been done 100 years ago. – G. Smith Jan 13 at 20:16 • You may “know about the electromagnetic tensor and its properties” but you didn't grasp its relevance. If electric and magnetic fields mix together under a Lorentz transformation, then they cannot also stay unmixed under a Lorentz transformation, as in your failed attempt. Things either mix or they don't. They can't do both. – G. Smith Jan 13 at 20:22 Your formula for $$E_0$$ depends on $$\beta$$. If there WERE a legal four-vector for the electric field, its components can't depend on the Lorentz transformation you do. Your formula for $$E_0$$ should be independent of $$\beta$$. But as you show with your algebra above, this is not possible. • (I'll add that you also can't solve your second equation at all if you allow arbitrary magnetic fields. You can easily see this by taking the derivative with respect to any component of the magnetic field on both sides of that equation. One side will have derivative zero, the other will not. So it simply can't be solved) – Jahan Claes Jan 14 at 0:16 Maybe I could say that my derivation is not possible because the time coordinate $$E^0$$ depends on another coordinate ($$E_\parallel$$), and that is not allowed because the coordinates of a 4-vector must be independent? Is this a feasible answer that proves what I want? • The problem is your time coordinate depends on $\beta$. It's perfectly fine to have it depend on spatial stuff. – Jahan Claes Jan 14 at 0:13 As observed by an inertial observer, the Electric Field is a spatial vector, which means that its time-component in that frame is always zero. In addition, the Magnetic Field is also a spatial vector, and thus has zero time-component. As @G. Smith notes, the electric and magnetic fields transform by mixing components (because the electric and magnetic fields are components of a two-index tensor) and remain spatial, which is not like 4-vectors (since the time-component of a 4-vector won't generally stay zero after transformation). update: Up to sign conventions, $$E_b=F_{ab}u^a$$ is the electric field according to the observer with 4-velocity $$u^a$$. (It is an observer-dependent four-vector.) But since $$F_{ab}=F_{[ab]}$$, it follows that $$E_bu^b=F_{ab}u^a u^b=0,$$ that is, the observer with 4-velocity $$u^b$$ measures the time-component of $$E_b$$ to be zero. Thus, $$E_b$$ has only spatial components for that observer. • This reasoning seems a little circular. Of course if you assume that the putative electric field four-vector is equal to the first row of the electromagnetic field tensor, then it's trivially true that it has a zero timelike component and has the wrong transformation properties. – Ben Crowell Jan 13 at 23:23 • This formulation is based on a tensorial development of the field tensor and Maxwell Equations, as found in Misner-Thorne-Wheeler [Ch 3.1] and in Wald [Ch 4.2], which is more geometrical and elegant compared to matrix representations and clumsy 3-vector formulations.
The magnetic field is defined analogously with the Hodge-dual *F. (In other words, is there a more elegant way to describe the clumsier coordinate-based calculations and transformation formulas to demonstrate Lorentz invariance? Yes, use tensors throughout.) – robphy Jan 14 at 0:34
# Observation of a Discrete Time Crystal (Preprint)

### Abstract

Spontaneous symmetry breaking is a fundamental concept in many areas of physics, ranging from cosmology and particle physics to condensed matter. A prime example is the breaking of spatial translation symmetry, which underlies the formation of crystals and the phase transition from liquid to solid. Analogous to crystals in space, the breaking of translation symmetry in time and the emergence of a "time crystal" was recently proposed, but later shown to be forbidden in thermal equilibrium. However, non-equilibrium Floquet systems subject to a periodic drive can exhibit persistent time-correlations at an emergent sub-harmonic frequency. This new phase of matter has been dubbed a "discrete time crystal" (DTC). Here, we present the first experimental observation of a discrete time crystal, in an interacting spin chain of trapped atomic ions. We apply a periodic Hamiltonian to the system under many-body localization (MBL) conditions, and observe a sub-harmonic temporal response that is robust to external perturbations. Such a time crystal opens the door for studying systems with long-range spatial-temporal correlations and novel phases of matter that emerge under intrinsically non-equilibrium conditions.

### Most cited references

### Many body localization and thermalization in quantum statistical mechanics (2014)

We review some recent developments in the statistical mechanics of isolated quantum systems. We provide a brief introduction to quantum thermalization, paying particular attention to the 'Eigenstate Thermalization Hypothesis' (ETH), and the resulting 'single-eigenstate statistical mechanics'. We then focus on a class of systems which fail to quantum thermalize and whose eigenstates violate the ETH: These are the many-body Anderson localized systems; their long-time properties are not captured by the conventional ensembles of quantum statistical mechanics. These systems can locally remember forever information about their local initial conditions, and are thus of interest for possibilities of storing quantum information. We discuss key features of many-body localization (MBL), and review a phenomenology of the MBL phase. Single-eigenstate statistical mechanics within the MBL phase reveals dynamically-stable ordered phases, and phase transitions among them, that are invisible to equilibrium statistical mechanics and can occur at high energy and low spatial dimensionality where equilibrium ordering is forbidden.

### Periodically driven ergodic and many-body localized quantum systems (2015)

We study dynamics of isolated quantum many-body systems under periodic driving. We consider a driving protocol in which the Hamiltonian is switched between two different operators periodically in time. The eigenvalue problem of the associated Floquet operator maps onto an effective hopping problem in energy space. Using the effective model, we establish conditions on the spectral properties of the two Hamiltonians for the system to localize in energy space.
We find that ergodic systems always delocalize in energy space and heat up to infinite temperature, for both local and global driving. In contrast, many-body localized systems with quenched disorder remain localized at finite energy. We argue that our results hold for general driving protocols, and discuss their experimental implications.

### Manipulation and Detection of a Trapped Yb+ Ion Hyperfine Qubit (2007)

We demonstrate the use of trapped ytterbium ions as quantum bits for quantum information processing. We implement fast, efficient state preparation and state detection of the first-order magnetic field-insensitive hyperfine levels of 171Yb+, with a measured coherence time of 2.5 seconds. The high efficiency and high fidelity of these operations is accomplished through the stabilization and frequency modulation of relevant laser sources.

### Author and article information

Preprint, posted 2016-09-27, arXiv:1609.08684
Article

# Stability of Jensen-Type Functional Equations on Restricted Domains in a Group and Their Asymptotic Behaviors

01/2012; 2012. DOI: 10.1155/2012/691981

ABSTRACT We consider the Hyers-Ulam stability problems for the Jensen-type functional equations in general restricted domains. The main purpose of this paper is to find the restricted domains for which the functional inequality satisfied in those domains extends to the inequality for the whole domain. As consequences of the results we obtain the asymptotic behavior of the equations.

• ##### Article: Stability of a conditional Cauchy equation
ABSTRACT: Let $\mathbb{R}$ be the set of real numbers, $f : \mathbb{R} \to \mathbb{R}$, $\epsilon \ge 0$ and $d > 0$. We denote by $\{(x_1, y_1), (x_2, y_2), (x_3, y_3), \ldots\}$ a countable dense subset of $\mathbb{R}^2$ and let $$U_d:=\bigcup\nolimits_{j=1}^{\infty} \{(x, y)\in \mathbb{R}^2:\,|x|+|y| > d,\, |x-x_j| < 1,\, |y-y_j| < 2^{-j}\}.$$ We consider the Hyers-Ulam stability of the conditional Cauchy functional inequality $$|f(x+y)-f(x)-f(y)|\le \epsilon$$ for all $(x, y) \in U_d$. (Aequationes Mathematicae, Jul 2012)

• ##### Article: ASYMPTOTIC BEHAVIORS OF ALTERNATIVE JENSEN FUNCTIONAL EQUATIONS-REVISITED
ABSTRACT: In this paper, using an efficient change of variables we refine the Hyers-Ulam stability of the alternative Jensen functional equations of J. M. Rassias and M. J. Rassias, obtain much better bounds, and remove some unnecessary conditions imposed in the previous result. Also, viewing the fundamentals of what makes our method work, we establish an abstract version of the result, consider the functional equations defined in restricted domains of a group, and prove their stabilities. (Nov 2012)

• ##### Article: Conditional functional equations on restricted domains of measure zero
ABSTRACT: Let $\mathbb{R}_0^2 = \mathbb{R}^2\setminus\{(0,0)\}$, $\mathbb{R}_*^2 = \{(x,y) \in \mathbb{R}^2 : x^2 \ne y^2\}$ and $f : \mathbb{R}_0^2 \to \mathbb{R}$, $g : \mathbb{R}_*^2 \to \mathbb{R}$. In this paper we consider the Ulam-Hyers stability of the functional equations $f(ux + vy, uy-vx) = f(x, y) + f(u, v)$, $f(ux-vy, uy + vx) = f(x, y) + f(u, v)$, $g(ux-vy, uy-vx) = g(x, y) + g(u, v)$, $g(ux + vy, uy + vx) = g(x, y) + g(u, v)$ for all $(x, y, u, v)\in\Gamma$, where $\Gamma\subset\mathbb{R}^4$ is of 4-dimensional Lebesgue measure zero. The above functional equations are modified versions of the equations in [9,11,14,18,24] which arise from number theory and are in connection with characterizations of the determinant and permanent of two-by-two matrices. (Journal of Mathematical Analysis and Applications, Oct 2015)
# Solar Hot Water System Controller using an ESP32

Recently, I helped my dad install a new solar hot water system onto our roof, replacing the old flat plate collectors with more modern evacuated tubes. The Canberra winters were harsh on the old collectors and they had a tendency to freeze overnight, damaging the internal piping and causing leaks. The new collectors are much more efficient, and take up much less space on the roof. The system worked great for several months, but in the summer we found they worked a little too well.

Photo of the new rooftop collector

After a couple of very hot summer days, we discovered that the water stored in the tank had reached near boiling point. To avoid boiling the storage tank, the controller stopped pumping water through the collector. With the old flat plate collector system, this would have been fine as they can radiate heat back into the environment, but with the evacuated tubes the temperature in the collector rapidly climbed. The stagnant water in the collectors boiled off and turned to super-hot steam. Of course, the system has a pressure relief valve to prevent the tank from turning into a bomb, but the damage was already done. The steam melted the pump impeller and all the plastic valves rated only to 100 ℃. Clearly, a better solution was needed so that the same thing didn't happen again. The pump controller was not designed to handle the new type of collectors, and there was also no way to see what temperature the tank was currently at. If we had known earlier, we could have released some of the hot water to cool down the tank. Thus, I decided to re-design the controller and replace the existing one.

# System Overview

The hot water system is relatively simple. Hot water is stored in a tank, cold water is fed in at the bottom and hot water is drawn from the top. The controller monitors the temperature at the top and bottom of the tank, as well as the temperature in the collector manifold. If the sun is shining, the collector heats up. When the controller detects that the collector temperature is high enough, it starts the pump, which pumps the cold tank water through the collector. The water heats up and then mixes back into the tank.

Diagram of the solar hot water system

For our new controller, we need to be able to do the following things:

• Read the temperature from the 3 sensors in the system
• Decide when to turn the pump on and off
• Switch the pump on or off
• Display the current temperature and system status
• Alert someone when the temperature is too high

# Hardware Design

The most challenging aspect of the design was figuring out how to read the temperature sensors. After a lot of googling, I eventually discovered that the sensors were PT1000 Resistance Temperature Detectors (RTDs). These sensors are simply resistors whose resistance depends on their temperature. They are called PT1000 sensors because they are made out of platinum, and have a resistance of 1000 Ω at 0 ℃. Their resistance increases roughly linearly with temperature, but it's more accurate to use resistance tables or a quadratic fit.

We'll be using the ESP32 as the microcontroller, but it can't measure resistance directly. However, it does have an ADC which we can use to indirectly measure the resistance by measuring a voltage. We can use a voltage divider circuit and measure the output voltage. Knowing the input voltage and one of the resistors, we can figure out the value of the other resistor (the RTD we are trying to measure).
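As a quick sanity check of this divider arithmetic, here is a sketch of mine (it assumes the RTD forms the lower leg against the 10 kΩ reference and 3.3 V supply adopted just below, and uses the standard PT1000 coefficient of about 0.385%/K):

-- Divider output for a given RTD resistance (RTD as the lower leg).
vOut :: Double -> Double
vOut rRtd = 3.3 * rRtd / (rRtd + 10000)

-- PT1000 resistance, linear approximation R(T) ≈ 1000 (1 + 0.00385 T).
rPt1000 :: Double -> Double
rPt1000 t = 1000 * (1 + 3.85e-3 * t)

main :: IO ()
main = mapM_ (print . vOut . rPt1000) [0, 100]  -- ≈ 0.30 V and ≈ 0.40 V
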
We use a 10 kΩ resistor and a 3.3 V input voltage, which means that at 0 ℃, the RTD will be 1000 Ω, and the output voltage will be 0.30 V. At 100 ℃, the RTD increases to 1385 Ω, giving 0.40 V at the output. However, our ADC is capable of reading values from 0 V to 3.3 V and only has 12 bits of precision. If we just read the voltage divider directly, we would lose precision in our temperature readings. To fix this, we can add in an instrumentation amplifier, to subtract a reference voltage and amplify the difference.

Instrumentation amplifier circuit for RTD sensing. The RTD is connected to the terminals on the left.

Despite our careful planning and circuit theory, the real world is never perfect. Imperfect tolerances in the resistance values and non-ideal characteristics of the opamps and the ADC mean that we need to calibrate our circuit. To do this, I used a variable resistor in place of the RTD, which I could set to a specific resistance value using a multimeter. I then measured the ADC value for a number of resistance values in order to determine the relationship. From this, we can use the trendline to calculate the resistance value from whatever ADC value we read from the ESP32.

## Full Schematic Design

The rest of the schematic is relatively simple. We make 3 copies of the instrumentation amplifier to measure each temperature sensor individually. I also added a small OLED LCD display to show the current temperature, and a 240 V relay board to control the pump. The LCD is driven over I2C, and the relay just requires you to pull the wire to ground to activate it. I also added some capacitors to reduce the noise in the RTD sensors. The RTD at the top of the collector is about 10 m away, and the wire basically acts as a big antenna and introduces a lot of noise into the circuit. A big capacitor helps to smooth out the noise and gives more stable readings on the ADC.

Final schematic of the circuit

After designing the schematic, I designed the board layout. I wasn't going to get a custom circuit board fabricated, so I just used a perfboard and hand-soldered everything on.

Physical layout of the circuit board

Overall, I'm pretty happy with how it turned out. Everything fits neatly and there's even some room to spare.

## Putting Everything Together

I reused the box the old controller was in and managed to squeeze all of the required components in. I moulded some spare plastic to mount the LCD display, which sits on top of the controller box. The three RTD sensors are connected in the bottom left, and 5 V power is supplied in the bottom right. The two cables on the right side are 240 V mains power at the bottom and the power to the pump at the top. The mains is connected to the terminal block, through the relay board and then to the pump.

The final assembled controller

# Software Design

I wrote up a program for the ESP32 using the Arduino framework and PlatformIO. The control loop is essentially:

1. Read the RTD sensors and take the average of the last 20 readings.
2. Update the display to show the current temperature or pump status
3. Determine if the pump needs to be switched on or off
4. Upload the current temperature and status to the cloud using MQTT

The decision to turn on the pump is modelled as a state machine with 4 different states:

• AUTO_OFF: The default mode
• AUTO_ON: If the collector is warmer than the bottom of the tank, the pump turns on and the collector heats up the water.
• AUTO_BOIL_PROTECT: If the collector temperature gets too high, the pump will circulate water to prevent the collector from boiling over. Human intervention might be needed to cool the tank down. • AUTO_FREEZE_PROTECT: In the winter, if the collector temperature drops near zero, the pump will turn on to circulate warmer water through the collector to prevent the manifold from freezing. PumpMode get_next_mode(SystemStatus status){ switch(current_mode){ case PumpMode::AUTO_BOIL_PROTECT: if(status.temperature_solar < boil_protect_off){ // Solar has cooled down below the threshold return PumpMode::AUTO_OFF; } else { // Solar is still above the boiling threshold return PumpMode::AUTO_BOIL_PROTECT; } case PumpMode::AUTO_FREEZE_PROTECT: if(status.temperature_solar > freeze_protect_off){ // Solar has warmed up above the threshold return PumpMode::AUTO_OFF; } else { // Solar is below freezing threshold return PumpMode::AUTO_FREEZE_PROTECT; } case PumpMode::AUTO_ON: if(status.temperature_solar - status.temperature_bottom < solar_temp_difference_off){ // Solar is no longer above the required threshold return PumpMode::AUTO_OFF; } else { return PumpMode::AUTO_ON; } case PumpMode::AUTO_OFF: if (status.temperature_solar - status.temperature_bottom > solar_temp_difference_on){ // Normal Operation - Solar is hotter than the tank, turn pump on return PumpMode::AUTO_ON; } else if (status.temperature_solar > boil_protect_on){ // Turn on pump so solar does not boil return PumpMode::AUTO_BOIL_PROTECT; } else if (status.temperature_solar < freeze_protect_on) { // Turn on pump so solar does not freeze over return PumpMode::AUTO_FREEZE_PROTECT; } else { // Nothing Changed return PumpMode::AUTO_OFF; } default: return PumpMode::AUTO_OFF; } } ## Graphing the data The information collected from the controller is uploaded every minute using the MQTT protocol. I originally used the Cloud4Rpi service, but have since moved to a self-hosted InfluxDB and Grafana server. Grafana dashboard showing the system status The dashboard shows the current temperatures, the current state of the pump and a history of the temperature over time. You can see the collector temperature (in blue) warm up in the morning until the pump turns on and the collector is flushed with cooler water. Shortly after the pump turns off and the collector starts to heat up again. The graph shown above is of a particularly cloudy day where the collector doesn’t heat up very much. Example of a typical day in summer The graph above is from the old Cloud4Rpi dashboard, but shows a typical summer’s day. In the middle of the day, the pump is continuously running and the tank temperature starts to rise. ### Freeze Protection Example of how freeze protect works This graph shows an example of the freeze protection. The solar collector temperature slowly cools down overnight until the pump turns on to raise the temperature back up. At the same point you can see a slight dip in the temperature of the bottom of the tank because of the freezing cold water being pumped in. # Wrapping Up Overall, this project has been very successful. The new controller is much better, more customisable and provides more information about the system. It was also a fun and enjoyable engineering project helping me to build my electronics skills and experience.
cat.dt {cat.dt} R Documentation

## cat.dt: Computerized Adaptive Testing and Decision Trees

### Description

The cat.dt package implements the Merged Tree-CAT method to generate Computerized Adaptive Tests (CATs) based on a decision tree. The tree growth is controlled by merging branches with similar trait distributions and estimations. The package has the necessary tools for creating CATs and estimating the subject's ability level. The Merged Tree-CAT method is an extension of the Tree-CAT method (see Delgado-Gómez et al., 2019 <doi:10.1016/j.eswa.2018.09.052>).

CAT_DT
### 5 Elements

An element of an object M is internally represented by a morphism from the "structure object" to the object M. In particular, the data structure for object elements automatically profits from the intrinsic realization of morphisms in the homalg project.

#### 5.1 Elements: Category and Representations

##### 5.1-1 IsHomalgElement

‣ IsHomalgElement( M ) ( category )

Returns: true or false

The GAP category of object elements.

##### 5.1-2 IsElementOfAnObjectGivenByAMorphismRep

‣ IsElementOfAnObjectGivenByAMorphismRep( M ) ( representation )

Returns: true or false

The GAP representation of elements of finitely presented objects. (It is a representation of the GAP category IsHomalgElement (5.1-1).)

#### 5.3 Elements: Properties

##### 5.3-1 IsZero

‣ IsZero( m ) ( property )

Returns: true or false

Check if the object element m is zero.

##### 5.3-2 IsCyclicGenerator

‣ IsCyclicGenerator( m ) ( property )

Returns: true or false

Check if the object element m is a cyclic generator.

##### 5.3-3 IsTorsion

‣ IsTorsion( m ) ( property )

Returns: true or false

Check if the object element m is a torsion element.

#### 5.4 Elements: Attributes

##### 5.4-1 Annihilator

‣ Annihilator( e ) ( attribute )

Returns: a homalg subobject

The annihilator of the object element e as a subobject of the structure object.

#### 5.5 Elements: Operations and Functions

##### 5.5-1 in

‣ in( m, N ) ( attribute )

Returns: true or false

Is the element m of the object M included in the subobject N ≤ M, i.e., does the morphism (with the unit object as source and M as target) underlying the element m of M factor over the subobject morphism N -> M?

gap> ZZ := HomalgRingOfIntegers( );
Z
gap> M := 2 * ZZ;
<A free left module of rank 2 on free generators>
gap> a := HomalgModuleElement( "[ 6, 0 ]", M );
( 6, 0 )
gap> N := Subobject( HomalgMap( "[ 2, 0 ]", 1 * ZZ, M ) );
<A free left submodule given by a cyclic generator>
gap> K := Subobject( HomalgMap( "[ 4, 0 ]", 1 * ZZ, M ) );
<A free left submodule given by a cyclic generator>
gap> a in M;
true
gap> a in N;
true
gap> a in UnderlyingObject( N );
true
gap> a in K;
false
gap> a in UnderlyingObject( K );
false
gap> a in 3 * ZZ;
false

InstallMethod( \in, "for homalg elements",
        [ IsHomalgElement, IsStaticFinitelyPresentedSubobjectRep ],
  function( m, N )
    local phi, psi;

    phi := UnderlyingMorphism( m );
    psi := MorphismHavingSubobjectAsItsImage( N );

    if not IsIdenticalObj( Range( phi ), Range( psi ) ) then
        Error( "the super object of the subobject and the range ",
               "of the morphism underlying the element do not coincide\n" );
    fi;

    return IsZero( PreCompose( phi, CokernelEpi( psi ) ) );

end );
Conference Paper

# Rates of convergence for the cluster tree

Conference: Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada.

ABSTRACT For a density f on R^d, a high-density cluster is any connected component of {x: f(x) ≥ λ}, for some λ > 0. The set of all high-density clusters forms a hierarchy called the cluster tree of f. We present a procedure for estimating the cluster tree given samples from f. We give finite-sample convergence rates for our algorithm, as well as lower bounds on the sample complexity of this estimation problem.

Citing articles:

• "Note that the persistence of the cluster structure over a small range of levels ρ ∈ (ρ*, ρ**] is assumed either explicitly or implicitly in basically all density-based clustering approaches that deal with several levels ρ, see e.g. [5] [17]."

##### Article: Fully Adaptive Density-Based Clustering

ABSTRACT: The clusters of a distribution are often defined by the connected components of a density level set. However, this definition depends on the user-specified level. We address this issue by proposing a simple, generic algorithm, which uses an almost arbitrary level set estimator to estimate the smallest level at which there are more than one connected components. In the case where this algorithm is fed with histogram-based level set estimates, we provide a finite sample analysis, which is then used to show that the algorithm consistently estimates both the smallest level and the corresponding connected components. We further establish rates of convergence for the two estimation problems, and last but not least, we present a simple, yet adaptive strategy for determining the width-parameter of the involved density estimator in a data-dependent way.
The Annals of Statistics 09/2014; 43(5). DOI:10.1214/15-AOS1331

• "The present results are based in part on earlier conference versions, namely Chaudhuri and Dasgupta (2010) and Kpotufe and von Luxburg (2011). The result of Chaudhuri and Dasgupta (2010) analyzes the consistency of the first cluster tree estimator (see next section) but provides no pruning method for the estimator."

##### Article: Consistent Procedures for Cluster Tree Estimation and Pruning

ABSTRACT: For a density $f$ on ${\mathbb R}^d$, a {\it high-density cluster} is any connected component of $\{x: f(x) \geq \lambda\}$, for some $\lambda > 0$. The set of all high-density clusters forms a hierarchy called the {\it cluster tree} of $f$. We present two procedures for estimating the cluster tree given samples from $f$. The first is a robust variant of the single linkage algorithm for hierarchical clustering. The second is based on the $k$-nearest neighbor graph of the samples. We give finite-sample convergence rates for these algorithms which also imply consistency, and we derive lower bounds on the sample complexity of cluster tree estimation. Finally, we study a tree pruning procedure that guarantees, under milder conditions than usual, to remove clusters that are spurious while recovering those that are salient.
IEEE Transactions on Information Theory 06/2014; 60(12). DOI:10.1109/TIT.2014.2361055

• "For these procedures, the relevant density levels are the edge weights of G. Frequently, iteration over these levels is done by initializing G with an empty edge set and adding successively more heavily weighted edges, in the manner of traditional single linkage clustering. In this family, the Chaudhuri and Dasgupta algorithm (which is a generalization of Wishart (1969)) is particularly interesting because the authors prove finite sample rates for convergence to the true level set tree (Chaudhuri and Dasgupta 2010). To the best of our knowledge, however, only Stuetzle and Nugent (2010) has a publicly available implementation, in the R package gslclust."

##### Article: DeBaCl: A Python Package for Interactive DEnsity-BAsed CLustering

ABSTRACT: The level set tree approach of Hartigan (1975) provides a probabilistically based and highly interpretable encoding of the clustering behavior of a dataset. By representing the hierarchy of data modes as a dendrogram of the level sets of a density estimator, this approach offers many advantages for exploratory analysis and clustering, especially for complex and high-dimensional data. Several R packages exist for level set tree estimation, but their practical usefulness is limited by computational inefficiency, absence of interactive graphical capabilities and, from a theoretical perspective, reliance on asymptotic approximations. To make it easier for practitioners to capture the advantages of level set trees, we have written the Python package DeBaCl for DEnsity-BAsed CLustering. In this article we illustrate how DeBaCl's level set tree estimates can be used for difficult clustering tasks and interactive graphical data analysis. The package is intended to promote the practical use of level set trees through improvements in computational efficiency and a high degree of user customization. In addition, the flexible algorithms implemented in DeBaCl enjoy finite sample accuracy, as demonstrated in recent literature on density clustering. Finally, we show the level set tree framework can be easily extended to deal with functional data.
# Another textbook problem on probability Here's a problem in my textbook: "From a set of 2n + 1 consecutively numbered tickets, three are selected at random without replacement. Find the probability that the numbers of the tickets form an arithmetic progression. [The order in which the tickets are selected does not matter.]" I tried to solve it by first arbitrarily choosing 2 numbers from the first n+1 tickets. This is because in any 3 tickets in an arithmetic progression there must be at least 2 tickets from the first half of the pile. Once we've chosen our two numbers there is only one possible third ticket. Thus the probability must be 2C3/[(2n+1)C3]. However, the textbook gives n^2/[(2n+1)C3]. What went wrong? - There need not be two from the first half. Let $n=3$ so the tickets range from $1$ to $7$. $5,6,7$ are in arithmetic progression. Isn't 2C3=0? –  Ross Millikan Jan 15 '13 at 21:28 ${{2n+1} \choose 3}$ is the number of ways to pick three tickets, so the book is claiming that there are $n^2$ ways to select three ticket in arithmetic progression. To see this, if we count the ways to pick the highest and lowest such that they have the same parity, there is a specific ticket for the middle. If the lowest is $1$, there are $n$ choices, if the lowest is $2$ or $3$, there are $n-1$, if the lowest is $4$ or $5$ there are $n-2$, so the total is $n+2(n-1)+2(n-2)+\ldots+2(1)=\frac {n(n+1)}2+\frac {(n-1)n}2=n^2$ - This is because in any 3 tickets in an arithmetic progression there must be at least 2 tickets from the first half of the pile. One of the things that went wrong is that your reasoning here is false. For example, if $n=10$, so you have tickets from 1 to 21, say, then 19,20,21 is an arithmetic progression that doesn't contain any tickets from the first half of the pile. Once we've chosen our two numbers there is only one possible third ticket. is also false. Back to the first example, if you selected 16 and 18, you could then select either 14, 17, or 20, any of which would give an arithmetic progression. - Why is n^2/[(2n+1)C3] –  user54609 Jan 15 '13 at 21:30 @EricDong That wasn't your question. Your question was "What went wrong", and Jonathan gave a good explanation of why all your steps are wrong. –  Calvin Lin Jan 15 '13 at 21:36 Hint: We count the number of increasing arithmetic progressions. I suggest you do it the hard way, by working out a detailed example, and then generalizing. Let $2n+1=13$. How many AP are there with middle term $2$? Clearly only $1$. How many AP with middle term $3$? Clearly $2$. How many with middle term $4$? Clearly there are $3$. Continue. Note that there are $6$ with middle term $7$, and then the numbers start decreasing. -
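As a quick sanity check of the book's $n^2$ count (my own addition, not part of the thread), take $n = 2$, i.e. tickets $1,\dots,5$. The increasing arithmetic progressions are

$$\{1,2,3\},\ \{2,3,4\},\ \{3,4,5\},\ \{1,3,5\} \;\Longrightarrow\; 4 = n^2,$$

so the probability is $n^2/\binom{2n+1}{3} = 4/\binom{5}{3} = 2/5$.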
# Magnetostatics MCQ || Magnetostatics Questions and Answers

Ques.1. Find H = ___________ A/m at the center of a circular coil of diameter 1 m and carrying a current of 2 A.

1. 0.6366
2. 0.1636
3. 6.366
4. 2

Answer.4. 2

Explanation:-

The magnetic field intensity (H) at the center of a circular coil is given by $H = \frac{I}{2R}$, where I is the current flowing through the coil and R is the radius of the coil.

Calculation: Given current I = 2 A and diameter 1 m (so R = 0.5 m), the magnetic field intensity is

$H = \frac{2}{2 \times 0.5} = 2\ \text{A/m}$

Common Mistake: The magnetic field intensity at the center of a circular coil is $H = \frac{I}{2R}$, whereas for a long straight conductor it is $H = \frac{I}{2\pi R}$.

Ques.2. Hysteresis loss is ________ proportional to the area under the hysteresis curve. Also, it is ________ proportional to the number of cycles of magnetization per second:

1. Directly, inversely
2. Inversely, directly
3. Directly, directly
4. Inversely, inversely

Answer.3. Directly, directly

Explanation:-

The hysteresis loss is directly proportional to the area under the hysteresis curve, i.e. the area of the hysteresis loop. It is also directly proportional to frequency, i.e. the number of cycles of magnetization per second. The hysteresis loss occurring in a material is

$W_h = \eta \, B_m^{1.6} \, f \, V$

where η is the hysteresis constant, f is the frequency (number of cycles per second), $B_m$ is the maximum magnetic flux density, and V is the volume of the core. Hence it is directly proportional to the number of cycles of magnetization per second.

Ques.3. Which of the following materials is used for the generation of ultrasonic waves by using the magnetostriction effect?

1. Paramagnetic material
2. Ferromagnetic material
3. Diamagnetic materials
4. Both paramagnetic and diamagnetic material

Answer.2. Ferromagnetic material

Explanation:-

A ferromagnetic substance or material is used for the generation of ultrasonic waves by using the magnetostriction effect.

Magnetostriction Effect:
• When a magnetic field is applied parallel to the length of a ferromagnetic rod made of a material such as iron or nickel, a small elongation or contraction occurs in its length. This is known as magnetostriction.
• The change in length depends on the intensity of the applied magnetic field and the nature of the ferromagnetic material.
• The change in length is independent of the direction of the field.

Ques.4. If a circular conductor carries a current of 'I' ampere and has a radius of 'r' metre, then the magnetizing force at the center of the coil is given by:

1. I/4r AT/m
2. I/r AT/m
3. I/2r AT/m
4. I/2r AT/Wb

Answer.3. I/2r AT/m

Explanation:-

Magnetic field strength (H) gives a quantitative measure of the strength or weakness of the magnetic field: H = B/μ₀, where B is the magnetic flux density and μ₀ is the vacuum permeability. The magnetic flux density at the center of a circular loop carrying current I is B = μ₀I/2r, so

H = B/μ₀ = I/2r AT/m

Ques.5. What is the relationship between magnetic field strength and current density?

1. ∇.H = J
2. ∇.J = H
3. ∇ × H = J
4. ∇ × J = H

Answer.3.
∇ × H = J

Explanation:-

1) Modified Kirchhoff's Current Law: $\nabla \cdot \vec J + \frac{\partial \rho }{\partial t} = 0$, where $\vec J$ is the conduction current density.

2) Modified Ampere's Law: $\nabla \times \vec H = \vec J + \frac{\partial \vec D}{\partial t}$, where $\frac{\partial \vec D}{\partial t}$ is the displacement current density.

3) Faraday's Law: $\nabla \times \vec E = -\frac{\partial \vec B}{\partial t}$

4) Gauss Law: $\nabla \cdot \vec D = \rho$

Maxwell's equations for time-varying fields are as shown (differential form | integral form | name):

$\nabla \times E = -\frac{\partial B}{\partial t}$ | $\oint_L E \cdot dl = -\frac{\partial}{\partial t}\int_S B \cdot dS$ | Faraday's law of electromagnetic induction

$\nabla \times H = J + \frac{\partial D}{\partial t}$ | $\oint_L H \cdot dl = \int_S \left(J + \frac{\partial D}{\partial t}\right) \cdot dS$ | Ampere's circuital law

$\nabla \cdot D = \rho_v$ | $\oint_S D \cdot dS = \int_V \rho_v \, dV$ | Gauss' law

$\nabla \cdot B = 0$ | $\oint_S B \cdot dS = 0$ | Gauss' law of magnetostatics (non-existence of the magnetic monopole)

Ques.6. Two identical coils A and B of 1000 turns each lie in parallel planes such that 80% of the flux produced by one coil links with the other. If a current of 5 A flowing in A produces a flux of 0.05 mWb, then the flux linking with coil B is:

1. 0.4 mWb
2. 0.04 mWb
3. 4 mWb
4. 0.004 mWb

Answer.2. 0.04 mWb

Explanation:-

Consider two coils with self-inductances L1 and L2 placed very close to each other, with N1 and N2 turns respectively. Let coil A carry current I1. Due to current I1, the flux produced is ϕ1, part of which links with the second coil. The mutual inductance between the two coils can be written as

$M = \frac{N_2 \, \phi_{12}}{I_1}$

Here, ϕ12 is the part of the flux ϕ1 linking with coil 2.

Calculation: Flux produced in coil A: ϕ1 = 0.05 mWb. As we are only required to find the flux linked with the second coil, and 80% of the flux produced by one coil links with the other:

Flux linked with coil B: ϕ12 = 0.8 × 0.05 mWb = 0.04 mWb

Ques.7. The current flowing through a coil of 2 H inductance is decreasing at a rate of 4 A/s. What will be the induced EMF in the coil?

1. 8 V
2. – 8 V
3. – 4 V
4. 4 V

Answer.1. 8 V

Explanation:-

The EMF induced in a coil or inductor is given by

$E = -N\frac{d\phi}{dt} = -L\frac{di}{dt}$

where L is the inductance and di/dt is the rate of change of current.

Given: L = 2 H, di/dt = −4 A/s (the current is decreasing). Therefore

$E = -L\frac{di}{dt} = -(2 \times -4) = 8\ \text{V}$

Ques.8. Magnetic flux will be _________ if the surface area vector of a surface is perpendicular to the magnetic field.

1. Zero
2. Unity
3. Close to maximum
4. Maximum

Answer.1. Zero

Explanation:-

The magnetic flux is defined as the number of magnetic field lines passing through a closed surface and can be expressed as

$\phi = BA\cos\theta$

where ϕ is the magnetic flux, B is the magnetic flux density, A is the area, and θ is the angle between the surface area vector and the magnetic field.

Given that θ = 90°: $\phi = BA\cos 90° = 0$

Ques.9. The force on the current-carrying conductor in a magnetic field depends upon:

(a) the flux density of the field
(b) the strength of the current
(c) the length of the conductor perpendicular to the magnetic field
(d) the directions of the field and the current

1. (a), (c) and (d) only
2. (a), (b) and (c) only
3.
(a), (b) and (d) only
4. (a), (b), (c) and (d)

Answer.4. (a), (b), (c) and (d)

Explanation:-

The force experienced by a current-carrying conductor lying in a magnetic field at an angle θ is given by

F = BIL sin θ

where B is the magnetic flux density of the field, I is the current, and L is the length of the conductor. The angle θ represents the directions of the field and the current. Therefore, the force on a current-carrying conductor in a magnetic field depends upon:

• The magnetic flux density of the field (B)
• The strength of the current (I)
• The length of the conductor perpendicular to the magnetic field (L)
• The directions of the field and the current

Ques.10. Calculate the flux density at a distance of 5 cm from a long straight circular conductor carrying a current of 250 A and placed in air.

1. 10² Wb/m²
2. 10⁻² Wb/m²
3. 10⁻³ Wb/m²
4. 10³ Wb/m²

Answer.3. 10⁻³ Wb/m²

Explanation:-

The magnetizing field strength due to a long straight circular conductor is given by

$H = \frac{I}{2\pi r}\ \text{AT/m}$

where H is the magnetizing force (AT/m), I is the current flowing in the conductor (A), and r is the distance between the current-carrying conductor and the point (m). Also B = μ₀H, where B is the magnetic flux density (Wb/m²) and μ₀ is the absolute permeability, 4π × 10⁻⁷ H/m.

Given: r = 5 cm = 5 × 10⁻² m, I = 250 A.

$B = \mu_0 H = \mu_0 \frac{I}{2\pi r} = 4\pi \times 10^{-7} \times \frac{250}{2\pi \times 5 \times 10^{-2}} = 10^{-3}$

∴ B = 10⁻³ Wb/m²
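To recap the distinction flagged in the "Common Mistake" note of Ques.1, using the numbers worked above (my summary):

$$H_{\text{centre of loop}} = \frac{I}{2R} = \frac{2}{2 \times 0.5} = 2\ \text{A/m}, \qquad H_{\text{long straight wire}} = \frac{I}{2\pi r} = \frac{250}{2\pi \times 0.05} \approx 796\ \text{A/m},$$

and multiplying the second by $\mu_0 = 4\pi \times 10^{-7}$ H/m recovers the B = 10⁻³ Wb/m² of Ques.10.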
### #lithos Posted 07 April 2013 - 11:26 AM

Look around on DeviantArt if you want to work with something cheaper. Sprite animations are somewhat-maybe-almost easy to find at around $20 a frame; it's possible to find someone that goes down to $10 or so. Even then, to get a stable running animation for a 2D platformer it's going to be around 4 frames for each direction (left/right, 8 total), a minimum of 5 for jumping, some idles (let's go with 6), and damage/death for another 4. So even going cheap, that's still around $500 very quickly for "simple mechanics".

There is of course another route of using art already made. A bunch is free for use for all kinds of work ranging from commercial to open source, and for other misc. work where the artist didn't attach a license, they're usually willing to allow use for commercial work at a pretty nice discount. Some artists even love working for open source projects and lower their prices to around $1 a frame, and maybe around $2 an hour (these are a pain to find, and are catch-22ish since you'll need proven and quality work AND you can't ask for this pricing).

_________

As weird as it is, "simpler" art like what you posted can actually end up A LOT more expensive than a larger sprite. The spriter needs to spend a lot more time getting every pixel perfect, since each pixel changes a larger amount of the piece. The amount of experience required to make your version of simple is just higher. Even using fewer frames is likely to push a price up instead of down.

When hiring an artist, just treat it like you're hiring a programmer, since many of the same types of issues can show up (I'm assuming you have programming experience). You also need to think about this: do you really want to hire someone that does not have other people/projects competing for their attention (forcing prices up)? There's possibly a reason.
## Differential equation asymptotes

Also, the reason that dy/dx being zero when y equals a certain number implies a horizontal asymptote is simple: if dy/dx is zero whenever y takes that value, then the graph is flat at that point. This means y won't change as x changes; and since y doesn't change, dy/dx stays zero. Hence, horizontal asymptote.
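A concrete example (mine, not from the post): for

$$\frac{dy}{dx} = 2 - y, \qquad y(x) = 2 + Ce^{-x} \longrightarrow 2 \ \text{as } x \to \infty,$$

the slope vanishes exactly when $y = 2$, and every solution flattens out against the horizontal asymptote $y = 2$.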
# Thread: prime number proof help

1. ## prime number proof help

Prove that there is no prime number p such that $p^{2} - 81$ is divisible by 54.

Suppose $54 \mid p^{2} - 81$; then $p^{2} - 81 = 54K$ for some K in the naturals.

How would I proceed from here? Very confused; I am supposed to come up with some sort of contradiction. Any help appreciated.

2. ## Re: prime number proof help

Look at it the "other way around": $p^2 = 54K + 81 = 9(6K + 9)$

3. ## Re: prime number proof help

Originally Posted by HallsofIvy
Look at it the "other way around": $p^2 = 54K + 81 = 9(6K + 9)$

Not sure what you mean.... so $p^{2} = 9(6K+9)$, hence 9 divides p^2? so 9 divides p?

4. ## Re: prime number proof help

If $p \equiv 9 \pmod{54}$, then $p^2 \equiv 81 \pmod{54}$. From $p - 9 = 54k$ we get $p = 9(1 + 6k)$, so p can't be prime.

5. ## Re: prime number proof help

Originally Posted by Tweety
Prove that there is no prime number p such that $p^{2} - 81$ is divisible by 54. Suppose $54 \mid p^{2} - 81$; then $p^{2} - 81 = 54K$ for some K in the naturals. How would I proceed from here?

$p^2 = 81 + 54k$, so $p^2 = 9(9 + 6k)$ and $p = 3(9 + 6k)^{1/2}$, so p can't be prime even if $(9 + 6k)^{1/2}$ is an integer.

6. ## Re: prime number proof help

Originally Posted by Tweety
Not sure what you mean.... so $p^{2} = 9(6K+9)$, hence 9 divides p^2? so 9 divides p?

No, $9 \mid p^2$ only gives you that 3 divides p. But that is enough: the only prime divisible by 3 is $p = 3$, and $3^2 - 81 = -72$ is not divisible by 54. Either way, no such prime exists.
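Putting the replies together, the complete argument (my summary) runs:

$$p^2 - 81 = 54K \;\Longrightarrow\; p^2 = 54K + 81 = 27(2K + 3) \;\Longrightarrow\; 3 \mid p^2 \;\Longrightarrow\; 3 \mid p \;\Longrightarrow\; p = 3,$$

but $3^2 - 81 = -72$ is not divisible by 54, a contradiction. Hence there is no prime p with $54 \mid p^2 - 81$.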
Using Kapacitor, multiple time series in a set can be joined and used to calculate a combined value, which can then be stored as a new time series. ADDITION RULE The sum of the numbers cannot be more precise than the least precise number. 7cm) that would also serve as a calibration? BTW, you wrote Challenge instead of Challenger. Sam will provide practical tips and techniques learned from helping hundreds of customers deploy InfluxDB and InfluxDB Enterprise. This retrieves the data for the sendraw API from the InfluxDB, finds the sum of the RequestCount for this data (i. To add a new state measurement, select the + New Measurement button and select State as the measurement type. Jmeter GUI is well known for its huge resource intensiveness. In this case, I only have one data set. InfluxDB has no external dependencies and provides a SQL-like language, listening on port 8086, with built-in time-centric functions for querying a data structure composed of measurements, series, and points. $\endgroup$ – xing_yu Feb 23 '17 at 12:00. We analyze the recovery properties for two types of recovery algorithms. 1 1999 270. Drag your second measure to the upper left of the axis legend, where Tableau will show two translucent green bars: 4. App Metrics provides various metric types to measure things such as the rate of requests, counting the number of user logins over time, measure the time taken to execute a database query, measure the amount of free memory and so on. I already have been using InfluxDB + Grafana for real time results of my JMeter test. The only things that you need to find are the sum of the values and the sum of the values squared. Drag a dimension to Columns. """ from influxdb import InfluxDBClient from influxdb import SeriesHelper # InfluxDB. Due to the extra costs and time in making multiple measurements, and the relatively good inter-day variability in adults, two or more measurements of FRC He need to be made only when necessitated by clinical or research need 9. Friedlander}, journal={IEEE Transactions on Information Theory}, year={2010}, volume={56}, pages={2516-2527} }. Method bias represents the average difference between methods across multiple samples (e. More measurements of a single event lead to greater confidence in calculating an accurate average measurement. In other words, make sure all identical InfluxDB fields matched by a Graphite query pattern, for example my_host. TV is equal to the square root of the sum of (R&R)2 and (PV)2 squared, in other words: In a GR&R report, the final results are often expressed as %EV, %AV, %R&R, and %PV,. For example, for the height 10cm, the average time is (100 + 101 + 99. The state estimate is provided through a sum of each filter’s estimateweightedby the likelihoodofthe unknown ele-ments conditioned on the measurement sequence. 3 has only two s. Optional: Click Copy to copy the markup content to the Windows clipboard. With each measurement in the series, previous measurements are no longer relevant, though they may still be used to track trends over time. Laboratory Activity Measurement and Density Background: Measurements of mass and volume are very common in the chemistry laboratory. 1 College of Mathematics and Informatics, Fujian Normal University, Fuzhou, Fujian 350117, China 2 Digital Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian. This test may be used as an adjunct to the comparison test for selecting COPC. 
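For the record, the kind of query the RequestCount example above is describing looks roughly like this in InfluxQL (the measurement name "sendraw" is my guess; only the RequestCount field is named in the text):

```
-- Sum RequestCount over the last hour (measurement name assumed)
SELECT SUM("RequestCount") FROM "sendraw" WHERE time > now() - 1h
```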
and these clocks allow to measure only the red shift effect with the necessary precision but not to measure the rate of time course (the number of the atomic oscillations per atomic second). collects sets of local measurements, Zr t = fzr 1;t;:::;z r mr t;t g, which has mr t measurements. The quantitative part of the measurement, the laboratory result, can be affected by the environmental conditions. Here we have written the parameters as a vector , to indicate that we can have multiple parameters. Abbas N, Riaz M, Does RJ (2013) Mixed exponentially weighted moving average–cumulative sum charts for process monitoring. The method is also used to obtain the uncertainty relations for multiple measurements in the presence of quantum memory. Repeatability of 0. To sum or to mean? A summative scale is one where the resulting scale score for an individual is the sum of the individual item scores. These measurements are collected by two sonobuoys each containing a wideband sonar-sensor and a narrowband sonar-sensor. Subject: [cognos-l] Multiple Measures on Cognos Crosstab Report. a vented speaker)j. EXAMPLES: Examples E2, E20, and E14 involve multiple observations made under conditions of repeatability. This specification is commonly used among sensor manufacturers and can be a useful point of comparison; however, it is a static measurement that may not represent the sensor’s performance in real world applications. 3 has only two s. and becomes 356. There is no subtraction and no decimals or fractions until the end. How to pull two measurements in Grafana from influxDB. I was able to do this with a ifnull but any idea why my formula does not work if there is one measure at 0? Here is what I have done:. It provides the service influxdb. npm install node-red-contrib-influxdb Usage. For example, for the height 10cm, the average time is (100 + 101 + 99. usage InfluxDB 0. The Measure tool is a good way to take multiple related measurements from a PDF file, which is useful in the estimation process. 1) Calculate the relative uncertainty in your measurements of each hand. Signal to Noise Instrumental Excel Assignment Instrumental methods, as all techniques involved in physical measurements, are limited by both the precision and accuracy. • We can determine the uncertainty in a general combination of sequential measurements by considering the result at any point to be a function of:. InfluxDB is a leading time series database that has seen significant traction and is known for its simplicity and ease of use, along with its ability to perform at scale. 4 inches—enough to dethrone many close rivals on the top-10 snowstorm list that were not necessarily lesser storms!. This heterochiral NR coil provides an inverse mechanocaloric effect. The default aggregation for the Grand Total is "Total using Automatic", which aggregates at a higher level. This output lets you output Metrics to InfluxDB (>= 0. Now, let us make a measure doing the same calculation but this time we will apply the ALLSELECTED() DAX function. Submit feedback using one of the following methods: Post in the InfluxData Community; In the InfluxDB UI, click Feedback in the left navigation bar. EXAMPLES: Examples E2, E20, and E14 involve multiple observations made under conditions of repeatability. Using a time series database to for aggregating testing and development tool data makes sense if you can query it after all the testing is complete to determine if a build is stable. Multiple fields (they are not called columns in InfluxDB 0. 
OBJECTIVES • Improved understanding of propagation of uncertainty • Improved understanding of statistics. Lanzillotti-Kimura,1 Haim Suchowski,1 Boubacar Kante,1 Yongshik Park,1 Xiaobo Yin,1 and Xiang Zhang1,3,*. This paper presents a submarine tracking technique by fusing multiple direction of arrival (DOA), time difference of arrival (TDOA), and target frequency measurements via an extended Kalman filter bank (EKFB). The principle of aggregation states that the sum of a set of multiple measurements is a more stable and representative estimator than any single measurement. collectd: one per plugin, several per services • better load distribution and control. npm install node-red-contrib-influxdb Usage. Submit feedback using one of the following methods: Post in the InfluxData Community; In the InfluxDB UI, click Feedback in the left navigation bar. Drag and click to position the markup in the drawing. There is no way that I know of right now to query InfluxDB. When working with multiple measures in a view, you can customize the mark type for each distinct measure. , suppose device to measure flow rate was always systematically low, all else in the measurement is correct. I have 3 different data sources to represent 3 different locations (US, Mexico, Canada). I have created another measure which sumsup the percentage of excellent + outstanding students. These objects must have the same (number of) variables. I think because of the sum – Evan R. If the measurement system results differ from the true value, the measurement system is adjusted until the results match the true value. 9) is fine as well but only if you always read all fields always. If not then as you have found it is hard to join tables with InfluxDB (and to be fair to InfluxDB, the only sensible way to join is on time fields so that aggregations are possible). Multiple Measurements and Parameters in the Unified Approach sum over discrete probabilities, or a integral in one measurement plus, optionally, a component. This is how I selected the multiple items in the image above. Delete a shard with DROP SHARD. A near field measurement technique may cope with the room influence but requires a complex summation of the sound pressure contributions generated by multiple drivers and/or ports (e. The benefits of saving multiple related metrics into one measurement are: Less clutter, easier to inspect the influxdb database Easier to create graphs on Grafana. ' This is a vague description of what this command can do, so let's dive into that and go over a few contexts where using the. Determining the accuracy of a measurement usually requires calibration of the analytical method with a known standard. 0 cm means that 67% of all repeated measurements performed by that particular observer on the same subject will be between 4. the InfluxDB API let’s you do some semblance of bulk operation per http call but each call is database-specific. However, many people use Elasticsearch for this purpose. For example I wasnt the sum of the 3 measures below. In applications, the multiple measurement vector (MMV) case is more usual, i. The critical fix that halved our query time however was splitting our measurements. For example: total your for employee X = 50 and OT at 40. The second measure would UseRelationship between ShipDate and date calendar. Doesken and Arthur Judson, CSU, 1996). ) This works fine when dev1, dev2, dev3 etc are all in the same measurement. 
It can run on a variety of platforms and is also available as a managed, fully hosted service, from several vendors. It might make sense to aggregate the readings from a plugin into a single measurement, rather than having multiple measurements with matching timestamps. For example, are the operators measuring the same parts (in which case you have a crossed design) or are they measuring different parts (in which case you have a nested design)? To illustrate, in a model where B is nested within A, multiple measurements are nested within both B and A, and there are na • nb • nw measurements, as follows: •. Data is not automatically aggregated, but it can be aggregated for analysis. Monte Carlo data association for multiple target tracking. This includes hardware and a… Slideshare uses cookies to improve functionality and performance, and to provide you with relevant advertising. It has to only be used as a last-ditch option when other possibilities will not be realistic. Please note that while attending the MA1 evening lecture is optional, the MA1 assignment is NOT optional and must be turned in before the deadline for your division for credit. usage InfluxDB 0. In applications, the multiple measurement vector (MMV) case is more usual, i. You really only add errors when you have measurements of different items that are relatable, miles from NY to SF (say 6000 +- 100) and then SF to LA (say 600 +- 10), etc then you use your above formula. Note Writes to InfluxDB 2. How to measure LDO noise 5 July 2015 Since the cutoff frequency is the point where the filter has already started attenuating the signal by 3 dB, select a cutoff frequency that is approximately an order of magnitude lower than the lowest frequency you will be measuring. Here, the data misfit is formulated based on squared differences between the predicted and interpreted flow profiles, such that multiple measurements may be collected in a single borehole interval. This greater representation occurs The principle of aggregation in psychobiological correlational research: An example from the open-field test | SpringerLink. 8kg, 80kg, 80. And this guide will focus on the Text Table. tags and fields are effectively columns in the table. How to pull two measurements in Grafana from influxDB. , total revenue to date, or total number of payment methods. Jmeter GUI is well known for its huge resource intensiveness. Secondary measurements are defined as auxiliary measurements that augment information provided by a main primary measurement function. How can I specify an Influxdb query that would return the results as a single data view? (e. Using Kapacitor, multiple time series in a set can be joined and used to calculate a combined value, which can then be stored as a new time series. The sum of the deviations of each data value from this measure of central location will always be 0. For the Formula, after the = sign start typing the word SUM. If you feel like applying some statistics, you can calculate the standard deviation of the measurements. We analyze the recovery properties for two types of recovery algorithms. She collects data over 30 x values, giving her 300 total trials. It is sometimes best to make several copies of your base sheet and record triangulated measurements separate from baseline and grid measurements. For the calculation of the 95% Confidence Interval, see Bland 2006. In this example, the measurement is the number of tweets, the tags are the type of tweet (i. 
An Introduction to Mobile Robotics Mobile robotics cover robots that roll, walk, fly or swim. It outputs the sum of the all values for the given field. The predicted difference on average between the measurement and the true value. 2 in InfluxDB Admin then I get one resultset for each of the measurements with an empty value but I would want to have one with a sum. Consider a class in which half the students score 100 and the other half score 0 on a test. , the length of the room). Percent Difference Over Time Across Multiple Measurements (10. Posted by MGram (Application Manager) on Jun 3 at 10:31 AM I would like to create a crosstab report with multiple measures. The difference is that with InfluxDB you can have millions of measurements, you don’t have to define schemas up front, and null values aren’t. Query InfluxDB through the /api/v2/query endpoint. The precision and accuracy of a measurement are ultimately limited by two factors imposed by nature- matter has thermal fluctuations and charge, and light. For example. Once you have the total size in square feet, meters or yards you need to convert this into acres. 0 alpha! Feedback and bug reports are welcome and encouraged both for InfluxDB and this documentation. With each measurement in the series, previous measurements are no longer relevant, though they may still be used to track trends over time. The eye diagram is a general-purpose tool for analyzing serial digital signals. Chapter 15: Data Processing and Fundamental Data Analysis Multiple Choice />1. The result is returned in msg. Provide the details for Display Name, Field Name, and Values of the state. by Dilip Shah. """ from influxdb import InfluxDBClient from influxdb import SeriesHelper # InfluxDB. 1 1999 270. Measurements can persist as markups, allowing for processing and summarization through the Markups list, which is also useful for estimation and takeoffs, or be tempora. You can use measuring tools to make linear, angular, and area measurements, and to automatically measure the shortest distance between two selected objects. You do not really use significant digits. To combine aggregated measurements under different tags is a bit tricky. I currently have Grafana setup and it makes very pretty graphs, however I am trying to figure out how much power my entire lab utilizes. When querying identical fields from multiple measurements InfluxDB allows only one aggregation function to be used for all identical fields in the query. JMeter has created those measurements in the ‘jmeter‘ database. General considerations. Median Weegy: Mean - measure of central tendency will the sum of the deviations always be zero. Thus, accuracy. For the calculation of the 95% Confidence Interval, see Bland 2006. influxdb]] # ## Works with InfluxDB debug endpoints out of the box, # ## but other services can use this format too. So monitoring results with GUI become very very non realistic on massive load. You can also sort, group and filter by any measurement, tag, field, or value. precise measurement but it could be far off the correct answer. Click Stop when you're finished taking multiple measurements. The method is also used to obtain the uncertainty relations for multiple measurements in the presence of quantum memory. Estimating Sample Sizes for Repeated Measurement Designs John E. The first is the least-count of the digital volt meter in the measurement of X with a maximum bound of. The resulting inferred slow slip moments scale with duration and inter-event time like ordinary earthquakes. 
The tests that are failing are due to the way carbonara and influx handle the retention and multiple granularities differently. Drag your cursor down column B to select multiple cells. The only things that you need to find are the sum of the values and the sum of the values squared. Sum multiple columns based on single criteria with Kutools for Excel If you have Kutools for Excel , with its Advanced Combined Rows feature, you can sum corresponding values based on a criteria. , the length of the room). on average from multiple measurements of the quantity even though no single measurement may give this value. A note of caution on assuming random and independent uncertainties: If we use one instrument to measure multiple quantities, we cannot be sure that the errors in the quantities are independent. As a hack: The only. Here we will go through an example of how this works. The analytical balance is used to measure mass, and the graduated cylinder, pipette, or burette are. You can create some of the most powerful data analysis solutions in Power BI Desktop by using measures. Length Measurement. In some applications, the SMV model extend to the multiple measurement vector (MMV) model, in which the signal consists of a set of jointly sparse vectors. The multiple measurements are stored in the Material Master table, so if the customer wishes to report items in Cases, Pallets, Cubic meters, etc they can do so. For example, in a room that is 10 feet wide and 18 feet long, the area would be 10 multiplied by 18 to obtain 180 square feet. The joint-sparse recovery problem aims to recover, from sets of compressed measurements, unknown sparse matrices with nonzero entries restricted to a subset of rows. REST allows multiple measurements to be created by sending multiple measurements in one call. Looking for some Jedi help on summing a field across multiple data sources. This is especially interesting when you want to monitor proportions. Some appraisers will measure square footage with a good old. It supports multiple backend time-series databases including InluxDB. Area is a measure of how much space there is inside a shape. For each of the 30 x values, she averages the 10 y values and she calculates the standard deviation. InfluxDB Password:€Password for InfluxDB user. General considerations. I found on documentation that you can write multiple matching measurements with :MEASUREMENT keyword in INTO. It supports multiple backend time-series databases including InluxDB. The tests that are failing are due to the way carbonara and influx handle the retention and multiple granularities differently. Sum multiple columns based on single criteria with Kutools for Excel. We wrote software to build these automatically for us as we send new measurements into InfluxDB. 4, obviously. : Determination of the strong coupling constant from multiple experiments 2 Experimental data The rst measurement of the inclusive jet cross section has been performed in 1982 by the UA2 Collaboration at the SppS collider at a centre-of-mass energy of 540GeV [8]. Hi @Anaisdg, Thanks for the answer. The Length measurement tool places specialized markups that calculate a single, linear measurement. This gives you an output with the description of the query and multiple groupings of measurements (which may vary depending on the database). Generate measurement data to the nearest whole unit, and display the data in a line plot. To combine aggregated measurements under different tags is a bit tricky. 
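As a sketch of the sub-query approach alluded to here (requires InfluxDB 1.2 or later; the measurement, field, and tag names are placeholders of mine): the inner query aggregates per tag, and the outer query collapses those per-tag series into one combined value.

```
-- Per-tag means first (inner query), then one combined sum (outer query)
SELECT SUM("mean_value") FROM (
  SELECT MEAN("value") AS "mean_value"
  FROM "cpu_load"
  WHERE time > now() - 1h
  GROUP BY "host"
)
```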
When InfluxDB encounters a duplicate point it silently overwrites the previous point. More measurements of a single event lead to greater confidence in calculating an accurate average measurement. Q: Is there any way to drop specific field keys from a measurement? I see that there’s DROP SERIES, DELETE, and DROP MEASUREMENT, but I can’t find anything about dropping field keys. However, a single InfluxDB instance can host multiple databases. A polygon has 7 sides. If the count shows 0 measurements, credentials are correct but database may be wrong. The classic 1. In this case, each measurement sent via REST is counted individually. 3 has only two s. Multiple fields (they are not called columns in InfluxDB 0. * have the same aggregation function configured. This paper. I have done it with two data sets. The hostname of the machine where the influxd instance is executed and the port used by the HTTP API. 8kg, and 80. precise measurement but it could be far off the correct answer. Optional - All tag key-value pairs for the point. Cognos Report Studio Interview Question IBM Cognos 10 Report Studio Interview Questions. Top Methods for Measuring 5 Common Signal-Corrupting Distortions To avoid the performance challenges caused by nonlinearities and distortion, modern measurement techniques can help engineers understand the device mechanics that cause them. Data in InfluxDB is organized by time series, each of them contains a measured value. Originally proposed by Whiting1 and since. All you need to do is specify the property name and the type of measurement to perform. I have a Company (A, B) which can link the data sources. Finally, one of the best things you can do to deal with measurement errors, especially systematic errors, is to use multiple measures of the same construct. select SUM(value) from /measurment1|measurment2/ where time > now() - 60m and host = 'hostname' limit 2; But is it possible to get value of SUM(measurment1+measurment2) , so that I see only o/p. If not then as you have found it is hard to join tables with InfluxDB (and to be fair to InfluxDB, the only sensible way to join is on time fields so that aggregations are possible). ‘Combined standard uncertainty: standard uncertainty of the result of a measurement when that result is obtained from the values of a number of other quantities, equal to the positive square root of a sum of terms, the terms being the variances or covariances of these other quantities weighted according to how the measurement result. With each measurement in the series, previous measurements are no longer relevant, though they may still be used to track trends over time. REST allows multiple measurements to be created by sending multiple measurements in one call. And this guide will focus on the Text Table. It is sometimes best to make several copies of your base sheet and record triangulated measurements separate from baseline and grid measurements. longitudinal condenses the multiple measurements per time point using an arbitrary function (e. Currently, InfluxDB does not support regular expressions with DROP MEASUREMENTS. This multilevel modelling approach will be re-ferred to as structural modeling to explore differences in longitudinal analyses with sum-scores and IRT-based scores as estimates for the latent variable. @Apollon77 I have the same problem with 1. In this blog post I will walk through an Influx Provided test script and modify it for InfluxDB 2. 
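One commonly suggested answer to the question just quoted is to wrap the multi-measurement select in a sub-query so that a single outer SUM runs over the merged points (InfluxDB 1.2 or later; whether the outer query truly merges across the two measurements varies by version, and if it still returns one sum per measurement, cross-measurement math needs Kapacitor or Flux instead, as noted earlier). The measurement names below mirror the spelling in the quoted question:

```
SELECT SUM("value") FROM (
  SELECT "value" FROM /measurment1|measurment2/
  WHERE time > now() - 60m AND "host" = 'hostname'
)
```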
Total Sales = SUM(SampleData[Sales])

Note: in the equation above, everything before the equals sign is the name of the measure. Consider a typical example where you have an Orders table with several dates, such as the Order Date: a date calendar can drive the dates, while the measure totals a sum of values based on different date columns.

This guide shows how to use a prepared data generator in Python to combine two generated time series into a new calculated measurement, then store that measurement back into InfluxDB using Kapacitor. The node-red-contrib-influxdb package provides nodes to write and query data from an InfluxDB time series database:

npm install node-red-contrib-influxdb

InfluxDB is the public interface to run queries against your database. A database contains multiple measurements, and a point can carry multiple fields (they are not called columns in InfluxDB 0.9); storing several fields per point is fine as long as you always read all fields together. Tags are optional: a point may carry any number of tag key-value pairs, with multiple pairs comma-delimited in line protocol. Typical MEDIAN queries look like this:

SELECT MEDIAN(field_key) FROM measurement
SELECT MEDIAN(field_key) FROM measurement WHERE time > now() - 1d GROUP BY time(10m)
SELECT MEDIAN(field_key) FROM measurement WHERE time > 1434059627s GROUP BY tag_key

The key to some otherwise awkward queries is to use sub-queries (InfluxDB 1.2+). So instead of

SELECT ... FROM ... WHERE type =~ /^(1|2)$/ GROUP BY ...

you can run the filtering step as an inner query. You can delete a shard with DROP SHARD. Start the server with systemctl start influxdb if you installed InfluxDB from an official Debian or RPM package on a distro with systemd. The host is the machine where the influxd instance runs, and the port used by the HTTP API is the one configured through the bind-address option in the [http] section of influxdb.conf. You can also add custom fields to InfluxDB measurements to make future searches faster. InfluxDB possesses a distributed architecture in which multiple nodes can handle storage and execute queries simultaneously; with InfluxDB 2.0 you will be able to store reference data in other places and join it with time series data in InfluxDB at query time. Until then, it is hard to join tables with InfluxDB (and to be fair to InfluxDB, the only sensible way to join is on time fields, so that aggregations are possible). Querying and displaying log data from InfluxDB is available via Grafana's Explore view. In Grafana, users can select measurements from a dropdown list, but if there are many measurements, usability will suffer; with a template variable, changing the value in the dropdown at the top of the dashboard changes your panels' metric queries to reflect the new value. The functional and performance comparisons published for CrateDB are meant to help you decide whether CrateDB is right for your time series project.

You have always been able to aggregate individual measurements on the client side with solutions like StatsD or the aggregator in the AppOptics-metrics Ruby gem. Service-side aggregation allows you to aggregate multiple measurements sent to the AppOptics API into a single complex measurement; each measurement created in a single MQTT request is counted.

Multiple model adaptive estimation (MMAE) uses several extended Kalman filters (EKFs) running in parallel, each representing a hypothesis of the actual system; the state estimate is provided through a sum of each filter's estimate, weighted by the likelihood of the unknown elements conditioned on the measurement sequence. In the same spirit, a submarine tracking technique can fuse multiple direction of arrival (DOA), time difference of arrival (TDOA), and target frequency measurements via an extended Kalman filter bank (EKFB).

On measurement uncertainty: if a measurement is raised to a power, for example squared or cubed, then the percentage uncertainty is multiplied by that power to give the total percentage uncertainty. In an effort to comply with accreditation requirements, and because scientific measurements in general are subject to variability, a budget estimating the uncertainty of measurement for alcohol and quantitative drug analysis may be presented. If you were to measure the period of a pendulum many times with a stopwatch, you would find that your measurements were not always the same; making the same measurement several times under identical circumstances shows at a glance how much the result varies. Positive and negative differences from the mean both count in the sum of squared deviations, so the greater the spread of results, the larger the standard deviation; we divide by one less than the total number of measurements to compensate for the fact that the mean must be estimated from the same data. For independent measurements $x_i$ with variances $v_i$, the variance of the combined estimator $\sum_i w_i x_i$ with $\sum_i w_i = 1$ is $\sum_i w_i^2 v_i$. To compare the means of measurements for more than two levels of a categorical variable, one-way ANOVA has to be used; repeated-measures designs involve before-after measurements or multiple measurements across time on the same set of subjects. A test of linearity is available for bivariate regressions when the data contain multiple measurements of the dependent variable for each value of the independent variable (regression with replicates).

In Tableau, drag your second measure to the upper left of the axis legend, where Tableau will show two translucent green bars, and drag Measure Names to Color on the Marks card; to add a table calculation, right-click the Number of Records measure in the Measure Values shelf and choose Add Table Calculation.
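Since the later sections of this document use R, here is a minimal sketch of the write path described above in R rather than Python: it posts one point of a derived measurement to a local InfluxDB 1.x instance over the HTTP line-protocol /write endpoint. The host, the database name mydb, and the measurement name combined_series are illustrative assumptions, not values taken from the text; Kapacitor or a client library would normally handle this, the raw POST just makes the line-protocol format visible.

# Sketch: write one point of a derived measurement to InfluxDB 1.x
# over its HTTP /write endpoint (line protocol).
library(httr)

a <- 1.2; b <- 3.4                           # two source series values
point <- sprintf("combined_series,source=generator value=%f %.0f",
                 a + b,                       # the calculated measurement
                 as.numeric(Sys.time()) * 1e9)  # approximate ns timestamp
resp <- POST("http://localhost:8086/write",
             query = list(db = "mydb", precision = "ns"),
             body = point)
status_code(resp)                             # 204 on success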
• ### Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy (1706.07002)

Oct. 19, 2018 cs.CV

Objective: Surgical data science is evolving into a research field that aims to observe everything occurring within and around the treatment process to provide situation-aware data-driven assistance. In the context of endoscopic video analysis, the accurate classification of organs in the field of view of the camera proffers a technical challenge. Herein, we propose a new approach to anatomical structure classification and image tagging that features an intrinsic measure of confidence to estimate its own performance with high reliability and which can be applied to both RGB and multispectral imaging (MI) data. Methods: Organ recognition is performed using a superpixel classification strategy based on textural and reflectance information. Classification confidence is estimated by analyzing the dispersion of class probabilities. Assessment of the proposed technology is performed through a comprehensive in vivo study with seven pigs. Results: When applied to image tagging, mean accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB) and 96% (MI) with the confidence measure. Conclusion: Results showed that the confidence measure had a significant influence on the classification accuracy, and MI data are better suited for anatomical structure labeling than RGB data. Significance: This work significantly enhances the state of art in automatic labeling of endoscopic videos by introducing the use of the confidence metric, and by being the first study to use MI data for in vivo laparoscopic tissue classification. The data of our experiments will be released as the first in vivo MI dataset upon publication of this paper.

• ### Long-term mutual phase locking of picosecond pulse pairs generated by a semiconductor nanowire laser (1603.02169)

The ability to generate phase-stabilised trains of ultrafast laser pulses by mode-locking underpins photonics research in fields such as precision metrology and spectroscopy. However, the complexity of conventional mode-locked laser systems, combined with the need for a mechanism to induce active or passive phase locking between resonator modes, has hindered their realisation at the nanoscale. Here, we demonstrate that GaAs-AlGaAs nanowire lasers are capable of emitting pairs of phase-locked picosecond laser pulses when subject to non-resonant pulsed optical excitation with a repetition frequency up to ~200GHz. By probing the two-pulse interference that emerges within the homogeneously broadened laser emission, we show that the optical phase is preserved over timescales extending beyond ~30ps, much longer than the emitted laser pulse duration (~2ps). Simulations performed by solving the optical Bloch equations produce good quantitative agreement with experiments, revealing how the phase information is stored in the gain medium close to transparency. Our results open the way to applications such as on-chip, ultra-sensitive Ramsey comb spectroscopy.

• This work is on the Physics of the B Factories. Part A of this book contains a brief description of the SLAC and KEK B Factories as well as their detectors, BaBar and Belle, and data taking related issues. Part B discusses tools and methods used by the experiments in order to obtain results. The results themselves can be found in Part C. Please note that version 3 on the archive is the auxiliary version of the Physics of the B Factories book.
This uses the notation alpha, beta, gamma for the angles of the Unitarity Triangle. The nominal version uses the notation phi_1, phi_2 and phi_3. Please cite this work as Eur. Phys. J. C74 (2014) 3026.

• ### Tunneling Breakdown of a Strongly Correlated Insulating State in VO$_2$ Induced by Intense Multi-Terahertz Excitation (1505.01273)

May 8, 2015 cond-mat.str-el

We directly trace the near- and mid-infrared transmission change of a VO$_2$ thin film during an ultrafast insulator-to-metal transition triggered by high-field multi-terahertz transients. Non-thermal switching into a metastable metallic state is governed solely by the amplitude of the applied terahertz field. In contrast to resonant excitation below the threshold fluence, no signatures of excitonic self-trapping are observed. Our findings are consistent with the generation of spatially separated charge pairs and a cooperative transition into a delocalized metallic state by THz field-induced tunneling. The tunneling process is a condensed-matter analogue of the Schwinger effect in nonlinear quantum electrodynamics. We find good agreement with the pair production formula by replacing the Compton wavelength with an electronic correlation length of 2.1 $\AA$.

• ### Non-perturbative Interband Response of InSb Driven Off-resonantly by Few-cycle Electromagnetic Transients (1208.5863)

Intense multi-THz pulses are used to study the coherent nonlinear response of bulk InSb by means of field-resolved four-wave mixing spectroscopy. At amplitudes above 5 MV/cm the signals show a clear temporal substructure which is unexpected in perturbative nonlinear optics. Simulations based on a two-level quantum system demonstrate that in spite of the strongly off-resonant character of the excitation the high-field pulses drive the interband resonances into a non-perturbative regime of Rabi flopping.
# Largest possible value | AMC-10A, 2004 | Problem 15

Try this beautiful problem from Number Theory based on the largest possible value, from AMC-10A, 2004. You may use the sequential hints to solve the problem.

## Largest Possible Value – AMC-10A, 2004 – Problem 15

Given that $-4 \leq x \leq -2$ and $2 \leq y \leq 4$, what is the largest possible value of $\frac{x+y}{x}$?

• $\frac{-1}{2}$
• $\frac{1}{6}$
• $\frac{1}{2}$
• $\frac{1}{4}$
• $\frac{1}{9}$

### Key Concepts

Number system, Inequality, divisibility

Answer: $\frac{1}{2}$

AMC-10A (2004) Problem 15

Pre College Mathematics

## Try with Hints

The given expression is $\frac{x+y}{x}=1+\frac{y}{x}$. Now $-4 \leq x \leq -2$ and $2 \leq y \leq 4$, so we can say that $\frac{y}{x} \leq 0$.

can you finish the problem……..

Therefore, the expression $1+\frac{y}{x}$ will be maximized when $\left|\frac{y}{x}\right|$ is minimized, which occurs when $|x|$ is the largest and $|y|$ is the smallest.

can you finish the problem……..

Therefore, at the point $(x,y)=(-4,2)$, $\frac{x+y}{x}=1-\frac{1}{2}=\frac{1}{2}$.
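As a quick check (not in the original hints), $\frac{x+y}{x} = 1 + \frac{y}{x}$ is decreasing in both variables on the given box (its partial derivatives $-\frac{y}{x^2}$ and $\frac{1}{x}$ are both negative there), so the maximum sits at the corner with the smallest $x$ and the smallest $y$. Evaluating all four corners confirms this:

$(x,y)=(-4,2):\ \tfrac{-2}{-4}=\tfrac{1}{2}, \qquad (x,y)=(-4,4):\ \tfrac{0}{-4}=0, \qquad (x,y)=(-2,2):\ \tfrac{0}{-2}=0, \qquad (x,y)=(-2,4):\ \tfrac{2}{-2}=-1.$

The maximum over the four corners is indeed $\frac{1}{2}$.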
P. 3802. What voltage is applied to a parallel-plate capacitor in vacuum, whose plates are 1 cm apart, if the electrons leaving the negative plate strike the positive plate with a speed equal to 60% of the speed of light? Solve the problem with both relativistic and non-relativistic methods. (4 points)
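A sketch of the two calculations (added here as an illustration; it uses the electron rest energy $m_ec^2 \approx 511\ \text{keV}$, and the plate separation does not enter, it would only be needed for the field strength):

Non-relativistic: $eU = \frac{1}{2}m_ev^2$, so $U = \frac{1}{2}(0.6)^2\,\frac{m_ec^2}{e} \approx 0.18 \cdot 511\ \text{kV} \approx 92\ \text{kV}$.

Relativistic: $eU = (\gamma - 1)\,m_ec^2$ with $\gamma = \frac{1}{\sqrt{1-0.6^2}} = 1.25$, so $U \approx 0.25 \cdot 511\ \text{kV} \approx 128\ \text{kV}$.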
# A family consisting of one mother, one father, two daughters

### Question (pratikbais)

A family consisting of one mother, one father, two daughters and a son is taking a road trip in a sedan. The sedan has two front seats and three back seats. If one of the parents must drive and the two daughters refuse to sit next to each other, how many possible seating arrangements are there?

A. 28
B. 32
C. 48
D. 60
E. 120

### Answer (Bunuel, Math Expert)

Approach #1:

The sisters can sit separately in two ways:

1. One of them is on the front seat (2 ways to choose which sister). The others, including the second sister, can be arranged in 2 (driver's seat) * 3! (arrangements of three on the back seat) = 12 ways. Total for this case: 2*12 = 24.

2. Both sit by the windows on the back seat (2 ways: {S1, _, S2} or {S2, _, S1}). The others can be arranged in 2 (driver's seat) * 2 (front seat) * 1 (the one person left sits between the sisters) = 4 ways. Total for this case: 2*4 = 8.

Total = 24 + 8 = 32.

Approach #2:

Total # of arrangements: driver's seat: 2 (either mother or father); front seat: 4 (any of the 4 family members left); back seat: 3! (arranging the other 3 family members). So the total # of arrangements is 2*4*3! = 48.

# of arrangements with the sisters sitting together: the sisters can sit together only on the back seat, either by the left window or by the right window (2 options), and either as {S1,S2} or {S2,S1} (2 options), so 2*2 = 4; driver's seat: 2 (either mother or father); front seat: 2 (5 people - 2 sisters on the back seat - 1 driver = 2); remaining back seat: 1 (the last family member left). So the # of arrangements with the sisters sitting together is 4*2*2*1 = 16.

48 - 16 = 32.

### Reply (Intern)

Steps to solve this problem. People: 1M, 1F, 1S, 2D (D1, D2).

1. Choose a parent to drive the sedan: 2 ways (one person seated, 4 remaining).
2. Arrangements in which the daughters sit apart = total arrangements of the 4 remaining people - arrangements with D1 and D2 glued together = 4! - 4*2*1. The glued pair can occupy either of the 2 adjacent position pairs among the 3 back seats, and D1D2 and D2D1 are different arrangements, so 4 placements in total; the remaining two people sit in 2*1 ways.
3. Total = 2*(4! - 4*2*1) = 2*(24 - 8) = 2*16 = 32.

### Reply (Senior Manager)

Case 1: Both daughters in the back seat but seated separately (by the two windows).
Case 2: One daughter in the front seat and the other in the middle of the back seat.
Case 3: One daughter in the front seat and the other in the right back seat.
Case 4: One daughter in the front seat and the other in the left back seat.

That's 4 position patterns, and the two daughters are interchangeable, giving 8. There are 2 ways to select the driving parent and 2 ways to seat the son and the other parent in the remaining seats:

$$4*2*2*2 = 32$$

### Follow-up (oss198)

Could someone tell me if my approach is correct?

1) Arrangements for the driver's seat: 2 (mother or father).
2) Total arrangements of the four others: 4!.
3) Ways with the 2 daughters next to each other: 4 (left window D1-D2 and D2-D1, right window D1-D2 and D2-D1) * 2 (placing the two remaining people).

So: 2*(4! - 4*2) = 32.

### Reply (Bunuel)

Yes, that's correct.

### Follow-up (alphonsa)

Sorry Bunuel, but I didn't get the step in red in Approach #2. Why the 4*2*2*1?

### Reply (Bunuel)

4 ways to seat the sisters together; 2 ways to fill the driver's seat (mother or father); 2 ways to fill the front seat (3 people are already placed, so 2 are left); 1 way to fill the remaining back seat. Total = 4*2*2*1.

Hope it's clear.
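The count is small enough to verify by brute force; the sketch below (in R, the language used later in this document, with a small hand-rolled perms() helper) enumerates all 2 * 4! seatings and discards those with the daughters adjacent on the back seat:

# Brute-force check: 2 drivers x 4! seatings of the rest, minus
# those with the two daughters adjacent on the back seat.
perms <- function(v) {                       # all permutations of v
  if (length(v) <= 1) return(matrix(v, nrow = 1))
  do.call(rbind, lapply(seq_along(v),
                        function(i) cbind(v[i], perms(v[-i]))))
}

people <- c("M", "F", "D1", "D2", "S")
count <- 0
for (driver in c("M", "F")) {
  p <- perms(setdiff(people, driver))        # front, back-L, back-M, back-R
  for (i in seq_len(nrow(p))) {
    back <- p[i, 2:4]
    d1 <- match("D1", back); d2 <- match("D2", back)
    adjacent <- !is.na(d1) && !is.na(d2) && abs(d1 - d2) == 1
    if (!adjacent) count <- count + 1
  }
}
count                                        # 32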
# Algebraic Logic by H. Andreka, J.D. Monk, I. Nemeti (eds.)

By H. Andreka, J.D. Monk, I. Nemeti (eds.)

The János Bolyai Mathematical Society held an Algebraic Logic Colloquium between 8-14 August, 1988, in Budapest. An introductory series of lectures on cylindric and relation algebras was given by Roger D. Maddux. The present volume is not restricted to papers presented at the conference. Instead, it is aimed at providing the reader with a relatively coherent reading on Algebraic Logic (AL), with an emphasis on current research. We could not cover the whole of AL, one of the most important omissions being that the category-theoretic versions of AL were treated only in their connections with Tarskian (or more traditional) AL. The present volume was prepared in collaboration with the editors of the proceedings of the Ames conference on AL (Springer Lecture Notes in Computer Science Vol. 425, 1990), and a volume of Studia Logica devoted to AL which was scheduled to go to press in the fall of 1990. Several of the papers originally submitted to the present volume appear in one of the latter.

Similar algebra & trigonometry books

Approaches to Algebra: Perspectives for Research and Teaching (Mathematics Education Library)

In the international research community, the teaching and learning of algebra have received a great deal of interest. The difficulties encountered by students in school algebra reveal the misunderstandings that arise in learning at different school levels and raise important questions about the functioning of algebraic reasoning, its characteristics, and the situations conducive to its favorable development.

Álgebra Moderna

This classic, written by young instructors who became giants in their field, has shaped the understanding of modern algebra for generations of mathematicians and remains a valuable reference and text for self study and college courses.

Generative Complexity in Algebra

The G-spectrum or generative complexity of a class $\mathcal{C}$ of algebraic structures is the function $\mathrm{G}_\mathcal{C}(k)$ that counts the number of non-isomorphic models in $\mathcal{C}$ that are generated by at most $k$ elements. We consider the behavior of $\mathrm{G}_\mathcal{C}(k)$ when $\mathcal{C}$ is a locally finite equational class (variety) of algebras and $k$ is finite.
Context-free grammar for $A \circ B$

If $A$ and $B$ are regular languages, what is a context-free grammar for the following language?

$$A \circ B = \{ xy \mid x \in A \text{ and } y \in B \text{ and } |x|=|y| \}$$

• Hint: Modify a grammar for $\{ a^nb^n : n \geq 0 \}$. May 17 '20 at 7:20

2 Answers

For simplicity, we can assume that neither $A$ nor $B$ contains the empty string. Otherwise, we can either add a simple rule to our final grammar so that it generates the empty string, or do nothing, so that our final grammar still does not generate the empty string.

Since $A$ is a regular language that does not contain the empty string, we can have $(N_A,\Sigma_A, P_A, S_A)$, a restricted right-linear grammar for $A$, where each rule in $P_A$ is of the form $U\to aX$ or $U\to a$, where $U, X\in N_A$ and $a\in\Sigma_A$.

Since $B$ is a regular language that does not contain the empty string, we can have $(N_B,\Sigma_B, P_B, S_B)$, a restricted left-linear grammar for $B$, where each rule in $P_B$ is of the form $V\to Yb$ or $V\to b$, where $V,Y\in N_B$ and $b\in\Sigma_B$.

Construct the grammar $\left(N_A\times N_B, \Sigma_A\cup\Sigma_B, P, (S_A,S_B)\right)$, where the set of production rules $P$ is

$$\{(U,V)\to a(X,Y)b: U\to aX \in P_A\ \land\ V\to Yb \in P_B\}\\ \cup\{(U,V)\to ab: U\to a\in P_A\ \land\ V\to b \in P_B\}.$$

Basically, the grammar rules generate a string by adding a terminal on the left side as in $A$ (I am referring to $U\to aX$) as well as a terminal on the right side as in $B$ (I am referring to $V\to Yb$) at the same time. At the final step, the non-terminal in the middle is replaced by $ab$ (I am referring to $(U,V)\to ab$).

It should not be difficult to verify that the constructed grammar is a context-free grammar for $A\circ B$. (In fact, it is a linear grammar.)

• @Mohammad I just simplified this answer. In case "restricted right linear grammar" or "restricted left linear grammar" is not clear, here is a lecture note. They can be derived from right-linear or left-linear grammars straightforwardly. May 18 '20 at 15:27
• A restricted right linear grammar is essentially a DFA. A restricted left linear grammar is a DFA for the reversed language. May 18 '20 at 16:02
• @Mohammad As Yuval said, a restricted right linear grammar corresponds to a DFA. A non-terminal corresponds to a state in the DFA. The start symbol corresponds to the start state. A production rule corresponds to a transition rule in the DFA. May 18 '20 at 16:11
• Here is a similar exercise that can be solved similarly. Construct a context-free grammar for $A \circ B = \{ xy \mid x \in A \text{ and } y \in B \text{ and } |x|=2|y| \}$, where $A$ and $B$ are two regular languages. May 18 '20 at 16:16

Suppose that the DFAs for $A$ and $B$ are $D_A$ and $D_B$ respectively. We will construct a pushdown automaton (PDA) for the given language $A \circ B$ by combining $D_A$ and $D_B$ in a particular manner. Modify the transitions of $D_A$ so that on reading any letter it pushes a symbol $X$ onto the stack. Join all the final states of $D_A$ to the initial state of $D_B$ with epsilon transitions. Modify all the transitions of $D_B$ to pop $X$ from the stack. The accepting condition is that after reading a word we reach one of the final states of $D_B$ and the stack is empty. It should be quite easy to convince yourself that this accepts the language $A \circ B$ as required.
Now, we can apply the standard method to convert the PDA to a grammar, which gives the required grammar.

• Thanks for your response, but is there any way I can solve it without using a PDA? My teacher hasn't taught PDAs yet, so there should be another way. May 17 '20 at 10:12
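To make the first construction concrete, here is a small worked instance (an added illustration, using the notation from the first answer): take $A = \{a\}^+$ with the restricted right-linear grammar $S_A \to aS_A \mid a$, and $B = \{b\}^+$ with the restricted left-linear grammar $S_B \to S_Bb \mid b$. The construction then yields exactly two rules,

$$(S_A,S_B) \to a(S_A,S_B)b \mid ab,$$

which is the classic grammar for $\{a^nb^n : n \geq 1\}$, matching the hint under the question.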
### Governance, Strategy and Risk
##### 21 November 2018, 8:30 AM
Successful financial managers must thoroughly understand governance and strategy, how to relate them to their reporting and control responsibilities, and how to add value to the strategic process. Risk management is a critical element of planning, and internal control provides ...
8h · Lecturer: Jeffrey Sherman

### Surviving Disruption
##### 28 November 2018, 8:30 AM
Drones, bitcoin, cannabis, Trump. We see upheaval and change all around us. An app has destroyed the value of $200,000 taxi licenses – and is on its way to destroying an entire industry. Uber might then be destroyed by blockchain technology. And maybe another Distributed Ledger Technology is ...
8h · Lecturer: Jeffrey Sherman · $415

### Internal Control Refresher
##### 29 November 2018, 8:30 AM
It's time to reconsider internal control. It is no longer the domain of accountants and auditors – CEOs and boards are focused on it. Everyone assumes that you are an expert, but when was the last time you looked at your organization's internal controls? This fast-paced seminar ...
8h · Lecturer: Jeffrey Sherman · $415
# Correlation and Regression

## Visualizing two variables

library(dplyr)
library(ggplot2)
library(gridExtra)
library(openintro)
knitr::opts_chunk$set(cache = TRUE)

• Using the ncbirths dataset, make a scatterplot using ggplot() to illustrate how the birth weight of these babies varies according to the number of weeks of gestation.

str(ncbirths)

## 'data.frame': 1000 obs. of 13 variables:
## $ fage : int NA NA 19 21 NA NA 18 17 NA 20 ...
## $ mage : int 13 14 15 15 15 15 15 15 16 16 ...
## $ mature : Factor w/ 2 levels "mature mom","younger mom": 2 2 2 2 2 2 2 2 2 2 ...
## $ weeks : int 39 42 37 41 39 38 37 35 38 37 ...
## $ premie : Factor w/ 2 levels "full term","premie": 1 1 1 1 1 1 1 2 1 1 ...
## $ visits : int 10 15 11 6 9 19 12 5 9 13 ...
## $ marital : Factor w/ 2 levels "married","not married": 1 1 1 1 1 1 1 1 1 1 ...
## $ gained : int 38 20 38 34 27 22 76 15 NA 52 ...
## $ weight : num 7.63 7.88 6.63 8 6.38 5.38 8.44 4.69 8.81 6.94 ...
## $ lowbirthweight: Factor w/ 2 levels "low","not low": 2 2 2 2 2 1 2 1 2 2 ...
## $ gender : Factor w/ 2 levels "female","male": 2 2 1 2 1 2 2 2 2 1 ...
## $ habit : Factor w/ 2 levels "nonsmoker","smoker": 1 1 1 1 1 1 1 1 1 1 ...
## $ whitemom : Factor w/ 2 levels "not white","white": 1 1 2 2 1 1 1 1 2 2 ...

ncbirths %>%
  ggplot(aes(x = weeks, y = weight)) +
  geom_point() +
  ggtitle("weight of babies against gestation period")

## Warning: Removed 2 rows containing missing values (geom_point).

### Boxplots as discretized/conditioned scatterplots

• The cut() function takes two arguments: the continuous variable you want to discretize and the number of breaks that you want to make in that continuous variable in order to discretize it.
• Here the cut() function is used to discretize the x-variable into five intervals (breaks = 5).

ncbirths %>%
  ggplot(aes(x = cut(weeks, breaks = 5), y = weight)) +
  geom_boxplot()

### Creating scatterplots

• The mammals dataset contains information about 62 different species of mammals, including their body weight, brain weight, gestation time, and a few other variables.

glimpse(mammals)

## Observations: 62
## Variables: 11
## $ Species <fct> Africanelephant, Africangiantpouchedrat, ArcticFox...
## $ BodyWt <dbl> 6654.000, 1.000, 3.385, 0.920, 2547.000, 10.550, 0...
## $ BrainWt <dbl> 5712.0, 6.6, 44.5, 5.7, 4603.0, 179.5, 0.3, 169.0,...
## $ NonDreaming <dbl> NA, 6.3, NA, NA, 2.1, 9.1, 15.8, 5.2, 10.9, 8.3, 1...
## $ Dreaming <dbl> NA, 2.0, NA, NA, 1.8, 0.7, 3.9, 1.0, 3.6, 1.4, 1.5...
## $ TotalSleep <dbl> 3.3, 8.3, 12.5, 16.5, 3.9, 9.8, 19.7, 6.2, 14.5, 9...
## $ LifeSpan <dbl> 38.6, 4.5, 14.0, NA, 69.0, 27.0, 19.0, 30.4, 28.0,...
## $ Gestation <dbl> 645, 42, 60, 25, 624, 180, 35, 392, 63, 230, 112, ...
## $ Predation <int> 3, 3, 1, 5, 3, 4, 1, 4, 1, 1, 5, 5, 2, 5, 1, 2, 2,...
## $ Exposure <int> 5, 1, 1, 2, 5, 4, 1, 5, 2, 1, 4, 5, 1, 5, 1, 2, 2,...
## $ Danger <int> 3, 3, 1, 3, 4, 4, 1, 4, 1, 1, 4, 5, 2, 5, 1, 2, 2,...

• Using the mammals dataset, create a scatterplot illustrating how the brain weight of a mammal varies as a function of its body weight.

mammals %>%
  ggplot(aes(x = BodyWt, y = BrainWt)) +
  geom_point()

• The mlbBat10 dataset contains batting statistics for 1,199 Major League Baseball players during the 2010 season.

glimpse(mlbBat10)

## Observations: 1,199
## Variables: 19
## $ name <fct> I Suzuki, D Jeter, M Young, J Pierre, R Weeks, M Scut...
## $ team <fct> SEA, NYY, TEX, CWS, MIL, BOS, BAL, MIN, NYY, CIN, MIL...
## $ position <fct> OF, SS, 3B, OF, 2B, SS, OF, OF, 2B, 2B, OF, OF, 2B, O...
## $ G <dbl> 162, 157, 157, 160, 160, 150, 160, 153, 160, 155, 157...
## $ AB <dbl> 680, 663, 656, 651, 651, 632, 629, 629, 626, 626, 619...
## $ R <dbl> 74, 111, 99, 96, 112, 92, 79, 85, 103, 100, 101, 103,...
## $ H <dbl> 214, 179, 186, 179, 175, 174, 187, 166, 200, 172, 188...
## $ 2B <dbl> 30, 30, 36, 18, 32, 38, 45, 24, 41, 33, 45, 34, 41, 2...
## $ 3B <dbl> 3, 3, 3, 3, 4, 0, 3, 10, 3, 5, 1, 10, 4, 3, 3, 1, 5, ...
## $ HR <dbl> 6, 10, 21, 1, 29, 11, 12, 3, 29, 18, 25, 4, 10, 25, 1...
## $ RBI <dbl> 43, 67, 91, 47, 83, 56, 60, 58, 109, 59, 103, 41, 75,...
## $ TB <dbl> 268, 245, 291, 206, 302, 245, 274, 219, 334, 269, 310...
## $ BB <dbl> 45, 63, 50, 45, 76, 53, 73, 60, 57, 46, 56, 47, 28, 4...
## $ SO <dbl> 86, 106, 115, 47, 184, 71, 93, 74, 77, 83, 105, 170, ...
## $ SB <dbl> 42, 18, 4, 68, 11, 5, 7, 26, 3, 16, 14, 27, 14, 18, 1...
## $ CS <dbl> 9, 5, 2, 18, 4, 4, 2, 4, 2, 12, 3, 6, 4, 9, 5, 1, 3, ...
## $ OBP <dbl> 0.359, 0.340, 0.330, 0.341, 0.366, 0.333, 0.370, 0.33...
## $ SLG <dbl> 0.394, 0.370, 0.444, 0.316, 0.464, 0.388, 0.436, 0.34...
## $ AVG <dbl> 0.315, 0.270, 0.284, 0.275, 0.269, 0.275, 0.297, 0.26...

• Using the mlbBat10 dataset, create a scatterplot illustrating how the slugging percentage (SLG) of a player varies as a function of his on-base percentage (OBP).

mlbBat10 %>%
  ggplot(aes(OBP, SLG)) +
  geom_point()

• The bdims dataset contains body girth and skeletal diameter measurements for 507 physically active individuals.

glimpse(bdims)

## Observations: 507
## Variables: 25
## $ bia.di <dbl> 42.9, 43.7, 40.1, 44.3, 42.5, 43.3, 43.5, 44.4, 43.5, 4...
## $ bii.di <dbl> 26.0, 28.5, 28.2, 29.9, 29.9, 27.0, 30.0, 29.8, 26.5, 2...
## $ bit.di <dbl> 31.5, 33.5, 33.3, 34.0, 34.0, 31.5, 34.0, 33.2, 32.1, 3...
## $ che.de <dbl> 17.7, 16.9, 20.9, 18.4, 21.5, 19.6, 21.9, 21.8, 15.5, 2...
## $ che.di <dbl> 28.0, 30.8, 31.7, 28.2, 29.4, 31.3, 31.7, 28.8, 27.5, 2...
## $ elb.di <dbl> 13.1, 14.0, 13.9, 13.9, 15.2, 14.0, 16.1, 15.1, 14.1, 1...
## $ wri.di <dbl> 10.4, 11.8, 10.9, 11.2, 11.6, 11.5, 12.5, 11.9, 11.2, 1...
## $ kne.di <dbl> 18.8, 20.6, 19.7, 20.9, 20.7, 18.8, 20.8, 21.0, 18.9, 2...
## $ ank.di <dbl> 14.1, 15.1, 14.1, 15.0, 14.9, 13.9, 15.6, 14.6, 13.2, 1...
## $ sho.gi <dbl> 106.2, 110.5, 115.1, 104.5, 107.5, 119.8, 123.5, 120.4,...
## $ che.gi <dbl> 89.5, 97.0, 97.5, 97.0, 97.5, 99.9, 106.9, 102.5, 91.0,...
## $ wai.gi <dbl> 71.5, 79.0, 83.2, 77.8, 80.0, 82.5, 82.0, 76.8, 68.5, 7...
## $ nav.gi <dbl> 74.5, 86.5, 82.9, 78.8, 82.5, 80.1, 84.0, 80.5, 69.0, 8...
## $ hip.gi <dbl> 93.5, 94.8, 95.0, 94.0, 98.5, 95.3, 101.0, 98.0, 89.5, ...
## $ thi.gi <dbl> 51.5, 51.5, 57.3, 53.0, 55.4, 57.5, 60.9, 56.0, 50.0, 5...
## $ bic.gi <dbl> 32.5, 34.4, 33.4, 31.0, 32.0, 33.0, 42.4, 34.1, 33.0, 3...
## $ for.gi <dbl> 26.0, 28.0, 28.8, 26.2, 28.4, 28.0, 32.3, 28.0, 26.0, 2...
## $ kne.gi <dbl> 34.5, 36.5, 37.0, 37.0, 37.7, 36.6, 40.1, 39.2, 35.5, 3...
## $ cal.gi <dbl> 36.5, 37.5, 37.3, 34.8, 38.6, 36.1, 40.3, 36.7, 35.0, 3...
## $ ank.gi <dbl> 23.5, 24.5, 21.9, 23.0, 24.4, 23.5, 23.6, 22.5, 22.0, 2...
## $ wri.gi <dbl> 16.5, 17.0, 16.9, 16.6, 18.0, 16.9, 18.8, 18.0, 16.5, 1...
## $ age <int> 21, 23, 28, 23, 22, 21, 26, 27, 23, 21, 23, 22, 20, 26,...
## $ wgt <dbl> 65.6, 71.8, 80.7, 72.6, 78.8, 74.8, 86.4, 78.4, 62.0, 8...
## $ hgt <dbl> 174.0, 175.3, 193.5, 186.5, 187.2, 181.5, 184.0, 184.5,...
## $ sex <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...

• Using the bdims dataset, create a scatterplot illustrating how a person's weight varies as a function of their height.
• Use color to separate by sex, which you'll need to coerce to a factor with factor().

bdims$sex <- factor(bdims$sex, labels = c('female', 'male'))

bdims %>%
  ggplot(aes(hgt, wgt, color = sex)) +
  geom_point() +
  facet_wrap(~ sex)

• The smoking dataset contains information on the smoking habits of 1,691 citizens of the United Kingdom.

glimpse(smoking)

## Observations: 1,691
## Variables: 12
## $ gender <fct> Male, Female, Male, Female, Female, Femal...
## $ age <int> 38, 42, 40, 40, 39, 37, 53, 44, 40, 41, 7...
## $ maritalStatus <fct> Divorced, Single, Married, Married, Marri...
## $ highestQualification <fct> No Qualification, No Qualification, Degre...
## $ nationality <fct> British, British, English, English, Briti...
## $ ethnicity <fct> White, White, White, White, White, White,...
## $ grossIncome <fct> 2,600 to 5,200, Under 2,600, 28,600 to 36...
## $ region <fct> The North, The North, The North, The Nort...
## $ smoke <fct> No, Yes, No, No, No, No, Yes, No, Yes, Ye...
## $ amtWeekends <int> NA, 12, NA, NA, NA, NA, 6, NA, 8, 15, NA,...
## $ amtWeekdays <int> NA, 12, NA, NA, NA, NA, 6, NA, 8, 12, NA,...
## $ type <fct> , Packets, , , , , Packets, , Hand-Rolled...

• Using the smoking dataset, create a scatterplot illustrating how the amount that a person smokes on weekdays varies as a function of their age.

smoking %>%
  ggplot(aes(age, amtWeekdays)) +
  geom_point()

## Warning: Removed 1270 rows containing missing values (geom_point).

### Transformations

ggplot2 provides several different mechanisms for viewing transformed relationships. The coord_trans() function transforms the coordinates of the plot. Alternatively, the scale_x_log10() and scale_y_log10() functions perform a base-10 log transformation of each axis. Note the differences in the appearance of the axes.

# Scatterplot with coord_trans()
p1 <- ggplot(data = mammals, aes(x = BodyWt, y = BrainWt)) +
  geom_point() +
  coord_trans(x = "log10", y = "log10")

# Scatterplot with scale_x_log10() and scale_y_log10()
p2 <- ggplot(data = mammals, aes(x = BodyWt, y = BrainWt)) +
  geom_point() +
  scale_x_log10() +
  scale_y_log10()

grid.arrange(p1, p2, ncol = 2)

### Identifying outliers

• Use filter() to create a scatterplot for SLG as a function of OBP among players who had at least 200 at-bats.

mlbBat10 %>%
  filter(AB >= 200) %>%
  ggplot(aes(OBP, SLG)) +
  geom_point()

• Find the row of mlbBat10 corresponding to the one player with at least 200 at-bats whose OBP was below 0.200.

mlbBat10 %>%
  filter(AB >= 200) %>%
  filter(OBP < 0.200)

## name team position G AB R H 2B 3B HR RBI TB BB SO SB CS OBP
## 1 B Wood LAA 3B 81 226 20 33 2 0 4 14 47 6 71 1 0 0.174
## SLG AVG
## 1 0.208 0.146

## Correlation

### Computing correlation

• Use cor() to compute the correlation between the birthweight of babies in the ncbirths dataset and their mother's age. There is no missing data in either variable.

# Compute correlation
ncbirths %>%
  summarize(N = n(), r = cor(weight, mage))

## N r
## 1 1000 0.05506589

• Compute the correlation between the birthweight and the number of weeks of gestation for all non-missing pairs. The use argument allows you to override the default behavior of returning NA whenever any of the values encountered is NA. Setting the use argument to "pairwise.complete.obs" allows cor() to compute the correlation coefficient for those observations where the values of x and y are both not missing.
# Compute correlation for all non-missing pairs
ncbirths %>%
  summarize(N = n(), r = cor(weight, weeks, use = 'pairwise.complete.obs'))

## N r
## 1 1000 0.6701013

### The Anscombe dataset

library(Tmisc)
data(quartet)

ggplot(data = quartet, aes(x = x, y = y)) +
  geom_point() +
  facet_wrap(~ set)

In 1973, Francis Anscombe famously created four datasets with remarkably similar numerical properties, but obviously different graphic relationships. The Anscombe dataset contains the x and y coordinates for these four datasets, along with a grouping variable, set, that distinguishes the quartet.

quartet %>%
  group_by(set) %>%
  summarize(N = n(), mean(x), sd(x), mean(y), sd(y), cor(x, y))

## # A tibble: 4 x 7
## set N mean(x) sd(x) mean(y) sd(y) cor(x, y)
## <fct> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 I 11 9 3.32 7.50 2.03 0.816
## 2 II 11 9 3.32 7.50 2.03 0.816
## 3 III 11 9 3.32 7.5 2.03 0.816
## 4 IV 11 9 3.32 7.50 2.03 0.817

### Perception of correlation

# Correlation for all baseball players
mlbBat10 %>%
  summarize(N = n(), r = cor(OBP, SLG))

## N r
## 1 1199 0.8145628

# Correlation for all players with at least 200 ABs
mlbBat10 %>%
  filter(AB >= 200) %>%
  summarize(N = n(), r = cor(OBP, SLG))

## N r
## 1 329 0.6855364

# Correlation of body dimensions
bdims %>%
  group_by(sex) %>%
  summarize(N = n(), r = cor(hgt, wgt))

## # A tibble: 2 x 3
## sex N r
## <fct> <int> <dbl>
## 1 female 260 0.431
## 2 male 247 0.535

# Correlation among mammals, with and without log
mammals %>%
  summarize(N = n(),
            r = cor(BodyWt, BrainWt),
            r_log = cor(log(BodyWt), log(BrainWt)))

## N r r_log
## 1 62 0.9341638 0.9595748

### Spurious correlation in random data

set.seed(926)
noise <- data.frame(x = rnorm(1000), y = rnorm(1000), z = 1:20)

Statisticians must always be skeptical of potentially spurious correlations. Human beings are very good at seeing patterns in data, sometimes when the patterns themselves are actually just random noise. To illustrate how easy it can be to fall into this trap, we will look for patterns in truly random data. The noise dataset contains 20 sets of x and y variables drawn at random from a standard normal distribution. Each set, denoted as z, has 50 observations of x, y pairs. Do you see any pairs of variables that might be meaningfully correlated? Are all of the correlation coefficients close to zero?

• Create a faceted scatterplot that shows the relationship between each of the 20 sets of pairs of random variables x and y. You will need the facet_wrap() function for this.
• Compute the actual correlation between each of the 20 sets of pairs of x and y.
• Identify the datasets that show non-trivial correlation of greater than 0.2 in absolute value.

# Create faceted scatterplot
ggplot(noise, aes(x = x, y = y)) +
  geom_point() +
  facet_wrap(~ z)

# Compute correlations for each dataset
noise_summary <- noise %>%
  group_by(z) %>%
  summarize(N = n(), spurious_cor = cor(x, y))

# Isolate sets with correlations above 0.2 in absolute strength
noise_summary %>%
  filter(abs(spurious_cor) > 0.2)

## # A tibble: 1 x 3
## z N spurious_cor
## <int> <int> <dbl>
## 1 8 50 -0.228

## Simple linear regression

### The "best fit" line

• Create a scatterplot of body weight as a function of height for all individuals in the bdims dataset with a simple linear model plotted over the data.
# Scatterplot with regression line
ggplot(data = bdims, aes(x = hgt, y = wgt)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)

### Uniqueness of least squares regression line

add_line <- function(my_slope) {
  bdims_summary <- bdims %>%
    summarize(N = n(), r = cor(hgt, wgt),
              mean_hgt = mean(hgt), mean_wgt = mean(wgt),
              sd_hgt = sd(hgt), sd_wgt = sd(wgt)) %>%
    mutate(true_slope = r * sd_wgt / sd_hgt,
           true_intercept = mean_wgt - true_slope * mean_hgt)
  p <- ggplot(data = bdims, aes(x = hgt, y = wgt)) +
    geom_point() +
    geom_point(data = bdims_summary,
               aes(x = mean_hgt, y = mean_wgt),
               color = "red", size = 3)
  my_data <- bdims_summary %>%
    mutate(my_slope = my_slope,
           my_intercept = mean_wgt - my_slope * mean_hgt)
  p + geom_abline(data = my_data,
                  aes(intercept = my_intercept, slope = my_slope),
                  color = "dodgerblue")
}

The least squares criterion implies that the slope of the regression line is unique. In practice, the slope is computed by R. In this exercise, you will experiment with trying to find the optimal value for the regression slope for weight as a function of height in the bdims dataset via trial-and-error. A custom function called add_line() takes a single argument: the proposed slope coefficient.

add_line(my_slope = 1.15)

### Fitting a linear model "by hand"

Simple linear regression model:

$Y = \beta_0 + \beta_1 \cdot X + \epsilon \,, \text{ where } \epsilon \sim N(0, \sigma_{\epsilon}) \,.$

Two facts enable you to compute the slope $\beta_1$ and intercept $\beta_0$ of a simple linear regression model from some basic summary statistics. First, the slope can be defined as:

$b_1 = r_{X,Y} \cdot \frac{s_Y}{s_X}$

where $r_{X,Y}$ represents the correlation (cor()) of $X$ and $Y$ and $s_X$ and $s_Y$ represent the standard deviations (sd()) of $X$ and $Y$, respectively. Second, the point $(\bar{x}, \bar{y})$ always lies on the least squares regression line.

The bdims_summary data frame contains all of the information you need to compute the slope and intercept of the least squares regression line for body weight $(Y)$ as a function of height $(X)$. You might need to do some algebra to solve for $b_0$!

bdims_summary <- bdims %>%
  summarise(N = n(), r = cor(hgt, wgt),
            mean_hgt = mean(hgt), sd_hgt = sd(hgt),
            mean_wgt = mean(wgt), sd_wgt = sd(wgt))
bdims_summary

## N r mean_hgt sd_hgt mean_wgt sd_wgt
## 1 507 0.7173011 171.1438 9.407205 69.14753 13.34576

# Add slope and intercept
bdims_summary %>%
  mutate(slope = r * (sd_wgt / sd_hgt),
         intercept = mean_wgt - (slope * mean_hgt))

## N r mean_hgt sd_hgt mean_wgt sd_wgt slope intercept
## 1 507 0.7173011 171.1438 9.407205 69.14753 13.34576 1.017617 -105.0113

### Regression to the mean

Regression to the mean is a concept attributed to Sir Francis Galton. The basic idea is that extreme random observations will tend to be less extreme upon a second trial. This is simply due to chance alone. While "regression to the mean" and "linear regression" are not the same thing, we will examine them together in this exercise.

The GaltonFamilies dataset contains data originally collected by Galton himself in the 1880s on the heights of adult children, along with their parents' heights.
# This dataset is based on the work done by Francis Galton in the 19th century
library(HistData)
data(GaltonFamilies)
head(GaltonFamilies)

## family father mother midparentHeight children childNum gender
## 1 001 78.5 67.0 75.43 4 1 male
## 2 001 78.5 67.0 75.43 4 2 female
## 3 001 78.5 67.0 75.43 4 3 female
## 4 001 78.5 67.0 75.43 4 4 female
## 5 002 75.5 66.5 73.66 4 1 male
## 6 002 75.5 66.5 73.66 4 2 male
## childHeight
## 1 73.2
## 2 69.2
## 3 69.0
## 4 69.0
## 5 73.5
## 6 72.5

• Create a scatterplot of the height of men as a function of their father's height. Add the simple linear regression line and a diagonal line (with slope equal to 1 and intercept equal to 0) to the plot.

# Height of children vs. height of father
GaltonFamilies %>%
  filter(gender == "male") %>%
  ggplot(aes(father, childHeight)) +
  geom_point() +
  geom_abline(slope = 1, intercept = 0) +
  geom_smooth(method = "lm", se = FALSE)

# Height of children vs. height of mother
GaltonFamilies %>%
  filter(gender == "female") %>%
  ggplot(aes(mother, childHeight)) +
  geom_point() +
  geom_abline(slope = 1, intercept = 0) +
  geom_smooth(method = "lm", se = FALSE)

## Interpreting regression models

### Fitting simple linear models

While the geom_smooth(method = "lm") function is useful for drawing linear models on a scatterplot, it doesn't actually return the characteristics of the model. As suggested by that syntax, however, the function that creates linear models is lm(). This function generally takes two arguments:

• A formula that specifies the model
• A data argument for the data frame that contains the data you want to use to fit the model

The lm() function returns a model object having class "lm". This object contains lots of information about your regression model, including the data used to fit the model, the specification of the model, the fitted values and residuals, etc.

• Using the bdims dataset, create a linear model for the weight of people as a function of their height.

# Linear model for weight as a function of height
lm(wgt ~ hgt, data = bdims)

##
## Call:
## lm(formula = wgt ~ hgt, data = bdims)
##
## Coefficients:
## (Intercept) hgt
## -105.011 1.018

• Using the mlbBat10 dataset, create a linear model for SLG as a function of OBP.

# Linear model for SLG as a function of OBP
lm(SLG ~ OBP, data = mlbBat10)

##
## Call:
## lm(formula = SLG ~ OBP, data = mlbBat10)
##
## Coefficients:
## (Intercept) OBP
## 0.009407 1.110323

• Using the mammals dataset, create a linear model for the body weight of mammals as a function of their brain weight, after taking the natural log of both variables.

# Log-linear model for body weight as a function of brain weight
lm(log(BodyWt) ~ log(BrainWt), data = mammals)

##
## Call:
## lm(formula = log(BodyWt) ~ log(BrainWt), data = mammals)
##
## Coefficients:
## (Intercept) log(BrainWt)
## -2.509 1.225

### The lm summary output

An "lm" object contains a host of information about the regression model that you fit. There are various ways of extracting different pieces of information. The coef() function displays only the values of the coefficients. Conversely, the summary() function displays not only that information, but a bunch of other information, including the associated standard error and p-value for each coefficient, the $$R^2$$, adjusted $$R^2$$, and the residual standard error. The summary of an "lm" object in R is very similar to the output you would see in other statistical computing environments (e.g. Stata, SPSS, etc.).
The mod object is a linear model for the weight of individuals as a function of their height, using the bdims dataset.

mod <- lm(wgt ~ hgt, data = bdims)

• Use coef() to display the coefficients of mod.

coef(mod)

## (Intercept) hgt
## -105.011254 1.017617

• Use summary() to display the full regression output of mod.

summary(mod)

##
## Call:
## lm(formula = wgt ~ hgt, data = bdims)
##
## Residuals:
## Min 1Q Median 3Q Max
## -18.743 -6.402 -1.231 5.059 41.103
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -105.01125 7.53941 -13.93 <2e-16 ***
## hgt 1.01762 0.04399 23.14 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 9.308 on 505 degrees of freedom
## Multiple R-squared: 0.5145, Adjusted R-squared: 0.5136
## F-statistic: 535.2 on 1 and 505 DF, p-value: < 2.2e-16

### Fitted values and residuals

Once you have fit a regression model, you are often interested in the fitted values ($$\hat{y}_i$$) and the residuals ($$e_i$$), where $$i$$ indexes the observations. Recall that:

$e_i = y_i - \hat{y}_i$

In this exercise, we will confirm these two mathematical facts by accessing the fitted values and residuals with the fitted.values() and residuals() functions, respectively, for the following model:

mod <- lm(wgt ~ hgt, data = bdims)

• Confirm that the mean of the body weights equals the mean of the fitted values of mod.

mean(bdims$wgt) == mean(fitted.values(mod))

## [1] TRUE

• Compute the mean of the residuals of mod.

mean(residuals(mod))

## [1] -1.266971e-15

The least squares fitting procedure guarantees that the mean of the residuals is zero (n.b., numerical instability may result in the computed values not being exactly zero). At the same time, the mean of the fitted values must equal the mean of the response variable.

### Tidying your linear model

As you fit a regression model, there are some quantities (e.g. $$R^2$$) that apply to the model as a whole, while others apply to each observation (e.g. $$\hat{y}_i$$). If there are several of these per-observation quantities, it is sometimes convenient to attach them to the original data as new variables. The augment() function from the broom package does exactly this. It takes a model object as an argument and returns a data frame that contains the data on which the model was fit, along with several quantities specific to the regression model, including the fitted values, residuals, leverage scores, and standardized residuals.

# Load broom
library(broom)

# Create bdims_tidy
bdims_tidy <- augment(mod)

# Glimpse the resulting data frame
glimpse(bdims_tidy)

## Observations: 507
## Variables: 9
## $ wgt <dbl> 65.6, 71.8, 80.7, 72.6, 78.8, 74.8, 86.4, 78.4, 62....
## $ hgt <dbl> 174.0, 175.3, 193.5, 186.5, 187.2, 181.5, 184.0, 18...
## $ .fitted <dbl> 72.05406, 73.37697, 91.89759, 84.77427, 85.48661, 7...
## $ .se.fit <dbl> 0.4320546, 0.4520060, 1.0667332, 0.7919264, 0.81834...
## $ .resid <dbl> -6.4540648, -1.5769666, -11.1975919, -12.1742745, -...
## $ .hat <dbl> 0.002154570, 0.002358152, 0.013133942, 0.007238576,...
## $ .sigma <dbl> 9.312824, 9.317005, 9.303732, 9.301360, 9.312471, 9...
## $ .cooksd <dbl> 5.201807e-04, 3.400330e-05, 9.758463e-03, 6.282074e...
## $ .std.resid <dbl> -0.69413418, -0.16961994, -1.21098084, -1.31269063,...

### Making predictions

The fitted.values() function or the augment()-ed data frame provides us with the fitted values for the observations that were in the original data.
However, once we have fit the model, we may want to compute expected values for observations that were not present in the data on which the model was fit. These types of predictions are called out-of-sample.

The ben data frame contains a height and weight observation for one person. The mod object contains the fitted model for weight as a function of height for the observations in the bdims dataset. We can use the predict() function to generate expected values for the weight of new individuals. We must pass the data frame of new observations through the newdata argument.

(ben <- data.frame(wgt = 74.8, hgt = 182.8))

## wgt hgt
## 1 74.8 182.8

Note that the data frame ben must have variables with the exact same names as those in the fitted model.

predict(mod, newdata = ben)

## 1
## 81.00909

### Adding a regression line to a plot manually

The geom_smooth() function makes it easy to add a simple linear regression line to a scatterplot of the corresponding variables. And in fact, there are more complicated regression models that can be visualized in the data space with geom_smooth(). However, there may still be times when we will want to add regression lines to our scatterplot manually. To do this, we will use the geom_abline() function, which takes slope and intercept arguments. Naturally, we have to compute those values ahead of time, but we already saw how to do this (e.g. using coef()).

The coefs vector contains the model estimates retrieved from coef(); its elements can be passed to geom_abline() to draw a straight line on your scatterplot.

• Use geom_abline() to add a line defined by the coefficients in coefs to a scatterplot of weight vs. height for individuals in the bdims dataset.

coefs <- coef(mod)

# Add the line to the scatterplot
ggplot(bdims, aes(x = hgt, y = wgt)) +
  geom_point() +
  geom_abline(aes(intercept = coefs[1], slope = coefs[2]),
              color = "dodgerblue")

## Model Fit

### Standard error of residuals

One way to assess strength of fit is to consider how far off the model is for a typical case. That is, for some observations, the fitted value will be very close to the actual value, while for others it will not. The magnitude of a typical residual can give us a sense of generally how close our estimates are.

However, recall that some of the residuals are positive, while others are negative. In fact, it is guaranteed by the least squares fitting procedure that the mean of the residuals is zero. Thus, it makes more sense to compute the square root of the mean squared residual, or root mean squared error (RMSE). R calls this quantity the residual standard error. To make this estimate unbiased, you have to divide the sum of the squared residuals by the degrees of freedom in the model. Thus,

$RMSE = \sqrt{ \frac{\sum_i{e_i^2}}{d.f.} } = \sqrt{ \frac{SSE}{d.f.} }$

You can recover the residuals from mod with residuals(), and the degrees of freedom with df.residual().

• View a summary() of mod.

# View summary of model
summary(mod)

##
## Call:
## lm(formula = wgt ~ hgt, data = bdims)
##
## Residuals:
## Min 1Q Median 3Q Max
## -18.743 -6.402 -1.231 5.059 41.103
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -105.01125 7.53941 -13.93 <2e-16 ***
## hgt 1.01762 0.04399 23.14 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
0.1 ' ' 1 ## ## Residual standard error: 9.308 on 505 degrees of freedom ## Multiple R-squared: 0.5145, Adjusted R-squared: 0.5136 ## F-statistic: 535.2 on 1 and 505 DF, p-value: < 2.2e-16 • Compute the mean of the residuals() and verify that it is approximately zero. # Compute the mean of the residuals mean(residuals(mod)) ## [1] -1.266971e-15 • Use residuals() and df.residual() to compute the root mean squared error (RMSE), a.k.a. residual standard error. # Compute RMSE sqrt(sum(residuals(mod)^2) / df.residual(mod)) ## [1] 9.30804 ### Assessing simple linear model fit Recall that the coefficient of determination ($$R^2$$), can be computed as $R^2 = 1 - \frac{SSE}{SST} = 1 - \frac{Var(e)}{Var(y)} \,,$ where $$e$$ is the vector of residuals and $$y$$ is the response variable. This gives us the interpretation of $$R^2$$ as the percentage of the variability in the response that is explained by the model, since the residuals are the part of that variability that remains unexplained by the model. The bdims_tidy data frame is the result of augment()-ing the bdims data frame with the mod for wgt as a function of hgt. • Use the summary() function to view the full results of mod. summary(mod) ## ## Call: ## lm(formula = wgt ~ hgt, data = bdims) ## ## Residuals: ## Min 1Q Median 3Q Max ## -18.743 -6.402 -1.231 5.059 41.103 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -105.01125 7.53941 -13.93 <2e-16 *** ## hgt 1.01762 0.04399 23.14 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 9.308 on 505 degrees of freedom ## Multiple R-squared: 0.5145, Adjusted R-squared: 0.5136 ## F-statistic: 535.2 on 1 and 505 DF, p-value: < 2.2e-16 • Use the bdims_tidy data frame to compute the $$R^2$$ of mod manually using the formula above, by computing the ratio of the variance of the residuals to the variance of the response variable. # Compute R-squared bdims_tidy %>% summarize(var_y = var(wgt), var_e = var(.resid) ) %>% mutate(R_squared = 1 - (var_e / var_y) ) ## var_y var_e R_squared ## 1 178.1094 86.46839 0.5145208 ### Interpretation of $$R^2$$ The $$R^2$$ reported for the regression model for poverty rate of U.S. counties in terms of high school graduation rate is 0.464. countyComplete is from R package openintro by (???). lm(formula = poverty ~ hs_grad, data = countyComplete) %>% summary() ## ## Call: ## lm(formula = poverty ~ hs_grad, data = countyComplete) ## ## Residuals: ## Min 1Q Median 3Q Max ## -18.035 -3.034 -0.434 2.405 36.874 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 64.59437 0.94619 68.27 <2e-16 *** ## hs_grad -0.59075 0.01134 -52.09 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 4.677 on 3141 degrees of freedom ## Multiple R-squared: 0.4635, Adjusted R-squared: 0.4633 ## F-statistic: 2713 on 1 and 3141 DF, p-value: < 2.2e-16 $$46.4\%$$ of the variability in poverty rate among U.S. counties can be explained by high school graduation rate. ## Linear vs. average The $$R^2$$ gives us a numerical measurement of the strength of fit relative to a null model based on the average of the response variable: $\hat{y}_{null} = \bar{y}$ This model has an $$R^2$$ of zero because $$SSE=SST$$. That is, since the fitted values $$\hat{y}_{null}$$ are all equal to the average $$\bar{y}$$, the residual for each observation is the distance between that observation and the mean of the response. 
Since we can always fit the null model, it serves as a baseline against which all other models will be compared. In the graphic, we visualize the residuals for the null model (mod_null at left) vs. the simple linear regression model (mod_hgt at right) with height as a single explanatory variable. Try to convince yourself that, if you squared the lengths of the grey arrows on the left and summed them up, you would get a larger value than if you performed the same operation on the grey arrows on the right. It may be useful to preview these augment()-ed data frames with glimpse(): mod_null <- lm(wgt ~ 1, data = bdims) %>% augment() mod_hgt <- lm(wgt ~ hgt, data = bdims) %>% augment() glimpse(mod_null) ## Observations: 507 ## Variables: 8 ##$ wgt <dbl> 65.6, 71.8, 80.7, 72.6, 78.8, 74.8, 86.4, 78.4, 62.... ## $.fitted <dbl> 69.14753, 69.14753, 69.14753, 69.14753, 69.14753, 6... ##$ .se.fit <dbl> 0.5927061, 0.5927061, 0.5927061, 0.5927061, 0.59270... ## $.resid <dbl> -3.5475345, 2.6524655, 11.5524655, 3.4524655, 9.652... ##$ .hat <dbl> 0.001972387, 0.001972387, 0.001972387, 0.001972387,... ## $.sigma <dbl> 13.35803, 13.35845, 13.34906, 13.35808, 13.35205, 1... ##$ .cooksd <dbl> 1.399179e-04, 7.822033e-05, 1.483780e-03, 1.325192e... ## $.std.resid <dbl> -0.26607983, 0.19894594, 0.86648293, 0.25894926, 0.... # Compute SSE for null model mod_null %>% summarize(SSE = var(.resid)) ## SSE ## 1 178.1094 glimpse(mod_hgt) ## Observations: 507 ## Variables: 9 ##$ wgt <dbl> 65.6, 71.8, 80.7, 72.6, 78.8, 74.8, 86.4, 78.4, 62.... ## $hgt <dbl> 174.0, 175.3, 193.5, 186.5, 187.2, 181.5, 184.0, 18... ##$ .fitted <dbl> 72.05406, 73.37697, 91.89759, 84.77427, 85.48661, 7... ## $.se.fit <dbl> 0.4320546, 0.4520060, 1.0667332, 0.7919264, 0.81834... ##$ .resid <dbl> -6.4540648, -1.5769666, -11.1975919, -12.1742745, -... ## $.hat <dbl> 0.002154570, 0.002358152, 0.013133942, 0.007238576,... ##$ .sigma <dbl> 9.312824, 9.317005, 9.303732, 9.301360, 9.312471, 9... ## $.cooksd <dbl> 5.201807e-04, 3.400330e-05, 9.758463e-03, 6.282074e... ##$ .std.resid <dbl> -0.69413418, -0.16961994, -1.21098084, -1.31269063,... ggplot(bdims, aes(x=hgt, y=wgt)) + geom_point() + geom_smooth(method = "lm", se = FALSE) # Compute SSE for regression model mod_hgt %>% summarize(SSE = var(.resid)) ## SSE ## 1 86.46839 ### Leverage The leverage of an observation in a regression model is defined entirely in terms of the distance of that observation from the mean of the explanatory variable. That is, observations close to the mean of the explanatory variable have low leverage, while observations far from the mean of the explanatory variable have high leverage. Points of high leverage may or may not be influential. The augment() function from the broom package will add the leverage scores (.hat) to a model data frame. • Use augment() to list the top 6 observations by their leverage scores, in descending order. 
mod <- lm(SLG ~ OBP, filter(mlbBat10, AB >= 10)) # Rank points of high mod %>% augment() %>% arrange(desc(.hat)) %>% head() ## SLG OBP .fitted .se.fit .resid .hat .sigma ## 1 0.000 0.000 -0.03744579 0.009956861 0.03744579 0.01939493 0.07153050 ## 2 0.000 0.000 -0.03744579 0.009956861 0.03744579 0.01939493 0.07153050 ## 3 0.000 0.000 -0.03744579 0.009956861 0.03744579 0.01939493 0.07153050 ## 4 0.308 0.550 0.69049108 0.009158810 -0.38249108 0.01641049 0.07011360 ## 5 0.000 0.037 0.01152451 0.008770891 -0.01152451 0.01504981 0.07154283 ## 6 0.038 0.038 0.01284803 0.008739031 0.02515197 0.01494067 0.07153800 ## .cooksd .std.resid ## 1 0.0027664282 0.5289049 ## 2 0.0027664282 0.5289049 ## 3 0.0027664282 0.5289049 ## 4 0.2427446800 -5.3943121 ## 5 0.0002015398 -0.1624191 ## 6 0.0009528017 0.3544561 ### Influence As noted previously, observations of high leverage may or may not be influential. The influence of an observation depends not only on its leverage, but also on the magnitude of its residual. Recall that while leverage only takes into account the explanatory variable ($$x$$), the residual depends on the response variable ($$y$$) and the fitted value ($$\hat{y}$$). Influential points are likely to have high leverage and deviate from the general relationship between the two variables. We measure influence using Cook’s distance, which incorporates both the leverage and residual of each observation. # Rank influential points mod %>% augment() %>% arrange(desc(.cooksd)) %>% head() ## SLG OBP .fitted .se.fit .resid .hat .sigma ## 1 0.308 0.550 0.69049108 0.009158810 -0.3824911 0.016410487 0.07011360 ## 2 0.833 0.385 0.47211002 0.004190644 0.3608900 0.003435619 0.07028875 ## 3 0.800 0.455 0.56475653 0.006186785 0.2352435 0.007488132 0.07101125 ## 4 0.379 0.133 0.13858258 0.005792344 0.2404174 0.006563752 0.07098798 ## 5 0.786 0.438 0.54225666 0.005678026 0.2437433 0.006307223 0.07097257 ## 6 0.231 0.077 0.06446537 0.007506974 0.1665346 0.011024863 0.07127661 ## .cooksd .std.resid ## 1 0.24274468 -5.394312 ## 2 0.04407145 5.056428 ## 3 0.04114818 3.302718 ## 4 0.03760256 3.373787 ## 5 0.03712042 3.420018 ## 6 0.03057912 2.342252 ### Removing outliers Observations can be outliers for a number of different reasons. Statisticians must always be careful—and more importantly, transparent—when dealing with outliers. Sometimes, a better model fit can be achieved by simply removing outliers and re-fitting the model. However, one must have strong justification for doing this. A desire to have a higher $$R^2$$ is not a good enough reason! In the mlbBat10 data, the outlier with an OBP of 0.550 is Bobby Scales, an infielder who had four hits in 13 at-bats for the Chicago Cubs. Scales also walked seven times, resulting in his unusually high OBP. The justification for removing Scales here is weak. While his performance was unusual, there is nothing to suggest that it is not a valid data point, nor is there a good reason to think that somehow we will learn more about Major League Baseball players by excluding him. Nevertheless, we can demonstrate how removing him will affect our model. • Use filter() to create a subset of mlbBat10 called nontrivial_players consisting of only those players with at least 10 at-bats and OBP of below 0.500. nontrivial_players <- mlbBat10 %>% filter((AB >= 10) & (OBP < 0.500)) • Fit the linear model for SLG as a function of OBP for the nontrivial_players. Save the result as mod_cleaner. 
mod_cleaner <- lm(SLG ~ OBP, data = nontrivial_players) • View the summary() of the new model and compare the slope and $$R^2$$ to those of mod, the original model fit to the data on all players. summary(mod_cleaner) ## ## Call: ## lm(formula = SLG ~ OBP, data = nontrivial_players) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.31383 -0.04165 -0.00261 0.03992 0.35819 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.043326 0.009823 -4.411 1.18e-05 *** ## OBP 1.345816 0.033012 40.768 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.07011 on 734 degrees of freedom ## Multiple R-squared: 0.6937, Adjusted R-squared: 0.6932 ## F-statistic: 1662 on 1 and 734 DF, p-value: < 2.2e-16 • Visualize the new model with ggplot() and the appropriate geom_*() functions. ggplot(nontrivial_players, aes(y=SLG, x=OBP))+ geom_point()+ geom_smooth(method = "lm") ### High leverage points Not all points of high leverage are influential. While the high leverage observation corresponding to Bobby Scales in the previous exercise is influential, the three observations for players with OBP and SLG values of 0 are not influential. This is because they happen to lie right near the regression anyway. Thus, while their extremely low OBP gives them the power to exert influence over the slope of the regression line, their low SLG prevents them from using it. • Use a combination of augment(), arrange() with two arguments, and head() to find the top 6 observations with the highest leverage but the lowest Cook’s distance. mod <- lm(SLG ~ OBP, mlbBat10) mod %>% augment() %>% arrange(.hat, desc(.cooksd)) %>% head() ## SLG OBP .fitted .se.fit .resid .hat .sigma ## 1 0.323 0.206 0.2381334 0.003956495 0.08486655 0.0008340345 0.1370347 ## 2 0.281 0.206 0.2381334 0.003956495 0.04286655 0.0008340345 0.1370510 ## 3 0.216 0.205 0.2370231 0.003956499 -0.02102312 0.0008340362 0.1370553 ## 4 0.276 0.207 0.2392438 0.003956623 0.03675623 0.0008340884 0.1370525 ## 5 0.174 0.204 0.2359128 0.003956635 -0.06191280 0.0008340936 0.1370449 ## 6 0.271 0.204 0.2359128 0.003956635 0.03508720 0.0008340936 0.1370529 ## .cooksd .std.resid ## 1 1.602930e-04 0.6197252 ## 2 4.089579e-05 0.3130265 ## 3 9.836415e-06 -0.1535182 ## 4 3.006987e-05 0.2684068 ## 5 8.531654e-05 -0.4521089 ## 6 2.740121e-05 0.2562190 sessionInfo() ## R version 3.4.4 (2018-03-15) ## Platform: x86_64-w64-mingw32/x64 (64-bit) ## Running under: Windows 10 x64 (build 17134) ## ## Matrix products: default ## ## locale: ## [1] LC_COLLATE=English_Canada.1252 LC_CTYPE=English_Canada.1252 ## [3] LC_MONETARY=English_Canada.1252 LC_NUMERIC=C ## [5] LC_TIME=English_Canada.1252 ## ## attached base packages: ## [1] methods stats graphics grDevices utils datasets base ## ## other attached packages: ## [1] broom_0.4.4 HistData_0.8-4 Tmisc_0.1.19 bindrcpp_0.2.2 ## [5] openintro_1.7.1 gridExtra_2.3 ggplot2_2.2.1 dplyr_0.7.4 ## ## loaded via a namespace (and not attached): ## [1] Rcpp_0.12.16 pillar_1.2.2 compiler_3.4.4 plyr_1.8.4 ## [5] bindr_0.1.1 tools_3.4.4 digest_0.6.15 lattice_0.20-35 ## [9] nlme_3.1-137 evaluate_0.10.1 tibble_1.4.2 gtable_0.2.0 ## [13] pkgconfig_2.0.1 rlang_0.2.0 psych_1.8.4 cli_1.0.0 ## [17] parallel_3.4.4 yaml_2.1.19 blogdown_0.6 xfun_0.1 ## [21] stringr_1.3.0 knitr_1.20 rprojroot_1.3-2 grid_3.4.4 ## [25] glue_1.2.0 R6_2.2.2 foreign_0.8-70 rmarkdown_1.9 ## [29] bookdown_0.7 reshape2_1.4.3 purrr_0.2.4 tidyr_0.8.0 ## [33] magrittr_1.5 backports_1.1.2 scales_0.5.0 codetools_0.2-15 ## 
[37] htmltools_0.3.6 mnormt_1.5-5 assertthat_0.2.0 colorspace_1.3-2
## [41] labeling_0.3 utf8_1.1.3 stringi_1.1.7 lazyeval_0.2.1
## [45] munsell_0.4.3 crayon_1.3.4

## Adding cites for R packages using knitr

knitr::write_bib(.packages(), "packages.bib")

# References

Wickham, Hadley, Romain François, Lionel Henry, and Kirill Müller. 2018. Dplyr: A Grammar of Data Manipulation. https://CRAN.R-project.org/package=dplyr.
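As a closing cross-check on the leverage and influence machinery used above, the following sketch verifies broom's .hat and .cooksd columns against base R's cooks.distance() and against the textbook leverage formula for simple regression. The data here are simulated, and the object names x, y, and fit are illustrative, not taken from the exercises.

# A minimal sketch on simulated (assumed) data, not from the exercises above
library(broom)

set.seed(1)
x <- rnorm(20)
y <- 2 + 3 * x + rnorm(20)
fit <- lm(y ~ x)
fit_tidy <- augment(fit)

# Textbook leverage for simple regression with an intercept:
# h_i = 1/n + (x_i - xbar)^2 / sum((x_j - xbar)^2)
h_manual <- 1 / length(x) + (x - mean(x))^2 / sum((x - mean(x))^2)

all.equal(unname(fit_tidy$.hat), h_manual)                        # expect TRUE
all.equal(unname(fit_tidy$.cooksd), unname(cooks.distance(fit)))  # expect TRUE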
This example describes WLAN radio frequency channel designations and shows how to calculate the channel center frequency in accordance with IEEE® 802.11™ specifications. ### IEEE 802.11 Channel Designations WLAN operates in unlicensed radio frequency (RF) spectra. Governing bodies in individual countries allocate these spectra and appropriate regulatory bodies specify values of maximum allowable output power. For a detailed description of country-specific information, operating classes, and behavior limits, see Annex E of [1] and [2]. The IEEE 802.11 standards designate bands for signal transmission, each corresponding to a standard or group of standards. Of these, WLAN Toolbox™ software supports these bands and corresponding standards. • 900 MHz (802.11ah™) • 2.4 GHz (802.11b/g/n/ax) • 5 GHz (802.11a/h/j/n/ac/ax) • 6 GHz (802.11ax™) Within each band, the standards specify channel numbers with a designated channel spacing. For example, the 2.4 GHz band contains channels 1 to 13, spaced 5 MHz apart, and channel 14, spaced 12 MHz from channel 13. Each band also has a designated channel start frequency, ${f}_{s}$, for the first channel. For the 2.4, 5, and 6 GHz operating bands, ${f}_{s}$ is 2.407, 5, and 5.950 GHz, respectively. Because WLAN channel bandwidths are greater than 5 MHz, cross-channel interference limits the number of designated usable channels. Access point deployments manage interference from neighboring cells by operating on nonoverlapping channels. In the United States, the 2.4 GHz band designated usable nonoverlapping channels are 1, 6, and 11. This figure shows overlap for channels 1–14 in the 2.4 GHz band. ### Channel Center Frequency Calculation To determine the channel center frequency for a given channel number in the 2.4, 5, or 6 GHz frequency band, use the `wlanChannelFrequency` function. For example, calculate the center frequency of channel 6 in the 2.4 GHz band. ```channel = 6; band = 2.4; fc = wlanChannelFrequency(channel,band)``` ```fc = 2.4370e+09 ``` Calculate the center frequency of channels 37, 42, and 91 in the 6 GHz band. ```channel = [37 42 91]; band = 6; fc = wlanChannelFrequency(channel,band)``` ```fc = 1×3 109 × 6.1350 6.1600 6.4050 ``` ## References [1] IEEE Std 802.11™-2020 (Revision of IEEE Std 802.11-2016). “Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications.” IEEE Standard for Information technology — Telecommunications and information exchange between systems. Local and metropolitan area networks — Specific requirements. [2] IEEE Std 802.11ax™-2021 (Amendment to IEEE Std 802.11-2020). “Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. Amendment 1: Enhancements for High Efficiency WLAN.” IEEE Standard for Information technology — Telecommunications and information exchange between systems. Local and metropolitan area networks — Specific requirements.
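For the uniformly spaced channels, the calculation above reduces to the rule $f_c = f_s + 5\,\mathrm{MHz} \times \mathrm{channel}$. The sketch below re-implements that arithmetic in R; it is a hedged stand-in, not the WLAN Toolbox `wlanChannelFrequency` function, and it ignores the special case of channel 14 in the 2.4 GHz band.

```r
# Minimal sketch of the spacing rule f_c = f_s + 5 MHz * channel.
# This mirrors the arithmetic behind wlanChannelFrequency; it is not the
# toolbox function itself. Channel 14 at 2.4 GHz (2.484 GHz) is not handled.
channel_frequency <- function(channel, band) {
  fs <- switch(as.character(band),
               "2.4" = 2.407e9,  # channel start frequency, channels 1-13
               "5"   = 5e9,
               "6"   = 5.950e9)
  fs + 5e6 * channel
}

channel_frequency(6, 2.4)            # 2.437e9, matching the example above
channel_frequency(c(37, 42, 91), 6)  # 6.135e9 6.160e9 6.405e9
```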
# Collect members from custom tree nodes

In my program, I have a tree of Group nodes, each identified by their ImportId.

Each group has zero or more members that I want to collect. Each group has zero or more child groups, referenced by their importId, whose members should be added to the collection. Each group has zero or more RuleParts. Each rulePart is a collection of zero or more groups, referenced by their importId, whose members should be added to or removed from the collection, depending on a boolean value Negated on each rulePart.

I got the following code working:

internal class RulePartTraverser
{
    // Signature reconstructed from the recursive call below and the calling code
    // in the example: it returns the collected member list for the given rule parts.
    public List<string> GetRulePartMembers(List<RulePart> ruleParts, IReadOnlyList<Group> allGroups)
    {
        if (ruleParts is null || !ruleParts.Any() || allGroups is null || !allGroups.Any())
        {
            return new List<string>();
        }

        var positiveRuleParts = collectRulePartMembers(ruleParts.Where(rp => !rp.Negated), allGroups);
        var negativeRuleParts = collectRulePartMembers(ruleParts.Where(rp => rp.Negated), allGroups);

        return positiveRuleParts.Except(negativeRuleParts).ToList();
    }

    private IEnumerable<string> collectRulePartMembers(IEnumerable<RulePart> ruleParts, IReadOnlyList<Group> allGroups)
    {
        var members = ruleParts.Aggregate(
            seed: new List<string>().AsEnumerable(),
            (current, rp) => current.Union(collectIndirectMembers(rp.RulePartMemberImportIds, allGroups)));
        return members;
    }

    private IEnumerable<string> collectIndirectMembers(List<string> indirectGroupImportIds, IReadOnlyList<Group> allGroups)
    {
        var returnList = indirectGroupImportIds.Aggregate(
            seed: new List<string>().AsEnumerable(),
            (current, importId) =>
            {
                var correspondingGroup = allGroups.Single(ag => ag.ImportId == importId);
                return current.Union(GetRulePartMembers(correspondingGroup.RuleParts, allGroups))
                    .Union(collectIndirectMembers(correspondingGroup.Children, allGroups))
                    .Union(correspondingGroup.Members);
            });
        return returnList;
    }
}

public class RulePart
{
    public bool Negated { get; set; }
    public List<string> RulePartMemberImportIds { get; set; }
}

public class Group
{
    public string ImportId { get; set; }
    public string Name { get; set; }
    public List<string> Children { get; set; }
    public List<string> Members { get; set; }
    public List<RulePart> RuleParts { get; set; }
}

The above method accepts a list of ruleParts for a given group and a list of all groups. Some sanity checks are performed, and if they fail, an empty list is returned. It then collects all members of all ruleParts that are not negated, then collects all members of all ruleParts that are negated, and subtracts the latter from the former. The resulting collection is returned.

In order to collect all members of a single rulePart, all rulePartMembers are inspected and all the members of the corresponding groups are collected. To collect all members of a given single group, I recursively collect all members of the corresponding ruleParts; then I recursively add the members of all child groups, potentially collecting the members of this child group's ruleParts and this child group's children. Finally I add all direct members of the group and step up.

Due to the nature of my data structure, I fear that this is the best I can do. However, is it possible (or sensible) to replace the recursion with an iteration? Are there other things I could or should improve?
The following example is not too far away from a real case scenario, I hope it helps in explaining the data structure: var rootGroup = new Group { ImportId = "Root", Members = new List<string>(), Children = new List<string>(), RuleParts = new List<RulePart> { new RulePart{Negated = false, RulePartMemberImportIds = new List<string>{ "group01", "group02" } }, //add all members of groups group01 and group02 to the collection new RulePart{Negated = true, RulePartMemberImportIds = new List<string>{"group03", "group04"}} //but remove all members of groups group03 and group04 } }; var group01 = new Group { ImportId = "group01", Members = new List<string>(), Children = new List<string>(), RuleParts = new List<RulePart> { new RulePart{Negated = false, RulePartMemberImportIds = new List<string>{"group05"}}, //add all members of group05 new RulePart{Negated = true, RulePartMemberImportIds = new List<string>{"group06"}} //remove all members of group06 } }; var group02 = new Group { ImportId = "group02", Members = new List<string> { "member01", "member04", "member05" }, Children = new List<string>(), RuleParts = new List<RulePart>() }; var group03 = new Group { ImportId = "group03", Members = new List<string> { "member06", "member07", "member09" }, Children = new List<string>(), RuleParts = new List<RulePart>() }; var group04 = new Group { ImportId = "group04", Members = new List<string> { "member09", "member10", "member11" }, Children = new List<string>(), RuleParts = new List<RulePart>() }; var group05 = new Group { ImportId = "group05", Members = new List<string>(), Children = new List<string> { "group07", "group08" }, //add all members of groups group07 and group08 RuleParts = new List<RulePart>() }; var group06 = new Group { ImportId = "group06", Members = new List<string> { "member02" }, Children = new List<string>(), RuleParts = new List<RulePart>() }; var group07 = new Group { ImportId = "group07", Members = new List<string> { "member12" }, Children = new List<string>(), RuleParts = new List<RulePart>() }; var group08 = new Group { ImportId = "group08", Members = new List<string>(), Children = new List<string> { "group04" }, //add all members of group04 RuleParts = new List<RulePart>() }; var expectedMembers = new List<string> { "member01", "member04", "member05", "member12" }; The rule parts of the RootGroup would serve as input to GetRulePartMembers. In order to collect the members of the different rule parts, first the members of groups group01 - group04 have to be determined. For group01, this means determining the members of the ruleparts of group01, ie determining the members of groups group05 and group06, as these are ruleparts of group01. As group05 has child groups, the members of group05 is the union of the members of groups group07 and group08, group08 has a single child group group04, so the members of group08 are the members of group04, yielding member09, member10, member11 for group08, resulting in member09, member10, member11, member12 for group05. The single member of group06 is member02, removing it from the result collection of group05 results in the collection member09, member10, member11, member12 for group01. For group02, the members are directly member01, member04, member05, which results in the member collection member01, member04, member05, member09, member10, member11, member12 for the non-negated rulepart of the RootGroup. 
For the negated rulepart of the RootGroup, the members of group03 and group04 can directly be added to the result collection, as neither of the groups has children or ruleparts. This results in member06, member07, member09, member10, member11 for the negated rulepart. These members are then removed from the collection of members for the non-negated rulepart, yielding the final result of member01, member04, member05, member12 for the RootGroup.

Just a couple of tips...

if (ruleParts is null || !ruleParts.Any() || allGroups is null || !allGroups.Any())

These sanity checks would be a big surprise for me because null is virtually never a valid value, so this method should throw if any parameter is null. Having checked for nulls, it's not necessary to also check them with Any. The queries wouldn't return anything anyway in this case, so just let them do the job.

var positiveRuleParts = collectRulePartMembers(ruleParts.Where(rp => !rp.Negated), allGroups);
var negativeRuleParts = collectRulePartMembers(ruleParts.Where(rp => rp.Negated), allGroups);

You don't have to iterate ruleParts twice. Instead, use ToLookup:

var rulePartsLookup = ruleParts.ToLookup(x => x.Negated);

and get the values with

collectRulePartMembers(rulePartsLookup[false], allGroups);
collectRulePartMembers(rulePartsLookup[true], allGroups);

new List<string>().AsEnumerable()

It's more natural to use Enumerable.Empty<string>() than to create an empty list and turn it into an enumerable anyway.

I cannot comment on the recursion because I am not able to visualize it without an example.

• Thanks, I have incorporated your tips. What is the practice on CodeReview regarding code changes in the original question? I have also added an example of some groups and ruleparts and the expected outcome of the recursion, along with an explanation of how to determine the result. – Thaoden Oct 17 '18 at 10:58
• @Thaoden You did the right thing by not changing the original code ;-) it's not allowed after reviews have been posted. Adding an additional example is a great help too. If you want to share your new code you can post a self-answer or a follow-up if you'd like to have more feedback. There is, however, a catch when posting a self-answer - this needs to be described too - code-only answers are off-topic. – t3chb0t Oct 17 '18 at 11:01

If a Member can be owned by multiple groups that can correspond to RuleParts that can be either Negated or not, and the Negated ownership takes precedence over !Negated, I understand your code:

public IEnumerable<string> GetRulePartMembers2(IEnumerable<RulePart> ruleParts, IEnumerable<Group> allGroups)
{
    IEnumerable<string> ExtractMembers(IEnumerable<RulePart> partialRuleParts, bool negated)
    {
        return partialRuleParts
            .Where(rp => rp.Negated == negated)
            .SelectMany(rp => rp.RulePartMemberImportIds)
            .SelectMany(importId =>
            {
                Group group = allGroups.Single(g => g.ImportId == importId);
                var children = allGroups.Where(g => group.Children.Contains(g.ImportId));
                return group
                    .Members
                    .Union(ExtractMembers(group.RuleParts, negated)
                    .Union(ExtractMembers(children.SelectMany(sg => sg.RuleParts), negated)))
                    .Union(children.SelectMany(sg => sg.Members));
            });
    }

    return ExtractMembers(ruleParts, false).Except(ExtractMembers(ruleParts, true));
}

If I test against your solution in the following way, I get the same result.
I assume that the initial list of RuleParts is the gross list of all:

var groups = new[]
{
    rootGroup, group01, group02, group03, group04, group05, group06, group07, group08,
};

var ruleParts = rootGroup.RuleParts; //groups.SelectMany(g => g.RuleParts).ToList();

var expectedMembers = new List<string> { "member01", "member04", "member05", "member12" };

RulePartTraverser traverser = new RulePartTraverser();
Console.WriteLine(string.Join(", ", traverser.GetRulePartMembers(ruleParts, groups).OrderBy(m => m)));
Console.WriteLine(string.Join(", ", traverser.GetRulePartMembers2(ruleParts, groups).OrderBy(m => m)));

I'm not convinced that my solution is more readable and clear than yours - any more.

Update: In order to only iterate the rule parts once for both Negated and !Negated, the following could be a solution:

public IEnumerable<string> GetRulePartMembers2(Group rootGroup, IEnumerable<Group> allGroups)
{
    List<string> positiveMembers = new List<string>();
    List<string> negativeMembers = new List<string>();

    void AddMembers(RulePart rulePart, Group group)
    {
        // Reconstructed branch bodies: collect the group's members into the
        // list matching the rule part's sign.
        if (rulePart.Negated)
            negativeMembers.AddRange(group.Members);
        else
            positiveMembers.AddRange(group.Members);
    }

    void ExtractMembers(IEnumerable<RulePart> partialRuleParts)
    {
        foreach (RulePart rulePart in partialRuleParts)
        {
            foreach (Group group in allGroups.Where(g => rulePart.RulePartMemberImportIds.Contains(g.ImportId)))
            {
                AddMembers(rulePart, group);
                ExtractMembers(group.RuleParts);
                foreach (Group childGroup in allGroups.Where(cg => group.Children.Contains(cg.ImportId)))
                {
                    // This is the only thing that is not obvious:
                    // The members of a child group are added according to the rulePart of its parent?
                    AddMembers(rulePart, childGroup);
                    ExtractMembers(childGroup.RuleParts);
                }
            }
        }
    }

    ExtractMembers(rootGroup.RuleParts);

    return positiveMembers.Except(negativeMembers);
}

• Exactly, a member can be part of multiple groups, that can in turn each be part of multiple ruleparts, that in turn can be negated or not. Additionally, each group can be part of another group's children. I'm not sure I understand what is happening in your ExtractMembers method though, how would I incorporate the child groups there? – Thaoden Oct 17 '18 at 11:01
• @Thaoden: see my update... – Henrik Hansen Oct 17 '18 at 11:57
• Not exactly, the initial list of ruleParts to start is only the ruleParts of the RootGroup - at least that was my initial intention. So basically you go through the list of all ruleparts twice, collecting all non-negated and all negated members and removing the negated from the non-negated - I'm not sure this would yield the same result for all cases, I would have to think of a counter-example though... – Thaoden Oct 17 '18 at 12:10
• @Thaoden: OK, it seems to work with just rootGroup.RuleParts as initial list of RuleParts, and you can optimize it, as my edit shows. But don't waste time on my suggestion, I was just trying to understand your problem and it doesn't seem to make anything clearer :-) – Henrik Hansen Oct 17 '18 at 12:27
• The members of a child group are added unconditionally. Thanks for your input, I especially like the fact that these are local functions, so I don't have to pass all the references around. – Thaoden Oct 18 '18 at 6:55
Volume 10 (2014) Article 10 pp. 237-256

Lower Bounds for the Average and Smoothed Number of Pareto-Optima

Revised: May 13, 2014
Published: September 29, 2014

Keywords: multiobjective optimization, probabilistic analysis, smoothed analysis
ACM Classification: F.2.2, G.1.6, G.3
AMS Classification: 68Q25

Abstract:

Smoothed analysis of multiobjective $0$--$1$ linear optimization has drawn considerable attention recently. The goal is to give bounds for the number of Pareto-optimal solutions (i.e., solutions with the property that no other solution is at least as good in all the coordinates and better in at least one) for multiobjective optimization problems. In this article we prove several lower bounds for the expected number of Pareto optima. Our basic result is a lower bound of $\Omega_d(n^{d-1})$ for optimization problems with $d$ objectives and $n$ variables under fairly general conditions on the distributions of the linear objectives. Our proof relates the problem of finding lower bounds on the number of Pareto optima to results in discrete geometry and geometric probability about arrangements of hyperplanes. We use our basic result to derive the following results:

(1) To our knowledge, the first lower bound for natural multiobjective optimization problems. We illustrate this on the maximum spanning tree problem with randomly chosen edge weights. Our technique is sufficiently flexible to yield such lower bounds also for other standard objective functions studied in this setting (such as multiobjective shortest path, TSP, matching).

(2) A smoothed lower bound of $\min \{ \Omega_d( n^{d-1.5} \phi^d), 2^{\Theta_d(n)} \}$ for $\phi$-smooth instances of the $0$--$1$ knapsack problem with $d$ profits.

Preliminary versions of parts of this paper appeared in the Proceedings of the 8th Annual Conference on Theory and Applications of Models of Computation (TAMC'11) and the Proceedings of the 32nd Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS'12).
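For readers who prefer symbols, the notion of a Pareto-optimal solution described in the abstract can be stated formally as follows; this is the standard textbook definition for a maximization problem with $d$ objectives, not a quotation from the paper itself.

$x^* \in S$ is Pareto-optimal $\iff \nexists\, x \in S$ such that $f_j(x) \ge f_j(x^*)$ for all $j \in \{1,\dots,d\}$ and $f_j(x) > f_j(x^*)$ for at least one $j$.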
# The sum of 6 minutes, 35 seconds and 3 minutes, 20 seconds is approximately what percent of one hour?

Math Expert
Joined: 02 Sep 2009

11 Jan 2018, 20:29

The sum of 6 minutes, 35 seconds and 3 minutes, 20 seconds is approximately what percent of one hour?

A. 27%
B. 17%
C. 11%
D. 10%
E. 6%

Intern
Joined: 25 Jan 2016
Location: India
WE: Engineering (Consulting)

11 Jan 2018, 20:39

6 min 35 sec = 60*6 + 35 = 395 s
3 min 20 sec = 60*3 + 20 = 200 s

So in total, 395 + 200 = 595 sec is what % of 3600 sec (1 hour = 60*60 s)?

(595/3600)*100 = 16.52%, or 17% approx.

So B.

Intern
Joined: 25 Jan 2016
Location: India
WE: Engineering (Consulting)

11 Jan 2018, 20:42

Alternate method:

6 min 35 s + 3 min 20 s = 9 min 55 sec, approximately 10 min.

10/60*100 = 16.66%, so B again.

Senior Manager
Joined: 07 Jul 2012
Location: India

11 Jan 2018, 22:54

Convert minutes into seconds:

6*60 + 35 + 3*60 + 20 = 360 + 35 + 180 + 20 = 595

Divide by 3600 to convert seconds into hours:

$$\frac{595}{3600}$$ ~ $$\frac{600}{3600}$$ ~ $$\frac{1}{6}$$ ~ 17%

Senior SC Moderator
Joined: 22 May 2016

20 Jan 2018, 15:37

Bunuel wrote:
The sum of 6 minutes, 35 seconds and 3 minutes, 20 seconds is approximately what percent of one hour?

A. 27%
B. 17%
C. 11%
D. 10%
E. 6%

Add minutes: (6 + 3) = 9 minutes
Add seconds: (35 + 20) = 55 seconds $$\approx 1$$ minute

(9 + 1) = 10 minutes

1 hour = 60 minutes

$$\frac{10}{60}=\frac{1}{6}\approx 0.167$$, and $$0.167 \times 100 = 16.7$$%

Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Location: India

21 Jan 2018, 08:25

Bunuel wrote:
The sum of 6 minutes, 35 seconds and 3 minutes, 20 seconds is approximately what percent of one hour?

A. 27%
B. 17%
C. 11%
D. 10%
E. 6%

$$\frac{(6*60 + 35) + (3*60 + 20)}{60*60}*100$$ = $$\frac{395 + 200}{3600}*100$$ = 16.53% ~ 17%

The answer will be (B).
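The arithmetic in the posts above is easy to sanity-check mechanically; the R one-liner below (R is used here purely as a calculator) confirms the 17% answer.

# (6*60 + 35 + 3*60 + 20) seconds as a percent of one hour (3600 seconds)
(6 * 60 + 35 + 3 * 60 + 20) / 3600 * 100   # 16.52778, i.e. ~17%, choice B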
JohnsonGraph - Maple Help

GraphTheory[SpecialGraphs]

JohnsonGraph
construct Johnson graph

Calling Sequence
JohnsonGraph(n, k)

Parameters
n - positive integer
k - nonnegative integer

Description
• The JohnsonGraph(n, k) command constructs the (n,k) Johnson graph. This is an undirected graph in which the vertices correspond to k-element subsets of {1,…,n}, and an edge exists between two vertices when the intersection of their associated subsets has cardinality k-1.

Examples
> with(GraphTheory):
> with(SpecialGraphs):
> J52 := JohnsonGraph(5, 2)

J52 := Graph 1: an undirected unweighted graph with 10 vertices and 30 edge(s)   (1)

> ChromaticNumber(J52)

5   (2)

> DrawGraph(J52)

References
"Johnson graph", Wikipedia. http://en.wikipedia.org/wiki/Johnson_graph

Compatibility
• The GraphTheory[SpecialGraphs][JohnsonGraph] command was introduced in Maple 2019.
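The same graph can be built directly from the definition in any language with a combinations routine. The hedged R sketch below (base R only, not Maple) constructs J(5,2) as an adjacency test over 2-element subsets and confirms the vertex and edge counts Maple reports.

# Build J(5,2) from the definition: vertices are 2-subsets of {1,...,5},
# two vertices are adjacent when their subsets share exactly k-1 = 1 element.
n <- 5; k <- 2
subsets <- combn(n, k, simplify = FALSE)   # the C(5,2) = 10 vertices
pairs <- combn(length(subsets), 2)         # all candidate vertex pairs
adjacent <- apply(pairs, 2, function(p)
  length(intersect(subsets[[p[1]]], subsets[[p[2]]])) == k - 1)

length(subsets)  # 10 vertices
sum(adjacent)    # 30 edges, matching the Maple output above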
# zbMATH — the first resource for mathematics

Positive solutions of nonlinear singular third-order two-point boundary value problem. (English) Zbl 1107.34019

Summary: We are concerned with the existence of single and multiple positive solutions to the nonlinear singular third-order two-point boundary value problem $$u'''(t)+ \lambda a(t)f\bigl(u(t)\bigr)=0,\quad 0<t<1,\quad u(0)=u'(0)=u''(1)=0,$$ where $\lambda$ is a positive parameter. Under various assumptions on $a$ and $f$, we establish intervals of the parameter $\lambda$ which yield the existence of at least one, at least two, and infinitely many positive solutions of the boundary value problem by using Krasnoselskii's fixed-point theorem of cone expansion-compression type.

##### MSC:
34B18 Positive solutions of nonlinear boundary value problems for ODE
34B15 Nonlinear boundary value problems for ODE
34B24 Sturm-Liouville theory

##### References:
[1] Agarwal, R. P.; Bohner, M.; Wong, P. J. Y.: Positive solutions and eigenvalues of conjugate boundary value problems. Proc. Edinburgh math. Soc. 42, 349-374 (1999) · Zbl 0934.34008
[2] Agarwal, R. P.; O'regan, D.; Wong, P. J. Y.: Positive solutions of differential, difference, and integral equations. (1999)
[3] Anderson, D.: Multiple positive solutions for a three-point boundary value problem. Math. comput. Modelling 27, No. 6, 49-57 (1998) · Zbl 0906.34014
[4] Anderson, D.; Avery, R. I.: Multiple positive solutions to a third-order discrete focal boundary value problem. Comput. math. Appl. 42, 333-340 (2001) · Zbl 1001.39022
[5] Cabada, A.: The method of lower and upper solutions for second, third, fourth, and higher order boundary value problems. J. math. Anal. appl. 185, 302-320 (1994) · Zbl 0807.34023
[6] Cabada, A.: The method of lower and upper solutions for third-order periodic boundary value problems. J. math. Anal. appl. 195, 568-589 (1995) · Zbl 0846.34019
[7] Cabada, A.; Grossinho, M.
R.; Minhos, F.: On the solvability of some discontinuous third order nonlinear differential equations with two point boundary conditions. J. math. Anal. appl. 285, 174-190 (2003)
[8] Cabada, A.; Heikkilä, S.: Extremality and comparison results for discontinuous third order functional initial-boundary value problems. J. math. Anal. appl. 255, 195-212 (2001) · Zbl 0976.34009
[9] Cabada, A.; Heikkilä, S.: Uniqueness, comparison and existence results for third order initial-boundary value problems. Comput. math. Appl. 41, 607-618 (2001) · Zbl 0991.34015
[10] Cabada, A.; Lois, S.: Existence of solution for discontinuous third order boundary value problems. J. comput. Appl. math. 110, 105-114 (1999) · Zbl 0936.34015
[11] Davis, J. M.; Henderson, J.: Triple positive symmetric solutions for a lidstone boundary value problem. Differential equations dynam. Systems 7, 321-330 (1999) · Zbl 0981.34014
[12] Erbe, L. H.; Wang, H.: On the existence of positive solutions of ordinary differential equations. Proc. amer. Math. soc. 120, 743-748 (1994) · Zbl 0802.34018
[13] Gregus, M.: Third order linear differential equations. Math. appl. (1987)
[14] Gregus, M.: Two sorts of boundary-value problems of nonlinear third order differential equations. Arch. math. 30, 285-292 (1994)
[15] Grossinho, M. R.; Minhös, F.: Existence result for some third order separated boundary value problems. Nonlinear anal. 47, 2407-2418 (2001) · Zbl 1042.34519
[16] Guo, D.; Lakshmikantham, V.: Nonlinear problems in abstract cones. (1988) · Zbl 0661.47045
[17] Krasnoselskii, M. A.: Positive solutions of operator equations. (1964)
[18] Leggett, R. W.; Williams, L. R.: Multiple positive fixed points of nonlinear operators on ordered Banach spaces. Indiana univ. Math. J. 28, 673-688 (1979) · Zbl 0421.47033
[19] Omari, P.; Trombetta, M.: Remarks on the lower and upper solutions method for second- and third-order periodic boundary value problems. Appl. math. Comput. 50, 1-21 (1992) · Zbl 0760.65078
[20] Rachunkova, I.: On some three-point problems for third-order differential equations. Math. bohem. 117, 98-110 (1992)
[21] Rusnak, J.: Constructions of lower and upper solutions for a nonlinear boundary value problem of the third order and their applications. Math. slovaca 40, 101-110 (1990) · Zbl 0731.34016
[22] Rusnak, J.: Existence theorems for a certain nonlinear boundary value problem of the third order. Math. slovaca 37, 351-356 (1987) · Zbl 0631.34022
[23] Senkyrik, M.: Method of lower and upper solutions for a third-order three-point regular boundary value problem. Acta univ. Palack. olomuc. Fac. rerum natur. Math. 31, 60-70 (1992)
[24] Senkyrik, M.: Existence of multiple solutions for a third-order three-point regular boundary value problem. Math. bohem. 119, 113-321 (1994)
[25] Yao, Q.: The existence and multiplicity of positive solutions for a third-order three-point boundary value problem. Acta math. Appl. sinica 19, No. 1, 117-122 (2003) · Zbl 1048.34031
[26] Yosida, K.: Functional analysis. (1978) · Zbl 0365.46001
[27] Zhao, W.: Existence and uniqueness of solutions for third order nonlinear boundary value problems. Tohoku math. J. 44, No. 2, 545-555 (1992) · Zbl 0774.34019
What width value corresponds to the Tufte fullwidth environment? In Tufte-LaTeX, I'd like to manually create a full-width environment using minipage rather than the package's fullwidth environment. What width value should I supply to minipage to match that of the Tufte fullwidth environment's width? \documentclass{tufte-handout} \usepackage{lipsum} \begin{document} \lipsum[1] \begin{minipage}{<width>} \lipsum[2] \end{minipage} % The value for <width> above should make that paragraph identical this one %\begin{fullwidth} % \lipsum[2] %\end{fullwidth} \lipsum*[3] \end{document} FWIW, I plan to use this primarily to create a full-width title block, if that matters for details of indentation and positoning. - The length you are looking for is \@tufte@fullwidth. (Note: I'm assuming that you don't want to indent your minipage, therefore I added \noindent before it.) \documentclass{tufte-handout} \makeatletter \newlength{\fullwidthlength} \AtBeginDocument{\setlength{\fullwidthlength}{\@tufte@fullwidth}} \makeatother \usepackage{lipsum} \begin{document} \lipsum[1] \noindent \begin{minipage}{\fullwidthlength} \lipsum[2] \end{minipage} % The value for <width> above should make that paragraph identical this one \begin{fullwidth} \lipsum[2] \end{fullwidth} \lipsum*[3] \end{document} - Excellent! What's the reason for the \AtBeginDocument (I notice from experimenting that it is essential). Also, apparently I can't have an @ in a length? – raxacoricofallapatorius Mar 16 '12 at 20:43 @lockstep has it exactly right. The \@tufte@fullwidth length isn't set until the document starts so that it can account for any customizations you make to the margins or paper size. You can use \@tufte@fullwidth directly in your document, but you'd need to surround it by \makeatletter and \makeatother. – godbyk Mar 16 '12 at 21:07 @raxacoricofallapatorius: To fix the spacing, try adding a \strut to the beginning and end of your minipage contents. (Since \lipsum ends with \par, it'll insert a paragraph break and you'll get a blank line in the example, but this shouldn't be an issue in regular text.) – godbyk Mar 16 '12 at 21:13 @raxacoricofallapatorius: If you're referring to the spacing after the regular fullwidth environment, then you can add \unskip\par immediately following \end{fullwidth} to bring it back in line. See this example document. – godbyk Mar 16 '12 at 21:49 @raxacoricofallapatorius: Ah, I understand. I'll look into getting the fullwidth environment to have the proper spacing. It was originally designed solely to allow us to have figure* and table* environments. – godbyk Mar 17 '12 at 20:51
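For the asker's stated goal, a full-width title block, a minimal LaTeX sketch building on the accepted answer might look like the following. The title and author text are placeholders, \fullwidthlength is the macro defined in the answer above, and the \strut and \unskip\par tricks follow godbyk's comments about spacing.

\noindent
\begin{minipage}{\fullwidthlength}
  \strut % keep vertical spacing consistent (see godbyk's comment)
  {\LARGE\bfseries A Full-Width Title\par}
  \medskip
  {\large Author Name \hfill \today\par}
  \strut
\end{minipage}
\unskip\par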
# Computation of $\pi$ Using Arithmetic-Geometric Mean

Eugene Salamin
Mathematics of Computation
Vol. 30, No. 135 (Jul., 1976), pp. 565-570
DOI: 10.2307/2005327
Stable URL: http://www.jstor.org/stable/2005327
Page Count: 6

## Abstract

A new formula for $\pi$ is derived. It is a direct consequence of Gauss' arithmetic-geometric mean, the traditional method for calculating elliptic integrals, and of Legendre's relation for elliptic integrals. The error analysis shows that its rapid convergence doubles the number of significant digits after each step. The new formula is proposed for use in a numerical computation of $\pi$, but no actual computational results are reported here.
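The iteration implied by Salamin's formula is usually presented today as the Gauss-Legendre (Brent-Salamin) algorithm. The R sketch below is a standard transcription of that iteration, not code from the paper, and it exhibits the digit-doubling convergence the abstract describes, up to the limits of double precision.

# Gauss-Legendre / Brent-Salamin iteration (standard textbook form)
a <- 1
b <- 1 / sqrt(2)
tt <- 1 / 4
p <- 1
for (i in 1:4) {
  a_next <- (a + b) / 2
  b <- sqrt(a * b)
  tt <- tt - p * (a - a_next)^2
  p <- 2 * p
  a <- a_next
  print((a + b)^2 / (4 * tt))  # successive approximations to pi
}
# the number of correct digits roughly doubles each step;
# by the third iteration the estimate agrees with pi to machine precision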
## Geometry: Common Core (15th Edition)

To determine whether two lines are perpendicular, parallel, or neither, we need to look at their slopes. Perpendicular lines have slopes that are negative reciprocals of one another. Parallel lines have the same slope.

The lines given are in standard form, but we want them in slope-intercept form so we can locate the slope easily. The slope-intercept form is given by the formula $y = mx + b$, where $m$ is the slope of the line and $b$ is the y-intercept.

For the equation $2x - 3y = 1$, we first subtract $2x$ from each side of the equation to isolate the $y$ term on the left side:

$-3y = -2x + 1$

Dividing each side by $-3$ gives:

$y = \frac{2}{3}x - \frac{1}{3}$

We can see that the slope of this line is $\frac{2}{3}$.

Consider the second equation:

$-2y = -3x + 8$

Divide each side by $-2$ to isolate $y$:

$y = \frac{3}{2}x - 4$

We can see that the slope of this line is $\frac{3}{2}$.

The slopes are neither the same nor negative reciprocals of one another; therefore, these lines are neither parallel nor perpendicular.
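The slope comparison can be mechanized: for a line in standard form $Ax + By = C$ the slope is $-A/B$ (provided $B \neq 0$). The small R check below uses a hypothetical helper name, slope, to classify the pair of lines above.

# slope of Ax + By = C is -A/B (assumes B != 0)
slope <- function(A, B) -A / B

m1 <- slope(2, -3)   # 2x - 3y = 1          -> slope 2/3
m2 <- slope(-3, -2)  # -2y = -3x + 8, i.e. 3x - 2y = 8 -> slope 3/2

if (isTRUE(all.equal(m1, m2))) {
  "parallel"
} else if (isTRUE(all.equal(m1 * m2, -1))) {
  "perpendicular"
} else {
  "neither"
}
# here m1 * m2 = 1, not -1, so the lines are "neither"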
# Joint distribution of absolute difference and sum of two independent exponential distributions

If $$X\sim \rm{Exp}(1)$$ and $$Y\sim \rm{Exp}(1)$$ are two independent random variables, what is the joint distribution of $$U = |X - Y|$$ and $$V = X + Y$$?

I used the Jacobian transformation to obtain the joint distribution of $$U$$ and $$V$$, but I am quite sure that it is not right. Since the function $$g(x,y) = (|x - y|, x + y)$$ is not a bijection, I split the domain and defined the following functions:

$$g^{(1)}(x,y) = (x - y, x + y) \\ g^{(2)}(x,y) = (-x + y, x + y)$$

which are now bijective functions. The inverse functions $$h^{(\ell)}$$ of $$g^{(\ell)}$$ are

$$h^{(1)}(u,v) = \left(\frac{v + u}{2}, \frac{v-u}{2} \right) \\ h^{(2)}(u,v) = \left(\frac{v - u}{2}, \frac{v + u}{2} \right)$$

The Jacobians, $$J_{(1)}(u,v)$$ and $$J_{(2)}(u,v)$$, are

$$J_{(1)}(u,v) = \dfrac{1}{2} \quad \mbox{and} \quad J_{(2)}(u,v) = -\dfrac{1}{2}$$

By the independence between $$X$$ and $$Y$$ we have that

$$f_{X,Y}(x,y) = f_{X}(x)\,f_{Y}(y) = e^{-(x+y)}, \quad x,y>0.$$

Therefore, I found that the joint distribution of U and V is

$$\begin{eqnarray} f_{U,V}(u,v) &=& f_{X,Y}\circ h^{(1)}(u,v)\,| J_{(1)}(u,v)| + f_{X,Y} \circ h^{(2)}(u,v)\, |J_{(2)}(u,v)| \\ &=& \exp\left\{-\left(\frac{v+u}{2} + \frac{v-u}{2}\right)\right\}\,\frac{1}{2} + \exp\left\{-\left(\frac{v-u}{2} + \frac{v+u}{2}\right)\right\}\,\frac{1}{2} \\ &=& \dfrac{e^{-v}}{2} + \dfrac{e^{-v}}{2} = e^{-v}. \end{eqnarray}$$

My doubts are: (i) The joint distribution of $$U$$ and $$V$$ seems to depend only on the random variable $$V$$, which makes me think that it is not right. (ii) How can I define the domain of $$f_{U,V}(u,v)$$ and obtain $$F_{U,V}(u,v)$$? (iii) How can I define the right bijective functions to use in the Jacobian transformation?

• Thanks for the advice. I have rewritten the question including my thoughts. – andre Apr 20 at 14:01
• Yes, or more simply, $f_{X,Y}(x,y)=e^{-(x+y)}\mathbf 1_{0\leq x,0\leq y}$ when $v=x+y$ means $f_{X,Y}\circ h^{(\ell)}(u,v)=e^{-v}\mathbf 1_{?}$, but what is the support? – Graham Kemp Apr 20 at 14:11
• The support is one of my doubts. – andre Apr 20 at 14:21

Since $$U=\max(X,Y)-\min(X,Y)$$ and $$V=\max(X,Y)+\min(X,Y)$$, you can work with the joint pdf of $$(\min(X,Y),\max(X,Y))$$, given by

$$\begin{align} f_{\min,\max}(x,y)&=2f_X(x)f_Y(y)\mathbf 1_{x<y} \\ &=2e^{-(x+y)}\mathbf 1_{0<x<y} \end{align}$$

Now you are transforming $$(X,Y)\to (U,V)$$ (with $$X$$ and $$Y$$ here standing for the min and the max) such that $$U=Y-X$$ and $$V=Y+X$$. This is a simple one-to-one map with Jacobian $$-1/2$$. It is immediate that $$0<u<v$$.

So the pdf of $$(U,V)$$ would be

$$f_{U,V}(u,v)=e^{-v}\mathbf 1_{0<u<v}$$

The joint density is not just depending on $$v$$; it depends on $$u$$ through the indicator $$\mathbf 1_{0<u<v}$$.

• Thanks for your answer. It is an interesting approach. The support of the pdf of $(U, V)$ is $0<u<v$ and $v > 0$. Is it right? – andre Apr 20 at 18:46
• Yes, it is just $0<u<v$. – StubbornAtom Apr 20 at 19:25
• @andre Even in your solution, keeping in mind that $v,u>0$, is there any problem in saying that $x,y>0\implies \frac{v+u}{2}>0\,,\,\frac{v-u}{2}>0\implies v>-u\,,\,v>u\implies v>u$? My solution is essentially the same as yours. – StubbornAtom Apr 20 at 20:24
• Ok thanks. One more question that I am not sure about: how can I integrate the pdf to obtain the joint cdf? – andre Apr 21 at 20:11
• You have not shown any work on the cdf. So I don't know how to help. – StubbornAtom Apr 21 at 20:24
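One way to gain confidence in $f_{U,V}(u,v)=e^{-v}\mathbf 1_{0<u<v}$ is to check its implied marginals against simulation: integrating out $v$ over $(u,\infty)$ gives $U\sim\text{Exp}(1)$, and integrating out $u$ over $(0,v)$ gives $V\sim\text{Gamma}(2,1)$. The R sketch below is illustrative only.

set.seed(42)
x <- rexp(1e5)
y <- rexp(1e5)
u <- abs(x - y)
v <- x + y

mean(u < v)                 # 1: the support restriction 0 < u < v always holds
ks.test(u, "pexp", 1)       # consistent with U ~ Exp(1)
ks.test(v, "pgamma", 2, 1)  # consistent with V ~ Gamma(shape = 2, rate = 1)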
My Math Forum: How to determine the nth term in this series?

Member
Joined: Jun 2017
From: Lima, Peru
Math Focus: Calculus

September 30th, 2017, 03:44 PM

How to determine the nth term in this series?

I have this series, but it does not seem to follow a constant difference, or at least it is not implicit:

$\displaystyle -1, 0, 5, 14, 27, ...$

How can I determine the nth term, let's say the 16th term, and the sum up to that term? Is there an easy way without resorting to induction?

Global Moderator
Joined: Dec 2006

September 30th, 2017, 10:24 PM

The nth term is 2n² - 5n + 2. (This is the simplest formula I could spot.) Hence the sum of the first n terms is (4n³ - 9n² - n)/6.

The first 16 terms are -1, 0, 5, 14, 27, 44, 65, 90, 119, 152, 189, 230, 275, 324, 377, 434.

Successive differences are 1, 5, 9, 13, 17, 21, 25, 29, . . .
Successive second differences are 4, 4, 4, 4, 4, 4, 4, . . .

Member
Joined: Jun 2017
From: Lima, Peru

September 30th, 2017, 11:33 PM

Quote: Originally Posted by skipjack
The nth term is 2n² - 5n + 2. (This is the simplest formula I could spot.) Hence the sum of the first n terms is (4n³ - 9n² - n)/6.

I tried, I mean really hard, to guess which recursive formula could be used in this series. But is there any kind of algorithm that can be used by following steps on how to attack these situations? I mean, the equation you proposed works, but it looks like some kind of magician's trick, like taking a rabbit out of a hat (if you know what I mean).

The second part, which involves the sum of terms in the sequence, seems kind of logical, as there exist these useful formulas (which can be obtained from any book):

$\displaystyle \sum_{k=1}^{n}k^{3}=\left (\frac{n(n+1)}{2} \right )^{2}$

$\displaystyle \sum_{k=1}^{n}k^{2}=\frac{n(n+1)(2n+1)}{6}$

Global Moderator

October 1st, 2017, 12:14 AM

As the second differences are all 4, one can find a quadratic formula of the form 4n²/2 + bn + c for the nth term. It's now easy to determine the values of b and c.

Senior Member
Joined: Sep 2015
From: USA

October 1st, 2017, 12:20 AM

I can't speak for how skipjack came up with the recursion, but you can always try to solve for the coefficients of a polynomial. Suppose you have $N$ points; let

$p(n)=\displaystyle \sum_{k=0}^{N-1}~c_k n^k$

and you end up with a system of $N$ equations when you plug in values, e.g.
in this case

$-1 = p(1),~0=p(2),~5=p(3),~14=p(4),~27=p(5)$

Solving this, you find $c_0 = 2,~c_1=-5,~c_2=2,~c_3=0,~c_4=0$

Member
Joined: Jun 2017
From: Lima, Peru

October 3rd, 2017, 07:39 PM

Quote: Originally Posted by romsek
Solving this, you find $c_0 = 2,~c_1=-5,~c_2=2,~c_3=0,~c_4=0$

That's probably the key to solving this problem. However, I find it rather non-explicit in how to translate the proposed sum into the terms, or constants, shown. Can you show an example of how to use the sum to solve for, let's say, $c_1$? Sorry if I ask it this way, but it is not too obvious for the casual learner.

Math Team
Joined: Jan 2015
From: Alabama

October 4th, 2017, 05:21 AM

Of course, a "sequence" doesn't have to follow any simple rule at all. When I was a freshman in college, my math professor gave an example of the sequence "15, 16, 17, 18, 31, 32, 33, 53, 54". Those were the numbers of the subway stations on his way home after work. His particular train shunted off one "line" to another twice. The best you can do with a problem like this is look for the "simplest" formula. And, of course, what is "simplest" may depend upon the person setting the problem.

Global Moderator

October 4th, 2017, 01:20 PM

Using the formula 4n²/2 + bn + c, putting n = 1 leads to -1 = 2 + b + c, and putting n = 2 leads to 0 = 8 + 2b + c. Subtracting the first equation from the second gives 1 = 6 + b, so b = -5.

Math Team
Joined: Jul 2011
From: Texas

October 4th, 2017, 02:11 PM

$\{-1,0,5,14,27,...\}$

$\{-1 \cdot 1, \, 0\cdot 3, \, 1 \cdot 5, \, 2 \cdot 7, \, 3\cdot 9, \, ... , \, (n-2)(2n-1), \, ... \}$

Global Moderator

October 4th, 2017, 05:05 PM

I originally noticed that the nth term is $T_{2\text{n}-3}$ - 1 ≡ (2n - 3)(2n - 2)/2 - 1 ≡ 2n² - 5n + 2.
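romsek's system-of-equations approach is easy to carry out numerically. The R sketch below (my own illustration, not code from the thread) solves the Vandermonde system for the first five terms and recovers the coefficients of $2n^2 - 5n + 2$.

n <- 1:5
terms <- c(-1, 0, 5, 14, 27)
V <- outer(n, 0:4, `^`)   # Vandermonde matrix with columns n^0, n^1, ..., n^4
coefs <- solve(V, terms)
round(coefs, 10)          # 2 -5 2 0 0, i.e. p(n) = 2 - 5n + 2n^2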
## Cannot clone group of Custom Fields in CPT

Question

I have my CPT, and I am trying to clone a group of Custom Fields in my metabox named Credits. These fields should be cloned each time the user hits the + Add Group button:

1. Text field
2. Textarea field

I have followed this link for reference -> Clone Group of Custom Fields

Now, I believe the code is right, but somehow only the text field is showing up. I have tried my best to get this working but failed. I need your help on this. The Meta Box Group extension is a paid extension, and I do not want to make use of it.

Code:

function gallery_credit_section( $meta_boxes ) {
    $prefix = 'prefix-';

    $meta_boxes[] = array(
        'id'         => $prefix . 'gallery-credits',
        'title'      => esc_html__( 'Credits', 'metabox-online-generator' ),
        'post_types' => 'gallery',
        'priority'   => 'default',
        'autosave'   => 'false',
        'fields'     => array(
            array(
                'id'     => $prefix . 'gallery-advisor',
                'type'   => 'group',
                'clone'  => true,
                'fields' => array(
                    array(
                        'id'          => $prefix . 'gallery-advisor',
                        'type'        => 'text',
                        'name'        => esc_html__( 'Advisors', 'metabox-online-generator' ),
                        'placeholder' => esc_html( 'Author Name', 'metabox-online-generator' ),
                    ),
                    array(
                        'id'   => $prefix . 'gallery-about-advisor',
                        'type' => 'textarea',
                        'name' => esc_html__( 'About Advisor', 'metabox-online-generator' ),
                    ),
                ),
            ),
        ),
    );

    return $meta_boxes;
}

As you can see in the rendered metabox, no name appears for the field, and the textarea field is missing too. Please help. Thanks 🙂
# Thermal Conductivity Paper Database

## Recommended Papers for: Stefan Brendelberger

Total Papers Found: 1

#### Effective thermal conductivity of sintered metal foams: experiments and a model proposal

It is important that porous metal foams used in gas turbines do not exceed their upper temperature threshold. In this paper, the Transient Plane Source (TPS) technique was used to measure the thermal conductivity of metal foams up to 1000 °C. Results showed that the thermal ...

Author(s):
Find the value of x

04 Feb 2009, 02:04   VP (ritula; Joined: 18 May 2008)

Find the value of x

$$x= \sqrt{20+\sqrt{20+\sqrt{20}}}$$

1. 20
2. 5
3. 2
4. 8

29 Sep 2010, 07:45   Math Expert (Bunuel; Joined: 02 Sep 2009)

Quote (nonameee): "Bunuel, would you be so kind as to look at this question? Is there any other way to solve it rather than elimination? Can you describe elimination in greater detail? Thank you."

The question should be: what is the approximate value of $x$?

Obviously answer choice C (2) is out, as $\sqrt{20+some \ \#}>4$. Now, $4<\sqrt{20}<5$:

$x= \sqrt{20+\sqrt{20+\sqrt{20}}}= \sqrt{20+\sqrt{20+(\# \ less \ than \ 5)}}= \sqrt{20+\sqrt{\# \ less \ than \ 25}}= \sqrt{20+(\# \ less \ than \ 5)}=\sqrt{\# \ less \ than \ 25}=\# \ less \ than \ 5\approx{5}$.

Next, for exactly 5 to be the correct answer, the question should be: if the expression $x=\sqrt{20+{\sqrt{20+\sqrt{20+\sqrt{20+...}}}}}$ extends to an infinite number of roots and converges to a positive number x, what is x?

$x=\sqrt{20+{\sqrt{20+\sqrt{20+\sqrt{20+...}}}}}$ --> $x=\sqrt{20+({\sqrt{20+\sqrt{20+\sqrt{20+...})}}}}$. As the expression under the square root extends infinitely, the expression in brackets equals $x$ itself, so we can rewrite the given expression as $x=\sqrt{20+x}$. Squaring both sides, $x^2=20+x$, so $x=5$ or $x=-4$. As given that $x>0$, only one solution is valid: $x=5$.

Hope it helps.

04 Feb 2009, 05:41   Intern (Joined: 15 Jan 2009; Schools: Marshall '11)

Since the answer choices are dissimilar, we can estimate the answer here. $\sqrt{20}$ is somewhere between 4 and 5. Suppose it's 5; then we get $\sqrt{20+\sqrt{20+5}}=\sqrt{20+5}=5$.

04 Feb 2009, 12:39   Senior Manager (Joined: 30 Nov 2008; Schools: Fuqua)

If so, shouldn't the question be "What is the approximate value of x?" Just curious.

04 Feb 2009, 08:29   Current Student (FN; Schools: Wharton '11, HBS '12)

Agree with 5; I estimated it to be 5. Now if they had a 4 in the answer choices, that would have been tough.

04 Feb 2009, 08:32   SVP (Joined: 07 Nov 2007; Location: New York)

Even with a 4 among the choices, it's not tough: $\sqrt{20}$ is clearly greater than 4. I'd agree if the answer choices had options like 4.9 or 4.8, though.

04 Feb 2009, 08:38   SVP (New York)

Quote (ritula): "How did you get 5?"
Quote (scthakur): "B. 5 (by process of elimination)."

The actual value is x = 4.994690378 ≈ 5. The only way to do it is process of elimination.

04 Feb 2009, 08:59   SVP (GMAT TIGER; Joined: 29 Aug 2007)

1: $x= \sqrt{20+\sqrt{20+\sqrt{20}}}= \sqrt{20+\sqrt{20+4.47}}= \sqrt{20+\sqrt{24.47}}= \sqrt{20+4.95}= \sqrt{24.95}= 4.995 \approx 5.00$

2: The innermost root: $\sqrt{20}$ is 4 plus a fraction. The middle root: $\sqrt{20+4.\text{xx}} = \sqrt{24.\text{xx}}$ is 4 plus a fraction. The outer root: again 4 plus a fraction, which is definitely close to none other than 5.

3: Using POE: A: it cannot be 20, because the value under the root would have to be 400, which is impossible, so A is ruled out. B: it could be 5, as shown in method 2. C: 2 is not possible, because the first 20 under the root alone already forces the value above 4. D: 8 is not possible, because the value under the root would have to be 64; even adding up all three 20s gives no more than 60. So we are left with 5: B makes sense.

02 Oct 2009, 09:49   Intern (Joined: 27 Sep 2009)

x = √(20 + x), so x² = 20 + x, i.e. x² - x - 20 = 0; solving, we get x = 5 or x = -4.

02 Oct 2009, 12:53   Intern (Joined: 20 May 2009)

Boiled it down to roughly 5.5-ish. 8 is too high; must be 5.

29 Sep 2010, 06:15   Director (nonameee; Joined: 23 Apr 2010)

Bunuel, would you be so kind as to look at this question? Is there any other way to solve it rather than elimination? Can you describe elimination in greater detail? Thank you.

04 Oct 2010, 03:31   Director (nonameee)

Bunuel, thank you very much.

22 Sep 2013, 04:02   Math Expert (Bunuel)

Similar questions to practice:
tough-and-tricky-exponents-and-roots-questions-125956-40.html#p1029228
find-the-value-of-a-given-a-3-3-3-3-3-inf-138049.html
if-the-expression-x-sqrt-2-sqrt-2-sqrt-2-sqrt-2-extends-98647.html

Hope it helps.

11 Dec 2014, 02:37   SVP (PareshGmat; Joined: 27 Dec 2012; Location: India)

$$x= \sqrt{20+\sqrt{20+\sqrt{20}}}$$

Converting the square root signs to powers:

$x = (20 + (20 + 20^{\frac{1}{2}})^{\frac{1}{2}})^{\frac{1}{2}}$

Squaring both sides:

$x^2 = 20 + (20 + 20^{\frac{1}{2}})^{\frac{1}{2}}$

Now look at the answer options: the value of $x^2$ has to be around 25, but way less than 64.

08 Jul 2016, 15:49   Director (Joined: 24 Nov 2015; Location: United States)

Why are only 4 options given in the question? I don't think this is a GMAT-type question; as for the explanation, I agree with Bunuel that it should ask for the approximate value of x. Correct answer: B.

08 Jul 2016, 22:06   Director (LogicGuru1; Joined: 04 Jun 2016)

Fortunately, there is a super shortcut for this question. The whole expression can be seen as $x= \sqrt{20+\sqrt{\text{some number}}}$. $\sqrt{20}$ is approximately 4.47, and the nested radicals contribute a number slightly under 5 inside the outer root, so the answer is only a little above 4.47. Look for the nearest answer greater than that; in this question, it's 5. If you get this kind of question on the GMAT, thank your stars, because you can save a tremendous amount of time and bank sure marks.
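Both readings of the expression (the finite three-level radical in the question and the infinitely nested one in Bunuel's post) are easy to check numerically. A minimal Haskell sketch (illustrative only; the name nested is not from the thread):

nested :: Int -> Double
nested 1 = sqrt 20
nested k = sqrt (20 + nested (k - 1))   -- k levels of sqrt(20 + ...)

main :: IO ()
main = mapM_ (print . nested) [1, 2, 3, 10]
-- ~4.472, ~4.947, ~4.995 (the three-level expression in the question),
-- and ~5.0 by depth 10, approaching the fixed point of x = sqrt(20 + x)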
# Math Help - Simultaneous DE

1. ## Simultaneous DE

dx/dt = -ax
dy/dt = ax - by

where a and b are constants. Solve this system subject to x(0) = x_0 (x "not") and y(0) = y_0 (y "not").

Is it possible to substitute -dx/dt into equation 2, or do I have to solve equation one first to find the solution for x?

2. Solve the first DE for x (it is a separable DE), and plug that into the second DE. (And it is x- and y-naught, as in zero.)

3. OK, so for the first DE I got x = Ke^{-at}, where K is the constant e^c (from the integration). How do I substitute the condition x(0) = x_0? Or do I substitute the whole expression into equation 2?

4. $x(0) = x_0$ means: when $t = 0,~x = x_0$.

We have $x = Ke^{-at}$. Applying the initial condition, we see that $x_0 = Ke^0 = K$.

Thus, $x = x_0e^{-at}$. Plug this expression into the second equation.
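The thread stops at the substitution step. For completeness, a worked continuation (not from the thread; a standard integrating-factor computation, assuming $a \neq b$): substituting $x = x_0e^{-at}$ gives the linear first-order equation

$\frac{dy}{dt} + by = ax_0e^{-at}.$

Multiplying by the integrating factor $e^{bt}$ gives $\frac{d}{dt}\left(ye^{bt}\right) = ax_0e^{(b-a)t}$, and integrating,

$ye^{bt} = \frac{ax_0}{b-a}e^{(b-a)t} + C \quad\Longrightarrow\quad y = \frac{ax_0}{b-a}e^{-at} + Ce^{-bt}.$

The initial condition $y(0) = y_0$ gives $C = y_0 - \frac{ax_0}{b-a}$, hence

$y(t) = \frac{ax_0}{b-a}e^{-at} + \left(y_0 - \frac{ax_0}{b-a}\right)e^{-bt}, \qquad a \neq b.$

(If $a = b$, the same method yields $y(t) = (ax_0t + y_0)e^{-at}$.)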
# Symbolic Closed-Form Fibonacci

Let $V := \{(a_j)_{j\in\mathbb{N}}\subset\mathbb{C}\mid a_n=a_{n-1}+a_{n-2}\;\forall n>1\}$ be the two-dimensional complex vector space of sequences adhering to the Fibonacci recurrence relation, with basis $B := ((0,1,\dots), (1,0,\dots))$. Let furthermore $f: V\to V, (a_j)_{j\in\mathbb{N}}\mapsto(a_{j+1})_{j\in\mathbb{N}}$ be the sequence shift endomorphism, represented by the transformation matrix $A := M^B_B(f) = \begin{pmatrix}1&1\\1&0\end{pmatrix}$. By iteratively applying the sequence shift, a closed-form solution for the standard Fibonacci sequence follows.

$F_n := (B_1)_n = (f^n(B_1))_0 = (A^n \cdot B_1)_2$

Diagonalizing $A$ leads to eigenvalues $\varphi = \frac{1+\sqrt{5}}{2}, \psi = \frac{1-\sqrt{5}}{2}$ and a diagonalization

$A=\begin{pmatrix}1&1\\1&0\end{pmatrix} = \begin{pmatrix}\psi&\varphi\\1&1\end{pmatrix} \cdot \begin{pmatrix}\psi&0\\0&\varphi\end{pmatrix} \cdot \begin{pmatrix}\frac{2\psi-1}{5}&\frac{\varphi+2}{5}\\\frac{2\varphi-1}{5}&\frac{\psi+2}{5}\end{pmatrix}.$

Using said diagonalization one deduces

\begin{aligned}A^n\cdot B_1&= \begin{pmatrix}\psi&\varphi\\1&1\end{pmatrix} \cdot \begin{pmatrix}\psi^n&0\\0&\varphi^n\end{pmatrix} \cdot \begin{pmatrix}\frac{2\psi-1}{5}\\\frac{2\varphi-1}{5}\end{pmatrix}\\ &= \begin{pmatrix}\psi&\varphi\\1&1\end{pmatrix} \cdot \begin{pmatrix}\psi^n\cdot\frac{2\psi-1}{5}\\\varphi^n\cdot\frac{2\varphi-1}{5}\end{pmatrix}\\ &= \begin{pmatrix} \psi^{n+1}\cdot\frac{2\psi-1}{5} + \varphi^{n+1}\cdot\frac{2\varphi-1}{5} \\ \psi^n\cdot\frac{2\psi-1}{5} + \varphi^n\cdot\frac{2\varphi-1}{5} \end{pmatrix}. \end{aligned}

Therefore, since $2\psi-1 = -\sqrt{5}$ and $2\varphi-1 = \sqrt{5}$,

\begin{aligned} F_n &= (A^n \cdot B_1)_2 \\ &= \psi^n\cdot\frac{2\psi-1}{5} + \varphi^n\cdot\frac{2\varphi-1}{5} \\ &= \frac{-1}{\sqrt{5}}\cdot\psi^n+\frac{1}{\sqrt{5}}\cdot\varphi^n \\ &= \frac{1}{\sqrt{5}}\cdot(\varphi^n-\psi^n). \end{aligned}

Thus a closed-form expression not involving any higher-dimensional matrices is found. To avoid precision errors, I implemented a basic symbolic expression simplifier (fib.hs): using $\sqrt[\star]{n} := \operatorname{sgn}(n)\cdot \sqrt{|n|}\;\forall n\in\mathbb{Z}$ as a negative-capable root, a symbolic expression is modeled as follows.

data Expr = Sqrt Int
          | Neg Expr
          | Expr :+ Expr
          | Expr :* Expr
          | Expr :/ Expr
          | Expr :- Expr
          | Expr :^ Int
          deriving Eq

Said model is capable of representing the above derived formula.

phi = (Sqrt 1 :+ Sqrt 5) :/ Sqrt 4
psi = (Sqrt 1 :- Sqrt 5) :/ Sqrt 4

fib n = (Sqrt 1 :/ Sqrt 5) :* (phi :^ n :- psi :^ n)

Using this implementation to calculate the sequence should be possible (assuming the simplifier does not get stuck at any occurring expression), yet it takes its sweet time: $F_6$ takes half a minute on my 4 GHz machine.

*Main> simplify $ fib 6
√64

Quicker sequence calculation methods include a brute-force $A^n$ approach, e.g.

import Data.List (transpose)

a *@* b = [[sum . map (uncurry (*)) $ zip ar bc | bc <- transpose b] | ar <- a]

a *^* 0 = [[1, 0], [0, 1]]
a *^* n = a *@* (a *^* (n - 1))

-- entry (0, 1) of A^n is F_n; entry (0, 0) would be F_{n+1}
fib = (!! 1) . (!! 0) . ([[1, 1], [1, 0]] *^*)

as well as using lazy evaluation to construct the whole infinite sequence.

fibs = [0, 1] ++ [x + y | (x, y) <- zip fibs $ tail fibs]
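A quick sanity check of the matrix-power version (a usage sketch, not from the original post; it relies on the fib defined just above):

main :: IO ()
main = print (map fib [0 .. 9])   -- [0,1,1,2,3,5,8,13,21,34]; fib 6 == 8 agrees with √64 above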