# Radio observations of candidate massive YSOs in the southern hemisphere

Urquhart, J. S., Busfield, A. L., Hoare, Melvin G., Lumsden, Stuart L., Clarke, A. J., Moore, Toby J. T., Mottram, Joseph C., Oudmaijer, Rene D. (2007) Radio observations of candidate massive YSOs in the southern hemisphere. Astronomy and Astrophysics, 461 (1). pp. 11-23. ISSN 0004-6361. (doi:10.1051/0004-6361:20065837)

Official URL: http://www.dx.doi.org/10.1051/0004-6361:20065837

## Abstract

Context. The Red MSX Source (RMS) survey is a multi-wavelength programme of follow-up observations designed to distinguish between genuine massive young stellar objects (MYSOs) and other embedded or dusty objects, such as ultra-compact (UC) HII regions, evolved stars and planetary nebulae (PNe). We have identified nearly 2000 MYSO candidates by comparing the colours of MSX and 2MASS point sources to those of known MYSOs.

Aims. There are several other types of embedded or dust-enshrouded objects that have similar colours to MYSOs and contaminate our sample. Two sources of contamination are UCHII regions and PNe, both of which can be identified from the radio emission emitted by their ionised nebulae.

Methods. In order to identify UCHII regions and PNe that contaminate our sample, we have conducted high-resolution radio continuum observations at 3.6 and 6 cm of all southern MYSO candidates ($235\degr < l < 350\degr$) using the Australia Telescope Compact Array (ATCA). These observations have a spatial resolution of ~1-2″ and typical image rms noise values of ~0.3 mJy — sensitive enough to detect an HII region powered by a B0.5 star at the far side of the Galaxy.

Results. Of the 826 RMS sources observed we found 199 to be associated with radio emission, ~25% of the sample. The Galactic distribution, morphologies and spectral indices of the radio sources associated with the RMS sources are consistent with these sources being UCHII regions. Importantly, the 627 RMS sources for which no radio emission was detected are still potential MYSOs. In addition to the 802 RMS fields observed we present observations of a further 190 fields. These observations were made towards MSX sources that passed cuts in earlier versions of the survey, but were later excluded.

Keywords: radio continuum, stars, formation, early-type, pre-main sequence
# Help in solving a PDE heat problem with the FFCT

Solve the following heat problem using the Finite Fourier Cosine Transform (FFCT): A metal bar of length $L$ is at a constant temperature of $U_0$. At $t=0$ the end $x=L$ is suddenly given the constant temperature $U_1$, and the end $x=0$ is insulated. Assuming that the surface of the bar is also insulated, find the temperature at any point $x$ of the bar at any time $t>0$; assume $k=1$.

Equations used:

1. Heat equation:
$$\frac {\partial^2 u} {\partial x^2} = \frac 1 k \frac {\partial u} {\partial t}$$
2. The FFCT transform pair and its operational properties (as given in the attached picture, not reproduced here).

My attempt at a solution goes like this:

$$\frac {\partial^2 u} {\partial x^2} = \frac 1 k \frac {\partial u} {\partial t}$$

$$\mathcal{F}_{fc} \left[ \frac {\partial u} {\partial t} \right] = \mathcal{F}_{fc} \left[ \frac {\partial^2 u} {\partial x^2} \right]$$

$$\frac {dU_n} {dt} = -\left( \frac {n \pi} L \right)^{2} U_n(t) + \left( -1 \right)^n \frac {\partial u} {\partial x}(L,t) - \frac {\partial u} {\partial x}(0,t)$$

and, since the end $x=0$ is insulated ($u_x(0,t)=0$),

$$\frac {dU_n} {dt} = -\left( \frac {n \pi} L \right)^{2} U_n(t) + \left( -1 \right)^n \frac {\partial u} {\partial x}(L,t)$$

and I don't know how to continue. Can you provide the rest of the solution in detail, please? Regards.

• Hello Mr. @Harry49, here is my complete problem. – aows61 Sep 14 '17 at 16:24
• I just want to make sure you got the problem, @Harry49. – aows61 Sep 14 '17 at 16:30
• Hello Mr. Harry @Harry49. – aows61 Sep 14 '17 at 16:31
• Please stop the spam! The most straightforward way to solve this PDE problem is separation of variables (see e.g. [1]). – Harry49 Sep 14 '17 at 16:32
• Dear Mr. @Harry49, I already solved it using separation of variables, but I am required to solve it using the Finite Fourier Cosine Transform (FFCT), and I tried but stopped somewhere. – aows61 Sep 14 '17 at 16:37
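For completeness, here is a sketch of one standard way to finish. It is not the method from the attached transform table: because the bar has a Neumann condition at $x=0$ ($u_x(0,t)=0$) but a Dirichlet condition at $x=L$ ($u(L,t)=U_1$), the full-range kernel $\cos(n\pi x/L)$ does not absorb both boundary conditions, so the usual trick is to subtract the steady state and expand in the quarter-range cosine eigenfunctions instead.

Let $w(x,t) = u(x,t) - U_1$, so that $w_t = w_{xx}$, $w_x(0,t) = 0$, $w(L,t) = 0$ and $w(x,0) = U_0 - U_1$. The eigenfunctions satisfying these homogeneous conditions are $\cos(\lambda_n x)$ with $\lambda_n = \frac{(2n+1)\pi}{2L}$, so

$$w(x,t) = \sum_{n=0}^{\infty} A_n \, e^{-\lambda_n^2 t} \cos(\lambda_n x), \qquad A_n = \frac{2}{L}\int_0^L (U_0 - U_1)\cos(\lambda_n x)\,dx = \frac{4(U_0 - U_1)(-1)^n}{(2n+1)\pi}.$$

Hence

$$u(x,t) = U_1 + \frac{4(U_0 - U_1)}{\pi} \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1} \exp\!\left(-\frac{(2n+1)^2 \pi^2 t}{4L^2}\right) \cos\!\left(\frac{(2n+1)\pi x}{2L}\right),$$

which reduces to $U_0$ at $t=0$ and to $U_1$ at $x=L$, as required (here $k=1$; for general $k$ the exponent is multiplied by $k$).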
## Wikidot-ing!

### December 29, 2007

Remember the C source code for calculating the elastic stress fields of a circular cavity in a square plate under uniaxial stress? I have found another neat way of shipping code with comments (and equations and figures, if need be) thanks to Abi: here is the wikidot page of code, schematic and equations. Have fun!

## Code: Circular hole under uniaxial stress

### September 4, 2007

The wonderful guys at WordPress have given a wraparound so that posting source code is easier; it can't get any better than this. Look out on these pages for more phase field codes! For now, as a test, here is the code which will calculate the elastic stress fields around a circular hole in a square plate which is under a uniaxial stress.

```c
/************
This program calculates the elastic stress fields of a circular hole in a
square plate under a uniaxial stress. The stress is applied along the
x-axis. For the expressions used in calculating the stress fields, see
p. 109 of Elasticity by Barber -- Equations 8.74, 8.75, and 8.76.
************/

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void){

	FILE *fp1, *fp2, *fp3;
	double S;             /* Applied uniaxial stress */
	double a;             /* Radius of the circular hole */
	double r;             /* Distance along the x- or y-axis */
	int i;
	double s11, s12, s22; /* The 11, 12 and 22 stress fields respectively */

	S = 1.0;
	a = 2.5;

	/* Stresses along the x-axis */
	fp1 = fopen("sigma11_x","w");
	fp2 = fopen("sigma22_x","w");
	fp3 = fopen("sigma12_x","w");
	for(i=0; i<3000; ++i){
		r = 0.01*i;
		if (r < a){
			s11 = s12 = s22 = 0.0;
		}
		else{
			s11 = 0.5*S*(1. - a*a/(r*r))
			    + 0.5*S*(3.*a*a*a*a/(r*r*r*r) - 4.*a*a/(r*r) + 1.);
			s22 = 0.5*S*(1. + a*a/(r*r))
			    - 0.5*S*(3.*a*a*a*a/(r*r*r*r) + 1.);
			s12 = 0.0;
		}
		fprintf(fp1,"%le %le\n",r,s11);
		fprintf(fp2,"%le %le\n",r,s22);
		fprintf(fp3,"%le %le\n",r,s12);
	}
	fclose(fp1);
	fclose(fp2);
	fclose(fp3);

	/* Stresses along the y-axis */
	fp1 = fopen("sigma11_y","w");
	fp2 = fopen("sigma22_y","w");
	fp3 = fopen("sigma12_y","w");
	for(i=0; i<3000; ++i){
		r = 0.01*i;
		if (r < a){
			s11 = s12 = s22 = 0.0;
		}
		else{
			s22 = 0.5*S*(1. - a*a/(r*r))
			    - 0.5*S*(3.*a*a*a*a/(r*r*r*r) - 4.*a*a/(r*r) + 1.);
			s11 = 0.5*S*(1. + a*a/(r*r))
			    + 0.5*S*(3.*a*a*a*a/(r*r*r*r) + 1.);
			s12 = 0.0;
		}
		fprintf(fp1,"%le %le\n",r,s11);
		fprintf(fp2,"%le %le\n",r,s22);
		fprintf(fp3,"%le %le\n",r,s12);
	}
	fclose(fp1);
	fclose(fp2);
	fclose(fp3);

	return 0;
}
```

At the first go, it looks like everything works fine except for (a) the #include lines, whose angle-bracketed header names it seems to treat as HTML tags, and (b) the greater-than signs, which it seems to have some problems displaying. Code within the sourcecode wraparound is supposed to be formatted automatically. The language option I chose is cpp, which I assume is C++. Could that be the problem?

Test: <> #include

Hmm… I guess the wraparound requires a bit of fixing!!!

Test 2:

```php
$features = file_get_contents( 'http://wordpress.com/features/' );
preg_match_all( '|<h3>(.*?)</h3>|is', $features, $why_wp_rocks );
foreach ( $why_wp_rocks[1] as $slick_feature )
	$hotness[] = $slick_feature;
var_dump( $hotness );
```

## Stress fields of a circular hole in a plate

### March 12, 2007

If you are writing a code to solve the equations of mechanical equilibrium in an elastically inhomogeneous system, then there are several test cases that you might wish to run to make sure that your code is working fine. One of them is the stress fields of a circular hole in a plate which is under an applied uniaxial stress.
Typically, the problem is solved in polar coordinates and the stresses at any given point $(r,\theta)$ are given by the following expressions; see Elasticity (2nd Edition) by J R Barber, p. 109, Equations 8.74 — 8.76, for example — by the way, a sample chapter from the book is available for download here (pdf).

$\sigma_{rr} = \frac{S}{2} \left( 1 - \frac{a^{2}}{r^{2}}\right) + \frac{S \cos(2\theta)}{2}\left(\frac{3 a^{4}}{r^{4}} - \frac{4 a^{2}}{r^{2}} + 1 \right)$;

$\sigma_{r \theta} = \frac{S \sin(2\theta)}{2}\left(\frac{3 a^{4}}{r^{4}} - \frac{2 a^{2}}{r^{2}} - 1 \right)$;

$\sigma_{\theta \theta} = \frac{S}{2} \left( 1 + \frac{a^{2}}{r^{2}}\right) - \frac{S \cos(2\theta)}{2}\left(\frac{3 a^{4}}{r^{4}} + 1 \right)$;

where $S$ is the applied stress and $a$ is the radius of the cavity. Here is a code written in C which will generate the data files for the stresses along the x- and y-axes from the center of the circular cavity of unit radius under an applied (tensile) uniaxial stress of unity. Note that you should have write permissions in the directory for the output data files to be written. Compile the code (gcc -lm circ_hole_under_uniaxial_stress.c) and execute the resultant binary a.out. You may have to tweak the code a bit if you have a different applied stress and/or cavity radius. The data files generated using the code can then be plotted using gnuplot; such plots of the stress fields are shown here: Stress fields along x-axis (pdf) and Stress fields along y-axis (pdf). The following aspects of the plots are worth noting:

1. The $\sigma_{12}$ components of the stress fields are zero both along the x- and y-axes;
2. When the applied stress is along the x-direction, the $\sigma_{11}$ component of the stress at the cavity along the y-axis jumps to thrice the applied stress; and,
3. Far away from the cavity, the stress fields in the plate are just the applied fields (as one would expect from St. Venant's principle, which was used to obtain the stress fields in the first place).

Have fun!
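As an aside (my own addition, not part of the original post), the three polar-coordinate expressions above are easy to wrap in a small helper that evaluates them at an arbitrary point $(r,\theta)$; the function name and the spot check below are illustrative only:

```c
#include <stdio.h>
#include <math.h>

/* Kirsch solution for a circular hole of radius a in a large plate under a
   remote uniaxial stress S applied along x (Barber, Eqs. 8.74-8.76). */
static void hole_stress(double S, double a, double r, double theta,
                        double *srr, double *srt, double *stt)
{
    double q  = (a*a)/(r*r);   /* (a/r)^2 */
    double q2 = q*q;           /* (a/r)^4 */

    *srr = 0.5*S*(1.0 - q) + 0.5*S*cos(2.0*theta)*(3.0*q2 - 4.0*q + 1.0);
    *srt = 0.5*S*sin(2.0*theta)*(3.0*q2 - 2.0*q - 1.0);
    *stt = 0.5*S*(1.0 + q) - 0.5*S*cos(2.0*theta)*(3.0*q2 + 1.0);
}

int main(void)
{
    const double pi = acos(-1.0);
    double srr, srt, stt;

    /* Hoop stress at the edge of the hole, 90 degrees from the loading
       axis: should come out as 3*S, the factor-of-three jump noted in
       point 2 above. */
    hole_stress(1.0, 1.0, 1.0, 0.5*pi, &srr, &srt, &stt);
    printf("sigma_rr = %g, sigma_rtheta = %g, sigma_thetatheta = %g\n",
           srr, srt, stt);
    return 0;
}
```

Compiled the same way as the code above (with -lm), it prints $\sigma_{\theta\theta} = 3$ for $S = 1$, matching the factor-of-three stress concentration listed in point 2.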
# Deleting temp files placed via FileInstall

#1 · Posted

I have a script that I wanted to play a "goodbye" sound when exiting. I used FileInstall to copy the wav file to @TempDir. Being the neat freak that I am, I want to delete the file after it is played. I used FileDelete at the end of my script to delete it, but the file will not delete. Apparently, the script keeps the file in use so it cannot be deleted. What is the proper way to use a file as a temporary file and delete it when done?

Thanks,
Derek

#2 · Posted

Why don't you try including a function that deletes the file when the script closes? Try this:

```autoit
OnAutoItExitRegister ( "deletefile" )

;code goes here

Func deletefile ()
    ; Spawn an external cmd.exe that waits a few seconds and then deletes
    ; the file, so the deletion happens after this script has exited and
    ; released the .wav file.
    Run ( @ComSpec & ' /c timeout 4 & del "Path\to\file.wav"' )
    Exit
EndFunc
```
1. ## Junior question

I am not sure if this is the right forum, perhaps it should be in the basic algebra forum, but it is the sort of question that involves concepts rather than techniques. To find a solution set for:

(1): ln(2x+4) = x^2

must one use a graphing method? Either the intersection of y1 = ln(2x+4) and y2 = x^2, or the intersection of y = ln(2x+4) - x^2 with the x-axis (the zeros of the function)? If one must use a graphing technique, why is that, what is missing or needed, some new identity?

I note too that in applied mathematics there is such a thing as getting a rough check of your work using "dimensional analysis", but in mathematics, dimensions are not a factor. I can see that this is the case because all expressions made over, say, the real numbers must ultimately reduce to a real number, therefore you are always dealing with a correspondence between pure numbers; attached "dimensions" are then a separate issue. Still, in this case it does seem that the dividing line between a log equation that is solvable, at least by the basic algebra techniques that I know of, and one that is not, is delineated by mixed dimensions as it were, namely terms that represent exponents (i.e. ln(2x+4)) versus those that do not (i.e. x^2). In addition to the above question I suppose what I am also sort of grasping for is how one can look at variations of the above equation and know that a graphing solution is the only way to go.

2. ## Re: Junior question

I did move this to the Algebra forum. Yes, unit analysis in Mathematics can get pretty odd. However Physics can get messy, too. For example, consider the torque equations: 1) $\tau = I \alpha$ and 2) $\tau = \vec{r} \times \vec{F}$. 1) has the unit N m rad and 2) has N m. Now the radian has the unit 1 rad = 1 m/m, which is unitless, but torque is clearly a rotational quantity and thus should contain angular information, so a N m rad should not be the same as N m. As well, the unit for energy is 1 J = 1 N m, but torque is clearly not an energy. Basically it's a mess.

As to your equation, yes, you will have to use a graphing method or some other numerical technique.

-Dan

3. ## Re: Junior question

Originally Posted by Ray1234
I am not sure if this is the right forum, perhaps it should be in the basic algebra forum, but it is the sort of question that involves concepts rather than techniques. To find a solution set for:

(1): ln(2x+4) = x^2

must one use a graphing method? Either the intersection of y1 = ln(2x+4) and y2 = x^2, or the intersection of y = ln(2x+4) - x^2 with the x-axis (the zeros of the function)? If one must use a graphing technique, why is that, what is missing or needed, some new identity? I note too that in applied mathematics there is such a thing as getting a rough check of your work using "dimensional analysis", but in mathematics, dimensions are not a factor. I can see that this is the case because all expressions made over, say, the real numbers must ultimately reduce to a real number, therefore you are always dealing with a correspondence between pure numbers; attached "dimensions" are then a separate issue. Still, in this case it does seem that the dividing line between a log equation that is solvable, at least by the basic algebra techniques that I know of, and one that is not, is delineated by mixed dimensions as it were, namely terms that represent exponents (i.e. ln(2x+4)) versus those that do not (i.e. x^2).
In addition to the above question I suppose what I am also sort of grasping for is how one can look at variations of the above equation and know that a graphing solution is the only way to go.

Write your iteration as x = sqrt(ln(2*x + 4)). It took a few iterations to converge to 1.3827.

4. ## Re: Junior question

Either way works ... I prefer subtracting the two functions and looking for the zeros.

5. ## Re: Junior question

Originally Posted by votan
Write your iteration as x = sqrt(ln(2*x + 4)). It took a few iterations to converge to 1.3827.

Okay. Just how did you come up with the iteration formula? That's always been a mystery to me.

-Dan

6. ## Re: Junior question

Hmmm. My title is probably misleading. Sorry. Thanks for the iteration formula, but I had originally put this question in the philosophy forum. I am more interested in the nature of the problem rather than a solution, which I do take to be achievable by numerical means. One mostly studies techniques for solving statements that can be legitimately written. In early training a student (me in particular) assumes that if it is written correctly a statement can be solved. That is, that a statement can always be cooked up from/for a solution set. That turns out to be incredibly naive and idealistic. I am just wondering if, on its face, the above equation can be recognized as solvable only by numerical means and, if so, what that feature is. A secondary question was prompted by curiosity. If a mathematician were to attempt to find a method of solving such an equation analytically, what would they be looking for?

7. ## Re: Junior question

Dan, here it is:

x^2 = ln(2x + 4)
sqrt(x^2) = x = +/- sqrt(ln(2x + 4))

Use the positive square root first:

x = sqrt(ln(2*1 + 4)) = 1.3386

Use this value of x back in the log function:

x = sqrt(ln(2*1.3386 + 4)) = 1.3779

Again, feed this value back into the log function:

x = sqrt(ln(2*1.3779 + 4)) = 1.3822

Iterate:

x = sqrt(ln(2*1.3822 + 4)) = 1.3826
x = sqrt(ln(2*1.3826 + 4)) = 1.3827

Convergence confirmed. This is the positive root. Do the same with -sqrt to obtain the negative root.

Now who told me to do it this way? Nobody. I did not search for it and I cannot remember anyone telling me this iteration. It works for me with almost all transcendental equations. Sometimes it diverges; rewrite it in the reverse order and it converges. It was a byproduct of my early research on finding the real zeros of ill-conditioned polynomials. At that time I used my calculator, an HP 15C, very powerful Polish logic.

8. ## Re: Junior question

As I think further I see that if I exponentiate both sides of ln(2x+4) = x^2 I get

2x + 4 = e^(x^2)

which is not an exponential equation, since the variable on the left is not the argument of e or any other base. Is there a name for this type of equation? The point is then that if you don't have an exponential equation you cannot use a log table and log properties to solve it. I am thinking that if you were to try and invent a technique for solving this sort of problem, what you would be trying to do is to find some function other than the log function, say beep(x), that you could apply to each side of the equation that would allow you to consolidate it to a single function equal to a constant, and use beep^(-1)(expression) to return you to "expression"; you would also need a beep function table.
I am just thinking aloud and expressing something that might be obvious to most (or obviously wrong) but that I have never seen clearly expressed, nor personally realized, before.

9. ## Re: Junior question

Here's a link to some "light" reading that may interest you regarding this topic ... http://www1.american.edu/cas/mathsta...files/glog.pdf

10. ## Re: Junior question

Yes, this is excellent. A "light" read shows me the development paradigm while a deep read is proving to be perfect "treadmill reading". It is amazing how quickly time on the treadmill passes when you can space out. This is what I was looking for, plus some. Thanks.
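A small footnote (my own, not from the thread): the iteration in post #7 is easy to script. Here is a minimal C sketch, with the starting guess and the stopping tolerance chosen arbitrarily:

```c
#include <stdio.h>
#include <math.h>

/* Fixed-point iteration x_{n+1} = sqrt(ln(2*x_n + 4)) for the positive
   root of ln(2x + 4) = x^2; use x_{n+1} = -sqrt(...) for the negative one. */
int main(void)
{
    double x = 1.0;       /* arbitrary starting guess */
    double tol = 1e-6;    /* stop when successive iterates agree to ~6 digits */

    for (int i = 1; i <= 100; ++i) {
        double next = sqrt(log(2.0*x + 4.0));   /* log() is the natural log */
        printf("iteration %2d: x = %.6f\n", i, next);
        if (fabs(next - x) < tol) break;
        x = next;
    }
    return 0;
}
```

It settles at x ≈ 1.3827, matching the hand iteration above.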
Light exposure via head-mounted devices suppresses melatonin and improves vigilant attention without affecting cortisol and comfort
Schmidt, Christina; Xhrouet, Marine; Hamacher, Manon et al. Poster (2017, July 30)

Cation Distribution Dependent Magnetic Properties in CoCr2-xFexO4 (x = 0.1 to 0.5): EXAFS, Mössbauer and Magnetic Measurements
Kumar, Durgesh; Banerjee, Alok; Mahmoud, Abdelfattah et al., in Dalton Transactions (2017), 46
In this report, we have examined the evolution of the structure and rich magnetic transitions such as a paramagnetic to ferrimagnetic phase transition at the Curie temperature (TC), spiral ordering temperature (TS) and lock-in temperature (TL) observed in the CoCr2O4 spinel multiferroic after substituting Fe. The crystal structure, microstructure and cation distribution among the tetrahedral (A) and octahedral (B) sites in the spinel lattice are characterised by X-ray diffraction, transmission electron microscopy, Extended X-ray Absorption Fine Structure (EXAFS) and Mössbauer spectroscopy. Due to the same radial distances of the first coordination shell in both tetrahedral and octahedral environments observed in EXAFS spectra, the position of the second coordination shell specifies the preference of more Fe ions towards the A site at x = 0.1. At x = 0.5, more Fe ions favour the B site. The cation distribution quantitatively obtained from the Mössbauer spectral analysis shows that while 60% of Fe ions occupy the A site in x = 0.1, 40% occupy it in x = 0.5. Surprisingly at x = 0.3, Fe ions are distributed equally among the A and B sites. dc magnetization reveals an increase in TC from 102 K to 200 K and in TS from 26 to 40 K with an increase in Fe concentration, indicating an enhancement in A–B exchange interaction at the expense of B–B. No report has until now demonstrated such an enhancement in TS either in pure or in doped CoCr2O4. Furthermore, frequency-dependent ac susceptibility (χ) data fitted with different phenomenological models such as the Néel–Arrhenius, Vogel–Fulcher and power law confirm a spin-glass and/or cluster-glass behaviour in nanoparticles of CoCr2-xFexO4.

Le maraîchage périurbain à Libreville et Owendo (Gabon) : pratiques culturales et durabilité
Bayendi-Loudit, Sandrine; Ndoutoume Ndong, Auguste; Francis, Frédéric, in Cahiers Agricultures (2017)
In Gabon, peri-urban gardening is an opportunity to provide vegetables to the main cities, such as Libreville and Owendo. Following a survey conducted in three market gardening areas, an inventory was conducted on the socio-economic characteristics, the diversity of crops, and pesticide uses. The cropped areas range from 0.08 ha to 0.4 ha per farmer, according to the site. National operators represent 51%, while people from Burkina Faso manage 40% of vegetable production. The most cultivated species throughout the year are amaranth (Amaranthus hybridus L.), lettuce (Lactuca sativa L.), Guinea sorrel (Hibiscus sabdariffa L.) and black nightshade (Solanum nigrum L.). The most important pests are Aphididae and some beetles. The most commonly used plant protection products are insecticides, mainly conventional neurotoxics. Better crop monitoring, pest control including pesticide application reduction, and the possibility to offer microcredit systems to small producers would help increase healthy peri-urban vegetable production and increase local food autonomy.

El impacto del fin del "giro al la izquierda" sobre las relaciones entre China y América Latina
Wintgens, Sophie. Conference (2017, July 28)
(Translated from Spanish.) China's arrival in Latin America at the end of the 2000s presented itself as a new option in the historical context of a "pendulum movement" that has always seen Latin American countries revolve mainly around Europe and its former colonial powers, or around the United States. Within this "Atlantic triangle" dynamic in which Latin America finds itself enclosed, China has also benefited from a favourable regional environment resulting from both structural factors (the weaknesses of Latin American regionalism) and conjunctural ones (the rise of the "new" South American left). The beginning of the 2010s, however, seems to mark the end of the "turn to the left" and the limits of the extractive development model adopted by most progressive Latin American governments. At the same time, the Chinese authorities have also initiated a change in their economic development model with the eleventh Five-Year Plan (2006-2010), to make it more self-centred and based on the production of high-quality products. Building on these observations, this paper examines the impact of this context on relations between China and Latin America.

Spray-drying as a tool to disperse conductive carbon inside Na2FePO4F particles by addition of carbon black or carbon nanotubes to the precursor solution
Mahmoud, Abdelfattah; Caes, Sebastien; Brisbois, Magali et al., in Journal of Solid State Electrochemistry (2017)
In this work, Na2FePO4F-carbon composite powders were prepared by spray-drying a solution of inorganic precursors with 10 and 20 wt% added carbon black (CB) or carbon nanotubes (CNTs). In order to compare the effect of CB and CNT when added to the precursor solutions, the structural, electrochemical, and morphological properties of the synthesized Na2FePO4F-xCB and Na2FePO4F-xCNT samples were systematically investigated. In both cases, X-ray diffraction shows that calcination at 600 °C in argon leads to the formation of Na2FePO4F as the major inorganic phase. 57Fe Mössbauer spectroscopy was used as a complementary technique to probe the oxidation states, local environment, and identify the composition of the iron-containing phases.
The electrochemical performance is markedly better in the case of Na2FePO4F-CNT (20 wt%), with specific capacities of about 100 mAh/g (Na2FePO4F-CNT) at C/4 rate vs. 50 mAh/g for Na2FePO4F-CB (20 wt%). SEM characterization of Na2FePO4F-CB particles revealed different particle morphologies for the Na2FePO4F-CNT and Na2FePO4F-CB powders. The carbon-poor surface observed for Na2FePO4F-CB could be due to a slow diffusion of carbon in the droplets during drying. On the contrary, Na2FePO4F-CNT shows a better CNT dispersion inside and at the surface of the NFPF particles that improves the electrochemical performance.

Synthesis and complexation of superbulky imidazolium-2-dithiocarboxylate ligands
Beltran Alvarez, Tomás Francisco; Zaragoza, Guillermo; Delaude, Lionel, in Dalton Transactions (2017), 46(28), 9036-9048
The superbulky N-heterocyclic carbenes (NHCs) 2,6-bis(diphenylmethyl)-4-methylimidazol-2-ylidene (IDip*Me) and its 4-methoxy analogue (IDip*OMe) reacted instantaneously with carbon disulfide to afford the corresponding imidazolium-2-dithiocarboxylate zwitterions in high yields. These new dithiolate ligands were fully characterized and their coordination chemistry toward common Re(I) and Ru(II) metal sources was thoroughly investigated. Neutral [ReBr(CO)3(S2C·NHC)] chelates featured three facially-arranged carbonyl groups on a distorted octahedron, whereas cationic [RuCl(p-cymene)(S2C·NHC)]PF6 complexes displayed a piano-stool geometry. The molecular structures of the six new compounds revealed that the NHC·CS2 inner salts were highly flexible. Indeed, the torsion angle between their anionic and cationic moieties varied between ca. 63° in the free ligands and 3° in the ruthenium complexes. Concomitantly, the S-C-S bite angle underwent a contraction from 131° to 110-113° upon chelation. Computation of the %VBur parameter showed that the dithiocarboxylate unit of the NHC·CS2 betaines chiefly determined the steric requirements of the imidazolium moieties, irrespective of the metal center involved in the complexation. The replacement of the p-methyl substituents of IDip*Me with p-methoxy groups in IDip*OMe did not significantly affect the ligand bulkiness. The more electron-donating methoxy group led, however, to small changes in various IR wavenumbers used to probe the electron donor properties of the carbene moiety.

Le projet « Pratiques et stratégies alimentaires dans l'Antiquité tardive »
Marganne, Marie-Hélène. Conference (2017, July 28)

Écrire de "Guiron" en Flandre à la fin du Moyen Âge
Veneziale, Marco. Conference (2017, July 28)

Occupational social and mental stimulation and cognitive decline with advancing age
Grotz, Catherine; Meillon, Céline; Amieva, Hélène et al., in Age and Ageing (2017)

Is supercritical fluid chromatography hyphenated to mass spectrometry suitable for the quality control of vitamin D3 oily formulations?
Andri, Bertyl; Dispas, Amandine; Klinkenberg, Régis et al., in Journal of Chromatography A (2017), 1515
Nowadays, many efforts are devoted to improve analytical methods regarding efficiency, analysis time and greenness. In this context, Supercritical Fluid Chromatography (SFC) is often regarded as a good alternative over Normal Phase Liquid Chromatography (NPLC). Indeed, modern SFC separations are fast and efficient, with suitable quantitative performances. Moreover, the hyphenation of SFC to mass spectrometry (MS) provides additional gains in specificity and sensitivity. The present work aims at the determination of vitamin D3 by SFC-MS for routine Quality Control (QC) of medicines specifically. Based on the chromatographic parameters previously defined in SFC-UV by Design of Experiments (DoE) and Design Space methodology, the method was adapted to work under isopycnic conditions ensuring a baseline separation of the compounds. Afterwards, the response provided by the MS detector was optimized by means of DoE methodology associated to desirability functions. Using these optimal MS parameters, quantitative performances of the SFC-MS method were challenged by means of total error approach method validation. The resulting accuracy profile demonstrated the full validity of the SFC-MS method. It was indeed possible to meet the specification established by the European Medicines Agency (EMA) (i.e. 95.0-105.0% of the API content) for a dosing range corresponding to at least 70.0-130.0% of the API content. These results highlight the possibility to use SFC-MS for the QC of medicine and obviously support the switch to greener analytical methods.

The Prediction-Focused Approach: An opportunity for hydrogeophysical data integration and interpretation in the critical zone
Hermans, Thomas; Nguyen, Frédéric; Klepikova, Maria et al. Poster (2017, July 27)
Two important challenges remain in hydrogeophysics: the inversion of geophysical data and their integration in quantitative subsurface models. Classical regularized inversion approaches suffer from spatially varying resolution and yield geologically unrealistic solutions, making their utilization for model calibration less consistent. Advanced techniques such as coupled inversion allow for a direct integration of geophysical data, but they are difficult to apply in complex cases and remain computationally demanding to estimate uncertainty. We investigated a prediction-focused approach (PFA) to directly estimate subsurface physical properties relevant in the critical zone from geophysical data, circumventing the need for classic inversions. In PFA, we seek a direct relationship between the data and the subsurface variables we want to predict (the forecast). This relationship is obtained through a prior set of subsurface models for which both data and forecast are computed. A direct relationship can often be derived through dimension reduction techniques (Figure 1). For hydrogeophysical inversion, the considered forecast variable is the subsurface variable, such as the salinity or saturation for example. An ensemble of possible solutions is generated, allowing uncertainty quantification.
For data integration, the forecast variable is the prediction we want to make with our subsurface models, such as the concentration of contaminant in a drinking water production well. Geophysical and hydrological data are combined to derive a direct relationship between data and forecast. We illustrate the methodology to predict the energy recovered in an ATES system considering the uncertainty related to spatial heterogeneity. With a global sensitivity analysis, we identify sensitive parameters for heat storage prediction and validate the use of a short term heat tracing experiment to generate informative data. We illustrate how PFA can be used to successfully derive the distribution of temperature in the aquifer from ERT during the heat tracing experiment. Then, we successfully integrate the geophysical data to predict heat storage in the aquifer using PFA. The result is a full quantification of the posterior distribution of the prediction conditioned to observed data in a relatively limited time budget.

Deep imaging search for planets forming in the TW Hya protoplanetary disk with the Keck/NIRC2 vortex coronagraph
Ruane, G.; Mawet, D.; Kastner, J. et al., in The Astronomical Journal (2017), 154
Distinct gap features in the nearest protoplanetary disk, TW Hya (distance of 59.5$\pm$0.9 pc), may be signposts of ongoing planet formation. We performed long-exposure thermal infrared coronagraphic imaging observations to search for accreting planets especially within dust gaps previously detected in scattered light and submm-wave thermal emission. Three nights of observations with the Keck/NIRC2 vortex coronagraph in $L^\prime$ (3.4-4.1$\mu$m) did not reveal any statistically significant point sources. We thereby set strict upper limits on the masses of non-accreting planets. In the four most prominent disk gaps at 24, 41, 47, and 88 au, we obtain upper mass limits of 1.6-2.3, 1.1-1.6, 1.1-1.5, and 1.0-1.2 Jupiter masses ($M_J$) assuming an age range of 7-10 Myr for TW Hya. These limits correspond to the contrast at 95\% completeness (true positive fraction of 0.95) with a 1\% chance of a false positive within $1^{\prime\prime}$ of the star. We also approximate an upper limit on the product of planet mass and planetary accretion rate of $M_p\dot{M}\lesssim10^{-8} M_J^2/yr$, implying that any putative $\sim0.1 M_J$ planet, which could be responsible for opening the 24 au gap, is presently accreting at rates insufficient to build up a Jupiter mass within TW Hya's pre-main sequence lifetime.

Mapping the dependency of crops on pollinators in Belgium
Jacquemin, Floriane; Violle, Cyrille; Rasmont, Pierre et al., in One Ecosystem (2017)

Building flow and transport models with electrical resistivity tomography data
Gottschalk, Ian; Hermans, Thomas; Knight, Rosemary et al. Poster (2017, July 26)
Aquifer recharge and recovery (ARR) is the process of enhancing natural groundwater resources and recovering water for later use by constructing engineered conveyances. Insufficient understanding of lithological heterogeneity at ARR sites often hinders attempts to predict where and how quickly infiltrating water will flow in the subsurface, which can adversely affect the quality and quantity of available water in the ARR site. In this study, we explored the use of electrical resistivity tomography (ERT) to assist in characterizing lithological heterogeneity at an ARR site, so as to incorporate it into a flow and contaminant transport model. In this case, we had non-collocated well core log data and ERT data from a full-scale ARR basin. We compared three independent methods for producing conditional lithology-resistivity probability distributions: 1) a search template to relate the nearest logged well lithologies with ERT resistivity panels, given search criteria; 2) a maximum likelihood estimation (MLE) to match bimodal normal distributions to the histogram of each ERT line; and 3) variogram-based lithology indicator simulations constrained to well data. Each approach leverages Bayes' Rule to estimate lithology probability given electrical resistivity. The simplest approach (method 1) yields an erroneous conditional probability function where sand dominates the conditional probability at nearly all resistivities, due in part to the strong presence of sand in the wells nearest the ERT lines. The approaches using MLE and lithology simulations (methods 2 and 3) produce similar, more realistic lithofacies probability functions. The range of resistivities where clay and sand overlap differs between methods 2 and 3: ranging between 100 and 200 ohm-m for method 2, and between 30 and 50 ohm-m for method 3. These differences affect the posterior lithology distributions in multiple point geostatistical (MPS) simulations, and in turn, predictions of flow from models which integrate these results. To test the models, we can compare measured breakthrough times of recharged water at the site to groundwater flow simulation results using the lithofacies models created by each method. The methods described here can inform the integration of non-collocated geophysical data into a variety of applications.

Symposium: Work, retirement and health. Presentation: "Retirement and Cognitive Functioning: A Tricky Association"
Grotz, Catherine; Adam, Stéphane. Conference (2017, July 26)

Evaluating model simulations of 20th century sea-level rise. Part 1: Global mean sea-level change
Slangen, A.; Meyssignac, B.; Agosta, Cécile et al., in Journal of Climate (2017)
Sea-level change is one of the major consequences of climate change and is projected to affect coastal communities around the world. Here, we compare Global Mean Sea-Level (GMSL) change estimated by 12 climate models from the 5th phase of the World Climate Research Programme's Climate Model Intercomparison Project (CMIP5) to observational estimates for the period 1900-2015. We analyse observed and simulated individual contributions to GMSL change (thermal expansion, glacier mass change, ice sheet mass change, landwater storage change) and compare the summed simulated contributions to observed GMSL change over the period 1900-2007 using tide gauge reconstructions, and over the period 1993-2015 using satellite altimetry estimates.
The model-simulated contributions allow us to explain 50 ± 30% (uncertainties 1.65σ unless indicated otherwise) of the mean observed change from 1901-1920 to 1988-2007. Based on attributable biases between observations and models, we propose to add a number of corrections, which result in an improved explanation of 75 ± 38% of the observed change. For the satellite era (1993-1997 to 2011-2015) we find an improved budget closure of 102 ± 33% (105 ± 35% when including the proposed bias corrections). Simulated decadal trends over the 20th century increase, both in the thermal expansion and the combined mass contributions (glaciers, ice sheets and landwater storage). The mass components explain the majority of sea-level rise over the 20th century, but the thermal expansion has increasingly contributed to sea-level rise, starting from 1910 onwards and in 2015 accounting for 46% of the total simulated sea-level change.

A panmictic Amazonian world? Bryophytes testify
Ledent, Alice. Poster (2017, July 25)
Understanding connectivity over different spatial and temporal scales is fundamental for biodiversity conservation and management. The Amazonian rainforest, one of the most diverse biodiversity hotspots, has experienced dramatic range contractions and expansions due to Pleistocene climate oscillations, and its human-induced fragmentation has accelerated at an unparalleled pace in the course of the Anthropocene. In this context, epiphytes, with their relatively short life-cycles, offer an ideal model to investigate the impact of past and present fragmentation on patterns of genetic structure and diversity. Due to the necessity to switch from one host tree to another, or from one leaf to another, epiphytic bryophytes typically exhibit high dispersal syndromes. In line with such high dispersal capacities, recent metacommunity analyses have raised the intriguing possibility that Amazonian epiphytic bryophyte communities are homogeneous across very large spatial scales, ultimately raising the notion that they might behave as a basin-wide panmictic population. Here, we implement fine-scale population genetic analyses to address the following questions: (i) Do Amazonian epiphytes exhibit population structure at regional (< 500 km) scale; (ii) If the hypothesis of a panmictic population is rejected, (iia) at which spatial scale does genetic structuring occur, and (iib) do neutral (isolation-by-distance) or ecological (isolation-by-ecology) processes shape patterns of genetic variation? We sampled exemplars of 15 epiphytic bryophyte species from two ecologically contrasted forest types (lowland rainforest and white-sand forest) in a 50,000 km² area in the middle Rio Negro. Genome-wide genetic data were produced using Genotyping By Sequencing. To circumvent severe taxonomic issues in challenging groups, which, like the Calymperaceae, are dominant in the epiphytic flora, we first implemented species delimitation analyses to sort out specimens taxonomically. We then described the fine-scale genetic structure of each species and performed isolation-by-distance analyses to detect significant spatial genetic structuring. We finally determined whether isolation-by-distance or ecological filtering contribute to the observed patterns of genetic variation. The study will provide key information on the population dynamics of highly mobile species integral to the iconic Amazonian forest, which may further be employed to refine future conservation policies in the face of accelerating climate change and anthropogenic-mediated deforestation.

The effect of initial water distribution and spatial resolution on the interpretation of ERT monitoring of water infiltration
Dumont, Gaël; Pilawski, Tamara; Robert, Tanguy et al. Poster (2017, July 25)
A better understanding of the water balance of a landfill is crucial for its management, as the waste water content is the main factor influencing the biodegradation process of organic waste. In order to investigate the ability of long electrical resistivity tomography (ERT) profiles to detect zones of high infiltration in a landfill cover layer, low resolution time lapse data were acquired during a rainfall event. Working at low resolution allows us to cover large field areas, but with the drawback of limiting quantitative interpretation. In this contribution, we use synthetic modeling to quantify the effect of the following issues commonly encountered when dealing with field scale ERT data: (i) the effect of low resolution on the interpretation of electrical resistivity changes, (ii) the effect of the original heterogeneous resistivity distribution on the observed relative resistivity changes, (iii) the need for temperature and pore fluid conductivity data in order to compute water content and absolute changes of water content, and (iv) the interpretation error commonly made while neglecting the dilution effect during fresh water infiltration. Firstly, due to the lack of spatial resolution, the regularized inversion process yields a smoothed distribution of resistivity changes that fails to detect small infiltration zones and yields an overestimation of the infiltration depth and an underestimation of the infiltrated volume in large infiltration areas. Secondly, the analysis of relative changes, as commonly used in the literature, is not adequate when the background water content is highly heterogeneous. In such a case, relative changes reflect both the initial water content distribution and the observed changes. Thirdly, the computation of absolute water content changes better reflects the infiltration pattern, but requires spatially distributed temperature and pore fluid conductivity input data. Lastly, the dilution effect, if not considered, leads to an underestimation of the infiltrated volume. Taking into account these elements, we extracted the maximum amount of information from our field data without over-interpreting the results. This allowed the detection of larger infiltration areas possibly responsible for a large part of the annual water infiltration and landfill gas loss.

Shape and spin determination of Barbarian asteroids
Devogele, Maxime; Tanga, P.; Bendjoya, P. et al., in Astronomy and Astrophysics (2017)
The so-called Barbarian asteroids share peculiar, but common polarimetric properties, probably related to both their shape and composition. They are named after (234) Barbara, the first on which such properties were identified. As has been suggested, large scale topographic features could play a role in the polarimetric response, if the shapes of Barbarians are particularly irregular and present a variety of scattering/incidence angles. This idea is supported by the shape of (234) Barbara, which appears to be deeply excavated by wide concave areas revealed by photometry and stellar occultations. Aims. With these motivations, we started an observation campaign to characterise the shape and rotation properties of Small Main-Belt Asteroid Spectroscopic Survey (SMASS) type L and Ld asteroids. As many of them show long rotation periods, we activated a worldwide network of observers to obtain a dense temporal coverage. Methods. We used the light-curve inversion technique in order to determine the sidereal rotation periods of 15 asteroids and the convergence to a stable shape and pole coordinates for 8 of them. By using available data from occultations, we are able to scale some shapes to an absolute size. We also study the rotation periods of our sample looking for confirmation of the suspected abundance of asteroids with long rotation periods. Results. Our results show that the shape models of our sample do not seem to have peculiar properties with respect to asteroids of similar size, while an excess of slow rotators is most probably confirmed.

Modèles de porosité pour les inondations urbaines
Dewals, Benjamin; Bruwier, Martin; El Saeid Mustafa, Ahmed Mohamed et al. Scientific conference (2017, July 25)
# Tag Archives: AP1

## AP Physics 1 Syllabus

I recently was able to get my syllabus approved by the College Board. The approval number is 1485588v1 (Authorized).

## Unit 3: Momentum Transfer Unit

This third unit, on the MTM, is the first significant deviation from the traditional modeling framework. I expect it to take approximately two weeks. It will begin with two carts "exploding" apart and whiteboard meetings to analyze the results (ratio of masses of carts -> ratio of velocities of carts). From there they will proceed through some of the modeling materials for the Momentum Transfer Model as provided by the Modeling Materials. The main focus of the unit will be that momentum is a quantity that is swapped between objects, depicting those swaps with "interaction diagrams" (formally called system schema) (labels of types of forces withheld during this unit), and momentum diagrams (IF charts). Also along the way, I will try to emphasize the similarity between displacement (being the term for a change in position) and impulse (being the term for a change in momentum).

The first worksheet is the same as the first worksheet from the Modeling Materials. It looks at mainly qualitative events and has the students determine relative momenta or impulses. We then look at numerous collisions to see if momentum is conserved in collisions as well as in the explosions seen in the lab. We skip the second worksheet provided by the Modeling Materials, as most of those problems focus on calculating impulses from force and time; these types of problems will be addressed later in the UFPM unit. The new second and third worksheets use momentum diagrams to solve collision problems. We end with additional problems for review.

The students' goals for this unit are: SWBAT

1. create an interaction diagram including the identification of the system.
2. create a momentum diagram (IF diagram) for an event.
3. interpret a momentum diagram by creating a mathematical model of an event.
4. correctly solve problems involving an exchange of momentum.

## Unit 2: Constant Acceleration Particle Model

This second unit, on the CAPM, will proceed along the traditional modeling framework. I expect it to take approximately two and a half weeks. It will begin with a ball rolling (or cart sliding) down an inclined plane and whiteboard meetings to analyze the results. From there they will proceed through the modeling materials for the Constant Acceleration Particle Model as provided by the Modeling Materials. The first worksheet allows them to analyze additional data sets similar to what they saw in the lab. The second worksheet has the students create motion maps, position-time, velocity-time, and acceleration-time graphs for more complicated ramp systems. Worksheet 3 focuses on analyzing position-time and velocity-time graphs. Worksheet 4 has the students solve quantitative problems. We end with additional problems for review.

The students' goals for this unit are: SWBAT

1. create and interpret graphical and mathematical representations of objects moving with constant acceleration.
2. correctly differentiate between acceleration and velocity.
3. correctly interpret the meaning of the sign of acceleration.
4. solve kinematic problems involving constant acceleration.

## Unit 1: Constant Velocity Particle Model

This introductory unit, on the CVPM, will proceed along the traditional modeling framework with only a few additions. I expect it to take approximately three weeks. It will begin with the Buggy Lab and whiteboard meetings to analyze the results.
The only change from the traditional progression will be to first complete the "Graphing Practice" worksheet from the Scientific Methods unit. From there they will proceed through the modeling materials for the Constant Velocity Particle Model as provided by the Modeling Materials. Thus the new worksheet 2 will be a worksheet that focuses on the students converting between position-time graphs, motion maps, and verbal descriptions. Worksheet 3 will then add velocity-time graphs to the mix. Worksheet 4 brings back data analysis and converting to the other representations. Worksheet 5 does the same, but for slightly more difficult situations. We end with additional problems for review.

For those using Standards Based Grading, the first draft of my standards is as follows. Students will be able to (SWBAT):

1. design an experiment that properly controls variables
2. report measurements and calculations with proper precision
3. develop a mental model that correctly explains and predicts an event
4. algebraically solve an equation for a given variable
5. create a scatter plot of independent and dependent data points
6. linearize data points
7. create a mathematical model of a graph
8. create and interpret graphical and mathematical representations of objects moving with constant velocity
9. distinguish between position, distance and displacement
10. solve problems involving average speed or average velocity

I'm fully aware that this is a long list of standards, and quite possibly too many standards. Any feedback on whether or not the list should be adjusted (and how) would be greatly appreciated.

## AP Physics 1 Storyline

As I prepare for the transition to the AP 1 course, I've taken this school year (2013-2014) to begin trying some things out. Based on what I've tried, here is the storyline I'm going to use with my AP 1 students next year. Before I get into the details, I do plan to use Modeling Instruction throughout the course. If you haven't had the chance to take a workshop, do yourself a favor and find one. Also, I plan to make future posts providing more detail for each unit.

We begin the year jumping right into the Constant Velocity Particle Model (CVPM). The students at my school come out of chemistry, and for the most part have decent skills when it comes to doing labs. Although we don't use modeling in our other science classes, they have a majority of the basic skills, so I save a little time by skipping the Scientific Methods unit. Our second unit then progresses to the Constant Acceleration Particle Model (CAPM). As I mentioned, I plan to give more detail later, but for those familiar with the materials provided, I'm not doing much different from those documents.

The third unit is where I make my first big adjustment from the traditional modeling curriculum. After reading numerous posts from some of the bloggers I admire the most (read Momentum is King, Kelly O'Shea's blog, and more recently Mazur's Physics Textbook), I decided to try out teaching momentum before Newton's Laws. During this third unit, the Momentum Transfer Model (MTM), we focus on interaction diagrams and the swapping of momentum as the mechanism of physical interactions. We stress the choosing of a system, and that momentum swaps within the system, or swaps out of the system as an impulse. In the end, we are building the concept of Newton's Third Law. In addition to what I call Interaction Diagrams (others call them system schema), we also introduce the Momentum Diagrams (IF Charts).
We hold off on discussing collisions in great detail until after impulses are further studied in unit 5. The fourth unit, the Balanced Forces Particle Model (BFPM), then begins to bring in the concept of forces as the rate of swapping momentum. Here we introduce the major contact forces: normal, tension, friction (name, not equation), and the non-contact gravitational force. We also begin using force diagrams to determine whether the forces are balanced or not. We stress one way of understanding Newton's 1st Law as "balanced forces -> no acceleration, unbalanced forces -> acceleration."

In the fifth unit, the Unbalanced Forces Particle Model (UFPM), we now get into Newton's 2nd Law in two ways. One, the classic:

$a=\frac{F_{net}}{m}$

And two, we build the parallel between kinematics and Newton's Laws. In kinematics, the slope of a position-time graph gives the velocity-time graph, and the slope of the velocity-time graph gives the acceleration-time graph. Finding the area allows us to go the other way. The same is then true of momentum and forces: the slope of a momentum-time graph gives force vs. time, while the area under a force vs. time graph gives the change in momentum (impulse). For those students going on to calculus-based physics, this helps lay the groundwork. For the rest, it shows a nice connection between these different models. With this new information, we can now add a force vs. time graph into the momentum graphs and make "IFF" graphs. Other features of the unit are the building of the equation for friction in relation to the normal force, and the independence of components by looking at 2D projectile motion problems.

We then wrap up the first semester with our 6th unit, the Energy Transfer Model (ETM). After building the concept of energy storage through Energy Diagrams (LOL Diagrams), we discover that energy storage is a "cheat" to help us solve more complex problems, since it is a second conserved quantity. We come back to collisions and find that, in elastic collisions, we can now build a second conservation equation: 1. momentum (IF charts) and 2. energy (LOL diagrams). As a review of the first semester, the students will then have to build a paper car that will hold an egg inside. They will have two tests: 1) a speed test to see who has the fastest car and 2) a crash test to see who has the safest car. During the design, they must make use of all the models we have built this semester.

To start the second semester, we begin studying the Central Force Particle Model (CFPM), or what most people would call uniform circular motion. In this unit we also build in the concepts of Newton's Universal Gravitation and satellite motion. In unit 8 we move on to full rotational motion, the Rotating Bodies Model (RBM). To be honest, I may try to split this up into two units, as it's got a lot of stuff going on. In short, within this unit we retrace units 1-6, but in the rotating or polar frame of reference. We begin with rotational kinematics ($\theta$ vs. t, $\omega$ vs. t, and $\alpha$ vs. t). Afterwards, we build in dynamics with angular momentum, torque, and rotational energy storage. In unit 9, we move on to harmonic motion with the Oscillating Particle Model (OPM). Overall, we stay pretty true to the modeling materials here. We start by looking at a bouncing mass hanging from a spring. We later bring in pendulum motion. In unit 10, we then move on to the Mechanical Wave Model (MWM), in which we build a mental model of coupled oscillators.
From what I can tell, AP-1 only focuses on one-dimensional waves, so we looked at boundary effects: reflection (open/fixed) and refraction. We also build in wave superposition. We begin looking at sound waves and Doppler shifts as further examples of waves. At least so far, I don’t build in diffraction through “narrow” slits or 2D interference patterns.

In the last unit, we then look at circuits in what I call the Charge Flow Model (CFM). We begin by looking at sticky tape activities to introduce the electric force and electric energy. During that discussion we bring in the concept of gravitational potential ($gh$) to help understand the concept of electric potential ($V$). From there, we have them build simple circuits with lightbulbs, then move on to simple circuits with fixed resistors while measuring the current (the flow rate of charge). We eventually get to adding multiple resistors in series and in parallel and try to create a model that explains how the resistors add in these two different ways.

To review the entire year, we then do a video analysis project in which the students must analyze movie, TV, or internet videos and determine how feasible those scenes actually are. Here is an example from which I got my idea:
# Trigonometric functions #### Vicktor Joined Nov 20, 2007 13 Hello there, I'm designing a calculator using a PIC16F628 and I would like to add to it the trigonometric functions (i.e. sin, cos, tan, cot). The question is, is there a possible way to do so, without using tables. Some kind of algorithm I mean in Asm or C for example. I would appreciate it! ~Vicktor P.S. First post #### recca02 Joined Apr 2, 2007 1,214 trigonometric functions can be converted using series definitions maybe u can use those to get values. #### Vicktor Joined Nov 20, 2007 13 Wow, thank you both for your support! To be honest, I'll definitely use the CORDIC method. ~Vicktor #### Papabravo Joined Feb 24, 2006 14,864 Wait. The Taylor series mentioned in the article converges way too slowly to be of much use. What you want is the Chebyshev Polynomial approximation. These polynomial approximations bound the absolute magnitude of the errors over the interval of interest. #### Vicktor Joined Nov 20, 2007 13 I'm sorry mate, but I didn't understand anything you said, can you please explain? And if possible show an example Sorry I'm a little stupid at math. ~Vicktor #### Papabravo Joined Feb 24, 2006 14,864 The Taylor series is an infinite series of terms that will eventually converge to any function you care to name, like sine, cosine, and so forth. The big problem with using them for preactical calculations is that they converge to the correct value slowly, require a large number of terms, and the errors increase as you get further away from the point of expansion. The article that recca02 pointed you too is useless for your purposes. The Chebyshev polynmoials on the other hand converge quickly with a small number of terms and the magnitude of the error over the interval of interest is bounded. That means that over the whole interval where you want to employ the polynomial approximation the errors can't get bigger than some small number which depends on the number of terms chosen. http://en.wikipedia.org/wiki/Taylor_series http://math.fullerton.edu/mathews/n2003/ChebyshevPolyMod.html How many bits do you plan to use for representing your numbers? #### agentofdarkness Joined Oct 9, 2007 42 I'm writing a sine function in Microsoft ASM right now actually. Almost done. Cosine is similar except you change n from odd to even in the Taylor series. The way I am doing it is by getting an accurate approximation between -2pi and +2pi (9th order Taylor series for sine). Then I take any number and convert it to the equivalent value between -2pi and 2pi. It works pretty well, I still need to add the 9th term to the series to get a better approximation. It does use the FPU though. You could do tan by sin/cos and the same for other functions. #### Papabravo Joined Feb 24, 2006 14,864 You'll be amazed at the difference in error behavior between a 9th order Taylor Series and a 9th order Chebyshev Polynomial. What fixed point are you using for the Taylor Series Expansion? #### agentofdarkness Joined Oct 9, 2007 42 Zero, I'm not shifting the polynomial at all. The assignment is to use a Taylor expansion and I don't know what a Chebyshev Polynomial is. I'm not too concerned with the accuracy, its just for an assignment but I will probably end up going to the 11th order. #### Papabravo Joined Feb 24, 2006 14,864 More's the pity #### recca02 Joined Apr 2, 2007 1,214 Is it possible to use the relation between exponential and trigonometric function in this case? #### Papabravo Joined Feb 24, 2006 14,864 I don't see how that will be helpful. 
In the math libraries of all C compilers, and in Matlab and R, Chebyshev polynomials are used for evaluating those functions. In fact ALL transcendental functions are evaluated this way. The key feature of all of these libraries is to minimize the effort involved. This means selecting a polynomial that minimizes the number of operations like multiplications and additions, AND (this is the really big one) controlling the absolute magnitude of the errors over the interval of evaluation. In their classic work, Abramowitz and Stegun give you many useful polynomial approximations. You should check them out. Even if you don't use one of their polynomial approximations, their tables of function values can be used to verify your own calculations.

http://www.amazon.com/Handbook-Math...d_bbs_2?ie=UTF8&s=books&qid=1195915938&sr=1-2

#### recca02

Joined Apr 2, 2007 1,214

I don't have much idea about what is used in programming, nor do I remember learning about Chebyshev polynomials in engineering. What I wanted to know was: the relation between trigonometric functions and exponentials, which is not an approximation at all, can that be used for calculation of the functions? Would that not be more accurate?

#### Papabravo

Joined Feb 24, 2006 14,864

In digital computers the exponential and logarithm functions are evaluated in the same way as the sine, the cosine, and the tangent. They are all transcendental functions and they are all evaluated with polynomials of one sort or another. Using an exact relationship between trigonometric functions and exponential functions is not helpful, since you trade the evaluation of one polynomial for another. You might as well evaluate the polynomial with the best performance. We already know what polynomial that is. If you don't want to exert the mental effort to research and learn something new then by all means continue on the path you have chosen. I really could not care less.

#### recca02

Joined Apr 2, 2007 1,214

Somehow I did not make myself clear. I'm not concerned about Taylor series or polynomials. I want to know (since my knowledge in this field is limited) whether an operation of the type e^(ix) is possible? If at all it is, is the answer not accurate enough to be straight away used for finding values of trigonometric functions?

#### Papabravo

Joined Feb 24, 2006 14,864

NO! For the third and last time, an exponential function like e^(ix) would be evaluated with a polynomial, like ALL TRANSCENDENTAL FUNCTIONS ARE EVALUATED. It is not helpful to substitute one type of polynomial evaluation for another one. If you already have a calculation engine such as a TI-89 or Excel and you want to evaluate the sine of an angle, you do it directly. Those engines will of course use a polynomial approximation to evaluate the function for you. If you want to write a program for a microcontroller that does not have such a library, then you have to go back to first principles. That requires the use of a polynomial approximation. It is the only tool that we have in the tool bag to evaluate (here's that word again) TRANSCENDENTAL FUNCTIONS. There simply is NO OTHER WAY. Done, Fini, & Basta cosi

#### recca02

Joined Apr 2, 2007 1,214

I see it now, thanks.

#### Vicktor

Joined Nov 20, 2007 13

Wow, I didn't realise that the topic would gain such popularity! Anyways, I'm still looking for a nice, juicy PIC asm source code. Help me with that and I won't try to rip off the math library of the Hi-tech PICC compiler! ~Vicktor

#### Papabravo

Joined Feb 24, 2006 14,864

It will be a two-step process.
1. Find a polynomial approximation of interest
2. Write an efficient polynomial evaluator

For example, on p. 76 of Abramowitz and Stegun we find the following:

cos(x) = 1 + a2*x^2 + a4*x^4
a2 = -0.49670
a4 = 0.03705

for 0 <= x <= pi/2, with an error less than or equal to 9e-4
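To make step 2 concrete, here is a small, illustrative sketch (plain C++ with double arithmetic, not PIC-ready code; on a PIC without an FPU you would swap in fixed-point) that evaluates the Abramowitz & Stegun quadrant polynomial above with Horner's scheme and a simple range reduction. The helper and function names are my own:

#include <cmath>
#include <cstdio>

// Low-order cosine approximation valid on [0, pi/2]
// (coefficients from the Abramowitz & Stegun example above, |error| <= 9e-4).
static double cos_quadrant(double x) {
    const double a2 = -0.49670;
    const double a4 =  0.03705;
    double x2 = x * x;
    return 1.0 + x2 * (a2 + x2 * a4);   // Horner form: two multiplies, two adds
}

// Reduce an arbitrary angle to [0, pi/2] and fix the sign by quadrant,
// then evaluate the quadrant polynomial. (Hypothetical helper for illustration.)
double cos_approx(double x) {
    const double pi = 3.14159265358979323846;
    x = std::fabs(std::fmod(x, 2.0 * pi));           // cos is even and 2*pi periodic
    int quadrant = static_cast<int>(x / (pi / 2.0)); // 0..3
    double r = x - quadrant * (pi / 2.0);            // remainder in [0, pi/2)
    switch (quadrant) {
        case 0:  return  cos_quadrant(r);
        case 1:  return -cos_quadrant(pi / 2.0 - r);
        case 2:  return -cos_quadrant(r);
        default: return  cos_quadrant(pi / 2.0 - r);
    }
}

int main() {
    for (double x = 0.0; x <= 6.4; x += 0.8)
        std::printf("x = %4.1f  approx = %+8.5f  libm = %+8.5f\n",
                    x, cos_approx(x), std::cos(x));
    return 0;
}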
# How do I find all the solutions of three simultaneous equations within a given box?

Sometimes, one needs to find all the solutions of three simultaneous nonlinear equations in three unknowns \begin{align*}f(x,y,z)&=0\\g(x,y,z)&=0\\h(x,y,z)&=0\end{align*} within a given cuboidal domain; that is, all triples $(x,y,z)$ satisfying the three equations given above, and within the region defined by $x_{\min}\leq x\leq x_{\max}$, $y_{\min}\leq y\leq y_{\max}$, $z_{\min}\leq z\leq z_{\max}$. (I restrict the discussion here to transcendental equations; algebraic equations are not too problematic for Mathematica (Solve[]/NSolve[], Resultant[], GroebnerBasis[]...))

How can I use Mathematica to find these solutions? FindRoot[] can only find one solution, and you still need an approximate location as a starting point for FindRoot[]. NSolve[] works (sometimes), but it takes long.

---

Options[FindAllCrossings3D] =
  Sort[Join[Options[FindRoot],
    {PerformanceGoal :> $PerformanceGoal, PlotPoints -> Automatic}]];

FindAllCrossings3D[funcs_?VectorQ, {x_, xmin_, xmax_}, {y_, ymin_, ymax_},
   {z_, zmin_, zmax_}, opts___] :=
  Module[{contourData, seeds, tt, fz = Compile[{x, y, z}, Evaluate[funcs[[3]]]]},
   contourData = Cases[Normal[ContourPlot3D[
       Evaluate[Most[funcs]], {x, xmin, xmax}, {y, ymin, ymax}, {z, zmin, zmax},
       BoundaryStyle -> {1 -> None, 2 -> None, {1, 2} -> {}},
       ContourStyle -> None, Mesh -> None, Method -> Automatic,
       Evaluate[Sequence @@ FilterRules[Join[{opts}, Options[FindAllCrossings3D]],
          Options[ContourPlot3D]]]]],
     Line[l_] :> l, Infinity];
   seeds = Flatten[Pick[Rest[#], Most[#] Rest[#] &@Sign[Apply[fz, #, 2]], -1] & /@
      contourData, 1];
   If[seeds === {}, seeds,
    Select[Union[Map[{x, y, z} /.
         FindRoot[funcs, Transpose[{{x, y, z}, #}],
          Evaluate[Sequence @@ FilterRules[Join[{opts}, Options[FindAllCrossings3D]],
             Options[FindRoot]]]] &, seeds]],
     (xmin < #[[1]] < xmax && ymin < #[[2]] < ymax && zmin < #[[3]] < zmax) &]]]

As an example of how to use FindAllCrossings3D[]:

sols = FindAllCrossings3D[
  {Sin[x + y] Sin[y - z], Cos[x] Cos[y] - Sin[z], x^2 + y^2 + z^2 - 9},
  {x, -4, 4}, {y, -4, 4}, {z, -4, 4}]

{{-2.80293, -0.756176, -0.756176}, {-2.78082, -0.360773, -1.06625},
 {-2.11276, 2.11276, 0.269309}, {-1.14056, -0.395145, 2.74645},
 {-1.14056, 2.74645, -0.395145}, {-0.883563, 0.883563, 2.72739},
 {-0.360773, -2.78082, -1.06625}, {0.360773, 2.78082, -1.06625},
 {0.883563, -0.883563, 2.72739}, {1.14056, -0.395145, 2.74645},
 {1.14056, 2.74645, -0.395145}, {2.11276, -2.11276, 0.269309},
 {2.78082, 0.360773, -1.06625}, {2.80293, -0.756176, -0.756176}}

The routine found $14$ solutions. To visualize the solutions, we can do the following:

l1 = Cases[Normal[ContourPlot3D[{Sin[x + y] Sin[y - z], Cos[x] Cos[y] - Sin[z]},
     {x, -4, 4}, {y, -4, 4}, {z, -4, 4},
     BoundaryStyle -> {1 -> None, 2 -> None, {1, 2} -> {}},
     ContourStyle -> None, Mesh -> None]],
   Line[l_] :> l, Infinity];

Graphics3D[{Line[l1], Sphere[{0, 0, 0}, 3], Sphere[sols, 1/10]}, Axes -> Automatic]

where we used small spheres to mark the intersections of the space curves formed by the intersection of $\sin(x+y)\sin(y-z)=0$ and $\cos x\,\cos y=\sin z$, and the sphere $x^2+y^2+z^2=9$.
# Revision history [back] After some search, I think I can now answer my own questions. What is the fast pyramids approach? On browsing the OpenCV source code, in optflowgf.cpp, I found the following lines: // Crop unnecessary levels double scale = 1; int numLevelsCropped = 0; for (; numLevelsCropped < numLevels_; numLevelsCropped++) { scale *= pyrScale_; if (size.width*scale < min_size || size.height*scale < min_size) break; } The above lines crop the pyramid levels which are smaller than min_size x min_size. Furthermore, min_size is defined, still in optflowgf.cpp, as const int min_size = 32; Finally, again in optflowgf.cpp, I found if (fastPyramids_) { // Build Gaussian pyramids using pyrDown() pyramid0_.resize(numLevelsCropped + 1); pyramid1_.resize(numLevelsCropped + 1); pyramid0_[0] = frames_[0]; pyramid1_[0] = frames_[1]; for (int i = 1; i <= numLevelsCropped; ++i) { pyrDown(pyramid0_[i - 1], pyramid0_[i]); pyrDown(pyramid1_[i - 1], pyramid1_[i]); } } I would then say that fast pyramids skip too small pyramid levels. In which way are we smoothing derivatives? From Farneback's paper "Two-Frame Motion Estimation Based on Polynomial Expansion", my understanding is that the window function involved in eq. (12) is a Gaussian. From this point of view, polyN x polyN is the size of the window, while polySigma is the standard deviation of the Gaussian.
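For completeness, here is a minimal usage sketch showing how these parameters are exposed through the high-level interface (this assumes an OpenCV 3/4-style `cv::FarnebackOpticalFlow` API; the frame file names are placeholders, and the parameter values simply echo the discussion above, with fast pyramids switched on):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/video.hpp>

int main() {
    // Two consecutive grayscale frames (placeholder file names).
    cv::Mat frame0 = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat frame1 = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);

    // numLevels = 5, pyrScale = 0.5, fastPyramids = true (levels below 32x32 are
    // cropped and the pyramid is built with pyrDown), winSize = 13, numIters = 10,
    // polyN = 5 (size of the polynomial-expansion neighbourhood),
    // polySigma = 1.1 (std. dev. of the Gaussian window), flags = 0.
    auto farneback = cv::FarnebackOpticalFlow::create(5, 0.5, true, 13, 10, 5, 1.1, 0);

    cv::Mat flow;                       // output: 2-channel float flow field
    farneback->calc(frame0, frame1, flow);
    return 0;
}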
# Thread: Find Parametric Equation for Moving Particle 1. ## Find Parametric Equation for Moving Particle Hello. I am familiar with parametric equations but the way this one is being asked is throwing me off. Find parametric equations for the path of a particle that moves along the circle (x-1)^2 + (y+2)^2 = 4 three times clockwise, starting at point (1,-4). 2. Originally Posted by lindsmitch Hello. I am familiar with parametric equations but the way this one is being asked is throwing me off. Find parametric equations for the path of a particle that moves along the circle (x-1)^2 + (y+2)^2 = 4 three times clockwise, starting at point (1,-4). i assume you know the (counter-clockwise) way to parameterize a circle, just do it the other way. you then want to choose the angle so that you get 3 revolutions out of it, beginning at the indicated point. How's that? 3. Hello, lindsmitch! $\displaystyle \text{Find parametric equations for the path of a particle}$ $\displaystyle \text{that moves along the circle: }\:(x-1)^2 + (y+2)^2 \:=\: 4$ $\displaystyle \text{ three times clockwise, starting at point (1, -4)}$ The path is a circle, center (1,-2) and radius 2. The curve starts at "6 o'clock" and moves clockwise for 3 revolutions. There is a variety of ways to write the parametric equations. . . I'll use the easiest way (for me). . . $\displaystyle \begin{Bmatrix}{x &=& 1 + 2\cos\theta \\ y &=& \text{-}2 + 2\sin\theta \end{Bmatrix}\quad \text{ for }\,\theta = \frac{3\pi}{2}\,\text{ to }\,\theta = \text{-}\frac{9\pi}{2}$ 4. Thank you both very much. That was very helpful. In the future, how would I have arrived at those parametric equations Soroban? Did you just use the conversion x = r * cos(theta) and y = r*sin(theta)?
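For the record, a quick check of where Soroban's equations come from (just substitution, nothing deeper): the circle $(x-1)^2+(y+2)^2=4$ has centre $(1,-2)$ and radius $2$, so the generic parameterization is $x = 1 + 2\cos\theta$, $y = -2 + 2\sin\theta$, which is exactly the conversion $x = h + r\cos\theta$, $y = k + r\sin\theta$ mentioned in the last post. At $\theta = \frac{3\pi}{2}$ this gives $x = 1 + 2\cos\frac{3\pi}{2} = 1$ and $y = -2 + 2\sin\frac{3\pi}{2} = -4$, the required starting point, and letting $\theta$ run from $\frac{3\pi}{2}$ down to $-\frac{9\pi}{2}$ decreases the angle by $6\pi$, i.e. three full revolutions traversed clockwise.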
# Anchors

Documenter.Anchors.AnchorType

Stores an arbitrary object called .object and its location within a document.

Fields

• object – the stored object.
• order – ordering of object within the entire document.
• file – the destination file, in build, where the object will be written to.
• id – the generated "slug" identifying the object.
• nth – integer that unique-ifies anchors with the same id.

source

Documenter.Anchors.AnchorMapType

Tree structure representing anchors in a document and their relationships with each other.

Object Hierarchy

id -> file -> anchors

Each id maps to a file, which in turn maps to a vector of Anchor objects.

source
Acids are neutralised by bases

A neutralisation reaction is one in which an acid reacts with a base to form water. A salt is also formed in this reaction. Bases are metal oxides, metal hydroxides and metal carbonates. In the neutralisation reaction between an acid and a metal carbonate, there are three products: a salt, water and also carbon dioxide gas.

$\begin{array}{l} \text{Hydrochloric acid} + \text{calcium carbonate} \to \\ \text{calcium chloride} + \text{water} + \text{carbon dioxide} \end{array}$

The salt is named in the same way as before, taking the metal's name from the metal carbonate and the ending from the type of acid used. Carbon dioxide can be tested for using lime water (it turns from colourless to chalky white).
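As an illustration, the word equation above corresponds to the following balanced symbol equation (a standard textbook example, added here for concreteness):

$2\,\text{HCl} + \text{CaCO}_3 \to \text{CaCl}_2 + \text{H}_2\text{O} + \text{CO}_2$

The salt, calcium chloride, takes the "calcium" from the metal carbonate and the "chloride" ending from the hydrochloric acid.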
Multiple accretion events as a trigger for Sgr A* activity # Multiple accretion events as a trigger for Sgr A* activity Bożena Czerny Nicolaus Copernicus Astronomical Centre, Bartycka 18, P-00716 Warsaw, Poland    Devaky Kunneriath Astronomical Institute, Academy of Sciences, Boční II 1401, CZ-14100 Prague, Czech Republic    Vladimír Karas Astronomical Institute, Academy of Sciences, Boční II 1401, CZ-14100 Prague, Czech Republic    Tapas K. Das Harish Chandra Research Institute, Allahabad 211019, India Received 19 September 2011; Accepted 16 May 2013 ###### Key Words.: accretion, accretion discs – black hole physics – Galaxy: centre – black holes: individual galaxies: Sgr A* ###### Abstract Context:Gas clouds are present in the Galactic centre, where they orbit around the supermassive black hole. Collisions between these clumps reduce their angular momentum, and as a result some of the clumps are set on a plunging trajectory. Constraints can be imposed on the nature of past accretion events based on the currently observed X-ray reflection from the molecular clouds surrounding the Galactic centre. Aims:We discuss accretion of clouds in the context of enhanced activity of Sagittarius A* during the past few hundred years. We put forward a scenario according to which gas clouds bring material close to the horizon of the black hole on parsec scale. Methods:We have modelled the source intrinsic luminosity assuming that multiple events occur at various moments in time. These events are characterized by the amount of accreted material and the distribution of angular momentum. We parameterized the activity in the form of a sequence of discrete events, followed the viscous evolution, and calculated the luminosity of the system from the time-dependent accretion rate across the inner boundary. Results:Accreting clumps settle near a circularization radius, spread there during the viscous time, and subsequently feed the black hole over a certain period. A significant enhancement (by factor of ten) of the luminosity is only expected if the viscous timescale of the inflow is very short. On the other hand, the increase in source activity is expected to be much less prominent if the latter timescale is longer and a considerable fraction of the material does not reach the centre. Conclusions:A solution is obtained under two additional assumptions: (i) the radiative efficiency is a decreasing function of the Eddington ratio; (ii) the viscous decay of the luminosity proceeds somewhat faster than the canonical profile. We applied our scheme to the case of G2 cloud in the Galactic centre to obtain constraints on the core-less gaseous cloud model. ## 1 Introduction The centre of the Milky Way galaxy contains a supermassive black hole of mass (where the dimensionless is about unity; Genzel et al. 2010). The present activity of the Sagittarius A* (Sgr A*) nuclear source is very low, probably because the accretion rate is very low and radiatively inefficient (Eckart et al. 2005; Melia 2007). The mass accretion rate is estimated within the range from yr to a few times yr, as implied by the measurement of the Faraday rotation (Marrone et al. 2007; Ferrière 2009). Because many stars are present in the region and form a dense nuclear cluster, the material to support the observed level of activity can be provided by stellar mass loss in its entirety (Wardle & Yusef-Zadeh 1992; Melia 1992; Coker & Melia 1997; Loeb 2004; Rockefeller et al. 2004; Cuadra et al. 2005, 2006; Moscibrodzka et al. 2006). 
In fact, a number of papers address the question of why only a fraction of the material from stellar winds reaches the black hole (e.g., Quataert 2004; Shcherbakov & Baganoff 2010). Despite the low level of the current activity of the Galactic centre, Integral and XMM-Newton observations of the X-ray reflection from molecular clouds in the Sgr A* region seem to imply that just a few hundred years ago Sgr A* was orders of magnitude brighter than it is currently (Sunyaev et al. 1993, 1998; Koyama et al. 1996; Muno et al. 2007; Inui et al. 2009; Ponti et al. 2010, 2011; Terrier et al. 2010; Nobukawa et al. 2011; Capelli et al. 2012; Ryu et al. 2013). As shown by Cuadra et al. (2008), stellar winds do not explain such an enormous and relatively quick change in luminosity. Instead, an occasional inflow of clumps of fresh material is more likely, and one such cloud (G2) is now approaching the Galactic centre (Gillessen et al. 2012, 2013). It is thought that the G2 cloud should reach its closest distance in mid 2013 or early 2014. (Length units of the Schwarzschild radius are used as a natural scale of the problem, where $G$ denotes the gravitational constant and $c$ is the speed of light.) Then the cloud should become disrupted by its interaction with the ambient medium. The material will then gradually diffuse towards the black hole. It has been suggested that the silhouette of the supermassive black hole will be smeared as a result of the rising luminosity of Sgr A* (Moscibrodzka et al. 2012).

In this paper we focus our attention on a specific subtopic, namely, we study the constraints on the number, frequency and strength of such events in the past, based on the currently observed X-ray reflection from molecular clouds surrounding Sgr A*. This question is also interesting in the context of the variability that the source exhibits on different timescales in different energy bands, but currently it is difficult to understand whether the vastly disparate scales are driven by the same underlying mechanism (Witzel et al. 2012). At this stage we approach the problem by constructing a simplified scheme, and we find that a relatively simple set-up leads us to very reasonable estimates and dependencies, although we cannot yet develop a precise model. We demonstrate that repetitive accretion events are able to explain the basic properties of the changing activity of the Sgr A* black hole during the past several hundred years. A simple scheme allows us to set interesting constraints on the model parameters.

In the next section we describe the model and illustrate its application in two versions – for a single accretion event and for a sequence of repeated events. Then, in Sect. 3, we use this scheme to put constraints on a possible form of the observed lightcurve of the signal reflected from clouds surrounding Sgr A* at a certain distance. In particular, we concentrate our attention on the Sgr B2 and Sgr C1 Galactic centre clouds, for which sufficiently accurate (3D) positions have been reported in the literature. We discuss different aspects of the model in Sect. 4, and we summarize our conclusions in Sect. 5. At present we cannot proceed far beyond the qualitative exposition of the model and the discussion of basic quantitative constraints, but this should be possible in future when more data points are added to the observed lightcurve and the positions of the reflecting clouds are determined with better accuracy for more clouds in the region.
## 2 Method

We model the intrinsic luminosity of Sgr A* and the luminosity of the reflection from a distant molecular cloud assuming discrete (multiple) accretion events that happen at various moments in time and are characterized by a distribution of angular momentum of the infalling clouds. We develop the model in two steps: first, in the next section we describe the relevant aspects of the individual accretion events, when the discrete clouds get close to the centre and then their material becomes dispersed and captured by the black hole. Then we proceed to the case of multiple accretion events that recur over a period of time.

### 2.1 Description of a single accretion event

Direct accretion of a gaseous clump is not possible if the angular momentum of the infalling gas exceeds a certain threshold. If the inflowing clump (or a stellar body) becomes tidally disrupted before reaching the pericentre, part of its material is lost, and the remaining fraction gathers to form a ring at the circularization radius. Later, as the disc viscosity starts to operate, the ring can disperse in radius and trigger an accretion event. This phenomenon was studied analytically and numerically in a number of papers, especially in the context of putative stellar disruptions and the resulting flares occurring close to a massive black hole (e.g. Frank & Rees 1976; Evans & Kochanek 1989; MacLeod et al. 2012; Reis et al. 2012, and further references cited therein). In the Galactic centre the flares are known to occur on a daily basis and can be detected over a broad range of wavelengths (Eckart et al. 2008, 2012; Kunneriath et al. 2010). Here we aim at studying a broad range of parameters with a simplified scheme, so we follow the analytical approach to the description of the cloud disruption and the subsequent decay phase of the X-ray lightcurve.

We start from the equations governing the evolution of an accretion disc (Lynden-Bell & Pringle 1974),

$$\frac{\partial \Sigma}{\partial t}=\frac{1}{2\pi R}\,\frac{\partial \dot{M}}{\partial R}, \quad (1)$$

$$\dot{M}=6\pi R^{1/2}\,\frac{\partial}{\partial R}\!\left(R^{1/2}\nu\Sigma\right), \quad (2)$$

which operate on the viscous timescale; $\Sigma$ and $\dot{M}$ are the disc surface density and the accretion rate at a given radius $R$. Kinematic viscosity can be set as either constant or a power-law function of the radius,

$$\nu=\nu_0\left(\frac{R}{R_0}\right)^{n}. \quad (3)$$

Such a prescription is more general than the standard Shakura–Sunyaev scheme. By adopting specific values of the power-law index $n$, one can mimic the gas-dominated regime, the radiation-pressure dominated regime, or the isothermal disc regime of the standard model (see Kato et al. 1998; Zdziarski et al. 2009). These equations need to be supplemented with the time-dependent description of the disc heating and cooling processes, and solved numerically. Analytical solutions are possible under certain simplifying constraints. What is important is that the analytical solutions for the time and radial dependences of the disc surface density and the local accretion rate are the Green functions of the problem. A general solution can thus be obtained as a superposition of these elementary solutions. Green's functions of the problem adopt the following form:

$$G_{\Sigma}(R,t)=\frac{2\Sigma_0|\mu|\,\xi^{1/\mu-9/2}}{\tau}\exp\!\left[-\frac{2\mu^{2}\left(\xi^{1/\mu}+1\right)}{\tau}\right]I(\mu;\xi,\tau), \quad (4)$$

$$G_{\dot{M}}(R,t)=\dot{M}_0\,\frac{|\mu|}{\tau}\,\frac{\partial}{\partial\xi}\left\{\xi^{1/2}\exp\!\left[-\frac{2\mu^{2}\left(\xi^{1/\mu}+1\right)}{\tau}\right]I(\mu;\xi,\tau)\right\}, \quad (5)$$

where

$$I(\mu;\xi,\tau)\equiv I_{|\mu|}\!\left[\frac{4\mu^{2}\,\xi^{1/(2\mu)}}{\tau}\right] \quad (6)$$

denotes the modified Bessel function of the first kind,

$$\dot{M}_0=\frac{4\pi R_0^{2}\Sigma_0}{t_{\rm visc}(R_0)},\qquad \mu=\frac{1}{4-2n},\qquad \tau=\frac{t}{t_{\rm visc}(R_0)},\qquad t_{\rm visc}=\frac{2R_0^{2}}{3\nu}, \quad (7)$$

and $\xi$ is the dimensionless radial coordinate (scaled with the outer radius $R_0$ where the mass is supplied). Equation (4) was studied by, e.g., Mineshige & Wood (1989), Lyubarskii (1997), and Kotov et al. (2001).
Equation (5) was originally derived and discussed by Zdziarski et al. (2009). An implicit assumption underlying these equations is that the initial distribution of the mass adopts the form of an equatorial ring encircling the black hole at a constant radius, $R_0$, with the initial profile of the surface density concentrated at that radius, where $M_0$ is the total mass of the ring. We are interested in the temporal evolution of the luminosity. Most energy is liberated in the form of radiation at the inner part of the accretion disc; therefore, in further considerations, we can use the dimensionless asymptotic expression (which is strictly valid only in the limit of small radii, $R\ll R_0$; Zdziarski et al. 2009),

$$\tilde{G}_{\dot{M}}(\tau)=\dot{M}_0\,\frac{(2\mu^{2})^{\mu}}{\Gamma(\mu)}\,\tau^{-1-\mu}\exp\!\left(-\frac{2\mu^{2}}{\tau}\right), \quad (8)$$

where $\Gamma(\mu)$ is the Euler gamma function. The duration of the bright phase of such an event is expected to be close to the viscous time at the circularization radius, with a sharp rise and a decay phase in the form of a power law with a slope equal to $-(1+\mu)$. Thus, the duration of the brightest phase of the large angular momentum event would be close to the viscous timescale of the standard disc, (9), strongly depending on the value of the circularization radius and the ratio of the disc thickness to the radius. In the case of a single discrete event, a dimensional formula for the time-dependent accretion rate of material overflowing across the inner edge is most conveniently expressed as

$$\dot{M}(t)=\frac{M_0}{t_{\rm visc}}\,\frac{(2\mu^{2})^{\mu}}{\Gamma(\mu)}\,\tau^{-1-\mu}\exp\!\left(-\frac{2\mu^{2}}{\tau}\right), \quad (10)$$

where $M_0$ is the mass deposited in the ring, and $\mu$ the dimensionless parameter of the flow properties. The likely values of $\mu$ cover a limited interval, depending on the interplay between viscous and cooling processes.

### 2.2 Sequence of multiple accretion events

The set of equations (1)–(2) is linear in $\Sigma$ and $\dot{M}$, so the elementary solutions can be superposed. The solution for a specific single event is a Green function of the problem, whereas a general case can then be expressed as an integral (Zdziarski et al. 2009). We are interested in a set of discrete events, so we express the resulting $\dot{M}(t)$ as a sum,

$$\dot{M}(t)=\sum_{i=1}^{N}\frac{M_{0,i}}{t_{{\rm visc},i}}\,\frac{(2\mu^{2})^{\mu}}{\Gamma(\mu)}\,\tau_i^{-1-\mu}\exp\!\left(-\frac{2\mu^{2}}{\tau_i}\right), \quad (11)$$

where the mass and the radius of injection (and so the angular momentum) can be different for every event (denoted by the index $i$). Naturally, the contribution of a given event to the total lightcurve starts at the moment of injection, $t_i$, i.e.,

$$\tau_i=\frac{t-t_i}{t_{\rm visc}(R_i)}, \quad (12)$$

and the contribution of a term is zero for $t<t_i$. Although the standard accretion disc is not present in Sgr A*, episodic accretion events of individual falling clouds are possible and even likely. If the time separation of events is comparable to or shorter than the decay timescale, the events can overlap, which may lead to a complicated time profile of the resulting total luminosity.

### 2.3 Bolometric and X-ray luminosity of accretion flow

A significant fraction of radiation from accretion is released close to the black hole, i.e., in the deep potential well. We thus assume that the bolometric luminosity of the flow is given by the accretion rate at the inner edge of the flow. Radiation is produced with the accretion efficiency $\eta$,

$$L_{\rm bol}=\eta\,\dot{M}(t)\,c^{2}, \quad (13)$$

where $\dot{M}(t)$ is the time-dependent accretion rate. In general, the accretion efficiency depends on the black hole spin, as well as on the accretion rate. Efficiency is higher for fast-rotating black holes accreting at a fraction of the Eddington rate, mainly because the innermost stable circular orbit (ISCO) gets closer to the horizon, and the inferred radiation efficiency grows as the spin increases.
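Before turning to the efficiency prescription, a minimal numerical sketch may help to show how the single-event rate of Eq. (10) and the superposition of Eq. (11) translate into a bolometric lightcurve via Eq. (13). This is not the authors' code: the event masses, injection times, viscous timescales, the value of $\mu$, and the constant efficiency are all made-up placeholders used only for illustration.

#include <cmath>
#include <cstdio>
#include <vector>

// One accretion event: mass deposited in the ring [g], injection time [yr],
// and viscous timescale at the circularization radius [yr].
struct Event { double M0, t_inj, t_visc; };

// Single-event accretion rate, Eq. (10):
//   Mdot = (M0/tvisc) * (2 mu^2)^mu / Gamma(mu) * tau^(-1-mu) * exp(-2 mu^2 / tau),
// with tau = (t - t_inj)/tvisc and no contribution before injection (Eq. 12).
double mdot_single(double t, const Event& ev, double mu) {
    double tau = (t - ev.t_inj) / ev.t_visc;
    if (tau <= 0.0) return 0.0;
    double norm = std::pow(2.0 * mu * mu, mu) / std::tgamma(mu);
    return ev.M0 / ev.t_visc * norm * std::pow(tau, -1.0 - mu)
           * std::exp(-2.0 * mu * mu / tau);
}

int main() {
    const double mu  = 0.75;       // placeholder flow parameter (decay slope is 1 + mu)
    const double eta = 0.01;       // placeholder constant radiative efficiency
    const double c   = 2.998e10;   // speed of light [cm/s]
    const double yr  = 3.156e7;    // seconds per year

    // Three placeholder events (masses in grams, times in years).
    std::vector<Event> events = { {1.0e31, 1500.0, 30.0},
                                  {1.0e31, 1650.0, 30.0},
                                  {1.0e31, 1800.0, 30.0} };

    // Superpose the events (Eq. 11) and convert to bolometric luminosity (Eq. 13).
    for (double t = 1500.0; t <= 2000.0; t += 25.0) {
        double mdot = 0.0;                                   // [g/yr]
        for (const auto& ev : events) mdot += mdot_single(t, ev, mu);
        double L = eta * (mdot / yr) * c * c;                // [erg/s]
        std::printf("year %6.0f   Mdot = %.3e g/yr   Lbol = %.3e erg/s\n", t, mdot, L);
    }
    return 0;
}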
The efficiency drops down at low Eddington ratios, when the flow becomes optically thin and radiatively inefficient. However, the spin of the Sgr A* supermassive black hole is not well constrained (e.g. Broderick et al. 2011). The flow efficiency also depends, rather sensitively, on the fraction of heat that goes directly to electrons, and this factor is usually taken as a free parameter of the model. The efficiency is expected to be low for sources that are well below the Eddington luminosity (see Fig. 4 of Narayan & McClintock 2008). For Sgr A* the mean accretion rate is estimated as  yr, i.e. a very low value, and the average broad-band luminosity is about  erg s; this gives an efficiency parameter of about . In the present paper we consider two options: either we fix the efficiency at a constant value, or we adopt a more general luminosity-dependent trend that has been proposed in the context of starving black holes (Hopkins et al. 2006; Narayan & McClintock 2008),

$$\log\eta=1+\log\!\left(\frac{\dot{M}}{\dot{M}_{\rm Edd}}\right), \quad (14)$$

where

$$\dot{M}_{\rm Edd}=\frac{L_{\rm Edd}}{0.1\,c^{2}}. \quad (15)$$

The latter factor is equal to  g s for the Sgr A* black hole mass. The observational constraints for the past activity refer to the X-ray luminosity. Therefore, we also need a conversion from the bolometric flux, $L_{\rm bol}$, to the 2–10 keV X-ray flux. We employ two estimates: either we set the conversion factor to the value of 10, or we use a more realistic luminosity-dependent relation. We derive the latter relation on the basis of the accretion flow models of Moscibrodzka et al. (2012). In Fig. 1 the points show the values derived from these theoretical (numerically solved) models, whereas the continuous line is the best-fit function

$$\eta_X=\left(\frac{1.32\times10^{41}}{L_{\rm bol}}\right)^{0.5}+7.86, \quad (16)$$

which we use to describe the luminosity-dependent trend in our code.

### 2.4 Reprocessing by an extended cloud

Molecular clouds have an extended size. In particular, we consider reprocessing of the light signal by the prominent Sagittarius B2 cloud. Modelling the cloud reflection has to take the effect of the cloud's non-negligible dimension into account. To this end we follow the parameterization by Odaka et al. (2011), who discuss different possible positions of the B2 cloud (see their Fig. 3). The recent analysis by Ryu et al. (2009, 2013) indicates that the B2 cloud is situated at approximately a right angle with respect to the line of sight between Sgr A* and the observer. We assume the cloud to be spherically symmetric and optically thin. (We neglect the role of multiple scattering.) Despite the substantial size of the cloud, it is still small in comparison with the distance between the cloud and Sgr A*. This allows us to simplify computations by neglecting the curvature of surfaces of constant time delay.

We consider three examples of the density distribution within the cloud. In the first case we set the density constant, $\rho=\rho_{\rm o}$. The lightcurve of the reprocessed radiation in the iron K line can then be expressed as

$$L_{\rm rep}(t)=\eta_{\rm line}\left(\frac{R}{2D_{\rm o}}\right)^{2}\kappa\,\rho_{\rm o}\,R\int_{-1}^{1}\left(1-x^{2}\right)L_{\rm bol}\!\left(t-\Delta t(x)\right)\,{\rm d}x, \quad (17)$$

and

$$\Delta t(x)=\frac{\sqrt{2}\,R\,x-D_{\rm o}}{c}, \quad (18)$$

where $R$ is the cloud radius, $x$ the dimensionless coordinate within the cloud, $\kappa$ the opacity, $L_{\rm bol}$ the time-dependent bolometric luminosity of Sgr A*, $\eta_{\rm line}$ the efficiency of converting bolometric luminosity into iron K line flux, and $D_{\rm o}$ the distance between Sgr A* and the centre of the cloud. Sunyaev & Churazov (1998) derive the efficiency of the 6.4 keV iron line formation to be  of the monochromatic luminosity at 8 keV (assuming the solar abundance of iron). Ryu et al.
(2013) used this coefficient in combination with the assumption of a photon index of 1.6, which allowed these authors to find a connection between the 6.4 keV line flux and the 2–10 keV X-ray continuum flux. We followed this relation and calculate $\eta_X$ using Eq. (16). Since the reprocessing took place in the past when the source was bright, and because the relation between the present bolometric luminosity and the 2–10 keV flux is close to 10 and does not vary strongly, the value of $\eta_{\rm line}$ comes out close to 0.0077. This value varies only weakly with the X-ray luminosity.

Next, we considered a two-component structure of the cloud (core plus envelope) in the form of a central spherical nucleus and a surrounding atmosphere. The simplest approximation of this structure assumes two different (constant) values of the density within the core and the envelope. Finally, as the most realistic representation we adopted a gradually decreasing density profile in the envelope, which encloses the dense nucleus. In the latter case we were able to obtain an analytical description only in special cases of the density profile, namely

$$n(R)=n_{\rm c},\qquad R\le R_{\rm c}, \quad (19)$$

together with a decreasing profile in the envelope outside the core, where $n_{\rm c}$ and $R_{\rm c}$ are the density and the radius of the inner core of the cloud, and the outer radius gives the total size of the cloud envelope. The slope in Eq. (19) characterizes the decreasing density of the cloud envelope. In this case only one integral on the constant time-delay surface can be obtained analytically, whereas the second one has to be evaluated numerically. Instead of the factor $(1-x^{2})$ we obtain an integral expression,

$$I(x)=\int_{-\sqrt{1-x^{2}}}^{\sqrt{1-x^{2}}}I(x,y)\,{\rm d}y, \quad (20)$$

where $I(x,y)$ is given by the following expressions:

$$I(x,y)=\xi_{\rm o}\ln\frac{1+A}{\xi_{\rm o}+B}+\xi_{\rm o}\ln\frac{\xi_{\rm o}-B}{1-A}+2B,\qquad {\rm for}\ \ y^{2}<\xi_{\rm o}^{2}-x^{2}, \quad (21)$$

$$I(x,y)=\xi_{\rm o}\ln\frac{1+A}{1-A},\qquad {\rm for}\ \ y^{2}>\xi_{\rm o}^{2}-x^{2}, \quad (22)$$

where $A$, $B$, and $\xi_{\rm o}$ are auxiliary quantities whose explicit definitions follow from the core–envelope configuration. In the more general case of an arbitrary slope of the radial density profile of the cloud envelope, the whole integral across the surface needs to be evaluated numerically. We ignore this complication because the three above-mentioned examples are enough to capture the effect and describe the extreme cases of the density profile. Also, we neglect the role of multiple scattering (included in the computations of Odaka et al. 2011) because we aim at a simple and robust estimation of the duration of reprocessing instead of any detailed computation of the spectral shape of the reflected component.

## 3 Results

We have modelled the time evolution of the clumpy inflow episodes onto the black hole with the goal of understanding the observed signal from Sgr A*, namely, its past variations and the present level of activity. Naturally, we would also like to know what constraints can be inferred for the future behavior of the source. We start with a single accretion event, then we discuss multiple events and the reprocessing of the intrinsic radiation by molecular clouds surrounding the object.

### 3.1 Case of single accretion event

We consider two examples as the basis of the subsequent discussion. They are also of immediate observational interest, as the expected evolution of the G2 cloud after the predicted disruption (Gillessen et al. 2012, 2013) and as a significant event that could have been responsible for activity in the past (Ponti et al. 2010, 2011; Miralda-Escudé 2012). It has been reported (Gillessen et al. 2012, 2013) that the G2 cloud is now on an approaching trajectory toward the Sgr A* supermassive black hole. The cloud mass is estimated to be around three Earth masses, and it moves along an almost perfect parabolic orbit.
The predicted pericentre distance of the orbit is about , although there are several uncertainties in the precise determination of the cloud origin, its properties, and future trajectory (Eckart et al. 2013a,b; Pfiher et al. 2013). We assume that, after an inevitable disruption, the cloud remnants will settle onto a circularization radius around and below pericentre; however, the exact outcome of the forthcoming evolution of the cloud is uncertain and depends critically on its internal structure, which is still under debate. If the material gets heated and the thickness of the newly formed torus is of the circularization radius, the expected viscous timescale becomes  yr. On the other hand, if the torus is somewhat thicker the timescale comes out shorter, about  yr (with the same circularization radius). The temperature of the material reaches high values, K and  K, correspondingly. The mean level of the transient accretion rate due to G2 material would be roughly  yr or  yr, respectively, and the mean additional bolometric luminosity becomes erg s or erg s. In the first case, the excess of the luminosity will double the current bolometric luminosity, while in the second, the brightening of Sgr A* will be quite considerable. Any greater increase in luminosity would require greater mass for the accretion event. Our simple model thus gives the result consistent with far more advanced numerical analyses by Schartmann et al. (2012) and Anninos et al. (2012). In Fig. 2 we show the expected evolution of the Sgr A* luminosity caused by the G2 cloud after its disruption and a hypothetical event of mass increase, with the pericentre slightly closer to the black hole (). The radiative efficiency of accretion was set to in these examples. We notice that a single event duration is characterized by the viscous timescale, but the lightcurve tail lasts much longer. This is seen in Fig. 2, as well as in the numerical simulations for putative stellar disruption events. The tail is important if the accretion events repeat frequently because it determines the final decay of the observed signal. ### 3.2 Case of multiple accretion events The probable luminosity of Sgr A* during the past 500 years was predominantly at the level of erg s. It could be due to either an exceptional strong accretion event or a repetitive infall of numerous small clouds, such as an Earth-mass sized G2 cloud. Naturally, what also matters is the efficiency of the conversion of accreted mass into radiation. In the following discussion we set the radiative efficiency at and consider different representative possibilities. Provided that all the accreting clumps reach a similar circularization radius, the overall shape of the lightcurve depends mainly on the temporal distribution of the events. We consider equally spaced events with the fixed total accreting mass roughly corresponding to the mentioned luminosity during the past few hundred years. The simplifying assumptions allow us to show the expected effects more clearly, but they can be easily relaxed for an astrophysically more realistic discussion. In Figure 3 we show examples with three and thirty events. Time moments of these events are set in such a way that they cover the period from the year 1500 to 1900. In the first example, a single flare profile is well preserved. In the second case, regular luminosity fluctuations persist but they are less than a factor 2 intense, superimposed on the overall luminosity rise due to event overlapping. 
The angular momentum of the clumps may also span a broad range of values. Again, we consider thirty accretion events, spaced regularly as in the previous example, and we perform two exercises: in the first case the angular momentum of the subsequent clumps rises, while in the second it decreases. In both we assume the linear distribution in angular momentum, and the maximum and minimum values correspond to the maximum and minimum values of the viscous timescale of 10 and 50 years. In both cases the variability amplitude decreases in the main part of the lightcurve due to a strong overlap of events. We just point out that these plots illustrate the role of the frequency of repetitive accretion events and of the angular momentum distribution of the accreting clouds on the overall shape of the lightcurve. Particularly constraining is the rate of the final drop of the luminosity when accretion stops. ### 3.3 Mechanism generating the intrinsic lightcurve Several authors have attempted to reconstruct the history of the Sgr A* activity. We discuss this interesting problem again in terms of the source lightcurve during the recent era and set constraints that appear to be consistent with our model. To this end we have adopted data from the most recent study by (Ryu et al. 2013). The data is particularly interesting for modelling since the results are based on three-dimensional reconstruction of the positions of the molecular clouds. The lightcurve exhibits a typical variability during the last few hundred years by about a factor three. The rapid decline by about four orders of magnitude happens in the course of the last 60 years. However, we also show the data set from Capelli et al. (2012). These data points represent the 2–10 keV luminosity of the source. Ryu et al. (2013) give the data directly in this format. Points in the Capelli et al. (2012) are given in 1–100 keV flux, and these were converted to 2–10 keV flux using again the mentioned models by Moscibrodzka et al. (2012). The correction moves the points not more than by a factor 2.2. We first consider the case of constant accretion efficiency , independent of the Eddington luminosity ratio, and constant conversion factor between the bolometric and the X-ray luminosity. In this case the rapid decline imposes a stringent limit on the viscous timescale. The most rapid X-ray decay decrease in the Ryu et al. (2013) data is by a factor of in only 60 years. Considering the asymptotic behaviour of the flow with the luminosity decreasing as , we obtain, in agreement with eq. (10), the viscous time  yr. This leads to  yr for . In the Capelli et al. (2012) data, the most rapid decrease is by a factor of 100 in only 14 years, suggesting the shortest timescale of  yr. The total accreted mass is a fraction of the solar mass ( in our exemplary solution). The tail from multiple episodes slows down the decay of the emission further. We would be able to roughly reproduce the lightcurve fall off and the general level of variability only for a viscous timescale as short as yr (i.e. below 1 day), so this short timescale is not a realistic value. When we use the timescale of 0.24 yr, suggested by the simple analysis above, the decay is far too slow in comparison with the data due to an extended period of activity (see Fig. 4, upper left). The situation improves if we use the bolometric to X-ray conversion flux according to Eq. 
(16) because the decrease in the bolometric luminosity is additionally accentuated by the decline in the conversion factor, but not enough to match the observed trend (see Fig. 4, upper right). We can consider higher values of the coefficient . A steeper decay in the accretion rate (even as steep as ) has been seen in numerical simulations of the stellar disruptions (e.g. Guillochon & Ramirez-Ruiz 2013, see their Fig. 5). The analytical requirements for the fast time decay is somewhat less stringentfor , yr. We can then reproduce the observed trend in Sgr A* numerically for the following parameters, the total mass , yr, the number of events , so the mass of a single event is about ten Earth masses. Such a model still does not look promising because the viscous timescale is again far too short, the circularization radius is only , and the required number of events is very large. To resolve the above-mentioned problem with a rapidly decaying lightcurve, we consider the radiative efficiency to be a function of the Eddington rate given by the Eq. (14). In this case the decrease in the accretion rate translates into an even faster drop of observed luminosity. This opens up the possibility of reproducing the observed decay in activity with less extreme values of the model parameters. If the standard value is used, the model with the decay timescale of 3 yr does not yet match the observed luminosity decrease (e.g., Fig. 4, bottom–left panel); however, if we couple the variable efficiency case with a steeper decay (as suggested by the modelling with ), the solution with the viscous timescale of three years represents the observed trends reasonably well (see Fig. 4, bottom-right panel). Also, the number of accretion events is now considerably lower, . Indeed, the model cannot easily represent the Sgr  A* luminosity history with sufficient accuracy. Especially the rapid decay phase, by many orders of magnitude, implies short viscous timescales, or equivalently, low values of the circularization radii. Today the X-ray emission is at the level of erg s, in the quiescent or flaring state (e.g. Baganoff et al. 2001; Porquet et al. 2003; Genzel et al. 2010, and further references cited therein). The fit shown in the right-hand panel of Fig. 4 is close to the required profile, but it is neither unique nor perfect. Additional constraints can be obtained from studying the reflection by B2 and C1 clouds. ### 3.4 Reprocessing by molecular clouds B2 and C1 The Sagittarius B2 (Sgr B2) complex is a prominent example of a large molecular cloud system situated in the eastern part of an active star-forming region at  pc from Sgr A*. Chandra images of Sgr B2 cloud (Murakami et al. 2001; Ryu et al. 2009) and the combined evidence of the variability of the region from different X-ray instruments (Suzaku, XMM-Newton, Chandra, and ASCA; see Inui et al. 2009; Nobukawa et al. 2011, and references cited therein) demonstrate the complex morphology of the reflecting environment with enhancements of density in multiple cores. The fluorescent iron K emission of Sgr B2 cloud should follow the general trend of Sgr A* intrinsic luminosity with a delay and smearing of the signal due to the finite distance and size of the cloud. Currently this emission exhibits a fading trend, but we can only see a small fraction of the lightcurve. This trend may reverse in the next several years, as implied by higher luminosity of the clouds that are located at closer distance from Sgr A*. 
Therefore we cannot model the trend of B2 luminosity arbitrarily; instead, we have to use the long timescale lightcurve (cf. sect. 3.3), select a specific period, and adjust the beginning of the burst sequence in such way that the flare appears at the right position. We concentrated on the best studied case of Sgr B2 cloud and adopted the compilation of data points by Yu et al. (2011). We took the specific examples of accretion pattern presented in different panels of Fig. 4, calculated the lightcurve for the reprocessed emission, and then compared the B2 reflection data to the suitable fragment of the lightcurve. Since the delay of the signal from the B2 cloud is about 280 years (Ryu et al. 2013), the currently observed reflection (covering the period ) develops from fragments of the intrinsic lightcurve produced as early as in 1715–1730. We need to only slightly adjust the beginning of the sequence of accretion events to reproduce the peak at the right place. We set up several test examples in order to understand the trends present in the model. In the case of infall parameters shown in the top panels of Fig. 4 there is virtually no possibility to model the reflection. The number of accretion events comes out as far too high; namely, several events in just two decades owing to the very short viscous time in these solutions. In the case of a small size of the reprocessing cloud, we simply see a wave in the lightcurve, instead of a gradual slow decay trend, and the amplitude of the 6.4 keV flux is too low because small fraction of X-ray photons are intercepted and reprocessed by the cloud (see Fig. 5, left panel). If the cloud size is set larger, the mean luminosity is comparable to the observed one, but the the lightcurve is considerably smeared, with the amplitude much smaller than the currently observed flux changes. In principle, it is possible to imagine that there is a temporary trend toward decreasing or increasing the mass of the event with time, but the timescale itself and the corresponding circularization radius do not produce realistic values. On the other hand, the model in the bottom right-hand panel of Fig. 4 (which adopts the realistic description of the accretion efficiency and includes the bolometric to X-ray conversion and the increased value of parameter ) represents the behaviour of the B2 reflection very well. In fact, we only had to adjust the starting point of the sequence and the cloud size. We note that here we only show an example under the assumption of constant density; in this case, we checked that the case of a more extended cloud with an rarefied envelope does not improve the solution significantly. The formal best fit was obtained with the diameter of 2.8 pc, but then the first four points fall far above the curve. Therefore we consider the cloud of the size 2.3 pc as a better representation of the data (see Fig. 5). We do not expect that accretion events are strictly periodic, so we do not suggest that the past data predict a future rise of the B2 luminosity. The separation between events is more likely random, and the next event may be delayed by several years. On the other hand, the luminosity trend of the clouds at different radii close to the black hole indicates quite convincingly that this will indeed happen. With measurements of more clouds than those given in the recent paper (Ryu et al. 2013), we should be able to recover the time moments of separate events. 
However, it is interesting to note that the simple periodic accretion with constant angular momentum and constant amount of material involved in these events already reproduces the overall profile of the reflection seen from Sgr B2 very accurately. Next, we also included the variations in the iron line flux measured from Sgr C1 cloud (Ryu et al. 2013). The delay and the true distance of this cloud from Sgr A* are well constrained, so we can employ the same model as explained above for the modelling Sgr B2. We varied the cloud size, and by increasing it considerably we achieved both the flux enhancement and a shallower variation profile, as required by the data. However, there are only two data points available, and so the fit is ambiguous (Fig. 6). Nevertheless, it is important that both the luminosity and the shallow profile of the variation can be reproduced with only slight adjustments: the event sequence was shifted by 3.5 yr (i.e. within an error of the timedelay measurement for B2 and C1), but otherwise the flux was calculated for the system geometry assuming the same density everywhere. As a second example we calculated the core and envelope set. Since the number of data points is not large, we had to constrain the number of free parameters. We thus fixed the core radius at 0.25 pc, as adopted by Odaka et al. (2011). In this case the best fit was achieved for an envelope radius of 2.6 pc, so larger than in the single constant density cloud (2.3 pc), but otherwise the fit quality did not improve considerably. Finally, we used the envelope with the density decreasing as , fixing the core again at 0.25 pc. This gave the best representation of the data. The outer radius of the envelope came out equal to 3.5 pc, larger than in the constant density case. This solution is also more consistent with the observational estimates of the B2 size. We show the comparison of the three cases in Fig. 7. ## 4 Discussion Sgr A* had exhibited a significantly enhanced level of activity during the past few hundred years, and it experienced a significant decrease in the activity over the past several decades. In this paper we considered a simple model that aimed to describe the nature of these changes and reveal the peculiarity of the long-lasting period. The model points toward a possible origin for the enhanced activity in terms of accretion of individual gaseous clouds. We parameterize the activity in the form of the sequence of events with a definite moment of start date and the end date. All the accreting clumps settle onto a circularization radius, spread during the viscous time, and subsequently feed the black hole for a certain period of time. We followed the viscous evolution of the events and calculated the intrinsic luminosity of the central black hole from the (time-dependent) accretion rate across the inner boundary of the disc. We showed the most simplistic case of events generated with equal spacing in time; however, a modification to a stochastic generation of clouds is straightforward and it does not create any qualitative difference. Also, the amount of accreted material and the corresponding angular momentum of each clump are free parameters that can be set arbitrarily within a certain range of allowed values. However, if we assume events of equal mass, the shallow Sgr A* lightcurve can only be represented by assuming that the angular momentum in all events stays constant. 
Quantitatively, the model should reproduce two main properties of the lightcurve: (a) the current decay in the activity after the last event, and (b) the shallow variability during the bright state. The latter has been implied by the slow variability of light reprocessed by the surrounding molecular clouds. The rapid decline in the lightcurve indicates a short viscous timescale. However, this leads to rapid and significant variability in the past (changing the intrinsic flux by orders of magnitude), unless the number of events was very large. If this was the case, all the variability would have been strongly smeared during the reflection process, and the molecular cloud images should have exhibited a constant flux. A very short decay timescale is also a problem from the point of view of the angular momentum requirements: the material must have settled directly onto a circular orbit very close to the black hole at only a few Schwarzschild radii, which does not seem to be realistic. On the other hand, in the case of the rising trend of the angular momentum, the lightcurve would have been flat but its normalization too high to reproduce the sudden decrease in the activity seen in the data. If the angular momentum of the later events is lower the overall luminosity rises gradually, and this also does not conform to the expectation. A flat profile seems to be more likely (cp. Fig. 3). Of course, assuming that the mass of the accretion events is coupled in some peculiar way to the angular momentum, we can always achieve the flat profile of the lightcurve; however, the Ockham razor argument can be used against such an approach. To summarize, given the quality of available data, we restricted ourselves to those events spaced equally in time, mass, and angular momentum parameters as an adequate approach to the problem. Our modelling of the viscous evolution of the clumps arriving down to the black hole suggests that two additional assumptions are needed: (i) the radiative efficiency of the flow depends on the Eddington ratio, and (ii) the viscous decay should proceed somewhat faster than the canonical profile, which is frequently considered in the context of material spreading viscously after a tidal disruption event. The first aspect can be justified: the observational data for accreting black holes in binaries suggest such a trend (Narayan & McClintock 2008). This coupling makes the requirements for the drop in the accretion rate less stringent because it implies an additional decrease in the radiative efficiency . The latter modification is also supported by recent numerical simulations (Guillochon & Ramirez-Ruiz 2012). It appears that the values of the power-law index characterizing the decay timescale may spread somewhat around the canonical value in both directions, i.e., towards a faster, as well as slower, decline of the lightcurve (for a different example, see e.g. Krolik & Piran 2011). This reflects various details of each particular case, such as the exact location of the disruption event with respect to the tidal radius that determines the rate of draining of the accretion disc. With the two above-given assumptions we reproduce the overall activity of the source by a sequence of infalling clumps accreting over a long period spanning from about till s. The total mass is estimated to be , and the viscous timescale about years (the latter value corresponds to the circularization radius ). 
The same approach to the intrinsic lightcurve was used to fit the data for two representative clouds, B2 and C1. For these we took the observed iron K line lightcurve, together with the 3D positions needed to determine the distance between each of the clouds and Sgr A*, as well as the time delay between the intrinsic and the reprocessed emission. We note that complete information on the spatial location of the clouds is needed to normalize the reprocessed flux. We employed the values for B2 (Terrier et al. 2010; Yu et al. 2011) and two additional points for C1 (Ryu et al. 2013). Modelling the B2 X-ray reflection required assuming a more compact cloud than modelling C1 because the variability seen in the latter is shallower, which at the same time explains why the C1 lightcurve has a higher normalization despite a comparable distance from Sgr A*. No adjustments of the flux normalizations were necessary, and no arbitrary normalization constants were involved. We note that Sgr B2 is the most massive complex among the molecular clouds in the Galaxy: its mass has been estimated at about  (Lis & Goldsmith 1989). This cloud is also quite large, with clumps of molecular gas around the dense core. Most of the X-ray flux is seen as a diffuse component, whereas numerous unresolved point sources within the cloud volume provide much less flux in the total energy budget. Even if the cloud geometry is complicated (non-spherical), Murakami et al. (2000) approximate the hydrogen distribution in the cloud by a general smooth profile of the form , where  is the number density per cubic centimetre and  is the radius in units of pc. Given the high central density and the overall size of this system, the hydrogen absorption column density along the line of sight is estimated to be around . The dominant contribution to the hydrogen column should originate within the molecular cloud itself, which brings it to the verge of being a Compton-thick medium, whereas other GC molecular clouds are commonly thought not to be Compton thick. The optical thickness of the cloud has implications for the expected shape of the reflection spectrum (Odaka et al. 2011), and it can increase the reprocessing timescale. Such models can only be calculated using a complex Monte Carlo numerical method to follow the time evolution of the reprocessed radiation, which is generally too slow, i.e., computationally far too expensive. The currently available quality of the data does not justify such an effort while modelling the time-dependent delays. Moreover, in this case the result depends on the exact geometry. Our model points toward a possible origin for the Sgr A* activity. A sequence of clumps falls onto the black hole, all of them with similar values of mass and angular momentum. This suggests a single origin of the accreted clumps. A body – cloud or star – has probably been disrupted, and so it formed a stream of smaller gaseous bodies following roughly the same trajectory (reminiscent of the famous comet Shoemaker-Levy 9 impacting Jupiter in 1994). The total mass accreting in the process (counting from 1400 to 1930; the earlier activity of Sgr A* is completely unconstrained) is . All the clumps must have a common origin, and the disruption of the parent body must have happened a long time ago, since a period of time was needed for the segregation of the blobs to grow owing to various gravitational perturbations. Segregation due to the interaction with the surrounding gas would produce a more complicated profile of the mass function, which apparently has not happened.
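The geometry entering this fit can be sketched schematically (an illustration only; the delay, distance, and smearing times below are hypothetical numbers, not the values adopted for B2 or C1): the reprocessed lightcurve is the intrinsic one shifted by the light-travel delay, diluted with distance, and smeared over the light-crossing time of the cloud.

```python
import numpy as np

def reflected_lightcurve(t_obs, intrinsic, delay_yr, distance_pc, smear_yr):
    """Toy reprocessed lightcurve: intrinsic emission shifted by the geometric
    time delay, diluted as 1/d^2, and smeared with a boxcar kernel whose width
    mimics the light-crossing time of the cloud. Arbitrary units."""
    shifted = intrinsic(t_obs - delay_yr) / distance_pc ** 2
    n = max(1, int(smear_yr / (t_obs[1] - t_obs[0])))
    kernel = np.ones(n) / n
    return np.convolve(shifted, kernel, mode="same")

# Illustrative intrinsic flare: bright until year 0, then switched off
intrinsic = lambda t: np.where(t < 0.0, 1.0, 0.0)

t = np.linspace(-50.0, 150.0, 2000)
compact = reflected_lightcurve(t, intrinsic, delay_yr=100.0, distance_pc=100.0, smear_yr=3.0)
extended = reflected_lightcurve(t, intrinsic, delay_yr=100.0, distance_pc=100.0, smear_yr=20.0)
# The extended cloud turns the sharp switch-off into a shallow decline, which is
# the qualitative difference between the B2-like and C1-like profiles above.
```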
We would like to point out that within the framework of the model we can reproduce the reflection from the clouds B2 and C1, although the luminosity drop from the past activity period to the present level of quiescence is reproduced only partially. The level of the discrepancy depends to some extent on the data set used. In fact, there is an apparent offset between the flux levels derived from Ryu et al. (2013) versus Capelli et al. (2012), which can be attributed mainly to the uncertainties in the 3D positions of the reflecting clouds and the difficulty of measuring column densities of X-ray nebulae. However, in both cases the final decay is more rapid than in the model if we consider a reasonable value of the viscous timescale of 3 yr or above. In this paper we assumed that the rapid decay in the observed lightcurve is real, and therefore we constrained the viscous timescales to short values when modelling the reflection from the molecular clouds as well. However, relaxing this constraint is possible in our scheme. If we remove the assumption about the recent rapid decay in Sgr A* activity, we can fit the time evolution of the B2 reflection with parameters  yr and  pc even in the constant-density scheme. The apparent problem of the too rapid decay of the lightcurve can be solved if we assume that the accretion rate in any single event has a cut-off at some level owing to interaction with the ambient medium. For example, thermal conduction can efficiently lead to the disappearance of tiny remnants of the cloud (Cowie & McKee 1977). Such an interaction is indeed seen in numerical results; however, it does not seem to lead automatically to such a sharp cut-off (Guillochon & Ramirez-Ruiz 2013).

## 5 Conclusions

X-rays from the molecular clouds surrounding Sgr A* suggest that the central black hole was temporarily more active in the historically recent past than it is at present (Sunyaev et al. 1993; Koyama et al. 1996; Sunyaev & Churazov 1998). It is possible that Sgr A* reached the level characteristic of low-luminosity active galactic nuclei, which can be detected in reflection by the clouds (Ponti et al. 2010; Terrier et al. 2010). Although the evidence for this bright period is still circumstantial, it is interesting to check whether there is enough material, and whether conditions in the region are suitable, to provide both the matter and the energy source for the black hole on the right timescale and at the appropriate phase to trigger and sustain the accretion over several hundred years. The mini-spiral of the Galactic centre is a potential source of material. This puzzling feature, located at a distance of about  pc (projected distance  pc) from the supermassive black hole, has recently been explained as consisting of three independent clumpy streams of mainly gaseous material in roughly Keplerian motion around the centre. The streams collide at some point (Zhao et al. 2010; Kunneriath et al. 2012), and this collision may cause the loss of angular momentum and an occasional inflow of clumpy material towards the black hole. More detailed modelling of the accretion stream and a systematic exploration of the parameter space are needed to constrain the suitable distribution of the angular momentum of the clouds arriving in the black hole vicinity from the greater distances of the mini-spiral region (Czerny et al. 2013). Numerical simulations of multiple colliding streams were recently performed by Alig et al. (2013), and these results may also be relevant in the context of the mini-spiral formation.
Despite the simplicity of this scheme, it does capture the basic smearing effect that acts on the intrinsic flare over the light-crossing time of the cloud. Scattering on a small cloud produces an observed lightcurve profile closely resembling the intrinsic flare, while a large one enhances the smearing of the lightcurve, and the decay part becomes symmetric with respect to the onset part. This general tendency is expected, but it is interesting to note that the cloud size can be as large as –6 pc and still produce a good formal fit to the data points (in terms of statistics). This size agrees with the expectation for the size of the core of Sgr B2. A still larger size of the cloud, however, would smear the decay part of the lightcurve too much, and then the formal fit also deteriorates. The timescale of the flare decline can be about nine years, assuming that the lightcurve decline proceeds more closely according to Capelli et al. (2012), i.e. more slowly than the relatively rapid drop seen in Ryu et al. (2013). Further monitoring of B2 and other clouds is therefore needed. As an illustration of the model we considered the fate of the G2 cloud that has been reported as falling towards Sgr A*. As the cloud is disrupted, some material should settle near the pericentre radius. The viscous timescale corresponding to this radius is roughly 18 yr, assuming efficient heating and the formation of an optically thin torus (ADAF). The peak luminosity due to the accretion onto Sgr A* should happen in  (under the assumption of inefficient cooling). The mass of the cloud is quite low, so a significant enhancement (by a factor of ten) of the Sgr A* luminosity is only expected if the viscous timescale of the inflow is this short. On the other hand, the increase in the source activity is expected to be much less spectacular if the timescale is longer, and in addition a considerable fraction of the material would not reach the black hole. Of course, before this happens we are likely to detect the emission due to shocks connected with the settling of the gas onto the circularization radius, or, if the cloud is not disrupted at all, we will observe the synchrotron emission from the bow shock (Narayan et al. 2012). An excess emission of about  Jy at frequencies of  GHz can be expected, well above the quiescent level of emission from Sgr A∗. Quite a different scenario would be expected if a star resides inside the cloud core (Eckart et al. 2013a,b). In addition to the opportune case of the upcoming passage of the G2 cloud through the orbit pericentre, we suggest that the model of multiple accretion events can be relevant and useful in the context of other supermassive black holes in galactic nuclei, in particular for active galactic nuclei that are thought to be embedded in clumpy material with a relatively large filling factor. Very likely, the accretion episodes, as envisaged by this scheme, should occur frequently, and they should partly contribute to the notorious activity and the resulting variability of these sources.

###### Acknowledgements.

The research leading to these results has received funding from Polish grant NN 203 380136, the COST Action “Black Holes in a Violent Universe” (ref. 208092), and the European Union Seventh Framework Programme (FP7/2007–2013) under grant agreement No. 312789. DK and VK acknowledge support from the collaboration project between the Czech Science Foundation and Deutsche Forschungsgemeinschaft (GACR-DFG 13-00070J).
The Astronomical Institute is operated under the program RVO:67985815. ## References • () Alig C., Schartmann M., Burkert A., & Dolag K. 2013, ApJ, submitted (arXiv:1305.2953) • () Anninos P., Fragile P. C., Wilson J., & Murray S. D. 2012, ApJ, 759, id. 132 • () Baganoff F. K., Bautz M. W., Brandt W. N. et al. 2001, Nature, 413, 45 • () Broderick A. E., Fish V. L., Doeleman S. S., & Loeb A. 2011, ApJ, 735, 110 • () Capelli R., Warwick R. S., Porquet D., Gillessen S., & Predehl P. 2012, A&A 545, A35 • () Coker R., & Melia F. 1997, Apj , 488, L149 • () Cowie L. L., & McKee C. F. 1977, ApJ, 211, 135 • () Cuadra J., Nayakshin S., & Martins F. 2008, MNRAS, 383, 458 • () Cuadra J., Nayakshin S., Springel V., & Di Matteo T. 2005, MNRAS, 360, L55 • () Cuadra J., Nayakshin S., Springel V., & Di Matteo T. 2006, MNRAS, 366, 358 • () Czerny B., Karas V., Kunneriath D., & Das Tapas K. 2013, in Feeding Compact Objects: Accretion on All Scales, Proceedings of the IAU Symposium No. 290, eds. Chengmin Zhang et al. (Cambridge: Cambridge University Press), p. 199 • () Eckart A., García-Marín M., Vogel S. N. et al. 2012, A&A, 537, id. A52 • () Eckart A., Britzen S., Horrobin M. et al. 2013a, in Nuclei of Seyfert Galaxies and QSOs – Central Engine and Conditions of Star Formation (Max-Planck-Insitut für Radioastronomie, Bonn, Germany), Proceedings of Science, submitted • () Eckart A., Mužić K., Yazici S. et al. 2013b, A&A, 551, id. A18 • () Eckart A., Schödel R., & Straubmeier C. 2005, The Black Hole at the Center of the Milky Way (London: Imperial College Press) • () Eckart A., Schödel R., García-Marín M. et al. 2008, A&A, 2008, 337 • () Evans C. R., & Kochanek C. S. 1989, ApJ, 346, L13 • () Ferrière K. 2009, A&A, 505, 1183 • () Frank J., & Rees M. J. 1976, MNRAS, 176, 633 • () Genzel R., Eisenhauer F., & Gillesen S. 2010, Rev. Mod. Phys., 82, 3121 • () Gillessen S., Genzel R., Fritz T. K. et al. 2012, Nature, 481, 51 • () Gillessen S., Genzel R., Fritz T. K. et al. 2013, ApJ, 763, id. 78 • () Guillochon J., & Ramirez-Ruiz E. 2013, ApJ, 767, 25 • () Hopkins P. F., Narayan R., & Hernquist L. 2006, ApJ, 643, 641 • () Inui T., Koyama K., Matsumoto H., & Tsuru T. G. 2009, PASJ, 61, 241 • () Kato S., Fukue J., & Mineshige S., 1998, Black Hole Accretion Discs (Kyoto: Kyoto University Press) • () Kotov O., Churazov E., & Gilfanov M. 2001, MNRAS, 327, 799 • () Koyama K., Maeda Y., Sonobe T. et al. 1996, PASJ, 48, 249 • () Krolik J., & Piran T. 2011, ApJ, 743, id. 134 • () Kunneriath D., Eckart A., Vogel S. N. et al. 2012, A&A, 538, id. A127 • () Kunneriath D., Witzel G., Eckart A. et al. 2010, A&A, 517, id. A46 • () Lis D. C., & Goldsmith P. F. 1989, ApJ, 337, 704 • () Loeb A. 2004, MNRAS, 350, 725 • () Lyubarskii Yu. E. 1997, MNRAS, 292, 679 • () Lynden-Bell D., & Pringle J. E. 1974, MNRAS, 168, 603 • () MacLeod M., Guillochon J., & Ramirez-Ruiz E. 2012, ApJ, 757, id.134 • () Marrone D. P., Moran J. M., Zhao J.-H., & Rao R. 2007, ApJ, 654, L57 • () Melia F. 1992, ApJ, 387, L25 • () Melia F. 2007, The Galactic Supermassive Black Hole (Princeton: Princeton University Press) • () Mineshige S., & Wood J. H. 1989, MNRAS, 241, 259 • () Miralda-Escudé J. 2012, ApJ, 756, 86 • () Moscibrodzka M., Das Tapas K., & Czerny B. 2006, MNRAS, 370, 219 • () Moscibrodzka M., Shiokawa H., Gamie C. F., & Dolence J. C. 2012, ApJL, 752, L1 • () Muno M. P., Baganoff F. K., Brandt W. N., Park S., & Morris M. R. 2007, ApJ, 656, L69 • () Murakami H., Koyama K., & Maeda Y. 
2001, ApJ, 558, 687 • () Murakami H., Koyama K., Sakano M., Tsujimoto M., & Maeda Y. 2000, ApJ, 534, 283 • () Narayan R., Mahadevan R., Grindlay J. E., Popham R. G., & Gammie C. 1998, ApJ, 492, 554 • () Narayan R., & McClintock J. E. 2008, New Astr. Rev., 51, 733 • () Narayan R., Özel F., & Sironi L. 2012, ApJ, 757, id. L20 • () Nobukawa M., Ryu S. G., Tsuru T. G., & Koyama K. 2011, ApJL, 739, id. L52 • () Odaka H., Aharonian F., Watanabe S., et al. 2011, ApJ, 740, 103 • () Ponti G., Terrier R., Goldwurm A., Belanger G., & Trap G. 2010, ApJ, 714, 732 • () Ponti G., Terrier R., Goldwurm A., Belanger G., & Trap G. 2011, in The Galactic Center: a Window to the Nuclear Environment of Disk Galaxies, eds. M. R. Morris, Q. D. Wang, & Feng Yuan (San Francisco: Astronomical Society of the Pacific), p. 446 • () Phifer K., Do T., Meyer L. et al. 2013, ApJ, submitted (arXiv:1304.5280) • () Porquet D., Predehl P., Aschenbach B. et al. 2003, A&A, 407, L17 • () Quataert E. 2004, ApJ, 613, 322 • () Reis R. C., Miller J. M., Reynolds M. T. et al. 2012, Science, 337, 949 • () Rockefeller G., Fryer C. L., Melia F., & Warren M. S. 2004, ApJ, 604, 622 • () Ryu S. G., Koyama K., Nobukawa M., Fukuoka R., & Tsuru T. G. 2009, PASJ, 61, 751 • () Ryu S. G., Nobukawa M., Nakashima S. et al. 2013, PASJ, 65, 33 • () Schartmann M., Burkert A., Alig C. et al. 2012, ApJ, 755, id. 155 • () Shcherbakov R. V., & Baganoff F. K. 2010, ApJ, 716, 540 • () Sunyaev R. A., Markevitch M., & Pavlinsky M. 1993, ApJ, 407, 606 • () Sunyaev R., & Churazov E. 1998, MNRAS, 297, 1279 • () Terrier R., Ponti G., Bélanger G. et al. 2010, ApJ, 719, 143 • () Wardle M., & Yusef-Zadeh F. 1992, Nature, 357, 308 • () Witzel G., Eckart A., Bremer M., et al. 2012, ApJS, 203, id. 18 • () Yu Y.-W., Cheng K. S., Chernyshov D. O., & Dogiel V. A. 2011, MNRAS, 411, 2002 • () Zdziarski A. A., Kawabata R., & Mineshige S. 2009, MNRAS, 399, 1633 • () Zhao J.-H., Blundell R., Moran J. M. et al. 2010, ApJ, 723, 1097
# 2011 AIME I Problems/Problem 8 ## Problem In triangle $ABC$, $BC = 23$, $CA = 27$, and $AB = 30$. Points $V$ and $W$ are on $\overline{AC}$ with $V$ on $\overline{AW}$, points $X$ and $Y$ are on $\overline{BC}$ with $X$ on $\overline{CY}$, and points $Z$ and $U$ are on $\overline{AB}$ with $Z$ on $\overline{BU}$. In addition, the points are positioned so that $\overline{UV}\parallel\overline{BC}$, $\overline{WX}\parallel\overline{AB}$, and $\overline{YZ}\parallel\overline{CA}$. Right angle folds are then made along $\overline{UV}$, $\overline{WX}$, and $\overline{YZ}$. The resulting figure is placed on a level floor to make a table with triangular legs. Let $h$ be the maximum possible height of a table constructed from triangle $ABC$ whose top is parallel to the floor. Then $h$ can be written in the form $\frac{k\sqrt{m}}{n}$, where $k$ and $n$ are relatively prime positive integers and $m$ is a positive integer that is not divisible by the square of any prime. Find $k+m+n$. $[asy] unitsize(1 cm); pair translate; pair[] A, B, C, U, V, W, X, Y, Z; A[0] = (1.5,2.8); B[0] = (3.2,0); C[0] = (0,0); U[0] = (0.69*A[0] + 0.31*B[0]); V[0] = (0.69*A[0] + 0.31*C[0]); W[0] = (0.69*C[0] + 0.31*A[0]); X[0] = (0.69*C[0] + 0.31*B[0]); Y[0] = (0.69*B[0] + 0.31*C[0]); Z[0] = (0.69*B[0] + 0.31*A[0]); translate = (7,0); A[1] = (1.3,1.1) + translate; B[1] = (2.4,-0.7) + translate; C[1] = (0.6,-0.7) + translate; U[1] = U[0] + translate; V[1] = V[0] + translate; W[1] = W[0] + translate; X[1] = X[0] + translate; Y[1] = Y[0] + translate; Z[1] = Z[0] + translate; draw (A[0]--B[0]--C[0]--cycle); draw (U[0]--V[0],dashed); draw (W[0]--X[0],dashed); draw (Y[0]--Z[0],dashed); draw (U[1]--V[1]--W[1]--X[1]--Y[1]--Z[1]--cycle); draw (U[1]--A[1]--V[1],dashed); draw (W[1]--C[1]--X[1]); draw (Y[1]--B[1]--Z[1]); dot("A",A[0],N); dot("B",B[0],SE); dot("C",C[0],SW); dot("U",U[0],NE); dot("V",V[0],NW); dot("W",W[0],NW); dot("X",X[0],S); dot("Y",Y[0],S); dot("Z",Z[0],NE); dot(A[1]); dot(B[1]); dot(C[1]); dot("U",U[1],NE); dot("V",V[1],NW); dot("W",W[1],NW); dot("X",X[1],dir(-70)); dot("Y",Y[1],dir(250)); dot("Z",Z[1],NE);[/asy]$ ## Solution 1 Note that the area is given by Heron's formula and it is $20\sqrt{221}$. Let $h_i$ denote the length of the altitude dropped from vertex i. It follows that $h_b = \frac{40\sqrt{221}}{27}, h_c = \frac{40\sqrt{221}}{30}, h_a = \frac{40\sqrt{221}}{23}$. From similar triangles we can see that $\frac{27h}{h_a}+\frac{27h}{h_c} \le 27 \rightarrow h \le \frac{h_ah_c}{h_a+h_c}$. We can see this is true for any combination of a,b,c and thus the minimum of the upper bounds for h yields $h = \frac{40\sqrt{221}}{57} \rightarrow \boxed{318}$. ## Solution 2 As from above, we can see that the length of the altitude from A is the longest. Thus the highest table is formed when X and Y meet up. Let the distance of this point from B be $x$, making the distance from C $23 - x$. Let $h$ be the height of the table. From similar triangles, we have $\frac{x}{23} = \frac{h}{h_b} = \frac{27h}{2A}$ where A is the area of ABC. Similarly, $\frac{23-x}{23}=\frac{h}{h_c}=\frac{30h}{2A}$. Therefore, $1-\frac{x}{23}=\frac{30h}{2A} \rightarrow1-\frac{27h}{2A}=\frac{30h}{2A}$ and hence $h = \frac{2A}{57} = \frac{40\sqrt{221}}{57}\rightarrow \boxed{318}$.
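A quick numerical check of the closed-form answer (illustrative only):

```python
from math import sqrt, gcd

a, b, c = 23.0, 27.0, 30.0                     # BC, CA, AB
s = (a + b + c) / 2
area = sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
print(area, 20 * sqrt(221))                    # both ~297.32

h = 2 * area / (b + c)                         # the bound h = 2A/57 from Solution 2
print(h, 40 * sqrt(221) / 57)                  # both ~10.43

k, m, n = 40, 221, 57
assert gcd(k, n) == 1                          # k and n are relatively prime
print(k + m + n)                               # 318
```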
# zbMATH — the first resource for mathematics

On the Cohen-Macaulay property in commutative algebra and simplicial topology. (English) Zbl 0686.13008

A ring R is called a “ring of sections” provided R is the section ring of a sheaf $$({\mathcal A},X)$$ of commutative rings defined over a base space X which is a finite partially ordered set given the order topology. Regard X as a finite abstract complex, where a chain in X corresponds to a simplex. In specific instances of $$({\mathcal A},X)$$, certain algebraic invariants of R are equivalent to certain topological invariants of X. (Author)

The author investigates the depth of factor rings of $$SR(F,\Sigma)$$, the Stanley-Reisner ring of a complex $$\Sigma$$ with coefficients in a field F. $$SR(F,\Sigma)$$ is viewed as the ring of sections of a sheaf of polynomial rings over the partially ordered set of all simplices of $$\Sigma$$. The complex $$\Sigma$$ is defined to be Cohen-Macaulay (CM) provided the reduced singular cohomology of the link subcomplexes vanishes except in maximal degree. The main theorem goes as follows: Let S be the polynomial ring $$S=F[X_0,\dots,X_n]$$ and put $$\alpha = n - \operatorname{pd}_S SR(F,\Sigma)$$; then the skeleton $$\Sigma^{\alpha}$$ is maximal with respect to the property of being CM. Reviewer: Y.Felix

##### MSC:
13H10 Special types (Cohen-Macaulay, Gorenstein, Buchsbaum, etc.)
55U05 Abstract complexes in algebraic topology
13D25 Complexes (MSC2000)
55M99 Classical topics in algebraic topology
57Q99 PL-topology
18F20 Presheaves and sheaves, stacks, descent conditions (category-theoretic aspects)
13F20 Polynomial rings and ideals; rings of integer-valued polynomials
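To make the Stanley-Reisner construction concrete, here is a small illustrative Python sketch (my own example, not from the reviewed paper): the minimal non-faces of a simplicial complex Σ are exactly the supports of the monomial generators of the Stanley-Reisner ideal, so SR(F,Σ) is the polynomial ring modulo those square-free monomials.

```python
from itertools import combinations

# A small complex on vertices {0,1,2,3}: the boundary of the triangle {0,1,2}
# plus the edge {2,3}. Faces are all subsets of the listed facets.
facets = [{0, 1}, {1, 2}, {0, 2}, {2, 3}]
vertices = sorted(set().union(*facets))
faces = {frozenset(s) for f in facets for r in range(len(f) + 1)
         for s in combinations(f, r)}

def is_face(subset):
    return frozenset(subset) in faces

# Minimal non-faces: subsets that are not faces but all of whose proper
# subsets are faces. They generate the Stanley-Reisner ideal.
min_nonfaces = []
for r in range(1, len(vertices) + 1):
    for s in combinations(vertices, r):
        if not is_face(s) and all(is_face(t) for t in combinations(s, r - 1)):
            min_nonfaces.append(s)

print(min_nonfaces)   # [(0, 3), (1, 3), (0, 1, 2)] -> generators x0*x3, x1*x3, x0*x1*x2
```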
# Re-writing the action on a different slice of space-time, Quantum Hall Effect

#### binbagsss

Hi, I'm looking at the QHE notes by D. Tong and wondering how he gets from equation 5.46 to 5.48 ( http://www.damtp.cam.ac.uk/user/tong/qhe/five.pdf )

$S_{CS}=\frac{k}{4\pi}\int d^3 x \, \epsilon^{\mu \nu \rho} \mathrm{tr}(a_{\mu}\partial_{\nu}a_{\rho} -\frac{2i}{3}a_{\mu}a_{\nu}a_{\rho})$.

The manifold is $\bf{R} \times \Sigma$, where $\bf{R}$ is time and $\Sigma$ is a compact spatial manifold.

$S_{CS}=\frac{k}{4\pi}\int dt \int\limits_{\Sigma} d^2x \, \mathrm{tr}(\epsilon^{ij} a_i \frac{\partial}{\partial t} a_j + a_0 f_{12})$

I'm very stuck; I'm not sure where to begin. Any hint or explanation would be very much appreciated. For example, how have we gone from the Levi-Civita tensor in 3d to 2d? How have we gone to only a time derivative in the first term, i.e. do all spatial derivatives vanish, and why would this be? I'm also very confused about the last term. I can't see any reason why spatial derivatives would vanish, so I guess it instead makes use of compactness combined with the Levi-Civita symbol causing something to vanish? I guess such vanishing might also be the reason we are able to write things in terms of the '2d' Levi-Civita symbol instead, but I'm pretty clueless.

#### king vitamin

Gold Member

How have we gone to only a time derivative in the first term, with all spatial derivatives vanishing? Perhaps you have forgotten the definition of $f_{\mu \nu}$? It is $$f_{\mu \nu} = \partial_{\mu} a_{\nu} - \partial_{\nu} a_{\mu} - i [a_{\mu},a_{\nu}],$$ so the $f_{12}$ term contains spatial derivatives. Let's just write everything out. Eq (5.46) is $$S_{CS} = \frac{k}{4 \pi} \int d^3 x \, \epsilon^{\mu \nu \rho} \mathrm{tr} \left( a_{\mu} \partial_{\nu} a_{\rho} - \frac{2 i}{3} a_{\mu} a_{\nu} a_{\rho} \right).$$ First of all, $$\epsilon^{\mu \nu \rho} a_{\mu} a_{\nu} a_{\rho} = a_0 a_1 a_2 + a_1 a_2 a_0 + a_2 a_0 a_1 - a_0 a_2 a_1 - a_2 a_1 a_0 - a_1 a_0 a_2.$$ Now, if this expression is inside of a trace, we can use the cyclicity of the trace to combine terms which are the same up to a cyclic permutation. So $$\epsilon^{\mu \nu \rho} \mathrm{tr} \left( a_{\mu} a_{\nu} a_{\rho} \right) = 3 \mathrm{tr} \left( a_0 [a_1,a_2] \right).$$ A similar computation on the other term finds $$\epsilon^{\mu \nu \rho} a_{\mu} \partial_{\nu} a_{\rho} = a_0 \partial_1 a_2 - a_0 \partial_2 a_1 + a_1 \partial_2 a_0 - a_1 \partial_t a_2 + a_2 \partial_t a_1 - a_2 \partial_1 a_0.$$ At this point, we use the fact that everything sits inside of a spatial integral over a compact manifold $\Sigma$ (see the text prior to (5.48)), so we can freely integrate by parts on each term without any "boundary term" appearing. So clearly $$\int dt \int_{\Sigma} d^2 x \, \epsilon^{\mu \nu \rho} a_{\mu} \partial_{\nu} a_{\rho} = \int dt \int_{\Sigma} d^2 x \, \left( 2 a_0 \left( \partial_1 a_2 - \partial_2 a_1 \right) + a_2 \partial_t a_1 - a_1 \partial_t a_2 \right).$$ Combining the terms, we then find $$S_{CS} = \frac{k}{4 \pi} \int dt \int_{\Sigma} d^2 x \, \mathrm{tr}\left( 2 a_0 f_{12} - \epsilon^{ij} a_i \frac{\partial}{\partial t} a_j \right)$$ So I actually get a different expression than Tong, unless perhaps there is some notational difference which is important (e.g. the Levi-Civita symbol has implicit minus signs in Minkowski signature?). I believe that my expression is equivalent to Equations (21) and (64) in this review article.
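The cyclicity step can be sanity-checked numerically with random matrices (a quick check of my own, not part of the thread): contract the Levi-Civita symbol against the trace of a triple product and compare with 3 tr(a_0 [a_1, a_2]).

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
# three random complex matrices playing the role of a_0, a_1, a_2
a = [rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)) for _ in range(3)]

def eps(i, j, k):
    """Levi-Civita symbol with eps(0,1,2) = +1."""
    return (j - i) * (k - i) * (k - j) / 2

lhs = sum(eps(i, j, k) * np.trace(a[i] @ a[j] @ a[k])
          for i, j, k in permutations(range(3)))
rhs = 3 * np.trace(a[0] @ (a[1] @ a[2] - a[2] @ a[1]))
print(np.allclose(lhs, rhs))   # True: eps^{mu nu rho} tr(a_mu a_nu a_rho) = 3 tr(a_0 [a_1, a_2])
```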
Cauchy’s integral formula via the modified Riemann-Liouville derivative for analytic functions of fractional order. (English) Zbl 1202.30068

Summary: The modified Riemann-Liouville fractional derivative applies to functions which are fractional differentiable but not differentiable, in such a manner that they cannot be analyzed by means of the Djrbashian fractional derivative. It provides a fractional Taylor series for functions which are infinitely fractional differentiable, and this result suggests to introduce a definition of analytic functions of fractional order. Cauchy’s conditions for fractional differentiability in the complex plane and the Cauchy integral formula are derived for these kinds of functions.

##### MSC:
30E99 Miscellaneous topics of analysis in the complex domain
26A33 Fractional derivatives and integrals (real functions)
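As a small illustration of the derivative in question (using Jumarie's definition from the wider literature, D^α f(x) = (1/Γ(1−α)) d/dx ∫_0^x (x−ξ)^{−α} (f(ξ)−f(0)) dξ for 0 < α < 1; this is an assumption here, since the abstract does not restate the formula), a sympy check that it reproduces the expected power rule on f(x) = x:

```python
import sympy as sp

x, xi = sp.symbols('x xi', positive=True)
alpha = sp.Rational(1, 2)                  # example order, 0 < alpha < 1
f = xi                                     # test function f(xi) = xi

# Jumarie-type modified Riemann-Liouville derivative (definition assumed from the literature)
integral = sp.integrate((x - xi) ** (-alpha) * (f - f.subs(xi, 0)), (xi, 0, x))
D = sp.simplify(sp.diff(integral, x) / sp.gamma(1 - alpha))

expected = sp.gamma(2) / sp.gamma(2 - alpha) * x ** (1 - alpha)   # fractional power rule
print(sp.simplify(D - expected))           # 0
```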
### Understanding Gradient Boosting, Part 1

Though there are many possible supervised learning model types to choose from, gradient boosted models (GBMs) are almost always my first choice. In many cases, they end up outperforming other options, and even when they don't, it's rare that a properly tuned GBM is far behind the best model. At a high level, the way GBMs work is by starting with a rough prediction and then building a series of decision trees, with each tree in the series trying to correct the prediction error of the tree before it. There are more detailed descriptions of the mechanics behind the algorithm out there, but this series of posts is intended to give more of an intuitive understanding of what the algorithm does.

%matplotlib inline
from sklearn.datasets import make_classification
from sklearn.cross_validation import train_test_split
from sklearn.metrics import roc_auc_score
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

### Building a dataset for the example

For this series, I'll be using a synthetic 2-dimensional classification dataset generated using scikit-learn's make_classification(). For performance estimation, I'll hold back 20% of the generated data as a test set.

raw_data, raw_target = make_classification(n_samples=20000, n_features=2, n_informative=2, n_redundant=0,
                                           n_repeated=0, n_classes=2, n_clusters_per_class=2, weights=None,
                                           flip_y=0.01, class_sep=1.0, hypercube=True, shift=0.0, scale=1.0,
                                           shuffle=False, random_state=2)
train, test, train_t, test_t = train_test_split(raw_data, raw_target, test_size=0.2, random_state=2)

|   | x         | y         | target |
|---|-----------|-----------|--------|
| 0 | -1.050819 | -0.828094 | 0      |
| 1 | -0.184011 | -1.904585 | 0      |
| 2 | -1.553358 | 0.294934  | 0      |

This particular set of parameters ends up creating 2 Gaussian blobs of points for each of the target classes, with a few difficult areas to classify where there's some heavy overlap in the two groups. In addition, within each Gaussian blob, 1% of the examples have had their labels flipped, creating situations where a good classifier ought to come up with the "wrong" prediction. Both the overlap and the flipped areas create ample opportunities for models to overfit if they attempt to get all the individual training examples correct rather than find good general classification rules.

my_cm = sns.diverging_palette(h_neg=245, h_pos=12, s=99, l=50, sep=15, n=16, center='light', as_cmap=True)
plt.figure(figsize=(16,6))
plt.subplot(1,2,1)
plt.title("Full view")
plt.scatter(raw_data[:,0], raw_data[:,1], c=raw_target, alpha=0.5, cmap=my_cm)
plt.xlim(-4, 4)
plt.ylim(-4, 4)
plt.axhspan(-2, 2, 0.125, 0.375, fill=False, lw=1.5)
plt.subplot(1,2,2)
plt.title("Detailed view of heavily overlapping section")
plt.scatter(raw_data[:,0], raw_data[:,1], c=raw_target, alpha=0.5, cmap=my_cm)
plt.xlim(-3, -1)
plt.ylim(-2, 2)
plt.show()

### GBM parameters

Now that we have a dataset to work with, let's consider the parameters GBMs have for us to tweak. Arguably, the most important two are the number of trees and the learning rate. The former is fairly straightforward: it simply corresponds to the number of trees that will be fit in series to correct the prediction errors. The learning rate corresponds to how quickly the error is corrected from each tree to the next and is a simple multiplier 0 < LR ≤ 1. For example, if the current prediction for a particular example is 0.2 and the next tree predicts that it should actually be 0.8, the correction would be +0.6.
At a learning rate of 1, the updated prediction would be the full 0.2 + 1(0.6) = 0.8, while a learning rate of 0.1 would update the prediction to be 0.2 + 0.1(0.6) = 0.26. Beyond these two parameters, we can tweak things relating to the model like subsampling rows (i.e., including a random portion of the training set at each tree in the series) and the loss function used to assess performance, as well as things relating to individual trees like maximum tree depth, minimum samples to split a node, minimum samples in a leaf node, and subsampling features (i.e., only considering a set of the features when splitting nodes in the tree to provide some extra randomness). These parameters can also have a big impact once you've settled on a decent tree count/learning rate combo, but they tend to be a second step in the process. As a rule of thumb, using some small amount of subsampling (e.g., 80-90% of the data at each step) and keeping trees relatively shallow (max depth between 2-5) tends to be fairly effective in most cases.

### Why lower learning rates?

The multiplicative nature of the learning rate acts at odds with the number of trees: for a learning rate L and a number of trees t, if we decrease the learning rate to L/n then we'll need something on the order of nt trees to maintain the same level of performance. Because each tree in a GBM is fit in series, the training time of the model grows linearly with the number of trees (i.e., if it takes you m minutes to fit a model on 100 trees, then it should take roughly 2m minutes to fit the same model on 200 trees). Why then would we want to lower the learning rate? To answer that, let's try fitting a few GBMs on our sample data. In each case, we'll use 500 trees of maximum depth 3 and use a 90% subsample of the data at each stage, but we'll do a low learning rate of 0.05 in one, a high learning rate of 0.5 in a second, and the maximum of 1 in the third. After fitting each model, we'll also get the predictions from each tree in the series, as well as the final predictions.

from sklearn.ensemble import GradientBoostingClassifier

common_args = {'max_depth': 3, 'n_estimators': 500, 'subsample': 0.9, 'random_state': 2}
# one model per learning rate described above (0.05, 0.5, 1.0)
models = [
    ('low', GradientBoostingClassifier(learning_rate=0.05, **common_args)),
    ('high', GradientBoostingClassifier(learning_rate=0.5, **common_args)),
    ('max', GradientBoostingClassifier(learning_rate=1.0, **common_args)),
]
stage_preds = {}
final_preds = {}
for mname, m in models:
    m.fit(train, train_t)
    stage_preds[mname] = {'train': list(m.staged_predict_proba(train)),
                          'test': list(m.staged_predict_proba(test))}
    final_preds[mname] = {'train': m.predict_proba(train),
                          'test': m.predict_proba(test)}

First, let's take a look at the effect of the learning rates on the predictions at each stage. The plots below show histograms for predictions on the 18,000 training examples at trees 1, 5, 50, and 100, and the final predictions at tree 500. From the tree 1 plot, the effect of the learning rate is immediately apparent. All predictions are initialized to approximately 0.5 since the target is split roughly in half, and so after 1 tree, each prediction will fall between 0.5 - LR and 0.5 + LR. Because we are using relatively shallow trees with the max depth set to 3, none of the leaf nodes end up being purely 0s or 1s, however. One other interesting thing to note in the tree 1 plot is that the underlying tree that is constructed is identical in each of the models. You can see this from the identically shaped but differently spaced histograms for the 0.5 and 1.0 learning rate models.
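To make the role of the learning rate even more explicit, here is a minimal from-scratch boosting loop (an illustrative sketch only: it fits each tree to plain least-squares residuals, which is not exactly the loss GradientBoostingClassifier optimizes internally). The learning rate enters as the multiplier on each tree's correction, exactly as in the 0.2 + LR(0.6) example above.

```python
from sklearn.tree import DecisionTreeRegressor

def toy_boost(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
    """Minimal boosting sketch: start from the mean prediction and let each
    tree correct the current residuals, with its contribution shrunk by the
    learning rate."""
    pred = np.full(len(y), float(y.mean()))
    trees = []
    for _ in range(n_trees):
        residual = y - pred
        tree = DecisionTreeRegressor(max_depth=max_depth, random_state=2)
        tree.fit(X, residual)
        pred += learning_rate * tree.predict(X)   # the shrinkage step
        trees.append(tree)
    return trees, pred

_, pred_low = toy_boost(train, train_t, learning_rate=0.05)
_, pred_max = toy_boost(train, train_t, learning_rate=1.0)
print("train AUC with LR=0.05: %.3f | with LR=1.0: %.3f"
      % (roc_auc_score(train_t, pred_low), roc_auc_score(train_t, pred_max)))
```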
By tree 50, the higher learning rates have already pushed most of the predictions out to the 0/1 ends of the spectrum, while only the very easiest examples are even starting to approach 0 or 1 in the lower learning rate model. In the final model, all of the models have a set of points for which they are feeling relatively certain, though the lower learning rate model has been a little more conservative than the others.

def frame(i=0, log=False):
    for mname, _ in models:
        plt.hist(stage_preds[mname]['train'][i][:,1], bins=np.arange(0,1.01,0.01), label=mname, log=log)
    plt.xlim(0,1)
    plt.ylim(0,8000)
    if log:
        plt.ylim(0.8,10000)
        plt.yscale('symlog')
        plt.gca().yaxis.set_major_formatter(plt.ScalarFormatter())
    plt.legend(loc='upper center')
    return

plt.figure(figsize=(16,10))
for pos, fnum in enumerate((1, 5, 50, 100), 0):
    plt.subplot2grid((3,2), (pos/2, pos%2))
    frame(fnum-1, True)
    plt.title("Predictions for each model at tree #%d (y axis on log scale)" % fnum)
plt.subplot2grid((3,2), (2,0), colspan=2)
plt.title("Final predictions for each model (y axis on linear scale)")
frame(-1, False)
plt.ylim(0,7000)
plt.show()

### Model performance

To assess model performance, we'll measure the area under the ROC curve for each model to get a general sense of how accurately each model can rank order the examples, with different line styles corresponding to each of the 3 different models. We'll also look separately at the train and test data to see the differences that arise between the sets, with red lines corresponding to training performance and blue lines corresponding to our held out test set. First, it's clear that the max and high learning rate models are overfitting. The models that perform best on the holdout data occur at tree 10 for max and tree 34 for high, and so the "corrections" added by the rest of the tree series simply serve to overfit the training data (n.b. the performance drop on the max learning rate isn't quite as dramatic as it looks -- the y axis is restricted to a narrow range of AUCs to make it easier to see the performance differences). For the low learning rate, increasing the trees further shows that in this particular case, test performance will continue to improve up to 862 trees, at which point test performance begins to decline. Second, looking at training performance on the high and low models, we can see a common pattern in GBMs: continually increasing training performance even as test performance levels off. This behavior is sometimes interpreted as evidence of overfitting, an interpretation I don't share as long as holdout performance is continuing to improve.

plt.figure(figsize=(12,6))
for marker, (mname, preds) in zip(["-", "--", ":"], stage_preds.iteritems()):
    for c, (tt_set, target) in zip(['#ff4444', '#4444ff'], [('train', train_t), ('test', test_t)]):
        aucs = map(lambda x: roc_auc_score(target, x[:,1]), preds[tt_set])
        label = "%s: %s" % (mname, tt_set) + (" (best: %.3f @ tree %d)" % (max(aucs), np.array(aucs).argmax()+1) if tt_set == 'test' else "")
        plt.plot(aucs, marker, c=c, label=label)
plt.ylim(0.93, 1)
plt.title("Area under ROC curve (AUC) for each tree in each GBM")
plt.xlabel("Tree #")
plt.ylabel("AUC")
plt.legend(loc="lower center")
plt.show()

### Next up...

In this post, we've gone through some of the mechanical aspects and considerations of fitting GBMs. The next post in the series will focus more on understanding exactly how GBMs are making decisions and contrasting that with other model types. The plots below are a teaser of things to come.
def gen_plot(mname, model, xlim=(-4, 4), ylim=(-4, 4), gridsize=0.02, marker="."):
    xx, yy = np.meshgrid(np.arange(xlim[0], xlim[1], gridsize), np.arange(ylim[0], ylim[1], gridsize))
    plt.xlim(*xlim)
    plt.ylim(*ylim)
    Z = model.predict_proba(zip(xx.ravel(), yy.ravel()))
    plt.title(mname)
    plt.pcolormesh(xx, yy, Z[:,1].reshape(xx.shape), cmap=my_cm)
    plt.scatter(raw_data[:,0], raw_data[:,1], c=raw_target, alpha=0.8, marker=marker, cmap=my_cm)

for mname, model in models:
    plt.figure(figsize=(16,4))
    plt.subplot(1,3,1)
    gen_plot("%s: full view" % mname, model)
    plt.axhspan(-2, 2, 0.125, 0.375, fill=False, lw=1.5)
    plt.subplot(1,3,2)
    gen_plot("%s: detailed view" % mname, model, (-3, -1), (-2, 2), 0.005)
    plt.axhspan(-1, 0, 0.5, 0.75, fill=False, lw=1.5)
    plt.subplot(1,3,3)
    gen_plot("%s: extreme closeup" % mname, model, (-2, -1.5), (-1, 0), 0.00125)
    plt.show()

Code for visualizing decision boundaries modified from the excellent scikit-learn documentation.
# Question about ergodicity and the evolution of the probability distribution under Liouville's theorem

According to Liouville's theorem, the probability distribution function $$\rho$$ evolves in phase space according to $$\frac{d \rho}{d t} = \frac{\partial \rho}{\partial t}+\left\{\rho,H\right\}_{P.B} =0$$ When the system is steady, $$\frac{\partial \rho}{\partial t}=0$$, so we have $$\left\{\rho,H\right\}_{P.B} =0$$ This holds under two conditions: the first is that $$\rho$$ is uniform throughout the whole phase space; the second is that $$\rho$$ is a function of the Hamiltonian $$H(q,p)$$, as in the case of the canonical ensemble, $$\rho=\exp\left(\frac{-H(q,p)}{k_B T}\right).$$ Here is my question: if the system is in equilibrium, then the system is ergodic. In this case, the trajectory formed by the evolution of the system under Hamilton's equations will visit every point of the phase space. Then $$\rho$$ should be uniform throughout the whole phase space. This seems to violate the second case mentioned above: since $$\frac{d \rho}{d t}=0$$ still holds, the canonical ensemble would then have to be non-ergodic. I don't know where I am wrong.

Motion governed by a time-independent Hamiltonian $$H(q,p)$$ conserves energy. Therefore, only the microstates with the same energy are visited under such Hamiltonian motion, not the whole phase space. That's what happens with the microcanonical ensemble.
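The answer can be illustrated numerically (my own sketch, with an assumed one-dimensional Hamiltonian): integrating Hamilton's equations for a harmonic oscillator shows that the trajectory stays on a single constant-energy contour, so it never explores the whole phase space, only the energy shell.

```python
import numpy as np

def hamiltonian(q, p):
    return 0.5 * p**2 + 0.5 * q**2      # H = p^2/2 + q^2/2 (units with m = k = 1)

# Symplectic (leapfrog) integration of dq/dt = p, dp/dt = -q
dt, n_steps = 0.01, 100000
q, p = 1.0, 0.0
energies = []
for _ in range(n_steps):
    p -= 0.5 * dt * q
    q += dt * p
    p -= 0.5 * dt * q
    energies.append(hamiltonian(q, p))

print(min(energies), max(energies))   # both ~0.5: the orbit stays on one energy shell
```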
# Math Help - Null Sets

1. ## Null Sets

Hello all, I am trying to prove that a countable union of null sets is null, i.e. $\displaystyle\bigcup_{n=1}^{\infty} X_n$ is null when each $X_n$ is null. Now I am a bit confused about the notion of an infinite union. Is it enough to show that $\displaystyle\bigcup_{n=1}^M X_n$ is null for all $M \in \mathbb{N}$? I have managed to do that in the following way: since you pick $\epsilon$, then for $X_1,\ldots, X_M,$ these are all null so you can define sequences $\{ I_{n_r} \}$ with $\displaystyle\sum_{r=0}^{\infty}m(I_{n_r}) < \frac{\epsilon}{M+1}$. Hence we define a sequence $\{ I_r\}$ by putting all the sequence elements of $\{I_{1_r}\}$ up to $\{ I_{M_r}\}$ in the sequence, i.e. the sequence $\{ I_{1_1}, I_{2_1},\ldots, I_{M_1}, I_{1_2},I_{2_2},\ldots, I_{M_2}, \ldots \}$. Of course $\displaystyle\bigcup_{n=1}^{M} X_n \subseteq \displaystyle\bigcup_{r=1}^{\infty} I_r$, and we have that $\displaystyle\sum_{r=1}^{\infty}m(I_r) < M\frac{\epsilon}{M+1} < \epsilon$. Therefore $\displaystyle\bigcup_{n=1}^{M} X_n$ is null for all $M\in\mathbb{N}$. But I am confused about the infinite union: can I deduce it from the above? What about choosing $\epsilon$, and then for each $X_n$ choosing a sequence $\{ I_{n_r}\}$ such that $\displaystyle\sum_{r=1}^{\infty} m(I_{n_r}) < \frac{\epsilon}{2^{n+1}}$? Then I would have to define a sequence $\{ I_r\}$ which somehow contains infinitely many of the above sequences, so that I get $\displaystyle\sum_{r=1}^{\infty} m(I_r) < \epsilon$. But can I construct a sequence like that? Any help would be appreciated, thank you

2. Are you dealing with sets in a measure space? If so, then you can use the property that the measure, call it $\mu$, is countably subadditive. That is, if $\{E_n\}$ is a countable sequence of sets in your $\sigma$-algebra, then $\displaystyle \mu(\bigcup E_n)\leq \sum \mu(E_n)$ In particular, if all of the sets $E_n$ are null sets, then the sum on the right is zero, which proves your result.

3. Thanks for your response. Everything is in $\mathbb{R}$. We just started the course and haven't defined measure yet, so we're expected to do it from first principles at the moment, but no doubt we will study what you mentioned at some point. Although I'm not too sure about this, the course description says that we will define the Lebesgue integral, and measure theory will follow as a corollary. I feel a bit more confident about my solution. Indeed I can define a sequence just by writing the infinite list of sequences in a table starting at the top, writing all the elements of the first sequence in a row, then in the next row writing the elements of the second sequence, and so on; this is of course countable for the same reason $\mathbb{Q}$ is, by drawing a line that "snakes" from the top left of the table.

4. Originally Posted by slevvio
Hello all, I am trying to prove that a countable union of null sets is null, i.e. $\displaystyle\bigcup_{n=1}^{\infty} X_n$ is null when each $X_n$ is null. Now I am a bit confused about the notion of an infinite union. Is it enough to show that $\displaystyle\bigcup_{n=1}^M X_n$ is null for all $M \in \mathbb{N}$?
No, you can't. That would show that the union of any finite collection of null sets is null. No matter how large M is, it is still finite.
I have managed to do that in the following way: since you pick $\epsilon$, then for $X_1,\ldots, X_M,$ these are all null so you can define sequences $\{ I_{n_r} \}$ with $\displaystyle\sum_{r=0}^{\infty}m(I_{n_r}) < \frac{\epsilon}{M+1}$.
Hence we define a sequence $\{ I_r\}$ by putting all the sequence elements of $\{I_{1_r}\}$ up to $\{ I_{M_r}\}$ in the sequence, i.e. the sequence $\{ I_{1_1}, I_{2_1},\ldots, I_{M_1}, I_{1_2},I_{2_2},\ldots, I_{M_2}, \ldots \}$. Of course $\displaystyle\bigcup_{n=1}^{M} X_n \subseteq \displaystyle\bigcup_{r=1}^{\infty} I_r$, and we have that $\displaystyle\sum_{r=1}^{\infty}m(I_r) < M\frac{\epsilon}{M+1} < \epsilon$. Therefore $\displaystyle\bigcup_{n=1}^{M} X_n$ is null for all $M\in\mathbb{N}$. But I am confused about the infinite union: can I deduce it from the above? What about choosing $\epsilon$, and then for each $X_n$ choosing a sequence $\{ I_{n_r}\}$ such that $\displaystyle\sum_{r=1}^{\infty} m(I_{n_r}) < \frac{\epsilon}{2^{n+1}}$? Then I would have to define a sequence $\{ I_r\}$ which somehow contains infinitely many of the above sequences, so that I get $\displaystyle\sum_{r=1}^{\infty} m(I_r) < \epsilon$. But can I construct a sequence like that? Any help would be appreciated, thank you

5. Originally Posted by slevvio
Hello all, I am trying to prove that a countable union of null sets is null, i.e. $\displaystyle\bigcup_{n=1}^{\infty} X_n$ is null when each $X_n$ is null. Now I am a bit confused about the notion of an infinite union. Is it enough to show that $\displaystyle\bigcup_{n=1}^M X_n$ is null for all $M \in \mathbb{N}$? [...] But can I construct a sequence like that? Any help would be appreciated, thank you
As HallsofIvy pointed out, it looks as though you did it for an arbitrary finite number of sets, but I think what you're saying makes sense. Let $\varepsilon>0$ be given. Since each $X_n$ is null we can choose an infinite cover $\displaystyle \Omega_n=\{I_{r,n}\}_{r\in\mathbb{N}}$ of it such that $\displaystyle \sum_{r=1}^{\infty}\mu\left(I_{r,n}\right)<\frac{\varepsilon}{2^{n+1}}$. So, let $\displaystyle \Omega=\bigcup_{n=1}^{\infty}\Omega_n=\left\{I_m\right\}_{m\in\mathbb{N}}$. Note then that it shouldn't be too hard (recalling countable subadditivity) to prove that $\displaystyle \sum_{m=1}^{\infty}\mu\left(I_m\right)\leqslant \sum_{n=1}^{\infty}\sum_{r=1}^{\infty}\mu\left(I_{r,n}\right)<\sum_{n=1}^{\infty}\frac{\varepsilon}{2^{n+1}}=\frac{\varepsilon}{2}<\varepsilon$
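The "snaking" enumeration described in post #3 can be made explicit (a small illustration using interval lengths only, with an assumed geometric split of each ε/2^(n+1) budget): enumerate the doubly indexed family along diagonals and check that the combined total stays below ε.

```python
from fractions import Fraction
from itertools import islice

eps = Fraction(1, 10)

def length(n, r):
    # an assumed geometric split of the budget eps/2^(n+1) allotted to the n-th cover
    return eps / 2 ** (n + 1) / 2 ** r

def diagonals():
    """Diagonal ("snaking") enumeration of the index pairs (n, r), n, r >= 1."""
    d = 2
    while True:
        for n in range(1, d):
            yield (n, d - n)
        d += 1

# Truncated total over the first few thousand intervals of the combined cover
total = sum(length(n, r) for n, r in islice(diagonals(), 5000))
print(total < eps, float(total), float(eps))   # True, ~0.05, 0.1
```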
# Pulse wave

The shape of the pulse wave is defined by its duty cycle d, which is the ratio between the pulse duration (τ) and the period (T)

Duty cycles

A pulse wave or pulse train is a type of non-sinusoidal waveform that includes square waves (duty cycle of 50%) and similarly periodic but asymmetrical waves (duty cycles other than 50%). It is a term used in synthesizer programming, and is a typical waveform available on many synthesizers. The exact shape of the wave is determined by the duty cycle or pulse width of the oscillator output. In many synthesizers, the duty cycle can be modulated (pulse-width modulation) for a more dynamic timbre.[1] The pulse wave is also known as the rectangular wave, the periodic version of the rectangular function. The average level of a rectangular wave is also given by the duty cycle; therefore, by varying the on and off periods and then averaging, it is possible to represent any value between the two limiting levels. This is the basis of pulse-width modulation.

## Frequency-domain representation

The Fourier series expansion for a rectangular pulse wave with period ${\displaystyle T}$, amplitude ${\displaystyle A}$ and pulse length ${\displaystyle \tau }$ is[2] ${\displaystyle x(t)=A{\frac {\tau }{T}}+{\frac {2A}{\pi }}\sum _{n=1}^{\infty }\left({\frac {1}{n}}\sin \left(\pi n{\frac {\tau }{T}}\right)\cos \left(2\pi nft\right)\right)}$ where ${\displaystyle f={\frac {1}{T}}}$. Equivalently, if duty cycle ${\displaystyle d={\frac {\tau }{T}}}$ is used, and ${\displaystyle \omega =2\pi f}$: ${\displaystyle x(t)=Ad+{\frac {2A}{\pi }}\sum _{n=1}^{\infty }\left({\frac {1}{n}}\sin \left(\pi nd\right)\cos \left(n\omega t\right)\right)}$ Note that, for symmetry, the starting time (${\displaystyle t=0}$) in this expansion is halfway through the first pulse. Alternatively, ${\displaystyle x(t)}$ can be written using the sinc function, using the definition ${\displaystyle \operatorname {sinc} x={\frac {\sin \pi x}{\pi x}}}$, as ${\displaystyle x(t)=A{\frac {\tau }{T}}\left(1+2\sum _{n=1}^{\infty }\left(\operatorname {sinc} \left(n{\frac {\tau }{T}}\right)\cos \left(2\pi nft\right)\right)\right)}$ or with ${\displaystyle d={\frac {\tau }{T}}}$ as ${\displaystyle x(t)=Ad\left(1+2\sum _{n=1}^{\infty }\left(\operatorname {sinc} \left(nd\right)\cos \left(2\pi nft\right)\right)\right)}$

## Generation

A pulse wave can be created by subtracting a sawtooth wave from a phase-shifted version of itself. If the sawtooth waves are bandlimited, the resulting pulse wave is bandlimited, too. A single ramp wave (sawtooth or triangle) applied to an input of a comparator produces a pulse wave that is not bandlimited. A voltage applied to the other input of the comparator determines the pulse width.

Fourier series of a 33.3% pulse wave, first fifty harmonics (summation in red)

## Applications

The harmonic spectrum of a pulse wave is determined by the duty cycle.[3][4][5][6][7][8][9][10] Acoustically, the rectangular wave has been described variously as having a narrow[11]/thin,[1][4][5][12][13] nasal[1][4][5][11]/buzzy[13]/biting,[12] clear,[3] resonant,[3] rich,[4][13] round[4][13] and bright[13] sound.
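The frequency-domain expansion above is easy to verify numerically (a short illustration; the duty cycle and the number of harmonics are arbitrary choices): summing the first harmonics of the series reproduces the rectangular pulse, up to Gibbs ripple at the edges.

```python
import numpy as np

A, d, f, n_harm = 1.0, 1.0 / 3.0, 1.0, 50     # amplitude, duty cycle, frequency [Hz], harmonics
t = np.linspace(0.0, 2.0, 2000)

# Partial sum of x(t) = A*d + (2A/pi) * sum_n (1/n) * sin(pi*n*d) * cos(2*pi*n*f*t)
x = A * d + (2.0 * A / np.pi) * sum(
    np.sin(np.pi * n * d) * np.cos(2.0 * np.pi * n * f * t) / n
    for n in range(1, n_harm + 1))

# Ideal pulse wave with the same convention: t = 0 falls in the middle of a pulse
ideal = A * ((((t * f) + d / 2.0) % 1.0) < d).astype(float)

print(np.mean(np.abs(x - ideal)))   # small; the largest deviations sit right at the edges (Gibbs)
```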
Pulse waves are used in many Steve Winwood songs, such as "While You See a Chance".[11] In digital electronics, a digital signal is a pulse train (a pulse amplitude modulated signal), a sequence of fixed-width square wave electrical pulses or light pulses, each occupying one of two discrete levels of amplitude.[14][15] These electronic pulse trains are typically generated by metal–oxide–semiconductor field-effect transistor (MOSFET) devices due to their rapid on–off electronic switching behavior, in contrast to BJT transistors which slowly generate signals more closely resembling sine waves.[16] ## References 1. ^ a b c Reid, Gordon (February 2000). "Synth Secrets: Modulation", SoundOnSound.com. Retrieved May 4, 2018. 2. ^ Smith, Steven W. The Scientist & Engineer's Guide to Digital Signal Processing ISBN 978-0966017632 3. ^ a b c Holmes, Thom (2015). Electronic and Experimental Music, p.230. Routledge. ISBN 9781317410232. 4. Souvignier, Todd (2003). Loops and Grooves, p.12. Hal Leonard. ISBN 9780634048135. 5. ^ a b c Cann, Simon (2011). How to Make a Noise, [unpaginated]. BookBaby. ISBN 9780955495540. 6. ^ Pejrolo, Andrea and Metcalfe, Scott B. (2017). Creating Sounds from Scratch, p.56. Oxford University Press. ISBN 9780199921881. 7. ^ Snoman, Rick (2013). Dance Music Manual, p.11. Taylor & Francis. ISBN 9781136115745. 8. ^ Skiadas, Christos H. and Skiadas, Charilaos; eds. (2017). Handbook of Applications of Chaos Theory, [unpaginated]. CRC Press. ISBN 9781315356549. 9. ^ "Electronic Music Interactive: 14. Square and Rectangle Waves", UOregon.edu. 10. ^ Hartmann, William M. (2004). Signals, Sound, and Sensation, p.109. Springer Science & Business Media. ISBN 9781563962837. 11. ^ a b c Kovarsky, Jerry (Jan 15, 2015). "Synth Soloing in the Style of Steve Winwood". KeyboardMag.com. Retrieved May 4, 2018. 12. ^ a b Aikin, Jim (2004). Power Tools for Synthesizer Programming, p.55-56. Hal Leonard. ISBN 9781617745089. 13. Hurtig, Brent (1988). Synthesizer Basics, p.23. Hal Leonard. ISBN 9780881887143. 14. ^ B. SOMANATHAN NAIR (2002). Digital electronics and logic design. PHI Learning Pvt. Ltd. p. 289. ISBN 9788120319561. Digital signals are fixed-width pulses, which occupy only one of two levels of amplitude. 15. ^ Joseph Migga Kizza (2005). Computer Network Security. Springer Science & Business Media. ISBN 9780387204734. 16. ^ "Applying MOSFETs to Today's Power-Switching Designs". Electronic Design. 23 May 2016. Retrieved 10 August 2019.
# Find minimum value of the expression

1. May 13, 2014

### utkarshakash

1. The problem statement, all variables and given/known data

Let n be a positive integer. Determine the smallest possible value of $$|p(1)|^2+|p(2)|^2 + \cdots + |p(n+3)|^2$$ over all monic polynomials p of degree n.

3. The attempt at a solution

Let the polynomial be $x^n+c_{n-1} x^{n-1} +\cdots+ c_1x+c_0$. Then p(1) = $1+c_{n-1}+\cdots+c_1+c_0$. Similarly I can write p(2) and so on, square them and add them together to get a messy expression. But after this, I don't see how to find its minimum value. The final expression is itself difficult to handle. I'm sure I'm missing an easier way to do this problem.

2. May 13, 2014

### Staff: Mentor

You don't need the full expressions to find derivatives with respect to the coefficients.

3. May 13, 2014

### utkarshakash

Derivative with respect to which coefficient? There are so many.

4. May 13, 2014

### Ray Vickson

You have n variables $c_0,c_1, \ldots, c_{n-1}$ and a function $$f(c_0,c_1, \ldots, c_{n-1}) = \sum_{k=1}^{n+3} [k^n + c_{n-1} k^{n-1} + \cdots + c_1 k + c_0]^2$$ You minimize $f$ by setting all its partial derivatives to zero; that is, by setting up and solving the equations $$\frac{\partial f}{\partial c_i} = 0, \: i = 0, 1, 2, \ldots, n-1$$
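Ray Vickson's setup is an ordinary linear least-squares problem in the coefficients, so for any concrete n it can be solved numerically. A minimal sketch (our own illustration, not from the thread; the helper name `min_sum_of_squares` is hypothetical):

```python
import numpy as np

def min_sum_of_squares(n):
    """Minimize sum_{k=1}^{n+3} (k^n + c_{n-1} k^{n-1} + ... + c_0)^2 over c."""
    k = np.arange(1, n + 4, dtype=float)      # evaluation points 1..n+3
    V = np.vander(k, n, increasing=True)      # columns k^0, k^1, ..., k^(n-1)
    b = -k**n                                 # move the monic term to the right-hand side
    c, residual, *_ = np.linalg.lstsq(V, b, rcond=None)
    return c, float(residual[0]) if residual.size else 0.0

for n in (1, 2, 3):
    c, val = min_sum_of_squares(n)
    print(n, np.round(c, 4), round(val, 6))
```

For n = 1, for example, this returns c0 = -2.5 and the minimum value 5, which matches minimizing (k + c0)^2 over k = 1..4 by hand.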
I have figured out that $\sqrt{1- \sqrt{1+ \sqrt{1- \sqrt{\ldots } }}}$ is a divergent nested radical. But now I am stuck with another problem:
# Negative binomial — IRR interpretation for predictors

I have a zero-inflated negative binomial model. I have used incidence rate ratios and I'm trying to interpret the coefficients in relation to my predictors. Most of my predictors are continuous variables of census data -- i.e., % of the population that is Hispanic; % of the population less than age 18, etc. I know that the IRR is normally interpreted as the rate ratio for a 1-unit increase in the independent variable, but what does this mean in terms of these continuous predictors -- does this mean the IRR is the estimated rate ratio for a 1% increase in % Hispanic? Is there a way I can scale this so it can be interpreted as the estimated rate ratio for a 10% increase in % Hispanic?

Also, one of my IRRs is 20. Does that seem unusually high?

• The last question cannot be answered without knowing more about your outcome and the predictors. A factor of 20 increase for a 1 percentage point increase in X could make sense in some contexts. You might also find the second example useful: ats.ucla.edu/stat/stata/output/stata_nbreg_output.htm – Dimitriy V. Masterov Jul 9 '13 at 17:26
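On the scaling question, a standard device (sketched below with a made-up coefficient, not the poster's data) relies on the log link: if the fitted coefficient for % Hispanic is β, the IRR for a 1-point increase is exp(β), and a 10-point increase multiplies the rate by exp(10β) = IRR^10; equivalently, dividing the predictor by 10 before fitting makes the reported IRR refer directly to a 10-point change.

```python
import numpy as np

# Hypothetical coefficient for "% Hispanic" from a log-link (negative binomial) model.
beta = 0.05                      # assumed value, for illustration only
irr_1pt = np.exp(beta)           # rate ratio for a 1 percentage-point increase
irr_10pt = np.exp(10 * beta)     # rate ratio for a 10 percentage-point increase
print(irr_1pt, irr_10pt, irr_1pt**10)   # the last two coincide: IRR_10 = IRR_1**10

# Equivalent alternative: rescale the predictor before fitting, e.g.
# df["pct_hispanic_10"] = df["pct_hispanic"] / 10
# so that the model's reported IRR is per 10-point change.
```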
# How do you sketch a graph with x-intercept of 1 and y-intercept of -5?

Assuming that the function is a line, the graph is immediately obtained: the $x$ intercept is $1$, so the point $\left(1 , 0\right)$ belongs to the line. The same goes for the $y$ intercept: it tells us that the point $\left(0 , - 5\right)$ belongs to the line. Drawing the straight line through these two points gives the graph; its slope is $\frac{0-(-5)}{1-0}=5$, so its equation is $y = 5x - 5$.
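A minimal plotting sketch (our own, assuming the line y = 5x - 5 derived above):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 3, 100)
y = 5 * x - 5                      # line through (1, 0) and (0, -5)

plt.plot(x, y)
plt.scatter([1, 0], [0, -5])       # mark the x- and y-intercepts
plt.axhline(0, color="gray", linewidth=0.5)
plt.axvline(0, color="gray", linewidth=0.5)
plt.show()
```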
# The probability that the U.S. unemployment rate will be less than 3 percent? Type in the correct answer from those

###### Question:

What is the probability that the U.S. unemployment rate will be less than 3 percent? Type in the correct answer from those listed below (i.e., type in the number associated with the answer, "1" etc.) in your Excel spreadsheet, and on a sheet of paper show/explain your work for arriving at your chosen answer (what you should do here is proceed as in the work/explanations developed in the practice homework). 0.68 percent, 0.0068, None of the above.
In [1]:
import spot
spot.setup()
from spot.jupyter import display_inline

This notebook presents functions that can be used to solve the Reactive Synthesis problem using games. If you are not familiar with how Spot represents games, please read the games notebook first.

In Reactive Synthesis, the goal is to build an electronic circuit that reacts to some input signals by producing some output signals, under some LTL constraints that tie both input and output. Of course the input signals are not controllable, so our only job is to decide what output signal to produce.

# Reactive synthesis in four steps

A strategy/control circuit can be derived more conveniently from an LTL/PSL specification. The process is decomposed into four steps:

• Creating the game
• Solving the game
• Simplifying the winning strategy
• Building the circuit from the strategy

Each of these steps is parametrized by a structure called synthesis_info. This structure stores some additional data needed to pass fine-tuning options or to store statistics.

The ltl_to_game function takes the LTL specification, and the list of controllable atomic propositions (or output signals). It returns a two-player game, where player 0 plays the input variables (and wants to invalidate the acceptance condition), and player 1 plays the output variables (and wants to satisfy the acceptance condition). The conversion from LTL to parity automata can use one of many algorithms, which can be specified in the synthesis_info structure (this works like the --algo= option of ltlsynt).

In [2]:
si = spot.synthesis_info()
si.s = spot.synthesis_info.algo_LAR # Use LAR algorithm
game = spot.ltl_to_game("G((F(i0) && F(i1))->(G(i1<->(X(o0)))))", ["o0"], si)
print("game has", game.num_states(), "states and", game.num_edges(), "edges")
print("output propositions are:", ", ".join(spot.get_synthesis_output_aps(game)))
display(game)

game has 29 states and 55 edges
output propositions are: o0

Solving the game is done with solve_game() as with any game. There is also a version that takes a synthesis_info as second argument in case the time it takes has to be recorded. Here passing si or not makes no difference.

In [3]:
print("Found a solution:", spot.solve_game(game, si))
spot.highlight_strategy(game)
game.show('.g')

Found a solution: True

Out[3]:

Once a strategy has been found, it can be extracted as an automaton and simplified using 6 different levels (the default is 2). The output should be interpreted as a Mealy automaton, where transitions have the form (ins)&(outs), where ins and outs are Boolean formulas representing possible inputs and outputs (they could be more than just conjunctions of atomic propositions). Mealy machines with this type of labels are called "separated" in Spot.
In [4]:
# We have different levels of simplification:
# 0 : No simplification
# 1 : bisimulation-based reduction
# 2 : bisimulation-based reduction with output assignment
# 3 : SAT-based exact minimization
# 4 : First 1 then 3 (exact)
# 5 : First 2 then 3 (not exact)
descr = ["0 : No simplification",
         "1 : bisimulation-based reduction",
         "2 : bisimulation-based reduction with output assignment",
         "3 : SAT-based exact minimization",
         "4 : First 1 then 3 (exact)",
         "5 : First 2 then 3 (not exact)"]
for i in range(6):
    print("simplification lvl ", descr[i])
    si.minimize_lvl = i
    mealy = spot.solved_game_to_separated_mealy(game, si)
    display(mealy.show())

simplification lvl 0 : No simplification
simplification lvl 1 : bisimulation-based reduction
simplification lvl 2 : bisimulation-based reduction with output assignment
simplification lvl 3 : SAT-based exact minimization
simplification lvl 4 : First 1 then 3 (exact)
simplification lvl 5 : First 2 then 3 (not exact)

If needed, a separated Mealy machine can be turned into game form using split_separated_mealy(), which is more efficient than split_2step().

In [5]:
display_inline(mealy, spot.split_separated_mealy(mealy), per_row=2)

# Converting the separated mealy machine to AIGER

A separated mealy machine can be converted to a circuit in the AIGER format using mealy_machine_to_aig(). This takes a second argument specifying what type of encoding to use (exactly like ltlsynt's --aiger=... option).

In this case, the circuit is quite simple: o0 should be the negation of the previous value of i1. This is done by storing the value of i1 in a latch. And the value of i0 can be ignored.

In [6]:
aig = spot.mealy_machine_to_aig(mealy, "isop")
display(aig)

While we are at it, let us mention that you can render those circuits horizontally as follows:

In [7]:
aig.show('h')

Out[7]:

To encode the circuit in the aig format (ASCII version) use:

In [8]:
print(aig.to_str())

aag 3 2 1 1 0
2
4
6 3
7
i0 i1
i1 i0
o0 o0

# Adding more inputs and outputs by force

It can happen that propositions declared as output are omitted in the aig circuit (either because they are not part of the specification, or because they do not appear in the winning strategy). In that case those outputs can take arbitrary values. For instance, the following constraint mentions o1 and i1, but those atomic propositions are actually unconstrained (F(... U x) can be simplified to Fx). Without any indication, the circuit built will ignore those variables:

In [9]:
game = spot.ltl_to_game("i0 <-> F((Go1 -> Fi1) U o0)", ["o0", "o1"])
spot.solve_game(game)
spot.highlight_strategy(game)
display(game)
mealy = spot.solved_game_to_separated_mealy(game)
display(mealy)
aig = spot.mealy_machine_to_aig(mealy, "isop")
display(aig)

To force the presence of extra variables in the circuit, they can be passed to mealy_machine_to_aig().

In [10]:
display(spot.mealy_machine_to_aig(mealy, "isop", ["i0", "i1"], ["o0", "o1"]))

# Combining mealy machines

It can happen that the complete specification of the controller can be separated into sub-specifications with DISJOINT output propositions, see Finkbeiner et al. Specification Decomposition for Reactive Synthesis. This results in multiple mealy machines which have to be converted into one single aiger circuit. This can be done using the function mealy_machines_to_aig(), which takes a vector of separated mealy machines as argument. In order for this to work, all mealy machines need to share the same bdd_dict.
This can be ensured by passing a common options structure.

In [17]:
g1 = spot.ltl_to_game("G((i0 xor i1) <-> o0)", ["o0"], si)
g2 = spot.ltl_to_game("G((i0 xor i1) <-> (!o1))", ["o1"], si)
spot.solve_game(g1)
spot.highlight_strategy(g1)
spot.solve_game(g2)
spot.highlight_strategy(g2)
print("Solved games:")
display_inline(g1, g2)
strat1 = spot.solved_game_to_separated_mealy(g1)
strat2 = spot.solved_game_to_separated_mealy(g2)
print("Reduced strategies:")
display_inline(strat1, strat2)
print("Circuit implementing both machines:")
aig = spot.mealy_machines_to_aig([strat1, strat2], "isop")
display(aig)

Solved games:
Reduced strategies:
Circuit implementing both machines:

Note that we do not support the full AIGER syntax. Our restrictions correspond to the conventions used in the type of AIGER file we output:

• Input variables start at index 2 and are consecutively numbered.
• Latch variables start at index (1 + #inputs)×2 and are consecutively numbered.
• If some inputs or outputs are named in comments, all of them have to be named.
• Gate number $n$ can only connect to latches, inputs, or previously defined gates ($<n$).

In [12]:
aag_txt = """aag 5 2 0 2 3
2
4
10
6
6 2 4
8 3 5
10 7 9
i0 a
i1 b
o0 c
o1 d"""

In [13]:
this_aig = spot.aiger_circuit(aag_txt)
display(this_aig)

In [14]:
print(this_aig.to_str())

aag 5 2 0 2 3
2
4
10
6
6 2 4
8 3 5
10 7 9
i0 a
i1 b
o0 c
o1 d

In [15]:
print(this_aig.gates())

((2, 4), (3, 5), (7, 9))

An aiger circuit can be transformed into a monitor/mealy machine. This can be used for instance to check that it does not intersect the negation of the specification.

In [16]:
this_aig.as_automaton()

Out[16]:
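As a concrete illustration of that last remark, here is a sketch (our own addition, not a cell from the notebook) of how such a check might look: translate the negated specification and test whether the circuit's automaton intersects it. The formula string and variable names are assumptions chosen for illustration; `spot.translate`, `as_automaton()` and `intersects()` are the entry points we believe apply, but treat the exact calls as a sketch rather than the notebook's own recipe.

```python
import spot

f = "G((i0 xor i1) <-> o0)"                  # hypothetical specification for the circuit
neg_spec = spot.translate("!(" + f + ")")    # automaton of the negated specification

circuit = this_aig.as_automaton()            # the AIGER circuit, viewed as an automaton
# If the circuit's language does not intersect the negated specification,
# no behaviour of the circuit violates the specification.
print("circuit violates spec:", circuit.intersects(neg_spec))
```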
Boost.Hana 1.7.1 Your standard library for metaprogramming

boost::hana::detail::nested_than< Algorithm > Struct Template Reference

## Description

### template<typename Algorithm> struct boost::hana::detail::nested_than< Algorithm >

Provides a .than static constexpr function object.

When creating a binary function object of type Algo whose signature is A x B -> Return, nested_than<Algo> can be used as a base class of Algo. Doing so will provide a static constexpr member called than, which has the following signature: B -> A -> Return

Note that the function object Algo must be default-constructible, since it will be called as Algo{}(arguments...).

Note: This function object is especially useful because it takes care of avoiding ODR violations caused by the nested static constexpr member.
# American Institute of Mathematical Sciences

September 2016, 36(9): 4619-4635. doi: 10.3934/dcds.2016001

## Minimality of the horocycle flow on laminations by hyperbolic surfaces with non-trivial topology

1 GeoDynApp - ECSING Group, Spain
2 Institut de Recherche Mathématiques de Rennes, Université de Rennes 1, F-35042 Rennes, France
3 Instituto de Matemática y Estadística Rafael Laguardia, Facultad de Ingeniería, Universidad de la República, J. Herrera y Reissig 565, C.P. 11300 Montevideo
4 Universidad Nacional Autónoma de México, Apartado Postal 273, Admon. de correos #3, C.P. 62251 Cuernavaca, Morelos

Received June 2015 Revised March 2016 Published May 2016

We consider a minimal compact lamination by hyperbolic surfaces. We prove that if no leaf is simply connected, then the horocycle flow on its unitary tangent bundle is minimal.

Citation: Fernando Alcalde Cuesta, Françoise Dal'Bo, Matilde Martínez, Alberto Verjovsky. Minimality of the horocycle flow on laminations by hyperbolic surfaces with non-trivial topology. Discrete & Continuous Dynamical Systems - A, 2016, 36 (9) : 4619-4635. doi: 10.3934/dcds.2016001
## Long range scattering and modified wave operators for some Hartree type equations. II. (English) Zbl 1024.35084

The authors study the scattering theory for a class of Hartree type equations with long range interactions in space dimension $$n \geq 3$$, including Hartree equations with potential $$V(x)=\lambda |x|^{-\gamma}, 0 < \gamma \leq 1$$. They prove the existence of modified wave operators. The asymptotic behaviour in time of solutions in the range of the wave operators is also investigated. For Part I, cf. Rev. Math. Phys. 12, 361-429 (2000; Zbl 1044.35041).

### MSC:

35P25 Scattering theory for PDEs
81U99 Quantum scattering theory
35B40 Asymptotic behavior of solutions to PDEs
35Q40 PDEs in connection with quantum mechanics

### Keywords:

existence; asymptotic behaviour
Viser treff 932-951 av 1613 • #### Multiphase Flow in Porous Media with Emphasis on CO2 Sequestration  (Doctoral thesis, 2011-12-19) • #### Multiphoton ionization and stabilization of helium in superintense xuv fields  (Peer reviewed; Journal article, 2011) Multiphoton ionization of helium is investigated in the superintense field regime, with particular emphasis on the role of the electron-electron interaction in the ionization and stabilization dynamics. To accomplish this, ... • #### Multiple transpolar auroral arcs reveal insight about coupling processes in the Earth’s magnetotail  (Journal article; Peer reviewed, 2020) A distinct class of aurora, called transpolar auroral arc (TPA) (in some cases called “theta” aurora), appears in the extremely high-latitude ionosphere of the Earth when interplanetary magnetic field (IMF) is northward. ... • #### Multiplicity and transverse momentum evolution of charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions at the LHC  (Peer reviewed; Journal article, 2016) We report on two-particle charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions as a function of the pseudorapidity and azimuthal angle difference, $\mathrm{\Delta}\eta$ and $\mathrm{\Delta}\varphi$ respectively. ... • #### Multiplicity dependence of (anti-)deuteron production in pp collisions at √s=7 TeV  (Peer reviewed; Journal article, 2019-05-22) Abstract In this letter, the production of deuterons and anti-deuterons in pp collisions at TeV is studied as a function of the charged-particle multiplicity density at mid-rapidity with the ALICE detector at the LHC. ... • #### Multiplicity dependence of (multi-)strange hadron production in proton-proton collisions at √s = 13 TeV  (Journal article; Peer reviewed, 2020) The production rates and the transverse momentum distribution of strange hadrons at mid-rapidity (|y|<0.5) are measured in proton-proton collisions at s√ = 13 TeV as a function of the charged particle multiplicity, using ... • #### Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV  (Peer reviewed; Journal article, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity $-0.5 < y < 0$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... • #### Multiplicity dependence of identified particle production in proton-proton collisions with ALICE  (Peer reviewed; Journal article, 2017-11) The study of identified particle production as a function of transverse momentum $p_{\text{T}}$ and event multiplicity in proton-proton (pp) collisions at different center-of-mass energies $\sqrt{s}$ is a key tool ... • #### Multiplicity dependence of inclusive J/ψ production at midrapidity in pp collisions at at √s=13 TeV  (Journal article; Peer reviewed, 2020) Measurements of the inclusive J/ψ yield as a function of charged-particle pseudorapidity density in pp collisions at TeV with ALICE at the LHC are reported. The J/ψ meson yield is measured at midrapidity () in the ... • #### Multiplicity dependence of jet-like two-particle correlations in p-Pb collisions at $\sqrt{s_NN}$ = 5.02 TeV  (Peer reviewed; Journal article, 2015-02-04) Two-particle angular correlations between unidentified charged trigger and associated particles are measured by the ALICE detector in p–Pb collisions at a nucleon–nucleon centre-of-mass energy of 5.02 TeV. The transverse-momentum ... 
• #### Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE  (Peer reviewed; Journal article, 2017-11) Two-particle correlations in relative azimuthal angle (Δ ϕ ) and pseudorapidity (Δ η ) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ... • #### Multiplicity dependence of K*(892)0 and ϕ(1020) production in pp collisions at √s=13 TeV  (Journal article; Peer reviewed, 2020) The striking similarities that have been observed between high-multiplicity proton-proton (pp) collisions and heavy-ion collisions can be explored through multiplicity-differential measurements of identified hadrons in pp ... • #### Multiplicity dependence of light (anti-)nuclei production in p–Pb collisions at √sNN=5.02 TeV  (Journal article; Peer reviewed, 2020) The measurement of the deuteron and anti-deuteron production in the rapidity range −1 < y < 0 as a function of transverse momentum and event multiplicity in p–Pb collisions at √sNN = 5.02 TeV is presented. (Anti-)deuterons ... • #### Multiplicity dependence of light-flavor hadron production in pp collisions at √s=7 TeV  (Peer reviewed; Journal article, 2019-02-08) Comprehensive results on the production of unidentified charged particles, π±, K±, K0S, K∗(892)0, p, ¯p, ϕ (1020), Λ, ¯¯¯Λ, Ξ−, ¯¯¯Ξ+, Ω−, and ¯¯¯Ω+ hadrons in proton-proton (pp) collisions at √s=7 TeV at midrapidity ... • #### Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV  (Peer reviewed; Journal article, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, ... • #### Multiplicity dependence of strangeness and hadronic resonance production in pp and p-Pb collisions with ALICE at the LHC  (Peer reviewed; Journal article, 2019) One of the key results of the LHC Run 1 was the observation of an enhanced production of strange particles in high mul-tiplicity pp and p–Pb collisions at√sNN=7 and 5.02 TeV, respectively. The strangeness enhancement is ... • #### Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC  (Peer reviewed; Journal article, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions ... • #### Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC  (Peer reviewed; Journal article, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... • #### Multiplicity dependence of π, K, and p production in pp collisions at √s=13 TeV  (Journal article; Peer reviewed, 2020) This paper presents the measurements of π±, K±, p and p¯¯¯ transverse momentum (pT) spectra as a function of charged-particle multiplicity density in proton–proton (pp) collisions at s√ = 13 TeV with the ALICE detector at ... • #### Multivariat analyse av nær-infrarød spektroskopi og av fysiske egenskaper i piperazin-aktivert 2-amino-2-metylpropanol og blandingsforholdets effekt på CO2-absorbsjon.  
(Master thesis, 2018-06-20) Amine solutions are the main means used to absorb CO2, in particular monoethanolamine (MEA). It is also advantageous to test new and possibly better solutions. This thesis studies a mixture of piperazine, ...
# Everyone should know this

Number Theory Level 1

$1, 1, 2, 3, 5, 8, 13, 21, 34, \ldots$

The Fibonacci sequence is a sequence of numbers such that the next number is found by adding up the two numbers before it. Now, what is the next number after 34?
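A tiny sketch of the rule just stated (our own illustration): the next term is the sum of the last two, so after 34 comes 21 + 34 = 55.

```python
def next_fibonacci(seq):
    """Return the next term of a Fibonacci-style sequence: the sum of its last two terms."""
    return seq[-1] + seq[-2]

seq = [1, 1, 2, 3, 5, 8, 13, 21, 34]
print(next_fibonacci(seq))  # 55
```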
# How to find the solution to this system of integer variables?

Let $x_i$ be nonnegative integer variables in $\mathbb{N}$. The system is to find $x_i$ in $$\sum_{i=1}^nx_i=3n,\\\quad \quad \quad \quad \;\;\,x_i\leqslant3,\,\forall\, i\in\{1,\ldots,n\}.$$ I find the solution to be $x_i=3$ for all $i$ but I cannot prove that it is the unique solution.

• If there were a different solution, some of the $x_i$ would be smaller and some of them larger than $3$, but the second condition says they can't be larger. – Henrik Dec 14 '16 at 16:01

Suppose that some $x_j <3$. Wlog we can assume $j=1$. Then $$\sum_{i=1}^{n} x_i = x_1 + \sum_{i=2}^{n}x_i \le 2 + 3(n-1) < 3n.$$ So we have a contradiction.

$$\sum_{i=1}^nx_i \leq \sum_{i=1}^n3=3n$$ and equality holds only when $x_i=3$ for all $i$. So the only solution is $x_i=3$ for all $i$.

Suppose at least one $x_i$ is less than $3$. Then there must be at least one $x_j$ strictly greater than $3$ to make up the total $3n$. Hence a contradiction.

Assume there is another solution $(y_i)_{i=1,2,\ldots,n}$. Since $y_i\leqslant 3$ for every $i$, we have $|3-y_i|=3-y_i$, so $$S=\sum_{i=1}^n|3-y_i|=3n-\sum_{i=1}^ny_i=0.$$ Since $$\forall i\in\{1,2,\ldots,n\}\;\;S\geq |3-y_i|\geq 0,$$ it follows that $$\forall i\in\{1,2,\ldots,n\}\;\;y_i=3.$$
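A brute-force sketch (our own, practical only for small n) that enumerates all admissible tuples and confirms uniqueness:

```python
from itertools import product

def solutions(n):
    """Enumerate all tuples of nonnegative integers x_i <= 3 with sum 3n."""
    return [x for x in product(range(4), repeat=n) if sum(x) == 3 * n]

for n in range(1, 6):
    print(n, solutions(n))   # for every n, the only tuple printed is (3, 3, ..., 3)
```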
# Introduction

Every researcher facing the pandemic shows, to some degree, symptoms of poor mental health. I feel this exceptionally strongly. As an introverted, unsociable person who is not even good at English, I live in a studio and have nobody to talk with. Gradually I found myself unable to control my thoughts and showing clear signs of OCD. For more than two weeks, I did nothing but force myself to think of something while simultaneously forcing myself not to do so. More depressing still, I knew this was wrong but did not know what to do. After a long struggle, I got through this situation with much help from my friends, supervisors and family. The good side of this "wasted" time is that I can now recognize my emotions and have started to learn how to observe and control them. Therefore I decided to write up what I have read and concluded. I hope I can express myself clearly enough that anyone reading this post can learn something when they need help escaping from this mental trap.

# Feelings in our brain

I divide my thoughts into two parts: emotion and cognition. I don't know if this division is scientifically correct, but after a simple keyword search in Google Scholar I can confirm that these two are widely accepted concepts in psychology (e.g. Okon-Singer et al., 2015). They interact with each other and collectively determine our motivation.

Emotion is natural and subjective, covering 27 varieties such as admiration, anger, fear, joy and sexual desire (Cowen and Keltner, 2017). It is a neurophysiological response to some stimulus. For example, it is common for anyone to feel sexual desire when a lovely body is observed. It is natural because all humans have this "built-in" function. However, it is also subjective because people can have different kinds and degrees of emotion when they face a similar thing. Past experience and cognition decide how a certain emotion appears.

Cognition is learned and acquired. Thus it is not as innate, fast-responding and quickly fading as emotion. In physiology, it is believed that the neocortex, which plays an influential role in sleep, memory and learning processes, evolved later than the limbic system, which governs emotion and motivation (Wikipedia). This might be why we so often find that we cannot control an emotion even though we know we should not care about it.

# Relax, strengthen cognition through meditation

Because cognition is acquired, there are ways to exercise it, e.g., meditation. Lutz et al. (2008) reviewed the possible mechanisms and functions of two varieties of meditation, focused attention (FA) meditation and open monitoring (OM) meditation. The former entails the voluntary focusing of attention on a chosen object, while the latter involves nonreactive monitoring of the content of experience from moment to moment. For me, the latter is applicable and useful. Unfortunately I cannot give many details of how to practice this, and that is not the purpose of this post. I would rather show how enhanced cognition can change our emotion and life in the following framework.

As Fig. 1a shows, five processes link the stimulus, emotion/cognition and our final behavior/speech. Arrow a represents the native stress response; a typical example is our desires. Arrow b represents emotion-motivated behavior and speech, like eating, sexual behavior, impolite speech to your loved ones and many other impulsive behaviors (which often cause our regret).
Arrow c is the first function of cognition: regulating how emotions are generated in certain contexts. Similarly, arrow d means treating our emotion deliberately. A good example is OM meditation, which monitors an emotion and lets it pass naturally. Processes c and d can be done simultaneously, in a train of thought like "you didn't intend to do this, don't be upset (d); you can do it better next time (c)". The last process, e, is the simple "two dots, one line" pattern which occurs hundreds of times per day in our mind: you HOPE TO do something and carefully manage it.

The right sub-figure represents how OCD patients and healthy people differ in emotion. Assuming emotion decays exponentially, most emotions would disappear very soon. However, OCD patients often remind themselves that something hasn't been done, and this causes long-term activity in some brain regions. One may find, or may already have found, this with brain-scanning machines, and I do believe it should be true.

With stronger cognition, process b would weaken, because cognition becomes the dominating factor deciding our behavior and speech. We can also prevent and treat our unwanted emotions in a rational way. In other words, we can then decide what we should think in our mind. However, this is not to say that we eliminate our emotions or that emotion loses its impact (which is actually impossible for any animal). It is to say that we can control our emotions and reduce the influence of unwanted negative emotion, echoing the ancient Chinese theory of the "Unity of knowledge and action" (知行合一).

# Final note

I am not a professional researcher in psychology or neuroscience, and also not a good English writer. Many opinions in this post have not been tested, or I have not checked them. Please treat them with care.
• English only # Differences This shows you the differences between two versions of the page. cartesianproducts [2015/04/21 17:44] cartesianproducts [2015/04/21 17:45] (current) Line 155: Line 155: ...in a system with only two positions can be seen as the constraint: ...in a system with only two positions can be seen as the constraint: -$p_i = ENode \times ]2;\infty[$+\begin{equation*}p_i = ENode \times ]2;\infty[ \end{equation*} This works well for .isInstanceOf checks as well, but not for stuff like if i > j, as our cartesian products have no way to represent dependencies between members. This works well for .isInstanceOf checks as well, but not for stuff like if i > j, as our cartesian products have no way to represent dependencies between members.
# Proving that $a^{25} \bmod 65 = a \bmod 65$? Prove that for all $a \in \mathbb{Z}$ we have $$a^{25} \bmod 65 = a \bmod 65.$$ We have $65 = 5 \cdot 13$, where $5$ and $13$ are prime. So I wanted to compute the first expression by using the Chinese Remainder theorem. I have to find a $x$ which satisfies the system $$\begin{cases} x \bmod 5 = a^{25} \bmod 5 \\ x \bmod 13 = a^{25} \bmod 13 \end{cases}.$$ But how can I solve this system when I don't know what $a$ is? I tried using Fermat's little theorem for the prime number $23$, but the above equation has to hold for all $a \in \mathbb{Z}$, not only with $gcd(a,p) = 1$. So how can we solve this problem? If $a=0$ then this is trivial, so assume $a\neq 0$. $\mathbb{Z}_5=\mathbb{Z}/5\mathbb{Z}$ is a field, so $\mathbb{Z}_5^\times$, the group of invertible elements, is a group of $4$ elements. In particular, we have $a^4\equiv 1\bmod 5$, so $a^{24}=(a^4)^6\equiv 1\bmod 5$, and $a^{25}\equiv a\bmod 5$. Similarly, $\mathbb{Z}_{13}$ is a field, and its group of invertible elements has $12$ elements, so $a^{12}\equiv 1\bmod 13$, $a^{24}\equiv 1\bmod 13$, and thus $a^{25}\equiv a\bmod 13$. Remark: You can avoid fields and use Fermat's little theorem directly: $$a^5\equiv a\bmod 5.\tag{5.1}$$ Taking the $5$-th power, we obtain $$a^{25}\equiv a^5\bmod 5\tag{5.2}$$ and putting $(5.1)$ and $(5.2)$ together, $$a^{25}\equiv a\bmod 5.$$ Now for $13$: $$a^{13}\equiv a\bmod{13}\tag{13.1}$$ Multiply by $a^{12}$: $$a^{25}\equiv a^{13}\bmod{13}\tag{13.2}$$ put $(13.1)$ and $(13.2)$ together: $$a^{25}\equiv a\bmod{13}.$$ • Thanks for the reply. But how do know that $a^{5} \equiv a \bmod 5$? You use Fermat for this? Then we need to have the extra information that $gcd(a, 5) = 1$? – Kamil Aug 10 '16 at 18:03 • @Kamil This is simply Fermat's little theorem: If $p$ is prime then $a^p\equiv a\bmod p$ for all $a\in\mathbb{Z}$:en.wikipedia.org/wiki/Fermat%27s_little_theorem – Luiz Cordeiro Aug 10 '16 at 18:11 The Chinese Remainder Theorem is rarely of any use when you're looking at variable expressions. But factoring $65$ as $5 \cdot 13$ is useful. First, notice that there are $65$ possible combinations of $x \mod 5$ and $x \mod 13$; so each number $0$ through $64$ has a different such signature. By Fermat's little theorem, $a^5 \equiv a \mod 5$, so $a^{25} = (a^5)^5 \equiv a^5 \equiv a \mod 5$. Again by Fermat, $a^{13} \equiv a \mod 13$, so $a^{12} \equiv 1 \mod 13$ (unless $a \equiv 0 \mod 13$). Since $a^{25} = a \cdot (a^{12})^2$, either way we have $a^{25} \equiv a \mod 13$. So $a^{25}$ and $a$ have the same signature mod $5$ and $13$, and hence $a^{25} \equiv a \mod 65$. • $5$ is not equivalent to $0\bmod 65$, but $5^{24} \equiv 40\bmod 65$. – Luiz Cordeiro Aug 10 '16 at 17:28 • Quite right, I missed that. Argument fixed, I believe. 
– Reese Aug 10 '16 at 17:45 Notice $\, n = 65 = 5\cdot 13 \,$ is a product of distinct primes $\rm \,p\,$ such that $\rm \ \color{#c00}{p\!-\!1\mid 25\!-\!1},\:$ thus Theorem $\$ For natural numbers $\rm\:a,e,n\:$ with $\rm\:e,n>1$ $\qquad\rm n\:|\:a^e-a\:$ for all $\rm\:a\:\iff n\:$ is squarefree, and prime $\rm\:p\:|\:n\,\Rightarrow\, \color{#c00}{p\!-\!1\mid e\!-\!1}$ Proof $\ (\Leftarrow)\$ Hint: since a squarefree natural divides another iff all its prime factors do, we need only show $\rm\:p\:|\:a^e\!-\!a\:$ for each prime $\rm\:p\:|\:n,\:$ or, that $\rm\:a \not\equiv 0\:\Rightarrow\: a^{e-1} \equiv 1\pmod p,\:$ which, since $\rm\:p\!-\!1|\:e\!-\!1,\:$ follows from $\rm\:a \not\equiv 0\:$ $\Rightarrow$ $\rm\: a^{p-1} \equiv 1 \pmod p,\:$ by little Fermat. $(\Rightarrow)\$ Not needed here, see this answer
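As a quick numerical sanity check of the statement (a brute-force sketch, separate from the proofs above), it suffices to test every residue class modulo 65:

```python
# Check that a^25 ≡ a (mod 65) for every residue class a.
assert all(pow(a, 25, 65) == a % 65 for a in range(65))
print("a^25 ≡ a (mod 65) holds for all residues 0..64")
```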
## Cramer's rule Consider the system $$Ax = b$$ where $$A$$ is $$n\times n$$ and invertible. Cramer's rule says that the unique solution is given by $x_i = \frac{\det(B_i)}{\det(A)},~ i =1,\ldots, n,$ where $$B_i$$ is obtained from $$A$$ by replacing column $$i$$ with $$b$$. For example, suppose that $$A = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}$$ and $$b = \begin{bmatrix} 4 \\ 5\end{bmatrix}$$. Then the unique solution is given by $x_1 = \frac{\left|\begin{matrix} 4 & 2 \\5 & 3\end{matrix}\right|} {\left|\begin{matrix} 1 & 2 \\2 & 3\end{matrix}\right|} = \frac{ 2}{-1} = -2,$ $x_2 = \frac{\left|\begin{matrix} 1 & 4 \\2 & 5\end{matrix}\right|} {\left|\begin{matrix} 1 & 2 \\2 & 3\end{matrix}\right|} = \frac{ -3}{-1} = 3.$ ## Formula for the inverse matrix We now obtain a formula for the inverse of an invertible matrix $$A$$ in terms of cofactors. Observe that $$AB = I_n$$ can be written as the following $$n$$ equations: $$AB_j = e_j$$ where $$B_j$$ denotes the $$j$$th column of $$B$$ and $$e_j$$ denotes the $$j$$th column of $$I_n$$. By Cramer's rule, we get that $B_{i,j} = \frac{\det(M_i)}{\det(A)}$ where $$M_i$$ denotes the matrix obtained from $$A$$ by replacing column $$i$$ with $$e_j$$. Since column $$i$$ of $$M_i$$ is $$e_j$$, expanding $$\det(M_i)$$ along column $$i$$, we get $$\det(M_i) = (-1)^{j+i}A(j \mid i) = C_{j,i}$$. (Beware of the indexing!) Hence, $$B_{i,j}$$ is given by $$\displaystyle\frac{C_{j,i}}{\det(A)}$$. So a formula for $$A^{-1}$$ is $$\displaystyle\frac{1}{\det(A)} \begin{bmatrix} C_{1,1} & C_{2,1} & \cdots & C_{n,1} \\ C_{1,2} & C_{2,2} & \cdots & C_{n,2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1,n} & C_{2,n} & \cdots & C_{n,n}\end{bmatrix}.$$ ## Exercises 1. Let $$A = \begin{bmatrix} 1 & -1 \\ -2 & 1\end{bmatrix}$$ Let $$b = \begin{bmatrix} -1 \\ -1\end{bmatrix}$$ Solve the system $$Ax = b$$ using Cramer's rule. 2. Let $$A\in \mathbb{Z}^{n\times n}$$. Let $$b\in \mathbb{Z}^n$$. Prove that if $$\lvert\det(A)\rvert = 1$$, then the solution to $$Ax = b$$ has only integer entries.
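A small numerical sketch (our own illustration) reproducing the 2×2 example above, once via Cramer's rule and once via the cofactor formula for the inverse:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 3.0]])
b = np.array([4.0, 5.0])

# Cramer's rule: replace column i of A by b and take the ratio of determinants.
detA = np.linalg.det(A)
x = np.empty(2)
for i in range(2):
    Bi = A.copy()
    Bi[:, i] = b
    x[i] = np.linalg.det(Bi) / detA
print(x)                      # [-2.  3.], matching the worked example

# Cofactor (adjugate) formula for the inverse of a 2x2 matrix.
C = np.array([[ A[1, 1], -A[1, 0]],
              [-A[0, 1],  A[0, 0]]])   # matrix of cofactors C_{i,j}
A_inv = C.T / detA                     # transpose of the cofactor matrix over det(A)
print(A_inv @ b)                       # same solution [-2.  3.]
```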
0 comment on 5/12/2015 12:06 AM We are often asked what the deployment story is like for WebSharper applications. Our usual reply has long been that WebSharper applications follow standard formats, in particular: • Client-Server Web Applications are simply ASP.NET applications, and can be deployed using usual methods such as publishing from Visual Studio or using Azure's git integration; • HTML Applications are composed of static files located in $(WebSharperHtmlDirectory) (which is bin/html by default) and can be deployed from there using the method of your choice. • Single-Page Applications are also composed of static files, index.html and the Content folder, so the same methods apply. However there can be some caveats, in particular with regards to running the F# compiler and referencing FSharp.Core.dll in build-and-deploy environments. Fortunately, the recently released NuGet package FSharp.Compiler.Tools combined with the excellent package manager Paket now provide a nice and streamlined development and deployment experience for git-hosted, Azure-deployed applications. This article presents the build and deployment setup for a reimplementation of the popular game 2048, available on GitHub. To try it out, simply click the button "Deploy to Azure" and follow the instructions. ## The project This particular project was created as a Single-Page Application; this project type was chosen because the application runs on a single page and is only composed of client-side code. The solution, 2048.sln, contains a single project located at Game2048/Game2048.fsproj. If you want to recreate this setup, you can create a Single-Page Application from Visual Studio or Xamarin Studio / MonoDevelop with the WebSharper extension installed. This deployment setup will also work if you create a Client-Server Application or an HTML Application instead. For Self-Hosted Client-Server Applications, you will additionally need to set up an HttpPlatformHandler to run the generated executable, similarly to Scott Hanselman's Suave setup. ## Paket For package management, the project uses Paket. It offers many advantages over traditional NuGet, which you can read about here. Note that paket restore is run in the build script before running MSBuild. Indeed, since we will be importing several .targets files that come from packages, the packages must be restored before running MSBuild or opening the project in an IDE. So for your first build after cloning the 2048 project, you can either run the full build.cmd, or if you only want to restore the packages, you can run: 1 .paket/paket.bootstrapper.exe && .paket/paket.exe restore If you want to reproduce this setup for your own project as created in the previous section, here are the steps: • Remove the WebSharper NuGet package from the project and delete the file <your_project_name>/packages.config if it exists. • Download paket.bootstrapper.exe and paket.targets from here into the folder .paket. • To ensure that you build with the right package versions after a git pull, add the following to <your_project_name>.fsproj: 1 <Import Project="..\.paket\paket.targets" /> • Run the following commands: 1 2 3 4 5 6 # Download paket.exe: .paket/bootstrapper.exe # Initialize paket.dependencies: .paket/paket.exe init # Install the WebSharper package into your project: .paket/paket.exe add nuget WebSharper project <your_project_name> • The files paket.dependencies, paket.lock and <your_project_name>/paket.references must be committed. 
## The F# Compiler Since fsc is not available on Azure, we retrieve it from NuGet. We reference the package FSharp.Compiler.Tools which contains the compiler toolchain. By importing tools/Microsoft.FSharp.targets from this package in our project file, we instruct MSBuild to use the F# compiler from the package. This means that even when building locally, fsc from the package will be used. This ensures consistency between local and deployment builds. If you want to apply this change to your own project, here are the steps: • Install the F# compiler package: 1 .paket/paket.exe add nuget FSharp.Compiler.Tools • Use it in your project: in <your_project_name>.fsproj: • Remove any references to Microsoft.FSharp.targets and FSharp.Core.dll. In a Visual Studio-created project, this means removing this whole block: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 <!-- F# targets --> <Choose> <When Condition="'$(VisualStudioVersion)' == '11.0'"> <PropertyGroup Condition="Exists('$(MSBuildExtensionsPath32)\..\Microsoft SDKs\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets')"> <FSharpTargetsPath>$(MSBuildExtensionsPath32)\..\Microsoft SDKs\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets</FSharpTargetsPath> </PropertyGroup> </When> <Otherwise> <PropertyGroup Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\FSharp\Microsoft.FSharp.Targets')"> <FSharpTargetsPath>$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\FSharp\Microsoft.FSharp.Targets</FSharpTargetsPath> </PropertyGroup> </Otherwise> </Choose> <Import Project="\$(FSharpTargetsPath)" /> 1 <Import Project="..\packages\FSharp.Compiler.Tools\tools\Microsoft.FSharp.targets" /> You now have a project that should build and run fine locally. Try it out! ## Azure Deployment Now, on to the deployment setup itself. We will be using a custom build script, so we need to tell so in the .deployment file: 1 2 [config] command = build.cmd The build.cmd script itself is in three parts: 1. Package restore: we retrieve paket.exe if it hasn't already been retrieved, and run it to restore packages. 1 2 3 4 5 if not exist .paket\paket.exe ( .paket\paket.bootstrapper.exe ) .paket\paket.exe restore 2. Build: Azure conveniently points the environment variable MSBUILD_PATH to the path to MSBuild.exe; in order to be also able to run this script locally, we check for it and set it to the standard installation location if it doesn't exist. Then, we run it. 1 2 3 4 5 if "%MSBUILD_PATH%" == "" ( set MSBUILD_PATH="%ProgramFiles(x86)%\MSBuild\12.0\Bin\MSBuild.exe" ) %MSBUILD_PATH% /p:Configuration=Release 3. Deploy: Deploying the application simply consists in copying the application files to the Azure-provided DEPLOYMENT_TARGET folder. The actual file in the 2048 repository is a bit more complex than necessary for Azure because it is also used on AppVeyor to deploy the application to github-pages. But a simple implementation can just copy all files and subdirectories from the project directory to DEPLOYMENT_TARGET: 1 2 3 if not "%DEPLOYMENT_TARGET%" == "" ( xcopy /y /e <your_project_name> "%DEPLOYMENT_TARGET%" ) As a recap, here is the full build.cmd with some extra error management: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 @ECHO OFF setlocal echo ====== Restoring packages... ====== if not exist .paket\paket.exe ( .paket\paket.bootstrapper.exe ) .paket\paket.exe restore if not %ERRORLEVEL% == 0 ( echo ====== Failed to restore packages. 
====== exit 1 ) echo ====== Building... ====== if "%MSBUILD_PATH%" == "" ( set MSBUILD_PATH="%ProgramFiles(x86)%\MSBuild\12.0\Bin\MSBuild.exe" ) %MSBUILD_PATH% /p:Configuration=Release if not %ERRORLEVEL% == 0 ( echo ====== Build failed. ====== exit 1 ) if not "%DEPLOYMENT_TARGET%" == "" ( echo ====== Deploying... ====== xcopy /y /e <your_project_name> "%DEPLOYMENT_TARGET%" ) echo ====== Done. ====== And there you have it! A WebSharper application easily deployed to Azure with a simple configuration and consistent build setup between local and deployed. Note that this particular example is a Single-Page Application, but the same setup can be used for Client-Server Applications and HTML Applications. For the latter, make sure to copy the WebSharperHtmlDirectory (<your_project_name>/bin/html by default) in the final step rather than the project folder itself. Thanks to Steffen Forkmann and Don Syme for their quick response on creating the FSharp.Compiler.Tools NuGet package, and to Scott Hanselman for his Suave Azure deployment tutorial which has been of great help to create this one despite the fairly different final setup. Happy coding!
# vanishing theorems I would be glad to know about possible generalizations of the following results: 1) (Grothendieck) Let $X$ be a noetherian topological space of dimension $n$. Then for all $i>n$ and all sheaves of abelian groups $\cal{F}$ on $X$, we have $H^i(X; \cal{F})=$ 0. [See Hartshorne, Algebraic Geometry, III.2.7.] 2) Let $X$ be an $n$-dimensional $C^0$-manifold. Then for all $i>n$ and all sheaves of abelian groups $\cal{F}$ on $X$, we have $H^i(X; \cal{F})=$ 0 . [See Kashiwara-Schapira, Sheaves on manifolds, III.3.2.2] More precisely, I'm interested in dropping the "abelian groups" hypothesis: could I take sheaves in any, say, AB5 abelian category? Apparently, in Grothendieck's theorem, the "abelian groups" hypothesis is necessary -at least in Hartshorne's proof-, because at the end you see a big constant sheaf $\mathbf{Z}$. But what happens if we talk about sheaves of $R$-modules, with $R$ any commutative ring with unit, for instance? Are those generalizations trivial ones? False for trivial reasons? Any hints or references will be welcome. - For any sheaf of rings $O$, sheaf cohomology on the category of $O$-modules coincides with such cohomology on underlying abelian sheaves (due to acyclicity of flasques). So the generalizations are obvious. In (2) it isn't necessary to restrict to manifolds; any separable metric space (or disjoint union thereof) with dimension $n$ in the sense of topological dimension theory satisfies (2) (see Engelking's book "General Topology", especially the notion of "covering dimension"; recall that Cech = derived functor cohomology on paracompact Hausdorff spaces, and metric spaces are such spaces). –  Boyarsky Jun 29 '10 at 10:44 @Boyarsky. Thanks. So the generalization to sheaves of O-modules is trivial. Do you know anything about possible generalizations to sheaves with values in an (AB5?) abelian category? –  a.r. Jun 29 '10 at 16:14 @Agusti: goodness, I can't even remember which one AB5 is...but is there some real reason for asking that kind of question? Like an example to motivate it? –  Boyarsky Jun 29 '10 at 16:20 @Boyarsky. Thanks for your help again, Boyarsky. Well, I have a nasty spectral sequence with this kind of guys which I would like to converge strongly. For this, I need some zeros in it. As for AB5, is just a conjecture: exactness of filtered colimits seems to me, at first sight, the less you should ask -or the less I need- to work with sheaves with values in an abelian category –  a.r. Jun 29 '10 at 17:28 The point is that, since the theorem is also true for sheaves of $R$-modules, given a sheaf $\cal{F}$ with values in an abelian category $\cal{A}$, with the help of Mitchell's embedding theorem, http://en.wikipedia.org/wiki/Mitchell%27s_embedding_theorem, we can consider it as a sheaf of $R$-modules, for some ring $R$. Moreover, the embedding $V: {\cal A} \longrightarrow \mathbf{Mod}_R$ is full, faithful, and exact. That is to say, $V$ sends exact sequences to exact sequences. So $H^n(X;\cal{F})$ = $H^n(X;V(\cal{F}))$.
# Interesting fact about perfect squares

Hi, I found out something interesting about perfect squares and I would like to share it. I don't know if anyone has seen this before, but I believe most of you might know it already. It is a quicker way to find the square of a large number.

Alright, let's start with two unknowns. I'll take $$x$$ and $$y$$ where $$y = x+1$$. As we know that $$x^2 = x\times x$$ and $$y^2 = y\times y$$, we can rewrite $$y^2$$ as $$y\times y = y(x+1) = y\times x + y$$ $$=x(x+1) + y = x\times x + x + y$$ $$=x^2 + x + y$$ $$y^2 = x^2+x+y$$

Now we have a simplified version of $$y^2$$. Let's try $$x = 3$$ and $$y = 4$$ $$4^2 = 3^2 + 3 + 4$$ $$16 = 9 + 3 + 4$$ And 16 is indeed equal to $$9 + 3 + 4$$ So it worked! Yeah!

Now let's try a larger number, 501. If $$y = 501, x = 500$$ $$501^2 = 500^2 + 500 + 501$$ $$501^2 = 250000 + 500 + 501$$ $$501^2 = 251001$$ Try it on your calculator and you will find it is true.

Alright, that's all. Thanks everyone for viewing this. Try this problem to see if you understand it. Follow me to get more interesting questions and notes. Note by Daniel Lim 3 years, 6 months ago

Why complicate the matter when you have got such a nice formula that you have been learning since "AGES"..... $$a^2 - b^2 = (a-b)(a+b)$$ · 3 years, 6 months ago

You can also think of it as this: Suppose x is 10 and y is 11. Imagine you have a square of 100 counters laid out in a 10 by 10 fashion. By adding 10 counters to the right of your grid, you get 110 counters in an 11 by 10 fashion. Then, add 11 counters to the bottom of the new grid and you get an 11 by 11 SQUARE grid. This shows that x^2+x+y=y^2. This also works for all consecutive numbers (you can try it yourself with counters.) Just another way to think of this method as a whole without using any algebra. · 3 years, 6 months ago

That was the way I learnt it 0.o · 3 years, 2 months ago

a^2 - b^2 = (a+b)(a-b), an EASIER way! · 2 years, 4 months ago

(a + 1)^2 = a^2 + 2a + 1 = a^2 + a + (a + 1), just what you had above. Let us look at a two-digit number. (10a + b)^2 = 100a^2 + b^2 + 20ab ........... (10a - b)^2 = 100a^2 + b^2 - 20ab. Say 63^2 = (60+3)^2 = 3609 + 360 = 3969 ............... 67^2 = (70 - 3)^2 = 4909 - 420 = 4489. If b < 5 use (10a + b)^2; if b > 5 use {10(a+1) - (10 - b)}^2. If b = 5, (10a + 5)^2 = 100a(a+1) + 25 · 3 years, 1 month ago

Well, it isn't a simplification since we have to do 3 more operations instead of just one. · 3 years, 2 months ago

This works because $$x^2-(x-1)^2=2x-1$$, and as we know, every odd number can be written as the sum of two consecutive numbers. But this does save a little time. How about this? For every two-digit number of the form $$\overline {a5}$$, when it is multiplied by itself, $$\overline {a5} \times \overline {a5} = \overline{wxyz}$$, where $$\overline {wx} = a(a+1), \overline {yz} = 25$$. · 3 years, 6 months ago

I don't understand the $$\overline {a5}$$ part · 3 years, 6 months ago

I want to clarify that $$\overline{a5}$$ is not $$a \times 5$$, but $$\overline {a5} = 10a + 5$$. · 3 years, 6 months ago
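If you want a mechanical check of the identity in the note above ($$y^2 = x^2 + x + y$$ with $$y = x+1$$), a couple of lines of Python will confirm it for many values at once (this snippet is mine, not part of the original note):

```python
# Check y^2 = x^2 + x + y for consecutive integers y = x + 1.
for x in range(1, 1000):
    y = x + 1
    assert y**2 == x**2 + x + y
print("y^2 = x^2 + x + y holds for x = 1..999")
```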
# Basic signal processing with IPython Notebook This IPython notebook revisits a previous example I did in Matlab several years ago. RF signal processing came up as a topic of discussion recenetly, so this example is intended as a refresher/primer and exploration of IPython notebook. ## Prerequisites In [20]: %matplotlib inline In [21]: from matplotlib.pyplot import * from numpy import * Next power of two function. We'll need it later, since the FFT algorithim is only fast for powers of two. That's not important for this example, but in real implementations it needs to execute for every sample window. In [22]: def nextpow2(x): return int(pow(2,ceil(log2(x)))) ## Example signal construction Set the sample frequency and duration. In [23]: F_samp = 250 t_max = .2 Values of t are then spaced at interval 1/F_samp. In [24]: t = np.matrix(np.linspace(0, t_max, num=F_samp*t_max)) Set the amplitude (A) and frequecny (W) of our example signals. In [25]: A = np.matrix("[1.0 1.4]") W = np.matrix("[45 106]") Now, construct the signal, including a random noise component. This is the signal that would be observed by the reciever. In [26]: sig = A * sin(2*pi*W.T*t) + matrix(random.normal(size=t.shape)) In [27]: plot(np.squeeze(t.A),np.squeeze(sig.A)) Out[27]: [<matplotlib.lines.Line2D at 0x10d8ec090>] ## Signal detection Typically, we want to know frequencies of any recieved signals. So, use a fast fourier transform. The FFT algorithim is "fast" when the number of frequency "bins" is a power of two. If we use fewer bins than we have observations, we'll lose information. So, we'll use the next power. We'll also normalize so that the reported amplitude (y-axis) is in the correct range. In [28]: F_max = nextpow2(F_samp) spec = np.fft.fft(sig, F_max)/(F_samp*t_max) Create the observed frequency values (x-axis). The Nyquist theorem tells us that the maximum frequency we can observe is half of the sample frequency, so we'll use F_max/2 as the upper limit. In [29]: F_obs = F_samp/2*linspace(0, 1, F_max/2) Also as a consequence of Nyquist, the spectrum returned by the fft is approximately symetric about the y-axis. Typically, we'll only plot the positive half of the graph. In [30]: spec_2 = np.squeeze(2*abs( spec[:,0:(F_max/2)] )) Finally, we'll plot the spectrum. Notice the two peaks corresponding to W with heights similar to A. The height won't match exactly because we've added noise. In [31]: plot(F_obs, spec_2); The simplest detection mechinisim is to implement detection as a simple threshold, where the threshold is set to be just above the expected noise. (We can naively guess the threshold by setting A = [0, 0] above and looking looking at this plot. i.e., we're observing background noise when we know that there's no signal.) In [32]: thresh = 0.65 We can quickly visualize what part of the spectrum is above the threshold. In [33]: plot(F_obs, spec_2); hlines(thresh,0,F_max/2) Out[33]: <matplotlib.collections.LineCollection at 0x10d9734d0> But, we need to mask out the part below the threshold. In [34]: spec2_thresh = (spec_2 > thresh) * spec_2 Then we can find zero crossings of the first derivative. In [35]: d_spec2_thresh = diff(spec2_thresh) match = convolve(sign(d_spec2_thresh), [-1,1]) plot(F_obs, match) Out[35]: [<matplotlib.lines.Line2D at 0x10da10710>] In [36]: idx = nonzero(match>0)[0]-2 idx Out[36]: array([ 45, 108]) In [37]: F_obs[idx] Out[37]: array([ 44.29133858, 106.2992126 ]) Which is close to our original signal W. 
Note that we corrected the index by two: once for the element lost in the discrete difference step, and once for the convolution. In [38]: W Out[38]: matrix([[ 45, 106]]) Now that we have the basics, we're ready to apply these steps to a much larger signal in the next example.
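As a recap, the whole detection chain above can be collected into one helper function. This is only a sketch that mirrors the notebook cells — the function name, its signature and the default threshold are mine, and it assumes the same 2-D, matrix-shaped signal used above:

```python
import numpy as np

def detect_tones(sig, F_samp, t_max, thresh=0.65):
    """Return the frequencies (Hz) of spectral peaks that exceed a fixed threshold."""
    F_max = int(2 ** np.ceil(np.log2(F_samp)))             # nextpow2(F_samp)
    spec = np.atleast_2d(np.fft.fft(sig, F_max)) / (F_samp * t_max)
    F_obs = F_samp / 2 * np.linspace(0, 1, F_max // 2)      # observable (positive) frequencies
    spec_2 = np.squeeze(2 * abs(spec[:, 0:F_max // 2]))     # one-sided magnitude spectrum
    spec2_thresh = (spec_2 > thresh) * spec_2               # zero out everything below threshold
    match = np.convolve(np.sign(np.diff(spec2_thresh)), [-1, 1])
    idx = np.nonzero(match > 0)[0] - 2                      # correct for the diff and convolve offsets
    return F_obs[idx]

# With the notebook's example signal:
# detect_tones(sig, F_samp=250, t_max=0.2)   # -> roughly array([ 44.3, 106.3])
```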
Parameters: Simulation Command ElementModifies the simulation parameter element. Format <Param_Simulation [ constr_tol = "real" ] [ implicit_diff_tol = "real" ] [ nr_converge_ratio = "real" ] [ linsolver = "HARWELL" ] [ acusolve_cosim = { "TRUE" | "FALSE" } ] [ contact_iteration = { "ACTIVE" | "INACTIVE" } ] /> Attributes constr_tol Modifies the accuracy to which the system configuration and motion constraints are to be satisfied at each step. This is used by kinematics and the transient solver in ODE formulation (ABAM, VSTIFF, or MSTIFF), but not the DAE form, which has its own tolerance for this (Param_Transient::dae_constr_tol). This should be a small positive number. The default value for constr_tol = 1.0E-10. implicit_diff_tol Modifies the accuracy to which implicit differential equations, such as Control_Diff equations with the is_implicit = "TRUE", are to be satisfied. The default value for implicit_diff_tol = 1.0E-5. nr_converg_ratio Modifies a measure of the rate of convergence in the Newton-Raphson method for ODE solvers. If the maximum entry in the constraint vector is larger than nr_converg_ratio times the maximum entry from the previous iteration, the Newton-Raphson iterations are converging slowly and the generalized coordinates will be re-partitioned in the next integration step to select a new set of independent coordinates. This attribute is applicable only if an ODE integrator (ABAM, VSTIFF, or MSTIFF) is selected. The default value is 0.09. linsolver The type of linear solver used, which currently is set to "HARWELL" for all analyses. acusolve_cosim A Boolean flag that determines whether the simulation will be coupled with AcuSolve or not. "TRUE" means that the MBD model is to be coupled with the CFD model in a co-simulation between MotionSolve and AcuSolve. "FALSE" means that the MBD model is not to be coupled with the CFD model. The default value is "FALSE". contact_iteration Specifies whether the contact residual is being updated during the corrector step of the implicit solver and an analytical Jacobian is provided or not (see Comment 2). The value "ACTIVE" indicates that the residual is being updated. The value "INACTIVE" indicates that the residual is not being updated. The attribute contact_iteration is optional. When not specified, it defaults to "INACTIVE". Example The example below shows the default settings for the Param_Simulation command element. <Param_Simulation constr_tol = "1E-10" implicit_diff_tol = "1E-5" nr_converg_ratio = "0.09" linsolver = "HARWELL" /> 1. Linear Solvers: At the core of most analyses in MotionSolve is solving a set of linear equations (A x = b). For example, the Newton-Raphson method solves a set of linear equations as part of the iteration process to find the solution to a set of non-linear algebraic equations. A brief explanation of how the linear solver works follows: • The Harwell Linear Solver is a tool for solving linear algebraic equations. It is especially suited for linear systems characterized by non-singular, unsymmetrical and sparse coefficient matrices A that have a fixed, non-zero entry pattern. The latter implies that while matrix entries are allowed to change with time, the pattern of non-zeroes is not. The matrix entries must be real. This methodology is suitable for solving small to medium sets of equations (<10,000 equations) and is therefore quite suitable for multi-body systems simulation. The Harwell software solves the equations in three major steps: 1. Symbolic LU factorization. 2. Numeric LU factorization. 3. 
Forward-Backward Substitution. Symbolic LU factorization Given a pattern of non-zero entries in A and their representative values, this function computes the symbolic lower- and upper-triangular (LU) factors of A. A partial pivoting scheme is used. It tries to maximize the stability of the LU factors while still maintaining sparsity of the factors. This operation is typically done once or only a few times during the simulation. Numeric LU factorization Given the current values of the non-zero entries in A and the symbolic LU factors, this utility returns the numeric LU factors of A. The symbolic LU factorization must therefore precede the numeric LU factorization. This operation is done whenever a new Jacobian is needed. Forward-Backward Substitution Given the numeric LU factors of a sparse matrix A and an appropriately sized right-hand-side vector b, this utility returns the solution x for the linear problem. The Numeric LU factorization must precede the forward-backward substitution operation. This operation is performed at each iteration. 2. The feature contact_iteration can increase the simulation robustness for contacting bodies, especially if there are contacts connecting multiple bodies or if high oscillations are observed. In certain circumstances, it can also decrease the computational performance. When using active contact iteration, contacts can react more sensitively when switching from an active to an inactive state, especially if the contacting bodies carry a high velocity difference. In those scenarios, using an additional zero-crossing sensor is recommended.
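For reference, a Param_Simulation element that enables the optional AcuSolve co-simulation and the active contact iteration described above could look like the following (the values shown are illustrative, not recommended settings):

<Param_Simulation
     constr_tol        = "1E-10"
     implicit_diff_tol = "1E-5"
     nr_converg_ratio  = "0.09"
     linsolver         = "HARWELL"
     acusolve_cosim    = "TRUE"
     contact_iteration = "ACTIVE"
/>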
# [C++] non const getter in terms of const getter This topic is 3170 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts Do you know the following kind of situation? class A { B b; public: B* getB(); const B* getB() const; }; B* A::getB() { return &b; } const B* A::getB() const { return &b; } The intention is simply having both a const and a non const getter so that you can get non const objects if the A is non const, while if A is const, you can still get const ones. note how the two getB's have exactly the same code in their body. But it doesn't really count as code duplication here, because it's both only one line. But I've got the following problem now: class A { public: virtual A* hitTest(); virtual const A* hitTest() const; }; A* A::hitTest() { return complex_resursive_code(); } const A* A::hitTest() const { return complex_resursive_code(); } I've got to write the same "complex recursive code" twice. Twice exactly the same. But I couldn't find any way to write the one function in terms of the other or vica versa. Is there a way to avoid code duplication? Is it ok in this case to use a const cast to try to avoid code duplication, for example like this? A* A::hitTest() { return const_cast<A*>((const_cast<const A*>(this))->hitTest()); } Does this "phenomenom" have a name? Thanks. ##### Share on other sites Quote: Original post by LodeDoes this "phenomenom" have a name? I could think of a couple of names, but there might be a profanity clause in the TOS. But: struct Foo { int i; const int * get() { std::cout << "non-const" << std::endl; return get_internal(); } const int * get() const { std::cout << "const" << std::endl; return get_internal(); }private: const int * get_internal() const { return &i; }};int main(){ Foo foo; foo.get(); const Foo foo2; foo2.get();} get_internal() always returns the same value - which is at odds with const/non-const pairs, which actually return different values (a const and non-const). So regardless of whether foo is const or not, the return value is exactly the same. With this design, there is no (conceptual) way for return value to be modifiable. And, due to lowest common denominator, the return value from shared function call needs to be const. ##### Share on other sites Quote: Original post by LodeIs there a way to avoid code duplication?Is it ok in this case to use a const cast to try to avoid code duplication, for example like this?*** Source Snippet Removed ***Does this "phenomenom" have a name? I don't know if it has a name, but Scott Myers actually demonstrates that technique in one of the Effective C++ books as a legitimate use of const_cast ;) I think he calls it "implementing const in terms of non-const" or "implementing non-const in terms of const". ##### Share on other sites Wouldn't it be better to implement the const getter in terms of the non-const getter, rather than the other way around? ##### Share on other sites Quote: Original post by Sc4FreakWouldn't it be better to implement the const getter in terms of the non-const getter, rather than the other way around? You can't (safely) do that if the non-const getter modifies the object's state in any way. ##### Share on other sites Quote: Original post by LodeI've got to write the same "complex recursive code" twice. Twice exactly the same. But I couldn't find any way to write the one function in terms of the other or vica versa.Is there a way to avoid code duplication? 
Instead of making "complex recursive code" a function that returns a const or non-const pointer, make it a function that returns void..and return the const or non-const pointer separately. Then it can be used in both functions. I can't actually think of any sensible reason why you should want to do this, and it makes me think that it may just be a result of bad design. Also it's more usual to return a const-reference, rather than const-pointer.

##### Share on other sites

Quote: Original post by direwulf

Quote: Original post by Lode: I've got to write the same "complex recursive code" twice. Twice exactly the same. But I couldn't find any way to write the one function in terms of the other or vica versa. Is there a way to avoid code duplication?

Instead of making "complex recursive code" a function that returns a const or non-const pointer, make it a function that returns void..and return the const or non-const pointer separately. Then it can be used in both functions. I can't actually think of any sensible reason why you should want to do this, and it makes me think that it may just be a result of bad design. Also it's more usual to return a const-reference, rather than const-pointer.

The use is: hitTest tests if the mouse hits an element. An element can have smaller sub-elements in it. The return value can be the element itself (this), or a pointer to one of the sub elements (if the mouse is over those). Each sub element itself can again have its own sub elements and so on. Not only is a single function being duplicated, but it's virtual and all different types of elements would have to override twice and do twice the same in it. It's not possible to return a non const pointer to this from a const member function, and sometimes the hit test is needed in a const case where a const return value is OK, in other cases the hit test is needed for something where you have a non const reference to the object and you also need a non const result from the hit test. For example, when drawing it, a const one is enough, you just need to get some coordinates from it to draw something on the screen. When editing it with the mouse, you need a non const one, that is modifiable. Also, I've seen both references and pointers returned for various things in various situations, but I think the advantage of using a pointer, is that you can return "0" if needed to represent "nothing". In which of his books exactly does Scott Meyer write about this legitimate use of const_cast? I tried to find it with google and did find a thread somewhere else similar to this one where something similar from Scott Meyer is referenced, but not the exact source. I'm interested in it :) [Edited by - Lode on October 13, 2009 6:55:41 AM]

##### Share on other sites

Effective C++ 3rd Edition page 23 "avoiding duplication in const and non-const member functions" which boils down to this:

class Foo
{
    const return_type& some_method() const
    {
        // usual implementation
    }

    return_type& some_method()
    {
        return const_cast<return_type&>(
            static_cast<const Foo&>(*this).some_method());
    }
};

##### Share on other sites

Const and non-const accesses are normally so different that the methods don't have much in common and what's common can be factored into constness-agnostic and possibly templated helper methods and classes. In the specific example of hitTest(), I think that only one const method returning a const pointer to a mutable object would cover all uses.
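Putting the pieces of this thread together, here is a compilable sketch of the const_cast idiom applied to the hitTest() example from the original post. The children and containsMouse members are invented purely to make it self-contained; only the forwarding idiom is the point:

```cpp
#include <vector>

class A {
public:
    virtual ~A() = default;

    // All of the "complex recursive code" lives in the const overload only.
    virtual const A* hitTest() const {
        for (const A* child : children)
            if (const A* hit = child->hitTest())
                return hit;
        return containsMouse ? this : nullptr;
    }

    // The non-const overload just forwards to the const one and casts the result back.
    // This is safe because *this is genuinely non-const when this overload is chosen.
    A* hitTest() {
        return const_cast<A*>(static_cast<const A&>(*this).hitTest());
    }

protected:
    std::vector<A*> children;
    bool containsMouse = false;
};
```

A side benefit of this arrangement is that only the const overload needs to remain virtual: derived element types override the recursive logic once, and the non-const forwarder never has to be duplicated.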
sdsidebandsplit¶ sdsidebandsplit(outfile='', overwrite=False, signalshift='', imageshift='', getbothside=False, refchan=0.0, refval='', otherside=False, threshold=0.2)[source] [EXPERIMENTAL] invoke sideband separation using FFT [Description] [Examples] [Development] [Details] Parameters • imagename (pathVec=’’) - a list of names of input images • outfile (string=’’) - Prefix of output image name • overwrite (bool=False) - overwrite option • signalshift (doubleVec=’’) - a list of channel number shifts in signal side band • imageshift (doubleVec=’’) - a list of channel number shifts in image side band • getbothside (bool=False) - sideband separation (True) or supression (False) getbothside = True • refchan (double=0.0) - reference channel of spectral axis in image sideband • refval (string=’’) - frequency at the reference channel of spectral axis in image sideband (e.g., “100GHz”) • otherside (bool=False) - solve the solution of the other side band side and subtract the solution • threshold (double=0.2) - Rejection limit of solution Description Warning WARNING: This task is EXPERIMENTAL. Interface and capabilities may change frequently. The task sdsidebandsplit performs a sideband separation operation on data collected by double sideband (DSB) receivers. The task splits the emission from the signal and image sidebands by utilizing the feature that spectral lines in the two sidebands shift in different amounts between observations with different LO offsets. The algorithm used in the task is analogous to that of Emerson, Klein, & Haslam (1979) 1 with shifts in the frequency domain instead of spatial one as described in the paper. The details of algorithm is also discussed in the section "Brief description of the mathematics behind the task", below. The task takes two or more images as inputs and is able to identify and split the contribution from the signal and image sidebands. The resulting output are separate image(s). When the parameter getbothside=False is set, only the signal sideband is solved for and stored as an image. When getbothside=True, both the signal and image sidebands are obtained and stored separately as two images. The name of output image(s) is defined by outfile and suffixed by ‘.signalband’ and ‘.imageband’ for the signal and image sidebands, respectively. How to prepare input images This task can only be used with spectral line data and not continuum. Therefore input images must be appropriately calibrated, for example, by using sdcal (and applycal), and any residual bandpass structure and continuum must be subtracted from the spectral line emmission using sdbaseline. Then an image must be created for each LO offset (e.g., sdimaging). The spatial and stokes coordinates must coincide with each other in the input images. It is recommended to use the frequency setting native to the observation when creating images to avoid adding complexity in the definition of the parameters, signalshift and imageshift. The default frequency parameters in sdimaging (nchan=-1, start=0, and width=1) help to avoid in adding this complexity. Definition of signalshift and imageshift Since the input images do not have information on how much the frequency is offset in the spectral window in each observation, sdsidebandsplit relies on user to provide it. Currently, the offset in each image should be defined in the unit of channel numbers of the image. In the future, the task may support other units such as frequency (Hz, MHz, GHz) or velocity (km/s). 
The parameter, signalshift, must be a list of offset channels of the signal sideband in corresponding elements of imagename, hence the number of elements in signalshift must be equal to that of imagename. The parameter imageshift is the same as signalshift but for the image sideband. Note signalshift and imageshift must be defined in the unit of channel numbers in the image. The sdsidebandsplit task relies on these values to shift back the spectra and construct a group of spectra whose signal (or image) sideband contribution are aligned. The solution significantly degrades if the values are inaccurate. It is the user’s responsibility to calculate and provide appropriate numbers of shifts especially in case the frequency coordinate of input images is different from the native observation, for example by regridding and/or by converting frequency frame. Solution flag: otherside There are two ways to obtain a spectrum of a sideband of interest in sdsidebandsplit. The parameter otherside allows a user to switch between the image or signal sideband. When solving for the signal (image) sideband with otherside=False, spectra are shifted back to construct a group of spectra in which the signal (image) sideband spectra are static in terms of channel and the spectrum of the signal (image) sideband is solved. When otherside=True, the signal (image) sideband spectrum is obtained by solving that of the other, image (signal), sideband and by subtracting it from the observed spectrum which contains contribution from both sidebands. Setting otherside=True may have an advantage of removing residual offsets in a spectrum. This is because the current algorithm does not take into account the sideband ratio and the offset component is assigned to the sideband which is originally solved. Therefore, solving with otherside=False doubles the offset components by assigning to both sidebands and breaks the conservation of flux between the original and derived spectra. This is indeed inappropriate but the capability is now exposed for testing purposes. In the future, this should be corrected, for example, by accepting the sideband ratio as an input. Note, setting otherside=True may cause over subtraction. If an emission line in a sideband is strong and wide, it causes significant ghost emission in the solution of the other sideband. When this ghost emission in addition to the offset component is subtracted from the original spectrum (otherside=True), it may cause a negative offset in the derived spectrum. Frequency definition of image sideband Since the input images do not have information of the frequency settings of the output image of the image sideband, sdsidebandsplit relies on user inputs when solving for the image sideband (getbothside=True). The frequency information of the image consists of the reference channel in the output image (refpix) and the frequency at the reference channel (refval). The frequency increment is defined as the same amount as that of signal sideband but with the opposite sign. If the frequency increment of the signal sideband is 4880kHz, that of image sideband is defined as -4880kHz. See the Examples tab for a sample use case showing how to specify refpix and refval. 
Brief description of the mathematics behind the task The algorithm to split signals from two sidebands is based on the following criteria: • The sign of the frequency increment for the image sideband is opposite to that for the signal sideband (note that “signal sideband” and “image sideband” are the nominal terms that physically correspond to either an upper sideband or a lower sideband, so if the increment for one sideband is positive, the other sideband is negative). • By shifting the LO frequency, the corresponding sky frequency for each spectral channel is shifted accordingly. Because of the opposite sign of the frequency increment, the channel shifts occur in opposite directions: if the corresponding channel shift in the signal sideband is positive, the shift for the image sideband is negative. • In the Fourier (time) domain, a frequency shift is represented as a modulation, i.e. a multiplication by a sinusoidal wave whose frequency is equal to the amount of the frequency shift.

Suppose that $$h$$ is an output spectrum of the DSB system and $$f$$, $$g$$ represent the contributions from the signal and image sidebands, respectively. Then, $$h_{k} = f_{k} + g_{k}$$, $$k=0,1,2,...,N-1$$, where $$k$$ denotes the channel index and $$N$$ is the number of spectral channels. If an LO frequency shift by $$x$$ causes $$f_{k}$$ and $$g_{k}$$ to shift by $$\Delta^{x}_{f}$$ and $$\Delta^{x}_{g}$$ with respect to their original spectra, respectively, the output spectrum with this shift is written as $$h^{x}_{k} = f_{k - \Delta^{x}_{f}} + g_{k - \Delta^{x}_{g}}$$.

We can shift $$h^{x}_{k}$$ so that the contribution from the image sideband, $$g$$, appears unshifted. By shifting $$h^{x}_{k}$$ by $$-\Delta^{x}_{g}$$, we can construct such a spectrum, $$h^{x,imag}_{k} = f_{k - \Delta^{x}} + g_{k}$$, where $$\Delta^{x} = \Delta^{x}_{f} - \Delta^{x}_{g}$$.

A channel shift in the signal sideband is represented as a modulation in the Fourier (time) domain. Thus, the Fourier transform of the above is written as $$H^{x,imag}_{t} = F_{t} \exp(-i \frac{2\pi t \Delta^{x}}{N}) + G_{t}$$, where $$H^{x,imag}_{t}$$, $$F_{t}$$, and $$G_{t}$$ are the Fourier transforms of $$h^{x,imag}_{k}$$, $$f_{k}$$, and $$g_{k}$$, respectively. Applying the same procedure to a different LO frequency offset, $$y$$, we obtain another relation: $$H^{y,imag}_{t} = F_{t} \exp(-i \frac{2\pi t \Delta^{y}}{N}) + G_{t}$$.

We can obtain $$G_{t}$$, the Fourier transform of the contribution from the image sideband, $$g_{k}$$, from the above two results: $$G_{t} = \frac{1}{2} (H^{x,imag}_{t} + H^{y,imag}_{t}) + \frac{1}{2} \frac{\cos\theta}{i\sin\theta} (H^{x,imag}_{t} - H^{y,imag}_{t})$$, where $$\theta = 2\pi t (\Delta^{x} - \Delta^{y}) / N$$.

There are two ways to obtain the contribution from the signal sideband. One is to solve for the signal sideband by exactly the same procedure as above. By doing that, we obtain $$F_{t} = \frac{1}{2} (H^{x,sig}_{t} + H^{y,sig}_{t}) - \frac{1}{2} \frac{\cos\theta}{i\sin\theta} (H^{x,sig}_{t} - H^{y,sig}_{t})$$, where the quantity with superscript “sig” corresponds to the spectrum shifted so that the contribution from the signal sideband remains fixed. This is what sdsidebandsplit does when otherside=True. The other way is to subtract the contribution of the image sideband from the output spectrum. If otherside=False, the contribution from the signal sideband is estimated in that way.
In principle, the task can split contributions from signal and image sidebands if only two images with different LO shifts are given. However, the task accepts more than two images to obtain better result. If $$m$$ images are given and all images are based on independent LO shifts, there are $$m(m-1)/2$$ combinations to obtain the solution of splitted spectra. In that case, the task takes average of those solutions to get a final solution. Note that, when $$\Delta^{m x}$$ and $$\Delta^{m y}$$ are so close that $$\theta$$ becomes almost zero, the above solution could diverge. Such a solution must be avoided to obtain a finite result. The parameter threshold is introduced for this purpose. It should range from 0.0 to 1.0. The solution will be excluded from the process if $$|\sin(\theta)|$$ is less than threshold. Bibliography 1 Emerson, Klein, & Haslam 1979, A&A, 76, 92 ADS Examples Obtain an image of signal sideband (side band supression): sdsidebandsplit(imagename=['shift_0ch.image', 'shift_132ch.image', 'shift_neg81ch.image'], outfile='separated.image', signalshift=[0.0, +132.0, -81.0], imageshift=[0.0, -132.0, +81.0]) The output image is ‘separated.image.signalband’. To solve both signal and image sidebands, set frequency of image sideband explicitly in addtion to getbothside=True. sdsidebandsplit(imagename=['shift_0ch.image', 'shift_132ch.image', 'shift_neg81ch.image'], outfile='separated.image', signalshift=[0.0, +132.0, -81.0], imageshift=[0.0, -132.0, +81.0], getbothside=True, refpix=0.0, refval='805.8869GHz') The output images are ‘separated.image.signalband’ and ‘separated.image.imageband’ for signal and image sideband, respectively. To obtain signal sideband image by solving image sideband, set otherside=True: sdsidebandsplit(imagename=['shift_0ch.image', 'shift_132ch.image', 'shift_neg81ch.image'], outfile='separated.image', signalshift=[0.0, +132.0, -81.0], imageshift=[0.0, -132.0, +81.0], otherside=True) Solution of image sideband is obtained and subtracted from the original (double sideband) spectra to derive spectra of signal sideband. The output image is ‘separated.image.signalband’. Development Parameter Details Detailed descriptions of each function parameter imagename (pathVec='') - a list of names of input images. At least two valid images are required for processing outfile (string='') - Prefix of output image name. A suffix, “.signalband” or “.imageband” is added to output image name depending on the side band side being solved. overwrite (bool=False) - overwrite option signalshift (doubleVec='') - a list of channel number shifts in signal side band. The number of elements must be equal to that of imagename imageshift (doubleVec='') - a list of channel number shifts in image side band. The number of elements must be either zero or equal to that of imagename. In case of zero length array, the values are obtained from signalshift assuming the shifts are the same magnitude in opposite direction. getbothside (bool=False) - sideband separation (True) or supression (False) refchan (double=0.0) - reference channel of spectral axis in image sideband refval (string='') - frequency at the reference channel of spectral axis in image sideband (e.g., “100GHz”) otherside (bool=False) - solve the solution of the other side band side and subtract the solution threshold (double=0.2) - Rejection limit of solution. The value must be greater than 0.0 and less than 1.0.
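To make the mathematics section above more tangible, here is a minimal numpy sketch of the two-spectrum case. It is emphatically not the task's implementation — the function and variable names are invented, it handles only two input spectra, and it assumes integer channel shifts:

```python
import numpy as np

def split_image_sideband(h_x, h_y, df_x, dg_x, df_y, dg_y, threshold=0.2):
    """Illustrative two-spectrum solution for the image-sideband spectrum g_k.

    h_x, h_y   : DSB spectra observed with two different LO offsets (1-D arrays).
    df_*, dg_* : integer channel shifts of the signal (f) and image (g) sideband
                 in each observation, as in signalshift/imageshift.
    """
    N = len(h_x)
    # Realign each spectrum so the image-sideband contribution is static in channel.
    Hx = np.fft.fft(np.roll(h_x, -dg_x))
    Hy = np.fft.fft(np.roll(h_y, -dg_y))
    # Residual shifts of the signal-sideband contribution after realignment.
    dx, dy = df_x - dg_x, df_y - dg_y
    t = np.arange(N)
    wx = np.exp(-2j * np.pi * t * dx / N)    # modulation of F_t in H^{x,imag}_t
    wy = np.exp(-2j * np.pi * t * dy / N)
    # Solve Hx = F*wx + G and Hy = F*wy + G for G, skipping ill-conditioned bins.
    denom = wx - wy
    good = np.abs(denom) > threshold
    G = 0.5 * (Hx + Hy)                      # fallback where the solution diverges
    G[good] = (wx[good] * Hy[good] - wy[good] * Hx[good]) / denom[good]
    return np.fft.ifft(G).real               # image-sideband spectrum g_k
```

The guard on small |w_x − w_y| plays the same role as the task's threshold parameter (the numerical cut is not identical), and the task additionally averages the solutions from all available image pairs when more than two images are supplied.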
2007 Sat 21 Apr # Matrices - Enter The Matrix (10) Posted at 5:58 pm In order to put Miss Loi’s new found freedom into good use, it’s about time the Matrix is loaded onto this website … hohoho. Do take particular note on the second question of Part 1. Recently Miss Loi’s been seeing that more and more school papers are testing students’ understanding of concepts like these (which is the right thing to do). From experience, many students don’t pay enough attention to mundane little things like the conditions for inverses to exist. Even Miss Loi is sometimes guilty of that in her uni days . 1. Given that A = and B = , find AB. State, with reason, whether A-1 and (AB)-1 exist. 2. Given that M = , find the matrix N such that MN = NM. So what would you do Neo? ### Revision Exercise To show that you have understood what Miss Loi just taught you, you must: 1. 123 says 2007 Sep 30 Sun 1:30pm 1 How to prove for part 1 ar? Can i just say .... "A is a non-singular matrix, hence there is an inverse matrix for A"? 2. 2007 Oct 1 Mon 12:49am 2 123, Miss Loi shall reproduce this from an obscure corner of your text-book A matrix whose determinant is zero is singular and has no inverse. On the other hand, a matrix whose determinant is not zero is non-singular and has an inverse. But the tricky part here is that A is a non-square matrix. So can you even calculate its determinant? So can A ever be non-singular? 3. Li-sa says 2008 Jul 3 Thu 10:07pm 3 1. AB = A-1 does not exist because it isn't a square matrix. (AB)-1 exists because it's a square matrix. OK to make things clearer, a matrix is invertible if it is a square matrix i.e. n by n AND if its determinant ≠ 0. So A-1 does not exist since A is not even a square matrix to begin with. For (AB)-1, MAKE SURE you do a quick calculation of its determinant i.e. just to check that it's not zero before stating that the inverse exists! P.S. your -1 should appear properly now 2. Let N = . MN= MN= NM= NM= By equality of matrices, 4a+5b=4a+2c -->5b=2c 4c+5d=5a+3c --> c+5d=5a --(1) 2a+3b=4b+2d --> 2a=b+2d--(2) 2c+3d=5b+3d --> 2c=5b .'. b:c = 2:5 Let's take b=2k and c=5k where k is any number. (1): 5k+5d=5a k+d=a a-d=k=b/2=c/5 (2):2a=2k+2d a-d=k=b/2=c/5 So we need to find values such that a-d=b/2=c/5. If we take b=2 and c=5, then a-d=1 a and d can be 1 and 0 respectively. Checking...MN=; NM= .'. N can be. Of course there are many other solutions to this question.你认为矩阵(matrices)麻烦吗?之前的ratios问题(http://www.exampaper.com.sg/questions/e-maths/similarity-ratios-fetish)中,其实ratios叫「比」。 Please see Miss Loi's comment #6 below. Why hide part of your comments? It's so interesting for 新加坡個小朋友 to know that Matrices=「矩阵」 and Ratios=「比」! 4. Li-sa says 2008 Jul 3 Thu 10:08pm 4 I wonder why my -1 does not go superscript. 5. Li-sa says 2008 Jul 6 Sun 6:32pm 5 Thanks! 6. 2008 Jul 6 Sun 11:58pm 6 Wait Li-sa Miss Loi hasn't dismissed you yet! Yes there can be many answers to N in Part 2 but one thing that wasn't stated is that Part 2 only carries 2 marks in the actual question. So in the interest of grabbing this puny 2 marks in the quickest of time under stressful exam conditions, it's not really advisable to go through that lengthy 長氣workings of yours (even though your final answer is correct). A simple recall of the Identity Matrix (which is in this case) that satisfies the conditions of N ought to do the trick 7. Li-sa says 2008 Jul 7 Mon 12:34am 7 Maybe I should tell you that I have hidden some information in the source code. 8. Li-sa says 2008 Jul 7 Mon 12:36am 9. 
Li-sa says 2008 Jul 7 Mon 12:44am 9 I also tried the "This is A-Maths not Physics". 10. Li-sa says 2008 Jul 7 Mon 5:46pm 10 Oh, and "Thanks!" is not equal to fleeeeeeing and somehow I wonder how you linked it to unintended dismissal. It is what it is.
## Saturday, July 5, 2014 ### Thermodynamic Context for Fisher Information Matrices The following is from Wikipedia on the reparametrization of a Fisher Information matrix: The Fisher information depends on the parametrization of the problem. If θ and η are two scalar parametrizations of an estimation problem, and θ is a continuously differentiable function of η, then ${\mathcal I}_\eta(\eta) = {\mathcal I}_\theta(\theta(\eta)) \left( \frac{{\mathrm d} \theta}{{\mathrm d} \eta} \right)^2$ where ${\mathcal I}_\eta$ and ${\mathcal I}_\theta$ are the Fisher information measures of η and θ, respectively.[12]In the vector case, suppose ${\boldsymbol \theta}$ and ${\boldsymbol \eta}$ are k-vectors which parametrize an estimation problem, and suppose that ${\boldsymbol \theta}$ is a continuously differentiable function of ${\boldsymbol \eta}$, then,[13]${\mathcal I}_{\boldsymbol \eta}({\boldsymbol \eta}) = {\boldsymbol J}^{\mathrm T} {\mathcal I}_{\boldsymbol \theta} ({\boldsymbol \theta}({\boldsymbol \eta})) {\boldsymbol J}$ where the (ij)th element of the k × k Jacobian matrix $\boldsymbol J$ is defined by $J_{ij} = \frac{\partial \theta_i}{\partial \eta_j}\,,$ and where ${\boldsymbol J}^{\mathrm T}$ is the matrix transpose of ${\boldsymbol J}$. In information geometry, this is seen as a change of coordinates on a Riemannian manifold, and the intrinsic properties of curvature are unchanged under different parametrization. In general, the Fisher information matrix provides a Riemannian metric (more precisely, the Fisher-Rao metric) for the manifold of thermodynamic states, and can be used as an information-geometric complexity measure for a classification of phase transitions, e.g., the scalar curvature of the thermodynamic metric tensor diverges at (and only at) a phase transition point.[14]In the thermodynamic context, the Fisher information matrix is directly related to the rate of change in the corresponding order parameters [emphasis mine].[15] In particular, such relations identify second-order phase transitions via divergences of individual elements of the Fisher information matrix. I wonder how this could be extended to the dispersion of mutations through configuration space. Some possible sources for further inquiry: Also Information Geometry: From Black Holes to Condensed Matter Systems, an editorial by Tapobrata Sarkar, Hernando Quevedo, and Rong-Gen Cai. And for black hole thermodynamics look at Information geometries for black hole physics by Narit Pidokrajt, as well as Information Geometric Approach to Black Hole Thermodynamics by the same author. The figure below is from Frederic Barbaresco's "Eidetic Reduction of Information Geometry" in Geometric Theory of Information by Frank Nielsen.
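As a quick sanity check of the scalar reparametrization rule quoted above (this worked example is mine, not from the quoted article): for an exponential distribution with rate $\lambda$, the Fisher information is ${\mathcal I}_\lambda(\lambda) = 1/\lambda^2$. Reparametrizing by the mean $\eta = 1/\lambda$, so that $\theta(\eta) = \lambda(\eta) = 1/\eta$ and ${\mathrm d}\lambda/{\mathrm d}\eta = -1/\eta^2$, the rule gives

${\mathcal I}_\eta(\eta) = \frac{1}{\lambda(\eta)^2}\left(\frac{{\mathrm d}\lambda}{{\mathrm d}\eta}\right)^2 = \eta^2 \cdot \frac{1}{\eta^4} = \frac{1}{\eta^2}$,

which is exactly the Fisher information one obtains by differentiating the exponential log-likelihood directly in the mean parametrization.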
# import subimport and bibentry incompatibility I'm writing my thesis and I want to be able to compile each section individually with the references. I also want to be able to include the full citation of my papers in the first chapter. I've achieved this except that it now throws an error: Cannot be used in preamble. \bibentry{TestEntry} The file structure is: Test ├── Ch1 │   └── Ch1.tex ├── example.bib ├── main.tex └── preamble.tex The file contents are: preamble.tex \usepackage{standalone} \usepackage{bibentry} \nobibliography* \usepackage{import} main.tex \documentclass[11pt]{book} \input{preamble} \begin{document} \title{An Example Document} \maketitle \subimport{Ch1/}{Ch1} \bibliographystyle{ieeetr} \bibliography{example} \end{document} Ch1/Ch1.tex %!TeX root = Ch1 \documentclass[crop=false,class=book]{standalone} \input{../preamble} \begin{document} \chapter{Introduction} This citation as cited normally: \cite{TestEntry} And now cited in long form in the text: \bibentry{TestEntry} \ifstandalone % Include the Bibliography \bibliographystyle{ieeetr} \bibliography{../example} \else % Do nothing \fi \end{document} example.bib @Book{TestEntry, title = {Test Entry}, publisher = {The University Press}, year = {1000}, author = {Joesph R Blogs}, abstract = {Nothing}, url = {http://www.example.com}, } I came across this question which I think might be relevant: Are \bibentry and \subfile incompatible? [! LaTeX Error: Cannot be used in preamble.] but I don't know how to apply it to this case.
We think you are located in South Africa. Is this correct? # Exothermic And Endothermic Reactions ## 12.2 Exothermic and endothermic reactions (ESBQP) ### The heat of reaction (ESBQQ) The heat of the reaction is represented by the symbol $$Δ\text{H}$$, where: $$Δ \text{H} = E_{\text{prod}} - E_{\text{react}}$$ • In an exothermic reaction, $$Δ \text{H}$$ is less than zero because the energy of the reactants is greater than the energy of the products. Energy is released in the reaction. For example: $\text{H}_{2}\text{(g)} + \text{Cl}_{2}\text{(g)} → 2\text{HCl (g)} \qquad Δ\text{H} < 0$ • In an endothermic reaction, $$Δ \text{H}$$ is greater than zero because the energy of the reactants is less than the energy of the products. Energy is absorbed in the reaction. For example: $\text{C (s)} + \text{H}_{2}\text{O (l)} → \text{CO (g)} + \text{H}_{2}\text{(g)} \qquad Δ\text{H} > 0$ Some of the information relating to exothermic and endothermic reactions is summarised in Table 12.1. Type of reaction Exothermic Endothermic Energy absorbed or released Released Absorbed Relative energy of reactants and products Energy of reactants greater than energy of product Energy of reactants less than energy of product Sign of ΔH Negative (i.e. $$< 0$$) Positive (i.e. $$> 0$$) Table 12.1: A comparison of exothermic and endothermic reactions. Writing equations using ΔH ΔH has been calculated for many different reactions and so instead of saying that ΔH is positive or negative, we can look up the value of ΔH for the reaction and use that value instead. There are two ways to write the heat of the reaction in an equation. For the exothermic reaction $$\text{C(s)} + \text{O}_{2}\text{(g)} → \text{CO}_{2}\text{(g)}$$, we can write: $$\text{C(s)} + \text{O}_{2}\text{(g)} → \text{CO}_{2}\text{(g)} \qquad Δ\text{H} = -\text{393}\text{ kJ·mol^{-1}}$$ or $$\text{C(s)} + \text{O}_{2}\text{(g)} → \text{CO}_{2}\text{(g)} + \text{393}\text{ kJ·mol^{-1}}$$ For the endothermic reaction, $$\text{C(s)} + \text{H}_{2}\text{O(g)} → \text{H}_{2}\text{(g)} + \text{CO(g)}$$, we can write: $$\text{C(s)} + \text{H}_{2}\text{O(g)} → \text{H}_{2}\text{(g)} + \text{CO(g)} \qquad Δ\text{H} = \text{+131}\text{ kJ·mol^{-1}}$$ or $$\text{C(s)} + \text{H}_{2}\text{O(g)} + \text{+131}\text{ kJ·mol^{-1}} → \text{H}_{2}\text{(g)} + \text{CO(g)}$$ The units for ΔH are $$\text{kJ·mol^{-1}}$$. In other words, the ΔH value gives the amount of energy that is absorbed or released per mole of product that is formed. Units can also be written as $$\text{kJ}$$, which then gives the total amount of energy that is released or absorbed when the product forms. The energy changes during exothermic and endothermic reactions can be plotted on a graph: We will explain shortly why we draw these graphs with a curve rather than simply drawing a straight line from the reactants energy to the products energy. In the investigation on exothermic and endothermic reactions learners work with concentrated sulfuric acid. They must work in a well ventilated room or, if possible, in a fume cupboard. This is a highly corrosive substance and learners must handle it with care. If they spill any on themselves they must immediately wash the affected area with plenty of running water. Either the learner or their friend should inform you as soon as possible so you can ensure that the learner is ok. If necessary the learner may need to go to the bathroom to remove and rinse clothing that is affected. Either you or another learner should accompany them. 
Most of the salts that the learners will work with are hygroscopic and will quickly absorb water from the air. These salts can cause chemical burns and should be handled with care. If possible learners should wear gloves to protect their hands. ## Endothermic and exothermic reactions ### Aim To investigate exothermic and endothermic reactions. ### Apparatus and materials • Approximately $$\text{2}$$ $$\text{g}$$ of calcium chloride $$(\text{CaCl}_{2})$$ • Approximately $$\text{2}$$ $$\text{g}$$ of sodium hydroxide $$(\text{NaOH})$$ • Approximately $$\text{2}$$ $$\text{g}$$ of potassium nitrate $$(\text{KNO}_{3})$$ • Approximately $$\text{2}$$ $$\text{g}$$ of barium chloride $$(\text{BaCl}_{2})$$ • concentrated sulphuric acid $$(\text{H}_{2}\text{SO}_{4})$$ (Be careful, this can cause serious burns) • 5 test tubes • thermometer When working with concentrated sulfuric acid always wear gloves and safety glasses. Always work in a well ventilated room or in a fume cupboard. ### Method 1. Dissolve about $$\text{1}$$ $$\text{g}$$ of each of the following substances in $$\text{5}$$-$$\text{10}$$ $$\text{cm^{3}}$$ of water in a test tube: $$\text{CaCl}_{2}$$, $$\text{NaOH}$$, $$\text{KNO}_{3}$$ and $$\text{BaCl}_{2}$$. 2. Observe whether the reaction is endothermic or exothermic, either by feeling whether the side of the test tube gets hot or cold, or using a thermometer. 3. Dilute $$\text{3}$$ $$\text{cm^{3}}$$ of concentrated $$\text{H}_{2}\text{SO}_{4}$$ in $$\text{10}$$ $$\text{cm^{3}}$$ of water in the fifth test tube and observe whether the temperature changes. Remember to always add the acid to the water. 4. Wait a few minutes and then carefully add $$\text{NaOH}$$ to the diluted $$\text{H}_{2}\text{SO}_{4}$$. Observe any temperature (energy) changes. ### Results Record which of the above reactions are endothermic and which are exothermic. Exothermic reactions Endothermic reactions • When $$\text{BaCl}_{2}$$ and $$\text{KNO}_{3}$$ dissolve in water, they take in heat from the surroundings. The dissolution of these salts is endothermic. • When $$\text{CaCl}_{2}$$ and $$\text{NaOH}$$ dissolve in water, heat is released. The process is exothermic. • The reaction of $$\text{H}_{2}\text{SO}_{4}$$ and $$\text{NaOH}$$ is also exothermic. # Practise now to improve your marks You can do it! Let us help you to study smarter to achieve your goals. Siyavula Practice guides you at your own pace when you do questions online. ## Endothermic and exothermic reactions Exercise 12.2 In each of the following reactions, say whether the reaction is endothermic or exothermic, and give a reason for your answer. Draw the resulting energy graph for each reaction. $$\text{H}_{2}\text{(g)} + \text{I}_{2}\text{(g)} → 2\text{HI (g)} + \text{21}\text{ kJ·mol^{-1}}$$ Exothermic. Heat is given off and this is represented by showing $$+$$ energy on the right hand side of the equation. $$\text{CH}_{4}\text{(g)} + 2\text{O}_{2}\text{(g)} → \text{CO}_{2}\text{(g)} + 2\text{H}_{2}\text{O (g)} \qquad Δ\text{H} = -\text{802}\text{ kJ·mol^{-1}}$$ Exothermic. $$\Delta H$$ is negative for this reaction. The following reaction takes place in a flask: $$\text{Ba(OH)}_{2}.8\text{H}_{2}\text{O (s)} + 2\text{NH}_{4}\text{NO}_{3}\text{(aq)} → \text{Ba(NO}_{3}\text{)}_{2}\text{(aq)} + 2\text{NH}_{3}\text{(aq)} + 10\text{H}_{2}\text{O (l)}$$ Within a few minutes, the temperature of the flask drops by approximately $$\text{20}$$$$\text{°C}$$. Endothermic. 
The temperature of the reaction vessel decreases indicating that heat was needed for the reaction. $$2\text{Na (aq)} + \text{Cl}_{2}\text{(aq)} → 2\text{NaCl (aq)} \qquad Δ\text{H} = -\text{411}\text{ kJ·mol^{-1}}$$ Exothermic. $$\Delta H < 0$$ $$\text{C (s)} + \text{O}_{2}\text{(g)} → \text{CO}_{2}\text{(g)}$$ Exothermic. This reaction is the combustion of wood or coal to form carbon dioxide and gives off heat. For each of the following descriptions, say whether the process is endothermic or exothermic and give a reason for your answer. evaporation Endothermic. Energy is needed to break the intermolecular forces. the combustion reaction in a car engine Exothermic. Energy is released. bomb explosions Exothermic. In an explosion a large amount of energy is released. melting ice Endothermic. Energy is needed to break the intermolecular forces. digestion of food Exothermic. Digestion of food involves the release of energy that your body can then use. condensation Exothermic. Energy is given off as the particles are going from a higher energy state to a lower energy state. When you add water to acid the resulting solution splashes up. The beaker also gets very hot. Explain why. The reaction between acid and water is an exothermic reaction. This reaction produces a lot of heat and energy which causes the resulting solution to splash. Since the acid is usually more dense than the water adding water to the acid causes the reaction to happen in a small area and on the surface. This leads to a more vigorous (fast) reaction. If you add the acid to the water then there is a larger volume of water to absorb the heat of the reaction and so the reaction proceeds more slowly.
#### Vol. 5, No. 4, 2010 Recent Issues The Journal Subscriptions Editorial Board Research Statement Scientific Advantage Submission Guidelines Submission Form Author Index To Appear ISSN: 1559-3959 Mechanical behavior of silica nanoparticle-impregnated kevlar fabrics ### Zhaoxu Dong, James M. Manimala and C. T. Sun Vol. 5 (2010), No. 4, 529–548 ##### Abstract This study presents the development of a constitutive model for in-plane mechanical behavior of five styles of plain woven Kevlar fabrics impregnated with silica nanoparticles. The neat fabrics differed in fiber type, yarn count, denier, weave tightness and strength, and varying proportions (4, 8, 16 and 24% by weight) of nanoparticles were added to enhance the mechanical properties of the fabric. It was found that fabrics impregnated with nanoparticles exhibit significant improvement in shear stiffness and a slight increase in tensile stiffness along the yarn directions over their neat counterparts. A constitutive model was developed to characterize the nonlinear anisotropic properties of nanoparticle-impregnated fabrics undergoing large shear deformation. The parameters for the model were determined based on uniaxial (along yarn directions) and $4{5}^{\circ }$ off-axis tension tests. This model was incorporated in the commercial FEA software ABAQUS through a user-defined material subroutine to simulate various load cases. ##### Keywords Kevlar fabric, soft armor, nanoparticle, constitutive model
We have the badge system that encourage users use more features on the site. Does that make sense to give user more reputations after achieving badges like nice questions or nice answers? • It's not completely clear what you're saying. Which badges in particular are you referring to? Are you suggesting adding a reputation bonus to some badges, or are you suggesting that with certain badges (like Good Question and Good Answer, for example) since the poster has already gained reputation for that answer or question that a badge as well is unnecessary? Jun 27, 2016 at 14:37 • The latter. Thanks Jun 27, 2016 at 14:39 • Could you edit your question to be more specific and/or explicit, please? Jun 27, 2016 at 14:40 • Sorry for unclear question, I am new to CV and have a lot of questions in mind. When I ask this question I did not thing though. Jun 27, 2016 at 14:52 • While there's not much issue with discussing these things, note that changes to the system would be network wide, so if your intent is to propose some change this would probably be better placed on the network meta (meta.stackexchange.com). If that's what you're trying to achieve, once your question is polished up it would be possible to migrate it. (Though I expect this has probably already been raised there, so check for duplicates first.) Jun 27, 2016 at 14:53 • Your title still seems to be asking the former (it appears to be a request for additional reputation for earning a badge) rather than the latter (questioning whether both badges and reputation are necessary) Jun 27, 2016 at 15:35 • @Glen_b Yes, I think myself is not clear what is the question, but your answers explains my confusion 100% in detail in both ways. If you think it is necessary to make it more clear, please feel free to edit my question. Jun 27, 2016 at 15:40 More of the story comes when you look at the tag badges, for example, whuber has a spatial tag badge, spatial statistics being a topic on which he has vastly more knowledge than me. Knowing that he has that tag badge and I don't might help to indicate his relative expertise on the topic.
Guidelines for managing acute gastroenteritis based on a systematic review of published research

M S Murphy, Institute of Child Health, Clinical Research Building, Whittall Street, Birmingham B4 6NH, UK. Correspondence to Dr Murphy.

This paper is intended to provide evidence-based recommendations about the assessment and clinical management of infants and children with acute gastroenteritis. These guidelines were derived from a systematic review of published research. The diagnosis of gastroenteritis is not addressed; this is often presumptive and is based on a history of acute diarrhoea in the absence of other likely explanations. Microbiological investigation is not necessary in every case, but may be important in patients who require admission to hospital, in those who have bloody or mucoid diarrhoea suggesting colitis, in high risk patients such as those with an immune deficiency, and in cases where there is diagnostic uncertainty. Clinicians should apply general medical knowledge and clinical judgment in using these guidelines.

## Scope of guidelines

The topics addressed are: assessment of the risk of dehydration; assessment of the degree of dehydration; oral rehydration therapy (ORT); strategies for rehydration and maintenance of hydration; management of hypernatraemic dehydration; nutritional management during and after the illness; and the role of pharmacological agents including antidiarrhoeals and antimicrobials.

## Systematic review: search strategy and evaluation of the evidence

The search was performed using the Medline and CINAHL databases, and covered the years 1966–97. Some relevant articles were also identified from the references cited in publications identified from these databases. The search was limited to studies of human subjects published in English. Subject headings employed were: “gastroenteritis”, “diarrhoea”, “rehydration solutions”, “dehydration”, and “hypernatraemia”. Textword searches were also done using the terms “infectious diarrh$”, “oral rehydration solution$”, and “hypernatr$ dehydration”. For each topic the terms “review”, “meta-analysis”, “randomised controlled trial”, “cohort study”, and “case control study” were applied. The Cochrane Library database of systematic reviews was searched under subject headings. Evidence from the medical literature and the strength of the recommendations given were then categorised according to a previously described scheme (table 1).1

Table 1 Categories of evidence and recommendations

## Assessment of hydration

The risk of dehydration or, if already established, the severity of dehydration, can be assessed from a patient's clinical history and physical examination.

### RISK FACTORS FOR DEHYDRATION

The risk of dehydration in children is related to age.2 Young infants have an increased surface area:body volume ratio resulting in increased insensible fluid losses. They receive milk as the main source of nutrition; this constitutes a large osmotic load that may promote an osmotic diarrhoea, and a large protein load resulting in a high renal solute load. Finally, infants have an inherent tendency to more severe vomiting and diarrhoea compared with older children and adults. It is logical to assume that severe symptoms, including frequent vomiting and watery diarrhoea, would predict an increased risk of dehydration.
Retrospective case-control studies from developing countries have confirmed this.3 4 Studies from the Indian subcontinent have identified failure to give oral rehydration solution (ORS) and discontinuation of breast feeding during the illness as the greatest risk factors for dehydration.4 5 In those studies other variables contributing to risk included age (< 12 months), frequent stools (> eight/day), vomiting (> twice/day), and severe undernutrition. In studies from South America of children < 2 years old with acute diarrhoea, the use of bottle feeding rather than breast feeding was identified as an independent risk factor for dehydration.6 7 In a study on the significance of specific pathogens, Vibrio cholerae was associated with a high risk of dehydration, while other pathogens including rotavirus, Campylobacter jejuni, and enterotoxigenic Escherichia coli were comparable with one another with respect to risk of dehydration.8

### CLINICAL ASSESSMENT OF HYDRATION

The severity of dehydration is usefully expressed in terms of weight loss as a percentage of total body weight. If a recent weight record is available (for example, from the parent held medical record) dehydration can be estimated with some accuracy. The severity of dehydration can also be determined using certain specific clinical criteria. In a prospective cohort study of subjects between 3 months and 18 months of age, multiple regression analysis selected “prolonged skinfold”, dry oral mucosa, sunken eyes, and altered neurological status as the clinical signs that best correlated with dehydration as determined by pre-rehydration and post-rehydration weights.9 In that study, those subjectively judged to be “mildly dehydrated” showed weight gains of 3.6–3.9%, “moderate dehydration” was associated with weight gains of 4.9–5.3%, and “severe dehydration” with weight gains of 9.5–9.8%. Capillary refill time (> 2 seconds) has been proposed as a useful indicator of dehydration.10 This technique lacks sensitivity and specificity, but a normal capillary refill time is very unlikely with severe dehydration.11 12

## Recommendations on assessment of hydration

• Assess risk of dehydration on the basis of age (highest in young infants) and frequency of watery stools and vomiting [II,B]
• Assess presence/severity of dehydration on the basis of recent weight loss (if possible) and clinical examination. Signs of proved value in assessing dehydration include “prolonged skinfold”, dry oral mucosa, sunken eyes, and altered neurological status [I,A].

## Fluid management

In children with clinical evidence of dehydration, biochemical investigations including serum electrolytes, urea, and creatinine, and assessment of acid/base status may be helpful. Irrespective of the serum electrolyte concentrations, however, dehydration from gastroenteritis is invariably associated with total body deficits of sodium and chloride. In addition, there is often significant potassium depletion and acidosis. Hyponatraemia and hypernatraemia are simply indicative of the relative losses of water and sodium. The rehydration fluid should replace both water and electrolyte losses. In many cases an initial phase of rehydration is necessary, followed by a fluid maintenance phase aimed at preventing the recurrence of dehydration (fig 1).

Figure 1 Management of hydration in gastroenteritis.

### ORAL REHYDRATION THERAPY

In all but the most seriously ill patients rehydration is possible using ORT.
The effectiveness of ORT was first proved 30 years ago in major clinical studies undertaken during cholera epidemics in Bangladesh.13 14 These studies were possible after the discovery in the 1960s that intestinal water absorption was mediated by an active transport process in which sodium and glucose were cotransported in an equimolar ratio. Studies in laboratory animals showed that glucose stimulated intestinal sodium absorption.15 Studies in human subjects confirmed this observation in man, and showed that the sodium-glucose cotransporter continued to function in patients with cholera.16-19 Subsequently, controlled studies showed the effectiveness of ORT in infants and children with non-cholera diarrhoea.20 21 The use of ORT in the management of gastroenteritis in the UK was associated with a dramatic fall in mortality, from 300 deaths annually in the late 1970s to about 25 in the late 1980s.22 Hypernatraemic dehydration, a major cause of mortality in acute gastroenteritis, also became much less common.23

### COMPOSITION OF ORAL REHYDRATION SOLUTIONS

A range of ORS products are currently available, and these vary markedly in their sodium and glucose concentrations (table 2). Although these are generally effective in the treatment and prevention of dehydration, there has been controversy about the ideal composition for ORS.14 24

Table 2 Composition (mmol/l) of available oral rehydration solution preparations

#### Sodium

In the 1970s the World Health Organisation (WHO) adopted a glucose-electrolyte solution (WHO-ORS) containing 90 mmol/l of sodium, and this was promoted for worldwide use. This solution was originally evaluated in adults with cholera or cholera-like (toxigenic) diarrhoea, the category of patients for whom it was primarily designed. Later, however, its use was extended to children with non-toxigenic diarrhoea, including rotavirus gastroenteritis.25 In the underdeveloped world, diarrhoeal disease is often associated with large stool sodium losses.24 In patients in Western countries, sodium loss is generally less severe, and so there has been concern about the risk of hypernatraemia with WHO-ORS, especially in infants < 3 months of age.26 27 Moreover, controlled clinical trials in infants < 3 months and in older children have shown that an ORS with a sodium concentration in the range 50–60 mmol/l is safe and effective in the treatment and prevention of dehydration.28-33 The European Society for Paediatric Gastroenterology and Nutrition (ESPGAN) published guidelines based on these studies, recommending a sodium concentration of 60 mmol/l for European children.34

#### Glucose

The ideal carbohydrate concentration in ORS must be related to the sodium concentration. The WHO has recommended a glucose:sodium ratio of less than 1.4:1.14 Hyperosmolar ORS containing excessive amounts of carbohydrate could induce osmotic diarrhoea as a result of carbohydrate malabsorption, and the associated water loss would increase the risk of hypernatraemia. ESPGAN has therefore recommended the use of a hypo-osmolar ORS for European children.34 Glucose may be provided as monosaccharide or as a complex carbohydrate (for example, glucose polymer or starch). Complex carbohydrates have the theoretical advantage of forming solutions of reduced osmolality, although they require digestion before absorption.
In underdeveloped countries, cereal based ORS has been successfully employed.35 A recent meta-analysis of 13 clinical trials examined the effect of rice based ORS on stool output and duration of diarrhoea; there appeared to be a worthwhile benefit in patients with cholera, but the effect in children with acute non-cholera diarrhoea was uncertain.36 Appropriately therefore most solutions currently in use contain glucose as monosaccharide (table2). #### Potassium, bicarbonate, and base precursors Most ORS products contain 20 mmol/l of potassium, and this appears sufficient to prevent hypokalaemia despite individual variation in stool potassium losses.24 Most contain bicarbonate, or more often a stable base-precursor such as acetate, lactate, or citrate. These constituents were originally included to correct the acidosis that may accompany dehydration, and to promote water and sodium absorption. In fact there is no evidence that inclusion of base is necessary or beneficial.24 ### REHYDRATION In the past many regimens aimed at gradual rehydration over 24 hours or longer, but this approach was not evidence based. It seems both illogical and potentially disadvantageous to delay the process of recovery in these children by prolonging the rehydration process. Nowadays most authorities recommend rapid rehydration over a three or four hour period.2 13 14 The degree of dehydration is estimated as outlined above and expressed as percentage of body weight. The fluid deficit can then be calculated: thus, an estimated 5% dehydration would be treated by giving 50 ml/kg of replacement fluid. ORS may be given by bottle, cup, or spoon as appropriate, and frequent administration may be necessary to repair the deficit within four hours. Most dehydrated children are thirsty and will take fluids readily, but some seriously ill children may require ORS given via an enteral tube. Rehydration should be done under medical supervision, and the state of hydration should be reassessed during rehydration and at the end of the four hour rehydration period. If the patient is still dehydrated then the residual deficit is again estimated and the rehydration process is continued. If children vomit during the process of rehydration, more ORS is immediately given. Most authorities recommend that children with signs of shock (inadequate perfusion of vital organs) should receive intravenous rehydration initially.2 14 Although oral rehydration is quite possible in such cases, the intravenous route helps to guarantee rapid rehydration in these critically ill patients. In cases of hypernatraemic dehydration (serum sodium > 150 mmol/l) slower fluid replacement over 12 hours has been recommended to reduce the risk of seizures (“slow ORT”).37 There is a consensus that the use of ORT can in itself reduce the risk of seizures during rehydration.14 In one report none of 34 infants with hypernatraemic dehydration suffered seizures when rehydration was repaired with WHO-ORS over 12 hours.38 In the largest published controlled trial of intravenous versus oral rehydration, 470 children under 18 months of age, all with severe gastroenteritis, were randomly assigned to receive either ORS or intravenous fluid.39 Of 34 hypernatraemic patients in the ORT group, 2 (6%) developed seizures compared with 6 of 24 (25%) in the group given intravenous treatment. These studies are reassuring, although it may be significant that WHO-ORS (sodium 90 mmol/l) was used, as opposed to the ORS currently recommended in Europe (sodium 60 mmol/l). 
It is therefore important that the serum sodium concentration be closely monitored during rehydration, because rapid reductions are associated with an increased risk of cerebral oedema and convulsions.

### MAINTENANCE TREATMENT

Various strategies have been recommended to prevent dehydration and to prevent the recurrence of dehydration from ongoing fluid losses when rehydration is complete.2 13 40 Children require their normal maintenance fluid, and this can be calculated from body weight. A useful method is to provide 100 ml/kg/day for the first 10 kg of body weight, 50 ml/kg/day for the next 10 kg, and 25 ml/kg/day thereafter.2 In practice, fluids are offered ad libitum and in almost all cases children will meet or exceed such calculated “maintenance requirements”. Maintenance fluids can be given as breast milk, formula, or other fluids appropriate for age. In addition to the maintenance requirement, however, continuing losses due to persistent diarrhoea or vomiting should be replaced with extra feeds of ORS. One strategy is to alternate normal feeds freely with ORS feeds.2 An alternative is to give approximately 10 ml/kg for each diarrhoeal stool passed.14

## Recommendations on fluid management

• An ORS containing sodium 60 mmol/l, glucose 90 mmol/l, potassium 20 mmol/l, and citrate 10 mmol/l, with a low osmolality of 240 mmol/l, is safe and effective for the prevention and treatment of dehydration in European children with acute gastroenteritis [I,A]
• In the vast majority of cases rehydration should be carried out using ORT [I,A]
• Rehydration should normally be completed over a three to four hour period [II,B]
(a) “Mild” dehydration (3–5%): 30–50 ml/kg as ORT over three to four hours
(b) “Moderate” dehydration (5–10%): 50–100 ml/kg as ORT over three to four hours
(c) “Severe” dehydration (10% +): 100–150 ml/kg as ORT over three to four hours
(d) Reassess hydration immediately after giving the estimated deficit
• Severe dehydration with signs of shock: 20 ml/kg boluses of normal saline intravenously [III,C]
• When organ perfusion is restored begin ORT. In hypernatraemic dehydration, ORT is safer than intravenous rehydration [II,B]
• In hypernatraemic dehydration use “slow ORT”, aiming to complete rehydration over 12 hours, and monitor serum sodium to avoid a rapid reduction [III,C]
• To prevent primary dehydration or recurrence of dehydration, allow unrestricted fluids, and in high risk cases either (a) alternate normal drinks (for example, milk or water) with ORS [III,C], or (b) give normal drinks and 10 ml/kg ORS after each watery stool [III,C].

## Nutritional management

Until recently it was considered that the early reintroduction of feeds after acute gastroenteritis risked exacerbating the illness, causing protracted diarrhoea. Children were routinely starved for 24 hours or even longer.41 Evidence has now emerged, however, favouring the early reintroduction of feeds (fig 2).

Figure 2 Management of feeding in gastroenteritis.

Firstly, there is indirect evidence to support this strategy based on studies revealing the positive effects of luminal nutrition on mucosal growth and regeneration. Early refeeding was shown to reduce the abnormal increase in intestinal permeability that occurs in acute gastroenteritis.42 Increased permeability is considered to indicate a loss of mucosal integrity.
Early refeeding may also enhance enterocyte regeneration, and may promote recovery of the brush border membrane disaccharidase.43 43A Many studies have now indicated that there is no advantage to the practice of “regrading” feeds—that is, gradually increasing the feed concentration during the recovery phase after gastroenteritis.44-49 In malnourished children, early refeeding has been associated with significant nutritional advantages.50 In a recent multicentre European study, 230 weaned children < 3 years of age with acute gastroenteritis were randomly assigned to “early refeeding” or “late refeeding”.51 These children were not generally malnourished before the onset of their illness. Oral rehydration was carried out over four hours. The “early refeeding” group then received a normal diet without further delay. The “late refeeding” group received maintenance ORS for a further 20 hours, and then restarted a normal diet. Both groups were offered ORS 10 ml/kg after each watery stool. Breast fed infants continued to feed during the rehydration and maintenance phases. There was no difference between the two groups in the incidence of vomiting or watery stools on days 1 to 5, and weight gain was similar in both groups on days 5 and 14. Transient lactase deficiency is common, particularly after rotavirus gastroenteritis. Occasionally it persists, and lactose intolerance may be a cause of post-gastroenteritis diarrhoea.52 In Europe this appears to have become a rather uncommon clinical problem.53 Moreover, a meta-analysis of clinical trials has indicated that a lactose free diet is rarely necessary after acute gastroenteritis.54 In a case-control study of Bangladeshi children < 3 years, multivariate analysis using a logistic regression model showed that discontinuation of breast feeding during the illness was associated with a fivefold increase in the incidence of dehydration.5 There is some evidence that continued breast feeding may actually reduce stool output.56 Based on these studies, ESPGAN recently issued guidelines with regard to feeding in childhood gastroenteritis.57 The recommendations were for oral rehydration over a period of three to four hours, followed by immediate reintroduction of normal feeds thereafter. It was also recommended that breast feeding should be continued throughout the rehydration and maintenance phases of treatment. It was considered that lactose free formulas were rarely necessary. Although persistent lactose intolerance is now uncommon, it was suggested that if persistent diarrhea occurred after the reintroduction of milk, stool pH and stool reducing substances should be measured, and a lactose free formula should be considered if the stool was acid and contained more than 0.5% reducing substances.43A Recently we reported a series of infants in whom the administration of a glucose polymer formula resulted in severe protracted diarrhoea.58 These infants were eventually found to have congenital sucrase-isomaltase deficiency. Unfortunately, in such cases the diarrhoea is likely to be attributed to post-gastroenteritis syndrome. 
Congenital sucrase-isomaltase deficiency is not rare, and the inability of these infants to digest glucose polymer had not previously been appreciated.59

## Recommendations on nutritional management

• Breast feeding should continue through rehydration and maintenance phases of treatment [II,C]
• Formula feeds should be restarted after completion of rehydration [I,A]
• If there is persistent diarrhoea after reintroduction of feeds, evidence for lactose intolerance should be sought. If the stool pH is acid and contains more than 0.5% reducing substances a lactose free formula should be considered [III,C].

## Pharmacotherapy

### ANTIDIARRHOEAL AGENTS

In the past antidiarrhoeal drugs were often employed in the treatment of acute gastroenteritis, but with little evidence of benefit.60 Bismuth subsalicylate has antisecretory and bactericidal properties, and it may have some effect on the clinical symptoms.61 There is no evidence that other agents such as cholestyramine, loperamide, kaolin, pectin, and diphenoxylate have an effect.62-65 Nowadays, none of these drugs is considered to have a role in the treatment of gastroenteritis in children, and it is possible that their use may have adverse consequences.60

### ANTIMICROBIAL AGENTS

Although C jejuni gastroenteritis is often a mild and self limiting illness, one randomised controlled trial indicated that if erythromycin was started at first presentation, before stool culture results were available, the clinical course of the illness was shortened.66 Several other randomised trials in which erythromycin was started after isolation of the organism showed a shortened period of bacterial excretion, but no effect on the clinical course of the illness.67 68 A single randomised controlled trial of treatment in children with Y enterocolitica using trimethoprim/sulfamethoxazole failed to show any useful benefit.69 The role of antibiotics in the treatment of E coli associated acute gastroenteritis in the UK is unclear.70 Non-typhoidal salmonella gastroenteritis is usually self limiting, and studies have failed to show any benefit from antibiotic treatment.70 In one study, ampicillin or amoxycillin treatment appeared to be associated with prolonged salmonella excretion in children.70 It has been suggested that antibiotic treatment may be indicated in the very young, in immunocompromised patients, and in those who are systemically ill.70 There is clear evidence that antibiotic treatment is worthwhile in patients with shigella dysentery, in whom it shortens the clinical illness and the duration of pathogen excretion.70

## Recommendations regarding pharmacotherapy

• Infants and children with gastroenteritis should not be treated with antidiarrhoeal agents [I,A]
• Most bacterial gastroenteritis does not require or benefit from antibiotic treatment [I,A]
• Antibiotic treatment may be indicated for salmonella gastroenteritis in the very young, in immunocompromised patients, and in those who are systemically ill [III,C]
• Patients with shigella dysentery should receive antibiotic treatment [I,A].
This study evaluated the performance of soil and coal cinder used as substrates in vertical-flow constructed wetlands for the removal of fluoride and arsenic. Two duplicate pilot-scale artificial wetlands were set up, planted respectively with cannas, with calamus, or with no plant as a blank, and fed with a synthetic sewage solution. Laboratory (batch) incubation experiments were also carried out separately to ascertain the fluoride and arsenic adsorption capacity of the two materials (i.e. soil and coal cinder). The results showed that both soil and coal cinder had quite high fluoride and arsenic adsorption capacities. The wetlands were operated for two months. The concentrations of fluoride and arsenic in the effluent of the blank wetlands were clearly higher than in the wetlands planted with cannas and calamus. Fluoride and arsenic accumulation in the wetland body at the end of the operation period was in the range of 14.07–37.24% and 32.43–90.04%, respectively, as compared with the unused media.
# For an arbitrary positive integer $d$ and random modulus $m$, what is the probability that $d \bmod m = 0$?

More specifically, assume that $d$ is taken from $[1, 2^n]$ and $m$ is taken from $[1, n]$. What is an upper bound on the probability that $d$ is a multiple of $m$?

## 1 Answer

Fixing $m$, the probability that $d$ is a multiple of $m$ is approximately $1/m$. Now, if you choose $m$ randomly from $[1,n]$ and average $1/m$, you get $\frac{1}{n} \sum_{m=1}^n \frac{1}{m}$, which is approximately $\frac{\log n}{n}$.

EDIT: I didn't read the question carefully enough. Sorry. The least common multiple of the first $n$ numbers grows like $e^n$. So if you take the lcm of the numbers $1 \ldots n \ln 2$, you get a number somewhere around $2^n$ which is going to have as factors all the numbers between $1$ and $n \ln 2$. This shows the probability you're interested in is at least $\ln 2$. In fact, this lcm is going to contain as a factor all numbers except prime powers larger than $n \ln 2$. Since any prime has at most one power in this interval, by the prime number theorem you then get a lower bound for the probability of around $1-1/\ln n$. I expect the upper bound matches, but offhand I don't see how to prove it.

• What about if $d$ is not random, but possibly chosen pathologically? For example, if $d$ were allowed to be chosen from $[1, n!]$, we could let $d = n!$ and then the probability that $d \bmod m = 0$ would be 1. – jonderry Oct 19 '10 at 22:39
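As a quick sanity check on the first part of the answer (this snippet is mine, not from the original thread), one can estimate the probability empirically for a moderate $n$ and compare it with $\frac{1}{n}\sum_{m=1}^{n}\frac{1}{m} \approx \frac{\ln n}{n}$; the parameters below are arbitrary.

```python
import math
import random

def divisibility_probability(n, trials=200_000, seed=0):
    """Monte Carlo estimate of P(m divides d) for d ~ U[1, 2^n], m ~ U[1, n]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        d = rng.randint(1, 2 ** n)
        m = rng.randint(1, n)
        hits += (d % m == 0)
    return hits / trials

n = 30
estimate = divisibility_probability(n)
harmonic = sum(1 / m for m in range(1, n + 1)) / n   # (1/n) * H_n
print(f"simulated : {estimate:.4f}")
print(f"H_n / n   : {harmonic:.4f}")
print(f"ln(n) / n : {math.log(n) / n:.4f}")
```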
# Proof that $\Bbb R$ is uncountable.

Ok so, I was given a proof of this by my lecturer, but I'm not understanding it fully, and I was wondering if this suffices as a proof.

Proof: It is enough to show that $(0,1)$ is uncountable. Let $x \in (0,1)$ and write $x$ in its decimal expansion, i.e. $x = 0.a_1a_2a_3...$, for $a_i \in \{0,1,2,\ldots,9\}$ for $i \in \Bbb N$.

Next assume that $(0,1)$ is countable, that is $(0,1) = \{x_1, x_2, x_3,... \}$. We write

$x_1 = 0.a_{11}a_{12}a_{13}...$

$x_2 = 0.a_{21}a_{22}a_{23}...$

This is where I differ from my lecturer. He says: let $y = 0.b_1b_2b_3...$, where $b_i = 2$ if $a_{ii} \neq 2$ and $b_i = 3$ if $a_{ii} = 2$ for $i \in \Bbb N$. Then $y \in (0,1)$, but $y \neq x_i$ for all $i \in \Bbb N$. This contradiction proves that $(0,1)$ is uncountable.

What I'm saying is: let $b_i = a_{ii} + 1$. Then $y$ will differ from $x_1$ in the first decimal position, from $x_2$ in the second decimal position,... and from $x_n$ in the $n^{th}$ decimal position. Hence, the list $\{ x_1, x_2,... \}$ can never be an exhaustive list and so $(0,1)$ is uncountable.

Just want to know what you think. Thanks.

• What happens when $a_{ii} = 9$? Though, in general, the proof could just say $b_i \neq a_{ii}$. – Karolis Juodelė May 28 '14 at 15:40
• @KarolisJuodelė Ok, so if I say that then, the proof is acceptable? – Crockett May 28 '14 at 15:42
• $\Bbb R$ is a complete metric space, hence uncountable by Baire's theorem. – Pedro Tamaroff May 28 '14 at 16:40

Your proof doesn't work in case $a_{ii} = 9$. Perhaps you could include $b_i = 0$ if $a_{ii} = 9$. The reason the instructor used $2$ and $3$ is to avoid the problem of having an infinite string of $9$'s. To guarantee that each $x$ has a unique representation, you can require that each sequence $\{a_{ij}\}$ does not end in an infinite string of $9$'s. If it did, you could represent the number $x$ with a finite decimal representation. The problem is that the number you construct using the $b_i$ could end in an infinite string of $9$'s, which will have a different representation from every other number in the list but could in fact equal one of those numbers. For instance, you might construct $0.4999999\ldots$, which already appeared in the list as $0.5000000\ldots$. Avoiding $9$'s altogether circumvents this problem.
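To see the lecturer's $2/3$ rule in action, here is a tiny illustration of my own (not part of the original thread). It only runs through finitely many listed expansions, whereas the proof applies the rule to an infinite list, but it shows how the constructed $y$ differs from every listed number on the diagonal while never producing the digit 9.

```python
def diagonal(expansions):
    """Apply the 2/3 rule to the diagonal digits of a list of decimal
    expansions, each given as a string of digits after the decimal point."""
    digits = []
    for i, x in enumerate(expansions):
        a_ii = int(x[i])                      # i-th digit of the i-th number
        digits.append('3' if a_ii == 2 else '2')
    return '0.' + ''.join(digits)

xs = ['500000', '141592', '712281', '999999', '222222', '123456']
y = diagonal(xs)
print(y)  # differs from the k-th listed expansion in its k-th digit
```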
# American Institute of Mathematical Sciences

doi: 10.3934/ipi.2021020

## On the identification of the nonlinearity parameter in the Westervelt equation from boundary measurements

1 Department of Mathematics, Alpen-Adria-Universität Klagenfurt, 9020 Klagenfurt, Austria
2 Department of Mathematics, Texas A&M University, Texas 77843, USA
* Corresponding author: Barbara Kaltenbacher

Received August 2020. Revised November 2020. Published February 2021.

Fund Project: Supported by the Austrian Science Fund FWF under grant P30054 and the National Science Foundation through award DMS-1620138.

We consider an undetermined coefficient inverse problem for a nonlinear partial differential equation occurring in high intensity ultrasound propagation as used in acoustic tomography. In particular, we investigate the recovery of the nonlinearity coefficient, commonly labeled as $B/A$ in the literature, which is part of a space dependent coefficient $\kappa$ in the Westervelt equation governing nonlinear acoustics. Corresponding to the typical measurement setup, the overposed data consists of time trace measurements on some zero or one dimensional set $\Sigma$ representing the receiving transducer array. After an analysis of the map from $\kappa$ to the overposed data, we show injectivity of its linearisation and use this as motivation for several iterative schemes to recover $\kappa$. Numerical simulations are also shown to illustrate the efficiency of the methods.

Citation: Barbara Kaltenbacher, William Rundell. On the identification of the nonlinearity parameter in the Westervelt equation from boundary measurements. Inverse Problems & Imaging, doi: 10.3934/ipi.2021020

Figure captions:
• The surface $\Sigma$
• Reconstructions of a smooth $\kappa(x)$ from time trace data at $x = 1$ under 0.1% (left) and 1% (right) noise using Newton's method
• Reconstructions of piecewise linear $\kappa(x)$ from time trace data at $x = 1$ under 0.1% (left) and 1% (right) noise using Newton's method
• Reconstructions of a piecewise constant $\kappa(x)$ from time trace data at $x = 1$ under $0.1\%$ noise using Newton iteration
• Comparison of Newton (in red) and Halley (in blue) final reconstructions under $0.1\%$ noise
• Comparison of Newton (in red) and Halley (in blue) final reconstructions and norm differences of the $n^{\rm th}$ iterate $\kappa_n$ and the actual $\kappa$. Noise level was $1\%$
• Reconstructions of a piecewise linear $\kappa(x)$ from time trace data at $x = 1$ under $1\%$ noise using Landweber iteration
• The leftmost figure shows reconstructions of $\kappa(x)$ under $0.1\%$ noise using Landweber iteration. The rightmost figure shows the decay of the norm $\kappa_n(x)-\kappa_{\rm act}(x)$
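The captions above refer to Newton, Halley and Landweber iterations for recovering $\kappa$ from the time-trace data. The paper's schemes involve the full Westervelt forward solver; purely as a generic illustration of the simplest of them, here is a Landweber iteration for a discretised linearised problem $Ax = y$. The toy operator, step size and noise level below are placeholders of mine, not taken from the paper.

```python
import numpy as np

def landweber(A, y, omega, n_iter, x0=None):
    """Generic Landweber iteration: x_{k+1} = x_k + omega * A^T (y - A x_k)."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        x = x + omega * A.T @ (y - A @ x)
    return x

# Toy ill-conditioned forward operator (crude integration) with noisy data.
rng = np.random.default_rng(0)
n = 50
A = np.tril(np.ones((n, n))) / n
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = A @ x_true + 1e-3 * rng.standard_normal(n)
omega = 1.0 / np.linalg.norm(A, 2) ** 2        # step size below 2 / ||A||^2
x_rec = landweber(A, y, omega, n_iter=500)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```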
# Math has to be meaningful, or why do it? By Murray Bourne, 26 Jan 2008 I recently received the following email from an adult reader of my Interactive Mathematics site. She has an interesting story about how she's bravely trying to figure out higher math all by herself. Math students rarely see any connection to the real world, so it is not surprising that many believe they are learning it just to pass a test. Over to the letter... I'm currently teaching at a Kindergarten. I like interacting with little kids, teaching them alphabets and nursery rhyme. However, the prospects are rather bleak, prompting me to reconsider my future. I'm planning to take GCE A-level [a pre-university course]. Thing is, mathematics seems to be compulsory and I'm hopeless at it. It's the bane of my life really. Never done well and never motivated. My past Mathematic teachers' teaching methods were rather uninspiring and I must admit that my foundation in math is pretty shaky. I took O-level Math [Grade 10 level], failed, retook it and got a B3, self studying. Than, I ventured into an unknown territory called Calculus. I didn't take tuition because I've had enough of conventional teaching methods. I took it upon myself to research and flipped through countless reference books, including "Calculus for Dummies", "How to ace your calculus, the streetwise guide","Math student surviving guide" etc...(You have no idea how desperate I am). On the internet, I've chanced upon 'Paul's Online Math notes', Math SOS and the like. These webmaster are well intentioned but I feel like being sucked into a black hole. Ya, I don't understand the context. Problem is, conventional ways of teaching mathematic did nothing for me. For example, while learning differential, the opening chapter will start with "limits and continuity",then "derivative". My usual questions will always be what is this for? Why do we need to learn this? How do we apply it? Many assessment books, reference book always start with a short and concise introduction but are usually too short for me to grasp the idea. For two years, I've been reading up and I can understand the idea and even apply what I've learn. I can find the derivative using chain rule, quotient rule. Find integral using 'integrating by part' method and 'substitution'. Still, i didn't do well for the exam. Now, I'm seriously doubting my ability and take the easy way out; Give up. It was not until I discover your site that ignited my fighting spirit and retake Math. The way you explain each subject and illustrate the application in the real world is indeed eye opening and fascinating. It was then that I realise that I've been learning math in a wrong and 'disrespectful' way. Yes, I do practices, but I'm not solving problem. I'm imitating the example given, and when they remodel the question a bit and I'm lost. I enjoy every feature of your site especially mini lectures. Have you consider mini lecture for differential equation? Also, incorperating games is another delightful idea, though I can't say so for your choice of music. Haha. Just kidding. I really appreciate your effort and other webmasters who dedicate their time in ending my Math drought. Thank you. I'm glad that she found some meaning in the applications. A large part of her positive reaction to the site is her 'readiness' to learn at the time she found my site. She already had a lot of questions in her head and fortunately, she found many of the answers that she was looking for. 
And those answers were as much about finding the meaning behind the math as they were about understanding the math. And I'm especially glad that the site has resulted in this outcome: "...ignited my fighting spirit [to] retake Math." About the music - it is supplied by Last.fm on a random basis. Mostly it's good, but sometimes the tracks they choose are iffy. 🙂 In a later mail, she went on to say: I forgot to mention, your newsletter is another great help. It further enhances and enriches my experience. I thank the reader for sharing her story. ### 7 Comments on “Math has to be meaningful, or why do it?” 1. Steven says: Great letter, Zac. I liked >> what is this for? Why do we need to learn this? How do we apply it? The search for meaning in math goes on and on. Question: Do the teachers know the answers to these questions? 2. mike says: My teacher in senior school never told us why we were doing things. It was "learn it for your exam" and that was the best we could get from her. Even the applications she couldn't do (she kept messing up). I now enjoy playing around with math, but I didn't when I was at school. Maybe I can see what it is for, now. 3. Samantha says: Thanks for the comment on students2.0! To hear you say that "For most people (post-school), mathematics is no more than a tool for problem-solving" is quite disheartening, because I now have such a passion for it, but I do agree with you. Most people never get past the "why?"s of it all. and what are my views of technology in mathematics? I believe strongly in the rule of "too much of a good thing..." especially in this context. As a teaching tool, I'm all for it. I believe that students must be engaged so they may understand completely, and teachers do their best to incorporate that in a variety of ways. (My current math teacher just sent me on a quest to discover various mathematical concepts utilized in the Great Pyramids, and it has given rise to wonderful discussions.) Thank you for this post. I'm glad to see many relate to it. 4. laurent says: I really agree with the student who said she could not understand why we study certain things in mathematics; talk less of their applications to real world problems. Even at university level many do not see the possibility of applying maths in life. Actually your way of introducing mathematical problems really helps to situate someone reading the article. Atleast when someone knows why we have to do certain things in the problem he/she can follow to end even if he/she hates maths. It is really interesting in this site. Thanks alot. 5. Murray says: I often comment on the evils of teaching 'math for math's sake". Sure, there are lots of people who enjoy dabbling in algebra and they don't worry about whether it has application or not. But that is a small proportion of the community. There is a much bigger proportion who need to see why they are learning a thing - or they will immediately forget it. All the best with your studies. 6. Denhen says: There's a book I have called the Manga Guide to Calculus which aims at the applications of calculus as well as the study of it. I was fascinated how calculus is so very much incorporated into our day to day life as the book shows. 
I mean, the main character was to start work at a newsagent's and the first thing the director asked was if she had done any calculus before. Calculus, I think, is a fundamental tool for understanding all aspects of our world, since our world essentially is a multitude of functions (some natural, some models made artificially), whether it's the typical roller coaster calculus, or the relationship between TV advertising time and profits, or the life span of a human being in relation to their mathematical coordinate in the world. A lecturer then said that because of this, calculus is not a niche subject but a very broad and overwhelmingly significant tool. Anyhow, other branches of maths are equally interesting; maybe you can try Hardy's A Mathematician's Apology, which is a short book, but in it Hardy, a number theorist, defends his subject as the ultimate route to truth and the pinnacle of human curiosity.

7. Murray says: Thanks Denhen. And here's another view about the most useful field of math.
time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output

AquaMoon has a string $a$ consisting of only $0$ and $1$. She wants to add $+$ and $-$ between all pairs of consecutive positions to make the absolute value of the resulting expression as small as possible. Can you help her?

Input

The first line contains a single integer $t$ ($1 \leq t \leq 2\,000$) – the number of test cases. The description of test cases follows.

The first line of each test case contains a single integer $n$ ($2 \leq n \leq 100$) — the length of $a$.

The second line of each test case contains a string $a$ of length $n$, consisting of only $0$ and $1$.

Output

For each test case, output a string of length $n - 1$ consisting of $-$ and $+$ on a separate line. If there is more than one assignment of signs that produces the smallest possible absolute value, any of them is accepted.

Example

Input
3
2
11
5
01101
5
10001

Output
-
+-++
+++-

Note

In the first test case, we can get the expression $1 - 1 = 0$, with absolute value $0$.

In the second test case, we can get the expression $0 + 1 - 1 + 0 + 1 = 1$, with absolute value $1$.

In the third test case, we can get the expression $1 + 0 + 0 + 0 - 1 = 0$, with absolute value $0$.
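One simple strategy (a sketch of mine, not an official editorial solution): keep a running value of the expression and, for each new digit, pick the sign that moves the running value toward zero. Since the digits are only 0 and 1, the running value then always stays in {0, 1}, which matches the optimum (0 when the number of 1s is even, 1 otherwise). It reproduces the sample outputs above.

```python
import sys

def solve(a: str) -> str:
    """Greedily choose each sign so the running value stays as close to 0 as possible."""
    total = int(a[0])
    signs = []
    for ch in a[1:]:
        d = int(ch)
        if total > 0:
            signs.append('-')
            total -= d
        else:
            signs.append('+')
            total += d
    return ''.join(signs)

def main():
    data = sys.stdin.read().split()
    t = int(data[0])
    out = []
    pos = 1
    for _ in range(t):
        a = data[pos + 1]          # data[pos] is n, which we do not need
        pos += 2
        out.append(solve(a))
    print('\n'.join(out))

if __name__ == '__main__':
    main()
```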
tech-userlevel archive

# Re: proplib and the jet age

On Sat, 5 Jan 2013 00:25:19 +0000 David Holland <dholland-tech%netbsd.org@localhost> wrote:

> If you're talking about using Lua syntax as the transfer format, I
> don't see any advantage over e.g. JSON and the disadvantage of being
> much less widely used elsewhere.
>
> If you're talking about using the Lua interpreter to receive data by
> executing incoming data as code and dumping out the Lua tables found
> in the interpreter afterwards... in addition to the obvious hazards of
> making things executable that shouldn't be, this is a very expensive
> and ass-backwards way to avoid writing 500 lines of parsing code, and
> I don't see the point.

Seconded. I like Lua. I used it at work for some prototyping stuff. This also included using the Lua C API. It is quite cumbersome to use a full-blown script interpreter API only to extract some data values.

Another point: if configuration is kept in Lua you _have_ to mangle it through the Lua interpreter. There is no other way of retrieving the data than executing the script and extracting the data from the Lua state afterwards. There is no way to process the configuration Lua script in e.g. Perl, Tcl, Java, Lisp, Haskell, Fortran, ... just by parsing. You have to bind the Lua C interpreter to that foreign language, or you have to reimplement the Lua language in that non-C language. Sorry, I am not going to buy this.

At work we use JSON and have had good experience with it. It supports just the right value types (number, string, bool, nil), and these can be combined into arbitrarily complex objects by enumeration (JSON-Object) and iteration (JSON-Array). There are good JSON libraries for a gazillion different programming languages, including bash...

--
\end{Jochen} \ref{http://www.unixag-kl.fh-kl.de/~jkunz/}
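To make the point concrete, here is a small illustration of my own (the keys are made up, not from the thread): a JSON configuration is read by plain parsing, with no interpreter state and no code execution, and the same file could equally be parsed from Perl, Java or a shell tool.

```python
import json

config_text = """
{
  "hostname": "example",
  "max_connections": 32,
  "use_tls": true,
  "fallback": null,
  "listen": [{"addr": "127.0.0.1", "port": 8080}]
}
"""

config = json.loads(config_text)      # plain parsing, nothing is executed
print(config["listen"][0]["port"])    # -> 8080
```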
########.....######..#..###
#......#######....#..#..#.#
#.##......#...#####..#..###
#..#####..#....#..#######.#
#......#...#####.....##...#
#..###.#...#...###...#..###
##########.#..#..##..#.##.#
..#......#.######.#..#.#.#.
..#......#.#..#.#.#..#.#.#.
..######.###..##..#########

This is the road with dead ends labelled with the letter X:

########.....######..X..###
#......#######....#..X..#.#
#.XX......X...X####..X..###
#..XXXXX..X....#..#######.#
#......X...#####.....##...#
#..###.X...#...###...#..###
##########.#..X..##..#.##.X
..X......#.#XXXXX.#..#.#.X.
..X......#.#..X.X.#..#.#.X.
..XXXXXX.###..XX..######XXX

A dead end is defined as any road tile that borders n other road tiles, at least n-1 of which are considered dead ends already by this rule. "Bordering" is in the four cardinal directions, so tiles bordering diagonally don't count. This rule is applied repeatedly, as newly created dead ends can, themselves, create more dead ends. Also note that any road tile that borders only one other road tile is considered a dead end the first time the rule is applied.

Input and output may be either a single string (with lines separated by any character that is not # or .) or an array/list/etc. If your language supports it, you may also take input with each line being a function argument. You may assume the following about the input:

• There will always be at least one "loop"—that is, a group of # characters that can be followed infinitely. (Otherwise every single tile would become a dead end.)
• This implies that the input will always be 2×2 or larger, since the smallest loop is:

##
##

(Which, incidentally, should be output with no change.)
• All # characters will be connected. That is, if you were to perform a flood fill on any #, all of them would be affected.

Since this is code-golf, the shortest code in bytes will win. The example above and the tiny 2×2 grid can be used as test cases (there aren't a lot of edge cases to cover in this challenge).

CJam, 61 bytes

q_N/{{0f+zW%}4*3ew:z3few::z{e__4=_@1>2%'#e=*"#"='X@?}f%}@,*N*

Try it here.

Explanation

Outline:

{ }@,*        Perform some operation as many times as there are bytes
N*            Join lines

Operation:

{0f+zW%}4*    Box the maze with zeroes
3ew:z3few::z  Mystical 4D array neighborhood magic. (Think: a 2D array of little 3x3 neighborhood arrays.)
{ }f%         For each neighborhood, make a new char:
e_            Flatten the neighborhood
_4=_          Get the center tile, C
@1>2%         Get the surrounding tiles
*             Repeat char C n times
"#"=          Is it "#"? (i.e., C = '# and n = 1)
'X@?          Then this becomes an 'X, else keep C.

(Martin saved two bytes, thanks!)

• That's one of the longest CJam answers I have ever seen. =) – James Feb 2 '16 at 3:20
• @DJMcGoathem Ummm... – Martin Ender Feb 2 '16 at 7:33
• Are '# and "#" different in CJam? – ETHproductions Feb 2 '16 at 15:47
• Yep, they are. "#" is equal to ['#]. – Lynn Feb 2 '16 at 17:34

JavaScript (ES6), 110 109 bytes

r=>[...r].map(_=>r=r.replace(g=/#/g,(_,i)=>(r[i+1]+r[i-1]+r[i+l]+r[i-l]).match(g)[1]||"X"),l=~r.search )&&r

1 byte saved thanks to @edc65!

Explanation

Very simple approach to the problem. Searches for each #, and if there are less than 2 #s around it, replaces it with an X. Repeats this process many times until it's guaranteed all the dead-ends have been replaced with Xs.
var solution = r=> [...r].map(_=> // repeat r.length times to guarantee completeness r=r.replace(g=/#/g,(_,i)=> // search for each # at index i, update r once done (r[i+1]+r[i-1]+r[i+l]+r[i-l]) // create a string of each character adjacent to i .match(g) // get an array of all # matches in the string [1] // if element 1 is set, return # (the match is a #) ||"X" // else if element 1 is undefined, return X ), l=~r.search // l = line length ) &&r // return the updated r <textarea id="input" rows="10" cols="40">########.....######..#..### #......#######....#..#..#.# #.##......#...#####..#..### #..#####..#....#..#######.# #......#...#####.....##...# #..###.#...#...###...#..### ##########.#..#..##..#.##.# ..#......#.######.#..#.#.#. ..#......#.#..#.#.#..#.#.#. ..######.###..##..#########</textarea><br> <button onclick="result.textContent=solution(input.value)">Go</button> <pre id="result"></pre> • Common trick that I use always for this type of task. As you use both l and -l in the same way, you can compute l=~r.search instead of l=1+r.search. (Just 1 byte saved) – edc65 Feb 2 '16 at 22:00 • @edc65 Clever. Thanks! – user81655 Feb 2 '16 at 22:51 Python (3.5) 362331329 314 bytes thanks to @Alissa. she helps me to win ~33 bytes d='.' r=range def f(s): t=[list(d+i+d)for i in s.split()] c=len(t[0]) u=[[d]*c] t=u+t+u l=len(t) g=lambda h,x:t[h][x]=='#' for k in r(l*c): for h in r(1,l): for x in r(1,c): if g(h,x) and g(h+1,x)+g(h-1,x)+g(h,x+1)+g(h,x-1)<2: t[h][x]='X' print('\n'.join([''.join(i[1:-1])for i in t][1:-1])) Explanations d='.' r=range Function definition def f(s): Add a border of '.' on right and left of the board t=[list(d+i+d)for i in s.split()] c=len(t[0]) u=[[d]*c] Add a border of '.' on top and bottom t=u+t+u l=len(t) Lambda function to test '#' g=lambda h,x:t[h][x]=='#' Loop on the input length to be sure we don't forget dead ends for k in r(l*c): Loop on columns and lines for h in r(1,l): for x in r(1,c): Test if we have '#' around and on the position if g(h,x) and g(h+1,x)+g(h-1,x)+g(h,x+1)+g(h,x-1)<2: Replace '#' by 'X' t[h][x]='X' Crop the border filled with '.' and join in string print('\n'.join([''.join(i[1:-1])for i in t][1:-1])) Usage f("########.....######..#..###\n#......#######....#..#..#.#\n#.##......#...#####..#..###\n#..#####..#....#..#######.#\n#......#...#####.....##...#\n#..###.#...#...###...#..###\n##########.#..#..##..#.##.#\n..#......#.######.#..#.#.#.\n..#......#.#..#.#.#..#.#.#.\n..######.###..##..#########") ########.....######..X..### #......#######....#..X..#.# #.XX......X...X####..X..### #..XXXXX..X....#..#######.# #......X...#####.....##...# #..###.X...#...###...#..### ##########.#..X..##..#.##.X ..X......#.#XXXXX.#..#.#.X. ..X......#.#..X.X.#..#.#.X. ..XXXXXX.###..XX..######XXX • 1) use split() instead of splitlines(). 2) t=['.'*(c+2)]+['.'+i+'.'for i in s]+['.'*(c+2)] is shorter. And it can be shortened even more: d='.';t=[d*c]+t+[d*c];t=[d+i+d for i in t] 3) you don't need all the list(zip(....)) thing, use print('\n'.join([''.join(i[1:-1])for i in t]) – Alissa Feb 3 '16 at 10:40 • @Alissa thanks for your help i use your tips for point 1) and 3) but for the 2) i can't remove all bracket, we need a list of list of char and not a list of string because 'str' object does not support item assignment. the list of list allows me to use t[h][x]='X' – Erwan Feb 3 '16 at 12:29 • sorry, I missed the thing about string immutability. You can also move all constants (r, g and d) out of your function (saves you some tabulation). 
Maybe some playing around with split() might help: t=[d+list(i)+d for i in s.split()], then calculate lengths, then add dot-lines to the end and to the beginning, and then alter your cycles to work with these extended lengths. Not sure if it will shorten the code, but it might – Alissa Feb 3 '16 at 13:54
• @Alissa I can't move g out of the function because it uses t. I'll test your other comment – Erwan Feb 3 '16 at 14:18
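For readers who want a plain reference implementation rather than golfed code, here is an ungolfed sketch of my own (not one of the posted answers). It repeats the "fewer than two road neighbours" pass until nothing changes, which is the same fixed point the answers above converge to.

```python
def mark_dead_ends(road: str) -> str:
    grid = [list(row) for row in road.splitlines()]
    h, w = len(grid), len(grid[0])

    def road_neighbours(r, c):
        count = 0
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == '#':
                count += 1
        return count

    changed = True
    while changed:
        changed = False
        for r in range(h):
            for c in range(w):
                if grid[r][c] == '#' and road_neighbours(r, c) < 2:
                    grid[r][c] = 'X'      # a dead end no longer counts as road
                    changed = True
    return '\n'.join(''.join(row) for row in grid)

print(mark_dead_ends("##\n##"))  # the 2x2 loop comes back unchanged
```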
# Tutorial Description¶ The goal of the experiment is to build a model to simulate a simple knee reflex. A knee reflex tries to compensate for a sudden muscle stretch by activating the muscle under stretch appropriately based on its’ muscle stretching speed to compensate the stretch. To achieve this we will be using the NRP-OpenSim interface. The first step would be to clone the experiment “Opensim Muscle Tutorial - Knee Jerk Experiment”. After cloning click on the experiment and click files to view all the simulation files that we are going to be editing. The file explorer can be used to download and upload all the files. # Model Creation¶ In order to create a musculoskeletal model in NRP, it involves two stages: 1. Gazebo Modeling : Physical Model 2. Opensim Modeling : Muscle Model ## Gazebo Modeling¶ The model used in this experiment is a simple two link thigh and shank bodies with one rotational degree of freedom to represent the knee joint. The properties of the model are as described below, Body Length(m) Mass(kg) Inertia(kg-m2) thigh 0.5 5 1 shank 0.5 5 1 With the properties described in the above table, one can setup up the basic sdf model. In order to attach and make the opensim plugin to work in NRP, the following tag needs to be added in the sdf file. You can download and edit the model.sdf file from the robot folder through the file explorer. <muscles>model://opensim_knee_jerk_tut/muscles.osim</muscles> <plugin name="muscle_interface_plugin" filename="libgazebo_ros_muscle_interface.so"></plugin> The muscles tag points to the location where the description of muscles is setup. More details on writing this file can be found in the following section. The plugin is needed to link the necessary libraries that interface NRP with Opensim functionality. ## Opensim Modeling¶ The next step is to describe the muscles in the model. This is achieved by writing a *.osim file. This file follows the same syntax as described by the standard opensim osim file description format. For more details checkout the link <https://simtk-confluence.stanford.edu/display/OpenSim/OpenSim+Models> __ under The Muscle Actuator subheading. The osim file should contain only the muscles description and the wrapping objects. The bodies/links will be used from the model description defined in sdf file earlier. Hence make sure to use the same names for the bodies in both files In the current tutorial, a single muscle named vastus is used with the following properties, Property Value Units Isometric Force 500 N Optimal Fiber Length 0.19 m Tendon Slack Length 0.19 m Origin [0. 0.25 0.05] m Insertion [0. 
0.08 -0.15] m The above table translated to the osim file looks like, <Millard2012EquilibriumMuscle name="vastus"> <!--The set of points defining the path of the actuator.--> <GeometryPath> <!--The set of points defining the path--> <PathPointSet> <objects> <PathPoint name="origin"> <body>thigh</body> <!--The fixed location of the path point expressed in its parent frame.--> <location>0.050000000000000003 0 0</location> </PathPoint> <PathPoint name="insertion"> <body>shank</body> <!--The fixed location of the path point expressed in its parent frame.--> <location>0.037499999999999999 0.17499999999999999 0</location> </PathPoint> </objects> <groups /> </PathWrap> </objects> <groups /> </Appearance> </GeometryPath> <!--Maximum isometric force that the fibers can generate--> <max_isometric_force>500</max_isometric_force> <!--Optimal length of the muscle fibers--> <optimal_fiber_length>0.19</optimal_fiber_length> <!--Resting length of the tendon--> <tendon_slack_length>0.19</tendon_slack_length> </Millard2012EquilibriumMuscle> Note: The above snippet is only an example. Do not copy and paste The muscles.osim file used for this tutorial also describes a muscle wrapping object. This constraint makes sure that the muscle does not penetrate the bones during the motion of the joint. # Gazebo-ROS-OpenSim Inerface¶ Once you have setup the models using the above described steps, you should be able to create new experiments with the usual NRP procedure to create a model. Assuming you are familiar with the process, we continue the tutorial. In order to be able to write controllers and access the muscles in the simulation, there exists a set of muscle topics and messages that can be used. ## Subscribers¶ The states of the muscles initialized and described in the *.osim(muscles.osim) is automatically published on a ros topic with the name /gazebo_muscle_interface/robot/muscle_states The above topic uses the ros-msg type MuscleStates which is an array containing MuscleState whose format which looks like, Type Name string name float32 force float32 length float32 lengthening_speed geometry_msgs/Vector3[] path_points ## Publishers¶ To control the muscle state, the muscle activation needs to be set by the controller. During initialization every muscle described in the *.osim(muscles.osim) is generated with a individual ros-publisher of the topic, /gazebo_muscle_interface/robot/MUSCLE_NAME/cmd_activation The above topic accepts messages of type Float64. # Reflex-Control¶ Now that the full experimental model is setup, we can develop the controller to simulate the knee reflex. # Muscle Properties m_optimal_fiber_length = 0.19 m_max_contraction_velocity = 10.0 # Get muscle state muscle_states =dict((m.name, m) for m in muscle_states_msg.value.muscles) # Muscle Lengthening speed m_speed = muscle_states['vastus'].lengthening_speed # Maximum muscle speed m_max_speed = m_optimal_fiber_length*m_max_contraction_velocity #: Knee jerk reflex control # Reflex gain reflex_gain = 2. m_reflex_activation = min(1., 0.2*reflex_gain*(abs(m_speed) + m_speed)/m_max_speed) # Send muscle activation knee_jerk.send_message(m_reflex_activation)
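For readers who want to exercise these topics outside the NRP transfer-function framework (where muscle_states_msg and knee_jerk are injected for you), the following is a minimal standalone rospy sketch, not part of the tutorial. The topic names and message fields are taken from the tables above; the Python import path of MuscleStates is an assumption and should be checked against the message package that ships with the gazebo_ros_muscle_interface plugin.

#!/usr/bin/env python
# Standalone sketch (assumed import path for MuscleStates; verify locally).
import rospy
from std_msgs.msg import Float64
from gazebo_ros_muscle_interface.msg import MuscleStates  # assumption

def on_muscle_states(msg):
    # Index muscles by name, as in the tutorial's transfer function
    states = dict((m.name, m) for m in msg.muscles)
    vastus = states.get('vastus')
    if vastus is None:
        return
    # Same reflex rule as above: activate only while the muscle is lengthening
    m_max_speed = 0.19 * 10.0  # optimal fiber length * max contraction velocity
    activation = min(1.0, 0.2 * 2.0 * (abs(vastus.lengthening_speed) + vastus.lengthening_speed) / m_max_speed)
    pub.publish(Float64(activation))

rospy.init_node('knee_jerk_reflex_sketch')
pub = rospy.Publisher('/gazebo_muscle_interface/robot/vastus/cmd_activation', Float64, queue_size=1)
rospy.Subscriber('/gazebo_muscle_interface/robot/muscle_states', MuscleStates, on_muscle_states)
rospy.spin()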
location: Publications → journals

Search results
Search: All articles in the CMB digital archive with keyword modulus of continuity
Results 1 - 1 of 1

1. CMB 2007 (vol 50 pp. 434)
Özarslan, M. Ali; Duman, Oktay
MKZ Type Operators Providing a Better Estimation on $[1/2,1)$

In the present paper, we introduce a modification of the Meyer-K\"{o}nig and Zeller (MKZ) operators which preserve the test functions $f_{0}(x)=1$ and $f_{2}(x)=x^{2}$, and we show that this modification provides a better estimation than the classical MKZ operators on the interval $[\frac{1}{2},1)$ with respect to the modulus of continuity and the Lipschitz class functionals. Furthermore, we present the $r$-th order generalization of our operators and study their approximation properties.

Keywords: Meyer-König and Zeller operators, Korovkin type approximation theorem, modulus of continuity, Lipschitz class functionals
Categories: 41A25, 41A36
pe_coupling - Coupled fluid-particle simulations with LBM and PE # Which coupling techniques are available in waLBerla? This module provides algorithms that allow coupled simulations of LBM for the fluid part and rigid particles from the PE module. This coupling can be achieved in various ways, depending on whether the particles should be fully resolved by the fluid flow around it (as in a direct numerical simulation) or if modelling assumptions are introduced that allow for unresolved particles. The following coupling techniques are provided by the framework: ## Momentum exchange method This is a coupling method for geometrically fully resolved simulations, i.e. the numerical resolution is significantly finer than the size of the particle. It is based on the work of [16] and [1]. It explicitly maps the particles into the fluid domain by flagging the containing cells as "obstacle". The coupling is then established by a velocity boundary condition for LBM along the particles' surface and by calculating the hydrodynamic force acting on the particles. This is the main coupling technique that we are using in waLBerla. See [22] for more infos and benchmark tests. See [25] for a large-scale application of this coupling. ## Partially saturated cells method This is a second option for fully resolved coupled simulations and is based on the work of [20]. It uses a solid volume fraction field that stores the occupancy of the fluid cells by the solid phase. This scalar value is then used in a special LBM collision operator that models the interaction by a weighting between the fluid collision and the solid collision. This collision step results in the interaction forces that are then set onto the particles. ## Discrete particle simulations This is a completely different class of methods that uses particles that are smaller than an LBM cell. This necessitates the introduction of models for the fluid-particle interaction forces, where the drag and pressure forces are usually the most important ones. In short, fluid quantities like velocity are interpolated to the particle positions and used there to evaluate the empirical formulas for the interaction forces. This forces are then applied to the particles and the corresponding reaction-force is distributed back to the surrounding fluid cells. For denser systems, the equations of fluid motion have to be altered to incorporate displacement effects by the solid phase, yielding the volume-averaged Navier-Stokes equations. For higher Reynolds number flows, also turbulence models might become necessary since the resolution for the fluid motion is usually very coarse. The results are highly dependent on the included models and extensive pre-studies have to be undertaken to incorporate all important effects. The current implementation provides a variety of the mentioned functionalities. # How do I set up a simulation with the momentum exchange method? Since it is a coupled simulation, you first of all need both, the LBM parts and the PE parts. So also the restrictions coming from these modules apply (like number of blocks in periodic setups with particles, see Tutorial - Confined Gas: Setting up a simple pe simulation). Attention Make sure that the PE simulation (e.g. the material parameters) is set up with the same units as the LBM simulation, i.e. we usually use lattice units everywhere. This means e.g. that the density of the particles is given as the density ratio. We only focus on the additional coupling part here. 
The functionalities for that are found in src/pe_coupling, and the tests in tests/pe_coupling. A good starting point is tests/pe_coupling/momentum_exchange_method/SegreSilberbergMEM.cpp as it contains a lot of the mentioned functionality. ## Mapping of the particles Every PE particle can be mapped into the domain as long as its type has the pe::RigidBody::containsPoint() member function implemented. With this mechanism, also a object from the mesh module can be mapped when seen as a mesh::pe::ConvexPolyhedron. Generally, the mapping sets the cells with the cell center contained inside the particle to a given flag that signals that it is not fluid. ### Initial mapping Before the simulation starts, the particle have to be mapped into the domain. Two mechanisms / functionalities are provided: mapBodies() and mapMovingBodies(). The mapBodies() functionality is used to set a constant boundary condition (like lbm::NoSlip), i.e. one does not change in time. With this, e.g. bounding planes can be mapped to set up the outer boundaries of the fluid domain. The mapMovingBodies() functionality is similar but needs an additional data structure, called the body field, which stores a pointer (pe::BodyID) inside each flagged cell to the containing particle. In this way, e.g. the particle corresponding to a specific boundary cell can be found so that its local surface velocity can be obtained and hydrodynamic forces are applied to the correct particle This is the functionality that is needed for moving particles in your coupled simulation. Make sure that you map the particle with ONLY ONE of them to avoid overwriting of the already set flags. The mapping functions can be given selector functors that allow you to specify rules on which particles should be mapped. Several can be found in src/pe_coupling/utility/BodySelectorFunctions.h, like pe_coupling::selectAllBodies(). Attention The mapping has to be done also in the ghost layers of the flag field since the flag field is not communicated. This has some important consequences: The default synchronization routines of the PE (e.g. pe::syncNextNeighbors() or pe::syncShadowOwners(), see [7]) would check whether a particle intersects with the axis-aligned bounding box (AABB) of a block. If this is not the case, the particle is removed from the block-local data structure and thus can no longer be accessed. Since the ghost layer is outside of the block's AABB, the mapping would thus not work. Therefore we enlarge the block's AABB artificially in the PE synchronization routines to circumvent this issue. This is done by the overlap parameter that is passed to the PE's sync routine. For only the mapping to be correct, an overlap of half a cell length would be sufficient, if using a single ghost layer. Since we will need the particle information a bit longer than that, we typically use a value of $1.5$ here, see Reconstructing missing fluid information in uncovered cells for more infos. ### Update mapping during simulation For the moving particles, the pe_coupling::BodyMapping class is available and to be used as a sweep inside the time loop to update the mapping of these particles. It additionally requires a second flag, the formerObstacle flag, whose usage will become clear in the next section. In essence, this class transforms fluid to obstacle cells if required and turns newly uncovered obstacle cells to formerObstacle cells. 
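As a concrete illustration of the containment rule behind this mapping (a plain Python sketch with made-up names, not the waLBerla C++ API, which uses mapMovingBodies() and pe::RigidBody::containsPoint()): a cell is flagged as obstacle exactly when its cell center lies inside the particle, here a sphere.

import numpy as np

def map_sphere(flags, dx, center, radius, obstacle_flag=1):
    # Flag every cell whose center lies inside the sphere (illustrative only).
    nx, ny, nz = flags.shape
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                cell_center = (np.array([i, j, k]) + 0.5) * dx
                if np.linalg.norm(cell_center - center) <= radius:
                    flags[i, j, k] = obstacle_flag
    return flags

flags = np.zeros((16, 16, 16), dtype=int)  # 0 = fluid
flags = map_sphere(flags, dx=1.0, center=np.array([8.0, 8.0, 8.0]), radius=3.0)
print(flags[:, :, 8])  # slice through the sphere's midplane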
## Reconstructing missing fluid information in uncovered cells As the particles move across the fluid mesh, cells will become covered and uncovered by the particle. Covering simply turns the former fluid cells to obstacle cells (see Update mapping during simulation). Extra work is required for cells that turn from obstacle to fluid before the simulation can continue as the information about the fluid is missing. This is where we need to use the pe_coupling::PDFReconstruction that restores the missing PDF information based on a templated reconstructor. Available reconstructors are: The two latter ones require an extrapolation direction. This can be provided in two ways: After reconstructing the PDFs in a cell, the flag is turned from formerObstacle to fluid and the pointer in the body field is invalidated. Note that this procedure is split into two separate loops (1. reconstruct PDFs in all formerObstacle cells, 2. turn all formerObstacle flags to fluid). The reason is that the reconstructors look for valid fluid cells in the vicinity of the uncovered cell, which is done based on the fluid flag, and we want to avoid that recently reconstructed values are regarded as valid data. As for the reconstruction part particle information is still required (e.g. the particle's surface velocity), we use the overlap of $1.5$ cell length to keep the particle available (see arguments in Initial mapping). This originates from $0.5$ mentioned in Initial mapping, and a totally maximal admissible position change of $1$ cell per time step, so $1.5$ in total. ## Boundary conditions along moving particles This is the part where the main coupling is happening. The fluid is affected by the moving particle via velocity boundary conditions that use the local surface velocity of the particle. As a result, the hydrodynamic force and torque acting on the particle is obtained. Both parts are computed in the boundary handling routine. Different variants of different order are available: The two latter ones require the exact position of the particle surface. See [22] for more details and comparison tests of the boundary conditions. This is calculated as a ray-particle intersection which can be done analytically for a sphere (intersectionRatioSpherePe()), ellipsoid (intersectionRatioEllipsoidPe()) or a plane (intersectionRatioPlanePe()) and is done by a expensive bisection line search in other cases (intersectionRatioBisection()). If you want to add your specialization, do this by extending the cases in the intersectionRatio() variant that takes a PE body. ## PE step Once the forces and torques on the particles have been calculated, the physics engine is used to compute the inter-particle collision forces and to update the particle's position and orientation, as well as the linear and angular velocity. This is conveniently achieved by using the pe_coupling::TimeStep class that can be inserted into the time loop. It also features a sub-stepping capability where several PE time steps with smaller time step sizes are carried out which is desirable in closely packed scenarios to avoid large overlaps of the particles, see e.g. [5]. During this sub-stepping, the hydrodynamic forces and torques are kept constant. ## Algorithm overview The typical coupled simulation looks like the following: Initially: set up data structures, map particles, set boundary conditions Time loop: 1. Communicate PDF ghost layers. 2. 
Carry out boundary handling sweep, with special coupling boundary conditions, see Boundary conditions along moving particles. This also computes the hydrodynamic force and torque on particles. 3. Carry out LBM sweep (stream & collide). 5. Carry out PE step (with possible sub-stepping), see PE step : 1. Evaluate inter-particle forces (collision, lubrication,..). 2. Synchronize and reduce forces and torques. 3. Integrate particles and reset forces and torques. 4. Synchronize particle info (velocities, position,...). 6. Update body mapping with pe_coupling::BodyMapping sweep, see Update mapping during simulation. 7. Reconstruct missing PDF information in uncovered cells with pe_coupling::PDFReconstruction sweep, see Reconstructing missing fluid information in uncovered cells. 8. (optionally) VTK output ## Other important points This is a collection of lessons learned and tips that should be kept in mind when setting up a coupled simulation: • To stabilize the coupling, we usually average the hydrodynamic forces and torques over two consecutive time steps and apply the averaged ones onto the particles, as suggested by [16]. Otherwise, oscillations in the force and thus in velocity and position can occur. • The coupling is unstable for particles with a density smaller than the fluid density (i.e. $1$). Additional stabilization techniques have to be used and implemented first to make it work for lighter particles, see e.g. [28]. • When driving a periodic flow with a body force on the fluid (to imitate a pressure driven flow), make sure to apply a corresponding buoyancy force onto the particle. This is necessary as the flow would normally feature a pressure gradient that drives the flow. But due to the artificial setup it is not present and thus the force from the pressure gradient has to be added explicitly. • Generally, when applying forces on the fluid, special care has to be taken to not violate physical principles (i.e. momentum balances) and to avoid e.g. infinitely accelerating particles. This can be achieved by applying a matching counterforce on the particles. • For accurate and stable results, the maximum particle velocity should not exceed $0.1$ in lattice units at any time (rule of thumb). More specifically, the particle velocity should be significantly smaller than the lattice speed of sound ( $1/\sqrt{3}$) to avoid compressibility effects. • The numerical resolution should be at least $10$ cells per diameter (rule of thumb) to obtain reasonable results. Larger Reynolds numbers require a finer resolution. This value depends on the physical setup at hand and also the type of the particle (sphere, ellipsoid, squirmer,...). As always, convergence studies should be carried out beforehand to ensure resolution-independent results. Only then, a direct numerical simulation is realized and the results can be trusted. • The coupling method is unable to predict the hydrodynamic forces correctly when particles are close to each other as the resolution of the gap between the particle surfaces is not fine enough in this case. This then requires to explicitly take into account the unresolved lubrication forces by applying lubrication correction forces between the particles [19]. This can be achieved with pe_coupling::LubricationCorrection. • Evaluating particle properties is best done using the provided pe::BodyIterator functionalities. 
In a parallel setup, it is important to know which iterator to use for which property as the information could be distributed over several processes that know of this particle (local and remote bodies). If you want to evaluate forces and torques, do this between the boundary handling and the PE step and use the pe::BodyIterator, as those are distributed values, followed by an reduce operation. All other properties (velocity, position) are evaluated with the pe::LocalBodyIterator to only include this information once. For examples have a look at the available coupling tests or at apps/benchmarks/MotionSingleHeavySphere/MotionSingleHeavySphere.cpp . # How does load balancing work with a coupled simulation? Since the fluid and particle part of the simulation generate different load characteristics, load balancing is non-trivial and different approaches are possible. In waLBerla, we generally want to maintain the same domain partitioning of the fluid and the particle simulation. See [24] for one possibility. The corresponding application codes can be found in apps/benchmarks/AdaptiveMeshRefinementFluidParticleCoupling/AMRSedimentSettling.cpp to see how the load balancing routines are set up. ## Files file  DragForceCorrelations.h file  LiftForceCorrelations.h file  BodyVelocityTimeDerivativeEvaluator.h file  EffectiveViscosityFieldEvaluator.h file  InteractionForceEvaluator.h file  LiftForceEvaluator.h file  LubricationForceEvaluator.h file  PressureFieldEvaluator.h file  SolidVolumeFractionFieldEvaluator.h file  VelocityCurlFieldEvaluator.h file  VelocityFieldEvaluator.h file  VelocityTotalTimeDerivativeFieldEvaluator.h file  GNSSweep.h file  GNSPressureFieldEvaluator.h file  GNSVelocityFieldEvaluator.h file  BodyVelocityInitializer.h file  CombinedReductionFieldCommunication.h file  FieldDataSwapper.h file  ForceFieldResetter.h file  PeBodyOverlapFunctions.h Implementation of BodyOverlapFunctions for the pe bodies. file  PeIntersectionRatio.h Specialization of the intersectionRatio function for certain pe bodies. file  PeIntersectionRatio.h Specialization of the intersectionRatio function for certain pe bodies. file  SphereEquivalentDiameter.h file  BodyBBMapping.cpp file  BodyBBMapping.h file  CurvedLinear.h file  SimpleBB.h file  Destroyer.h file  ExtrapolationDirectionFinder.cpp file  ExtrapolationDirectionFinder.h file  Reconstructor.h file  BodyAndVolumeFractionMapping.cpp file  BodyAndVolumeFractionMapping.h file  PSMSweep.h file  PSMUtility.h file  pecoupling_module.dox file  BodiesForceTorqueContainer.cpp file  BodiesForceTorqueContainer.h file  BodySelectorFunctions.cpp file  BodySelectorFunctions.h file  ForceTorqueOnBodiesResetter.cpp file  ForceTorqueOnBodiesResetter.h file  ForceTorqueOnBodiesScaler.cpp file  ForceTorqueOnBodiesScaler.h file  LubricationCorrection.cpp file  LubricationCorrection.h file  TimeStep.cpp ## Functions pe::Vec3 walberla::pe_coupling::LubricationCorrection::compLubricationSphrSphr (real_t gap, const pe::SphereID sphereI, const pe::SphereID sphereJ) const Computes lubrication correction force between spheres. More... pe::Vec3 walberla::pe_coupling::LubricationCorrection::compLubricationSphrPlane (real_t gap, const pe::SphereID sphereI, const pe::ConstPlaneID planeJ) const Computes lubrication correction force between sphere and wall. More... 
## Function Documentation pe::Vec3 walberla::pe_coupling::LubricationCorrection::compLubricationSphrPlane ( real_t gap, const pe::SphereID sphereI, const pe::ConstPlaneID planeJ ) const private Computes lubrication correction force between sphere and wall. Lubrication correction according to Ladd and Verberg, 2001 ("Lattice-Boltzmann Simulations of Particle-Fluid Suspensions" ) Note: Verified quantitatively by computation in spreadsheet and qualitatively by considering direction of force for example setup. pe::Vec3 walberla::pe_coupling::LubricationCorrection::compLubricationSphrSphr ( real_t gap, const pe::SphereID sphereI, const pe::SphereID sphereJ ) const private Computes lubrication correction force between spheres. Lubrication correction according to Ladd and Verberg, 2001 ("Lattice-Boltzmann Simulations of Particle-Fluid Suspensions") Note: Verified quantitatively by computation in spreadsheet and qualitatively by considering direction of force for example setup.
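For orientation only, the commonly used leading-order form of such a normal lubrication correction between two spheres looks roughly as sketched below. This is a schematic under that assumption, not the expression implemented in compLubricationSphrSphr(); the exact formula and cutoff should be taken from Ladd and Verberg (2001) and the waLBerla source.

import numpy as np

def lubrication_correction(gap, r1, r2, mu, u_rel, n_hat, h_cut):
    # Schematic correction force on sphere 1, active only for 0 < gap < h_cut.
    if gap <= 0.0 or gap >= h_cut:
        return np.zeros(3)
    u_n = np.dot(u_rel, n_hat)               # approach speed along the line of centers
    r_eff_sq = (r1 * r2 / (r1 + r2)) ** 2
    magnitude = 6.0 * np.pi * mu * r_eff_sq * (1.0 / gap - 1.0 / h_cut) * u_n
    return -magnitude * n_hat                 # opposes the relative approach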
# seriestest - Strategy for Testing Series

Strategy for Testing Series

1. If the series is of the form $\sum \frac{1}{n^p}$, it is a $p$-series, which we know to be convergent if $p > 1$ and divergent if $p \le 1$.

2. If the series has the form $\sum a r^{n-1}$ or $\sum a r^{n}$, it is a geometric series, which converges if $|r| < 1$ and diverges if $|r| \ge 1$. Some preliminary algebraic manipulation may be required to bring the series into this form.

3. If the series has a form that is similar to a $p$-series or a geometric series, then one of the comparison tests (Theorems 10, 11) should be considered. In particular, if $a_n$ is a rational function or an algebraic function of $n$ (involving roots of polynomials), then the series should be compared with a $p$-series (the value of $p$ should be chosen by keeping only the highest powers of $n$ in the numerator and denominator). The comparison tests apply only to series with positive terms. If $a_n$ has some negative terms, then we can apply the Comparison Test to $|a_n|$ and test for absolute convergence.
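A short worked example of the rule in item 3 for choosing $p$ (added for illustration, not part of the original notes): for $a_n = \frac{\sqrt{n}}{n^2+3}$, keeping only the highest powers gives $a_n \sim \frac{\sqrt{n}}{n^2} = \frac{1}{n^{3/2}}$, so we compare with the $p$-series $\sum \frac{1}{n^{3/2}}$ with $p = \frac{3}{2} > 1$. Since $0 \le a_n \le \frac{1}{n^{3/2}}$ for all $n \ge 1$, the series $\sum a_n$ converges by the Comparison Test.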
A QIG review of the paper “Entropy uncertainty and measurement reversibility” November 25, 2015 I’d like to highlght a video we’ve recently uploaded of a recent talk by Kais Abdelhkalek who presented a review and overview of the recent paper: “Entropic uncertainty and measurement reversibility” by Mario Berta, Stephanie Wehner, Mark M. Wilde, arXiv:1511.00267 A QIG review of the paper “Statistical Physics of Self-Replication” November 13, 2015 The quantum information group, Hannover, organises a regular group seminar where members have the opportunity to speak on a variety of themes from communicating early new results, to general interest topics such as reviews of interesting new papers. I’d like to highlght a video we’ve uploaded of a recent talk David Reeb gave in this seminar presenting a review of, and an overview of the content of, the fascinating recent paper “Statistical Physics of Self-Replication” by Jeremy L. England: If you’d like to see more content like this then please do hit the like button. And if you’d like to keep up to date with content from the quantum information group, Hannover, then please don’t hesitate to subscribe to our channel and my own channel. Reading group: on the average number of real roots of a random algebraic equation, M. Kac April 14, 2009 At the moment I am in between research projects: I am “working” on a bunch of old projects, some of which are many years old, but I haven’t had any new ideas for any of them in a long time and, hence, I haven’t made any progress whatsoever. At the same time I am thinking about beginning work on some new projects. Most notably, I want to spend some time understanding quantum systems with static and annealed disorder, and the connections between these systems and computational complexity. Unfortunately the literature on disordered quantum systems is vast, to say the least. Hence, I am putting off learning it. So now I am procrastinating by learning about a whole bunch of new ideas in the hope of learning something that will make the entry into the disordered systems literature a little smoother. Basically I am now going to play out my learning curve through this blog. The type of problems I eventually hope to study will be of the following form. Take some family of computational problems ${\mathcal{F}}$, and consider a random instance ${P}$ from this family. What is the running time, on average, of some quantum algorithm to solve the problem? At the same time I’d also like to consider families ${\mathcal{Q}}$ of quantum problems (eg. a family of quantum systems) and understand the running time, on average, of either classical or quantum algorithms to calculate approximations to, eg., expectation values of local observables, of a random instance. In both cases there is obviously some quantum system (i.e., the quantum computer in the first case, and the quantum system studied in the second case), and there is obviously some disorder present. The hope, broadly speaking, is to exploit the ubiquitous phenomenon of Anderson localisation to understand what happens in each case. However, except in some very special cases, the problems I want to study don’t reduce in any obvious way to systems studied in the disordered quantum systems literature (at least, not so far as I can see). 
So I’m now casting around looking for interesting stuff which might have some relevance… At the most abstract and high level all of the problems I want to eventually consider require that one understands the critical points of a random function (which is usually related to the running time). With a bit of luck one will be able write this expression as a polynomial. Hence it’d be nice to understand, say, the roots of random polynomials. Read the rest of this entry » Reading group: Quantum algorithm for the Laughlin wave function, arXiv:0902.4797 March 3, 2009 In this post I’d like to experiment by sharing my thoughts on a recent paper as I read through it critically. I’m thinking of trying to emulate something like a reading group presentation. While this isn’t original research, I’m sure that reading papers certainly does form an integral part of the workflow of any researcher: critically reading papers allows you to learn new ideas and techniques and, crucially, by asking difficult questions while reading a paper you often discover new research directions that, otherwise, would never occur to you. This has often happened to me. Indeed, the main inspiration for several of my papers has come from the critical evaluation of (and, sometimes, the resulting frustrations from reading) a recent paper. Due mostly to time constraints I don’t really read that many papers these days (I tend to skim quite a few, but actually read only a couple a year). Nevertheless, I hope to do something like this post on a semi-regular/sporadic basis. I’m going to be rather polite and not actually make any direct criticisms. I don’t see the point of being totally negative anyways: if there is a criticism it probably means there is some aspect of the paper that could be explored further. This, to my mind, equals a new research project. So, let’s be positive and consider every quibble — if there are any — as an opportunity. The paper I’d like to discuss today is the latest from José-Ignacio Latorre and coworkers. This intriguing paper discusses how quantum computers could help prepare physically interesting quantum states. As will be evident, I’m not an expert in the area of this paper and it is entirely possible that I’ll make several wrong statements. Any question I raise here is probably due to a misunderstanding on my behalf. Finally, in the true spirit of a reading group discussion, if you think you can answer any of my questions, or clarify the description anywhere then please do not hesitate to comment! Read the rest of this entry »
# nLab dual morphism ### Context #### Monoidal categories monoidal categories duality # Contents ## Idea The notion of dual morphism is the generalization to arbitrary monoidal categories of the notion of dual linear map in the category Vect of vector spaces. ## Definition ###### Definition Given a morphism $f \colon X \to Y$ between two dualizable objects in a monoidal category $(\mathcal{C}, \otimes)$, the corresponding dual morphism $f^\ast \colon Y^\ast \to X^\ast$ is the one obtained by $f$ by composing the duality unit of $X$ (the coevaluation map), the duality counit of $Y$ (the evaluation map)… ###### Remark This notion is a special case of the the notion of mate in a 2-category. Namely if $K \coloneqq \mathbf{B}_\otimes \mathcal{C}$ is the delooping 2-category of the monoidal category $(\mathcal{C}, \otimes)$, then objects of $\mathcal{C}$ correspond to morphisms of $K$, dual objects correspond to adjunctions and morphisms in $\mathcal{C}$ correspond to 2-morphisms in $K$. Under this identification a morphism $f \colon X \to Y$ in $\mathcal{C}$ may be depicted as a 2-morphism of the form $\array{ \ast &\stackrel{\mathbb{1}}{\to}& \ast \\ {}^{\mathllap{Y}}\downarrow &\swArrow_{\mathrlap{f}}& \downarrow^{\mathrlap{X}} \\ \ast &\underset{\mathbb{1}}{\to}& \ast }$ and duality on morphisms is then given by the mate bijection $\array{ \ast & \overset{\mathbb{1}}{\to} & \ast \\ {}^{\mathllap{Y}} \downarrow & \swArrow_{\mathrlap{f}} & \downarrow^{\mathrlap{X}} \\ \ast & \underset{\mathbb{1}}{\to} & \ast } \;\;\;\;\; \mapsto \;\;\;\;\; \array{ \ast & \overset{\mathbb{1}}{\to} & \ast \\ {}^{\mathllap{X^\ast}} \downarrow & \swArrow_{\mathrlap{f^\ast}} & \downarrow^{\mathrlap{Y^\ast}} \\ \ast & \underset{\mathbb{1}}{\to} & \ast } \;\;\;\; \coloneqq \;\;\;\; \array{ \ast & \overset{Y^\ast}{\to} & \ast & \overset{\mathbb{1}}{\to} & \ast & \overset{\mathbb{1}}{\to} & \ast \\ {}^\mathllap{\mathbb{1}}\downarrow & \swArrow_{\epsilon_Y} & {}^{\mathllap{Y}} \downarrow & \swArrow_{f} & \downarrow^{\mathrlap{X}} & \swArrow_{\eta_X} & \downarrow \mathrlap{1} \\ \ast & \underset{\mathbb{1}}{\to} & \ast & \underset{\mathbb{1}}{\to} & \ast & \underset{X^\ast}{\to} & \ast } \,.$ ## Examples ###### Example In $\mathcal{C} =$ Vect with its standard tensor product monoidal structure, a dual object is a dual vector space and a dual morphism is a dual linear map. ###### Example If $A$, $B$ are C*-algebras which are Poincaré duality algebras, hence dualizable objects in the KK-theory-category, then for $f \colon A \to B$ a morphism it is K-oriented, the corresponding Umkehr map is (postcomposition) with the dual morphism of its opposite algebra version: $f! \coloneqq (f^op)^\ast \,.$ ###### Example More generally, twisted Umkehr maps in generalized cohomology theory are given by dual morphisms in (∞,1)-category of (∞,1)-modules. See at twisted Umkehr map for more. Revised on July 19, 2013 15:24:08 by Urs Schreiber (89.204.138.251)
A point moves in a straight line under the retardation $a v^{2}$, where '$a$' is a positive constant and $v$ is speed. If the initial speed is $u$, the distance covered in '$t$' seconds is:
(A) $a u t$
(B) $\frac{1}{a} \log (a u t)$
(C) $\frac{1}{a} \log (1+a u t)$
(D) $a \log (a u t)$

## 1 Answer

Best Answer
The retardation is given by $\frac{d v}{d t}=-a v^{2}$. Integrating between proper limits,
$$-\int_{u}^{v} \frac{d v}{v^{2}}=\int_{0}^{t} a \, d t \quad\Rightarrow\quad \frac{1}{v}=a t+\frac{1}{u}$$
Since $v = \frac{d x}{d t}$, this gives
$$\frac{d t}{d x}=a t+\frac{1}{u} \quad\Rightarrow\quad d x=\frac{u \, d t}{1+a u t}$$
Integrating again between proper limits,
$$\int_{0}^{S} d x=\int_{0}^{t} \frac{u \, d t}{1+a u t} \quad\Rightarrow\quad S=\frac{1}{a} \ln (1+a u t)$$
so the answer is (C).
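As a quick numerical sanity check (added, not part of the original answer; the test values are arbitrary), a crude Euler integration of $\frac{dv}{dt} = -a v^{2}$ reproduces $S = \frac{1}{a}\ln(1+aut)$:

import math

a, u, T = 0.5, 3.0, 2.0          # arbitrary test values
dt, t, v, s = 1e-6, 0.0, u, 0.0
while t < T:
    s += v * dt                  # integrate position
    v += -a * v * v * dt         # apply the retardation a*v^2
    t += dt
print(s)                             # numerical distance
print(math.log(1 + a * u * T) / a)   # closed form (1/a) ln(1 + a u T); should match closely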
# Math Help - Prove g is differentiable at a

1. ## Prove g is differentiable at a

Let f be a function that is differentiable at a, where a is a real number, with f(a) not 0. Define g(x) = 1/f(x) for x near a. Prove that g is differentiable at a and give the formula for g'(a).

f is differentiable at a, implying it is continuous at a. So g(x) tends to 1/f(a) as x tends to a.

g'(a) = lim_{x->a} (g(x) - g(a))/(x - a) = lim_{x->a} (1/f(x) - 1/f(a))/(x - a)

Not sure what the next steps are.

2. Originally Posted by poirot
Let f be a function that is differentiable at a, where a is a real number, with f(a) not 0. Define g(x) = 1/f(x) for x near a. Prove that g is differentiable at a and give the formula for g'(a). f is differentiable at a, implying it is continuous at a. So g(x) tends to 1/f(a) as x tends to a. g'(a) = lim_{x->a} (g(x) - g(a))/(x - a) = lim_{x->a} (1/f(x) - 1/f(a))/(x - a)
That is correct. Note that it can be written as [(f(a)-f(x))/(x-a)][1/(f(x)f(a))]. Now find the limit as x -> a.

3. Originally Posted by Plato
That is correct. Note that it can be written as [(f(a)-f(x))/(x-a)][1/(f(x)f(a))]. Now find the limit as x -> a.
ok so g'(a) = f'(a)/(f(x)f(a)), but how do I show g is differentiable

4. Originally Posted by poirot
ok so g'(a) = f'(a)/(f(x)f(a)), but how do I show g is differentiable
Not quite. You missed a minus sign. Note that [(f(a)-f(x))/(x-a)] = -[(f(x)-f(a))/(x-a)]. Also f(x) -> f(a), so you get [f(a)]^2 in the denominator.

5. g'(a) = -f'(a)/(f(a))^2. Is that enough to show it's differentiable, since f'(a) and f(a) exist?

6. Originally Posted by poirot
g'(a) = -f'(a)/(f(a))^2. Is that enough to show it's differentiable, since f'(a) and f(a) exist?
Yes.
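Putting the hints together (an added summary, not a post from the original thread), the completed computation is
$$g'(a)=\lim_{x\to a}\frac{\frac{1}{f(x)}-\frac{1}{f(a)}}{x-a}=\lim_{x\to a}\left[-\frac{f(x)-f(a)}{x-a}\cdot\frac{1}{f(x)f(a)}\right]=-f'(a)\cdot\frac{1}{f(a)^{2}},$$
where the last step uses that $f(x)\to f(a)$ (continuity at $a$, which follows from differentiability) and $f(a)\neq 0$. Since this limit exists, $g$ is differentiable at $a$ by definition, with $g'(a)=-\frac{f'(a)}{f(a)^{2}}$.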
# Problem of the Week #254 - Aug 01, 2017

Euge (MHB Global Moderator)

No one answered this week's problem. You can read my solution below.

For all $k \ge 1$, the $k$th tensor power of $\Bbb Z/n\Bbb Z$ is $(\Bbb Z/n\Bbb Z)^{\otimes_{\Bbb Z} k}$, which is isomorphic to $\Bbb Z/n\Bbb Z$. So the tensor algebra $\mathcal{T}(\Bbb Z/n\Bbb Z)$ of $\Bbb Z/n\Bbb Z$ is isomorphic to $M=\Bbb Z \oplus \Bbb Z/n\Bbb Z \oplus \Bbb Z/n\Bbb Z \oplus \cdots$. The map $\Bbb Z[x] \to M$ sending $p(x) = \sum_{i = 0}^m a_i x^i$ to $(a_0,a_1,\ldots,a_m,0,0,0,\ldots)$ is a surjective ring morphism with kernel $(nx)$, so $M \approx \Bbb Z[x]/(nx)$.
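To spell out why the kernel is exactly $(nx)$ (an added remark): a polynomial $p(x)=\sum_{i=0}^m a_i x^i$ maps to zero precisely when $a_0=0$ in $\Bbb Z$ and $n \mid a_i$ for every $i \ge 1$, which happens precisely when $p(x)=nx\,q(x)$ for some $q \in \Bbb Z[x]$; hence the kernel is $(nx)$ and the first isomorphism theorem gives $M \approx \Bbb Z[x]/(nx)$.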
## Cryptology ePrint Archive: Report 2021/103 RUP Security of the SAEF Authenticated Encryption mode Elena Andreeva and Amit Singh Bhati and Damian Vizar Abstract: ForkAE is a family of authenticated encryption (AE) schemes using a forkcipher as a building block. ForkAE was published in Asiacrypt'19 and is a second-round candidate in the NIST lightweight cryptography process. ForkAE comes in several modes of operation: SAEF, PAEF, and rPAEF. SAEF is optimized for authenticated encryption of short messages and processes the message blocks in a sequential and online manner. SAEF requires a smaller internal state than its parallel sibling PAEF and is better fitted for devices with smaller footprint. At SAC 2020 it was shown that SAEF is also an online nonce misuse-resistant AE (OAE) and hence offers enhanced security against adversaries that make blockwise adaptive encryption queries. It has remained an open question if SAEF resists attacks against blockwise adaptive decryption adversaries, or more generally when the decrypted plaintext is released before the verification (RUP). RUP security is a particularly relevant security target for lightweight (LW) implementations of AE schemes on memory-constrained devices or devices with stringent real-time requirements. Surprisingly, very few NIST lightweight AEAD candidates come with any provable guarantees against RUP. In this work, we show that the SAEF mode of operation of the ForkAE family comes with integrity guarantees in the RUP setting. The RUP integrity (INT-RUP) property was defined by Andreeva et~al.~in Asiacrypt'14. Our INT-RUP proof is conducted using the coefficient H technique and it shows that, without any modifications, SAEF is INT-RUP secure up to the birthday bound, i.e., up to $2^{n/2}$ processed data blocks, where $n$ is the block size of the forkcipher. The implication of our work is that SAEF is indeed RUP secure in the sense that the release of unverified plaintexts will not impact its ciphertext integrity. Category / Keywords: secret-key cryptography / Authenticated encryption, forkcipher, lightweight cryptography, short messages, online, provable security, release of unverified plaintext, RUP. Date: received 27 Jan 2021, last revised 7 Apr 2021 Contact author: elena andreeva at aau at,amitsingh bhati@esat kuleuven be,damian vizar@csem ch Available format(s): PDF | BibTeX Citation
# Hand detection in matlab I am beginner and I want a help to know how to detect hand in Matlab. First, I need code which detects skin and then draw a rectangle around a region so it show to me face and hand. When I try to make rectangle only around hand and the hand is away from the face, the the rectangle is drawn around the face instead. What do I need to do to discount the face and only draw the rectangle around the hand? • Why don't you tell us what you've tried, and what you're trying to do with the detection? Your question is way too open-ended to get a sensible answer. – Peter K. Jun 8 '13 at 15:11 • first code which detect skin and then draw a rectangle around a region so it show to me face and hand and i try to made rectangle only around hand but when hand be away from face , the rectangle is drawn around face then what way correct ? – kjG Jun 8 '13 at 16:50 • I've tried to update your question with this information. Do you have any sample pictures? Please upload them to an image-sharing site and post the links in your question. We can insert them into the question (you will be able to when you have enough rep). – Peter K. Jun 8 '13 at 22:05 • sorry for replaying later i made it but it is not accurate can i upload it to you and can you help me to make it accurate ??? because our project is helping deaf people and the accurate of detecting hand is very important in our project – kjG Jun 17 '13 at 22:09 • Please upload them to an image-sharing site and post the links in your question. – Peter K. Jun 18 '13 at 0:51 As far as I can tell, Matlab's Computer Vision toolbox provides vision.CascadeObjectDetector for object detection, with support only for Frontal Face (CART),Frontal Face (LBP),Upper Body,Eye Pair,Single Eye,Single Eye (CART),Profile Face,Mouth,Nose. When you say: "the rectangle is drawn around the face instead.", that is happening because you are calling CascadeObjectDetector without specifying a specific object to detect, resulting in CascadeObjectDetector setting the to FrontalFaceCART by default. You are probably doing something like this: detector = vision.CascadeObjectDetector; Try doing this: detector = vision.CascadeObjectDetector('Nose'); And you'll see how the nose is detected. You are probably then getting excited, because it would be really easy then passing 'Hand' as a parameter instead of 'Nose', or anything. But unfortunately until now CascadeObjectDetector doesn't provide a model for 'Hand'. However, there exists a way to archieve this, the last ComputerVision toolbox provides the trainCascadeObjectDetector which you can use for detecting your own models(the hand for example). You'll see that this consists of a training process where you have to supply positive and negative images to develop your own detector, further explanations are given in the links. There is another useful tool you can use for this: a GUI tool for getting easier the classification process. Hope this helps • really Thank you very very much for your replaying :) – kjG Jun 17 '13 at 22:10
# IK for this IkParameterizationType not implemented yet. Hi All, I have followed the instructions for generating an IKFast plugin located here, and have successfully generated an IKFast package using the instructions. Unfortunately the link to the method of running the plugin in Rviz (link) does not have any content so I am once again left guessing. When I run RViz using: roslaunch hyd_sys_complete_sldasm_moveit_config demo.launch I get the following errors repeatedly: [ERROR] [1409802774.965529932]: IK for this IkParameterizationType not implemented yet. This does not make sense to me, as when I examine the source code the control type (Translation3d) was specifically given as an argument for generating the IKFast package, yet it claims there is no implementation for it. Here's the offending generated code. I am using Translation3D as my solver yet it is not covered in the solve function given below. int IKFastKinematicsPlugin::solve(KDL::Frame &pose_frame, const std::vector<double> &vfree, IkSolutionList<IkReal> &solutions) const { // IKFast56/61 solutions.Clear(); double trans[3]; trans[0] = pose_frame.p[0];//-.18; trans[1] = pose_frame.p[1]; trans[2] = pose_frame.p[2]; KDL::Rotation mult; KDL::Vector direction; switch (GetIkType()) { case IKP_Transform6D: // For **Transform6D**, eerot is 9 values for the 3x3 rotation matrix. mult = pose_frame.M; double vals[9]; vals[0] = mult(0,0); vals[1] = mult(0,1); vals[2] = mult(0,2); vals[3] = mult(1,0); vals[4] = mult(1,1); vals[5] = mult(1,2); vals[6] = mult(2,0); vals[7] = mult(2,1); vals[8] = mult(2,2); // IKFast56/61 ComputeIk(trans, vals, vfree.size() > 0 ? &vfree[0] : NULL, solutions); return solutions.GetNumSolutions(); case IKP_Direction3D: case IKP_Ray4D: case IKP_TranslationDirection5D: // For **Direction3D**, **Ray4D**, and **TranslationDirection5D**, the first 3 values represent the target direction. direction = pose_frame.M * KDL::Vector(0, 0, 1); ComputeIk(trans, direction.data, vfree.size() > 0 ? &vfree[0] : NULL, solutions); return solutions.GetNumSolutions(); case IKP_TranslationXAxisAngle4D: case IKP_TranslationYAxisAngle4D: case IKP_TranslationZAxisAngle4D: // For **TranslationXAxisAngle4D**, **TranslationYAxisAngle4D**, and **TranslationZAxisAngle4D**, the first value represents the angle. ROS_ERROR_NAMED("ikfast", "IK for this IkParameterizationType not implemented yet."); return 0; case IKP_TranslationLocalGlobal6D: // For **TranslationLocalGlobal6D**, the diagonal elements ([0],[4],[8]) are the local translation inside the end effector coordinate system. ROS_ERROR_NAMED("ikfast", "IK for this IkParameterizationType not implemented yet."); return 0; case IKP_Rotation3D: case IKP_Translation3D: case IKP_Lookat3D: case IKP_TranslationXY2D: case IKP_TranslationXYOrientation3D: case IKP_TranslationXAxisAngleZNorm4D: case IKP_TranslationYAxisAngleXNorm4D: case IKP_TranslationZAxisAngleYNorm4D: ROS_ERROR_NAMED("ikfast", "IK for this IkParameterizationType not implemented yet."); return 0; default: ROS_ERROR_NAMED("ikfast", "Unknown IkParameterizationType! Was the solver generated with an incompatible version of Openrave?"); return 0; } } Is there something in the kinematics.yaml file for the RViz plugin I need to change? I know that the standard kinematic solvers do not work properly for my arm as it only has three degrees of freedom, but unfortunately IKFast has proven much more trouble than it is worth. Is there a solution to this? 
Kind Regards
Bart

What this means is: You have correctly created a Translation3D solver using IKFast. But the plugin, which wraps your solver, doesn't know how to handle Translation3D, because I haven't implemented that case yet; I have never used Translation3D, so I couldn't test it. What the switch statement you quoted does is call ComputeIk() from the generated solver; the ComputeIk() type signature is always the same, but the meaning of the arguments differs depending on the IkParameterizationType. For Translation3D there is only the translation, which is already stored in the variable trans, so you can probably leave the second argument (the direction/rotation/...) as NULL, like this:

case IKP_Translation3D:
  ComputeIk(trans, NULL, vfree.size() > 0 ? &vfree[0] : NULL, solutions);
  return solutions.GetNumSolutions();

Please let me know if it works, so I can update the moveit_ikfast package!

P.S.: I recommend asking MoveIt-related questions on the moveit-users mailing list, since most developers are reading that.

• Thank you. It did cross my mind to put in the ComputeIk call in those case statements, but I was not sure of the calling convention for it. Can I confirm that if this works properly I will be able to drag the arm around in Cartesian coordinates in 3D in RViz? ( 2014-09-04 03:54:23 -0600 )
• Yes, correct. ( 2014-09-04 04:20:59 -0600 )
• I can confirm that Martin's answer is correct. Thank you very much. I just need to tidy up the behaviour a little when the end effector reaches the limits of its workspace (it tends to jump around) but other than that this is the answer. ( 2014-09-04 20:22:12 -0600 )
• I just noticed that somebody already added Translation3D support to moveit_ikfast (in commit f8c8cc2e76) in February, so this is already fixed in the source (only not released as .deb yet). ( 2014-09-05 03:23:23 -0600 )
How to read a character one by one from an external file, modify it, and save it back to another external file? The purpose of this question is to learn how to read character by character of a fetched line of characters, modify it and save back to another file. For the sake of simplicity, let's take a simple application which is encrypting an external text file with a well-known simple algorithm, "shift cipher". I can only give you a skeleton as follows, the remaining is beyond my knowledge. \documentclass{article} \usepackage{filecontents} \begin{filecontents*}{plain.txt} Karl's students do not care about dashing patterns. Karl's students do not care about arrow tips. Karl's students, by the way, do not know what a transformation matrix is. \end{filecontents*} % shift parameter, % positive value moves forward (for example, 1 makes A get shifted to B), % negative value moves backward (for example, -1 makes B get shifted to A). \def\offset{13} \newwrite\writer \begin{document} \immediate\openout\writer=encrypted.txt \loop \immediate\write\writer{Do something here} \repeat \immediate\closeout\writer \end{document} The interesting parts I want to learn: • How to read a character one by one from a single line fetched from an external file. • How to modify the single character. • How to know the number of characters in a single line fetched from an external file. Please use the most simple method that newbies (like me) can easily digest the code. - Can we assume only A-Za-z? Digits? Spaces? Punctuation? – Qrrbrbirlbel Mar 21 '13 at 0:33 Would the cipher apply only to text? – Scott H. Mar 21 '13 at 0:34 @Qrrbrbirlbel: Just shift according the ASCII table mapping. – kiss my armpit Mar 21 '13 at 0:35 @ScottH.: Yes. For the sake of simplicity and I think it is more than enough to learn what I want. – kiss my armpit Mar 21 '13 at 0:35 For one letter #1 the sequence \char\numexpr#1+\offset\relax can give you the offset letter (I have defined \offset as a count). Problems I encountered: Spaces, text encoding, braces in the output (decoding fails for all special characters), writing out. – Qrrbrbirlbel Mar 21 '13 at 1:00 Easiest way is to set up the uppercase table to do the translation. This file does the encoding twice to confirm you get back to where you started (apart from white space normalisation) \documentclass{article} \usepackage{filecontents} \begin{filecontents*}{plain.txt} Karl's students do not care about dashing patterns. Karl's students do not care about arrow tips. Karl's students, by the way, do not know what a transformation matrix is. \end{filecontents*} % shift parameter, % positive value moves forward (for example, 1 makes A get shifted to B), % negative value moves backward (for example, -1 makes B get shifted to A). 
\def\offset{13} \newwrite\writer \def\wrot#1{% \uppercase{\immediate\write\writer{#1}}} \begin{document} \makeatletter { \count@a \loop \uccode\count@=\numexpr\count@+\offset\relax \uccode\numexpr\count@+\offset\relax=\count@ \ifnum\count@<n \repeat \count@A \loop \uccode\count@=\numexpr\count@+\offset\relax \uccode\numexpr\count@+\offset\relax=\count@ \ifnum\count@<N \repeat \immediate\openout\writer=encrypted.txt \loop \expandafter\wrot\expandafter{\data} \repeat \immediate\closeout\writer \immediate\openout\writer=reencrypted.txt \loop \expandafter\wrot\expandafter{\data} \repeat \immediate\closeout\writer } \section{plain.txt} \input{plain.txt} \section{encrypted.txt} \input{encrypted.txt} \section{reencrypted.txt} \input{reencrypted.txt} \end{document} - This should allow all ASCII printable characters. \begin{filecontents*}{plain.txt} u&9@@^{=!{{} Karl's students do not care about dashing patterns. Karl's students do not care about arrow tips. Karl's students, by the way, do not know what a transformation matrix is. \end{filecontents*} \documentclass{article} \usepackage{xparse} \ExplSyntaxOn \int_gzero_new:N \g_karl_offset_int \ior_new:N \l_karl_input_stream \iow_new:N \l_karl_output_stream \seq_new:N \l__karl_input_seq \tl_new:N \l__karl_input_tl \tl_new:N \l__karl_temp_tl \tl_const:Nn \c_karl_space_tl { ~ } \NewDocumentCommand{\cypher}{omm} {% #1 = shift #2 = input file, #3 = output file \IfValueT{#1}{ \int_gset:Nn \g_karl_offset_int { #1 } } \ior_open:Nn \l_karl_input_stream { #2 } \iow_open:Nn \l_karl_output_stream { #3 } \ior_str_map_inline:Nn \l_karl_input_stream { \tl_set:Nn \l__karl_input_tl { ##1 } \tl_replace_all:Nnn \l__karl_input_tl { ~ } { \c_karl_space_tl } \karl_write:V \l__karl_input_tl } \ior_close:N \l_karl_input_stream \iow_close:N \l_karl_output_stream } \cs_new_protected:Npn \karl_write:n #1 { \tl_clear:N \l__karl_temp_tl \tl_map_inline:nn { #1 } { \karl_shift:n { ##1 } } \iow_now:NV \l_karl_output_stream \l__karl_temp_tl } \cs_generate_variant:Nn \karl_write:n { V } \cs_generate_variant:Nn \iow_now:Nn { NV } \cs_new_protected:Npn \karl_shift:n #1 { \group_begin: \token_if_eq_meaning:NNTF #1 \c_karl_space_tl { \char_set_lccode:nn { ~ } { \g_karl_offset_int + 32 } \tl_to_lowercase:n { \group_end: \tl_put_right:Nn \l__karl_temp_tl { ~ } } } { \int_compare:nTF { #1 + \g_karl_offset_int > 126 } { \char_set_lccode:nn { #1 } { #1 + \g_karl_offset_int - 126 + 32 } } { \int_compare:nTF { #1 + \g_karl_offset_int < 32 } { \char_set_lccode:nn { #1 } { #1 + \g_karl_offset_int + 126 - 32 } } { \char_set_lccode:nn { #1 } { #1 + \g_karl_offset_int } } } \tl_to_lowercase:n { \group_end: \tl_put_right:Nn \l__karl_temp_tl { #1 } } } } \ExplSyntaxOff \cypher[13]{plain.txt}{thirteen.txt} \cypher[-13]{thirteen.txt}{plain13.txt} \cypher[15]{plain.txt}{fifteen.txt} \cypher[-15]{fifteen.txt}{plain15.txt} \stop plain.txt u&9@@^{=!{{} Karl's students do not care about dashing patterns. Karl's students do not care about arrow tips. Karl's students, by the way, do not know what a transformation matrix is. thirteen.txt $3FMMk*J.**, Xn!y4"-"#$qr{#"-q|-{|#-pn!r-no|$#-qn"uv{t-}n##r!{"; Xn!y4"-"#$qr{#"-q|-{|#-pn!r-no|$#-n!!|&-#v}"; Xn!y4"-"#$qr{#"9-o(-#ur-&n(9-q|-{|#-x{|&-&un#-n-#!n{"s|!zn#v|{-zn#!v'-v"; plain13.txt u&9@@^{=!{{} Karl's students do not care about dashing patterns. Karl's students do not care about arrow tips. Karl's students, by the way, do not know what a transformation matrix is. fifteen.txt &5HOOm,L0,,. 
Zp#{6$/$%&st}%$/s~/}~%/rp#t/pq~&%/sp$wx}v/!p%%t#}$=
Zp#{6$/$%&st}%$/s~/}~%/rp#t/pq~&%/p##~(/%x!$=
Zp#{6$/$%&st}%$;/q*/%wt/(p*;/s~/}~%/z}~(/(wp%/p/%#p}$u~#|p%x~}/|p%#x)/x$=

plain15.txt

u&9@@^{=!{{}
Karl's students do not care about dashing patterns.
Karl's students do not care about arrow tips.
Karl's students, by the way, do not know what a transformation matrix is.

-

While I am all for LaTeX as a tool to produce beautiful books and documents, this question just asks for an answer in a programming language that is much more suited for the problem. This is an implementation in Python. It is pretty verbose, but I think that almost everybody with any programming knowledge who has never seen a single line of Python will understand it and to some degree be able to modify the following code:

import string

# Note: string.lowercase / string.uppercase exist in Python 2;
# on Python 3 use string.ascii_lowercase / string.ascii_uppercase.
filecontents = """Karl's students do not care about dashing patterns.
Karl's students do not care about arrow tips.
Karl's students, by the way, do not know what a transformation matrix is.
"""

f = open('plain.txt', 'w')
f.write(filecontents)
f.close()

def shift(char, offset):
    if char in string.lowercase:
        shifted_index = string.lowercase.index(char) + offset
        shifted_char = string.lowercase[shifted_index % len(string.lowercase)]
    elif char in string.uppercase:
        shifted_index = string.uppercase.index(char) + offset
        shifted_char = string.uppercase[shifted_index % len(string.uppercase)]
    else:
        # do nothing for special characters, otherwise we end up with non-printable
        shifted_char = char
    return shifted_char

offset = 13

input = open('plain.txt', 'r')
output = open('plain_shift.txt', 'w')
while True:
    char = input.read(1)        # read one character at a time
    if char:
        shifted = shift(char, offset)
        output.write(shifted)
    else:
        break
input.close()
output.close()
# Unexpected output voltage in opamp (LM741, DC, single supply) circuit [duplicate] As a beginner I tried to make a simple DC circuit with an opamp on a single supply. This is what I came up with: simulate this circuit – Schematic created using CircuitLab I use R3 and R4 as a voltage divider to get 1V between the inputs of the opamp, and R1 and R2 in the noninverting configuration. I tested this circuit in a few simulators and they show 3V at the output. $$Gain = 1 + \frac{R2}{R1}=1+\frac{560}{270}\approx3$$ I built the circuit and measure 5.64V where I expected 3V. What am i doing wrong? • What you're doing wrong is expecting a 741 to operate off a 9V single-supply. I suggest you search this stack for all the many '741' questions and read about why that's a problem. – brhans Jan 15 '20 at 20:44 The LM741 (datasheet here) has a "common mode" input voltage range of not within about 3V of either supply. In your example, with a 9V supply, you must have an input voltage in the 3V-6V range (0+3, 9-3) for the opamp to function correctly. • The absolute maximum input common mode range varies with supply voltage. It is usually not well covered in data sheets and if stated is usually specified at a single relatively high supply voltage. I specified a Vcm of "not within 3V of either supply", based on a Vcm of +/-12V for a supply of +/-15V in the above cited datasheet. AT lower supply voltages some suggest that Vcm may be within 2V of either supply - you'd need to establish this from a relevant datasheet spec - which may not be available. It's a lot easier to use an eg LM324/358 (see below) which has a Vcm range of 0-(Vcc-1.5V). The recommended minimum supply voltage is +/- 10V = 20V total, but a lower supply voltage can be used. How much lower is not always well specified in datasheets but a lower limit is obviously not less than the common mode input range - ie with a supply of 6.1V, if Vcm is +/-3V, you will have 6.1-6 = 0.1V of allowed input swing. SOLUTION The best solution to most LM741 problems is to use some other device. The LM741 is an extremely old design that has been superseded by several generations of devices with improved performance in various areas. An excellent alternative, also very old but newer than the LM741, is the LM324 (quad) or LM358 (dual) opamp. (LM324 datasheet here These are highly available, low cost, single supply (with some limitations) and generally well behaved and easy to use. The LM324 is obsolete but still exceedingly useful. The LM741 is exceedingly obsolete and, while an extremely marvellous and extremely widely used opamp in its day, is now so much harder to use and more limited in capabilities than many more modern devices that it should be allowed to bask in its historical glory. • @brhans Indeed. Thanks. – Russell McMahon Jan 15 '20 at 21:02 • Agree with @Russell. The LM324/124, or LT1014 is our nuts & bolts op amp to use for general purpose applications. I don't really know of anyone who uses any flavor of a '741 for a new design. – SteveSh Jan 15 '20 at 23:09 Doesn't this opamp require at least 20V voltage supply? Probably the spice model used in the simulation has a different supply voltage threshold. • +/- 10 volt supplies means 20V across the op amp, min, – SteveSh Jan 15 '20 at 20:48 • @SteveSh Yes BUT that's "recommended min" - a vaguer than usual spec. They are not so keen at establishng an absolute min. At a min-min :-) Vsupply cannot be less than Vcm - but Vcm min is also usually not specified - and even less so at low Vsupply. 
– Russell McMahon Jan 15 '20 at 20:54
• Using the TI data sheet for the uA741, table 6.2 Recommended Operating Conditions, they spec Vcc+(min) = +5V and Vcc-(min) = -5V. I agree that those specs really don't mean much, as hardly any other parameter is spec'd at those voltages. Most parameters are spec'd at +/-15V supply rails. You use it outside of those values at your own risk. – SteveSh Jan 15 '20 at 22:09
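A small numeric sketch of the two points made in this thread, using the resistor values from the question; the ±3 V common-mode margin is the rough figure quoted in the answer above, not a datasheet guarantee.

```python
# Rough check of the circuit in the question (values from the post; the 3 V
# common-mode margin is the approximate figure quoted in the answer).
R1, R2 = 270, 560          # ohms
Vin, Vsupply = 1.0, 9.0    # volts

gain = 1 + R2 / R1
print(f"ideal non-inverting output: {gain * Vin:.2f} V")   # ~3.07 V

vcm_lo, vcm_hi = 0 + 3, Vsupply - 3                        # usable input window for a 741 here
print(f"741 common-mode window: {vcm_lo}-{vcm_hi} V")
print("input inside window?", vcm_lo <= Vin <= vcm_hi)      # False -> misbehaviour expected
```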
# Page 1 Listening Study Guide for the TOEFL® ## How to Prepare for the TOEFL® Listening Test ### General Information The TOEFL Listening test assesses your ability to do “academic listening.” This includes listening with understanding to lectures like the ones you would experience in college classes and to conversations you might have while attending college. The speech you hear will not be very formal and should sound natural to you. You will have 60 to 90 minutes to listen and answer 34 to 51 questions. Therefore, approximately 10 minutes should be allotted to each listening task. On this test, you will listen to lectures and conversations wearing headphones and you will be able to take notes as you listen. Then, you will mostly answer typical, written, multiple-choice-type questions about what you heard. A few questions, however, will have a slightly different format: • questions with more than one correct answer out of over four possible choices • questions asking you to put steps in a sequence • questions asking you to place objects or text into the proper place on a chart Note: Our practice format does not allow for more than four answer choices, but we have attempted to provide practice in all of these question formats within our system limits. Just be prepared to see a slightly different format for some questions on the actual test, including more than four answer choices. Here are some specific things you should know and ideas for practicing your listening skills before test day. The academic portion of the listening section has three to four lectures, and each listening task is generally followed by six questions. The lectures are usually excerpts of longer classroom lectures. Don’t worry, the information you listen to will include all the information you need to answer the questions correctly. The topics covered are from a broad range of topics, from the arts to life sciences to physical sciences, and topics from the social sciences. The academic listening section has seven different types of questions, shown below under the three broad question categories. These questions are very similar to the questions you experience in the reading section. They are worth practicing because understanding the type of question being asked will help you in selecting the correct answer. Basic Comprehension Questions (1–2 of these per lecture) * Identifying the gist (main idea or topic) and/or major points (flow of ideas) * Listening for details (key words, phrases, and details) Pragmatic Understanding Questions (1–3 of these per lecture) * Understanding the function (purpose) * Understanding the speaker’s attitude and degree of certainty Connecting Information Questions (1–2 of these per lecture) * Understanding organization * Connecting content (categorizing information, summarizing a process) * Inference and predictions #### Listening for Basic Comprehension The word comprehension refers to the various kinds of understanding you achieve after reading something. It ranges from learning specific facts to gaining meaning from things that weren’t actually stated in the material, but merely implied by other things the author stated. During the listening task, you will encounter several questions designed to test if you have basic comprehension of the lecture/conversation you listened to. It’s important to take notes as you will not be able to review or go back and listen again. Each listening task will be approximately 3 to 5 minutes long. You will hear the entire listening passage. 
Don’t worry, there are key sections that will replay so that you can answer the questions accordingly. Let’s begin with questions that ask about the main idea, or gist, of the passage. Main Idea As you listen and take notes, you will listen for the gist. The gist is the main idea or main topic of the conversation and/or lecture. In other words, the gist is what the lecture/conversation is about. It is the big picture and not the little details. You will answer one to two questions that deal with the gist or main idea. Listen for what the topic is about. The lecturer or speaker will usually give you a clue in the beginning of the passage. For example, the lecturer might say, “Good morning class. Let’s continue our talk about the geographical features of the Great Lakes.” You will know right away that the gist of the passage discusses the geographical features of the Great Lakes. Therefore, when you see a question that asks what the topic is mainly about, You can identify immediately that it is a gist or main idea question. You can recognize a main idea or gist question by looking for clues in the question. Is the question worded in general terms? For example: “What are the students talking about? Why is the professor reviewing the chapter?” Does the question include the word mainly, mostly, or about? These are clear indicators of a main idea question. The answers may be inferred or directly stated by the speaker. Major Points Major points are similar to new paragraphs in a reading passage. The lecturer/speaker will often use transition words to indicate that a major point is about to be spoken; therefore, notes should be taken here. You’ll hear words such as first, second, third, or the lecturer may even state the major point outright. For example: “Our first major point about Navahoe architecture is that it was designed to …” You’ll also hear words like additionally, furthermore, lastly, finally. These are clear clues that you will hear a major point. Prepare yourself to write notes here as details will follow that will help you in answering detailed questions. Supporting Details Supporting details is exactly that—information that supports the main idea. You will hear examples or other specific information. There may even be a person restating the information incorrectly so the speaker can reiterate the information, or someone in the listening passage may repeat the information. This is done for your benefit: to let you know that it was an important detail. Write this information down because it will most likely be a question after the passage is finished. When you get to the questions, the supporting details question will require you to remember specific information. The level of difficulty can vary. A supporting detail question might be as simple as, “According to the professor, at what age does a child begin to ____?” A more difficult question may ask you to refer to more than one section of the passage to determine the correct answer. For example: “What two factors contribute to the philosophy of ____?” Again, the importance of good note-taking cannot be stressed enough. #### Listening for Pragmatic Understanding Pragmatic understanding is a key element in determining your level of English language ability. In fact, it is so important to the TOEFL scoring that pragmatic understanding is found in three out of the four sections of the TOEFL iBT test. In the listening section, pragmatic understanding, simply put, is what you understand about what you hear. How does the speaker say it? 
Are they using terms and expressions that say one thing, but mean another or how certain or uncertain the speaker is by the way they say something? Pragmatics are what you understand about why they say what they say or the meaning in their tone. You could say that pragmatic understanding is indirect understanding. Not what the speaker says directly, but what their purpose is, or what they mean. Let’s explore ways to improve your skills for one of the key listening purposes you will be tested on in the listening section: Listening for Pragmatic Understanding. Speaker’s Attitude The speaker’s attitude is used to imply information that is not directly said. You will need to use implied information to decide what the speaker’s thoughts, feelings, or opinions are at some point in the excerpt. The information you need may not be linguistic (direct words or phrases), but in the speaker’s tone of voice. The speaker’s attitude can be determined by volume, pitch, and speed. To really get the point of the excerpt, you’ll often have to pick up on clues in the speaker’s tone and attitude. Is the speaker using a calm or emotional tone? Is the speaker using casual or formal or professional language? Is the speaker’s voice loud, hard, or firm, or is it slow, soft, and nervous? Does the speaker’s voice sound friendly or unfriendly? Confident or angry? Noting these rises and falls in tone, speed, and volume can give a clear indication of the speaker’s attitude. Questions that address the speaker’s attitude will have key words like seem to feel or best expresses. When you see question clues like these, you will be required to answer based on your pragmatic understanding; in other words, what you understand by the speaker’s tone, volume, speed, and pitch. Speaker’s Degree of Certainty In English, the speaker’s degree of certainty can be found in several different ways: If a speaker thinks they are saying something important, they’ll probably say it firmer or louder. They will emphasize the point more strongly. A sentence spoken louder stands out and this can indicate a degree of certainty and importance. When a speaker has a very strong opinion, the volume will also most likely increase. This is true whether the emotion is positive or negative. People use a higher volume or stronger emotional voice when they are excited about something or when they are angry, annoyed, or being sarcastic. When people are sure of something, they tend to speak louder. However, when people are not confident, they will tend to speak more quietly. Speed also plays an important role in the degree of certainty. English speakers tend to talk faster when they are excited about something. Volume, speed, and strong emotion can determine certainty, and even uncertainty. Lastly, the speaker’s word choice can determine certainty. They’ll use words like a little, kind of, sort of, more or less, maybe, not so much and other words of this kind. These words are called qualifiers. If a speaker uses a lot of qualifiers, then it is very possible that the speaker does not have a high degree of certainty. If the speaker doesn’t use qualifiers, then he/she most likely has a high degree of certainty. Speaker’s Function or Purpose The speaker’s function or purpose is not what the speaker says, necessarily, but the function or purpose that lies beneath. While listening, you must determine if the speaker is complaining, agreeing, narrating, questioning, or recommending, and so forth. 
Understanding the meaning within the context of an entire lecture or conversation is significant in cases where the speaker’s opinion or perspective is involved. When taking notes on a lecture or conversation, you can pay close attention to whether the statement is intended to be understood literally or if it has another meaning beneath the surface expression. Questions of this type usually begin with why. The question may also include a direct reference to a speaker in the lecture or conversation. One example would be, “Why does the professor say, ‘You’re better than you think’?” or “Why does the housing director listen to Matt’s complaint?” Other clues that indicate the question is a function/purpose question are words such as imply, inferred, and purpose. These questions are usually followed by a replay of a portion of the lecture/conversation. Here are some examples of these types of questions: • “What does the professor imply when he says this?” (Then you will be directed to replay a part of the audio.) • “What can be inferred from the professor’s reaction to the student?” (replay) • “What is the purpose of the woman’s response?” (replay) • “Why does the student say this?” (replay) #### Listening to Connect and Synthesize Information The questions that involve connecting and synthesizing information ask you to understand the passage as a whole. Synthesizing information is taking information from the entire passage that is connected and determining the importance of the sections of the passage and relationships between ideas, drawing conclusions, and making inferences from the information you heard. Simply put, you’ll need to have a handle on more than just the main idea. You’ll have to understand how the speaker presents his/her ideas, deciding which are important and how the ideas are connected. Let’s look at ways to tackle synthesizing information. After a listener identifies what is important in the text, he/she must go through the process of organizing, recalling, and recreating the information and fitting it in with what is already known. Ellin Keene and Susan Zimmerman, the authors of Mosaic of Thought - Heinemann, 2007 state: “Synthesis is about organizing the different pieces to create a mosaic, a meaning, a beauty greater than the sum of each shiny piece.” The questions you answer in this category measure your ability to integrate information from different parts of the listening passage. The relationships you hear in the conversation or lecture may be explicit or implicit. To choose the right answer, you have to be able to identify and account for the relationships among the ideas and the details you hear. There are several types of Connecting Information questions. Let’s look at a few tips and tricks for you to succeed in answering this type of question: Organization of Information As you listen, remember that taking notes is vital. Find a system that works and practice it. A question regarding organization of information is usually worded in this fashion: “How is the information in the listening passage organized?” Or it might refer to a particular section of the passage. For example: “Why does the professor describe the magic trick? What is he trying to demonstrate?” These types of questions measure your understanding of how the speaker organized the information. In your notes, you can mark topic changes with a star or brackets. Circle introductions and conclusions, use symbols like + and to show for and against, or positive and negative. 
Mind mapping is also a good way of taking notes quickly. (See illustration below.) Also, listen for transition words. Words like first, second, and third will guide you through the speaker’s ideas. Relationships Among Ideas As you are taking notes and organizing the relationships and important information, your note-taking system can identify logical relationships between the speaker’s ideas and your notes. For example, you could show cause and effect with arrows, steps in a sequence, or comparisons with a quick chart. Here again, mind mapping is a useful visual tool to reveal organization of content. Making Inferences and Drawing Conclusions Making inferences and drawing conclusions is just that—listening to what is stated and drawing conclusions or inferring about unstated information. You will see questions that have key words such as imply, inferred, and/or what does the ____ mean…. When you see these key words in a question, you’ll know that you need to understand what was not explicitly stated. Let’s look at an example: Library Worker (man): Welcome to the university library. You look a little lost.* Woman: I think so. I have a paper due about Mount Everest on Monday. I’m not sure where to look for information. Library Worker (man): Absolutely. That’s what I’m here for.* From your notes, you can conclude that the woman is a student and the man works for the university’s library. Your notes will also reveal that the woman is most likely a student and she is looking for resources about Mount Everest at the library. The library worker responds positively. A good example of a question that asks you to infer or draw a conclusion might look like this: 1. What does the man imply when he says: (audio replay) “Absolutely. That’s what I’m here for.”? And the answer choices might look like this: • The man is going to help her visit Mount Everest. • The man is going to help her find the information she needs. • The woman wants to go to Mount Everest. • The man is here to reserve his book at the library. The man in this audio implies that he will help the woman find the information she needs.
Please use this identifier to cite or link to this item: http://scholarbank.nus.edu.sg/handle/10635/48648
Title: Identities on Hyperbolic Surfaces, Group Actions and the Markoff-Hurwitz Equations
Authors: HU HENGNAN
Keywords: McShane's identity, Roger's dilogarithm, Mapping class group, Character variety, Coxeter group, Hurwitz equation
Issue Date: 19-Aug-2013
Citation: HU HENGNAN (2013-08-19). Identities on Hyperbolic Surfaces, Group Actions and the Markoff-Hurwitz Equations. ScholarBank@NUS Repository.
Abstract: This thesis is mainly focused on identities motivated by McShane's identity. Firstly, by applying the Luo-Tan identity, we derive a new identity for a hyperbolic one-holed torus T. Secondly, we review the $SL(2,\mathbb{C})$ character variety X of $\pi_1(T) = < a, b >$ and the action of the mapping class group MCG of T on X. The Bowditch Q-conditions describe an open subset of X on which MCG acts properly discontinuously. We prove a simple and new identity for characters satisfying the Bowditch Q-conditions which generalizes McShane's identity. Thirdly, we can interpret the action of MCG on X as the action of the Coxeter group $G_3$ on $\mathbb{C}^3$ leaving invariant the varieties $x_1^2 + x_2^2 + x_3^2 = x_1 x_2 x_3 + \mu$, where $\mu \in \mathbb{C}$. We generalize the study to the action of the Coxeter group $G_m$ on $\mathbb{C}^m$, where $m \ge 4$, which leaves invariant the varieties described by the Hurwitz equations $x_1^2 + x_2^2 + \cdots + x_m^2 = x_1 x_2 \cdots x_m + \mu$. We formulate a generalization of the Bowditch Q-conditions and show that it describes an open subset of $\mathbb{C}^m$ on which $G_m$ acts properly discontinuously. Finally, we prove an identity for the orbit of any m-tuple in the subset.
URI: http://scholarbank.nus.edu.sg/handle/10635/48648
Appears in Collections: Ph.D Theses (Open)
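As a concrete illustration of the group action described in the abstract, here is a small sketch (my own, not from the thesis) of the Vieta-type involutions that generate the $G_m$ action on the Hurwitz variety $x_1^2 + \cdots + x_m^2 = x_1 \cdots x_m + \mu$: each generator replaces one coordinate by the product of the other coordinates minus that coordinate, and the quantity $\sum x_i^2 - \prod x_i$ is preserved along the orbit.

```python
from functools import reduce

def mu(x):
    """Invariant of the Hurwitz equation: sum of squares minus product."""
    return sum(t * t for t in x) - reduce(lambda a, b: a * b, x)

def vieta(x, i):
    """Generator of the G_m action: x_i -> (product of the other coordinates) - x_i."""
    prod_others = reduce(lambda a, b: a * b, (t for j, t in enumerate(x) if j != i))
    y = list(x)
    y[i] = prod_others - x[i]
    return tuple(y)

x = (3, 4, 5, 2)                    # an arbitrary 4-tuple, m = 4
print(mu(x))                        # value of mu for this orbit
print(mu(vieta(x, 0)))              # unchanged: the involution stays on the same variety
print(mu(vieta(vieta(x, 0), 2)))    # still unchanged further along the orbit
```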
MySQL Study Notes (5) -- mysqldump

mysqldump and the --set-gtid-purged setting

(1)  mysqldump

The mysqldump client utility performs logical backups, producing a set of SQL statements that can be executed to reproduce the original database object definitions and table data. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

mysqldump requires at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, LOCK TABLES if the --single-transaction option is not used, and (as of MySQL 5.7.31) PROCESS if the --no-tablespaces option is not used.

mysqldump advantages include the convenience and flexibility of viewing or even editing the output before restoring. You can clone databases for development and DBA work, or produce slight variations of an existing database for testing. It is not intended as a fast or scalable solution for backing up substantial amounts of data. With large data sizes, even if the backup step takes a reasonable time, restoring the data can be very slow because replaying the SQL statements involves disk I/O for insertion, index creation, and so on. For large-scale backup and restore, a physical backup is more appropriate, to copy the data files in their original format so that they can be restored quickly.

(2)  --set-gtid-purged=value

This option enables control over global transaction ID (GTID) information written to the dump file, by indicating whether to add a SET @@GLOBAL.gtid_purged statement to the output. This option may also cause a statement to be written to the output that disables binary logging while the dump file is being reloaded. The permitted option values are listed below; the default value is AUTO.

- OFF: Add no SET statement to the output.
- ON: Add a SET statement to the output. An error occurs if GTIDs are not enabled on the server.
- AUTO: Add a SET statement to the output if GTIDs are enabled on the server.

A partial dump from a server that is using GTID-based replication requires the --set-gtid-purged={ON|OFF} option to be specified. Use ON if the intention is to deploy a new replication slave using only some of the data from the dumped server. Use OFF if the intention is to repair a table by copying it within a topology. Use OFF if the intention is to copy a table between replication topologies that are disjoint and will remain so.

(3)  --set-gtid-purged and SET @@SESSION.SQL_LOG_BIN=0 in the exported file

The --set-gtid-purged option has the following effect on binary logging when the dump file is reloaded:

• --set-gtid-purged=OFF: SET @@SESSION.SQL_LOG_BIN=0; is not added to the output.
• --set-gtid-purged=ON: SET @@SESSION.SQL_LOG_BIN=0; is added to the output.
• --set-gtid-purged=AUTO: SET @@SESSION.SQL_LOG_BIN=0; is added to the output if GTIDs are enabled on the server you are backing up (that is, if AUTO evaluates to ON).

(4)  A worked example

Running mysqldump on an instance with GTID mode enabled, suppose the export command is:

/usr/local/mysql/bin/mysqldump --master-data=2 -u<username> -p<password> --databases <database1> <database2> --single-transaction -R --triggers > /data/dbdump/db1_db2_dump.sql

It prints the warning:

Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.
The head of the resulting dump file looks like this (elided):

...
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
SET @MYSQLDUMP_TEMP_LOG_BIN = @@SESSION.SQL_LOG_BIN;
SET @@SESSION.SQL_LOG_BIN= 0;
--
-- GTID state at the beginning of the backup
--
SET @@GLOBAL.GTID_PURGED='66fe6059-18c7-22e6-1d21-000c27cswbda:1908761-14';
--
-- Position to start replication or point-in-time recovery from
--
-- CHANGE MASTER TO MASTER_LOG_FILE='1ogbin-003413', MASTER_LOG_POS=999;
--
...

(5)  Extending the scenario

(Diagram omitted.)

(1) Import command: mysql -u<username> -p<password> < /data/dumprestore/db1_db2_dump.sql. This returns the error: ERROR 1840 (HY000) at line 24: @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty. (It can be cleared with reset master;.)

(2) However, if reset master is executed, the replication relationship between Server A2 and Server B2 breaks.

(3) If, before the import, the master and the slave are both a fresh cluster with no business databases, replication can be repaired by running reset master on both and rebuilding it: change master to master_host='<IP>', master_port=<port>, master_user='<username>', master_auto_position=1;

(4) Long execution time: importing with mysql takes far longer than exporting with mysqldump. In our tests, backing up and restoring two databases of 60 GB each took 55 minutes for the mysqldump export and about 5 hours for the mysql import.

(5) After loading the data, the replication between Server A2 and Server B2 can be repaired and rebuilt by taking and restoring an xtrabackup backup.

(6) If you do not want to break the Server A2 / Server B2 replication while importing, consider commenting out SET @@SESSION.SQL_LOG_BIN= 0; in the dump before importing.

(7) In addition, if the scenario is more complex - for example, the new cluster 321 needs to receive and synchronise data from several clusters (not only from cluster 123) - you can also consider setting up replication in the traditional way (specifying a binlog file + position). In that case, either run mysqldump with --set-gtid-purged set to OFF, or, after exporting, comment out both SET @@SESSION.SQL_LOG_BIN= 0; and SET @@GLOBAL.GTID_PURGED='??????????????';.

head -n 100 dump_file.sql | grep 'CHANGE MASTER TO'

posted @ 2021-10-20 00:26  东山絮柳仔  views(126)  comments(0)
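A small sketch of the edit suggested in points (6) and (7): commenting out the SQL_LOG_BIN and GTID_PURGED statements in a dump before replaying it. The file names are just examples, and the matching is deliberately simple - it assumes the statements appear at the start of their own lines, as mysqldump writes them here.

```python
# Comment out the GTID/binlog statements in a mysqldump file before import.
# Example file names; adjust to your own paths.
src, dst = "db1_db2_dump.sql", "db1_db2_dump.noGTID.sql"

with open(src, encoding="utf-8") as fin, open(dst, "w", encoding="utf-8") as fout:
    for line in fin:
        stripped = line.lstrip()
        if stripped.startswith("SET @@SESSION.SQL_LOG_BIN") or \
           stripped.startswith("SET @@GLOBAL.GTID_PURGED"):
            fout.write("-- " + line)   # '--' turns the statement into a SQL comment
        else:
            fout.write(line)
```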
# LCA 2019 Day 1 LCA is in Christchurch this year; the location in Christchurch dovetails nicely with the stereotypes, being flat, open, and full of creeks and nice gardens and parks. Christchurch scores in the swag department: a hessian sack that smells nice, not full of leaflets that I’ll end up throwing away, and a Raspberry Pi Zero, complete with a pre-loaded SD card, which is both appropriate and neat anyway. It also scores an acknowledgement to Ngai Tahu. Accomodation is close to the conference, and you can also score a cheap week long gym membership at the uni rec centre, which is a nice bonus. ## Conference Opening We are encouraged to obey the pacman rule; if you haven’t run into that before, groups would do others a kindness to leave a gap in conversational circles so there’s somewhere to join conversations. (I have one of those old person reveries where I am left gobsmacked by the fact that a conference can afford to give away what would, in my childhood, been unimaginable compute power.) The traditional “have you bothered showing up?” prize is announced; Chromebooks will be given to people who bother showing up to the morning sessions. ### Kathy Reid Kathy is the current Linux Australia president, and notes that it’s the 20th anniversay of the conference, and thanks the successive volunteer teams who have put on what is “one of the best regarded technical conferences in the world.” She would like to encourage us to joing Linux Australia - it’s free! ### Charity Raffle - Digital Future Aotearoa This year’s charity is Digital Future Aotearoa, and the price is a Lego Mindstorms kit. Digital Future Aotearoa is interested in closing the digital divide in New Zealand; in the first year in Christchurch an illustration of that was that in their first year, there was a 70:1 difference in uptake from students in their coding clubs depending on whether they lived in the west or east of Christchurch. It turned out the difference was the number of families with Internet in the home. Other initiatives include the she can code and {code club}/Aotearoa; finally there is Code Club 4 Teachers to help teachers in the classroom. Finally, they are running workshops for children of attendees. ## Chaining Some Blocks Together Josh Deprez A game-inspired programming language and associated web-based IDE written in Go. Shenzen Go - draw diagrams and write code. You can draw diagrams consisting of blocks, which pass messages between one another; each node(block) is logic, and the lines show the flow of information. Shenzen Go programs in, pure Go out. Everything is a Go concept. Each node is a goroutine, and each arrow is a channel. Producing pure Go allows for interop whith other go. • Write code where code makes the most sense. • Run n copies by tuning a parameter. • Take advantage of pipelines without even thinking. • you can seew the Go and visual representation side-by-side. “Conference Driven Development” - features tend to be delivered in the run-up to conferences. Giving a presentation is a dopamine hit, leading to a rush of development. Another dopamine rush are the watching, starring, and forking on git. Hard ideas: • Writing a “web app”in Go with GopherJS (>2,000 lines of Go). A JS framework would have been a lot easier. • Communicating over gRPC Web. It turns out this is a challenge, too - a lot of the frameworks are little-known. • Adding generics (1,000 lines of Go). 
This makes it a lot easier to implement the node intercommunication, because otherwise the type-incompatibility of different node input and outputs would leak the underlying Go types into the experience. Josh notes that while people are interested, but feels like he’s long way from a tipping point where many people use and contribute to it. ## Python++ Jan Groth (Speaker note: grey text on a black background on a projector are… not ideal for readability.) Python 3 is now more than ten years old. It’s an eon in computer terms. So why is Python 2 still around? Python 2 goes end of life this year. There are not more bug fixes or security updates from 2020 onwards. Move on! Tip 1 Even though most sysadmins aren’t writing tremendously version-dependent code, but you should use venv or pipenv to build your code anyway; it will make your life easier in the long run. Tip 2 Use the coding and naming conventions for Python. It takes a little up-front effort to learn, but it makes it easier for yourself and others later. Moreover, when picking filenames, choose names which are descriptive. File names are not movie spoilers. When your code is complex, while comments are important, the structure of your code is important. Extract code into methods if it doesn’t logically belong together. Beautiful Code is a matter of style, and, in Jan’s view, simplicity. If there are many ways to implement a piece of code, consider than it is probably best for your code to be pythonic, concise, and to make use of the built-ins. Don’t use a counter when you can use range; don’t use a loop when you can use a list comprehension. Tip 3 Comments should explain the why you are doing a thing, not what you are doing. The code is the what. Tip 4 Classes: people often don’t use classes, but they can make your code easier to understand in the long run. A class is a blueprint for objects; an object encapsulates state and behaviour. • Classes make re-use easy. • Sub-classing. • Classes are the door to a whole new world. Tip 5 Use the right tool for the job. Using an IDE really helps a lot. Removing unused imports, reformatting to python standards, using breakpoints effectively. This was a good talk - and perfect for an occasional Python programmer like myself. ## Lunch Being an idiot, I forgot to pack my laptop charger and had to dash off to a big box store to get a new one; since it was a little distance away, I tried out a Lime scooter. This created the entertaining-to-me vignette of Matthew Garret explaining to a small crowd why e-scooter hire is a horrible black hole of terrible companies doing terrible software and hardware, puntuated by a black Trans-Am \exercising it’s voice synth in the background, while, in the foreground I merrily signed up for the Lime application1 and plugged in my details and credit card. The Lime scooter I made off with was an interesting experience. They have a much higher deck and centre of gravity than my eMicro Condor, leading them to feel quite a bit more precarious, a sensation not helped by the fairly mediocre brakes. Other than that it was a good experience blatting around Christchurch’s flat streets and roomy footpaths/cycle lanes. ## Distributed Storage is Easier Now or THe Joy of Spreadsheets Josh Simmons “This,” Chris notes, “is the first time I have introduced a presentation by someone who is married to me.” “This is an ode to boring and uncool technology,” notes Josh. 
He was going through his grandfather’s estate and found in his paperwork a note urging “Don’t discard old techniques.” This resonated with him. So why spreadsheets - spreadsheets are familiar to everyone. Batteries are very much included - the “standard library” is use for maths, scientific functions, finance, you name it. They’re maintainable-ish; most importantly you don’t need a programmer to maintain a spreadsheet. They’re super extensible: formulae, macros (whch are Turing complete), data connectors. And they integrate with other office applications - email, databases, whatever. Josh is a CFO for the OSI, which means he is responsble for annual budgets, financial projections, scenario planning, and so on. Josh could give us many examples, but the one he’s going to show in the time he has available, is one he calls The Generator: mail merge with custom attachments. Those attachments might be acceptance letters, certificates, forms, or all manner of things; it takes a spreadsheet as a data source, and then merges this with a set of templates, and sends the mail with attachments. This is abou the equivalent of 0.6 of an FTE. The Generator also has dialogue-driven workflow, including presets and previews. Is this going too far? Maybe, Josh allowed, when he found himself writing a function called leftJoinSheets(), that was going too far. But other people have gone further. But back to the serious point: when thinking about chosing the right solution to problems, bear in mind that we can chose well-known technologies. And bear in mind that a spreadsheet is great if you need people who aren’t programmers to be able to maintain. ## How much do you trust that package? Benno Rice “Everyone knows npm is bad! Why is that!?” • It installs tonnes of packages! • Sometimes they disappear! • Sometimes they have malware! • It’s a central point of failure! Because these things really only happen to npm, right?! This is a supply chain problem - a term pulled from traditional manufacturing. Supply chain attacks have gained prominence in the last few years, most notably as a result of the Bloomberg story claiming that Apple and SUpermicro had been compromised. (Pity that story is almost certainly bullshit.) Sabotage isn’t the only possible problem; lack of maintenance, defects, and unavailability can cause you serious problems, too. And while the supply chain used to refer solely to hardware, the same concepts map onto software: do you understand the provenance of all your third party software? Do you think about the fact your compiler or language runtime are third-party components? Or that third parties have third parties? And that’s before you come to the malicious stuff - hackers and other adversaries who may be trying to break into your code, or break into someone else’s code via your code. You don’t even need to break into the software or repos - you could simply rely on confusion around “color” and “colour” in the name of packages. So why is npm mentioned so often? JavaScript relies on composition and libraries heavily, so the dependency tree is larger, yielding a larger supply chain; Electron is another reason, it’s incredibly popular. And, lastly and perhaps most importantly, contempt culture: a bunch of people like excuses to shit on JavaScript. So how do we stop this? • Support the maintainers. Recognition, like exposure, does not pay the bills. • Process: Some people (nerds) hate process. But you should have some. 
• When you select a thing, have some process around understanding what and why you’re pulling in dependencies. • Have a process to stay current. • Have a process to review and audit third-party code. Benno gives an example of flagging included libraries as known-good (checked, audited) and flagging them unknown for further scrutiny by the security team. So it’s not just npm. Everyone has this problem, and if they haven’t yet, they will. When you see these problems, you shouldn’t point and laugh; you should understand what went wrong and whether it could go wrong for you. Murali Suriar and Laura Nolan Murali and Laura created this talk to fill a gap around “doing load balancing” up and down the stack with modern applications. This is a “here are all the tools, and here are how they interact, and the things you should think about.” • LB failures are often dropped requests. • It’s always in your serving path. • Huge impact on the performance and resilience of your application. Let’s start with Superb Owls. You’ll probably have a DNS record that points to an IP address, with edge routers which advertise your network to the Internet via BGP (containing that address). This doesn’t give a lot of availability; you’ve only got one server. • This gives some availability (DNS and your network are seperate, for example). A simple improvement in performance and availability could be adding another server; this is a step function up. In this setup you then update the DNS record to round-robin the two addresses. Depending on the client obeys the spec - which is not a given in the real world! - you’ll have about half the load on each server. Unfortunately, when one server fails, half your clients will have timeouts. In this world, we pull the IP address out of the DNS records; it will take time to propogate (depending on the TTL). Most people either have very short (10 seconds or less) or long (1 day plus) TTLs. • Long TTLs: users will take a long time to see changes. • Short TTLs: higher DNS load, higher client load latency, more likely to notice any DNS outages or other problems, many clients don’t obey very short TTLs. • Very minimal high availability. Requires extra automation or manual intervention, and takes time to propogate changes on failure. • Flexibility: allows operators or automation tools to make changes, but the effect is delayed and uneven. An answer to the failover problem is to put a load balancer in front of your servers; you can go back to a single A record in the DNS. The network load balancer abstracts away the backends from the inbound traffic. It is, however entirely ignorant of the application itself. Network load balancers hash of a network address; a common choice is a has of a 5-tuple. The algorithm has to be careful to preserve state, since TCP tends to interpret packets being re-ordered by a load balancer as packet loss. Which causes the stack to slow down. This gives us a lot more power. We can change the back ends in a way which is completely invisible to the users. We get: • Good availability. • Good flexibility. However, it doesn’t give you a load balancer with any understanding of the load distribution or any content awareness. This is all great, but gives you no resilience if you lose a datacentre. So maybe you set up a second datacentre. There are a number of ways to do this; one is anycast which is “a whole other rant”; another is multiple A records, one per datacentre in the cluster. 
The problem with this is that you go right back to the TTL problems; a solution is to allow each datacentre to present all the IPs in the A records: normally each presents only its own address, but it will pick up the others if the other datacentres fail (I feel that explanation is not adequately clear).

Running short on time: there's a lot more in the slides. You should go read them, as they cover areas such as Layer 7 balancers. Think about which things matter to you out of the menu of topics.

## Prometheus Demystified

Simon Lyall

### Intro

• Prometheus is for metrics: name + timestamp + value
• Single daemon which scrapes exporters via HTTP; a 10-15 second interval is typical.
• Stored on local disk.
• Exports an API.
• The longer the metric, the more likely it is to cause problems.

### Getting Data

• Exporters:
  • Gather metrics from a source.
  • Expose an HTTP endpoint.
  • Around 100 different ones available.
• Applications can also expose metrics of their own.
  • Your applications should expose them.
• Prometheus can gather metrics at every level of the stack, and you should: hypervisor/cloud, VM, container, k8s, load balancers, app servers, etc.

Problems:
• You can be pulling thousands of metrics per server.
• There can be overlaps of metrics with slightly different labels and values for the same thing.
• Some cost money - for example, pinging CloudWatch every 10 seconds will cost you a fortune (running into the thousands per month).
• Many applications aren't instrumented.

So what should you get?
• Small: standard exporters, black-box the edge; the textfile collector via node_exporter for anything you're especially interested in.
• Medium: many standard exporters.
• Large: instrumented code, federation and summaries.

### Service Discovery

• Small: static discovery.
• Medium to Large: template it all.
  • It's hard to find good templates.
  • There's a lot of trash on the Internet, sadly.
• Not going into detail, but you should think about:
  • Silences are important for maintenance.
  • Labels are critical once you've got more than one team.
  • Look at PagerDuty / VictorOps / Opsgenie / Pagertree once it starts getting important.
  • Opsgenie and Pagertree have a free tier.

### Storage

• Problem: it's a random-write-heavy workload.
• Read queries may run against large amounts of data.
• TSDB is very good up to a point, but:
  • No resilience.
  • Hard to back up.
  • Can corrupt.
  • Replacements are new and hard to run.
• If you're:
  • Small: back up regularly and roll back in the event of an outage.
  • Medium: back up regularly and run two instances in parallel.
  • Large: look at Thanos, M3, InfluxDB.
• Federate to scale.

### Display

• The built-in dashboard is… there.
• The API can be used by other tools. Grafana is the only one Simon has found.
  • If Grafana doesn't do it, you're shit out of luck.
• Very well-known and well-tested.
• Talks to Alertmanager, as well.
• Downside is there are a lot of dashboards, but most of them aren't that great quality-wise.
  • Sometimes they are buggy, too.

1. Pleasingly enough it seems to use only a minimal set of permissions and does not, say, attempt to hoover up contacts or the ability to poke around other applications you may be running.
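To make the "your applications should expose metrics of their own" point from the Prometheus notes above concrete, here is a minimal sketch using the official prometheus_client Python library; the metric names and port are made up for the example and are not from the talk.

```python
# Minimal self-instrumented app: exposes /metrics on :8000 for Prometheus to scrape.
import random
import time

from prometheus_client import start_http_server, Counter, Histogram

REQUESTS = Counter("demo_requests_total", "Requests handled")            # example name
LATENCY = Histogram("demo_request_seconds", "Time spent handling a request")

def handle_request():
    with LATENCY.time():                 # records the elapsed time into the histogram
        time.sleep(random.random() / 10)
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)              # Prometheus scrapes http://host:8000/metrics
    while True:
        handle_request()
```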
Volume 338 - High Energy Astrophysics in Southern Africa (HEASA2018) - HEASA 2018 Speakers Spectral Variability Signatures of Relativistic Shocks in Blazars M. Böttcher,* M. Baring *corresponding author Full text: pdf Published on: July 29, 2019 Abstract Mildly relativistic, oblique shocks are frequently invoked as possible sites of relativistic particle acceleration and production of strongly variable, polarized multi-wavelength emission from relativistic jet sources such as blazars, via diffusive shock acceleration (DSA). In recent work, we had self-consistently coupled DSA and radiation transfer simulations in blazar jets. These one-zone models determined that the observed spectral energy distributions (SEDs) of blazars strongly constrain the nature of the hydromagnetic turbulence responsible for pitch-angle scattering. In this paper, we expand our previous work by including full time dependence and treating two emission zones, one being the site of acceleration. This modeling is applied to a multiwavelength flare of the flat spectrum radio quasar 3C~279, fitting snap-shot SEDs and light curves. We predict spectral hysteresis patterns in various energy bands as well as cross-band time lags with optical and GeV $\gamma$-rays as well as radio and X-rays tracing each other closely with zero time lag, but radio and X-rays lagging behind the optical and $\gamma$-ray variability by several hours. DOI: https://doi.org/10.22323/1.338.0031 How to cite Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete. Open Access Copyright owned by the author(s) under the term of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
## Algebra 1 $-13$ We start with the given expression: $2a-b^2+c$ We plug in values for $a$, $b$, and $c$: $2(-1)-(3)^2+(-2)$ We simplify according to the order of operations: First, we simplify powers: $2(-1)-9+(-2)$ Then, we multiply: $-2-9+(-2)$ Finally, we add and subtract from left to right: $-11+(-2)=-13$
# Differentiation for Physics - Notes | Study Physics For JEE

Function: If the value of a quantity y (say) depends on the value of another quantity x, then y is a function of x, i.e. y = f(x). The quantity y is called the dependent variable and the quantity x is called the independent variable.

For example, y = 2x² + 4x + 7 is a function of x:
(i) When x = 1, y = 2(1)² + 4(1) + 7 = 13
(ii) When x = 2, y = 2(2)² + 4(2) + 7 = 23
As the value of y depends on the value of x, y is a function of x.

Differential coefficient or derivative of a function

Let y = f(x) …. (1)
That is, the value of y depends upon the value of x. Let ∆x be a small increment in x, so that ∆y is the corresponding small increment in y; then
y + ∆y = f(x + ∆x) …. (2)
Subtracting (1) from (2), we get ∆y = f(x + ∆x) − f(x).
Dividing both sides by ∆x gives ∆y/∆x = [f(x + ∆x) − f(x)]/∆x, where ∆y/∆x is called the average rate of change of y w.r.t. x. Now let ∆x be as small as possible, i.e. ∆x → 0 (read as delta x tends to zero). Then the differential coefficient or derivative of y w.r.t. x is
dy/dx = lim (∆x → 0) ∆y/∆x = lim (∆x → 0) [f(x + ∆x) − f(x)]/∆x.

Theorems of Differentiation
1. If y = C, where C is a constant: dy/dx = 0
2. If y = xⁿ, where n is an integer: dy/dx = n xⁿ⁻¹
3. If y = Cu, where u is a function of x and C is a constant: dy/dx = C du/dx
4. If y = u ± v ± ω ± …, where u, v and ω are functions of x: dy/dx = du/dx ± dv/dx ± dω/dx ± …
5. If y = u v, where u and v are functions of x: dy/dx = u dv/dx + v du/dx
6. If y = v/u, where u and v are functions of x: dy/dx = (u dv/dx − v du/dx)/u²
7. If y = uⁿ, where u is a function of x: dy/dx = n uⁿ⁻¹ du/dx

Differential coefficients of trigonometric functions

Example: Differentiate the following w.r.t. x: (i) sin 2x (ii) x sin x
(i) Let y = sin 2x; then dy/dx = cos 2x · d(2x)/dx = 2 cos 2x
(ii) Let y = x sin x; then dy/dx = x cos x + sin x (product rule)

Differential coefficients of logarithmic and exponential functions

Example: Differentiate the following w.r.t. x: (i) (logₑ x)² (ii) log(ax + b)
(i) Let y = (logₑ x)²; then dy/dx = 2 logₑ x · (1/x)
(ii) Let y = log(ax + b); then dy/dx = a/(ax + b)

Example: If S = 2t³ − 3t² + 2, find the position, velocity and acceleration of a particle at the end of 2 s. S is measured in metres and t in seconds.
S = 2t³ − 3t² + 2
When t = 2 s, S = 2 × 8 − 3 × 4 + 2 = 6 m
Velocity v = dS/dt = 6t² − 6t; when t = 2 s, v = 24 − 12 = 12 m/s
Acceleration a = dv/dt = 12t − 6; when t = 2 s, a = 24 − 6 = 18 m/s²
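A quick check of the last worked example using sympy (my own verification sketch, not part of the original notes): differentiate S = 2t³ − 3t² + 2 and evaluate at t = 2 s.

```python
import sympy as sp

t = sp.symbols('t')
S = 2*t**3 - 3*t**2 + 2           # position in metres, t in seconds

v = sp.diff(S, t)                 # velocity = dS/dt = 6t^2 - 6t
a = sp.diff(v, t)                 # acceleration = dv/dt = 12t - 6

print(S.subs(t, 2), v.subs(t, 2), a.subs(t, 2))   # 6 m, 12 m/s, 18 m/s^2
```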
# Correlation between numeric and ordinal variables What test can I use to test correlation between an ordinal and a numeric variable? I think linear regression (taking numeric variable as outcome) or ordinal regression (taking ordinal variable as outcome) can be done but none of them is really an outcome or dependent variable. Which test can I use here? Will Pearson's, Spearman's or Kendall's correlation work here? Thanks for your insight. • So the predictor variable can have a series of values, which can be set in order, but it makes no sense to calculate differences (like kindergarten, primary school, high school, college) and the predicted variable is a continuous variable, varying within a range, right? – Dirk Horsten Jul 23 '15 at 13:07 • And all you want to proof is that there is a dependency, you are not trying to model anything? – Dirk Horsten Jul 23 '15 at 13:22 • Yes, I want to determine correlation between class (like kindergarten etc) and age, but dependency and I am not trying to model anything. – rnso Jul 23 '15 at 13:29 • Please visit stats.stackexchange.com/q/103253/3277 which shows some of possible ways. – ttnphns Jul 23 '15 at 13:31 • That is a very useful link on this topic. I am not restricting to non-parametric methods and would like to know if there are any parametric methods also. – rnso Jul 24 '15 at 3:38 proc glm data=myTable;
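One way to act on the rank-based options mentioned in the question: Spearman's and Kendall's correlations use only the ordering of the ordinal variable. A minimal sketch with scipy, using made-up class/age data of my own (not from the thread):

```python
import numpy as np
from scipy import stats

# Hypothetical data: school stage coded by rank order
# (0 = kindergarten, 1 = primary, 2 = high school, 3 = college), plus age in years.
stage_rank = np.array([0, 0, 1, 1, 2, 2, 3, 3])
age = np.array([5.5, 6.0, 9.0, 10.5, 15.0, 16.0, 19.0, 21.0])

rho, p_rho = stats.spearmanr(stage_rank, age)
tau, p_tau = stats.kendalltau(stage_rank, age)
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Kendall tau  = {tau:.2f} (p = {p_tau:.3f})")
```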
## On the maximal ergodic theorem for certain subsets of the integers. (English) Zbl 0642.28010

Giving an affirmative answer to a problem considered by A. Bellow [Lect. Notes Math. 945, 429-431 (1982)] and H. Furstenberg (Proc. Durham Conf., June 1982), the author proves the following deep and interesting result: Let $$(X,\mu,T)$$ be a dynamical system; then $$\frac{1}{n}\sum_{k\leq n}T^{k^2}f$$ converges almost surely for any $$f\in L^2(X,\mu)$$; more generally, $$k^2$$ can be replaced by an arbitrary polynomial function p(k) with integer coefficients. The problem is reduced to the proof of an inequality $$\| M_n f\|_2\leq C\| f\|_2,\quad f\in L^2(X,\mu),$$ where $$M_n f$$ is the “maximal function” $$\sup_{j\leq n}\Big|\Big(\sum_{k\leq j}T^{p(k)}f\Big)\Big/\operatorname{card}\{k: p(k)\leq j\}\Big|.$$ This inequality can be proved by showing the corresponding inequality for the special system $$({\mathbb{Z}},\lambda,T)$$, $$\lambda$$ the counting measure, T the shift. This can be done by Fourier transform methods and careful estimates of exponential sums (Gauss sums if $$p(k)=k^2$$); estimates from A. Sárközy’s paper [Acta Math. Acad. Sci. Hung. 31, 125-149 (1978; Zbl 0387.10033)] are used, and the case $$p(k)=k^t$$ is associated with the Waring problem. The method of major arcs is of fundamental importance, based on I. M. Vinogradov [The method of trigonometric sums in the theory of numbers (1954; Zbl 0055.275; Russian original 1947; Zbl 0041.370)] and R. C. Vaughan [The Hardy-Littlewood method (1981; Zbl 0455.10034)]. As consequences one obtains results on uniform distribution, e.g. $$\frac{1}{n}\sum_{m\leq n}f(x+m^t\alpha)\to \int f(x)dx$$ almost surely for $$\alpha\not\in {\mathbb{Q}}$$ and $$f\in L^{\infty}({\mathbb{R}}/{\mathbb{Z}})$$, or $$\frac{1}{n}\sum_{m\leq n}f(2^{mt}x)\to \int f(x)dx$$ a.s. for $$f\in L^{\infty}({\mathbb{R}}/{\mathbb{Z}})$$, generalizing the Riesz-Raikov result for $$t=1$$. Furthermore the author obtains corresponding results for commuting transformations and pointwise ergodic theorems for random sets for $$f\in L^p$$, $$p>1$$.

Reviewer: H. Rindler

### MSC:

28D05 Measure-preserving transformations
11L40 Estimates on character sums
42A05 Trigonometric polynomials, inequalities, extremal problems
11K06 General theory of distribution modulo $$1$$
42B25 Maximal functions, Littlewood-Paley theory

Full Text:

### References:

[1] J. Bourgain, Théorèmes ergodiques ponctuels pour certains ensembles arithmétiques, C.R. Acad. Sci. Paris 305 (1987), 397–402.
[2] J. Bourgain, On the pointwise ergodic theorem on L^p for arithmetic sets, Isr. J. Math. 61 (1988), 73–84, this issue. · Zbl 0642.28011
[3] A. Bellow, Two Problems, Lecture Notes in Math. 945, Springer-Verlag, Berlin, pp. 429–431.
[4] A. Bellow and V. Losert, On sequences of density zero in ergodic theory, Contemp. Math. 26 (1984), 49–60. · Zbl 0587.28013
[5] H. Furstenberg, Proc. Durham Conf., June 1982.
[6] R. Lidl and H. Niederreiter, Finite Fields, Encyclopedia of Mathematics and its Applications, 20, Addison-Wesley Publ. Co., 1983.
[7] J. M. Marstrand, On Khinchine’s conjecture about strong uniform distribution, Proc. London Math. Soc. 21 (1970), 540–556. · Zbl 0208.31402
[8] A. Sárközy, On difference sets of sequences of integers, I, Acta Math. Acad. Sci. Hung. 31 (1978), 125–149. · Zbl 0387.10033
[9] E. Stein, Beijing Lectures in Harmonic Analysis, Ann. Math. Studies, Princeton University Press, 1986, p. 112. · Zbl 0595.00015
[10] R. C. Vaughan, The Hardy-Littlewood Method, Cambridge Tracts, 80 (1981).
· Zbl 0455.10034 [11] Vinogradov,The Method of Trigonometrical Sums in the Theory of Numbers, Interscience, New York, 1954. · Zbl 0055.27504 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
# Simplify integral of inverse of derivative. I need to simplify function $g(x)$ which I describe below. Let $F(y)$ be the inverse of $f'(\cdot)$ i.e. $F = \left( f'\right)^{-1}$ and $f(x): \mathbb{R} \to \mathbb{R}$, then $$g(x) =\int_a^x F(y)dy$$ Is it possible to simplify $g(x)$? - By "inversion" do you mean the inverse, $(f')^{-1}(x)$, or the reciprocal, $\frac{1}{f'(x)}$? –  Arturo Magidin Jun 4 '12 at 5:05 thank you for question. I meant the inverse $(f')^{-1}(x)$ –  ashim Jun 4 '12 at 5:12 Let $t = F(y)$. Then we get that $y = F^{-1}(t) = f'(t)$. Hence, $dy = f''(t) dt$. Hence, we get that \begin{align} g(x) & = \int_{F(a)}^{F(x)} t f''(t) dt\\ & = \left. \left(t f'(t) - f(t) \right) \right \rvert_{F(a)}^{F(x)}\\ & = F(x) f'(F(x)) - f(F(x)) - (F(a) f'(F(a)) - f(F(a)))\\ & = xF(x) - f(F(x)) - aF(a) + f(F(a)) \end{align}
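A quick numeric sanity check of the closed form derived above, $g(x) = xF(x) - f(F(x)) - aF(a) + f(F(a))$, for the concrete choice $f(x) = e^x$ (so $f'(x) = e^x$ and $F(y) = \ln y$); this is just my own verification sketch, not part of the accepted answer.

```python
import sympy as sp

x, y, a = sp.symbols('x y a', positive=True)

f = sp.exp            # f(x) = e^x, so f'(x) = e^x and F = (f')^{-1} = log
F = sp.log

g_integral = sp.integrate(F(y), (y, a, x))                 # definition of g
g_formula = x*F(x) - f(F(x)) - a*F(a) + f(F(a))            # closed form from the answer

print(sp.simplify(g_integral - g_formula))                  # 0, so the two agree
```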
# Thread: Integration by substitution? 1. ## Integration by substitution? Can anyone give me any advice on solving this integral? ((x^2-1))/((square root(2x-1)) Would I use u=x^2-1 then du=2x ? 2. ## Re: Integration by substitution? Let u = 2x - 1 . 3. ## Re: Integration by substitution? Originally Posted by homeylova223 Can anyone give me any advice on solving this integral? ((x^2-1))/((square root(2x-1)) Would I use u=x^2-1 then du=2x ? It's often easier to pick your sub as what's in the square root as this gives the nicer $\sqrt{u}$ when it comes to integrating. $\dfrac{x^2-1}{\sqrt{2x-1}} = \dfrac{x^2}{\sqrt{2x-1}} - \dfrac{1}{\sqrt{2x-1}}$ For the first term sub $u = 2x-1 \Leftrightarrow du = 2 dx$ and also $x = \dfrac{1}{2}(u+1) \Leftrightarrow x^2= \dfrac{1}{4}(u+1)^2 = \dfrac{1}{4}(u^2+2u+1)$ This gives a first term of $\dfrac{1}{4} \int \dfrac{u^2+2u+1}{\sqrt{u}}\ du = \dfrac{1}{4}\left( \int \dfrac{u^2}{\sqrt{u}}\ du + \int \dfrac{2u}{\sqrt{u}}\ du + \int \dfrac{1}{\sqrt{u}}\ du\right)$ When that's integrated we get $\dfrac{1}{4} \left(\dfrac{2}{5}u^{5/2} + \dfrac{4}{3}u^{3/2} + 2u^{1/2} + C'\right) = \dfrac{1}{10}u^{5/2} + \dfrac{1}{3}u^{3/2} + \dfrac{1}{2}\sqrt{u} + C_1$ where $C_1 = \dfrac{1}{4}C'$ which is the constant of integration. Can you integrate the second term using the same u-sub 4. ## Re: Integration by substitution? Hello, homeylova223! $\int \frac{x^2-1}{\sqrt{2x-1}}\,dx$ When the radicand is linear, I let $u$ equal the entire radical. $\text{Let: }\,u \,=\,\sqrt{2x-1} \quad\Rightarrow\quad x \,=\,\frac{u^2+1}{2} \quad\Rightarrow\quad dx \,=\,u\,du$ . . $x^2-1 \:=\:\left(\frac{u^2+1}{2}\right)^2 - 1 \:=\:\frac{u^4 + 2u^2 - 3}{4}$ Substitute: . $\int\frac{\frac{u^4+2u^2 - 3}{4}}{u}\,u\,du \;=\;\tfrac{1}{4}\int(u^4 + 2u^2 - 3)\,du$ . . $=\;\tfrac{1}{4}\left(\tfrac{1}{5}u^5 + \tfrac{2}{3}u^3 - 3u\right) + C \;=\;\tfrac{u}{60}\left(3u^4 + 10u^2 - 45) + C$ Back-substitute: . $\frac{\sqrt{2x-1}}{60}\bigg[3(2x-1)^2 + 10(2x-1) - 45\bigg] + C$ . . . . . . . . . . . $=\;\frac{1}{60}\sqrt{2x-1}\,(12x^2 + 8x - 52) + C$ . . . . . . . . . . . $=\;\frac{1}{15}\sqrt{2x-1}\,(3x^2 + 2x-13) + C$
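A quick way to confirm the final antiderivative in the last post is to differentiate it and compare with the original integrand; a sympy sketch of my own:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

integrand = (x**2 - 1) / sp.sqrt(2*x - 1)
antideriv = sp.Rational(1, 15) * sp.sqrt(2*x - 1) * (3*x**2 + 2*x - 13)

# Difference of the derivative and the integrand simplifies to 0,
# so the antiderivative checks out (up to the constant of integration).
print(sp.simplify(sp.diff(antideriv, x) - integrand))
```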
Uncategorized

# distance from point to line segment

In geometry, one might define point B to be between two other points A and C if the distance AB added to the distance BC is equal to the distance AC. Rule 1 is simply that the distance between two points is the length of the straight line segment connecting them. One and only one line segment can be drawn between two given points A and B; this line segment is called AB, and it may also be called BA (line BA is the same as line AB, since both pass through the same two points A and B). Equivalently, a line segment is the convex hull of two points, and the point that is an equal distance from the endpoints of a line segment is the midpoint. Note that both ends of a line can go to infinity, i.e. a line has no ending points, whereas a line segment is only a part of a line, with a fixed length. Lines, line segments, and rays are found everywhere in geometry, and using these simple tools you can create parallel lines, perpendicular bisectors, polygons, and so much more. In a Cartesian grid, for a line segment that is either vertical or horizontal, you can count the distance simply by moving up and down the y-axis or across the x-axis.

When we talk about the distance from a point to a line, we mean the shortest distance. The distance of a point from a line is the length of the shortest line segment from the point to the line; it is the length of the segment that is perpendicular to the line and passes through the point, and in the figure above this is the distance from C to the line. (When a point is the same distance from two distinct lines, we say that the point is equidistant from them, and a perpendicular bisector of a triangle is a line, segment, ray, or plane that is perpendicular to a side of the triangle at the side's midpoint. In a typical textbook exercise, the segment that represents "the distance from Y to the line" is the segment drawn from Y perpendicular to the line.) In plane geometry a point C(x, y) may be on line L (either within or outside segment AB); when it is on the line, the distance is zero. If C(x, y) is not on line L, then imagine larger and larger circles centred at C (of increasing radius r) that grow until a circle first touches line L. This radius is the "shortest" distance to line L, and this radius is perpendicular to line L. The length of each line segment connecting the point and the line differs, but by definition the distance between point and line is the length of the line segment that is perpendicular to L - in other words, it is the shortest distance between them - and hence in the worked example pictured the answer is 5.

The distance d from a point (x0, y0) to the line ax + by + c = 0 is

$d=\frac{\left\lvert a x_{0}+b y_{0}+c \right\rvert}{\sqrt{a^{2}+b^{2}}}.$

The absolute value sign is necessary since distance must be a positive value, and certain combinations of the coefficients and coordinates can produce a negative number in the numerator. The formula reduces to a simpler form if the point is at the origin: $d=\frac{\lvert c \rvert}{\sqrt{a^{2}+b^{2}}}.$

To derive it, let Q(x0, y0) be the point, let P(x1, y1) be a point on the line, and let $\vec{n}=(a,b)$ be a vector normal to the line that starts from point P. We can see from the figure that the distance d is the orthogonal projection of the vector $\vec{PQ}$ onto the normal, so from trigonometry $d=\left\| \vec{PQ} \right\| \cos\theta$. Now multiply both the numerator and the denominator by the magnitude of the normal vector:

$d=\frac{\left\| \vec{PQ} \right\| \left\| \vec{n} \right\| \cos\theta}{\left\| \vec{n} \right\|}=\frac{\vec{PQ}\cdot\vec{n}}{\left\| \vec{n} \right\|},$

where $\vec{PQ}=(x_{0}-x_{1},y_{0}-y_{1})$, $\vec{PQ}\cdot\vec{n}=a(x_{0}-x_{1})+b(y_{0}-y_{1})$ and $\left\| \vec{n} \right\|=\sqrt{a^{2}+b^{2}}$, so

$d=\frac{\left| a(x_{0}-x_{1})+b(y_{0}-y_{1}) \right|}{\sqrt{a^{2}+b^{2}}}=\frac{\left| a x_{0}-a x_{1}+b y_{0}-b y_{1} \right|}{\sqrt{a^{2}+b^{2}}}.$

From the equation of the line we have $c=-a x_{1}-b y_{1}$, which gives the formula above. In one worked example the numerator evaluates to -6 for a line with a = 3 and b = 4, so $d=\frac{\left| -6 \right|}{\sqrt{3^{2}+4^{2}}}=\frac{6}{5}$; for the line 2x + 4y - 5 = 0 and the point (-3, 2), $d=\frac{\left| 2(-3)+4(2)-5 \right|}{\sqrt{2^{2}+4^{2}}}=\frac{3}{2\sqrt{5}}$. (An online calculator can find the distance between a given line and a given point directly, and free video tutorials walk through the same formula.)
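A tiny check of the formula against the second worked example above (my addition; plain Python, no external libraries):

```python
from math import hypot

def point_line_distance(a, b, c, x0, y0):
    """Distance from (x0, y0) to the line a*x + b*y + c = 0."""
    return abs(a * x0 + b * y0 + c) / hypot(a, b)

# Line 2x + 4y - 5 = 0 and point (-3, 2): expect 3 / (2*sqrt(5)) ~= 0.6708
print(point_line_distance(2, 4, -5, -3, 2))
```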
Distance from a point to a line segment. Given the coordinates of the two endpoints A(x1, y1), B(x2, y2) of a line segment and the coordinates of a point E(x, y), the task is to find the minimum distance from the point to the line segment. There are many ways to calculate this distance; a classic note, "Minimum Distance between a Point and a Line", written by Paul Bourke (October 1988), describes the technique and gives the solution to finding the shortest distance from a point to a line or line segment. (The same question comes up in 3D: "I have a 3D point P and a line segment defined by A and B, where A is the start point of the segment and B the end point; I want to calculate the shortest distance between P and the line segment AB, but the only points I know for the segment are the start and end points.")

Approach: the idea is to use the concept of vectors, converting the line and point to vectors. Compute AB = (x2 - x1, y2 - y1), BE = (x - x2, y - y2) and AE = (x - x1, y - y1); to decide which case applies, dot products have to be found between the vectors AB, BE and AB, AE. Assuming that the direction of vector AB is from A to B, there are three cases that arise:

1. If AB . BE > 0, the nearest point on the segment is B, and the answer is the distance EB.
2. If AB . AE < 0, the nearest point on the segment is A, and the answer is the distance EA.
3. Otherwise E projects onto the interior of the segment, and the answer is the perpendicular distance from E to the line through A and B. The distance between a point C and line segment AB equals the area of the parallelogram ABCC' divided by the length of AB, so it can be written as simply distance = |AB X AC| / sqrt(AB * AB), where X means the cross product of vectors and * the dot product. This applies in both two-dimensional and three-dimensional space.

Examples.
Input: A = {0, 0}, B = {2, 0}, E = {4, 0}. Output: 2. Here AB = (2 - 0, 0 - 0) = (2, 0), BE = (4 - 2, 0 - 0) = (2, 0) and AE = (4 - 0, 0 - 0) = (4, 0), so AB . BE = (2 * 2 + 0 * 0) = 4 and AB . AE = (2 * 4 + 0 * 0) = 8. Since AB . BE > 0, the nearest point from E to the line segment is point B, and the minimum distance is BE = 2.
Input: A = {0, 0}, B = {2, 0}, E = {1, 1}. Output: 1. Here E projects onto the interior of the segment, so the answer is the perpendicular distance, which is 1.
(For a 3D illustration, consider the point and the line segment shown in Figures 2 and 3, with coordinate inputs - line: start (1, 0, 2), end (4.5, 0, 0.5); point: (2, 0, 0.5). In Figure 2 the Y coordinates of the line and point are zero, so both lie in the XZ plane.)

An alternative is to treat the segment as a parameterized vector in which the parameter t varies from 0 to 1, and to find the value of t that minimizes the distance from the point to the line. The equation of a line defined through two points P1 (x1,y1) and P2 (x2,y2) is P = P1 + u (P2 - … A point on the line segment has coordinates X = Pt1.X + t*dx, Y = Pt1.Y + t*dy, where dx and dy are the coordinate differences of the endpoints; t = 0.0 is point A and t = 1.0 is point B. If t is between 0.0 and 1.0, then the point on the segment that is closest to the other point lies on the segment itself; otherwise the closest point is one of the segment's end points (if t falls outside that range, the foot of the perpendicular lies beyond the ends of the segment, in the red or green areas of the picture). In this formulation we can minimize the distance squared between the point and the line segment, and the value of t that we find will also minimize the non-squared distance.

Distance from a point to a ray or segment (any dimension n). A ray R is a half line originating at a point P0 and extending indefinitely in some direction; it can be expressed parametrically as P(t) for all t >= 0, with P(0) = P0 as the starting point. A finite segment S consists of the points of a line that are between two endpoints P0 and P1. Again, it can be represented by a parametric equation with P(0) = P0 and P(1) = P1 as the endpoints and the points P(t) for 0 <= t <= 1 as the segment points; thus the line segment can be expressed as a convex combination of the segment's two end points. The thing that is different about computing distances of a point P to a ray or a segment, rather than to an infinite line, is that the closest point may be an endpoint rather than the foot of the perpendicular. Writing the end points of the line segment as B and B + M, the closest point on the underlying line to P is the projection of P onto the line, Q = B + t0*M, where t0 = M . (P - B) / (M . M); the distance from P to the line is D = |P - (B + t0*M)|. If t0 <= 0, then the closest point on the ray (or the segment) to P is B itself.
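The three-case analysis translates directly into code. Here is a minimal sketch in plain Python (my addition - not the article's own implementation); it reproduces the two example outputs above, 2 and 1, and assumes A and B are distinct points:

```python
from math import hypot

def min_dist_point_segment(ax, ay, bx, by, ex, ey):
    """Minimum distance from point E to the segment AB (A != B assumed)."""
    abx, aby = bx - ax, by - ay          # vector AB
    bex, bey = ex - bx, ey - by          # vector BE
    aex, aey = ex - ax, ey - ay          # vector AE

    if abx * bex + aby * bey > 0:        # AB . BE > 0  -> B is the nearest point
        return hypot(bex, bey)
    if abx * aex + aby * aey < 0:        # AB . AE < 0  -> A is the nearest point
        return hypot(aex, aey)
    # Otherwise E projects onto the segment: perpendicular distance
    # = |AB x AE| / |AB|  (parallelogram area divided by base length).
    return abs(abx * aey - aby * aex) / hypot(abx, aby)

print(min_dist_point_segment(0, 0, 2, 0, 4, 0))   # 2.0
print(min_dist_point_segment(0, 0, 2, 0, 1, 1))   # 1.0
```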
So given a line of the form ax + by + c = 0 and a point (x0, y0), the perpendicular distance can be found by the formula above; the same formula gives the distance from the point (m, n) to the line Ax + By + C = 0. Higher dimensions all follow the same pattern. In general, the distance from a point to a line segment is either the perpendicular distance or the distance to the closest vertex (endpoint), and the distance between polylines is likewise determined by segment vertices. (Location.distanceTo covers only the simpler case of the distance from one location to another location.)

Method 1: use equations of lines. 1. Find the equation of the second line through the point (its slope is the negative reciprocal of the original slope). 2. Calculate the point at which this new line intersects the existing line. 3. Use the distance formula between the point and the intersection. In 3D it is pretty much the same, except that in step 2 you will be calculating a plane instead of a line. You'll also want to deal with the special case where the point you find in step 3 is past the ends of your line segment.

Point-to-segment distance is a common programming problem for beginners, and ready-made implementations are easy to find: a "2D point to line segment distance" function is shared as a GitHub gist, geometry libraries provide linear-linear distance queries (line-line, line-ray, line-segment, ray-ray, ray-segment, segment-segment), and the DistanceSegmentsRobust files have a new implementation for segment-segment distance that is robust and works in any dimension. The ability to automatically calculate the shortest distance from a point to a line is not available in MATLAB; to work around this, the last step is to code a robust, documented, and readable MATLAB function of the form function d = …
# Tex Mex Delivery in Norfolk Eagles' Nest 4.5 ### Rite Aid (840 SOUTH MILITARY HIGHWAY) Rite Aid (840 SOUTH MILITARY HIGHWAY) New ## Order Postmates in Norfolk The best Tex Mex - it’s out there, it’s probably nearby, you want it, and we have it. You want it delivered. We get it. Maybe, you want it in the morning, in the evening, or late at night when you’re in the office on a rainy day. Satisfying your craving for Tex Mex does not have to be hard. We want to help, and that is why Postmates is always ready to get you Tex Mex at any time, when you want it, right at your door. ### Which Tex Mex spots deliver with Postmates in Norfolk? You can order Tex Mex delivery in Norfolk with Postmates. Try one of the most popular spots in town, like El Rey 2 . ### What are the best Tex Mex spots that deliver in Norfolk? The best-rated Tex Mex in Norfolk are El Rey 2 . ### How many Tex Mex spots are available in Norfolk? Norfolk has a number of Tex Mex places offering delivery with Postmates. ### What places can I get free Tex Mex delivery from in Norfolk? Delivery fees can vary depending on where you are. Enter your delivery address to see which Tex Mex spots offer free delivery to your location. ### Which Tex Mex spots in Norfolk offer curbside pickup? If you’re in Norfolk, there are a number of Tex Mex spots that offer pickup on Postmates. Open Postmates and tap ‘Pickup’ to see options. ### What should I order for Tex Mex delivery in Norfolk? Some of the suggested items for Tex Mex delivery are: Barbacoa Tacos, Shrimp Tacos, and Burritos.
# Tuesday, September 29, 2009

### Is zero a triangular number?

Neil Sloane's On-Line Encyclopedia of Integer Sequences, which everyone who works seriously (or recreationally) with integer sequences regards as the ultimate authority, lists the triangular numbers as sequence A000217, and unambiguously includes 0 as a triangular number: 0, 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91, 105, 120, 136, 153, 171, 190, 210, 231, 253, 276, 300, 325, 351, 378, 406, 435, 465, 496, 528, 561, 595, 630, 666, 703, 741, 780, 820, 861, 903, 946, 990, 1035, 1081, 1128, 1176, 1225, 1275, 1326, 1378, 1431,...

It was brought to my attention that all the references on this blog to triangular numbers (and to all other polygonal numbers) have omitted the 0 and started at 1. Please bear this omission in mind when browsing these pages - sometimes you might want to add a zero to the beginning of the sequences (but sometimes maybe not). I am consoled, only somewhat, by the fact that wikipedia and mathworld (two other august authorities on sequence-related matters) also omit the zero from their triangular number lists.

This reminds me of the age-old question: Is zero a natural number? Putting the definition of the natural numbers on the chalkboard is always dangerous - you risk having some student insist that your definition is wrong, wrong, wrong because they were taught that natural numbers included (or excluded) zero, and yours doesn't. It seems that most sources, including Sloane's OEIS, say that 0 is not natural (see A000027). I suspect that (most) mathematicians do not care (much) about this - they just redefine the term "natural number" to be what they need it to be at the moment they happen to be using it. If they need a zero, they add a zero, and move on. Wikipedia suggests that when you encounter a situation where it might matter, one should use $\mathbb{N}_0$ when zero is to be included, and $\mathbb{N}$ when it isn't (or when it doesn't matter).

The comparison between the triangulars and the naturals is not spurious. Wikipedia defines the triangulars as sums of the naturals (I, somewhat strangely, tend to think of naturals as one-dimensional triangular numbers). If this is your chosen definition then you are likely not to include zero, and you might use this formula: $t_n = \sum^{n}_{i=1}i.$ However, if we want to rehabilitate this particular formula for the triangulars + 0, we just need to adjust the index: $t_n = \sum^{n}_{i=0}i$

What about other triangular number formulas? Can they all include zero too? Well, the simplest, $t_n = \frac{n(n+1)}{2}$ works just fine when you let $n=0$. Sometimes, when we are thinking of how they relate to the binomial coefficients, we might want to use this formula: $t_n = \left( \begin{array}{c} n +1 \\2\end{array} \right)$ This might give you pause, because when $n = 0$ we seem to be "out of bounds." Luckily we have: $\left(\begin{array}{c} n \\r \end{array}\right) = 0 \mbox{ for } r > n$ Which is exactly what we need.

As far as the posts on this blog are concerned, the only way of expressing the triangulars that needs obvious modification in order to work for the triangulars + 0 is the generating function: $g(x) = \frac{1}{\left( 1-x \right)^{3}} = 1 +3x + 6x^2 + 10x^3 + ...$ which gives the triangulars as the coefficients on the right hand side.
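A quick numerical check of the closed forms above - this snippet is my addition, not part of the original post - confirms that the summation, the formula n(n+1)/2 and the binomial-coefficient form all agree, with t_0 = 0 included:

```python
from math import comb

def tri_sum(n):                      # sum definition, with the index started at 0
    return sum(range(n + 1))

for n in range(10):
    closed = n * (n + 1) // 2
    binom = comb(n + 1, 2)           # C(n+1, 2), which is 0 when n = 0
    assert tri_sum(n) == closed == binom
    print(n, closed)                 # 0 0, 1 1, 2 3, 3 6, 4 10, ...
```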
To have a generating function for the triangulars + 0 you need to modify this to be: $g(x) = \frac{x}{\left( 1-x \right)^{3}} = 0 +x +3x^2 + 6x^3 + 10x^4 + ...$ Multiplying by $x$ is the generating-function equivalent to shifting indexes, which is what we had to do for our first formula. Thanks to Alexander Povolotsky for bringing these issues to light. ## Sunday, September 27, 2009 ### means and trigonometric ratios I recently noticed that two earlier posts contain an identical diagram (surprising how these things slip by). Both instances of the diagram come from old high school textbooks, one dedicated to geometry and the other to algebra. The first occurrence of the diagram was intended to provide an explanation for the names of the  "secant" and "tangent" trig ratios. The diagram occurred again to provide a geometric construction for the arithmetic, geometric, and harmonic means of two lengths. If we merge the two uses of the diagram, we get some simple identities that relate the means of the lengths to the trig ratios of an angle. Proving them uses only the definitions of the means, the pythagorean theorem and the basic trig ratio definitions. Consider two lengths, a and b. Assume that $a \leq b$, and construct the segments as shown. PQ is the length a and PR is the length b. From here, form the circle whose diameter is RQ as shown.With O as the center, construct the tangent to the circle from the point P. Mark the point of tangency as S. The angle POS is marked $\theta$. Note that the radius of the circle is $r = \frac{b-a}{2}$, and the arithmetic mean am, geometric mean gm, and harmonic mean hm are given by: $am = \frac{a+b}{2}$ $gm = \sqrt{ab}$ $hm = \frac{2}{\frac{1}{a}+\frac{1}{b}}$ As described briefly in the earlier post, these ratios appear in the construction where the length OP is arithmetic mean, the length PT is the harmonic mean, and the length SP is the geometric mean of a and b. If you explore the diagram further with the angle $\theta$ in mind, you'll find that $am = r\sec{\theta}$,  $gm = r\tan{\theta}$, and $hm = r\sin{\theta}\tan{\theta}$ (note that the constructed arithmetic mean lies on the secant of the circle, and the constructed geometric mean lies on the tangent). Also, we have $\frac{am}{gm}=\frac{gm}{hm} = \csc{\theta}$ Maybe there is nothing too surprising here, but I like that two important sets of ratios - the trigonometric ratios and the means - are connected by a simple construction. I found a variaiton on this diagram in A text book of geometrical drawing by William Minifie - it attempts to capture quite a few constructed ratios (including the antiquated versed sine) in a single diagram. ## Wednesday, September 23, 2009 ### envelope of the Wallace line I was looking at  Heinrich Dorrie's 100 Great Problems of Elementary Mathematics, and problem 53, which involves a surprising hypocycloid construction, caught my attention. The problem is "to determine the envelope of the Wallace line of a triangle" and the solution is "Steiner's three-pointed hypocycloid." The construction of Stiener's hypocycloid  lends itself well to GSP, and also shows how naming coincidences lead to strange juxtapositions. If you google "Wallace Line" you'll find that the Internet knows not about a mathematical construction, but rather about a line that divides Indonesia into two ecological regions whose fauna are generally described as Australian on one side and Asian on the other. 
This Wallace line is named after Alfred Wallace, the naturalist who is known for prompting Charles Darwin to publish his Origin of Species. Searching a little more, you will find that the line that we are concerned with is more frequently called the Simson-Wallace line - that name  might remind you of Wallace Simpson, famous for her marriage to Prince (formerly King) Edward in 1936. Wallace Simpson, not Simson-Wallace Names aside, the Wallace line construction extends just a little from the construction of the circumcircle, and then the Steiner hypocycloid emerges when we look at the family of all Wallace lines. The triangle and circumcircle 1. Construct the perpendicular bisectors of the sides of the triangle 2. Construct the circumcenter as the point of intersection of the bisectors 3. Construct the circumcircle, C, whose center is the circumcenter and whose perimeter crosses the verticies of the original triangle. The Simson-Wallace (or just plain Wallace) Line 4. Choose a point P on the circumcircle C 5. Form the three lines through P perpendicular to each side. 6. Form the line, w,  that passes through the points of intersection of each perpendicular with its respective side - this is the Wallace Line. The envelope and "Steiner hypocycloid" We want to explore the family of Wallace lines as the point P moves around the circle. In GSP we can do this using the Locus construction (I think that families of lines are generally called a "pencil" rather than "locus", and that the latter term is usually reserved for families of points, but this distinction might be antiquated). 7. Select the Wallace line, w, the point P, and the circle C 8. Construct the locus generated by w as P moves about C A GSP file for the construction is here. Some good descriptions of the Simson-Wallace line and this construction can be found on Cut the Knot and Wolfram Mathworld. There is an interactive activity for constructing the Simson-Wallace Line on the NCTM Illuminations site here. ### a bit more origami This is just a footnote to an earlier post on origami. There is a really nice TED talk by Robert Lang where he explains why we like to use mathematics to solve problems (like, how to make paper bugs with legs): it lets us have dead people do our work for us. A bit more blunt than the "standing on the shoulders of giants" metaphor, but same idea. A similar talk is available here, courtesy of the MAA. The Between the Folds blog recently pointed out a nice online National Geographic article and blog post on origami that touch on points that are developed a bit more fully in the Lang lectures. Finally, here is a trio of great origami blogs: The Fitful Flog, Origami Tessellations, and Student Flotsam and Origami Jetsam. ## Tuesday, September 15, 2009 ### Rosencrantz, Guildenstern & the gambler's fallacy The opening of Tom Stoppard's play has Rosencrantz and Guildenstern  flipping coins and noticing that the 'laws of probability' seem to be suspended (you should check out the opening scene of the film here, or read the opening of the play here) - the coin always comes up heads. What the characters are experiencing certainly defies common sense, but perhaps it is common sense that is (at least in part) in the wrong. A good question to ask is, would Rosencrantz and Guildenstern be as alarmed if the coin flips alternated exactly between heads and tails? They should be, but if their psychology is anything like most people's, it would probably take them longer to clue in to the problem. 
A 'perfectly fair' coin that alternates between heads and tails is just as absurd (and just as unlikely) as a 'completely unfair' coin that always turns up heads, but it seems to be closer to our expectations of how coins should behave. In his book, Innumeracy, John Allen Paulos presents this problem (as a contest between Peter and Paul, rather than Rosencrantz and Guildenstern), and notes that most people, like our protagonists, expect coin flips to be evenly distributed between heads and tails (but perhaps not perfectly so).  Paulos sets up the problem like this: Imagine two players, Peter and Paul, who flip a coin once a day and who bet on heads and tails, respectively. Peter is winning at any given time if there have been more heads up until then, while Paul is winning if there’ve been more tails. Would you expect Peter or Paul to have a long winning (or losing) streak? Or would you expect them to usually alternate between being the winner (or loser)? Paulos continues: Peter and Paul are each equally likely to be ahead at any given time, but whoever is ahead will probably have been ahead almost the whole time. The false expectation that the results should more-or-less alternate in order to 'average out' is sometimes referred to as the (false) 'law of averages' or the gambler's fallacy. The (true) law of large numbers, which asserts that for a large samples the number of heads will be roughly equal to the number of tails, does not imply that a head becomes more likely after a string of tails, or vice versa. The plots below show a game of 100 coin flips - the law of large numbers (regression to the mean) is apparent in the first plot, while the second plot, in which the number of tails is subtracted from the number of heads, shows no evidence for the law of averages (heads is ahead most of the time). By undermining our expectation, this  simple experiment provides a moment of disequilibration, and, as Paulos suggests, it may help us realize that we shouldn't put too much emphasis on winning or loosing streaks in games, sport, finance, or life - they are a normal occurrence. A Fathom file to explore the coin game simulation is here, and a completed file that includes plots and charts is here. ## Thursday, September 10, 2009 ### geometric programming and trig functions One of the most helpful ways to think about how to interact with dynamic geometry programs is to consider 'sketching' as programming (an unhelpful way to think of sketching is to think of it as drawing). This orientation is explained nicely by R. Nicholas Jackiw and William F. Finzer in Programming by Geometry: Constructing a sketch in GSP is programming, in the straightforward sense of building a functional system which maps input to output. The unconstrained elements of the sketch... constitute the program's inputs or parameters. The relationships between parts of the sketch ... correspond to a program's production statements. In GSP's case, the semantics of the production language are governed by traditional Euclidean constructions. The remarkable characteristic of GSP's system comes from the realization that a program's structure -- i.e. its "source code" -- and its output are isomorphic. When the student completes the specification, or coding, of the centroid construction in the above example, he or she has at the same time located a specific centroid. By manipulating the vertices of the triangle (the program's inputs), the student generates further output. 
Significantly, these manipulations are performed in the same domain, that of planar geometric objects, as the act of constructing the initial sketch. Playing with hypocycloid constructions recently reminded me how nicely Sketchpad explorations help make the connection between circular motion and trig functions. For example, the sketches here include one that was inspired by a simple harmonic motion demo, and one that explores lissajous figures. A nice overview that includes some similar sketches is given in a talk, Trig Comes Alive, by Scott Steketee from last year's NCTM annual conference. ## Wednesday, September 9, 2009 ### a neglected sequence Looking at some old texts (from the 1930s, mostly) I came across a type of sequence that was once part of the standard curriculum along side the familiar arithmetic and geometric varieties.  As far as I know, this third sequence type, harmonic sequences, is no longer part of any standard high school curriculum (please correct me if I am wrong). Arithmetic sequences have a common difference between terms $t_{n}=t_{n-1}+d$ Geometric sequences have a common ratio $t_{n}=t_{n-1}r$ Harmonic sequences have the defining property that the reciprocals of their terms have a common difference (i.e. their reciprocals form an arithmetic sequence). $\frac{1}{t_{n}}=\frac{1}{t_{n-1}} +d$ or $t_{n}=\frac{t_{n-1}}{1+ dt_{n-1}}$ When you set the first term and the difference to 1, you get a harmonic sequence that has come to be known as the harmonic sequence, namely: $1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, ...$ If a, b, c are three three terms of an arithmetic sequence, the middle term b is the 'arithmetic mean' of a and c, which is generally what we mean by "the mean" or average of a and c: $b = \frac{a+c}{2}$ Similarly, if a, b, c are three terms of a geometric sequence, then b is the 'geometric mean' of a and c: $b = \sqrt{ac}$ Not too surprisingly then, if a, b, and c form a harmonic sequence, then b is the 'harmonic mean' of a and c: $b = \frac{2ac}{a+c}$ What might be surprising at first is that if you have two numbers, say a and c, then the harmonic, geometric and arithmetic means of a and c form a geometric series (or put another way, the geometric mean of a and c is also the geometric mean of the harmonic and arithmetic means of a and c). The algebra textbooks that I looked at did not present any applications of harmonic means or sequences (the exercises were restricted to formula manipulation), and the harmonic sequence does not seem to provide a very applicable 'model of growth', which is how the other sequences are generally presented. Harmonic relationships come up frequently in geometry, however, and one text did feature a nice construction for all three means (see below). In this construction (GSP file here), O is the center of the circle, and PR is a secant that goes through O. SP is tangent to the circle, and ST is at right angles to the secant PR. If we let a = PQ and b = PR, PO gives the arithmetic mean of a and b, PS gives their geometric mean and PT gives their harmonic mean. (Proof left as an exercise :) I couldn't locate the texts I consulted in Google books, but for some other examples of how harmonic sequences were presented, see these: H. S. Hall, S. R. Knight, Higher algebra: a sequel to elementary algebra for schools. Macmillan, 1894. (page 47) O. Lodge, Easy mathematics, chiefly arithmetic..., Macmillan, 1906. 
(page 339) ## Tuesday, September 1, 2009 ### scrambler fractal The image at the top of the post shows the first five generations of the family of curves obtained from the 'scrambler' construction that I described briefly in the last post. These curves are generated by the equations $y=\sum_{i=0}^n \pm \frac{1}{2^i}\sin(2^i\theta)$ and $x=\sum_{i=0}^n \frac{1}{2^i}\cos(2^i\theta)$ where n is the generation (starting at 0), and the sign of the coefficients in the expression for y are chosen to yield the different branches of the family. In the diagram above, if you choose all positive coefficients, you get the curves on the extreme left, while if you choose positive for the first term but negative for the rest, you get the curves on the extreme right. The curves formed by choosing alternate + and - signs are the ones most closely related to the 'scrambler' that got this started.  This choice has each circle turning in the direction opposite to the circle that preceded it, and generates the 'propeller' curves that lie in the center right of the diagram above. One way to express this branch of the family is $y=\sum_{i=0}^n \frac{1}{(-2)^i}\sin(2^i\theta)$ and $x=\sum_{i=0}^n \frac{1}{2^i}\cos(2^i\theta)$ GSP was used for the first few generations, but to look at this for very large n, I resorted to writing a short (surprisingly short) Processing program that gave these pictures for generations 0-7, and 20: Looking at the images above and the one below, you can see that as n goes to infinity a fractal emerges that displays nice self-similarity along each propeller blade. A text file with the Processing source code for drawing the fractal is here.
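The original Processing source isn't reproduced here, but the curves are easy to regenerate from the two sums above. The following short sketch is my addition (Python, with numpy and matplotlib assumed available); it draws the alternating-sign "propeller" branch, i.e. coefficients (-1/2)^i on the sine terms, for a few generations:

```python
import numpy as np
import matplotlib.pyplot as plt

def scrambler(n, theta):
    """Generation-n scrambler curve, alternating-sign branch."""
    x = sum((1 / 2**i) * np.cos(2**i * theta) for i in range(n + 1))
    y = sum(((-1 / 2)**i) * np.sin(2**i * theta) for i in range(n + 1))
    return x, y

theta = np.linspace(0, 2 * np.pi, 20000)
for n in (0, 1, 2, 7):
    x, y = scrambler(n, theta)
    plt.plot(x, y, linewidth=0.6, label=f"n = {n}")
plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```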
# FindRoot to evaluate a function that becomes multivalued as a parameter varies

I am having a problem solving this equation:

s[t_?NumericQ, l_?NumericQ] := FindRoot[s - Sin[t - (l*s)] == 0, {s, t, l}, Method -> "Automatic"][[1, 2]]

When I try to plot it with this:

Manipulate[Plot[{s[t*(Pi), l], Sin[t*(Pi)]}, {t, 0, 4}], {l, 0, 10}]

it works for l < 1, but beyond that the function has an infinite derivative at some point and becomes multivalued. My problem is not only to plot it but also to use it in a differential equation (it represents the current-phase relationship in a particular superconducting weak link). Please, could you help me define this function correctly and possibly avoid the multiple solutions by imposing that, at the points where the derivative is infinite, the function should "jump" to the next smaller value (for increasing t) and to the larger value (for decreasing t)? Thanks a lot to whoever will help me.

• Consider "{s, t, l}" from your code. This means "start search from t". As is obvious from your code, the root is always between -1 and 1; it does not make sense to start the search with t>1. Try e.g. "s,0". Nov 9 '21 at 15:31
• Hi Daniel Huber, thanks for your suggestion. Of course it was a mistake on my side. I now use {s,0} but the problem is still there. I should replace FindRoot with something that finds all the existing roots. Any suggestion? – Alex Nov 9 '21 at 16:45

This can be done as a PDE. I show below the combination of derivative and initial conditions that gave a usable result.

deriv = D[s[l, t] - Sin[t - l*s[l, t]], l];
inits = {s[l, 0], s[0, t] - Sin[t]};
max = 10;
soln = NDSolveValue[Flatten[{deriv == 0, Thread[inits == 0]}], s[l, t], {t, 0, max}, {l, 0, max}];

Plot the result:

Plot3D[soln, {t, 0, max}, {l, 0, max}, PlotPoints -> 50]

• Hi Daniel, thanks for your answer, but I was not able to get your result. If I try to run your code I get tons of error messages. Please, could you post the whole code you used (including the definition of my function)? Thank you. – Alex Nov 9 '21 at 16:16
• Hmm... I just copy/pasted the lines above and it worked fine. Even after clearing all symbols used in the Global context. So I'm not sure what is going wrong for you. Do any of the symbols already have values in your session? Nov 9 '21 at 20:13
• OK. Now it works, but the solution does not look like what I expect at all. This should be a distorted sinusoid that becomes multivalued for bigger l. – Alex Nov 10 '21 at 10:57
• If you set max=20 you might see something more like what you expect. As for the multi-valued part, I think that by using a differential equation you get the desired continuity that imposes single-valuedness. Nov 10 '21 at 14:41
• The function has to be multivalued; I will then manually discard the intermediate points between the maximum and the minimum. This will be a hysteretic function that jumps discontinuously from higher values to lower ones when approaching from the left at t1, and from lower values to higher ones at t2 when approaching from the right, with t1 > t2. I hope this makes sense. I want to plot everything and then try to discard the solutions that have no physical sense. – Alex Nov 10 '21 at 15:18
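Not part of the thread above, but for anyone who wants to reproduce the behaviour outside Mathematica: the idea of following a solution branch by feeding the previous root back in as the starting guess can be sketched in Python (numpy and scipy assumed available). For l > 1 the traced curve develops near-vertical segments where the true relation becomes multivalued:

```python
import numpy as np
from scipy.optimize import fsolve

def trace_branch(l, n=2000):
    """Follow one solution branch of s = sin(t - l*s) by continuation in t."""
    ts = np.linspace(0, 4 * np.pi, n)
    ss = np.empty_like(ts)
    s_guess = 0.0
    for i, t in enumerate(ts):
        # Use the previous solution as the starting guess for the next t step.
        s_guess = fsolve(lambda s: s - np.sin(t - l * s), s_guess)[0]
        ss[i] = s_guess
    return ts, ss

ts, ss = trace_branch(l=3.0)   # for l > 1 the branch shows abrupt jumps
```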
# Mạng và viễn thông P18 (Networks and telecommunications P18)

Networks and Telecommunications: Design and Operation, Second Edition. Martin P. Clark. Copyright © 1991, 1997 John Wiley & Sons Ltd. ISBNs: 0-471-97346-7 (Hardback); 0-470-84158-3 (Electronic)

PART 3 MODERN DATA NETWORKS

Packet Switching

Packet switching emerged in the 1970s as an efficient means of data conveyance. It overcame the inability of circuit-switched (telephone) networks to provide efficiently for variable bandwidth connections for bursty-type usage as required between computers, terminals and storage devices. In this chapter we discuss the basics of packet switching and ITU-T's X.25 recommendation, nowadays the worldwide technical standard interface to packet-switched networks. We then also go on to discuss the IBM company's SNA (systems network architecture), a proprietary form of packet switching, important because of its dominant role in IBM computer networks.

18.1 PACKET SWITCHING BASICS

Packet switching is so-called because the user's overall message is broken up into a number of smaller packets, each of which is sent separately. We illustrated the concept in Figure 1.10 of Chapter 1. Each packet of data is labelled to identify its intended destination, and protocol control information (PCI) is added, as we saw in Chapter 9, before it is sent. The receiving end re-assembles the packets in their proper order, with the aid of sequence numbers and the other PCI fields. Each packet is carried across the network in a store-and-forward fashion, taking the most efficient route available at the time. Packet switching is a form of statistical multiplexing, as we discovered in Chapter 9. Figure 18.1 illustrates how a link within a packet switching network is used to carry the jumbled-up packets of various different messages and the use of the information carried in the packet header to sort arriving packets at the destination end into the separate logical channels, virtual circuits (VCs) or virtual calls (VCs). Transmission capacity between pairs of nodes in a packet-switched network is generally not split up into rigidly separate physical channels, each of a fixed bandwidth. Instead, the entire available bandwidth between two nodal points (switches) in the network is bundled together as a single high bitrate pipe, and all packets to be sent between the two endpoints of the link share the same pipe (Figure 18.1). In this way, the entire bandwidth (i.e. full bit speed) can be used momentarily by any of the logical channels sharing the connection. This means that individual packets are transported more quickly and bursts of transmission can be accommodated.

Figure 18.1 The statistical multiplexing principle of packet switching

A problem arises when more than one or all logical channels try to send packets at once.
This is accommodated by buffers at the sending and receiving ends of the connection, as shown in Figure 18.2. These delay some of the simultaneous packets for an instant until the line becomes free. By use of buffers as shown in Figure 18.2, it is possible to run the transmission link at very close to 100% utilization. This is achieved by sharing the capacity between a number of end devices (each with a logical channel). The statistical average of the total bitrate of all the logical channels must be slightly lower than the line bitrate so that all packets may be carried, but at any individual point in time the buffers may be accumulating packets or emptying their contents to the line. Packet switching is able to carry logical channels of almost any average bitrate. Thus a 128 kbit/s trunk between two packet switches might carry 6 logical channels of mixed and varying bitrates 5.6 kbit/s, 11.4 kbit/s, 12.3 kbit/s, 22.1 kbit/s, 28.7 kbit/s, 43.0 kbit/s and still have capacity to spare. This compares with the two channels which a telephone network would be able to carry using the same trunk capacity. (The excess capacity of the telephone channels simply has to be wasted, and the other four channels cannot be carried.)

Figure 18.2 The use of a buffer to accommodate simultaneous sending of packets by different logical channels

18.2 TRANSMISSION DELAY IN PACKET-SWITCHED NETWORKS

When using the trunks in a packet-switched network at very close to full utilization, very large buffers are required for each of the logical channels, to smooth out the bursts from individual channels into a smooth output for carriage by the line. (This is rather like having a very large water reservoir, collecting water during showers of rain, and varying in water depth, but always capable of outputting a constant volume of water for municipal use (Figure 18.3). The water reservoir is analogous to the data buffers, the showers of rain to the bursts of data information, and the constant output to the information carried by the line.) We can make sure that the packets accumulated in the buffer are despatched on a first-in-first-out (FIFO) basis to fairly share out the queueing delays which result, but it is critical to ensure that the queueing delay does not become unacceptably long. The chance of a very long delay is much greater when close to 100% utilization of the line is expected. (Imagine waiting in line for a bus, all of the seats of which had to be full before it pulled away; either the bus doesn't come very often, or there is a very long queue to ensure that all the seats can be filled.) A certain amount of queueing delay caused by buffering is not noticeable to computer users (a fraction of a second is a very long queueing delay in packet switching network terms). Even if a typed character did not appear on the computer screen until a fraction of a second after hitting the keyboard, the user is unlikely to notice. A variation in the delay (sometimes a fraction of a second, and sometimes no delay) is also unimportant. (The fact that some characters appear on the screen more quickly than the maximum delay will not be noticed.) On the other hand, once the average delay becomes much longer, then computer work may become frustrating, so that much longer queueing delays are unacceptable. There is an entire statistical science used to estimate queueing delays. The most important formula is the Erlang call-waiting formula, which we will discuss in Chapter 30.
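The sensitivity of queueing delay to utilization described above is easy to see numerically. The sketch below is my own illustration (not from the book, and not the Erlang formula itself): it simulates a single FIFO buffer with random packet arrivals and random transmission times, and prints the mean queueing delay as the line utilization rises towards 100%:

```python
import random

def mean_wait(utilization, n_packets=200_000, service_time=1.0):
    """Mean queueing delay in a single shared FIFO buffer (simple simulation)."""
    random.seed(1)
    arrival, free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_packets):
        arrival += random.expovariate(utilization / service_time)  # inter-arrival gap
        start = max(arrival, free_at)                               # wait if line busy
        total_wait += start - arrival
        free_at = start + random.expovariate(1.0 / service_time)   # transmission time
    return total_wait / n_packets

for u in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {u:.0%}: mean queueing delay ~ {mean_wait(u):.1f} packet times")
```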
In simple terms, however, the unacceptability of long queueing delays means that the …

ROUTING IN PACKET-SWITCHED NETWORKS

… during periods of sudden surge in demand resulting from simultaneous packet bursts by many logical channels sharing the same path. The second type of routing, datagram routing, allows for more dynamic routing of individual packets (Figure 18.5), and thus has the potential for better overall network efficiency. The technique, however, requires more sophisticated equipment, and powerful switch processors capable of determining routes for individual packets. Packet switching gives good end-to-end reliability; with well-designed switches and networks it is possible to bypass network failures (even during the progress of a call). Packet switching is also efficient in its use of network links and resources, sharing them between a number of calls, thereby increasing their utilization.

18.4 ITU-T RECOMMENDATION X.25

Most packet-switched networks use the protocol standards set by ITU-T's recommendation X.25. This sets out the manner in which a data terminal equipment (DTE) should interact with a data circuit terminating equipment (DCE), forming the interface to a packet-switched network. The relationship is shown in Figure 18.6. The X.25 recommendation defines the protocols between DTE (e.g. personal computer or terminal controller (e.g. IBM 3174)) and DCE (i.e. the connection point to a wide area network, WAN) corresponding to OSI layers 1, 2 and 3 (Figure 18.7), which we learned about in Chapter 9. The physical connection may either be X.21 (digital leaseline) or X.21 bis (V.24/V.28 modem in conjunction with an analogue leaseline: Chapter 9). Alternatively, the X.31 recommendation (Chapter 10) specifies how the physical connection (DTE/DCE) may be achieved via an ISDN (integrated services digital network). Finally, recommendation X.32 specifies the use of a dial-up connection for a packet mode connection via the telephone or ISDN network to an X.25 packet exchange. The X.25 recommendation itself defines the OSI layer 2 and layer 3 protocols. These are called the link access procedures (LAPB and LAP) and the packet level interface. The link access procedure assures the correct carriage of data across the link connecting DTE …

Figure 18.6 The X.25 interface to packet switched networks
# number of ways of expressing a number as sum of 5 squares modulo 10

Look at the function $r_5(n)$, which is defined as the number of ordered integer tuples $(a,b,c,d,e)$ which satisfy $a^2+b^2+c^2+d^2+e^2= n$. Now, I have conjectured that the units digit of $r_5(n)$ is 2 when $n$ is of the form $5p^2$, and its units digit is 0 when $n$ is not of that form. Is there any proof for my conjecture? I have checked this for very large values of $p$ (up to $p = 100$). But can this be proved?

• For what it's worth, these numbers are tabulated at oeis.org/A038671 (but for positive $a,\dots,e$). – Gerry Myerson Jul 19 '13 at 12:28
• I wouldn't call $p=100$ very large :) – Thomas Andrews Jul 19 '13 at 12:28
• Requesting to close this for a week, as this is a Brilliant problem. – Calvin Lin Jul 19 '13 at 14:38
• @CalvinLin I've deleted my answer. Wondering, how does Stack Exchange support closing a question for a period - is it something that needs to be re-opened explicitly, or is that a SE mechanism for temporary closures? – Thomas Andrews Jul 19 '13 at 14:43
• @ThomasAndrews Thanks. I'm not too sure what their procedure is. Currently, I flag the question and a moderator closes or deletes it. I would favorite the question, so that I can flag it for reopening in a week. – Calvin Lin Jul 19 '13 at 14:45
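For readers who want to reproduce the numerical check, here is one way to tabulate $r_5(n)$ efficiently (an editorial sketch in Python, not part of the original question; the bound N is arbitrary). It builds the counts by convolving the one-square representation counts five times and prints the units digits for the $5p^2$ family and any exceptions found:

    # Tabulate r_5(n) for all n <= N by convolving the one-square counts five times.
    N = 500
    r1 = [0] * (N + 1)
    k = 0
    while k * k <= N:
        r1[k * k] += 1 if k == 0 else 2      # a = 0 gives one choice, a = +-k gives two
        k += 1

    r = [1] + [0] * N                         # r_0(n): only n = 0 has a representation
    for _ in range(5):
        new = [0] * (N + 1)
        for i, ri in enumerate(r):
            if ri:
                for j in range(N + 1 - i):
                    if r1[j]:
                        new[i + j] += ri * r1[j]
        r = new                               # after five passes, r[n] equals r_5(n)

    for p in range(1, 11):                    # n = 5 p^2 up to N
        n = 5 * p * p
        print(n, r[n], "units digit:", r[n] % 10)

    squares5 = {5 * p * p for p in range(1, 11)}
    exceptions = [n for n in range(1, N + 1) if r[n] % 10 != 0 and n not in squares5]
    print("n <= 500 with nonzero units digit outside the 5*p^2 family:", exceptions)

This lets you confirm the observed pattern on a modest range before attempting a proof.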
# Integration over an Ellipsoidal Domain - Clarification

1. Aug 7, 2009

### tim85ruhruniv

1. The problem statement, all variables and given/known data

I want to integrate a function over an ellipsoidal domain.

$$\underset{\left(\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}-1\right)}{\intop\intop\intop}f\left(x,y,z\right)\,dx\,dy\,dz$$

I have already looked into the above thread and posts describing this, and I found them a bit too difficult to understand, hence I tried out a solution of my own.

2. Question A: Please could you tell me if my solution is right.

3. The attempt at a solution

I stay in the Cartesian coordinate system but make a transformation of variables, and hence I also find the Jacobian determinant for the volume transformation:

$$x=ua,\quad y=vb,\quad z=wc$$

and hence I get

$$\underset{\left(u^{2}+v^{2}+w^{2}-1\right)}{\intop\intop\intop}f\left(ua,vb,wc\right)\left[abc\right]\,du\,dv\,dw$$

Now this looks like I have a domain that is a unit sphere, and to make things easier I transform from the Cartesian coordinate system to the spherical coordinate system with the standard transformation rules, and I get

$$\intop_{0}^{2\pi}\intop_{0}^{\pi}\intop_{0}^{1}f\left(\frac{a}{r}\cos\varphi\sin\theta,\frac{b}{r}\sin\varphi\sin\theta,\frac{c}{r}\cos\theta\right)\left[\left[abc\right]r^{2}\sin\theta\right]\,dr\,d\theta\,d\varphi$$

Hence, to confirm whether it's right, I just have to assume the function = 1, and if I integrate I must get the volume of the ellipsoid:

$$\intop_{0}^{2\pi}\intop_{0}^{\pi}\intop_{0}^{1}\left[\left[abc\right]r^{2}\sin\theta\right]\,dr\,d\theta\,d\varphi$$

which is simple to integrate and which exactly gives me the volume of an ellipsoid, $$\frac{4}{3}\pi abc.$$

2. Question B: Now my function is a Dirac delta function which is itself a function of certain vectors. When I integrate my Dirac delta function over the ellipsoid as above, I get strange results.

Thanks a lot.

Tim

Last edited by a moderator: Apr 24, 2017
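As a quick numerical cross-check of the change of variables (an editorial sketch in Python/NumPy, not part of the original post; the semi-axes and grid size are arbitrary), one can evaluate the transformed triple integral with f = 1 and compare it with (4/3)πabc:

    import numpy as np

    # Midpoint-rule check of the substitution x = u a, y = v b, z = w c followed by
    # spherical coordinates (u, v, w) = (r sin(theta) cos(phi), r sin(theta) sin(phi), r cos(theta)).
    # With f = 1 the integral of the Jacobian a*b*c*r^2*sin(theta) over
    # [0,1] x [0,pi] x [0,2pi] should give the ellipsoid volume 4/3*pi*a*b*c.
    a, b, c = 2.0, 3.0, 0.5
    n = 100

    r = (np.arange(n) + 0.5) / n
    theta = (np.arange(n) + 0.5) * np.pi / n
    phi = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    R, T, P = np.meshgrid(r, theta, phi, indexing="ij")   # P is unused: the integrand has no phi-dependence

    jacobian = a * b * c * R**2 * np.sin(T)
    cell = (1.0 / n) * (np.pi / n) * (2.0 * np.pi / n)    # volume of one (r, theta, phi) cell

    print("numerical volume:", float(np.sum(jacobian) * cell))
    print("4/3 pi a b c    :", 4.0 / 3.0 * np.pi * a * b * c)

The two printed numbers should agree to several decimal places, confirming that the Jacobian factor abc r² sinθ is the right one.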
# Making Presentations with LaTeX and Prosper

A number of dedicated presentation programs have been written for Unix systems, but they may not serve your needs if you have special requirements, especially the need to display mathematical formulas. The Prosper package can help you create attractive presentations while letting you use the full power of LaTeX.

If you write a lot of technical documents, especially those containing formulas, you've probably used LaTeX. LaTeX is, basically, a set of macros for TeX. TeX, in turn, is a powerful typesetting system first developed by Donald Knuth. It has become an important tool for people who prefer to look at a document as a series of logical units, leaving the actual presentation or layout to the software. LaTeX was developed by Leslie Lamport to aid in the writing of classes of documents such as journal articles, book chapters, and even letters. LaTeX abstracts many of the nitty-gritty details of TeX, such as margin widths, line offsets, etc., allowing the user to simply decide on a document class and leave the style and format to the macros. Numerous people have written macro packages that can be used with LaTeX. These packages provide an enormous range of functions, from formatting of citations to drawing Feynman diagrams. Together with features such as automatic index generation and bibliographies (using the BibTeX package), they provide the technical writer with an extremely powerful tool to create beautiful documents, concentrating on the logical flow, rather than having to worry about the underlying details of formatting and layout.

However, documents are not the only things that need to be written; many times, a presentation must be made. Under Linux, tools such as KPresent and MagicPoint exist, and, of course, Windows users have MS PowerPoint. These are the traditional GUI tools. However, when you have to make a presentation containing formulas, they seem a little clunky, and you're stuck with whatever the package provides. Furthermore, if your documents are written using LaTeX, it would be nice if you could use those documents to generate slides for a presentation. TeX and LaTeX being the all-powerful pieces of software they are, this is indeed possible. However, the problem with making presentations in LaTeX is the large number of packages available to do so. I've listed a few of the packages available, but there are quite a few more which I haven't mentioned.

The slides class: Part of the LaTeX distribution, it defines the page sizes, font sizes, etc. suitable for printing transparencies. Though the resultant DVI file can be converted to a PDF, there is no support for the various features of PDFs such as slide transitions and hyperlinks. Also, the package provides no defined slide styles (i.e., backgrounds, frames, etc.).

The Seminar package: Developed by Timothy van Zandt, this is an extremely powerful set of macros with which you can develop presentations that take full advantage of the PostScript and PDF specifications. There are an extremely large number of options and commands available for this package, so the learning curve is a little steep.

The PDFLatex package: This package is specifically designed for converting LaTeX source files to the PDF format without having to go through the intermediate DVI stage. Using this package along with the FoilTeX, pdfslide, and PPower4 packages allows you to generate presentations as well.
Prosper: This is a set of macros which allows you to generate PostScript or PDF presentations. There are certain advantages of this package over the others. First, though it has a simple structure, it provides enough options to generate good-looking slides. All the features of a PDF document (such as transitions, overlays, etc.) are available. In addition, it is easy to generate different slide styles, a la PowerPoint. Of course, you still have access to the full power of TeX, so you are free to extend your documents if you have the know-how. For LaTeX beginners, however, Prosper encapsulates a lot of the details in an easy-to-use manner.

In this article, I'll be discussing the Prosper package in some detail. You can find a good review of presentation tools for both PDF and HTML formats here.

## Prosper

All LaTeX documents have a common basic structure. The first line always defines the document type -- article, letter, chapter, or, in this case, slides. After that comes the preamble. In the case of Prosper, this is where you specify the title slide. The next section is the document proper. When using Prosper, this is where you define the contents of successive slides. I'll cover the individual sections of a document written with Prosper in detail, but the first step is to install the package.

### Installation:

As I mentioned above, the Prosper package provides a set of macros which define functional elements of a presentation -- the slides, how slides should transition, etc. To use the package, you will require the seminar, pstricks, and hyperref packages (which come with the standard TeX distribution on Red Hat). To generate the final output, you'll also need dvips, GhostScript, and ps2pdf.

After downloading the tarball, extract it into a directory. To make use of the package and associated style files, you can place the required files (prosper.cls, the style file that you are using, and any associated images, such as for bullets) into the directory that contains your LaTeX document. However, a neater method is to put the Prosper directory into your TEXINPUTS environment variable:

~: export TEXINPUTS=~/src/tex/Prosper:$TEXINPUTS

(Where ~/src/tex/Prosper is the directory into which you extracted the Prosper files.) That completes your installation.

### The prosper Document Class

To make a presentation using the Prosper package, you need to specify it in your \documentclass (you can also specify it in a \usepackage command in the preamble). Thus, the first line in the LaTeX file should be of the form:

\documentclass[ OPTIONS ]{prosper}

There are several options that can be specified to the package. You can read about all the options in detail in the documentation that comes with Prosper. I'll just give a brief overview of some of the common and useful ones:

• draft -- Compiles a draft version of the presentation, with figures replaced by bounding boxes.
• final -- Compiles a complete version of the presentation with figures and captions in their proper places.
• ps -- Compiles the LaTeX file to PostScript for printing purposes.
• pdf -- Compiles the LaTeX file to a PDF format suitable for projectors.

Another important option to specify is which presentation style to use. Prosper comes with several styles, and new styles can easily be made with a little knowledge of the pstricks package. There are also options to specify slide background colors, slide numbers, etc.
In general, unless you require black and white slides (e.g., for printing purposes), you won't need to set any color options in the \documentclass; the style files will manage them for you.

### The Preamble

The next section is the preamble, the part between \documentclass and \begin{document}. In this section, you should specify the contents of the title page and some options (such as logos and slide captions) that can be applied to all the slides. The normal LaTeX macros have been redefined to generate the title and associated text with proper font sizes, etc. Some of the macros available for designing the title slide include:

• \title
• \subtitle
• \author
• \email
• \slideCaption (You can use this macro to put a caption at the bottom of each slide.)
• \Logo (This allows you to place a logo on each slide at a specified position.)
• \DefaultTransition (This defines the type of transition that should occur between slides.)

Since the hyperref package is included by Prosper, you can use the \href command to include mailto: links or direct hyperlinks to Web pages in the above commands (and, of course, in the rest of your document). As in standard LaTeX, the title slide is generated by the \maketitle command in the document body.

### The slide Environment

The Prosper package defines the slide environment. This represents the basic unit of a presentation (a single slide) and is placed in the document body (i.e., after the \begin{document} command). Within a slide environment, all the usual LaTeX commands may be used. Images, formulas, tables, footnotes, page structure commands, etc. can all be used. The Prosper package does redefine the itemize environment so that the text is no longer justified. It also supplies images for the bullets. Thus, a single slide containing a bulleted list can be represented by the following LaTeX source (alongside, you can see how the final PDF output for this slide would look):

\begin{slide}{The Title of the Slide}
  \begin{itemize}
    \item Item 1
    \item Item 2
    \item Item 3
  \end{itemize}
\end{slide}

The environment does not provide any means to divide the slide area into columns or rows; it simply provides a rectangular display area (the dimensions of which may vary from style to style). However, using the minipage environment, it is very easy to make a two-column slide. For example, the following would create a slide with a picture in one column and a bulleted list in the other:

\begin{slide}{Another Example Slide}
  \begin{minipage}{4cm}
    \epsfig{file=./picture.eps}
  \end{minipage}
  \begin{minipage}{7cm}
    \begin{itemize}
      \item Item 1
      \item Item 2
      \item Item 3
    \end{itemize}
  \end{minipage}
\end{slide}

Prosper also defines some commands which are allowed to appear in a slide environment. Examples include:

• \FontTitle -- Defines the font to be used in the slide title
• \FontText -- Defines the font to be used in the slide text
• \fontTitle -- Writes its argument as the slide title
• \fontText -- Writes its argument as the slide text

In general, the above macros are not used when writing a presentation. They are, however, useful when you create slide styles of your own.

### Page Transitions

An important command is \PDFtransition, which can be used to specify how the current slide should appear.
However, the usual way to specify a slide transition for a specific slide is to put the transition mode into the \begin{slide} command as:

\begin{slide}[Glitter]{Slide Title}

The Prosper package supports several types of transitions:

• Split
• Blinds
• Box
• Wipe
• Dissolve
• Glitter
• Replace (the default)

The above transition modes provide you with ample opportunity to make flashy presentations (if that's what you're into :). You can see a PDF which displays each of the transitions here.

### Overlays

A very useful feature of computer-based presentations is the ability to make overlay slides so parts of the same slide will appear at different times. Prosper provides commands to implement this in a very simple fashion. The \overlays command is used to specify that a given slide environment will consist of a sequence of overlays. You must specify the number of overlays that make up the slide. There are several commands that can be used to specify exactly what material should appear on which slide within an overlay:

• \fromSlide{p}{material} -- Puts material on slides p to the end of the overlay.
• \onlySlide{p}{material} -- Puts material only on slide p.
• \untilSlide{p}{material} -- Puts material on all slides from the first to the pth.

There are three macros analogous to the above (obtained by capitalizing the first letter) which cause all material after the occurrence of the macro to be included (rather than specifically defining material). The macros in the above list also have starred counterparts (i.e., \fromSlide*, etc.). These versions are useful when the successive overlays need to replace previous overlays. Below, I've provided an example of a slide that consists of several overlays and uses the itemstep environment to allow an itemized list to progress through successive overlays. Alongside is an animation of how the PDF version of the slide would look:

\overlays{5}{
  \begin{slide}{The Effects of Power}
    \begin{tabular}{rc}
      \begin{minipage}{4cm}
        \onlySlide*{1}{\epsfig{file=./stage1.eps}}
        \onlySlide*{2}{\epsfig{file=./stage2.eps}}
        \onlySlide*{3}{\epsfig{file=./stage3.eps}}
        \onlySlide*{4}{\epsfig{file=./stage4.eps}}
        \onlySlide*{5}{\epsfig{file=./stage5.eps}}
      \end{minipage} &
      \begin{minipage}{6cm}
        \begin{itemstep}
          \item Alignment
          \item Deformation
          \item Coulomb explosion
          \item X-ray emission
          \item Nuclear reaction
        \end{itemstep}
      \end{minipage}
    \end{tabular}
  \end{slide}}

An important point to note about the overlay commands is that they are only valid when the Prosper package is used with the pdf option. However, the package does provide a set of macros:

• \PDForPS{ifpdf}{ifps}
• \onlyInPS{material}
• \onlyInPDF{material}

which allow you to include different material depending on whether the LaTeX document is compiled in PS or PDF mode. An example of the use of these macros would be:

\overlays{3}{
  \begin{slide}{An Example Slide}
    \onlySlide*{1}{\epsfig{file=./pic1.eps}}
    \onlySlide*{2}{\epsfig{file=./pic2.eps}}
    \onlySlide*{3}{\epsfig{file=./pic3.eps}}
    \onlyInPS{\epsfig{file=./epspic.eps}}
  \end{slide}}

If the snippet were converted to a PDF, we would get a slide which would successively display pic1.eps, pic2.eps, and pic3.eps. If it were compiled to PS format, the slide would only contain the image epspic.eps.

### Presentation Styles

The Prosper package comes with several style files. Essentially, these provide predefined background colors and patterns, title fonts, bullet styles, etc. You can easily change the look of your presentation by including a different style file. Which style to use is specified in the \documentclass.
Below, you can see slides generated using the different slide styles: Default, Alienglow, Autumn, Azure, Blends, Contemporain, Dark Blue, Frames, Lignes Bleues, Nuance trois.

It should be noted that not all the styles provide the same display area for the actual slide material. You can see this in some of the slide examples above. If you decide to change the slide style of your presentation, you might need to tweak things such as spacing (\hspace, \vspace, etc.) or line lengths. Furthermore, if a given style does not really suit your taste, it is possible to make modifications such as font type, colors, etc. using the Prosper macros, rather than digging into the source of the style in question.

Assuming you're comfortable with the pstricks package, designing a new slide is made easier by a number of macros defined by Prosper. You have access to a number of boolean macros which allow you to include features depending on the current environment (PDF or PS, color or black & white, etc.). The main macro that Prosper provides to design a new style is the \NewSlideStyle command. After designing the style, you need to tell Prosper the details, such as how much display area you are providing, where it should be located, etc., using this macro.

### Processing the LaTeX File

At this point, you should be able to write your presentation. The last step is to convert the LaTeX source to a PDF file. The steps involved are pretty simple:

1. latex file.tex
2. dvips -Ppdf -G0 file.dvi -o file.ps
3. ps2pdf -dPDFsettings=/prepress file.ps file.pdf

Two points to note:

• The -G0 parameter passed to dvips is used to get around a bug in GhostScript which converts the "f" character to a pound sign in the final PDF.
• The -dPDFsettings parameter for ps2pdf is used to prevent downsampling of EPS images when they are converted to PDF. Without this switch, EPS graphics in the final PDF look very fuzzy, especially when viewed with a projector.

### Miscellaneous Features

• Since Prosper includes the hyperref package by default, you can easily set links and targets within your presentation with the \hyperlink and \hypertarget commands to enable easy navigation.
• PowerPoint allows you to embed animations within a presentation. This is also possible when using Prosper, since it uses the hyperref package. To embed an MPEG movie, you can include the following code snippet:

\href{run:movie.mpg}{Click here to view the movie}

Two points to note:

• Viewing the movie depends on Acrobat Reader being able to run the viewing program. This can be set by making sure you have an entry in your .mailcap file for the filetype you want to play.
• The resultant movie plays in its own window; it is not possible to actually "embed" the movie in the presentation itself (at least under Linux).

Using this technique, you could run any type of file (assuming you have a program to handle it) or even executables like shell scripts, etc.

• You may want to convert your PDF presentation to an HTML slideshow. This is possible using the program pdf2htmlpres.py. It can use the convert program from the ImageMagick suite or GhostScript directly to convert the PDF slides to a series of JPGs (or GIFs or PNGs) and generate HTML pages to form a slideshow.

### Conclusion

I hope I've been able to convey some of the features and benefits that the Prosper package provides. Granted, for a person who doesn't use LaTeX, a GUI alternative would be easier.
But for all the TeXnicians out there, the Prosper package allows you to generate well-designed and stylish slides efficiently, at the same time allowing the knowledgeable user to extend the package using predefined macros and pure TeX. The Prosper community has a very useful mailing list which can be accessed at the Prosper Web site. The Prosper tarball contains comprehensive documentation explaining the available commands and macros provided by the package. It also includes a document displaying the capabilities of the package. The LaTeX sources of these documents are the best way to learn how to use the various features of Prosper. I, for one, have finally been able to get rid of MS PowerPoint and use Prosper to develop all my presentations. Using this package, I'm able to create presentations which rival those produced by more popular GUI packages and which can be viewed with the very common Acrobat Reader (and converted to clean HTML when required!). You can take a look at presentations I've made using Prosper on my Web site. For all LaTeX users, I strongly recommend taking a look at Prosper.

## Recent comments

11 Sep 2007 07:24 Re: making bitmap graphics in PDF look good

> > After a long search, I found that the options -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode together produced the intended result of good-looking bitmaps (which had to be converted into eps using ImageMagick's convert before). The .pdf files don't get too big.
>
> Unfortunately under Windows this options doesnt work at all.
> D:\TEXFILES>ps2pdf -sPAPERSIZE=a4 test.ps test.pdf
> GPL Ghostscript 8.54 (2006-05-17)
> Copyright (C) 2006 artofcode LLC, Benicia, CA. All rights reserved.
> This software comes with NO WARRANTY: see the file PUBLIC for details.
> Unknown paper size: ().
> Unrecoverable error: stackunderflow in dup
>
> OR:
> D:\TEXFILES>ps2pdf -dPDFsettings=/prepress test.ps test.ps test.pdf
> GPL Ghostscript 8.54 (2006-05-17)
> Copyright (C) 2006 artofcode LLC, Benicia, CA. All rights reserved.
> This software comes with NO WARRANTY: see the file PUBLIC for details.
> Error: /undefinedfilename in (/prepress)
> Operand stack:
> Execution stack:
> %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push
> Dictionary stack:
> --dict:1122/1686(ro)(G)-- --dict:0/20(G)-- --dict:70/200(L)--
> Current allocation mode is local
> Last OS error: No such file or directory
> GPL Ghostscript 8.54: Unrecoverable error, exit code 1
>
> I dont know what is going on and nobody can help me

use: ps2pdf -sPAPERSIZE#a4 test.ps test.pdf

Apparently under Windows the "#" symbol must be used instead of "=". Had me confused for over an hour.

29 Oct 2006 14:38 Re: making bitmap graphics in PDF look good

> After a long search, I found that the options -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode together produced the intended result of good-looking bitmaps (which had to be converted into eps using ImageMagick's convert before). The .pdf files don't get too big.

Unfortunately under Windows this options doesnt work at all.

D:\TEXFILES>ps2pdf -sPAPERSIZE=a4 test.ps test.pdf
GPL Ghostscript 8.54 (2006-05-17)
Copyright (C) 2006 artofcode LLC, Benicia, CA. All rights reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
Unknown paper size: ().
Unrecoverable error: stackunderflow in dup

OR:

D:\TEXFILES>ps2pdf -dPDFsettings=/prepress test.ps test.ps test.pdf
GPL Ghostscript 8.54 (2006-05-17)
Copyright (C) 2006 artofcode LLC, Benicia, CA. All rights reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
Error: /undefinedfilename in (/prepress)
Operand stack:
Execution stack:
%interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push
Dictionary stack:
--dict:1122/1686(ro)(G)-- --dict:0/20(G)-- --dict:70/200(L)--
Current allocation mode is local
Last OS error: No such file or directory
GPL Ghostscript 8.54: Unrecoverable error, exit code 1

I dont know what is going on and nobody can help me

27 Aug 2005 17:31 gnuplot

gnuplot is a graphing program that integrates extremely well with LaTeX and thus prosper, if you do it right. In gnuplot:

set terminal pslatex
set output "myfile.tex"
set format xy "$%g$"
# if in 3D
set format z "$%g$"

Use latex in your title/labels etc - but you need to escape the backslash. IE

set title "Over Approximation for\n$a=4$,$b=2$,$h=3$, with$\\Delta y=0.5$"

The '$\\Delta y=0.5$' gets spit to the output tex file as '$\Delta y=0.5$' so that the Delta gets properly drawn.

\usepackage{pslatex}

To include the gnuplot tex output - in your document just use

\input{myfile}

The real neat thing about doing it this way - as you play around with type of prosper style you want, as the fonts change in color - so will the fonts on your gnuplot generated images. Not only will the mathematical formulas in your slideshow look better than anything anyone using PowerPoint can produce, but your graphs will look a hell of a lot better as well.

19 Nov 2004 08:21 Slide clipping in acroread when using prosper

I am very pleased with the way prosper works and plan to use it soon for the first time. I did find that acroread clipped the right sides of the slide (in all modes) unless ps2pdf was used with the option

ps2pdf -sPAPERSIZE=a4 <file>.ps <file>.pdf

I am using the standard texmf distribution with RH 9.1. Regards,

04 Jan 2004 15:36 err... powerpoint?

> good work!! Iam using LaTeX for one year now and its really amazing. I used the OpenOffice Powerpoint for presentations and LaTeX for my text documents. I never recognize, that slide handling is so easy with LaTeX. Allright, for now Powerpoint is the past ;)

i think what you're saying here is impress not powerpoint. by the way, good article. although i'm still in the newbie stage when it comes to latex.
Find the Mean (Arithmetic) of 24, 360 - 8, 621.

Subtract 8 from 360: 24, 352, 621.

The mean of a set of numbers is the sum divided by the number of terms: (24 + 352 + 621)/3.

Simplify the numerator: (376 + 621)/3 = 997/3 ≈ 332.33.
## sticking coefficient

in surface chemistry

https://doi.org/10.1351/goldbook.S06012

The ratio of the rate of adsorption to the rate at which the adsorptive strikes the total surface, i.e. covered and uncovered. It is usually a function of surface coverage, of temperature and of the details of the surface structure of the adsorbent.

Source: PAC, 1976, 46, 71. (Manual of Symbols and Terminology for Physicochemical Quantities and Units - Appendix II. Definitions, Terminology and Symbols in Colloid and Surface Chemistry. Part II: Heterogeneous Catalysis) on page 78
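Written as an equation (an editorial sketch, not part of the IUPAC entry; the symbols $s$, $r_{\mathrm{ads}}$ and $F$ are chosen here for illustration):

$$s(\theta, T) \;=\; \frac{r_{\mathrm{ads}}}{F}$$

where $r_{\mathrm{ads}}$ is the rate of adsorption, $F$ is the rate at which molecules of the adsorptive strike the total surface (covered and uncovered), and the dependence on the coverage $\theta$ and the temperature $T$ reflects the sentence above.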
The minimal polynomial can't have multiple roots (in my proof)

I have a question about the roots of the minimal polynomial $f_\alpha$. I can't see why $f_\alpha$ is separable, i.e. why all its roots must be different in a splitting field $\mathbf K$. I know its roots are $\sigma_i(\alpha)$, but what is the proof that all the $\sigma_i(\alpha)$ are different?

Proposition: If $\mathbf K:\mathbf F<\infty$ and is Galois, then it is normal and separable.

Proof (summary of t.gunn's proof): Let $\alpha\in\mathbf K.$ The minimal polynomial of $\alpha$ is $$f_\alpha(x) := \prod_{\beta \in G \cdot \alpha} (x - \beta).$$ Indeed:

• First, note that $f_\alpha(\alpha) = 0$, which follows since $\alpha = \operatorname{id}(\alpha) \in G \cdot \alpha$.
• Second, note that $f_\alpha \in \mathbf{F}[x]$.
• Third, note that $f_\alpha$ is minimal. Indeed if $f(\alpha) = 0$ then $f(\sigma(\alpha)) = \sigma(f(\alpha)) = \sigma(0) = 0$ for all $\sigma \in G$. Thus $\sigma(\alpha)$ is a root for all $\sigma \in G$. Thus $f_\alpha \mid f$.

Finally, we note that $f_\alpha$ splits over $\mathbf{K}$ and is separable, by construction.

• What is the definition of Galois extension you are using? – Ennar Jul 30 '17 at 0:17
• @Ennar $\mathbf K:\mathbf F<\infty$ is Galois if $G(\mathbf K:\mathbf F)^+=\sigma(\mathbf F)$, where $\sigma$ is the monomorphism between the fields – user441848 Jul 30 '17 at 0:20
• I'm not familiar with the notation, do you mean that the fixed field of the automorphism group of $K$ over $F$ is $\sigma(F)$? – Ennar Jul 30 '17 at 0:24
• $$\prod_{\beta\in G\cdot\alpha}(x-\beta)\neq \prod_{\sigma\in G}(x-\sigma(\alpha))$$ – Ennar Jul 30 '17 at 1:02
• You know that $I = \{1\} = \{1,1,1\}$, right? If I wrote $\prod_{i\in I}(x-i)$ it would mean $x-1$, not $(x-1)(x-1)(x-1)$. I will once again say, if $\alpha\in F$, then $\sigma_1(\alpha)=\sigma_2(\alpha)=\ldots=\sigma_n(\alpha) = \alpha$, $G\cdot\alpha = \{\sigma_1(\alpha),\sigma_2(\alpha),\ldots,\sigma_n(\alpha)\} = \{\alpha\}$ and finally, $$\prod_{\beta\in G\cdot\alpha}(x-\beta) = x - \alpha \neq (x-\alpha)^n = \prod_{\sigma\in G} (x-\sigma(\alpha)).$$ – Ennar Jul 30 '17 at 1:34

Did you listen when we said in your previous questions that $K/F$ is Galois iff $F = K^G$ where $G= Gal(K/F)$ is a finite group of automorphisms of $K$? For $\alpha \in K$, it means that the polynomial with distinct roots $f(x) = \prod_{\beta \in G (\alpha)} (x-\beta) \in K[x]$ has coefficients in the fixed field, i.e. $f \in F[x]$, therefore it is the minimal polynomial of $\alpha$.

$G( \alpha)= \{ \beta \in K, \exists \sigma \in G, \sigma(\alpha) = \beta\}$.
$K^G = \{ \alpha \in K, \forall \sigma \in G, \sigma(\alpha) = \alpha\}$.
$f(x) =\prod_{\beta \in G (\alpha)} (x-\beta)= \sum_{n=0}^d c_n x^n, \quad \sum_{n=0}^d \sigma(c_n) x^n = \prod_{\beta \in G (\alpha)} (x-\sigma(\beta))= f(x)$.
$f \in F[x] \land f(\alpha) = 0 \land \sigma \in Gal(K/F) \implies f(\sigma(\alpha)) =\sigma(f(\alpha)) = 0$.

• Show the main statement when $K = F(\alpha)$ and use induction. It is equivalent to $|Gal(K/F)| = [K:F]$.
• Because $x^p-1 \equiv (x-1)^p \bmod p$ it means $\zeta_p$ doesn't exist in $\mathbb{F}_p$ and $x^p-t^p$ is the non-separable minimal polynomial of $t$ over $\mathbb{F}_p(t^p)$, so that $\mathbb{F}_p(t)/\mathbb{F}_p(t^p)$ is a non-separable finite extension.
• If $E=F(\alpha)$ where the minimal polynomial $f$ of $\alpha$ is separable then its normal closure (the splitting field of $f$) is Galois.

• why did you mention the bullets?
– user441848 Jul 30 '17 at 3:19
• This polynomial $f(x) = \prod_{\beta \in G (\alpha)} (x-\beta) \in K[x]$ has different roots, why? I already know that $f\in F[x],$ and that $f$ is the minimal polynomial of $\alpha$. Why did you write that as 'conclusion'? – user441848 Jul 30 '17 at 3:28
• @Annet. Are you serious? $G(\alpha)$ is a finite set, we pick $\beta$ only once. – reuns Jul 30 '17 at 3:32
• then why the product $\prod$ notation? – user441848 Jul 30 '17 at 3:50
• Of course I'm serious 😳 – user441848 Jul 30 '17 at 3:51
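To see why the product over the orbit automatically has distinct roots, a small worked example may help (an editorial addition, not from the thread). Take $\mathbf F=\mathbb{Q}$, $\mathbf K=\mathbb{Q}(\sqrt2)$ and $G=\{\operatorname{id},\sigma\}$ with $\sigma(\sqrt2)=-\sqrt2$:

$$\alpha=\sqrt2:\quad G\cdot\alpha=\{\sqrt2,\,-\sqrt2\},\qquad f_\alpha(x)=(x-\sqrt2)(x+\sqrt2)=x^2-2,$$

$$\alpha=1\in\mathbf F:\quad \sigma_1(\alpha)=\sigma_2(\alpha)=1,\quad G\cdot\alpha=\{1\},\qquad f_\alpha(x)=x-1\ \ (\text{not }(x-1)^2).$$

The values $\sigma_i(\alpha)$ need not all be different; what matters is that the product runs over the orbit $G\cdot\alpha$ as a set, so each value contributes exactly one factor and $f_\alpha$ has no repeated roots by construction.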
# If $a_1a_2\cdots a_n=1$, then the sum $\sum_k a_k\prod_{j\le k} (1+a_j)^{-1}$ is bounded below by $1-2^{-n}$

I am having trouble with an inequality. Let $a_1,a_2,\ldots, a_n$ be positive real numbers whose product is $1$. Show that the sum $$\frac{a_1}{1+a_1}+\frac{a_2}{(1+a_1)(1+a_2)}+\frac{a_3}{(1+a_1)(1+a_2)(1+a_3)}+\cdots+\frac{a_n}{(1+a_1)(1+a_2)\cdots(1+a_n)}$$ is greater than or equal to $$\frac{2^n-1}{2^n}.$$ If someone could help approaching this, that would be great. I don't even know where to start.

• Generally, words like "tough," "hard," "difficult" aren't very useful in the title, because they give no real information. Presumably when you ask the question here, it is because you find it hard. But does it let a potential answerer know if they can help you? – Thomas Andrews Aug 4 '14 at 20:52
• Please consider accepting an answer by pressing the tick when you are happy. I realized you have not yet accepted any answer. – Lost1 Aug 4 '14 at 21:04
• @ThomasAndrews Also, "question" is always redundant, which left the original title with the content of "an inequality". Hope the new one is better. – user147263 Aug 4 '14 at 22:45

Note that for every positive integer $i$ we have \begin{eqnarray} \frac{a_i}{(1+a_1)(1+a_2) \cdots (1+a_i)} & = & \frac{1 + a_i}{(1+a_1)(1+a_2) \cdots (1+a_i)} - \frac{1}{(1+a_1)(1+a_2) \cdots (1+a_i)} \nonumber \\ & = & \frac{1}{(1+a_1) \cdots (1+a_{i-1})} - \frac{1}{(1+a_1) \cdots (1+a_i)}. \nonumber \end{eqnarray} Let $b_i = (1+a_1)(1+a_2) \cdots (1+a_i)$, with $b_0 = 1$ (the empty product). Then by telescoping $$\sum\limits_{i=1}^n \left( \frac{1}{b_{i-1}} - \frac{1}{b_i} \right) = 1 - \frac{1}{b_n}.$$ Since $1+x\geq 2\sqrt{x}$ for all $x\ge 0$, we have $$b_n = (1+a_1)(1+a_2) \cdots (1+a_n) \geq (2 \sqrt{a_1})(2 \sqrt{a_2}) \cdots (2 \sqrt{a_n}) = 2^n,$$ with equality precisely if $a_i=1$ for all $i$. It follows that $$1 - \frac{1}{b_n} \geq 1 - \frac{1}{2^n} = \frac{2^n-1}{2^n}.$$

Hint: $$\frac{a_k}{(1+a_1)\cdots(1+a_k)} = \frac{1}{(1+a_1)\cdots(1+a_{k-1})} - \frac{1}{(1+a_1)\cdots(1+a_k)}$$

• ... so we just have to prove that $$\prod_{i=1}^{n}(1+a_i)\geq 2^n,$$ right? – Jack D'Aurizio Aug 4 '14 at 20:58
• It's just $$1-\frac{1}{1+a_k}=\frac{a_k}{1+a_k}$$ multiplied on both sides by $$\frac{1}{(1+a_1)(1+a_2)\dots(1+a_{k-1})}$$ – Thomas Andrews Aug 7 '14 at 2:51
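A quick numerical spot check of the bound (an editorial sketch in Python, not part of the thread; the ranges and trial count are arbitrary) can be reassuring alongside the proof:

    import random

    # Draw random positive a_i, rescale so their product is 1, and compare the
    # left-hand sum with 1 - 2**(-n).
    random.seed(0)
    for trial in range(5):
        n = random.randint(2, 8)
        a = [random.uniform(0.1, 5.0) for _ in range(n)]
        prod = 1.0
        for x in a:
            prod *= x
        a = [x / prod ** (1.0 / n) for x in a]       # now a_1 * ... * a_n = 1

        s, denom = 0.0, 1.0
        for x in a:
            denom *= 1.0 + x
            s += x / denom                            # a_k / ((1+a_1)...(1+a_k))

        bound = 1.0 - 2.0 ** (-n)
        print(n, round(s, 6), ">=", round(bound, 6), s >= bound - 1e-12)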
an encyclopedia of finite element definitions

# Gauss–Legendre

Orders: $$0\leqslant k$$

Reference elements: interval, quadrilateral, hexahedron

Polynomial set: $$\mathcal{Q}_{k}$$

DOFs:
On each vertex: point evaluations
On each edge: point evaluations at Gauss–Legendre points
On each face: point evaluations at Gauss–Legendre points
On each volume: point evaluations at Gauss–Legendre points

Number of DOFs:
interval: $$k+1$$ (A000027)
quadrilateral: $$(k+1)^2$$ (A000290)
hexahedron: $$(k+1)^3$$ (A000578)

Categories: Scalar-valued elements

## Implementations

Symfem: "Lagrange", variant="legendre" (see the usage sketch at the end of this entry)

## Examples

### Interval, order 1

• $$R$$ is the reference interval. The following numbering of the subentities of the reference is used:
• $$\mathcal{V}$$ is spanned by: $$1$$, $$x$$
• $$\mathcal{L}=\{l_0,...,l_{1}\}$$
• Functionals and basis functions:

$$\displaystyle l_{0}:v\mapsto v(0)$$
$$\displaystyle \phi_{0} = 1 - x$$
This DOF is associated with vertex 0 of the reference element.

$$\displaystyle l_{1}:v\mapsto v(1)$$
$$\displaystyle \phi_{1} = x$$
This DOF is associated with vertex 1 of the reference element.

### Interval, order 2

• $$R$$ is the reference interval. The following numbering of the subentities of the reference is used:
• $$\mathcal{V}$$ is spanned by: $$1$$, $$x$$, $$x^{2}$$
• $$\mathcal{L}=\{l_0,...,l_{2}\}$$
• Functionals and basis functions:

$$\displaystyle l_{0}:v\mapsto v(0)$$
$$\displaystyle \phi_{0} = 2 x^{2} - 3 x + 1$$
This DOF is associated with vertex 0 of the reference element.

$$\displaystyle l_{1}:v\mapsto v(1)$$
$$\displaystyle \phi_{1} = x \left(2 x - 1\right)$$
This DOF is associated with vertex 1 of the reference element.

$$\displaystyle l_{2}:v\mapsto v(\tfrac{1}{2})$$
$$\displaystyle \phi_{2} = 4 x \left(1 - x\right)$$
This DOF is associated with edge 0 of the reference element.

### Quadrilateral, order 1

• $$R$$ is the reference quadrilateral. The following numbering of the subentities of the reference is used:
• $$\mathcal{V}$$ is spanned by: $$1$$, $$y$$, $$x$$, $$x y$$
• $$\mathcal{L}=\{l_0,...,l_{3}\}$$
• Functionals and basis functions:

$$\displaystyle l_{0}:v\mapsto v(0,0)$$
$$\displaystyle \phi_{0} = x y - x - y + 1$$
This DOF is associated with vertex 0 of the reference element.

$$\displaystyle l_{1}:v\mapsto v(1,0)$$
$$\displaystyle \phi_{1} = x \left(1 - y\right)$$
This DOF is associated with vertex 1 of the reference element.

$$\displaystyle l_{2}:v\mapsto v(0,1)$$
$$\displaystyle \phi_{2} = y \left(1 - x\right)$$
This DOF is associated with vertex 2 of the reference element.

$$\displaystyle l_{3}:v\mapsto v(1,1)$$
$$\displaystyle \phi_{3} = x y$$
This DOF is associated with vertex 3 of the reference element.

### Quadrilateral, order 2

• $$R$$ is the reference quadrilateral. The following numbering of the subentities of the reference is used:
• $$\mathcal{V}$$ is spanned by: $$1$$, $$y$$, $$y^{2}$$, $$x$$, $$x y$$, $$x y^{2}$$, $$x^{2}$$, $$x^{2} y$$, $$x^{2} y^{2}$$
• $$\mathcal{L}=\{l_0,...,l_{8}\}$$
• Functionals and basis functions:

$$\displaystyle l_{0}:v\mapsto v(0,0)$$
$$\displaystyle \phi_{0} = 4 x^{2} y^{2} - 6 x^{2} y + 2 x^{2} - 6 x y^{2} + 9 x y - 3 x + 2 y^{2} - 3 y + 1$$
This DOF is associated with vertex 0 of the reference element.

$$\displaystyle l_{1}:v\mapsto v(1,0)$$
$$\displaystyle \phi_{1} = x \left(4 x y^{2} - 6 x y + 2 x - 2 y^{2} + 3 y - 1\right)$$
This DOF is associated with vertex 1 of the reference element.
$$\displaystyle l_{2}:v\mapsto v(0,1)$$
$$\displaystyle \phi_{2} = y \left(4 x^{2} y - 2 x^{2} - 6 x y + 3 x + 2 y - 1\right)$$
This DOF is associated with vertex 2 of the reference element.

$$\displaystyle l_{3}:v\mapsto v(1,1)$$
$$\displaystyle \phi_{3} = x y \left(4 x y - 2 x - 2 y + 1\right)$$
This DOF is associated with vertex 3 of the reference element.

$$\displaystyle l_{4}:v\mapsto v(\tfrac{1}{2},0)$$
$$\displaystyle \phi_{4} = 4 x \left(- 2 x y^{2} + 3 x y - x + 2 y^{2} - 3 y + 1\right)$$
This DOF is associated with edge 0 of the reference element.

$$\displaystyle l_{5}:v\mapsto v(0,\tfrac{1}{2})$$
$$\displaystyle \phi_{5} = 4 y \left(- 2 x^{2} y + 2 x^{2} + 3 x y - 3 x - y + 1\right)$$
This DOF is associated with edge 1 of the reference element.

$$\displaystyle l_{6}:v\mapsto v(1,\tfrac{1}{2})$$
$$\displaystyle \phi_{6} = 4 x y \left(- 2 x y + 2 x + y - 1\right)$$
This DOF is associated with edge 2 of the reference element.

$$\displaystyle l_{7}:v\mapsto v(\tfrac{1}{2},1)$$
$$\displaystyle \phi_{7} = 4 x y \left(- 2 x y + x + 2 y - 1\right)$$
This DOF is associated with edge 3 of the reference element.

$$\displaystyle l_{8}:v\mapsto v(\tfrac{1}{2},\tfrac{1}{2})$$
$$\displaystyle \phi_{8} = 16 x y \left(x y - x - y + 1\right)$$
This DOF is associated with face 0 of the reference element.

## DefElement stats

Element added: 20 February 2021
Element last updated: 03 July 2021
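As referenced in the Implementations section above, here is a minimal usage sketch (editorial; it assumes Symfem's create_element entry point and the "Lagrange" / variant="legendre" identifiers listed above, and the choice of cell and order is arbitrary):

    import symfem

    # Build the degree-2 Gauss-Legendre (Lagrange, Legendre-variant) element on a quadrilateral.
    element = symfem.create_element("quadrilateral", "Lagrange", 2, variant="legendre")

    # The nine basis functions should match the quadrilateral, order 2 listing above.
    print(element.get_basis_functions())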
# Find the first $5$ terms of the sequence whose $n^{th}$ term is given by $t_n=2^n$

$\begin{array}{l} 2,4,8,16,32 \\ 2,4,6,8,10 \\ 2,4,12,24,48 \\ 2,4,8,32,64 \end{array}$
English version: Journal of Applied and Industrial Mathematics, 2017, 11:1, 130-144

Volume 24, No 1, 2017, P. 31-55

UDC 519.715

E. M. Zamaraeva
On teaching sets for 2-threshold functions of two variables

Abstract: We consider $k$-threshold functions of $n$ variables, i.e. the functions representable as the conjunction of $k$ threshold functions. For $n = 2$, $k = 2$, we give upper bounds for the cardinality of the minimal teaching set depending on various properties of the function. Illustr. 6, bibliogr. 9.

Keywords: machine learning, threshold function, teaching dimension, teaching set.

DOI: 10.17377/daio.2017.24.508

Elena M. Zamaraeva 1
1. Lobachevsky State University, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia
e-mail: [email protected]

Revised 2 August 2016

# References

[1] N. Yu. Zolotykh and V. N. Shevchenko, Estimating the complexity of deciphering a threshold function in a $k$-valued logic, Zh. Vychisl. Mat. Mat. Fiz., 39, No. 2, 346–352, 1999 [Russian]. Translated in Comput. Math. Math. Phys., 39, No. 2, 328–334, 1999.
[2] V. N. Shevchenko and N. Yu. Zolotykh, On the complexity of deciphering the threshold functions of $k$-valued logic, Dokl. Akad. Nauk, 362, No. 5, 606–608, 1998 [Russian]. Translated in Dokl. Math., 58, No. 2, 268–270, 1998.
[3] M. A. Alekseyev, M. G. Basova, and N. Yu. Zolotykh, On the minimal teaching sets of two-dimensional threshold functions, SIAM J. Discrete Math., 29, No. 1, 157–165, 2015.
[4] M. Anthony, G. Brightwell, and J. Shawe-Taylor, On specifying Boolean functions by labelled examples, Discrete Appl. Math., 61, No. 1, 1–25, 1995.
[5] W. J. Bultman and W. Maass, Fast identification of geometric objects with membership queries, Inf. Comput., 118, No. 1, 48–64, 1995.
[6] A. Yu. Chirkov and N. Yu. Zolotykh, On the number of irreducible points in polyhedra, Graphs Comb., 32, No. 5, 1789–1803, 2016.
[7] V. N. Shevchenko and N. Yu. Zolotykh, Lower bounds for the complexity of learning half-spaces with membership queries, in Algorithmic Learning Theory (Proc. 9th Int. Conf., Otzenhausen, Germany, Oct. 8–10, 1998), pp. 61–71, Springer, Berlin, 1998 (Lect. Notes Comput. Sci., Vol. 1501).
[8] J. Trainin, An elementary proof of Pick’s theorem, Math. Gaz., 91, No. 522, 536–540, 2007.
[9] E. M. Zamaraeva, On teaching sets of $k$-threshold functions, 2015 (Cornell Univ. Libr. e-Print Archive, arXiv:1502.04340).

© Sobolev Institute of Mathematics, 2015